
While looking toward the future with our comprehensive slate of current content, we'd also like to recognize our rich past by drawing upon our extensive (and mostly free) online archive of work dating back to 1997. In an effort to highlight the best of what's gone before, we'll be bringing you a weekly blast from BP's past, introducing or re-introducing you to some of the most informative and entertaining authors who have passed through our virtual halls. If you have fond recollections of a BP piece that you'd like to nominate for re-exposure to a wider audience, send us your suggestion.

Kevin Goldstein tried to determine how much major-league talent the typical team can expect from its farm system in the piece reprinted below, which was originally published as a "Future Shock" column on March 20, 2006.

People love prospect rankings, and with good reason. Often, a top prospect list in print or online can be a fan's first exposure to a new name: the up-and-coming kid, the future All-Star, the savior. Baseball America first began publishing team-by-team Top 10 lists in the early '80s, and before the Internet was a daily part of everybody's lives (we call that time "the '90s"), prospect fans had to wait with bated breath as the six issues (one for each division) trickled in over more than three months each winter. I still believe that one of the reasons I was slow to warm up to college baseball was the inevitable college preview issue that would show up in my mailbox, interrupting the top prospect editions. I would curse that issue annually.

Prior to the 2001 season, Baseball America published its first Prospect Handbook, and it was a groundbreaking work. With rankings of the top 30 prospects for each team, and a write-up for each one, fans were suddenly able to get a far more in-depth look at their favorite team's system than ever before. But somewhere between then and now, something happened. That something involved how fans interpreted these lists, as opposed to how they were compiled. With the growth in prospect interest over the past few years (for a variety of reasons), rankings are now everywhere. Baseball America still does them, but plenty of other publications, Web sites, blogs, etc. have entered the fray. One key thing I believe nearly all prospect rankers have failed to do, however, is manage expectations. Many fans tend to believe that when they look at a top 10 list, they are looking at 10 future big league players, or even future stars. Although it's no fault of the rankers, the reality couldn't be further from the truth.

Two years ago I was talking with a team's farm director about the overall state of his system, as well as some other teams'. I blurted out, "You know, if you take any system in baseball, and in the end, at the major league level you get one star hitter, a regular, a couple of bench players, a good starter, and a pair of relievers out of those 150-plus players under contract, that's a pretty good system." The farm director immediately agreed. It's important to note that I pulled that statement completely out of thin air; it was, at best, a conclusion based on some experience rather than research, and I've always wondered if it was right.

In an attempt to find out, I grabbed my beat-up copy of that first Prospect Handbook and tallied what each team actually got out of its system. Looking at each team's top 30 list, I counted which players had established big league careers (multiple cups of coffee did not count), and in the end, it turned out I wasn't so stupid after all.

For the purposes of this study, I defined an "established big league career" as either three years in the big leagues, or an obvious career path indicating that the player will reach those three years.

Position players were sorted into Star, Everyday, and Bench. Everyday and Bench were simply measurements of playing time: for an Everyday player, I looked for three years (or a clear path to that) in an everyday job. A single performance measurement, an OPS+ of 125 or higher, separated the Stars.

Starting pitchers were similarly sorted into Star, Good, and Back-End. Unlike the position-player tiers, Good and Back-End took both playing time and performance into account: to be labeled Good, a pitcher needed consistent 200-plus-inning seasons with an ERA at league average or better. Star was established using ERA+, with 125 once again the threshold.

Relief pitchers were divvied up between Relievers and Closers. For Closers, I looked for two years in the closer job (or, again, a clear path to that), while also pitching at what I labeled "closer" level (an ERA+ of 125 or better).

There was some subjectivity and some projection in the assignments. A good example is Miguel Cabrera: he has only been a full-time player for two years, but there is absolutely no reason to think he won't be one again next year, nor any reason to think he won't post an OPS+ over 125, so he was classified as a Star.
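Spelled out as code, the buckets look something like the sketch below. This is only an illustration of the criteria described above, not the process actually used for the study; the Player record and its field names are hypothetical, while the thresholds (three established seasons, a 125 OPS+ or ERA+ for the Star tiers, 200-plus innings at league average or better for Good starters, and two years in the closer job) come from the definitions above.

from dataclasses import dataclass

@dataclass
class Player:
    role: str               # "position", "starter", or "reliever"
    established_years: int  # seasons as an established big leaguer (or clearly projected)
    everyday_years: int     # seasons in an everyday job (position players)
    ops_plus: int           # career OPS+ (position players)
    era_plus: int           # career ERA+ (pitchers)
    avg_innings: int        # typical innings per season (starters)
    closer_years: int       # seasons holding the closer job (relievers)

def classify(p: Player) -> str:
    """Assign one of the article's buckets, or note the lack of an established career."""
    if p.established_years < 3:
        return "no established career"
    if p.role == "position":
        if p.everyday_years >= 3:
            return "Star position player" if p.ops_plus >= 125 else "Everyday position player"
        return "Bench position player"
    if p.role == "starter":
        if p.era_plus >= 125:
            return "Star starting pitcher"
        if p.avg_innings >= 200 and p.era_plus >= 100:
            return "Good starting pitcher"
        return "Back-End starting pitcher"
    # Relievers: two years in the closer job at closer-level performance, else Reliever.
    if p.closer_years >= 2 and p.era_plus >= 125:
        return "Closer"
    return "Reliever"

# A Cabrera-like profile (projection included) comes out as a Star position player.
print(classify(Player("position", 3, 3, 140, 0, 0, 0)))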

Of the 900 players written up in the 2001 edition, 245 (27.2%) had established big league careers, broken down as follows:

Star Position Players:       10
Everyday Position Players:   49
Bench Position Players:      62
Star Starting Pitchers:       6
Good Starting Pitchers:      24
Back-End Starting Pitchers:  31
Closers:                      6
Relievers:                   57

So what do we have in the end? The average system generated right around four major league position players. Roughly half of that group established themselves as everyday players, and only about one in 12 turned out to be stars. In addition, the average system produced roughly four pitchers, split almost evenly between starters and relievers, though only half of the starting pitchers would generally be considered No. 3 starters or better.
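For anyone who wants to check the arithmetic, the per-team figures fall straight out of the table. The quick computation below is my own sanity check, not part of the original study; it assumes 30 teams and the 900 ranked players noted above.

counts = {
    "Star position players": 10,
    "Everyday position players": 49,
    "Bench position players": 62,
    "Star starting pitchers": 6,
    "Good starting pitchers": 24,
    "Back-End starting pitchers": 31,
    "Closers": 6,
    "Relievers": 57,
}

TEAMS, RANKED = 30, 900
total = sum(counts.values())                                    # 245 established careers
hitters = sum(v for k, v in counts.items() if "position" in k)  # 121 position players
pitchers = total - hitters                                      # 124 pitchers

print(f"established careers: {total} of {RANKED} ({total / RANKED:.1%})")  # 27.2%
print(f"per team overall:    {total / TEAMS:.2f}")     # about 8.17
print(f"per team, hitters:   {hitters / TEAMS:.2f}")   # about 4.0
print(f"per team, pitchers:  {pitchers / TEAMS:.2f}")  # about 4.1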

Was I right in my statement? A little bit. Taking a snapshot at a single point in time, the average system provided roughly two regular position players and two bench players, as well as two starters and two relievers. Think about what that means: a team runs four full-season minor league clubs, plus two more short-season teams come June, and all of the work to scout, sign, and develop the equivalent of six teams' worth of baseball players yields, in the end, around eight major league ballplayers, of which only about half are of significant value. That is an incredibly high failure rate, but not one that needs to involve placing any sort of blame. It's the nature of the beast, and a tribute to just how hard this game is when it is played at the major league level.

So the next time you're looking at a Top 10 list, no matter who wrote it, temper your expectations. No matter what process one uses to develop prospect rankings, most of those players for one reason or another just aren't going to make it.

I really didn't want to get into the team-by-team tallies or individual players, because I did not want this piece to get sidetracked into criticism or praise of Baseball America's rankings or any individual team's performance. That doesn't mean I didn't find some things that are at least mildly interesting. Here they are, in bullet-point form:

  • The system I used is anything but perfect and did involve some subjectivity. There are plenty of situations where a player was graded a regular but still has a significant chance to become a star (Carl Crawford, for example). In addition, there are a very small number of players (fewer than 10) who were not counted at all but still have a decent shot of establishing big league careers. Adam Wainwright and Shin-Soo Choo (good, but not elite, prospects) are two notable examples, and they also fit the most common profile of these players: once highly regarded prospects who have seen their careers stall a bit.
  • While the average team produced 8.17 established big league players, the distribution was fairly even, with 22 of the 30 systems supplying at least seven. Three teams (White Sox, Pirates, Devil Rays) produced 12 or more, while only two teams (Brewers, Cardinals) provided fewer than five.
  • The Devil Rays produced a league-leading 14 players, of which a whopping nine were position players, including four regulars and a pair (Crawford and Rocco Baldelli) who have a chance of upgrading to star level. The number is made all the more impressive by the absence of once-upon-a-time top prospect Josh Hamilton, who had the makings of a sure-fire star before he self-destructed. As one person who saw the raw numbers astutely pointed out, the high total for this one team could be the result of a bad team that is not signing free agents and therefore creating more opportunity.
  • The Cardinals list is fascinating, as only one of the 30 players ranked made it, but that one is the single best player in the book (Albert Pujols). Bud Smith had the best shot other than Pujols, but was beset by arm injuries.
  • The Astros system produced a relatively high total of 11 players, and arguably the best overall collection of talent. Almost as remarkable is the fact that almost all of the major talent is still with the Astros, including three members of their starting lineup (Morgan Ensberg, Adam Everett, and Jason Lane), as well as their now-ace (Roy Oswalt) and closer (Brad Lidge).
  • Of the six closers in the book, half of them are in the Angels Top 30 (Bobby Jenks, Francisco Rodriguez, Derrick Turnbow), and all three were starters at the time of publication.
