April 18, 2001
Top 40 Prospects In Review: Part Seven
First, let's recap the complete Top 40 lists for each publication, along with the grade for each player:
You'll note that Alfonso Soriano has an asterisk next to his grade. We originally awarded Soriano a grade 2 when we first started this retrospective, back in February, but events since then suggest that Soriano's stock hasn't fallen as much as I had thought. I'm still not a believer in his long-term future, but a grade 3 now seems more appropriate.
Let's go ahead and get the rankings out of the way. Using a weighted average (the grade of the #1 prospect is worth 40 points, the #2 prospect 39 points, etc., with the #40 prospect just 1 point), here is the average score for each publication:
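The weighted-average scheme described above can be sketched in a few lines of code. This is a minimal illustration, not the actual data: the grades passed in below are hypothetical placeholders.

```python
# Weighted average of prospect grades: the #1 prospect's grade carries
# weight 40, the #2 prospect's grade weight 39, ..., down to weight 1
# for the #40 prospect.
def weighted_score(grades):
    """grades: list of grades, ordered from the #1 prospect down to #40."""
    weights = range(len(grades), 0, -1)        # 40, 39, ..., 1
    total_weight = sum(weights)                # 40 + 39 + ... + 1 = 820
    return sum(w * g for w, g in zip(weights, grades)) / total_weight

# Hypothetical sanity check: a list graded 5 across the board scores 5.0
print(weighted_score([5] * 40))  # -> 5.0
```

The point of the weighting is that a miss at the top of the list hurts far more than a miss at the bottom: the #1 slot counts forty times as much as the #40 slot.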
There hasn't been a race that close since the 1973 NL East. While we'd like to claim victory by a whisker, the race is so close that if we give D'Angelo Jimenez a grade 1 instead of an incomplete, and give Ramon Hernandez a grade 4, the scores would be:
The point is not simply to prove who's the best at making predictions--if it were, I could have left Soriano's grade as a 2 and engineered a clear BP victory! It should be pretty clear that this was damn near a dead heat, and none of the publications can claim a better prospect list than the others. In itself, this is very revealing. It appears that traditional scouting methods do not evaluate prospects any more accurately than newer sabermetric methods. That doesn't mean that scouts are useless; it does mean that objective analysis has as much of a place in the evaluation of prospects as the scouts do.
The scores hover around 3.7, which (given that a grade 4 is our equivalent of the .500 pitcher) suggests that none of us did a particularly good job ranking prospects last year. That only adds to the argument that last year was a very tough year in which to evaluate prospects. As this is the first time we've evaluated top prospects lists in this way, we can't tell if it was just a bad year or if projecting prospects is simply more difficult than we thought. Hopefully, we'll have our answer next season when we evaluate our prospect lists again.
Let's dig a little deeper. It's no secret that we are much more conservative in evaluating pitching prospects than hitting prospects. This next chart illustrates how much more conservative we are than Baseball America and Sickels.
Only 11 of the 40 players on our Top 40 list were pitchers, and the number of points awarded for pitchers on our list is less than 25% of the total, compared to 38% for BA and 36% for Sickels.
Is that conservatism warranted? Let's split up the pitchers and hitters on each list, and compare what the weighted grades for each group were:
This data is not particularly compelling, in that the worst projections for pitchers--and the best projections for hitters--came from John Sickels, whose philosophy on pitchers vs. hitters takes the middle ground between us and Baseball America. However, you'll note that the best grade by any publication in either category was for our projection of pitchers. We may be biased, but we'll take that to mean that by restricting inclusion on our list to only the very finest pitching prospects in the land, we can minimize the danger that we'll get burned by an injury to a John Patterson or make a mistake projecting an A-ball hurler like Wilfredo Rodriguez or believe in the myth of the minor-league closer with a Francisco Cordero.
The extra hitters that stock our list did not drag our grade for hitters down by much; our grade for hitters was just two-hundredths of a point lower than that of Baseball America, even while our grade for pitchers was nearly a quarter-point higher.
It's possible we swung too far the other way, however. If you look at just the bottom 10 prospects on our list, three of them are pitchers (Jon Garland, Mike Meyers, and Ramon Ortiz), who had respective scores of 6, 3, and 5. Of the seven hitters in our bottom 10, only one (Adam Piatt) had a grade higher than 3. It's reasonable to suggest that some of those extra hitters at the bottom of our Top 40 might be less deserving than the two or three best pitching prospects who didn't make the list.
If that's the case, hopefully we found a better balance this year. The Top 40 list in Baseball Prospectus 2001 includes 13 pitchers (up from 11) totaling 275 points (up from 204). Then again, 40 of those points were wasted on Ryan Anderson.
Let's look at how each publication fared with their "reaches," those players who didn't appear on either of the others' Top 40 lists. Both BA and Sickels had seven players unique to their Top 40 list, although Sickels had only six once you eliminate Ramon Hernandez (who wasn't a rookie last season, and so wasn't eligible for any other list). We had nine unique players, which isn't surprising given the lip service we sometimes pay to conventional wisdom. Here are those players:
It's clear that we did very poorly here. Six of our nine unique players, including all five ranked #34 or lower, scored a 3 or less. By comparison, Baseball America did very well with their unique players, and Sickels--thanks to big hits with Hee Seop Choi and Jesus Colome--had the highest weighted average in the group.
You'll also note that almost all of the poor grades in this category went to players at the bottom of the lists. Players that one publication mentioned at the bottom of its Top 40 tended to crash and burn. By comparison, of the eight unique players who were ranked higher than #30--players that one publication was very optimistic about despite the lack of attention from the others--only two had grades below 4: Esteban German and Eric Munson.
So what's the lesson for the reader to take from this? Well, you could sum the "points" each player earned across all three lists, creating a composite Top 40 list from the three publications. Such a list would look like this:
The only problem is that the anticipated synergy doesn't emerge; the weighted grade (sans Jimenez) of this Top 40 list is 3.64, which is lower than any of the three lists on their own!
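The composite ranking described above can be sketched as follows. This is just an illustration of the method, assuming each publication's list is an ordered sequence of player names; the toy names in the example are hypothetical, not real prospects.

```python
from collections import defaultdict

def composite_top40(lists):
    """Combine ranked Top 40 lists into a single composite ranking.

    lists: iterable of lists of player names, each ordered from #1 down.
    A player ranked r on a list earns 41 - r points (so #1 = 40 points,
    #40 = 1 point); points are summed across all the lists, and the
    composite keeps the 40 highest totals.
    """
    points = defaultdict(int)
    for ranked in lists:
        for rank, player in enumerate(ranked, start=1):
            points[player] += 41 - rank
    return sorted(points, key=points.get, reverse=True)[:40]

# Toy example with hypothetical names: "C" sits third on both lists,
# so it finishes behind "A" and "B" in the composite.
print(composite_top40([["A", "B", "C"], ["B", "A", "C"]]))
```

As the article notes, the striking result is that this consensus list scored worse than any individual list, so averaging the three opinions added no value here.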
So the lesson to take from all this is that there are no lessons. There are no hard and fast rules. There is not even an advantage to forming a consensus opinion. In short, the task of prognosticating minor leaguers is both a science and an art, and the most important step any evaluator needs to take is to recognize the inherent limitations in predicting the future. God is omniscient; the rest of us just have to do the best that we can with the information that's available, and recognize that even the surest of prospects--like Nick Johnson--is no sure thing.
Rany Jazayerli is an author of Baseball Prospectus.