
Baseball Prospectus is pleased to announce five new additions to the 2014 player cards, three of which have been missing for a few years.

The original intended use for UPSIDE and long-term projections was to aid in discerning how various prospects might do during the time when their organization had control over them (roughly five full-time seasons). The methodology was based on the actual performance of the most-comparable players over the years in question. Some details have changed over the years, but the core concept has been brought back for 2014, with projections through 2023. Here are some clarifications of what we're doing now:

  • Our current definition of PEAK is the one set forth in the Glossary. PEAK doesn't refer to a particular statistic; it refers to the sum total of any given statistic over a contiguous set of five seasons. These five seasons will be the next five for players aged 24 or older. But for players younger than 24, they will be the most productive five-year window up through and including the age-28 season. For example, either the age 23-27 or age 24-28 span will be used for a 23-year-old player, whichever represents the highest sum total of the statistic being measured. Continuing the example, such a player could have a "PEAK FRAA" that is the sum of his FRAA values from his age-23 through age-27 seasons and could also have a "PEAK WARP" that is the sum of his WARP values from his age-24 through age-28 seasons.
  • UPSIDE is now a composite of the PEAK values of non-negative WARP for the top 20 most-comparable players (weighted by similarity). As Nate Silver observed in 2006:

UPSIDE…is focused only on the possibility that the player develops into an above-average major leaguer. It doesn’t care whether a player winds up riding the major league bench, gets stuck in Double-A, becomes the new Luis Rivas, or goes off to Australia to smoke ganja with Ricky Williams. Each of these outcomes is equally undesirable, and UPSIDE recognizes that.

We'll be using UPSIDE to compare PECOTA's top prospects to those of the BP Prospect Staff in an upcoming article series, so stay tuned for that.

  • The top 20 most-comparable players are the same ones used for the 2014 PECOTA projections and can include both major-league and minor-league seasons, though players are considered much more comparable to players at similar classifications.
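To make the definitions above concrete, here is a minimal Python sketch of how a PEAK window and a similarity-weighted UPSIDE composite could be computed. The function names, data shapes, and weighting scheme are illustrative assumptions, not the production PECOTA code:

```python
def peak_window(age, values_by_age):
    """Pick the contiguous five-year window that defines PEAK.

    Players 24 or older use the next five seasons; younger players
    use the most productive five-year span ending no later than age 28.
    """
    if age >= 24:
        windows = [range(age, age + 5)]
    else:
        # All five-year spans starting at the player's current age
        # and ending by the age-28 season (e.g. 23-27 and 24-28 for a 23-year-old).
        windows = [range(start, start + 5) for start in range(age, 25)]
    return max(windows, key=lambda w: sum(values_by_age.get(a, 0.0) for a in w))


def upside(comps):
    """UPSIDE as a similarity-weighted composite of non-negative PEAK WARP
    over the top 20 most-comparable players (the normalization is assumed)."""
    top20 = sorted(comps, key=lambda c: c["similarity"], reverse=True)[:20]
    total_sim = sum(c["similarity"] for c in top20)
    return sum(c["similarity"] * max(c["peak_warp"], 0.0) for c in top20) / total_sim
```

Note how `max(..., 0.0)` zeroes out negative-WARP comps, matching the "each bad outcome is equally undesirable" idea in the Silver quote above.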

Now for the fun stuff. To get to the new features without scrolling all the way down the player card, simply click on the "More PECOTA" tab located near the middle of the navigation bar:

From there, all the new features follow one after another:

The "PEAK 5" UPSIDE value for Sogard is the sum of his first five UPSIDE scores, as he's beyond the age where the system considers other options.

Last but not least are the most-comparable players based on similarity score. "The comparables," as Colin Wyers put it, "represent a lot of tedious number crunching (measuring Euclidean distance in n-th dimensional space, if you want to be precise)." The "Similarity Index" is based on the Similarity Scores of the top 100 most-comparable players. And the Similarity Scores are based on the Euclidean distances between the player in question (in this case, Sogard in 2014, which will be his age-28 season) and every other player in our database at the same age (for example, Jeff Keppinger in 2008, his age-28 season).
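The distance calculation behind the similarity scores can be sketched in a few lines of Python. The actual stat vectors, scaling, and distance-to-score mapping PECOTA uses aren't published, so the conversion below is purely illustrative:

```python
import math


def euclidean_distance(a, b):
    """Euclidean distance between two players' stat vectors
    (same age, same set of statistics, assumed pre-scaled)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def similarity_index(target, candidates, top_n=100):
    """Average a distance-based similarity over the top-N comps.

    The 100/(1+d) mapping is one common convention for turning a
    distance into a 0-100-ish score; it is an assumption here.
    """
    dists = sorted(euclidean_distance(target, c) for c in candidates)[:top_n]
    scores = [100.0 / (1.0 + d) for d in dists]
    return sum(scores) / len(scores)
```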

The "Trend" column can be a bit confusingit's a simple up/down/neutral metric based on whether the comparable player over- or under-performed his baseline projection by 20 percent. For the "baseline" projection, only a generic aging curve is used for comparisonsthe system doesn't evaluate the player's projection based on his comparables. Protip: ​Note the easily-overlooked arrow on the bottom right, which allows for the selection of comparable players 11-100.

We hope you enjoy these PECOTA-related offerings. If you have questions about methodology or navigation, please post them in the comments below or email customer service.

agnatgas
3/12
All the additions look great. Are they downloadable en masse in Excel?
mcquown
3/12
No mass download capabilities of this data at this time. We'll consider it for a future addition, as it is a good idea, but no promises.
gandriole
3/12
Seconded, would definitely like that.
edman8585
3/12
A little bit of VBA scripting and you can probably get it on your own in excel if you really wanted.
marjinwalker
3/12
Fantastic! Are you going to revive the "Pecota takes on Prospects" series? That would be incredible!
mcquown
3/12
Yes, this is in the works.
johnjmaier
3/12
I am looking forward to perusing these additions. I am not sure if this is the best place to ask, but I have mentioned it a couple of times in the past: any plans for a 10th through 90th percentile projection for team performance? For example, while the Red Sox project for 88 wins, their "upside" is probably about 100-105. The Yankees currently have 82 projected wins, but I cannot imagine their upside being much better than 87-89.
mcquown
3/12
Good idea. We haven't ignored the suggestion - it's in the queue, but probably not something we'll see in 2014.
brownsugar
3/12
Eric Sogard? That is the player example that you decided to use? Eric Sogard? Fine, I'll just go search Mike Trout myself!
toanstrom
3/12
Well he is the #FaceofMLB
rawagman
3/12
He is definitely the #FaceOfTheFanOfMLB
Grizpin
3/12
Outstanding! Thanks for bringing these features back :)
doog7642
3/12
I can't thank you enough. I begged for this in comments sections for years, and y'all delivered. I have no idea how many hours of work must have gone into this...but thank you. And you're also bringing back "PECOTA takes on the prospects"! Seriously...I recall comments sections a couple or three years ago full of hand-wringing at how BP had gone in the shitter. You guys have gone above and beyond to make this better than ever. Thank you for bringing back some of the best elements of the work of Nate and the crew from the '90s/early '00s.
Lagniappe
3/12
I have missed the long-term view the last few years and I am delighted it is back. The other new features are also fine additions. Kudos, Rob.
hannibal76
3/12
The long-term projections on hitters seem too conservative (and don't seem to be in sync with the 2014 projections):
- Chris Davis will never hit more than 26 home runs again (and rarely hit more than 20)
- Bryce Harper will never hit more than 27
- Javier Baez will never hit above .240
- Andrew McCutchen will never bat above .287
- Hanley Ramirez will peak at a .272 average
- Billy Hamilton will never steal more than 59 bases
mrenick
3/12
I think the 10-year projections are like the 50% forecasts. That's not saying that Harper will never hit more than 27 HR; that's just his 50% forecast for that season.
mcquown
3/12
Those are all interesting examples.
* For Davis, it seems that his comparables had trouble keeping regular playing time, as his HR rate isn't projected to decline much, but his playing time does.
* For Harper, personally, I'd say the 19 HR estimate for 2014 is the "conservative" number here. Believing that his peak projectable HR (projection peaks will never match actual peaks) is eight higher than his 2014 projection seems entirely reasonable to me.
* As a Cubs fan, I've often reviewed Baez projections this offseason. While I agree that his "hit skill" should definitely translate into better batting averages over time, it clearly didn't happen often enough among his comps to bring up his average. We do project him for a very high ISO, at least.
* Batting average is a young player's skill. For 'Cutch to never improve it (much) over his 2014 projection is perfectly reasonable. Obviously, statistical variance makes it incredibly likely that he will actually exceed that .287 figure, but having that as his projection is quite reasonable.
* See the McCutchen comment. Though with Hanley, it's sort of baffling that his comps went down so much in average, given how high his average was recently.
* Hamilton is a strange one - we may need to revisit how we project playing time for guys like him. But we do nothing to project pinch-running steals, and his SB-per-PA is still astonishing (as it should be), considering how infrequently he's projected to get on base by his own means.
mrenick
3/12
really happy to have these back! thank you
Grasul
3/12
Does someone like Jurickson Profar just not have enough data for PECOTA to capture the enthusiasm of the scouting reports? Profar has a 21% improve rate and the highest WARP in his 10 year projection is 1.6, which seems quite a bit tamer than expected based on being a #1 prospect, etc.
mcquown
3/12
I know one problem Profar faces is that he's being evaluated as a second baseman, where he's struggled defensively (which hurts his WARP both due to the lower FRAA and the higher replacement level of second basemen compared to shortstops). But as far as his TAv peaking at such a pedestrian level (though high for a middle infielder), it would seem that his comparable player list didn't do quite as well as could have been hoped. I'll investigate him in specific, he was coming out with a higher peak in earlier trials. He is an interesting case study.
Grasul
3/12
Thanks for the reply.
scothughes
3/12
Nice to have this data back on the player cards.
Dicktators
3/12
These UPSIDE and BREAKOUT estimates are potentially very helpful tools. I was wondering if you have done any after-the-fact regression analysis (or something similar) to see how valid these type of predictions have been in previous years. When I did a quick look at players with high breakout rates in 2013, there did not seem to be a high correlation with actual success. Thanks!
mcquown
3/12
UPSIDE and the diagnostic numbers are based on the performance of the comparable players. We are constantly doing test bench evaluations of the performance of these comparable players, to get the best set of parameters (and weights) to use for computing similarity. We definitely are striving for the most accuracy possible.
jdeich
3/12
It would be helpful to take a 'snapshot' of predictions before the 2014 season, and publish a review of its predictive power after the season is over. Obviously, you'll tweak and improve the model as time goes on (using 2014 data as it becomes available), but as a result it wouldn't be fair to compare the September 2014 model to 2014 performance. Pre-season snapshots will clearly show your progress to the audience.
Dicktators
3/12
Good, I'm glad to hear that you are bench testing to make sure you have the best possible matches with comparable players. However, the "best" match does not necessarily mean it has good predictive capability. It would be interesting to know if these UPSIDE numbers have a high likelihood of predicting future performance or not (and what that likelihood is). Thanks!
mcquown
3/12
Yes, good suggestions. And I definitely agree with your primary concern that these numbers should have some anchor to actual outcomes, else what good are they? What I mean by test bench evaluations is running the same code against past seasons and looking at exactly the sort of comparisons you are suggesting here for purposes of optimizing the accuracy. Obviously, we strive to make these projections as accurate as possible. As far as eyeballing 2013 breakout projections versus actual 2014 results, a "breakout" is defined as a season of +20% over established past performance levels, so for a full-time player to have a "break out" season is rather rare. Please do keep suggesting this sort of thing for content - I'm sure there are plenty of articles in this vein which can be written.
organizedfamine
3/12
This is really cool, thank you! Is this information only available on individual players pages? For example, is there a way to create a leaderboard for who PECOTA will project to lead the majors in home runs in 2016?
therealn0d
3/12
There is; download the PECOTA spreadsheet and order it by HRs.
therealn0d
3/12
Oops, never mind. You can do that for 2014, though :)
ddietz2004
3/12
Well, thanks for making my work day as productive as BJ Upton's 2013 season.
edman8585
3/12
Good stuff, quick concern. I was looking at Jurickson Profar's FRAA projections, and I see that his FRAA numbers get worse as the percentile increases. That doesn't make sense to me, and could be a bug.
mcquown
3/12
I'm sure we can do better in the future at handling FRAA-specific percentiles, but for now - as in years past - percentiles are a function of at-the-plate offense (read: TAv) only. FRAA will always increase in magnitude as percentiles go up, as it's scaled strictly as a ratio of playing time.
edman8585
3/12
Or... it's just that fielding is assumed to be constant, and the extra playing time
edman8585
3/12
... only makes it more negative. I see you already addressed this. Thank you.
lipitorkid
3/12
First: I love you guys. Second, the old projections always seemed conservative, but you had to remember they weren't about a player's ceiling. I just found them useful for comparing players, especially in a keeper league. TY for bringing these back.
myshkin
3/12
Glad to see some old favorites back. I'm not sure when the BP Articles section changed, but it appears that there is now one entry for each author of each article. Is that working as intended?
mcquown
3/12
Do you mean on the player cards? The cards have actually always worked this way (at least since I joined BP). It's designed so that if you want to sort by authors to see everything a specific author has written about a player, you are able to do that easily.
myshkin
3/12
That is indeed what I meant, and I guess I hadn't noticed that. I tend to use the Search Articles page if I want to filter by author. I can see how that would be handy, though. Thanks for the clarification.
jfranco77
3/12
Is there some way to make UPSIDE more readable? I have a hard time looking at Mike Trout and knowing that his 'upside' in 2016 is 86.2 WARP. (Really, it's 20 players, so 86.2/20 is about 4. But not all of those 20 players count fully, because some of them were negative. Except maybe in Trout's case they weren't.)
mcquown
3/12
This is something we've discussed internally. I always had that same issue with the original UPSIDE articles, but we wanted to start by getting back to something close to the system that was in place before (though with the WARP redefinitions, it's not an apples-to-apples comparison through the years, especially for pitchers). We're definitely still considering ways to make UPSIDE more easily compared to other stats, such as projected 1-year WARP.
jonmischa
3/12
I guess I don't understand these long-term projections. Xander Bogaerts is the #2 prospect, and your scouting report says he "projects to hit for both a high average and game power." But according to the projections, you don't think he's ever going to hit for a better average than .267. I'm assuming I just don't understand the system or something.
mcquown
3/12
Good question. There's absolutely no doubt that Bogaerts has serious upside potential and is likely to hit well. But - as can be seen from some of the names on his comparable players list - sometimes things don't go as planned.
markpadden
3/12
It seems unusual that you are projecting a TAv of .277 in 2014 and then .273/.278 in 2015/16 for Bogaerts. Is there any way to create internal consistency between the 1-year forecasts and the long-term forecasts? Not critically important, as I realize these are rough estimates. But just curious if there is an obvious answer as to why 1-year and long-term PECOTAs might not agree.
mcquown
3/13
There is internal consistency. The long-term projections are based on the progression of comparable players beyond the first season, given the first season's projection (2014) as a starting point.
markpadden
3/13
I think the issue is that by using a projection as the input for another projection, the expected high-performers for 2014 are going to be treated as outlierish by your long-term algorithm, which will look for regression. This is my best guess as to what is creating the odd "pop-and-drop" aging curves for prospects (see my comments below).
markpadden
3/13
More generally, I don't think you want to be using a 1-year projection as fact when creating the projection for years 2+. For the same reason that you wouldn't try, before the NCAA tourney started, to project second round winners after assuming specific outcomes of first round games. Unless you were averaging the results of many simulations, but it doesn't sound like you are doing that.
markpadden
3/15
Love that you simply ignored my comments. Am I to assume you agree with them but are too lazy to address them via actual changes to your algorithm?
eliyahu
3/12
This is really, really great. If I'm reading between the lines, you guys took these down for a while in order to make meaningful upgrades. While it's never going to be perfect, my faith in BP is such that if you're happy publishing these, I can trust them. Thanks for all the effort on this. Been waiting for this for a while.
mdthomp
3/12
As a Cardinals fan I hate Oscar Taveras number one comp.
markpadden
3/12
Please add the 2014 projection to the Long Term Forecast section. It would also be nice to see last year's (2013) MLE stats in this section, to get an idea of what kind of change you are projecting vs. current performance levels. Thanks.
markpadden
3/12
The aging curves for prospects really don't pass the eye test. For example:
Addison Russell: age 20 (2014): .249 TAv, 21: .265, 22: .255, 23: .249, 24: .256
Carlos Correa: 19: .245, 20: .275, 21: .270, 22: .271, 23: .269
Joc Pederson: 22: .269, 23: .278, 24: .250, 25: .253, 26: .278
PECOTA is saying all of these guys are essentially major league ready for 2014 (very bullish), but will experience declines in performance from 2015 to 2017 (extremely bearish/bizarre). In addition to the general curve shapes not looking right, there is the issue of lack of sufficient smoothing of the data (see Pederson's predicted roller coaster from age 23 to 26). I had high hopes for the revamped long-term projections, but honestly these numbers do not instill confidence.
philly604
3/13
But were you more confident in the old ones? Frankly, these were never very good as far as I could tell. That's not meant to be harsh - who the hell would actually expect a computer algorithm to do a good job projecting human behavior 10 years out? As good as year 1 projections might be in a general sense, they are not *that* accurate. And it just gets harder every year further out. The problem with taking these away is that now that they've come back after a long hiatus, people are giving them a more critical look and realizing that there just is not that much value to them, imo.
markpadden
3/13
No, I wasn't confident at all in the previous effort from Wyers et al. 2-3 years ago -- which is why I am disappointed the glaring issues present in those forecasts appear to remain unaddressed. The long-term forecasts by Silver (pre-2011) at least made sense. They had rational aging curves and were smoothed appropriately to reflect uncertainty. The two forecast algorithms -- the one used prior to 2011 and the one(s) used in 2011, 2012 and 2014 -- are quite different.
mbodell
3/13
Agreed. The behavior that used to be present was some players would be predicted to hit like: .260, .262, .258, .295, .255 Or some other weird spike several years out. That was suggestive of too tight a fit/overfit to similar players (and one of the many reasons a locked model for future evaluation is important instead of just back testing is desirable) or some sort of other bias or problem.
ravenight
3/13
This post and some of the similar complaints sound to me like people on poker forums complaining about AA being projected to lose 20% of the time against 22. Those projections look fairly reasonable to me if you take them for what they are: the performance of comparable players. Naively, what would you expect from guys knocking on the door? They should have a high chance of flameouts/career-ending injuries/general disappointment, shouldn't they? Most prospects are risky. But take a 22-year-old who is nearly major league ready, project him to still be playing 4 years from now, and what do you get? The numbers should tick up. Especially for an outfielder, since they get fewer shots to come back. The ones who are around at 26 tend to actually be good.

My problem with the projections, though, is exactly this factor. If there's a 60% chance a guy is putting up a .220 league-adjusted TAv in AAA in 4 years, and a 40% chance he's putting up a .300 TAv in the majors, then it's not really accurate to project him as a .252 TAv, and it won't coincide with scout projections. I know it's nice to have a single number, but in this case the average just really doesn't convey useful information. Upside seems like a better approach.

Maybe for a single number it would be better to use the median value among the comps? Weighted by similarity, perhaps, so you'd add up all the similarity scores in the top 100, divide by 2, then count down until you checked off that much similarity, and just use that player's performance in that year (in other words, re-order by performance in a given year, then count off the similarity).
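In code, that similarity-weighted median would look something like this (a rough sketch; the field names are made up):

```python
def weighted_median_performance(comps):
    """Similarity-weighted median of the comps' performance in a given
    year: sort by performance, then walk down until half the total
    similarity weight has been accumulated."""
    ordered = sorted(comps, key=lambda c: c["value"])
    half = sum(c["similarity"] for c in ordered) / 2.0
    running = 0.0
    for c in ordered:
        running += c["similarity"]
        if running >= half:
            return c["value"]
```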
markpadden
3/13
These are projections, not raw data dumps. The idea of an algorithmic projection is to use past data to predict the future as accurately as possible by whatever means available. No one is constraining BP to publishing raw data derived solely from similarity scores; nothing is preventing them from using comparable player data as one input to a more sophisticated predictive model. For all we know, they already are doing so. My argument is simply that the current black box model's output is not looking terribly logical, and doesn't appear to me to represent the best estimate of what a given player's TAv will be in a given year in the future. As for the curves looking reasonable to you, only a high prevalence of *non*-career-ending injuries would cause a lot of ramps in skill followed by immediate declines in a player's early 20s. Not sure there are enough damaging-but-not-catastrophic injuries occurring to young position players (plus legitimate age ~21-onset skill declines) to make that the average case. Note that in many cases, both the ramps up and the declines are projected; it's not like the model is simply predicting regression for outperforming minor leaguers. I agree with you on the utility of upside. Would like to see 25th and 75th percentile projections for each year, so we get an idea of the volatility/uncertainty levels.
ravenight
3/13
Non-career-ending injuries and just general not making it is why the prospects go up and then down - that seems very realistic. Look through old top 101s and tell me how many of those guys had some decent years and then settled into mediocrity? Career-ending injuries and other forms of culling explain the ticks back up in even later years - players who stick around that long tend to play well or they wouldn't be employed.
fawcettb
3/13
I dunno. This seems like a very blunt instrument to me, one without anything to compensate for early career adjustments and contingencies (v. Profar) or genuine breakouts (Chris Davis).
mwright
3/13
Glad you brought these back. Obviously it would be completely outstanding if they were ever added back to the spreadsheet for easy comparison purposes. Using the upside figure to discern between certain prospects I was considering led to some killer drafts for me back in the day. In fact, dudes like McCann and Pedroia are still paying dividends.
rocket
3/18
Jaff Decker has a higher PEAK5 than Danny Salazar. Not sure what to do with that information.
Lenzkid10
3/19
I am really confused by the "Percentile Forecast" portion of the data. Continuing with the Sogard example and looking at, let's say, his 70th-percentile data: 1) 70th percentile of what sample, and what data value does it go by? All players by WARP? Second basemen by WARP? 2) The 70th-percentile data has him at 273 plate appearances, just over half a season. This just doesn't make sense to me. Can anyone clarify? Thanks.