It’s been a “slow” offseason by all accounts. Most of the “major” free agents are still not entirely sure what hat they’ll be wearing next season. It’s been a little perplexing. The Hot Stove usually has a few more coals on it to keep us warm during the cold, dark months when there isn’t any baseball. And so, of course, stripped of the ability to talk about the reasons that a team might have signed a player and why that might or might not be a good idea, we are now reduced to writing about why teams haven’t signed a player (or players in general) and why that might or might not be a good idea.
There are several potential explanations that have been floated, from the nefarious (collusion!) to the coincidental (maybe it’s just a slow year) to the circumstantial (teams are just saving their bitcoins for next year’s loaded class) to the logical (umm, sure J.D. Martinez can hit, but he’s a poor-fielding corner outfielder and teams are getting too smart to be giving a guy like that $25 million a year). But there has been one corner of the free agent market where things have been swimming along nicely. Relievers (thanks, Colorado!) have signed quickly and at prices that I’m sure momentarily sparked some hope among the rest of the class.
I think the movement in the reliever market tells us something important about the rest of the market for free agents going forward.
Warning! Gory Mathematical Details Ahead!
Just about everyone reading this has been well trained over the past decade or so to view free agent signings through the lens of “dollars per WAR.” We’re told that there’s a price (X) that teams “should” pay for a projected win (or more to the point, a WAR). A player who projects as a two-WAR player next year should get 2X dollars. Sometimes teams sign particularly good players to longer-term deals in the hopes of trading off a few bad “decline” years on the back of the contract for a few “under-pay” years on the front end, but things are always phrased in the language of dollars per WAR.
Dollars are the final denomination of everything in free agency, but is WAR–and yes, here we need to point out that we’re talking about projected WAR–the right way to look at things? And I’m not talking about the inherent problem of predictions. Yeah, some guys fall apart when no one sees it coming and some guys put up value that no one knew they had in them. We all project guys based on their last couple of years and assume that, going forward, they’ll do that again, and maybe slip a little as they age. Sometimes, we’re wrong. But before any of that happens, most people are operating on that same assumption. It’s not even the annoying reality that there are 20 different versions of WAR out there. I want to ask a more fundamental question: Is WAR a good place to start to look for value when building a team?
Let’s for a moment remember what WAR is. It’s a fantastic statistic, but one that was meant to answer a very specific–and different–question. WAR was meant to solve the “group project” problem in baseball. WAR begins with the lament that baseball is a team game and that individual players can’t be held responsible for the behavior of their teammates. Famously, RBIs have been banished because a batter who plays on a team where there are a lot of runners aboard for him to knock in–something he doesn’t control–will look better than a guy who plays on a team that doesn’t have many good on-base guys, even if the two players have exactly the same outcomes in their plate appearances. WAR wants to strip away all of the context so that we can evaluate individual players on their own merits, rather than those of their teams. There’s a lot of value in that. It’s also not the question that teams are asking in free agency.
The conceit of WAR around this time of year is that if a team signs a player whom everyone agrees is a “three-win player” and he’s replacing a guy whom everyone agrees is a replacement-level placeholder, then the team should be bumped up three wins in the projections for the coming year. That might not be a bad shorthand, but over-simplification has a way of being overly simple.
I have written at length about how I don’t believe that WAR adequately captures the contributions of relievers, for a very specific reason. WAR strips out not only the context of the team that surrounded the player, but also the situations that surrounded the player. For hitters, that makes sense. Hitters don’t pick when they come to bat in a game. The fact that they got that big hit with runners on second and third with two outs is nice, but it’s not like they lobbied to go up right then. Starters are the same way. They might find themselves in the middle of a close game, but a good chunk of that is actually going to depend on the hitters who also wear the same uniform.
Relievers, on the other hand, are different. The bullpen is the one part of a team where the manager picks the reliever to match the moment. The guys who go into the high-leverage spots go into them specifically because they are good. WAR generally does try to correct for this by applying a leverage adjustment to each reliever. If a reliever had a completely context-free WAR of 2.0 and faced an average leverage of 1.5, he gets his WAR multiplied by 1.25, which is halfway between 1.0 and 1.5. Still, that seems a little off.
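For readers who like to see the arithmetic, here is a minimal sketch of that leverage adjustment as described above (the function name is mine; the formula is the "halfway between 1.0 and the average leverage index" multiplier from the text):

```python
# Sketch of the leverage adjustment described above: a reliever's
# context-free WAR is multiplied by the midpoint of 1.0 and his
# average leverage index (aLI), i.e. (1 + aLI) / 2.

def leverage_adjusted_war(context_free_war: float, avg_leverage: float) -> float:
    """Apply the standard reliever leverage adjustment to context-free WAR."""
    multiplier = (1.0 + avg_leverage) / 2.0
    return context_free_war * multiplier

# The example from the text: a 2.0-WAR reliever at aLI = 1.5
print(leverage_adjusted_war(2.0, 1.5))  # 2.5 (i.e., a 1.25x multiplier)
```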
For the 2017 season, I looked for the players who had the biggest discrepancy between their WAR and their Win Probability Added. WPA is a stat that specifically includes the context of a player’s contributions. Pitchers who record a 1-2-3 ninth inning with a one-run lead get a big boost in their WPA because they take a game that was still somewhat in doubt and put it safely in the win column. Had they done the same thing in the sixth inning with a three-run lead, it wouldn’t have been worth as much in ending the game, although WAR would have seen the two performances as largely the same (prior to the leverage adjustment).
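To make the mechanics concrete: WPA simply sums the change in a team's win expectancy over each plate appearance a pitcher is on the mound for. The win-probability numbers below are assumed for illustration, not taken from a real game:

```python
# WPA credits a pitcher with the change in his team's win expectancy
# across his plate appearances. Hypothetical numbers for a 1-2-3 ninth
# inning with a one-run lead: (win prob before PA, win prob after PA).
win_prob_before_after = [(0.88, 0.91), (0.91, 0.95), (0.95, 1.00)]

wpa = sum(after - before for before, after in win_prob_before_after)
print(round(wpa, 2))  # 0.12 -- the same three outs in a low-leverage
                      # spot would move the needle far less
```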
Still, if we ought to believe that a player’s WAR should be roughly commensurate with the number of “extra” games that he helps his team to win, why does the WPA-minus-WAR leaderboard have these 10 men in the top spots?
- Addison Reed
- Wade Davis
- Brad Hand
- Ryan Tepera
- Shane Greene
- Kenley Jansen
- Jorge de la Rosa
- Alex Claudio
- Sean Doolittle
- Corey Knebel
If you read further down the list, there are a few non-relievers who poke their noses in (Melky Cabrera is no. 11), but it’s mostly relievers, and the top 10 all had deltas greater than 1.5 wins. WAR significantly under-credits them for the actual value they end up providing to their teams within the context in which they play, and it does so more for relievers than for position players or starting pitchers. That conclusion is borne out by other numbers. The correlation between WAR and WPA among position players (minimum 250 plate appearances) was .739. For starters (minimum 90 innings), it was .775. For relievers (minimum 40 innings), it was only .680.
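Those correlations are ordinary Pearson coefficients. The player data isn't reproduced here, but as a sketch, `xs` and `ys` would be the WAR and WPA values for each qualifying player in a group:

```python
from math import sqrt

# Sketch of how the WAR-vs-WPA correlations above would be computed.
# xs and ys are paired lists, e.g. each reliever's WAR and WPA.

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy check: a perfectly linear relationship gives r = 1.0
print(round(pearson([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]), 3))
```

A lower r for relievers means WAR leaves more of their game-context value unexplained, which is exactly the argument here.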
Now, WPA-minus-WAR is a junk stat at the player level. I don’t expect that Addison Reed, one of those relievers who just got a nice contract from the Twins, has some sort of special talent for out-pitching his WAR or that those numbers say anything specific about the players who recorded the highest numbers. But the fact that they are all relievers says something about how WAR can miss the mark on how much of an effect the position of “reliever”—and particularly late-inning/high-leverage reliever—can have on how many games a team actually wins or loses. It isn’t a secret why. High-leverage relievers go into the game during … high-leverage situations. Their performance “counts” for more in terms of actually winning and losing.
Now, it would be perfectly reasonable to say that those deltas may be a function of the fact that the WAR-WPA link for relievers might just have bigger error bars (the correlational evidence would suggest that). A two-win pitcher out of the pen might be most likely to get you two wins of WPA. The difference isn’t in the expected value overall, but in the range of values that are possible. So, wouldn’t it be silly to pay for larger error bars? Not necessarily. Error bars are actually pretty valuable in baseball.
Teams aren’t collecting context-neutral wins, which is what WAR is. They aren’t even specifically collecting raw wins. That may seem like a strange thing to say, but a successful regular season is denominated in getting to the playoffs, and wins are the thing that powers you there. And here, we have to remember one other thing. The win curve for getting into the playoffs is not linear. One win is not worth five percentage points toward a playoff spot. For example, consider a team that projects to win about 84 games. That’s not a playoff team. If you somehow made them three wins worse (81 wins), they still aren’t a playoff team but their playoff odds haven’t really gone down since they were essentially zero to start with. But if you made them three wins better (87 wins) they might sneak into a playoff spot somewhere.
Now imagine a player who makes our 84-win team one win (and exactly one win) better. The team is now an 85-win team, and still not likely to make the playoffs. Now imagine a player who might make them an 87-win team, but also might make them an 83-win team. The mid-point is still 85 wins, but now because there are error bars, there exists some set of circumstances that could get them to that 87-win threshold, something that the player who gave them one win and only one win couldn’t do. Our one-win reliever might not land on the positive side of those error bars, but the upside is worth more than the equal-sized downside, because around 84 wins the downside costs almost nothing in playoff odds while the upside buys quite a bit.
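The reasoning above is just the convexity of the win curve in numbers. The playoff-odds figures below are assumed purely for illustration (they are not real projections), but any convex curve around the wild-card threshold gives the same result:

```python
# Toy illustration of why variance is worth paying for on a team near
# the playoff bubble. Playoff odds at each win total are ASSUMED
# numbers, chosen only to be convex around the threshold.
playoff_odds = {83: 0.02, 85: 0.10, 87: 0.45}

# A player who adds exactly one win: the team lands on 85 wins for sure.
certain_one_win = playoff_odds[85]

# A player who adds one win on average, but 50/50 between 83 and 87.
# Same 85-win midpoint, very different expected playoff odds.
risky_one_win = 0.5 * playoff_odds[83] + 0.5 * playoff_odds[87]

print(certain_one_win, risky_one_win)  # the risky profile wins
```

Because the curve bends upward near the threshold, the expected value of the risky profile beats the certain one even though both average 85 wins.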
Let me put this another way. This is all a really mathematically complex way of saying that you want a good bullpen because you can put those guys into the really high-leverage situations when an actual game is in the balance. It might not work, because all pitchers melt down sometimes, but those are the games that will make or break a season. This is not a new concept. WAR just has a hard time reflecting that particular part of the geometry of baseball, because its initial impulse is to try to make all situations equal.
Relievers have that extra error bar and it might swing a team’s way, which means that a team might win an extra actual game (not some context-neutered, theoretical “on-paper” win) more than their WAR might suggest. And while everyone (perhaps even their agent!) is looking at WAR to evaluate how good a contract is, teams are realizing that the variance inherent in that WPA-minus-WAR delta is valuable unto itself. It means that the team is getting a better return than a dollars-per-WAR evaluation of the contract would suggest.
It makes relievers more valuable on a dollars-per-what-we’re-actually-looking-for basis; it’s just that, surprisingly, this time what-we’re-actually-looking-for isn’t WAR. If relievers and their agents really are pricing their services using WAR, in an environment where all of the front offices are getting much wiser about such things, relievers will be more quickly snatched up. That’s what’s been happening.
Now, will someone please sign Yu Darvish?