August 20, 2013
Saving the Save
Last week, I asked whether teams should use their best reliever (the closer?) to protect a three-run lead in the ninth (a "cheap" save) or to keep a one-run deficit in the ninth from getting bigger, in the hopes of scoring to either tie the game or go ahead. Despite the sabermetric wisdom that a closer should be used when the game is... close-er, it actually makes more sense to protect the bigger lead than to chase what is nearer to a lost cause than we might like to admit.
Let's formalize that, shall we?
Many of the fine folks who read Baseball Prospectus will be familiar with the concept of Leverage Index, invented by noted sabermetrician Tom Tango (and if you are, you can skip this paragraph). Leverage Index begins with the concept of win probability, which is the idea that if I know the score, the inning, the number of outs, and where the runners are, I can look up how often a team has won or lost in games that featured the same set of circumstances. There are some situations where the possibilities for a big change in win probability, in one direction or the other, are high. Think of the home team being down a run in the bottom of the eighth, but with runners at second and third with two outs. What happens in this plate appearance will have a large influence on what happens in the game. Compare that to a plate appearance in the sixth inning of a 15-2 game. It doesn't matter what happens there. The Leverage Index is a way of mathematically formalizing the potential impact of an individual plate appearance on a game's outcome.
The Leverage Index is great if you want to make decisions or analyze outcomes from plate appearance to plate appearance. The problem is that not all decisions are made from plate appearance to plate appearance. The biggest one is constructing a strategy for using the bullpen. There will be situations where managers make decisions based on one or two hitters (probably involving a LOOGY), but since the dawn of the one-inning closer, who begat the one-inning eighth-inning set-up guy, bullpen strategy has focused ever more on relievers who are sent out to start and finish a single inning. The merits of this system are debatable, but since that is the way the game is structured, how can we figure out which innings managers should prioritize?
To do that, I created a small wrinkle on the Leverage Index.
Warning! Gory Mathematical Details Ahead!
The Leverage Index is really a measure of how much win probability usually changes, given a set of circumstances. For the initiated, the formula is just the standard deviation formula (technically, the root mean square of the differences), except that instead of summing deviations from the mean, you sum the deviations of the subsequent game states' win probabilities from that of the target game state. This pseudo-standard deviation is compared to the standard deviation for all events in all games, and voila! you have a scalar measure. A Leverage Index of 1.0 means that this plate appearance is exactly average in how much it could influence the game (compared to everything else), and a Leverage Index of 2.0 is twice as important as that. I used much the same approach, except that I simply moved from the beginning of one half-inning to the beginning of the next. I used all games from 1993-2012 to derive these numbers. (For the acronymati, Tom Tango uses a Markov model to formalize his leverage values. I just used a really big sample here, which will get me to "close enough.")
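For readers who think better in code, the half-inning calculation can be sketched in a few lines of Python. The swing values below are made-up numbers for illustration only, not real data; the shape of the computation is what follows the description above:

```python
import math

def leverage(wp_swings, league_sd):
    """Pseudo-standard deviation of win-probability changes for one
    game state, scaled by the league-wide standard deviation.

    wp_swings: win-probability change (after minus before) for each
    observed instance of the state. league_sd: the standard deviation
    of swings across all events in all games.
    """
    # Root mean square of the swings: like a standard deviation, but
    # the deviations are taken from the state's starting win
    # probability rather than from a mean.
    rms = math.sqrt(sum(d * d for d in wp_swings) / len(wp_swings))
    return rms / league_sd

# Hypothetical swings for one half-inning state (illustrative only):
swings = [0.15, -0.33, 0.10, -0.05]
print(round(leverage(swings, league_sd=0.10), 2))  # 1.9
```

A state whose typical swings are about twice the league-wide spread comes out near a leverage of 2.0, which is exactly the scalar interpretation described above.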
For the innings in which the home team was pitching (that is, the tops of innings), the most important innings and differences in score were:
Remember that all "9th" innings include extra innings as well.
For the innings in which the visiting team was pitching (bottoms), the most important innings and differences in score were:
Far and away, the ninth (or later) inning with the pitching team up by one run is the most important inning that either team will face. The rest of the list includes a lot of situations that are close and late (no surprise there), although there are some interesting differences to point out. One thing is clear: The two most important runs in a game are the run that ties the game and the run that unties the game. The reason that "Up 1" is so important is that a pitcher has the chance to give up both of those runs, and the later in a game it gets, the more toxic that becomes.
I do think it's interesting that for the home team, small leads appear to carry more weight than do tie games. Note that a tie in the ninth ranks fourth for the visiting team, but second for a home team. The commonly heard maxim is that in an extra-inning (or ninth-inning-tie) situation, a visiting team should play for a win, while a home team should play for the tie. Usually, that's in reference to hitting strategies, such as whether the home team should play for one run by bunting or try for a multi-run inning, which would mean winning the game. This suggests that when looking at things through the lens of the pitching team, the home team should more jealously guard a tie situation, while the visiting team should protect a would-be winning margin.
Finally, I have been on record in the past as saying that visiting teams, when faced with a ninth inning or later situation in which they find themselves tied, should insert the best available reliever that they have. Unfortunately, most teams actually wait to see whether they will get a lead and then insert their best reliever for the save situation. Looking at the second table, this actually appears to be the correct strategy. A ninth-inning situation in which the visitors have a one- or two-run lead has a higher leverage value. Perhaps managers have gotten this right for years and those of us in the sabermetric movement have gotten the math wrong?
I would still say that waiting to use the closer is a tactical error, but it will take some explanation to get there. Let's look at that tie situation in extra innings. If things work out nicely, the closer should work in the inning of highest leverage, and, as the chart shows, that would be the potential inning where the visitors have gotten a one-run lead, rather than the tie. What the leverage chart is noting is that if the closer preserves the tie in the bottom of the ninth (or later), his team's chances of winning go from 33 percent to 48 percent, for a swing of 15 percent. If he gives up a run, the swing is negative 33 percent and the game is over.
Those are big swings, but compare them to the up-one scenario. With a one-run lead (and a starting win probability of 82 percent), the upside the closer adds by doing his job is 18 percent (more than he can add in the tie), but the downside is bigger still: 34 percent for surrendering the tying run, and another 48 percent for surrendering the winning run. The stakes really are higher in the inning that starts with a small lead.
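The two sets of swings can be tallied directly from the win-probability figures quoted above (33 percent tied, 48 percent after preserving the tie, 82 percent up one); this is just a quick arithmetic check:

```python
# Win-probability figures quoted in the text, visitors' perspective.
TIE_START, TIE_HELD = 0.33, 0.48    # entering bottom 9 tied / tie preserved
LEAD_START, LEAD_HELD = 0.82, 1.00  # entering bottom 10 up one / game won

tie_upside = TIE_HELD - TIE_START    # preserve the tie
tie_downside = 0.0 - TIE_START       # walk-off run ends the game
lead_upside = LEAD_HELD - LEAD_START  # record the final outs
lead_tying = TIE_HELD - LEAD_START    # tying run: back to a tie
lead_walkoff = 0.0 - TIE_HELD         # then the winning run

print(round(tie_upside, 2), round(tie_downside, 2))  # 0.15 -0.33
print(round(lead_upside, 2), round(lead_tying, 2),
      round(lead_walkoff, 2))  # 0.18 -0.34 -0.48
```

The up-one inning offers both a larger upside and a deeper total downside, which is why it carries the higher leverage.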
Last week, we saw that sending the closer out to stop a one-run deficit from getting bigger is actually a silly strategy, because the un-tying run has already been surrendered, and the chances of salvaging the game are just not high enough to prioritize the closer going in. In a tie game in the ninth, the tying run has already been surrendered. So, relatively speaking, the situation just isn't as important as a situation in which the visitors could end up surrendering both the tying and winning runs, if they are not careful.
Here's why it's a tactical error. If a ninth-inning-tie situation presents itself, there is a 100 percent chance that a team will have to face it, and this inning has a leverage value of 1.86. One of two things will happen. Either the home team will score first and win (and there will be no more innings of any leverage) or the visiting team will score first and there will likely be a small lead scenario.
Let's assume that in all cases where the visitors score first, they score only one run and that's it. If we assume that the visitors and the home team each score first roughly 50 percent of the time, we can expect a 50 percent chance of facing that bottom-of-the-10th, up-one scenario (leverage of 2.93), and a 50 percent chance of being in the showers after the game and crying over a loss (leverage of 0.00). That means that at the time the manager has to decide who will pitch the ninth inning, the expected leverage for the 10th inning is 1.465.
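The expected-leverage arithmetic, using the leverage values quoted above, works out like so:

```python
# Expected leverage of the bottom of the 10th, as seen from the tied
# bottom of the 9th. Leverage values are the ones quoted in the text.
TIE_9TH = 1.86       # bottom 9, game tied (certain to be faced)
UP_ONE_10TH = 2.93   # bottom 10, visitors up one (may never happen)
GAME_OVER = 0.0      # home team walked off in the 9th

# Simplifying assumption from the text: each team scores first
# half the time.
p_visitors_first = 0.5
expected_10th = (p_visitors_first * UP_ONE_10TH
                 + (1 - p_visitors_first) * GAME_OVER)
print(expected_10th)  # 1.465
```

Even under the generous 50/50 assumption, the expected leverage of the 10th (1.465) already falls well short of the certain 1.86 sitting right in front of the manager in the ninth.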
Consider also that the home team is more likely to score first, both because of home-field advantage and because this story begins in the bottom of the ninth, so they bat first. This means a greater chance for that 10th-inning leverage to be 0.00, which further lowers the expected leverage. Additionally, the visitors might score first but score three runs, which would lower the leverage in the bottom of the 10th as well, again lowering our expected leverage.
It is true that the bottom of the 10th might have a higher leverage rating than the bottom of the ninth. By holding back the closer, the manager is implicitly saying that he believes that the chances of a bottom of the 10th with a small lead to protect are high. In fact, if you work that out, managers appear to believe that they have at least a 63.5 percent chance of being first to score, despite the structural advantages that they have to overcome. The reality is probably not even 50 percent that they score first. That's a losing bet.
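The 63.5 percent figure is just the break-even point of that bet, solved from the two leverage values:

```python
# How confident must a manager be that the up-one save chance will
# materialize before holding the closer back beats using him now?
TIE_9TH = 1.86      # leverage of the tied bottom 9 (certain to occur)
UP_ONE_10TH = 2.93  # leverage of the up-one bottom 10 (uncertain)

# Holding back only wins if p * 2.93 > 1.86, so the break-even is:
break_even = TIE_9TH / UP_ONE_10TH
print(round(break_even * 100, 1))  # 63.5
```

Since the visitors' true chance of scoring first is probably below 50 percent, they are nowhere near the 63.5 percent threshold the bet requires.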
Saving the Save
One problem with the save is that it sets a rather arbitrary cutoff of three runs as the margin that defines a close or save-worthy lead. The way that modern bullpen usage has developed, it also prioritizes only what happens in the ninth inning. The ninth inning, three-run save checks in as the 12th-most-important situation for a visiting team and the 24th(!) most important for the home team. It's clear that tied situations and small (one- or two-run) leads in the sixth or seventh innings are actually more important than the three-run lead in the ninth. Yet, because of managers bending their strategy to fit the save rule and the belief that saves make a pitcher worth millions more than his peers, there is a giant inefficiency.
If managers really will bend their strategies to rules and raw save totals will still be seen as a sign of virtue, then I would suggest the following intervention to encourage managers to use a more efficient strategy. Let's redefine the save to be more in line with the evidence. A reliever gets a save if he enters the game in the seventh inning or later, with his team up by one or two runs, and when he leaves the lead is still there. Additionally, he gets a save if he enters the game in the seventh inning or later with the game tied and when he leaves, it's still tied. He gets as many saves as innings pitched, which means we could have thirds of saves, or guys picking up two or three saves in the same game. I'd be okay with a starter getting a save if he works in the seventh inning.
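The proposed rule is simple enough to express in code. The sketch below is my own illustrative reading of it; the function name, the outs-based bookkeeping, and the exact qualifying checks are mine, not part of any official scoring system:

```python
from fractions import Fraction

def new_save_credit(entry_inning, lead_in, lead_out, outs_recorded):
    """Fractional 'new saves' for one relief appearance.

    My reading of the proposed rule: enter in the 7th or later with a
    one- or two-run lead and leave with the lead intact, or enter tied
    and leave tied; credit one save per inning pitched, so thirds of
    saves (and multi-save outings) are possible.
    """
    if entry_inning < 7:
        return Fraction(0)
    protected_lead = lead_in in (1, 2) and lead_out >= 1
    protected_tie = lead_in == 0 and lead_out == 0
    if protected_lead or protected_tie:
        return Fraction(outs_recorded, 3)  # one save per three outs
    return Fraction(0)

# A two-inning hold of a one-run lead earns two saves:
print(new_save_credit(8, 1, 1, 6))   # 2
# A three-run lead in the 9th no longer qualifies:
print(new_save_credit(9, 3, 3, 3))   # 0
```

Using exact fractions rather than floats keeps the thirds-of-saves bookkeeping clean when tallying a season's worth of appearances.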
The point is that it would focus the rewards for top-line relievers (whether or not a team's closer is actually its best reliever is another issue) on the situations where they can make the biggest difference, including tie games. It doesn't short-change the eighth inning guy at contract time. It might even encourage elite relievers to go multiple innings. I'm fully aware that this rule isn't perfect. It skewers guys who come into situations with runners on base. It doesn't put any context around blown saves or the fact that some guys happen to be on teams with more leads to protect. But, it is better than the current save rule, and it's simple enough to not require advanced calculus.
Had this rule been in effect for the 2012 season, the league leaders in "new saves" would have been:
K-Rod and Vinnie Pestano get their due on this list, despite the fact that John Axford and Chris Perez picked up most of the saves for their respective teams in 2012.
Hurrah! We've saved the save!