And suddenly, Andrew Miller is the toast of the baseball world. Miller, whose previous claims to fame included being “the guy picked right before Clayton Kershaw”, won the ALCS MVP award by turning the clock back to the 1970s and (get this!) pitching multiple innings in relief, and coming into the game in the (gasp!) fifth and sixth inning.
What gives? Did the Indians not explain the leverage index to Terry Francona? Francona, dispensing with the standard recipe that pitchers in a bullpen are supposed to be used in reverse order of their quality, brought Andrew Miller into the game, in some cases as his first reliever out of the pen. In a couple of games, Miller pitched two innings and threw around 40 pitches. His compatriot, Cody Allen, also pitched past his normal bedtime.
The Twitterati also nearly lost their collective minds when Dave Roberts finally did something heroic in the post-season by inserting closer Kenley Jansen into the seventh inning of Game 5 of the NLDS (an elimination game!) and then asking Clayton Kershaw to get the final two outs in relief.
We have entered a brave new world of bullpen use!!! The world will never be the same!!! I’m going to use completely unnecessary exclamation points!!!
OK people, calm down.
It’s a little surprising that this sort of bullpen usage is a little surprising. The Indians were down two starters (Carlos Carrasco and Danny Salazar), and a third (Trevor Bauer) had an unfortunate encounter with a drone, so we all kinda figured that their trade-deadline acquisition of Andrew Miller would come in handy. The idea of a bridge reliever or a multi-inning reliever in the playoffs is hardly a new thing. When Mariano Rivera first went to the playoffs in 1995 and 1996, he did so as a multi-inning reliever ahead of then-Yankee closer John Wetteland. And it’s not like elite relievers going multiple innings just started happening this year.
But no, this is not going to become a regular season thing all across baseball. At least not any time soon.
Warning! Gory Mathematical Details Ahead!
First off, let’s talk about why Andrew Miller seems so out of place. The graphs that follow will surprise no one, but sometimes it’s informative to just take an historical perspective on things. These graphs cover the years 1950 to 2015 (regular season only). The first shows the percentage of relief appearances which fit the criteria that the reliever 1) entered the game to begin an inning, 2) recorded three outs (whether three-up, three-down or if there was some blood on the mound afterward) and then 3) was sent to take a shower. This is the single inning (or the “your inning”) model of relief.
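For concreteness, the three criteria above can be expressed as a simple predicate. This is just an illustrative sketch; the field names and sample appearances are hypothetical, not drawn from any particular play-by-play dataset.

```python
# A minimal sketch of the "your inning" filter described above. The
# appearance fields are hypothetical stand-ins for whatever
# play-by-play source you use (e.g., Retrosheet-derived game logs).

def is_single_inning_appearance(app):
    """True if the reliever entered to begin an inning, recorded exactly
    three outs, and then left without starting another inning."""
    return (
        not app["entered_mid_inning"]      # criterion 1: began the inning
        and app["outs_recorded"] == 3      # criterion 2: got all three outs
        and app["innings_entered"] == 1    # criterion 3: then hit the showers
    )

appearances = [
    {"entered_mid_inning": False, "outs_recorded": 3, "innings_entered": 1},  # classic 8th-inning guy
    {"entered_mid_inning": True,  "outs_recorded": 2, "innings_entered": 1},  # mid-inning fireman
    {"entered_mid_inning": False, "outs_recorded": 6, "innings_entered": 2},  # two-inning stint
]
share = sum(is_single_inning_appearance(a) for a in appearances) / len(appearances)
print(share)  # 1 of the 3 sample appearances fits the "your inning" model
```

Run over every relief appearance in a season, the share of `True` results is the quantity plotted in the first graph.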
Even as recently as 1984, this “your inning” model described fewer than 20 percent of relief appearances. The graph shows that in recent years, nearly half of relief appearances were of the “in as the inning starts, out as it ends” variety. That actually understates how often the strategy was used, because there were surely times when a manager had it in his mind to have his reliever throw just the seventh, but the reliever gave up five runs and had to be airlifted out of there before getting all three outs.
We also know about the advent of “the closer” in the late 80s, and his cousin, the “set-up man.” Again, let’s look at the historical trend on this one. This graph is limited to all eighth and ninth innings in which the pitching team entered the inning with a lead of three runs or less (in other words, a “save” situation). The lines represent the percentage of these innings which featured a “single inning” appearance (the reliever enters the game for that inning and, after gathering up three outs, leaves). There’s a natural reason why many of those ninth-inning relievers “only” got three outs, but we see that the same shift to the “single inning” model, lagging only slightly behind, has been happening in the eighth inning as well. At this point, more than 80 percent of save-worthy ninth-inning leads are handled by one guy, and more than 60 percent of hold-worthy eighth-inning leads are similarly handled.
We know who those eighth- and ninth-inning relievers are. They tend to be elite relievers, or at least the best relievers on a team. And we can see plainly on the graph that the sharp climb seemed to start around 1988, when Tony La Russa invented the “modern” bullpen, made Dennis Eckersley the first “modern” closer, and formally monetized the save total. This is all well-known, but I think the graphs show a more important point: the ecosystem for relievers has tilted so heavily toward the model of assigning specific innings to specific pitchers (in a way that would have seemed very foreign 40 years ago) that when a guy appears who is both elite and pitches multiple innings… we don’t know what to call him anymore. We are in the era of the single-inning reliever.
There have been plenty of strategic fads in baseball that have fizzled out before. Bunting has seen its star fall. The A’s no longer carry a random Olympic track star to serve as a designated pinch runner. Even the days of the full-time designated hitter seem to have come and gone. Yet the drive toward using elite relievers in single-inning bursts not only persists, but the evidence suggests an upward trend line. Why?
Some of that might be a self-fulfilling prophecy. As the Platonic ideal of “a reliever” becomes more associated with the form of “one inning, your inning,” young pitchers train to throw one inning, which further strengthens the bond. And that philosophy has its benefits. Inserting a pitcher to start an inning means that he can warm up as his teammates bat. Pitching one inning means that he can go in and air it all out, and the fact that his appearance doesn’t span two (game) innings means that he doesn’t have to do a max effort-rest-max effort combo. Everything can be poured into those 10 minutes.
The story of modern bullpen usage, as it’s normally told, is that there’s been a shift toward hyper-specialization, mostly in the form of ROOGYs and LOOGYs: pitchers who face one or two batters. The graph below shows that this narrative is a little misleading. It shows the percentage of relief appearances in each season that lasted fewer than three outs, those that lasted exactly three outs, and those that lasted more than three outs. Again, you can see the shift took place in the mid-80s, but the trade was not from firemen to LOOGYs; it was from firemen to one-inning relievers. The percentage of relief appearances lasting fewer than three outs has barely changed. (It’s worth saying that because teams use more relievers now, even though the percentage is the same, it does represent a greater raw number of micro-appearances.)
Yes, there are still multi-inning relief appearances, but from 2011-2015, 62 percent of relief outings that lasted two or more innings were cases where a reliever came into the game with his team losing. A lot of them were garbage-time innings. That makes some amount of sense in terms of keeping everyone else fresh (it’s a bad idea to throw good money after bad), but it means that the two-inning bridge relief outing and the two-inning save are becoming endangered species.
But does this make sense from a team strategy point of view? Should teams be training their young relievers, especially the good ones, to only expect to pitch one inning and then leave? Why not do the multi-inning thing more often and have a fireman specifically tasked with covering the seventh and eighth innings? Or at least have the closer pitch a second inning once in a while?
There’s actually a perfectly boring reason why. I looked for cases in which a reliever pitched two complete innings (started one inning, finished the next inning, exited; games from 2011-2015). To make sure I wasn’t getting garbage-time early-game long relief appearances, I required that the reliever entered in the seventh inning or later. I looked at their performance (same guys, same games) in the first inning of work and the second inning of work. In the first inning of work, our pitchers allowed 579 runs over 2,949 innings, for an RA9 of 1.77. In the second inning of work, they allowed 721 runs (again, by definition, in 2,949 innings) for an RA9 of 2.20. (Note: some of you are saying “That sounds really low.” The fact that I required two complete innings naturally censors out the guys who got bombed in the second inning of work and had to be removed. It also removes the guys who got bombed in the first inning of work and had to depart, or who didn’t get sent out for their second inning of work because they completely torched the first.) The point is that our two-inning pitcher – in the best of circumstances – suffers a “penalty” of 0.43 runs in RA9 during his second inning of work. And that’s the best-case scenario, where he finishes both innings and doesn’t implode. The actual penalty is probably a bit higher, but let’s call it half a run.
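As a sanity check, the RA9 figures and the resulting penalty fall out of simple arithmetic on the numbers quoted above:

```python
# Reproducing the RA9 arithmetic from the text: RA9 = runs * 9 / innings.
innings = 2949  # same innings count in both samples, by construction

ra9_first = 579 * 9 / innings    # runs allowed in the first inning of work
ra9_second = 721 * 9 / innings   # runs allowed in the second inning of work
penalty = ra9_second - ra9_first

print(f"{ra9_first:.2f}")   # 1.77
print(f"{ra9_second:.2f}")  # 2.20
print(f"{penalty:.2f}")     # 0.43 -- the "second inning" penalty
```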
In 2016, I found all pitchers who threw at least 50 innings in relief (130 of them) and arranged them by their DRAs. The quintile cutoff points (i.e., the points that cut off the bottom 20 percent, the next-best 20 percent, and so on) were 2.90, 3.64, 4.05, and 4.67. Those points are roughly (yes, very roughly) about half a run apart from each other. That means that even if you start with a “pretty good” reliever, it’s likely that by his second inning of work, the “second inning” penalty has taken him down a notch, to the point where a manager might have someone who is a better bet simply by virtue of being fresh. In a blowout, he may not care, but in a close game where he needs his best out there, a lot of times his best is going to be the single-inning strategy. (And we haven’t even started to discuss things like the platoon advantage and how it might affect the calculations. If you pitch long enough in a game, you’ll eventually face a lefty.)
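To make the “down a notch” logic concrete, here is a sketch using the quintile cutoffs quoted above; the 2.75 DRA reliever is a made-up example, not any specific pitcher.

```python
import bisect

# 2016 quintile cutoffs for relievers with >= 50 IP, from the text.
cutoffs = [2.90, 3.64, 4.05, 4.67]

def quintile(dra):
    """Return the quintile bucket (1 = best) a given DRA falls into."""
    return bisect.bisect_left(cutoffs, dra) + 1

penalty = 0.5  # the rough second-inning penalty derived above

# A hypothetical "pretty good" reliever just inside the top quintile.
dra = 2.75
print(quintile(dra))            # first inning of work: top-quintile arm
print(quintile(dra + penalty))  # second inning: effectively a notch lower
```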
Still, we’re talking about a guy like Andrew Miller, who led all pitchers in MLB with a DRA of 1.24 in 2016. Even if we assume a penalty for going into his second inning of work, he might still be better than whoever else happens to be out there (though Cody Allen might beg to disagree), couldn’t we at least see some of those super-elite guys picking up some two-inning stints from time to time?
During the regular season, a manager has to worry about two issues with relievers. One is over-use and the other is tomorrow’s game (and to some lesser extent, the game the day after that). Your typical closer pitches 70-75 innings during the regular season and those innings are a precious and rare resource. We know that managers are concerned about over-use with relievers specifically. Two and a half years ago, I did some work that I think is rather informative here.
At that time, I looked at the effects that a day off had on how a manager used his pitching staff (at the time, I was interested in the effects of a complete game). What I found was little evidence that a manager with a day off in front of him (i.e., no tomorrow’s game to worry about) developed a quicker hook, used more relievers, or piled more innings on his pen. It was quite striking how little difference there was between games before a day off and games where he literally had to think about tomorrow. These data are from 2009-2013 (reprinted from the original article).
[Table: batters faced by starter, outs recorded by starter, and number of pitchers used (including the starter), before a day off vs. with no day off]
While there has been a good amount of research on what factors are associated with injuries to starting pitchers, we have very little evidence on how that works for relievers. It’s possible that relievers could handle more than the 70-75 innings that they normally pitch during the regular season. Perhaps the best evidence to the contrary is the table above, showing that even when managers could be a little looser with the bullpen and pile on the innings, they actually don’t (during the regular season).
The other, admittedly more circumstantial, case is derived from research on starters. I found previously that in eras where a four-man rotation was the norm, sudden unexplained absences of pitchers (we don’t have injury logs, so I had to fake it) were not associated with starting on what we would now consider “short” rest. I surmised that the injury risk to a starter had less to do with what he was actually asked to do and more to do with whether his body was conditioned to do it. In this case, we know that relief use, for good or for ill, has swung mainly to the single-inning model. It’s plausible (and again, I will gladly concede this isn’t airtight proof) that the same principle would hold, and that asking a pitcher to go multiple innings when he hasn’t been trained to do so would also be a risk factor.
Managers also want to be careful because using a closer today might impact his availability tomorrow, and baseball is a daily sport. It’s not that a closer can’t pitch back-to-back days (and sometimes back-to-back-to-back days) but there will come a point where he needs a break. And using him for two innings today might mean using him for no innings tomorrow.
Let’s assume that a manager is trying to decide whether to use his elite reliever/closer for two innings (the eighth and ninth?) with the knowledge that doing so will mean that tomorrow, his elite guy won’t be available. Tomorrow might be a 15-3 blowout. Tomorrow might be a nail biter. How to figure out mathematically if it’s a good idea?
We normally think about how important some event or situation is in a baseball game using the leverage index. The leverage index takes into account the score, the inning, the runners, and the outs. That’s useful if you want to tell the importance of a specific plate appearance, but we’ve already seen that we live in an era where relief assignments are generally passed out an inning at a time. A couple of years ago, I introduced a concept called “inning leverage,” which, instead of tracking changes in win expectancy from plate appearance to plate appearance, simply models how much this (half) inning as a whole, starting with no outs and no runners, matters in determining who will win and who will lose, compared to other types of half-innings. If I’m going to trade putting Andrew Miller in for two innings now against the risk of not having him available tomorrow during a hotly contested ninth inning, I need to be able to justify it.
Let’s take the most obvious non-ninth-inning case: it’s the eighth inning and the pitching team is clinging to a one-run lead. Using data from 2003-2015, I found that this situation has an inning leverage of 1.89 for the home team and 2.29 for the visiting team. We’ll use the visiting team as our standard. In that case, the only situation which rates more highly than a one-run lead in the eighth is a one-run lead in the ninth (inning leverage = 2.87). Assuming that Andrew Miller is healthy and rested enough to throw two innings, should he be warming up to come into the eighth? The one-run ninth-inning lead is 1.25 times as important as the one-run eighth-inning lead, and he wouldn’t be available for it tomorrow. How often does that more important situation happen? A visiting team had a one-run lead going into the bottom of the ninth in 11.9 percent of games from 2003-2015.
It’s more than just saying “there’s a 12 percent chance of there being a more important situation tomorrow.” We need to adjust for the fact that this situation is more important by a factor of 1.25. But even weighting it for leverage, we can see that mathematically it makes sense to give up tomorrow’s possibility of a one-run, ninth-inning save situation. You might get bitten when a ninth-inning save situation pops up the next day and the closer isn’t available, but that’s the mathematically correct way to deploy resources. According to this method, there are a few situations where it makes sense to put the closer into the pre-ninth-inning high-leverage situation that you know you have in hand right now vs. saving him for tomorrow. As we might expect, they are all tie games and one-run leads in the seventh and eighth innings.
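The break-even comparison in the last two paragraphs can be spelled out with the numbers from the text:

```python
# Inning-leverage comparison from the text (visiting team, 2003-2015).
lev_8th_one_run = 2.29   # one-run lead entering the 8th: the situation in hand today
lev_9th_one_run = 2.87   # one-run lead entering the 9th: the best case tomorrow
p_9th_tomorrow = 0.119   # how often that 9th-inning spot actually materializes

# Weight tomorrow's possible save situation by its probability of happening.
expected_tomorrow = lev_9th_one_run * p_9th_tomorrow

print(round(lev_9th_one_run / lev_8th_one_run, 2))  # 1.25: the 9th is 1.25x as important
print(round(expected_tomorrow, 2))                  # 0.34: leverage-weighted value of waiting

# Today's certain 2.29 dwarfs tomorrow's expected 0.34, so burning the
# elite reliever now is the leverage-efficient call.
print(expected_tomorrow < lev_8th_one_run)  # True
```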
So a manager would be justified in pitching his ace reliever to protect a one-run lead in the eighth (and ninth) inning, even if it means he won’t pitch tomorrow. But again, he needs a reliever who is good enough that when we apply the second-inning penalty, he doesn’t drop below what one of his teammates could offer. He needs to be good enough against righties and lefties that the platoon effect doesn’t knock him out of consideration at some point during that second inning. He needs to be able to physically pitch two high-quality innings. He needs to be understanding enough to know that this might require him to pitch in some non-ninth-inning (gasp!) non-save situations. In other words, he needs to be Andrew Miller.
On top of that, even if a manager wanted to run that sort of bullpen during the regular season, it wouldn’t actually net him as much as you might think. A few years ago, I simulated a model of bullpen usage in which the traditional “closer” was used in the eighth and ninth innings of a one-run game, but ceded his three-run saves to the set-up guy to keep his overall workload roughly equal. It turns out that it saves a team approximately half of a blown save. Not half a win. Half of a blown save. That’s it.
Why Don’t They Make the Entire Plane Out of Andrew Miller?
There’s a reason that the Indians have been much more liberal in their use of Andrew Miller in the playoffs. (And other teams have followed suit with their best bullpeners.) It’s trite to say “because it’s the playoffs, obviously,” but that’s really what it boils down to. Playoff baseball is different. Two- (or three-!) inning stints do happen during the regular season, but they are more likely to be the types of stints where a manager is trying to conserve some of his other bullpen resources (i.e., we’re sending you out there for two because you’re one of only three guys available tonight).
I think we need to discuss a very small distinction in how we think about bullpen strategy and its relationship to the leverage index. The maxim is normally framed as “you want your good relievers to pitch in high leverage/meaningful situations.” Allow me to make a subtle, but meaningful alteration to that. Instead, I’d argue that bullpen usage is actually run by the maxim that “You don’t want your good pitchers to pitch in non-meaningful situations.” That might seem like a small distinction, but it makes a big difference in practice.
If innings pitched by an elite reliever really are a rare and precious resource during the regular season, then a manager doesn’t want to “waste” his closer on a non-important situation (and yes, managers still define a three-run save opportunity as important, but not a tie game on the road in extra innings… we’re working on that). I used to think that managers used their bullpens from the least good relievers to the best because it gave them the psychological satisfaction of feeling like they could slowly choke the game off from their opponent. Now, I’ve come to think of it differently.
There’s a deliciously awkward moment in some games where a team is batting in the bottom of the eighth or top of the ninth with a two-run lead. The closer is heating up in anticipation of a save opportunity, but in the way that baseball does sometimes, his team quickly puts two runners on and the next guy up hits one into the second deck. In a matter of three minutes, the game has gone from a save situation to what will likely be a humdrum ninth. And sometimes… the closer sits down. (If he’s been warming a while, he might come in anyway.) I think there’s a certain mathematical correctness to sitting the closer back down, and to the “reverse order” bullpen in general. A manager might have in his head, even in the sixth inning, that Andrew Miller – and these three other guys – are going to pitch if the game stays like this. Now it’s just a matter of figuring out what order they will pitch in. But the shape of the game can change, and quickly. If Miller innings are a precious resource, why not wait until the last minute, until it’s necessary to use that resource?
During the regular season, concerns about overworking pitchers and the scarcity of days off mean that managers have to allocate those innings cautiously. In the playoffs, there are more days off, and each game means so much more than one of the 162 during the regular season. There might be injury risk in those long(er) outings or in back-to-back days of two innings, but it’s the playoffs. During the regular season, “up by four in the ninth” might not call for the closer, but if everyone’s rested, why chance it with some lesser reliever in the playoffs? In the playoffs, it doesn’t matter if the shape of the game changes (or at least, the parameters for “changes” get a little looser). And so if Terry Francona decides that his starter has had enough, and that the remaining four innings will be covered by some collection of Bryan Shaw, Andrew Miller, and Cody Allen, then it’s likely that all three will pitch, pretty much no matter what happens, and it’s just a matter of figuring out the best matchups for when they should enter.
We are missing a piece here. We’re assuming that 75-80 innings represents an upper limit for reliever usage within a season and that managers need to ration those innings out jealously. We don’t have any public evidence of whether or not that’s the case. The best evidence that we have – how pitchers are actually used – suggests that teams and managers believe that reliever innings should be conserved, but it’s not like teams have never been wrong about this sort of thing.
If a team could identify an elite reliever capable of handling the increased workload, then perhaps the bridge reliever/two-inning closer might make a comeback, but for it to be a regular thing, you’d need a re-design of the entire ecosystem of baseball, or at least relief pitching. Until then, we’ll just have to ooh and aah at Andrew Miller as he comes into the sixth inning of almost every game in the playoffs, pitches two innings, wins the MVP, and… hold the sadness in our hearts that he probably won’t do it again next year. Until October.
Thank you for reading
This is a free article. If you enjoyed it, consider subscribing to Baseball Prospectus. Subscriptions support ongoing public baseball research and analysis in an increasingly proprietary environment.
Seem to whom? There have always been a few teams that did not come up with a dedicated DH during the year. This year seemed normal to me.
Boston, Detroit, Kansas City, and Los Angeles had unquestionably full time dedicated DHs (Ortiz, Victor Martinez, Morales, and Pujols). We can overlook games played in National League cities.
Seattle's Nelson Cruz played a little more outfield than that, but not 1 game on the field since August 16.
You have to include Texas as having a dedicated DH all year. After Prince Fielder retired, the Rangers traded for Carlos Beltran who took over the role.
Avisail Garcia was the regular DH in Chicago until they signed Justin Morneau - who was 100% dedicated.
Cleveland had Carlos Santana as their regular DH all year, but switched with Mike Napoli at first base on occasion.
A-Rod was the Yankees' DH when able most of the season - until they called up Gary Sanchez and moved Brian McCann there in a C/DH sharing role much like Cleveland's use of Santana and Napoli. If there is a trend, it may be that.
Encarnacion was Toronto's every day DH in the first half of the season except when they platooned Smoak at first base. After allowing Bautista to DH while recovering from his injuries, Smoak never regained his job, and DH became a Saunders/Upton platoon.
Pedro Alvarez was pretty much a regular DH all year in Baltimore - at least, against right-handers.
I would draw the line about here. Evan Gattis was the regular DH almost all year in Houston, but more so than Alvarez left plenty of games for others to rest and recover while getting their at bats.
Along with maybe/maybe not Houston, that leaves only the three last place teams without dedicated DHs, although for a solid couple of months Billy Butler fulfilled that role in Oakland.
A sample of relievers who pitched 2 innings will be populated by pitchers who pitched exceedingly well in the first inning (hence the ridiculously low RA9).
Try this: look at starters who pitched complete games and tell me their RA in the 8th and 9th. Starters who pitched 8 innings, look at their 7th versus 8th. Etc. You choose any time period where a manager has a choice of letting the player continue or not and you're always going to find the post-decision performance to be substantially worse than the pre-decision performance.
I'm sure I'm not telling you anything you didn't already know. You talked about it in terms of completing the second inning. It is true that there's also a selective sampling effect in the second inning (only those who pitch well in the second inning are allowed to complete it) but I'm guessing it's not nearly as large as the first to the second.
It is almost impossible to compare back to back innings for relievers because of this selective sampling phenomenon. Just looking at the performance in those 2 innings tells us nothing.
Why don't you look at all performance in 1 inning including relievers who pitched in 1 inning only and those who pitched in more than one? That's an unbiased sample. Then look at all second inning performance. That's also an unbiased sample. Then compare the two adjusting for the quality of the pitchers in each sample. That's what you're looking for. Your way is too problematic. We don't know the extent of the inning 1 to inning 2 selective sampling effect versus the start inning 2 to finish inning 2 effect.