May 26, 2005
Saving for Another Day
Baseball, like life and the school cafeteria, is filled with choices. Swing or take. Steal or stay put. Bring in the righthander or the southpaw. Most of these choices are fairly simple and immediate. If the batter swings at the pitch, the effects of that decision, for the most part, won't extend beyond that at-bat. But there are small ramifications from that decision that do extend beyond the immediate. Taking the pitch may increase the pitcher's pitch count; he may then leave the game slightly earlier; the earlier departure may lead to the opposing manager selecting a different reliever, etc, etc, etc. Those ramifications are unlikely to be simply the result of that one swing/take decision, but we've all heard the theory of the butterfly effect.
Bullpen and rotation management is on an entirely different level from those simple swing/take decisions. Teams plan out rotations weeks in advance; relievers are frequently unavailable one game after a longer outing the previous night. While decisions in games can be evaluated by their impact on the success or failure of winning that particular game, decisions like these must be evaluated based on the likelihood of winning both the game at hand and future games.
It's this kind of situation that spurred a recent email to me from Will Carroll (while Will would probably think a LEFT JOIN is some kind of political movement, he excels at asking these kinds of questions):
Watching Cliff Lee pitch today (Sunday) and got to thinking about a couple things in relation to having a big lead and taking the pitcher out.
In the game Will was referring to, the Indians were leading 3-0 over interleague rival the Reds after scoring a run in the top of the sixth. The Reds scored a run in the bottom half of the inning, but the Indians notched six runs in the top of the ninth to put things out of reach. Up 3-0 heading to the bottom of the sixth, Indians manager Eric Wedge was faced with a multi-game decision. Lee had thrown 82 pitches through five innings, walking three but allowing only two hits and no runs. He clearly wasn't at much risk of running up a dangerous pitch count, but perhaps Lee would perform slightly better in his next outing with a little extra rest.
So the essential question is this: if pulling Lee increases the Indians' chances of winning his next start by more than it decreases their chances of winning this game, then Wedge should consider pulling him. It's not quite that simple--additional strain on the bullpen affects the probabilities in other games as well--but bullpen fatigue involves a wide variety of factors that are difficult to quantify. Thus, let's see if the initial hypothesis checks out before getting into those more convoluted discussions.
We can take a first cut at Will's first question using the Expected Wins Matrix. In 2005 through yesterday, starting the top of the seventh with a three-, four-, or five-run lead, the visiting team had an 88.4 percent, 100.0 percent, and 100.0 percent chance of winning the game; down by those same margins, its chances drop to 1.7 percent, 14.3 percent, and 5.3 percent. (The home team's chances are the complement.) Over a longer haul, those percentages change, but on the surface we can see that--despite all the well-publicized bullpen failures--a three-run lead after the sixth inning provides teams with almost certain victory.
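The Expected Wins Matrix is, at bottom, just a tally over historical game states. A minimal sketch of how such a matrix can be built from game logs--the field names and row format here are illustrative assumptions, not the actual schema:

```python
from collections import defaultdict

def build_expected_wins(rows):
    """rows: (inning, half, visitor_lead, visitor_won) tuples, one per
    observed game state. Returns visitor win probability per state."""
    tally = defaultdict(lambda: [0, 0])  # state -> [visitor wins, games]
    for inning, half, lead, visitor_won in rows:
        state = (inning, half, lead)
        tally[state][0] += int(visitor_won)
        tally[state][1] += 1
    # The home team's chance in any state is 1 minus the visitor's.
    return {s: wins / games for s, (wins, games) in tally.items()}

# Toy data: four games reaching the top of the 7th
rows = [
    (7, "top", 3, True),
    (7, "top", 3, True),
    (7, "top", 3, False),
    (7, "top", -3, False),
]
matrix = build_expected_wins(rows)
print(matrix[(7, "top", 3)])   # visitors won 2 of the 3 such games
```

With enough seasons of play-by-play data, the same tally produces the percentages quoted above.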
In order to answer the second question--what happens when the starter is pulled--let's look at some raw totals over a slightly longer timeframe. Since the beginning of 2004, teams had the following results:
Lead   Pitcher   Win   Loss   Win Pct
  3      RP      218    32     87.2%
  3      SP      171    13     92.9%
  4      RP      188    14     93.1%
  4      SP      129     4     97.0%
  5      RP      113     8     93.4%
  5      SP      115     3     97.5%
Looking at this larger set of data, the natural progression we'd expect becomes clearer. The difference between three- and four-run leads is much greater than the difference between four- and five-run leads. Further, if the starter is no longer in the game to start the seventh inning, teams see a reduction in their odds of winning the game by 5.7 percent, 3.9 percent, and 4.1 percent, respectively.
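The percentages and gaps above follow directly from the raw win-loss counts in the table; a quick check in Python:

```python
# Win-loss records from the table above: teams leading by 3, 4, or 5 runs
# entering the seventh, split by whether the starter (SP) or a reliever (RP)
# is the one pitching the seventh.
records = {
    (3, "RP"): (218, 32), (3, "SP"): (171, 13),
    (4, "RP"): (188, 14), (4, "SP"): (129, 4),
    (5, "RP"): (113, 8),  (5, "SP"): (115, 3),
}

def win_pct(wins, losses):
    return wins / (wins + losses)

for lead in (3, 4, 5):
    sp = win_pct(*records[(lead, "SP")])
    rp = win_pct(*records[(lead, "RP")])
    print(f"+{lead}: SP {sp:.1%}  RP {rp:.1%}  gap {sp - rp:.1%}")
```

Running this reproduces the 5.7, 3.9, and 4.1 percent gaps cited in the text.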
There's one small issue with that data that should be pointed out: it may look more important to keep the starter in than it really is because there are likely a disproportionate number of high scoring games among the ones in which the reliever is pitching the seventh. A three-run lead of 3-0 is more likely to see the starter return for the seventh but a 10-7 lead is not. Additionally, the 3-0 is more likely to be upheld than the 10-7 lead. So the difference between the starter and a reliever pitching is likely less than the observed 4 to 5 percent.
At this point, we've answered the first two questions: teams blow three-, four-, and five-run leads after the sixth very infrequently, but if the starter isn't the one pitching the seventh, their odds of winning decrease somewhere on the order of four to five percent. Now on to the third question: does a pitcher see any advantage in his next outing? A great deal of this question is covered in the PAP^3 research by Keith Woolner and Rany Jazayerli in Baseball Prospectus 2001. Unfortunately for this particular study, all of that research is broken down by pitch counts and not innings.
Instead, let's dig up some new data. We can't simply look at how all pitchers do in the outing following an outing of a certain number of innings because we'd be running into a massive selection bias: the worst pitchers don't pitch as many innings as the best, so breaking things down by innings pitched in a game would give us an inordinate number of bad pitchers. Furthermore, we cannot compare individual pitchers' short outings to their longer outings for the same reason.
Instead, we'll compare each pitcher's next start to his season average, arriving at a composite look at the kind of pitchers who usually manage six innings or fewer.
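The comparison described above can be sketched as follows; the data shapes (a chronological list of starts per pitcher plus a season RA baseline) and the six-inning cutoff are assumptions for illustration, not the exact implementation:

```python
# For each short start (six innings or fewer), compare the pitcher's NEXT
# start against his own season baseline, sidestepping the selection bias of
# comparing good pitchers' workloads to bad pitchers'.
def next_start_vs_season(starts, season_ra):
    """starts: chronological (innings, runs_allowed) per start;
    season_ra: the pitcher's season runs allowed per nine innings.
    Returns the mean RA delta in starts following short outings."""
    deltas = []
    for i in range(len(starts) - 1):
        innings, _ = starts[i]
        if innings <= 6.0:                       # a "short" start
            next_ip, next_runs = starts[i + 1]
            next_ra = 9 * next_runs / next_ip    # scale to per-nine-innings
            deltas.append(next_ra - season_ra)   # positive = worse than usual
    return sum(deltas) / len(deltas) if deltas else None

# One hypothetical pitcher: short starts in games 1 and 3
starts = [(6.0, 2), (7.0, 3), (5.0, 4), (6.0, 1)]
print(next_start_vs_season(starts, season_ra=4.50))
```

Aggregating these per-pitcher deltas across the league yields the composite lines in the tables below.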
           -------- Next Game --------      ---------- Season ----------
Year    IP/GS     RA    K/PA   BB/PA      IP       RA    K/PA   BB/PA
2000     5.8     5.54   .155   .093     150.4     5.34   .157   .091
2001     5.8     5.19   .158   .082     153.7     4.95   .164   .080
2002     5.8     5.03   .153   .084     150.7     4.68   .159   .083
2003     5.8     4.85   .152   .081     153.6     4.82   .155   .080
2004     5.8     5.05   .154   .085     155.7     4.95   .159   .082
2005     6.1     4.73   .156   .082      50.8     4.85   .152   .082
So far this year, a start of six or fewer innings appears to improve RA in the following start by about 0.12 runs per game, but notice that from 2000-2004 the pattern ran the other way: in every one of those seasons, pitchers who threw six or fewer innings in a start did worse in their next start than their season line would predict.
This finding again might be part of a selection bias. Perhaps pitchers who only pitch six or fewer innings are tired or struggling in general. So let's try to isolate times when the manager chooses to lift a starter rather than being forced to by poor performance. This time, we'll only include starts in which the pitcher threw up to six innings but allowed three or fewer runs.
           -------- Next Game --------      ---------- Season ----------
Year    IP/GS     RA    K/PA   BB/PA      IP       RA    K/PA   BB/PA
2000     5.7     5.54   .159   .092     145.7     5.21   .160   .092
2001     5.7     5.25   .163   .081     145.9     4.90   .168   .082
2002     5.7     5.04   .152   .084     144.2     4.57   .161   .083
2003     5.8     4.84   .154   .083     147.6     4.72   .158   .082
2004     5.7     5.14   .155   .089     152.4     4.80   .163   .084
2005     6.0     4.78   .156   .083      50.6     4.49   .155   .081
The problem, it appears, has just gotten worse: the gap between the expected performance in the following start (based on a composite season profile of the pitchers involved) and the actual performance has widened. The likely reason is that we're now selecting a disproportionate share of each pitcher's good performances and, instead of comparing them to the rest of his starts, comparing them to his full season line, which is essentially a slightly worse baseline, since each good start we select is itself part of the season average it's being measured against.
Even if we turn things around and only select short starts in which the pitcher gets shelled (five or more runs), they still perform worse than expected in their following start. Without bringing PAP^3 and pitch counts into the equation, there's no evidence that a short start leads to a short-term gain in subsequent starts.
Fortunately for those of you concerned with article length, this means we don't have to dig into reliever availability and the short-term effects of a short start on the bullpen, because we can already answer Will's question: pulling the starter with a three-, four-, or five-run lead after the sixth inning simply to keep him rested for his next start is not a good strategy, both because it reduces the chances of winning the game at hand and because there's no evidence that starters improve in the starts that follow.
This isn't to say that leaving a pitcher in to throw an abusive number of pitches is the appropriate strategy. But keep in mind that PAP^3 showed a 4 to 5 percent decline in pitching performance in subsequent starts over the next 21 days only after an outing of 136+ pitches. A 4 to 5 percent drop in pitching performance doesn't map directly to the 4 to 5 percent drop in win expectation that results from pulling a starter, but it's a reasonable point that the threshold for the more severe decline sits in a pitch-count range that has recently become quite rare. While abusing pitchers in the name of guaranteeing victory is likely to cost teams wins down the road, babying them won't magically turn Lee into Kevin Millwood.