A week ago, Tim Lincecum pitched a no-hitter against the San Diego Padres, striking out 13, walking four, and throwing—gulp!—148 pitches. He also drew a walk at the plate and scored a run. I'm sure recording the last out is a moment he’ll remember for the rest of his life, just as it was for Johan Santana, who last year pitched the first no-hitter in Mets history in a comparatively efficient 134 pitches.

Generally, pitchers don't go more than 100 pitches in a game, but this was a special occasion. I used to use the same logic when I wanted to stay up late as a kid. The thing is that once you use the "special occasion" excuse and find out how much fun it is to stay up until midnight, it becomes easy to think of every occasion as special. There's a re-run of that one episode of Deep Space Nine that was so cool? (The baseball one!) That's special and worth staying up the extra hour. The next day, you feel a little groggier, but you get through, and it's not like anything really bad happened. Right?

I have to imagine that a manager who has a pitcher nearing the 100-pitch threshold, but who really has good stuff that night, finds himself in the same basic position. Should he let the pitcher stay up late and face one more batter, or walk out to the mound with a glass of warm milk and tuck the pitcher in for the night?

Here at BP, the idea of pitcher abuse and extreme pitch counts has been discussed previously by Rany Jazayerli and Keith Woolner, but it's been more than a decade since their work. Let's revisit the issue of pitch counts and the effects that a 140-pitch marathon might have on a pitcher and his performance the next time he goes out to the mound.

But first…

Warning! Gory Mathematical Details Ahead!
I calculated pitch counts for all starters in all games from 2003-2012. For the purposes of these analyses, I used only pitchers who were starters in their previous outing and were now pitching again as starters five days later (that is, a standard four days of rest between starts).
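The sample-construction step can be sketched as follows. This is a toy illustration, not the actual dataset: the column names, the pitcher, and the dates are mine, and the real game logs would of course be much larger.

```python
import pandas as pd

# Toy game log (column names and values are illustrative only).
games = pd.DataFrame({
    "pitcher": ["Smith"] * 3,
    "date": pd.to_datetime(["2012-04-01", "2012-04-06", "2012-04-12"]),
    "pitches": [100, 112, 95],
})

games = games.sort_values(["pitcher", "date"])
# Days elapsed since each pitcher's previous start.
games["days_since_last"] = games.groupby("pitcher")["date"].diff().dt.days
# Pitch count from the previous start -- the predictor of interest.
games["prev_pitches"] = games.groupby("pitcher")["pitches"].shift(1)

# Keep only starts made exactly five days after the previous start
# (that is, a standard four days of rest).
sample = games[games["days_since_last"] == 5]
print(sample[["date", "prev_pitches"]])
```

The April 12 start here would be dropped, since it came on five days of rest rather than four.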

As per usual, I controlled for general batter and pitcher quality through the log-odds method and used only plate appearances that involved a pitcher who faced at least 250 hitters in that year against a batter who also had at least 250 plate appearances. I controlled for whether the pitcher had the handedness advantage, and entered his pitch count for the current game prior to the individual plate appearance (i.e., Smith has thrown 37 pitches so far). I entered the pitch count from the previous game as our predictor of interest. I looked at how all of these variables did at predicting the seven basic outcomes of a plate appearance (strikeout, walk, HBP, single, extra base hit, home run, and out in play).
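The log-odds control works roughly like this: an outcome's expected rate for a particular batter-pitcher matchup is built from each party's seasonal rate and the league rate, combined on the log-odds scale. The function below is a minimal sketch of that idea; the function names and the illustrative rates are mine, not values from the article.

```python
import math

def log_odds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def expected_rate(batter_rate, pitcher_rate, league_rate):
    """Log-odds (log5-style) expected rate of an outcome for a
    batter/pitcher matchup: batter log-odds plus pitcher log-odds,
    minus league log-odds, mapped back to a probability."""
    lo = log_odds(batter_rate) + log_odds(pitcher_rate) - log_odds(league_rate)
    return 1 / (1 + math.exp(-lo))

# Illustrative: a batter who strikes out 25% of the time facing a
# pitcher who strikes out 20% of batters, in a league with an 18% K rate.
print(round(expected_rate(0.25, 0.20, 0.18), 4))
```

A useful sanity check on the method: when the batter, pitcher, and league rates are all equal, the expected rate is simply the league rate.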

Pitch count from the previous game had a significant predictive effect on singles (p = .082, please spare me the lecture), home runs (.057), and outs in play (hooray, .015!). All three effects were bad news for the pitcher. There is a carry-over effect from one start to the next. How bad is it?

Let's assume that our pitcher is league average for 2012 and is facing a league-average batter, and compare what would happen if his previous outing had been 100 pitches vs. 110 pitches (and for fun, 140 pitches).
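Mechanically, the comparison works by shifting a league-average outcome rate on the log-odds scale by the regression coefficient times the difference in previous-game pitch count. The article does not report the fitted coefficients, so the value of `beta` below is purely hypothetical, chosen only to show the shape of the calculation.

```python
import math

def rate_given_prior_pitches(base_rate, beta_per_pitch, prior_pitches,
                             ref_pitches=100):
    """Shift a league-average outcome rate on the log-odds scale by a
    (hypothetical) per-pitch regression coefficient, relative to a
    100-pitch previous start."""
    lo = math.log(base_rate / (1 - base_rate))
    lo += beta_per_pitch * (prior_pitches - ref_pitches)
    return 1 / (1 + math.exp(-lo))

base_hr = 0.027   # rough league-average HR rate per PA
beta = 0.0005     # hypothetical coefficient, for illustration only
for pitches in (100, 110, 140):
    print(pitches, round(rate_given_prior_pitches(base_hr, beta, pitches), 5))
```

Even with a coefficient of this rough size, the shift from 100 to 110 pitches moves the rate by only a few hundredths of a percentage point, which is the scale of effect discussed below.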


[Table: expected per-PA rates of singles, home runs, and outs in play against a league-average batter, given 100, 110, or 140 pitches in the previous start; cell values not recovered.]
We see that extending a pitcher to 110 pitches in his previous start, compared to a 100-pitch outing, shaves a few hundredths of a percent off each of those outcome rates in his next start. To put that into some workable context, let's say that a manager routinely pushed all five of his league-average starters to 110 pitches, and another routinely stopped at 100. Figuring that a team's starters face about 4,000 batters per year, the first manager's team might be expected to give up roughly an extra single and an extra home run, while losing about four outs. (Yes, I know that doesn't add up. If we looked at the other events, there would probably be tiny fractions of those changing hands.)

All told, we're talking about roughly three or four runs for the team all season as the penalty for routinely pushing pitchers to 110 pitches, rather than 100. That's not zero. If you round a little bit, you can say the words "half a win" and not feel like a liar. Then again, if a manager went to 110 half the time with his pitchers (and how many do that even half the time?), the penalty would be "a run or two." Over an individual game, the effect is very small, and it would be overwhelmed by randomness anyway. There's a signal in that noise, but it's not as interesting a signal as people seem to believe.

Now, regularly pushing pitchers to 140 is a different story. A team would give up seven or eight extra singles and five extra home runs, and record 18 fewer outs in play (again, it doesn't add up… I know). That makes the carry-over penalty over 4,000 plate appearances around 15-20 runs for the season. It's a bad strategy if done constantly, but then, no manager does this constantly anymore.
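The run totals above can be sanity-checked with standard linear weights. The run values below are conventional ballpark figures of my choosing; the article does not specify which values it used, so treat this as a rough reconstruction of the arithmetic rather than the actual calculation.

```python
# Rough linear-weights run values (conventional approximations).
RUN_VALUE = {"single": 0.47, "home_run": 1.40, "out": -0.27}

def season_penalty(extra_singles, extra_home_runs, outs_lost):
    """Approximate run cost of the carry-over effect over a season
    (~4,000 batters faced by the starters): extra hits add runs,
    and each out lost also costs runs."""
    return (extra_singles * RUN_VALUE["single"]
            + extra_home_runs * RUN_VALUE["home_run"]
            - outs_lost * RUN_VALUE["out"])

# Routinely 110 instead of 100: ~1 extra single, ~1 extra HR, ~4 outs lost.
print(season_penalty(1, 1, 4))     # on the order of 3 runs

# Routinely 140: ~7-8 extra singles, ~5 extra HR, ~18 outs lost.
print(season_penalty(7.5, 5, 18))  # on the order of 15 runs
```

With these run values, the two scenarios land at roughly 3 runs and roughly 15 runs per season, which matches the "three or four runs" and "15-20 runs" figures in the text.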

I ran a couple of supplemental analyses (research speak for "I was playing around with the dataset") to check a couple other possible effects. I added an interaction term to the regression between pitch count from the last time out and pitch count up to this point in the game. Maybe a guy coming off a 120-pitch outing tires more quickly than a guy coming off a 100-pitch outing. That interaction term never got close to significance.

I also looked at whether the number of pitches from two outings ago made a difference by adding that into the regression. (I looked at cases in which both the immediately previous start and the one before it came on standard four-day rest.) Pitch count from two starts ago did not seem to have any additional effect. There is a carry-over effect on performance from one start to the next, but it doesn't appear to persist much past that.

I also tested a quadratic model (I entered pitch count from last time, squared) to account for the fact that at the extreme edges of pitch counts, the effects might be compounded more with each additional toss toward home. This didn't seem to fit the data, however.
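The two supplemental regressors described above amount to adding columns to the design matrix: a product term for the interaction and a squared term for the quadratic check. The sketch below builds those columns from toy data; the variable names and values are mine, and a real run would feed this matrix into a logistic regression (e.g., statsmodels' Logit) on a binary plate-appearance outcome.

```python
import numpy as np

# Toy predictors (illustrative only; not the real dataset).
rng = np.random.default_rng(0)
n = 1000
prev_pitches = rng.integers(80, 141, size=n)   # pitch count last start
pitches_so_far = rng.integers(0, 121, size=n)  # pitches before this PA

# Interaction: does a heavy previous workload make a pitcher
# tire faster within the current game?
interaction = prev_pitches * pitches_so_far

# Quadratic: do extreme previous counts compound the carry-over effect?
prev_sq = prev_pitches ** 2

# Design matrix with an intercept column, ready for a logistic model.
X = np.column_stack([np.ones(n), prev_pitches, pitches_so_far,
                     interaction, prev_sq])
print(X.shape)
```

In the article's analysis, neither the interaction column nor the squared column earned its keep: the interaction never approached significance, and the quadratic term didn't fit the data.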

How Long is Too Long?
Let's first deal with some big methodological issues around this study. First off, "pitchers who started after four days of rest" is a selective sample. If a guy threw too many pitches last time out or was feeling off, his next start might have been pushed back or he might have had a turn skipped. Also, one will notice that Bruce Bochy let Tim Lincecum throw 148 pitches right before the All-Star break, when he wouldn't need him to come back on the fifth day.

Finally, the guys who are allowed to go 120 pitches, for example, are (somewhat by definition) the guys whom the manager believes can handle 120 pitches in one game and come back on regular rest and still be effective (in other words, not Erik Bedard). Assuming that managers have some clue about what they're doing, we need to be careful in interpreting these results. Pushing any random pitcher to 120, perhaps one who's not built to do that, might (repeat, might) actually have much more catastrophic effects in his next start than these results might suggest. Then again, for those of you playing fantasy baseball, if a pitcher does have a 120-pitch outing, history shows that it will not affect him too greatly his next time out.

These results look only at a performance hangover effect from throwing a lot of pitches in one start. The risk of injury is another issue altogether. One could make a case that allowing a really good starter to work a little overtime in the seventh inning of a tight game when the bullpen is tired or not that good to begin with is actually worth the price to be paid in his next start. However, we know that throwing a lot of pitches is hazardous to a starter's health, and it does little good to get an extra inning out of him now if you lose him for two months down the road. I guess I'll have to do that injury study next.

Finally, there's the issue of the fact that Lincecum was chasing a no-hitter, and if Bochy had pulled him out, Lincecum would have spent the rest of his life wondering "what if." Might that have damaged his ego so much that it would have affected him through the rest of the season? Bochy may have been fully aware that letting Lincecum throw another 20 pitches would affect him, but believed that the alternative was worse. Part of the problem is that potential no-hitters don't come along very often, so it's hard to run a study on what has happened throughout history.

Pizza, I'm wondering if this analysis runs into the same difficulty as JC Bradbury's analysis when he worked on this. The game in which a pitcher throws 110 or 140 pitches is generally a well-pitched game. So, that pitcher will have (on average) underperformed his seasonal averages in all other games that year including the game immediately after. In other words, if you did this same analysis on the game *before* the 110 pitch or 140 pitch game (or any other randomly sampled start from the season selected as being not the 110 pitch game in question) would you have found the same thing?
It's a variation on the "punishment illusion". Lincecum went 140 pitches _because_ he was throwing a no-hitter, and it's not likely that his next game will be as good. (Where do you go from there but down?)

I had considered that. I figured that the effect is ameliorated by the fact that my baseline for performance is his average stats for that year (although, as you point out, this includes his likely awesome performance, which will skew the results). If anything, if some of the decrease in performance is due to a regression-to-the-mean bias, the small effect that I found just got smaller.
Russell, wait a minute. If a pitcher in the high-pitch-count game gives up 1 run less per 9 (which is probably conservative), then in the next game his average RA/9, even if there is no effect from the previous game, is going to be around 1/30 of a run worse than his seasonal average, right?

You are finding an effect of roughly the same amount! So where is there a residual effect? Is my math wrong?
Slight correction on your math. Let's say he's a 4.00 RA/9 pitcher usually, but throws a shutout (so, 4 runs per 9 less than usual). Figure that his starts aren't usually 9 innings, but he's a 180 IP guy seasonally (for ease of calculation). We'd expect him to give up 80 runs over 180 innings. Taking this masterpiece, but long start out, we assume 80 runs in 171 innings, which means that he's something like a 4.21 RA/9 pitcher. I don't know that we can make those sort of static state assumptions in real life, but the point is well-taken.

I believe that the argument you're making is that even the small effect I found might be even smaller, which I am happy to support.
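The correction in the comment above is just an RA/9 calculation, and it checks out: take the shutout's nine innings out of the 180, leave the 80 runs in, and recompute.

```python
def ra9(runs, innings):
    """Runs allowed per nine innings."""
    return 9 * runs / innings

# A 4.00 RA/9 pitcher over 180 IP allows 80 runs.
assert ra9(80, 180) == 4.0

# Remove the 9-inning shutout: the same 80 runs over 171 innings.
print(round(ra9(80, 171), 2))  # -> 4.21
```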
Yes, except that if the difference was that a pitcher is really .21 runs per game worse in other games than in all games (the difference between 4 and 4.21), that would be 20+ runs per season in the "all team, all season" hypothetical, and would actually change the sign of the effect, not just reduce it, right?

With MGL's more modest 1 r/g lower in long outings and your innings model, we get maybe 5 runs per season in the all-team, all-season hypothetical, which changes the sign of the 110-pitch effect and cuts into the 140-pitch effect significantly (and 1 r/g is probably an understatement for 140-pitch outings). So the adjustments might well be big enough to change part of the take-home message. Given the size of the uncertainty here, it seems unclear whether the sign is positive or negative in either case. Then again, perhaps it doesn't make sense to worry about the sign if the big take-home is just that the effect, whatever it is, is a small one.
"All told, we're talking about roughly three or four runs for the team all season as the penalty for routinely pushing pitchers to 110 pitches, rather than 100."

One cannot look at this in a vacuum. Ten more pitches thrown by starters means fewer pitches thrown in the game by relievers.

In the "good old days" relievers were certainly considered to be less effective than starters. With the specialization and shorter workloads in today's game, a fresh reliever might be expected to do better than a fatigued starter.

However, that might be neutralized by the fact that ten extra pitches or so per day will add another 1600 or so per year to the workload of the bullpen.

All in all, there are an awful lot of variables to take into account before it can be determined which plan is best.

I hinted at this in the article where I pointed out that you can make a decent argument that the couple of run penalty can actually be justified. Your ace is cruising, it's a tight game, and the bullpen pitched 6 innings yesterday. It might not be a horrible idea.
To extend your staying up late analogy for the article, given that Lincecum got 9 days off between the no-no and his follow-up tonight, could it be said that he was allowed to sleep in after the long night?
We need to do a lot more research into the effects of bullpen workload. We (at least I) have an idea that managers let starters, especially poor and mediocre ones, pitch too long when they are having a good game, given the strong evidence that pitchers fare a lot worse the third and later times through the order, even when they are "cruising."

However, without having some idea of the advantages of saving your bullpen, it is difficult to know how long to leave in your starter, given that they tend to fare worse and worse as the game goes on, no matter how they are pitching...
It seems to me that the real cost of high pitch counts is not so much what happens to the pitcher in his next start or his next few starts but, as happened with Santana, the increased chance that he will sustain a disabling injury.