Pretend for a moment that you've been hired to direct a major-league baseball team's pitching staff, yet have no knowledge of current or historical pitcher-usage patterns. After pinching yourself to make sure you weren't in the middle of a Walter Mitty daydream, and ignoring Groucho's timeless advice by joining an organization that would choose you for such an important job, how would you go about structuring your staff? Your overriding concerns, of course, would be to: (a) allow the fewest runs possible; (b) keep your best pitchers healthy (unless you insist on remaining altruistic in a competitive environment, in which case you'll probably try to keep all your pitchers healthy); and (c) ensure that the runs you do allow are timed so as to add up to the fewest losses.

Since some pitchers are better than others, minimizing the number of runs allowed means your best pitchers should work as many innings as possible, and keeping them healthy means not letting them pitch when fatigued. How much work produces fatigue is a topic of ongoing debate, but for our purposes let's just say you'll need a reasonable number of pitchers on hand to get through a long season, some of whom may be significantly less effective than others. Given this, purpose (c) dictates that your best pitchers throw the highest-leverage innings (those with the most effect on whether you win or lose) and your worst pitchers throw the lowest-leverage innings. Leverage varies greatly from game to game and inning to inning, but in aggregate, the later you get into a game, the higher it runs. Thus, if a pitching staff is being used optimally, you might expect better pitchers to be pitching in later innings, and you might expect the average number of runs scored per inning to decrease as the game goes on. Is this what happens in baseball today, and if so, should you and your fictional team mimic current usage patterns?

Perhaps not. The chart above shows average runs scored per inning during the Retrosheet era, grouped into Early (innings one through three), Middle (innings four through six), and Late (seventh inning or later). There are single-season fluctuations, of course, but in general, scoring rates were reasonably equivalent throughout a game from the mid-'50s into the mid-'70s, when scoring in the late innings became consistently lower than in the early and middle innings, often dramatically so. This continued into the late '90s, at which point scoring in the middle innings started to consistently outpace the early innings, leaving us today with a pretty clear stratification, but not necessarily the one we want: low scoring at the end of games, medium scoring at the start, and high scoring in the middle.
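For readers who want to reproduce this kind of grouping from play-by-play data, here's a minimal sketch of the aggregation. It assumes you've already reduced the Retrosheet event files to a list of `(inning, runs)` pairs, one per half-inning; the sample numbers at the bottom are made up for illustration, not actual league data.

```python
from collections import defaultdict

def phase(inning):
    # Group innings the way the chart does:
    # Early (1-3), Middle (4-6), Late (7 or later)
    if inning <= 3:
        return "Early"
    if inning <= 6:
        return "Middle"
    return "Late"

def runs_per_inning_by_phase(half_innings):
    # half_innings: iterable of (inning_number, runs_in_that_half_inning)
    totals = defaultdict(lambda: [0, 0])  # phase -> [runs, half-innings]
    for inning, runs in half_innings:
        bucket = totals[phase(inning)]
        bucket[0] += runs
        bucket[1] += 1
    return {p: r / n for p, (r, n) in totals.items()}

# Illustrative half-innings from one fictional game
sample = [(1, 1), (2, 0), (3, 2), (4, 1), (5, 3), (6, 0), (7, 0), (8, 1), (9, 0)]
rates = runs_per_inning_by_phase(sample)
```

Run over a full season of half-innings instead of one game, this yields the three per-phase scoring rates plotted above.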

The decrease in late-inning scoring almost certainly has to do with the evolution of the stopper, a role that rose to prominence in the '70s. Let's look at the same chart, but split out the ninth inning from the others:

When broken out separately, the ninth inning (and later) has always seen much lower run scoring than the seventh and eighth innings, which here drop back toward the pack. What's interesting to me is that stoppers in the '70s, usually the most effective pitchers on a staff in short doses, were often relied on for multi-inning saves, yet run prevention in the seventh and eighth innings during that time wasn't much better than in the six innings before. It wasn't until the late '80s, with the advent of the one-inning closer and greater bullpen specialization, that scoring rates in the seventh and eighth innings dropped significantly below those seen in the middle innings. Looked at in isolation, this specialization appears to have delivered the desired benefit: run scoring that gradually declines, inning by inning, at the end of the game. But it has also reduced the number of innings pitched not only by closers but by other late-inning specialists, all of whom tend to be very effective. Has compressing the timeframe in which late-inning specialists pitch created a cascade effect, pushing run scoring in the middle innings up? Are mediocre pitchers at the back of the bullpen putting up more innings in middle relief? To find out, first let's look at how many of those innings are going to starters and how many to relievers.

This chart shows the percentage of batters faced by a starter in each inning. Note that the number of quick hooks, where the starter is pulled before completing five innings, has actually decreased slightly over time despite the increase in league-wide offensive production. The sixth inning has remained relatively flat, with only a slight decrease in starter workload. But starting in the mid-‘70s we see a steady decrease in pitchers working the seventh, and precipitous drops in the eighth and ninth. Is this because pitchers have lost the ability to retain effectiveness later in games?
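The metric behind this chart is straightforward to compute from the same play-by-play data. This is a sketch under the assumption that each plate appearance has been reduced to `(inning, pitcher_is_starter)`; the example input is invented to show the shape of the output, not real usage figures.

```python
from collections import defaultdict

def starter_share_by_inning(plate_appearances):
    # plate_appearances: iterable of (inning, pitcher_is_starter) pairs,
    # one per batter faced
    faced = defaultdict(lambda: [0, 0])  # inning -> [starter PA, total PA]
    for inning, is_starter in plate_appearances:
        faced[inning][1] += 1
        if is_starter:
            faced[inning][0] += 1
    # Fraction of batters in each inning who faced the starting pitcher
    return {inning: s / n for inning, (s, n) in sorted(faced.items())}

# Fictional sample: starters face everyone in the 1st,
# but only one of four batters in the 7th
sample_pas = [(1, True)] * 4 + [(7, True)] + [(7, False)] * 3
shares = starter_share_by_inning(sample_pas)
```

Aggregated by season, the per-inning fractions are exactly the lines in the chart above.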

Above, we see starting pitcher OPS allowed by inning. I've switched from runs to OPS because it's easy to calculate, it correlates reasonably well with runs scored, and it's difficult to correctly assign runs to a pitcher who doesn't work an entire inning. Take a look at the blue (early innings) and red (middle innings) lines. Throughout the entire graph, starters have been more effective in the early innings than in the middle innings, although in the '70s the gap was much smaller. This loss of starter effectiveness has been noted by many analysts and has been shown to hold not only in aggregate but for a significant percentage of individual starters. Notice the green line, however. Initially, the seventh and eighth innings saw effectiveness similar to the middle innings, but starting in the mid-'70s, starter effectiveness in the late innings gradually met and then surpassed early-inning effectiveness. This coincides with the reduction in starter workload in the late innings, suggesting a survivor effect: only the best pitchers, or those pitching particularly well that day, are allowed to pitch that long. Another child of the '70s, the five-man rotation, may also help explain this, as adding a fifth, probably lesser, starter increases the percentage of games in which the bullpen needs to be called on in the late innings.
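For the record, the "easy to calculate" claim is literal: OPS is just on-base percentage plus slugging percentage. A minimal sketch, with an invented stat line as the example (the counting-stat inputs are hypothetical, not drawn from the charts):

```python
def on_base_pct(hits, bb, hbp, ab, sf):
    # OBP = (H + BB + HBP) / (AB + BB + HBP + SF)
    return (hits + bb + hbp) / (ab + bb + hbp + sf)

def slugging_pct(singles, doubles, triples, hr, ab):
    # SLG = total bases / AB
    total_bases = singles + 2 * doubles + 3 * triples + 4 * hr
    return total_bases / ab

def ops_allowed(singles, doubles, triples, hr, bb, hbp, ab, sf):
    # OPS = OBP + SLG, computed here from a pitcher's batters-faced line
    hits = singles + doubles + triples + hr
    return (on_base_pct(hits, bb, hbp, ab, sf)
            + slugging_pct(singles, doubles, triples, hr, ab))

# Hypothetical line: 100 hits (70 1B, 20 2B, 2 3B, 8 HR),
# 40 BB, 5 HBP over 400 AB with 5 SF
value = ops_allowed(70, 20, 2, 8, 40, 5, 400, 5)
```

Tallying these inputs separately for each inning a pitcher works sidesteps the run-assignment problem mentioned above: every batter faced belongs unambiguously to one pitcher and one inning.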

In any case, these usage changes have meant that the late innings are often worked by the best starters or relievers, which should be a good thing. But as we can see from the chart above, starters are still facing roughly the same percentage of batters in the middle innings as they always have, yet run prevention in the middle innings has become significantly worse than at the start of the game. As is true for many of us, baseball's "soft middle" has become more pronounced over the last decade, and it prevents teams from ensuring that their best pitchers work the most important innings. In this age of 12-man pitching staffs, perhaps bullpen specialization has left the middle innings to lesser relievers than those who worked them in days gone by:

If so, it hasn’t been a particularly large effect. Over time, the relievers replacing pitchers pulled in the middle innings have pitched as well as starters pitch during the middle innings. Not that the starters shouldn’t be pulled—it’s reasonable to assume that the starters being yanked are bad pitchers or those having bad days and the Starter OPS line above would be higher had they been allowed to continue—but if relievers pitching the middle innings have grown worse since the ‘90s, then starters have grown worse at pretty much the same rate.

It’s hard to identify a single cause, but the fact remains that baseball teams now give up a higher percentage of runs in the middle innings than ever before, which is nowhere near the optimal pattern. Too often, the current paradigm allows the middle of the game to be worked by starters who have lost much of their effectiveness or mediocre relievers, a pattern which doesn’t help teams win. Some of the changes teams could experiment with are:

  1. Allow late-inning relievers to pitch more innings, and earlier in games, taking innings from the least-effective hurlers on the staff (and possibly eliminating the need for 12 pitchers).
  2. Bring back the four-man rotation, ensuring that a higher percentage of the middle innings are worked by better starters.
  3. Try out a tandem-starter routine, such as SOMA, to minimize the deleterious effects of starters losing effectiveness as they face more batters in a game.

Any of these might help reduce run-scoring in the middle innings without increasing it early or late. Imagine you’ve been hired to direct the pitching staff of a team that isn’t tied to any current or historical usage patterns, and think of how much difference you’d have the opportunity to make.

Thank you for reading

What is SOMA? I do not believe I have ever heard of it.
Sorry--I should have embedded a link there. SOMA is a proposed paired-starter usage pattern (it stands for Shorter Outings, More Appearances). You can read about it here:
I think you may have a selection bias looking at pitcher performance in extra innings. For example, there will be an eleventh inning only if the score is still tied after the tenth, i.e., only if both teams score the same number of runs (usually zero) in the tenth. (You can check this by looking at R/IP in the tenth inning when there is an eleventh inning vs. when there isn't one.) This selection bias, rather than "stopper" effectiveness, may be the reason for lower R/IP rates in extra innings.

Another reason for lower R/IP in the ninth inning or later is the "walk-off" effect. Home team run scoring is limited to overtaking the visitor's total.
Great point--you've made me re-run the chart separating the 9th from Extras, since what you're saying makes perfect sense. However, I'm as surprised as you to see that recently run scoring in Extras has actually been higher than in the 9th. Prior to the mid-'80s, scoring in Extras was generally lower; since then, it's been generally higher. This is especially true lately, with only one year since 2000 seeing fewer runs per inning in extras than in the 9th. There are far more 9ths than extras, so it doesn't change the 9th line much.

Your "walk-off" point is also well taken, though I suspect that might have as small an effect as extra innings do.
Interesting study, Ken, thanks.

The Percentage of Batters Facing the Starting Pitcher graph shows that teams are managing pitchers' usage more closely. Relative to how well a pitcher is actually pitching on a given day, it seems relatively more important these days to give each starter his four innings, be discretionary about his fifth, and hook him in the last three innings at any sign of struggle.

Frankly, I don't see anything in these charts to say relievers are not being used optimally. The relative superiority of relievers to starters keeps changing - I don't see a pattern or trend, except possibly that relievers were likely underutilized in the '50s and overutilized in the late '90s and early '00s. Perhaps that is why we have 12-man pitching staffs now: relieving has its own set of stresses, and teams may have recently learned that having more pitchers on hand alleviates them.

Four-man rotations? SOMA? 11-man staffs? I would like to see some teams try or retry these strategies, and I urge all of us to find evidence that might convince teams to do so. Even if they don't pan out, there might be some side knowledge coming out of such an attempt, which is what I think is the case here.

Thanks, HSR. I don't necessarily disagree with you about relievers being used optimally (or at least reasonably well). The charts definitely show that in the late innings, at least, scoring has gone down relative to the rest of the game. But that doesn't mean pitchers as a whole are being used optimally. Ideally, the middle innings should see less scoring than the early innings if you want to ensure more effective pitching in higher-leverage situations. That's not the case, and the reason is probably that pitchers in the middle innings (starters going through the lineup a third time, and middle relievers) aren't as good as fresh starters at the beginning of the game or better relievers at the end of it. It's exactly that problem which SOMA tries to address, and I'd love to see teams experiment with different approaches to reduce scoring in the middle innings.
Another probable reason for more runs in the middle of the game is that the batters are seeing those pitches for the second or third at bat - so they have a better sense of their speed, movement, etc.

Nonetheless, the differences look minuscule to me, if significant at all. Hence, any adjustment for this should probably be in equal measure.