Pretend for a moment that you’ve been hired to direct a major-league baseball team’s pitching staff, yet have no knowledge of current or historical pitcher-usage patterns. After pinching yourself to make sure you weren’t in the middle of a Walter Mitty daydream, and ignoring Groucho’s timeless advice by joining an organization that would choose you for such an important job, how would you go about structuring your staff? Your overriding concerns, of course, would be to: (a) allow the fewest runs possible; (b) keep your best pitchers healthy (unless you insist on remaining altruistic in the midst of a competitive environment, in which case you’ll probably try to keep all your pitchers healthy); and (c) ensure that those runs which are allowed are timed so as to add up to the fewest losses.
Since some pitchers are better than others, to minimize the number of runs allowed you’ll want your best pitchers to work as many innings as possible, and to keep them healthy you’ll have to avoid letting them pitch when fatigued—a topic of ongoing debate, but for our purposes let’s just say this means you’ll need a reasonable number of pitchers on hand to get through a long season, some of whom may be significantly less effective than others. Given this, to serve purpose (c) you would want your best pitchers to throw the highest-leverage innings (i.e., those which have the most effect on whether you win or lose), and your worst pitchers to throw the lowest-leverage innings. While leverage varies greatly from game to game and inning to inning, in aggregate the later you get into a game, the higher it runs. Thus, if a pitching staff is being used optimally, you might expect better pitchers to be pitching in later innings, and you might expect the average number of runs scored per inning to decrease as the game moves on. Is this what happens in baseball today, and if so, should you and your fictional team mimic current usage patterns?
Perhaps not. The chart above shows average runs scored per inning during the Retrosheet era, grouped into Early (innings one through three), Middle (innings four through six), and Late (seventh inning or later). There are some single-season fluctuations, of course, but in general, scoring rates were reasonably equivalent throughout a game from the mid-'50s into the mid-'70s, when scoring in the late innings became consistently lower than in the early and middle innings, often dramatically so. This continued into the late '90s, at which point scoring in the middle innings started to consistently outpace the early innings, leaving us today with a pretty clear stratification, but not necessarily the one we want: low scoring at the end of games, medium scoring at the start, and high scoring in the middle.
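As a minimal sketch of the bucketing behind this chart—assuming per-half-inning run totals have already been extracted from Retrosheet play-by-play (the function names and sample numbers below are mine, invented purely for illustration):

```python
def inning_group(inning):
    """Map an inning number to the grouping used in the chart."""
    if inning <= 3:
        return "Early"
    if inning <= 6:
        return "Middle"
    return "Late"  # seventh inning or later

def runs_per_inning_by_group(inning_runs):
    """inning_runs: list of (inning, runs_scored) tuples, one per half-inning.
    Returns average runs per half-inning for each group."""
    totals, counts = {}, {}
    for inning, runs in inning_runs:
        g = inning_group(inning)
        totals[g] = totals.get(g, 0) + runs
        counts[g] = counts.get(g, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

# Invented data: (inning, runs) for a handful of half-innings.
sample = [(1, 1), (2, 0), (3, 2), (4, 1), (5, 3), (6, 0), (7, 0), (8, 1), (9, 0)]
print(runs_per_inning_by_group(sample))
# e.g. {'Early': 1.0, 'Middle': 1.333..., 'Late': 0.333...}
```

The real chart applies the same grouping to every half-inning in the Retrosheet era and averages by season.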
The decrease in late-inning scoring most certainly has to do with the evolution of the stopper, a position that grew into prominence in the '70s. Let’s look at the same chart, but split out the ninth inning from the others:
When broken out separately, the ninth inning (and later) seems to have always seen much lower run scoring than the seventh and eighth innings, which here drop back toward the pack. What’s interesting to me is that stoppers in the '70s, usually the most effective pitchers on a staff in short doses, were often relied on for multi-inning saves, yet run prevention in the seventh and eighth innings during that time wasn’t much better than in the six innings before. It really wasn’t until the late '80s, with the advent of the one-inning closer and greater bullpen specialization, that scoring rates in the seventh and eighth innings dropped significantly below those seen in the middle innings. Viewed in isolation, this specialization coincides with the likely benefit of gradually reduced run scoring by inning at the end of the game, but it has also reduced the number of innings pitched not only by closers but by other late-inning specialists, all of whom tend to be very effective. Has compressing the timeframe in which late-inning specialists pitch created a cascade effect, where run scoring in the middle innings has gone up? Are mediocre pitchers at the back of the bullpen putting up more innings in middle relief? To find out, first let's look at how many of those innings are going to starters and how many to relievers.
This chart shows the percentage of batters faced by a starter in each inning. Note that the number of quick hooks, where the starter is pulled before completing five innings, has actually decreased slightly over time despite the increase in league-wide offensive production. The sixth inning has remained relatively flat, with only a slight decrease in starter workload. But starting in the mid-'70s we see a steady decrease in pitchers working the seventh, and precipitous drops in the eighth and ninth. Is this because pitchers have lost the ability to retain effectiveness later in games?
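The percentage in this chart can be computed directly from plate-appearance records; here is a minimal sketch, assuming each plate appearance has already been tagged with its inning and whether the starter was on the mound (the data shape is my assumption, not Retrosheet's native format):

```python
def starter_bf_share(plate_appearances):
    """plate_appearances: iterable of (inning, is_starter) pairs, one per
    batter faced. Returns {inning: fraction of batters faced by the starter}."""
    faced, by_starter = {}, {}
    for inning, is_starter in plate_appearances:
        faced[inning] = faced.get(inning, 0) + 1
        by_starter[inning] = by_starter.get(inning, 0) + (1 if is_starter else 0)
    return {inning: by_starter[inning] / faced[inning] for inning in faced}

# Toy example: the starter faces every batter in the first inning,
# but only half the batters in the seventh.
pas = [(1, True), (1, True), (1, True), (7, True), (7, False)]
print(starter_bf_share(pas))  # {1: 1.0, 7: 0.5}
```

Aggregating those fractions by season and inning produces the lines in the chart.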
Above, we see starting pitcher OPS allowed by inning—I’ve switched from runs to OPS since it’s easy to calculate, correlates reasonably well with runs scored, and it’s difficult to correctly assign runs to a pitcher who doesn’t work an entire inning. Take a look at the blue (early innings) and red (middle innings) lines. Throughout the entire graph, starters have been more effective in the early innings than in the middle innings, although in the '70s the gap was much smaller. This loss of starter effectiveness has been noted by many analysts, and has been shown to be true not only in aggregate but for a significant percentage of starters. Notice the green line, however. Initially, the seventh and eighth innings saw similar effectiveness to the middle innings, but starting in the mid-'70s, starter effectiveness in the late innings gradually met and then surpassed early-inning effectiveness. This coincides with the reduction in starter workload in the late innings, suggesting a survivor effect, in which only the best pitchers, or those pitching particularly well that day, are allowed to pitch that long. Another child of the '70s—the five-man rotation—may also help explain this, as adding a fifth, probably lesser, starter will increase the percentage of games in which the bullpen needs to be called on in the late innings.
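For reference, OPS is simply on-base percentage plus slugging; a quick sketch of the calculation from an opposing batting line (the numbers below are invented for illustration):

```python
def ops_allowed(h, bb, hbp, ab, sf, tb):
    """OPS = OBP + SLG, computed from the batting line a pitcher allows.
    h: hits, bb: walks, hbp: hit-by-pitch, ab: at-bats,
    sf: sacrifice flies, tb: total bases."""
    obp = (h + bb + hbp) / (ab + bb + hbp + sf)  # on-base percentage
    slg = tb / ab                                # slugging percentage
    return obp + slg

# Invented opposing line: 250 AB, 65 H, 20 BB, 2 HBP, 3 SF, 100 total bases.
print(round(ops_allowed(65, 20, 2, 250, 3, 100), 3))  # → 0.716
```

Tallying a separate opposing batting line per inning, restricted to starters, yields the lines in the chart above.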
In any case, these usage changes have meant that the late innings are often worked by the best starters or relievers, which should be a good thing—but as we can see from the chart above, starters are still facing roughly the same percentage of batters in the middle innings as they always have, yet run prevention in the middle innings has become significantly worse than at the start of the game. As is true for many of us, baseball’s “soft middle” has become more pronounced over the last decade, and it prevents teams from ensuring their best pitchers work the most important innings. In this age of 12-man pitching staffs, perhaps bullpen specialization has left the middle innings to lesser relievers than in days gone by:
If so, it hasn’t been a particularly large effect. Over time, the relievers replacing pitchers pulled in the middle innings have pitched roughly as well as starters do during the middle innings. Not that the starters shouldn’t be pulled—it’s reasonable to assume that the starters being yanked are bad pitchers or ones having bad days, and that the Starter OPS line above would be higher had they been allowed to continue—but if relievers pitching the middle innings have grown worse since the '90s, then starters have grown worse at pretty much the same rate.
It’s hard to identify a single cause, but the fact remains that baseball teams now give up a higher percentage of runs in the middle innings than ever before, which is nowhere near the optimal pattern. Too often, the current paradigm allows the middle of the game to be worked by starters who have lost much of their effectiveness or mediocre relievers, a pattern which doesn’t help teams win. Some of the changes teams could experiment with are:
- Allow late-inning relievers to pitch more innings, and earlier ones, taking innings from the least-effective hurlers on the staff (and possibly eliminating the need for 12 pitchers).
- Bring back the four-man rotation, ensuring that a higher percentage of the middle innings are worked by better starters.
- Try out a tandem-starter routine, such as SOMA, to minimize the deleterious effects of starters losing effectiveness as they face more batters in a game.
Any of these might help reduce run-scoring in the middle innings without increasing it early or late. Imagine you’ve been hired to direct the pitching staff of a team that isn’t tied to any current or historical usage patterns, and think of how much difference you’d have the opportunity to make.