
How many pitches did pitchers really throw “back then?” You know, during the days when men were men, a mustache was a mustache, and pitchers weren’t coddled. No one did any drugs ever, especially in baseball, and pitchers finished what they started. Just ask any lawn care professional who specializes in youth removal and was a fan of the game back then. Yes, the 1960s and 1970s were the halcyon days of high pitch counts, when all that you needed was a 10-man pitching staff. It was glorious.

But of course, something changed. Now, a pitch count much further north of 100 is treated like a major catastrophe and teams often carry enough relievers to fully staff a couple of barbershop quartets. The complete game has gone the way of Britney Spears’s dignity. Everything is so different now, and those who, for whatever reason, are beholden to the idea that 40 or 50 years ago, everything was so much better want to know what happened. The answer is a little surprising: Things have changed, but not as much as one might think, nor as dramatically.

Perhaps the biggest shift over time has been from the four-man rotation to the five-man rotation (and we’ll take that up some other time), but we’ve also seen a shift within a single start. In 1950, the average starter faced 29.4 batters and recorded 20.0 outs during his time on the mound. By 2012, those numbers had fallen to 25.1 batters and 17.5 outs. Over more than 60 years, starters have lost a little less than an inning of durability. In a nine-inning game, that’s not inconsequential, but a graph of how it happened shows that there wasn’t any one point where suddenly, starters stopped being interested in going deep into games. In fact, what’s striking about this graph is how gentle a descent it really is.

But let’s talk about pitch counts. It’s relatively easy to find pitch count data for recent games. Even before PITCHf/x information was collected, Retrosheet (for the Hall of Fame!) has near-complete records of games played since the turn of the millennium, and fairly good data going back into the 1990s (and even enough to merit mention in the late 1980s). The big problem is that for anything before 1988, there’s almost nothing there. Almost. In the 1950s and 1960s, the Brooklyn Dodgers kept pitch sequencing and pitch count data that give us some insight into what was really going on back then. But what happened the rest of the time? Were pitch counts really that high?

Tom Tango (yikes! 10 years ago!) has taken a quick look at this question and developed a pitch count estimator for use with historical data. His work holds up several years later, although I would suggest a few improvements. This chart shows the average number of pitches per plate appearance in baseball from 1950-1963 and then from 1988-2012. (My graphical output smushes them all together. I drew a red line on the chart to show the split.) We can see that over time, there's been an upward drift from 3.37 pitches per PA in 1950 to 3.77 pitches in 2012.

One thing that we can use to our advantage is that we know that if a plate appearance ended in a strikeout, it must have taken at least three pitches (one, two, three strikes and he's out). Walks necessitate at least four pitches. Again, from 1950-1963 and 1988-2012, we see the average number of pitches per PA, broken down by what the eventual outcome of the PA was. We see some drift upward in the number of pitches each batter sees (although not with strikeouts, interestingly). Balls in play (hits and outs) all congregate just above three pitches per plate appearance, with strikeouts slightly higher at around four and a half, and walks higher than that at around five and a quarter. It's more important to know what the outcome was, but we do need to account for that upward drift.
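To make that logic concrete, here is a minimal sketch (in Python) of an outcome-based estimator in the spirit of Tango's approach. The per-outcome averages and the year adjustment are illustrative placeholders based on the rough figures above, not the fitted values actually used here.

```python
# Minimal sketch of an outcome-based pitch count estimator. The per-outcome
# averages below are illustrative placeholders (balls in play ~3.3 pitches,
# strikeouts ~4.5, walks ~5.25), not the model's actual fitted values.
AVG_PITCHES_BY_OUTCOME = {
    "in_play": 3.3,    # hits and outs on balls in play
    "strikeout": 4.5,  # a strikeout takes at least 3 pitches
    "walk": 5.25,      # a walk takes at least 4 pitches
}

def estimate_pitches(pa_outcomes, year_adjustment=0.0):
    """Estimate a starter's pitch count from the outcomes of the plate
    appearances he faced. `year_adjustment` is a per-PA bump meant to capture
    the league-wide upward drift in pitches per PA over time."""
    return sum(AVG_PITCHES_BY_OUTCOME[o] + year_adjustment for o in pa_outcomes)

# Example: a start with 18 balls in play, 6 strikeouts, and 3 walks
outcomes = ["in_play"] * 18 + ["strikeout"] * 6 + ["walk"] * 3
print(round(estimate_pitches(outcomes)))  # roughly 102 pitches
```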

With the data that we have available, we can fit a nice little regression line that tells us how much we should adjust upward in each year per plate appearance. In addition, I also wanted to see whether pitchers with high strikeout rates have more (or less?) efficient strikeouts than the Aaron Cook types. So, I built a model that regressed the number of pitches contained in a strikeout on the year and the pitcher's annual rates of several outcomes (strikeouts, walks, singles, etc.). I then did the same for the number of pitches contained in a walk, and so on. It turns out that this did improve the model's performance a little bit, but not by a great deal.
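For readers who want to see the shape of that step, here is a rough sketch of the kind of regression described, fit on toy data. In the real version, `k_pa` would hold every strikeout plate appearance with a known pitch count; the column names here are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy stand-in for the real dataset: every strikeout PA with a known pitch
# count, tagged with the season and the pitcher's seasonal outcome rates.
rng = np.random.default_rng(0)
n = 500
k_pa = pd.DataFrame({
    "year": rng.integers(1988, 2013, n),
    "k_rate": rng.uniform(0.10, 0.30, n),
    "bb_rate": rng.uniform(0.04, 0.12, n),
    "single_rate": rng.uniform(0.12, 0.20, n),
})
k_pa["pitches"] = 3 + rng.poisson(1.5, n)  # a strikeout takes at least 3 pitches

# Pitches per strikeout as a function of the year (the upward drift) and the
# pitcher's overall profile. A parallel model would be fit for walks, balls
# in play, and so on, and the fitted values would replace flat averages.
k_model = smf.ols("pitches ~ year + k_rate + bb_rate + single_rate", data=k_pa).fit()
print(k_model.params)
```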

I totaled everything up for each starting pitcher in each game for which Retrosheet has a full log of the game’s events. I went back and looked at how well the model did compared to the games for which there are actual pitch count data. The model correlated with all available data at .88, and looking specifically at data from before 1988, the model correlated at .95.

A few other model diagnostics for the super-initiated (#GoryMath)

                                         All Available Data (1950-2012)   Pre-1988 Data
Correlation (predicted vs. actual)       .884                             .958
Equation of the line predicting actual   .887 * predicted + 9.54          .941 * predicted + 0.69
Median residual                          -0.10                            0.09
RMSE                                     9.23                             9.24

The correlation is quite good, and with the median almost exactly at zero, we at least aren’t worried about the distribution of the residuals being overly skewed. However, the equation of the line suggests that our predicted values are slightly on the high side. This is confirmed by this graph showing the residuals based on actual pitch count (for the pre-1988 data; the other one looks similar). Here we see that as we cross into outings that we know in reality took more than 100 pitches to complete, the model tends to somewhat over-estimate the number. So, our results are a bit biased upward for high-pitch-count games. We’ll keep that in mind.

And yes, the RMSE is nine and a quarter pitches, meaning that a customary two-standard-deviation margin of error is going to be 18.5 pitches wide in either direction when we are talking about our estimate of a single game. We need to be a little careful in our interpretations from here on out.
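For the curious, the diagnostics in the table above can be reproduced with a few lines of code, given parallel arrays of estimated and actual per-start pitch counts. This is a sketch of the general calculation, not the exact script used here; the toy numbers at the bottom exist only to make it runnable.

```python
import numpy as np

def estimator_diagnostics(predicted, actual):
    """Reproduce the table's diagnostics for two parallel arrays of per-start
    pitch counts: the model's estimates and the recorded actuals."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)

    corr = np.corrcoef(predicted, actual)[0, 1]
    # Least-squares line predicting actual from predicted
    slope, intercept = np.polyfit(predicted, actual, 1)
    residuals = actual - predicted
    return {
        "correlation": corr,
        "line": f"{slope:.3f} * predicted + {intercept:.2f}",
        "median_residual": np.median(residuals),
        "rmse": np.sqrt(np.mean(residuals ** 2)),  # ~9.25 here, so +/- 18.5 at 2 SD
    }

# Toy example with made-up numbers
print(estimator_diagnostics([95, 102, 110, 88], [90, 104, 118, 85]))
```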

Have Pitch Counts Really Gone Down Over Time?
Here’s what the average estimated pitch counts look like over time by year.

I estimate that in 1950, the average starter threw 104 pitches in a game. In 2012, that number had fallen to 93, so yes, average pitch counts for starters have gone down. Again, this is not insignificant, but let’s keep some sense of perspective. We’re talking about a 10-percent drop in average workload. Not only that, remember that in environments where there were more high-pitch-count games (and we’ll see in a moment that there really were in decades past), the model tends to inflate how many pitches it thinks were thrown. The actual drop may have been slightly smaller.

But perhaps the more important (or more discussed) issue is the question of games where the pitch count really starts climbing. In 2013, Tim Lincecum threw a no-hitter and delivered 148 pitches in the process. The Giants and manager Bruce Bochy were roundly criticized (including by me) for letting Lincecum throw so many pitches, for fear that the outing might lead to arm troubles. In 2010, Edwin Jackson had a similar 149-pitch (eight-walk) no-hitter. Those criticisms were generally met by assertions that pitchers used to throw 140 pitches on a regular basis. What’s the big deal?

Do the data back up that particular counter-argument? This chart shows the percentage of starts within each year that, according to our pitch-count estimator, lasted (moving vertically downward by line) more than 100 pitches, 110 pitches, 120, 130, and 140. Again, we see a gentle downward trend in ultra-marathon pitch count games, to the point where they mostly disappear over the past few years, with the curious note that the lines spike a bit in the late 1960s and 1970s. The '60s and '70s that everyone longs for may have been an era of high pitch counts, but within the overall trend, they seem to be the aberration. Not surprisingly, looking at the data on a molecular level, the jump occurs after 1968, when the pitching mound was lowered and offense went up. Batters got more hits, so pitchers had to face more batters, and thus they needed more pitches to finish their work. At least until the market corrected itself and the line resumed its downward trend.

We do see that pitch counts of 140 were never a particularly common event, even in 1950. The model estimates that in 1950, only 12.5 percent of starts (one in eight) reached that point, although that's high enough that a couple of starters around MLB (assuming a then-full slate of 16 starters) would do so on an average day. Just over a quarter of games in 1950 (26.7 percent) saw a starter throw at least 130 pitches. It happened, but not every single time a pitcher took the mound. By the time we get to 1970, only 13.5 percent of starts featured 130 pitches and 4.9 percent featured 140. There once was a time when starters had a longer leash, but let's not exaggerate what they actually did with it.

Maybe the more interesting trend is the distance between the “over 100 pitches” line and the “over 110 pitches” line. Much is made of the fact that 100 pitches seems to be a magic number that teams adhere to (maybe too much?) in deciding whether to remove a pitcher. We see that in the 1980s, the line of pitchers who made it to 110 starts to drift downward faster than the line of those who made it to 100. It’s as if someone started screaming, “The line must be drawn here! 105 pitches, no farther!”

Here’s a graph (again, by year) of the percentage of all pitchers who had already crossed the (estimated) 100-pitch line, and then were allowed to exceed 110. There has been a general move toward a 100-pitch limit over time, but sometime in the early 1980s, there was a sort of inflection point. In 1950, 85.9 percent of pitchers who made it to the century mark were allowed to throw another 10 pitches (at least). As late as 1983, more than two thirds of pitchers who made it to 100 also made it to 110. But over the last 30 years, that rate has fallen precipitously to fewer than one third (32.1 percent in 2012). Between 2000 and 2001, there was a particularly sharp drop from 51.3 percent to 43.8 percent. To say that teams have only recently become obsessed with the number 100 isn’t entirely correct. The movement has consistently been in this direction, but it picked up steam in the 1980s, and even more steam after the turn of the millennium (not long after the initial publication of Pitcher Abuse Points).
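As a sketch of how the numbers behind these last two graphs might be computed, assume a table with one row per start, its year, and its estimated pitch count (the column names are hypothetical); the calculation is just a pair of grouped threshold rates.

```python
import pandas as pd

# Hypothetical per-start data; the real table has one row for every start
# from 1950-2012 with its estimated pitch count.
starts = pd.DataFrame({
    "year":        [1950, 1950, 1950, 2012, 2012, 2012],
    "est_pitches": [142, 118, 96, 104, 93, 81],
})

# Share of starts in each year crossing each pitch threshold
thresholds = (100, 110, 120, 130, 140)
by_year = starts.groupby("year")["est_pitches"]
shares = pd.DataFrame({f"over_{t}": by_year.apply(lambda s, t=t: (s > t).mean())
                       for t in thresholds})
print(shares)

# Of starters who reached 100 estimated pitches, how many went on to 110?
over_100 = starts[starts["est_pitches"] > 100]
rate_110 = over_100.groupby("year")["est_pitches"].apply(lambda s: (s > 110).mean())
print(rate_110)
```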

Same as it Ever Was…
If there’s a take-home message from all of this, it’s that the idea of asking starting pitchers to do a little less is not a recent development. Looking back at what things were like 40 or 50 years ago, it’s easy to see the huge differences in the way that managers approach the job description of a starting pitcher. What’s harder to see is how it got to where it is today. The impulse to protect pitchers has been at work for several decades—yes, even back in the “golden age” of pitching when men were men. Throughout the last 60 years, pitch counts per start have consistently and steadily gone down, save for one brief period in the 1960s and ’70s when the game was still adjusting to the new mound. High-pitch-count games have slowly and consistently declined. The idea of 100 pitches as a Platonic ideal for what a start should look like has become more and more accepted over time, and there never really was a time when the game wasn’t generally headed in this direction of asking starters to do a little less. There is nothing new under the sun.

Maybe the game would be better if starters put in longer shifts, whether from the point of view of cultural aesthetics (this is not an empirical question) or strategic optimization (this is). But either way, there was no golden age when managers were not afraid of what a huge workload would do to their pitchers, whether you consider that a sign of manliness or foolhardiness. There was only a time when the impulse to protect the pitcher hadn’t had as much time to work yet.

donwinningham
12/02
Maybe coincidence, but the modern high-water mark - 1976 - was also the first year of true free agency. Could it be that starting-pitcher usage inversely tracks team spending on starting pitchers?
hotstatrat
12/03
Interesting point
smitty99
12/02
What I find interesting about pitching staffs in the 1950s is that they were not five-man starting staffs or even four-man staffs. Most teams had two or three ace or near-ace starters who would throw more than 200 innings. Studs like Roberts and Spahn threw even more. Each staff looked like it had a bunch of swingmen who would make a few starts in a season. Maybe Sunday doubleheader guys? Also, almost every starter had a few relief appearances in a season. Robin Roberts, for example, had 25 saves in his career.
NYYanks826
12/02
The more important question here is whether or not you have a graph that shows the percentage of pitchers who had a breakfast of chipped beef and scotch the morning that they were scheduled to start.
Tom9418
12/03
Don't you need to have a discussion/adjustment for the DH? I would think you'd see an impact there.