
In the beginning, there were no rotations. There were no relievers. There was only one pitcher, and the term “everyday player” had no meaning. In 1876, George Bradley started all 64 games for the St. Louis Brown Stockings, completing 63 of them; his teammates combined to throw four innings all year.

Of course, in the early days of the National League, the task performed by the pitcher bore little resemblance to what we call “pitching” today. At various times in the first two decades of professional baseball, the distance from the pitcher to home plate was less than 50 feet; a walk required nine balls; bunts that landed in fair territory before skidding to the backstop were considered fair balls; hitters could call for a “high” or “low” pitch; pitchers could throw the ball from a running start; and curveballs and overhand pitches were illegal.

The game changed quickly, and it soon became impossible for a team to rely on a single pitcher for its entire season. And once that point was reached, the question of how best to maximize each pitcher’s usage was born.

Concomitant with the realization that man was not created to pitch every day was the idea that, on some level, man was not created to pitch at all. Pitchers got hurt, and the more they pitched, the more quickly they got hurt. Tommy Bond, who started pitching professionally at age 18, was one of the early stars of the National League, winning 154 games in the league’s first four years. He also threw over 2,000 innings in that span, and won his last major league game in 1880, at age 24. (Note for the truly anal: I don’t consider the Union Association a major league.)

So how do you get the most value out of a pitcher without 1) diminishing his effectiveness and/or 2) getting him hurt? More than a century later, baseball has only begun to answer that question.

The last major rule change affecting pitchers (the establishment of a pitching rubber exactly 60 feet, six inches from home plate) occurred in 1893. That year, Amos Rusie led the National League with 482 innings. From 1893 through 1902, the major-league leader in innings pitched averaged 408.5 innings per season. Here’s how that number has changed over the years:


  Decade           Average League-Leading IP

1893 - 1899             421.5
1900 - 1909             401.1
1910 - 1919             370.1
1920 - 1929             328.9
1930 - 1939             312.9
1940 - 1949             320.7
1950 - 1959             311.8
1960 - 1969             314.4
1970 - 1979             342.3
1980 - 1989             281.7
1990 - 1999             262.8
2000 - 2003             258.4

(Note: Seasons of under 130 games, specifically 1918, 1981, and 1994, were not counted.)

As we can see, for the first two decades of the modern pitching era, it was not unusual for a pitcher to throw 400 innings in a season. The emergence of the live-ball era resulted in a massive drop in league-leading innings totals, but from 1920 through 1970 these numbers were essentially stable, with slightly higher totals in low-offense decades like the 1940s and 1960s. Interestingly, the 1970s saw a significant uptick in inning totals for the top pitchers, even though offensive levels were higher than in the 1960s. And then, after a half-century of relative stability, inning totals plunged once again over the next two decades, and are still declining at a slow rate.
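For anyone who wants to check or extend these figures, here is a minimal sketch of how the decade averages could be recomputed. The data layout is an assumption (a Lahman-style list of seasonal pitching lines), and the field names are hypothetical; the exact method behind the chart isn’t spelled out here.

# A minimal sketch, assuming `rows` is an iterable of
# (year, pitcher_id, innings_pitched) tuples from a seasonal pitching
# table. Innings are taken as plain numbers (convert .1/.2 "thirds"
# notation to fractions first if your source stores them that way).

SHORT_SEASONS = {1918, 1981, 1994}   # seasons of under 130 games, excluded above

def avg_league_leading_ip(rows, first_year, last_year):
    """Average the single highest innings-pitched total of each
    qualifying season from first_year through last_year."""
    leaders = {}                     # year -> highest IP seen so far
    for year, _pitcher_id, ip in rows:
        if year in SHORT_SEASONS or not (first_year <= year <= last_year):
            continue
        leaders[year] = max(ip, leaders.get(year, 0.0))
    if not leaders:
        return None
    return sum(leaders.values()) / len(leaders)

# Usage: avg_league_leading_ip(rows, 1970, 1979) should come out near
# 342.3 if the underlying innings totals match the ones used above.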

Another chart may help explain the underlying changes over the last century-plus:


Decade       Relievers   CG%  Swingmen  20% GS  24% GS

1893 - 1899     0.00    83.8%    2.3%   2.30    1.55
1900 - 1909     0.00    79.0%    5.1%   1.94    0.72
1910 - 1919     0.00    56.8%   18.8%   1.39    0.37
1920 - 1929     0.02    49.6%   11.8%   1.38    0.23
1930 - 1939     0.03    44.6%   13.1%   1.14    0.10
1940 - 1949     0.14    42.6%    4.2%   0.88    0.06
1950 - 1959     0.41    33.5%    5.4%   1.26    0.13
1960 - 1969     1.18    25.2%    2.1%   1.57    0.21
1970 - 1979     1.27    25.3%    0.6%   1.70    0.32
1980 - 1989     1.81    15.6%    0.1%   1.53    0.02
1990 - 1999     2.71     7.4%      0%   1.37    0.00
2000 - 2003     3.59     4.4%      0%   1.09    0.00

There are a lot of columns in this chart that need explaining. “Relievers” refers to the number of pitchers per team who made at least 50 relief appearances. As you can see, the notion of a specialized relief pitcher didn’t exist at all until the 1920s, when Firpo Marberry became the game’s first true reliever. The development of the relief position proceeded slowly until around 1960, when it exploded, and we’ve seen ever-escalating usage of relievers ever since.

As relief pitchers have come into vogue, the complete-game starter has naturally become obsolete. “CG%” refers to the percentage of starts that were completed by the starting pitcher in that decade. As you can see, the complete game has been 95% eradicated over the last hundred years, declining in every decade but one; even that lone exception, the 1970s, saw an increase by only the slimmest of margins. Remember how the previous chart showed that pitchers threw more innings in the 1970s than at any time in the live-ball era? It wasn’t because they were staying in games longer.

While the first two columns both show a steady trend in one direction, the third column follows a much different pattern. “Swingmen” refers to the percentage of pitchers making at least 25 starts who also relieved in at least 10 games in the same season. The notion of using a starting pitcher (often your best starting pitcher) in relief as the need arose blossomed in the 1910s and stayed popular for more than 30 years. Walter Johnson and Lefty Grove, for instance, both routinely made 10 or more relief appearances a year at their peaks. The tradeoff, though, was that being available to relieve so often cut into a pitcher’s starting assignments. Grove, perhaps the greatest left-handed pitcher of all time, started 35 games in a season just once in his career.

In the early days of the 20th century, it was widely accepted that all good pitchers should be starting pitchers; no one considered the possibility that some pitchers who didn’t make good starters could thrive in relief. So while teams readily acknowledged the need to use relievers on days when the starter didn’t have his best stuff or simply ran out of gas, they reflexively turned to another starter when they needed a new pitcher with the game on the line.

Once teams realized that a pitcher could be a dedicated reliever and still be an asset to his team, the swingman faded into oblivion as the relief revolution took hold. But the fact remains that, for over 30 years, teams routinely maximized the number of innings thrown by their starting pitchers by using them in relief on days they didn’t start. And there’s no evidence that using starters in such a frequent and unpredictable manner led to more injuries.

The last two columns refer to the number of pitchers per team that started at least 20%, or 24%, of their team’s games, roughly corresponding to the number of starts expected in a five-man or four-man rotation. As we mentioned before, the number of pitchers who made at least 20% of their team’s starts dropped precipitously between 1930 and 1950, and (no doubt helped by the high turnover of players during the war years) there were fewer high-volume starting pitchers in the 1940s than at any other point in major league history.
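To make the column definitions concrete, here is a rough sketch of how one season’s worth of these numbers might be computed; the decade figures in the chart would then be averages of such single-season values. The record keys ('gs', 'relief_g', 'cg') are hypothetical, and the swingman denominator reflects one reading of the definition above (the pool of pitchers with at least 25 starts).

# A rough sketch, assuming each element of pitcher_seasons is a dict with
# hypothetical keys: 'gs' (games started), 'relief_g' (relief appearances),
# and 'cg' (complete games). team_games is the schedule length and
# num_teams the number of major-league teams that season.

def usage_columns(pitcher_seasons, team_games, num_teams):
    # "Relievers": pitchers with at least 50 relief appearances, per team.
    relievers = sum(1 for p in pitcher_seasons if p['relief_g'] >= 50)

    # "CG%": complete games as a share of all starts.
    total_gs = sum(p['gs'] for p in pitcher_seasons)
    total_cg = sum(p['cg'] for p in pitcher_seasons)

    # "Swingmen": of the pitchers with at least 25 starts, the share who
    # also relieved at least 10 times.
    qualified = [p for p in pitcher_seasons if p['gs'] >= 25]
    swingmen = sum(1 for p in qualified if p['relief_g'] >= 10)

    # "20% GS" / "24% GS": pitchers starting at least that share of the
    # schedule, expressed per team.
    gs_20 = sum(1 for p in pitcher_seasons if p['gs'] >= 0.20 * team_games)
    gs_24 = sum(1 for p in pitcher_seasons if p['gs'] >= 0.24 * team_games)

    return {
        'relievers_per_team': relievers / num_teams,
        'cg_pct': total_cg / total_gs if total_gs else 0.0,
        'swingmen_pct': swingmen / len(qualified) if qualified else 0.0,
        'gs_20_per_team': gs_20 / num_teams,
        'gs_24_per_team': gs_24 / num_teams,
    }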

That drop in high-volume starters reversed in a hurry. The emergence of full-time relief pitchers in the ’50s and ’60s gave teams the luxury of using their starting pitchers without having to account for the possibility of an emergency relief outing. Freed to use their starters solely as starting pitchers, teams could plan their starting assignments days and weeks in advance, and the pitching rotation was born.

This is an important point to understand: the controversy over the use of a five-man vs. a four-man rotation makes it easy to forget that, prior to around 1960, there was no such thing as a rotation. In the 1950s, Casey Stengel routinely saved his best pitcher, Whitey Ford, to pitch against the best teams in the American League. While Ford’s starts may have been more valuable in such an arrangement, he never started more than 33 games under Stengel. Stengel was fired after the 1960 season and replaced with Ralph Houk, who immediately switched the Yankees to a fixed rotation. The result: in 1961, Ford started 39 games, threw 283 innings, and won 25 games, all career highs.

As teams embraced the four-man rotation, the number of pitchers who made 20% or 24% of their team’s starts went up dramatically, and by the early 1970s had reached its highest point since the height of the dead-ball era. The redefinition of the strike zone, which led to the pitching-dominated era of 1963 to 1968, no doubt made it easier for pitchers to start 40 games in a season without negative consequences. But even after the strike zone returned to its normal size in 1969, starting pitchers continued to work in a four-man rotation and make 40 or more starts a season.

This is why starters in the 1970s threw more innings than at any time in the previous 50 years: they were starting more games than at any time since 1920. Some teams even dabbled with a three-man rotation; in 1972, the White Sox’ top three starters (Wilbur Wood, Stan Bahnsen, and Tom Bradley) combined to start 130 of the team’s 154 games. Wood’s 376.2 innings were the highest total of the live-ball era, and his 49 starts were the most since 1904.

But as the chart shows, 30 years of progress was turned back, and then some, in the span of a decade. The five-man rotation sprang into vogue around 1974, and by 1980 almost every team in baseball had switched to it. The 40-game starter was rendered extinct; whereas there were 12 of them in 1973 alone, since 1982 there has been only one such season. No starter has made even 37 starts since Greg Maddux in 1991.

The switch to the five-man rotation represents the last significant change in the usage patterns of starting pitchers. Over the past 15 years, though, there has been a slow shift from a strict five-man rotation to a five-day rotation: whereas in the 1980s teams would take advantage of a day off in the schedule by skipping the fifth starter, now they use the day off to give their starters additional rest. As a result, the number of pitchers making 20% of their team’s starts (corresponding to 33 starts in a full season) has dropped by nearly a third over the last two decades.

Tomorrow: “How We Measure Pitcher Usage,” by Rany Jazayerli.
