January 3, 2014
Rethinking Hall of Fame Standards in Expansion Eras
Most of our writers didn't enter the world sporting an @baseballprospectus.com address; with a few exceptions, they started out somewhere else. In an effort to up your reading pleasure while tipping our caps to some of the most illuminating work being done elsewhere on the internet, we'll be yielding the stage once a week to the best and brightest baseball writers, researchers, and thinkers from outside of the BP umbrella. If you'd like to nominate a guest contributor (including yourself), please drop us a line.
Kevin Whitaker never met a school assignment he couldn’t turn into something baseball-related. His research on the MLB Draft was published at the 2013 Sloan Sports Analytics Conference, and he writes about college basketball at Beanpot Hoops. Follow him on Twitter at @whitakk.
It’s no secret that this year’s Hall of Fame ballot features a historically large pool of qualified candidates. Among the nominees are the all-time home runs leader, a 3,000-hit second baseman, and a durable slugger with an MVP award—and those are just the players whose names begin with “B.”
In part, the current backlog reflects the slow pace at which voters have inducted Hall of Famers in recent elections. But a bigger factor is that more qualified players have become eligible. In the last five elections, 18 players debuted on the ballot with JAWS scores of at least 90 percent of their position’s Hall of Fame standard. In the entire decade before that, only 15 such players entered the ballot.
What do these new candidates have in common? All 18 were active for the entirety of 1993-2002, and all but Roger Clemens and Barry Larkin had the majority of their peak seasons in that decade. By JAWS and other advanced metrics, the best players of the late 1990s simply have better Hall of Fame cases than did the best players of the previous era.
Baseball changed in several ways during the 1990s, providing a few possible causes of the current wave of Hall of Fame candidates. The biggest on-field change was a dramatic increase in run environment—but JAWS is based on WAR or WARP, which are context-neutral stats. The ’90s were also, famously, the “steroid era.” But we have very little understanding of how steroids affect baseball performance, and many extreme performances came from Greg Maddux, Pedro Martinez, Ken Griffey Jr. and others who are widely believed to be “clean,” even in an era when players are routinely tied to PEDs on scant evidence.
Another element of 1990s baseball—one that I believe had a more concrete effect on the era’s star performers—was expansion. Between 1992 and 1998, MLB grew from 26 to 30 teams, adding 100 new roster spots to the league. Players who would have been minor leaguers at the beginning of the decade became major leaguers after expansion, and former bench players were forced into full-time roles. As a result, players who were already everyday starters posted better raw statistics, because hitters got to face weaker overall pitching (and vice versa). Likewise, their context-adjusted numbers also improved, because the best hitters were being compared to a weaker aggregate group of position players (and ditto for pitchers).
Over a long period of time, this effect is washed out by MLB’s ever-growing talent pool, fed by both domestic population growth and the internationalization of the game. But in the years immediately following expansion, history has shown that stars benefit from weaker competition:
[Chart: players with at least six WARP per season since 1950, by proximity to expansion; strike-shortened seasons of 1981, 1994, and 1995 removed]
Since 1950, there have been seven seasons in which 17 or more players finished with at least six WARP. All seven came within five years after an expansion, even though fewer than half of all seasons since 1950 fit that condition. Excluding strike-shortened seasons of 1981, 1994, and 1995, an average of 15.5 players have posted at least six WARP in the five years after expansion, compared to 11.0 players in all other seasons.
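The grouping behind those numbers can be sketched in a few lines of Python. This is an illustration, not the original study: the per-season WARP lists below are made up, and the definition of "within five years after an expansion" (a five-season window starting with the expansion year itself, which makes 1993-2002 a continuous ten-season stretch) is my reading of the article.

```python
# Expansion years since 1950 (1961, 1962, 1969, 1977, 1993, 1998) and the
# strike-shortened seasons excluded above come from the article's framing.
EXPANSION_YEARS = (1961, 1962, 1969, 1977, 1993, 1998)
STRIKE_SEASONS = {1981, 1994, 1995}

def is_post_expansion(year, window=5):
    """True if the season falls in the five-year window starting with an
    expansion year (an assumption about how the article draws the window)."""
    return any(0 <= year - e < window for e in EXPANSION_YEARS)

def star_counts(season_warps, threshold=6.0):
    """Given {year: [player WARP values]}, count players at or above the
    threshold each season, split into post-expansion and other seasons."""
    post, other = [], []
    for year, warps in season_warps.items():
        if year in STRIKE_SEASONS:
            continue  # mirror the article's exclusion of 1981, 1994, 1995
        n_stars = sum(1 for w in warps if w >= threshold)
        (post if is_post_expansion(year) else other).append(n_stars)
    return post, other
```

Run over the real season-by-season WARP leaderboards, comparing the means of the two lists would reproduce the 15.5-versus-11.0 contrast described above.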
When comparing performances from different eras, we don’t just compare each player’s raw stats, because we understand that those numbers come from different environments. Good analysis instead uses context-neutral statistics, comparing performance to that of an average or replacement-level player. But when “average” or “replacement level” changes—as in the case of expansion eras—should we not also consider that factor?
To estimate the impact of expansion on stars’ statistics, we can compare the performance of the top players before and after expansion. In post-expansion seasons, the top 10 players in baseball have had an average WARP of 7.8; in all other years, their average is 7.4, a statistically significant difference. Using different cutoffs (e.g., top five or top 20) yields similar estimates of about 0.4 wins per season.
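The comparison itself is simple enough to sketch. The leaderboards here are invented for illustration; the real 7.8-versus-7.4 figures come from per-season WARP data.

```python
def top_n_mean(season_warps, n=10):
    """Mean WARP of the top n players in one season's leaderboard."""
    best = sorted(season_warps, reverse=True)[:n]
    return sum(best) / len(best)

def group_means(leaderboards, post_expansion_years, n=10):
    """Average the per-season top-n means separately for post-expansion
    seasons and all others, mirroring the 7.8-vs-7.4 comparison."""
    post = [top_n_mean(w, n) for y, w in leaderboards.items()
            if y in post_expansion_years]
    other = [top_n_mean(w, n) for y, w in leaderboards.items()
             if y not in post_expansion_years]
    return sum(post) / len(post), sum(other) / len(other)
```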
For the stars of the late 1990s, who played 10 seasons (including most of their peaks) in post-expansion years, that translates to about 3-4 points of JAWS earned just by playing against weaker competition. That’s certainly not enough to keep Greg Maddux out of Cooperstown. But if we adjust our Hall of Fame standards for the impact of expansion, the borderline candidates of this era—such as Larry Walker, Edgar Martinez, or Rafael Palmeiro—won’t seem quite as deserving as they do today.
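The 3-4 point figure falls out of the JAWS formula, which averages a player’s career WAR(P) with his best seven seasons. A back-of-the-envelope version, assuming the full 0.4-win bump applies to all 10 post-expansion seasons and that all seven peak seasons fall in that stretch:

```python
def jaws_inflation(bump_per_season=0.4, career_seasons=10, peak_seasons=7):
    """Estimate the JAWS gain from playing `career_seasons` (including
    `peak_seasons` of the best seven) against post-expansion competition."""
    career_gain = bump_per_season * career_seasons  # extra career wins: 0.4 * 10 = 4.0
    peak_gain = bump_per_season * peak_seasons      # extra peak wins: 0.4 * 7 = 2.8
    return (career_gain + peak_gain) / 2            # JAWS = (career + peak) / 2
```

With the article’s numbers, that works out to roughly 3.4 points of JAWS, and a bit less if some peak seasons predate expansion, hence the 3-4 point range.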