August 8, 2005
An Objective Hall of Fame
Let me start out by saying that I think the idea of a genuine, objectively selected Hall of Fame is the most idiotic idea since the inside-the-egg-shell scrambler.
Then why are you doing a bunch of articles on it, Clay?
Just because I don't think the real Hall of Fame should be chosen in this way doesn't mean that there's no value in having an objectively defined Hall for us to know about. Ideally, you want the Hall to be a mix of objective greatness and subjective fame, which is a lot easier to do if you establish the objective Hall beforehand. Only then are you in a position to mark this guy down, and this guy up, based on everything else--character, personality, off-field contributions, you name it. When we are talking about the highest honor the game has, everything should count. We'll let real people make the final determination, but they should at least be able to start with facts, and then adjust--at least that's the idea.
Now, an objective Hall is going to require a set of rules. Simply taking the top N players in history right now would definitely not be the way to go; like the real Hall, the Objective Hall should evolve, and be based on the players who were available to the real Hall (within reason). I've tried to set these rules up, for the most part, to follow the rules the Hall actually uses, either de jure or de facto. So let's start with the easiest ones.
The objective Hall will start in 1936, the same year the real Hall started.
There will be a maximum of five inductees in any given year, in honor of the five men originally chosen in 1936.
There will be a five-year waiting period from the last game played in the majors for all players. Sorry, Babe, you're not going to get into the opening class. The only exception I can recall making to this rule was for Minnie Minoso; I let his eligibility start five years after 1964, when his real career, minus a couple of publicity-stunt returns, ended.
Until the Hall reaches capacity--more on that in a moment--and for five years after that, the Hall will be free to choose from all of baseball history. After that time, players must have retired less than 20 years before, resulting in the "15 years of eligibility, five-year waiting period" condition we normally have today.
Ah, the size of the Hall. There are a great many people who think the Hall of Fame is too big, and that it should be reserved for the really, really great players, like Aaron, Mays, and Ruth. No matter where you draw the line, you are always going to argue about players who are right on the edge; instead of arguing between "good" and "great," you move towards "great," "really great," and "really, really great." I'm going to take the position that the Hall has defined itself to have 195 players through 2005--not counting the executives, umpires, managers, pioneers, and (regrettably) Negro League selections. To get the appropriate number of players for any other year, I'm going to count teams. Through 2005, looking at the National League, American League, and American Association, there have been 2419 team-seasons, giving us about 12.4 team-seasons for each Hall of Famer. At each year, then, I'll divide the number of teams that have played by 12.4 to get my Hall quota.
Through 1936 there had been 889 team-seasons, so the Hall was allowed to have 71 members (889/12.4 = 71.7, and I'm always rounding down). Of course, they were just getting started, and I only allow five entries per year, so they are going to be below quota for some time. The phantom Hall is going to end up inducting five players a year, every year, until it catches up, which will take until the elections of 1954. The Hall will have 90 players through 1953, and a quota for 1954 of 94; so they'll get four players through in 1954, and after that players will be inducted in ones and twos. As noted earlier, that means the 5-and-15 rule will kick in starting in 1959.
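The quota arithmetic above is simple enough to sketch in a couple of lines; the constant 12.4 comes from dividing the 2419 team-seasons played through 2005 by the real Hall's 195 player selections, and the quota is always rounded down.

```python
import math

# Hall quota: one inductee allowed per ~12.4 team-seasons played to date,
# always rounding down (2419 team-seasons / 195 Hall of Famers through 2005).
TEAM_SEASONS_PER_INDUCTEE = 12.4

def hall_quota(team_seasons):
    return math.floor(team_seasons / TEAM_SEASONS_PER_INDUCTEE)

print(hall_quota(889))   # 1936: -> 71
print(hall_quota(2419))  # 2005: -> 195
```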
That leaves the harder question--how do you objectively rank the players for induction, recognizing the differences between peak and career production?
For a couple of years, I've been playing around with a very simple way to combine these two elements--so simple, in fact, that I've been hesitant to trot it out in BP-related work, as it might make everyone think I'd gone soft. Yet after so much time using it for my own ends, I feel comfortable with it, and reasonably confident of the results it produces. It tends to favor the peak side of the argument a bit, but that's alright with me; Bill James' comparison of Drysdale and Pappas in The Politics of Glory (or What Ever Happened to the Hall of Fame?, depending on which version you bought) made a lasting impression.
Simply put, I am going to treat each season of a player's career like an entry on an MVP ballot. His best season in the chosen stat--and I'll be using WARP3--counts 14 times. His second-best season counts nine times, third best eight times, and so on, down to tenth best, which counts just once. Everything after the tenth also counts once. Let us consider, for example, the real Hall of Fame's newest member, Wade Boggs. Sort his career by WARP3, and we get:
Year  WARP3  Mult  Score  Cumulative
1987   12.7    14  177.8       177.8
1988   12.4     9  111.6       289.4
1985   11.9     8   95.2       384.6
1986   11.6     7   81.2       465.8
1989   11.6     6   69.6       535.4
1983   10.8     5   54.0       589.4
1991    9.9     4   39.6       629.0
1984    9.8     3   29.4       658.4
1994    7.4     2   14.8       673.2
1993    7.2     1    7.2       680.4
1995    6.9     1    6.9       687.3
1990    6.0     1    6.0       693.3
1982    5.3     1    5.3       698.6
1996    4.6     1    4.6       703.2
1992    3.9     1    3.9       707.1
1997    3.6     1    3.6       710.7
1998    3.4     1    3.4       714.1
1999    2.0     1    2.0       716.1

That's good for the 28th best mark in history. He makes it into the objective Hall of Fame easily.
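The ballot-style weighting is easy to express in code. Here's a minimal sketch, using Boggs's seasonal WARP3 values from the table:

```python
# Career "MVP score": weight a player's seasons like an MVP ballot.
# Best season counts 14 times, then 9, 8, ..., down to 2 for the
# ninth-best; the tenth-best and everything after count once.
WEIGHTS = [14, 9, 8, 7, 6, 5, 4, 3, 2]

def career_mvp_score(seasonal_warp3):
    seasons = sorted(seasonal_warp3, reverse=True)
    total = 0.0
    for i, warp in enumerate(seasons):
        mult = WEIGHTS[i] if i < len(WEIGHTS) else 1
        total += mult * warp
    return total

# Wade Boggs's WARP3 by season, sorted best to worst.
boggs = [12.7, 12.4, 11.9, 11.6, 11.6, 10.8, 9.9, 9.8, 7.4,
         7.2, 6.9, 6.0, 5.3, 4.6, 3.9, 3.6, 3.4, 2.0]
print(round(career_mvp_score(boggs), 1))  # -> 716.1
```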
Why WARP3? You could use any variable you wanted, and for specific applications I will apply it to just one of batting, pitching, or fielding stats. Since I am trying to gauge a player's total value, it makes sense to use the statistics that combine all of a player's hitting, pitching, and fielding performance. I want the adjustments for difficulty that separate WARP2 from WARP1--if one league is weaker than another, like the AA compared to the NL in the 19th century, then I want to make sure I don't go overboard taking players who are standing out above a weaker league. Likewise, I don't want players penalized by the schedules, which prevented 19th century non-pitchers from racking up gaudy career totals.
But that isn't quite the whole system. There are a couple of situations which require adjustments to the player's stats.
The first adjustment is for players who missed time in their careers. There are a lot of reasons for missing time, but I'm only going to go back and give credit for missing time for three reasons: military service, forced segregation, and death. Left to my own, I probably would have held the line at service and segregation; however, the fact that the Hall of Fame did select Ross Youngs and Addie Joss suggests that they considered an untimely death to be something worth considering. Like I said, I'm going to try to follow their established policy.
The adjustment mechanism is the same for all three cases, and it is probably going to be easiest to describe by just working through an example. Consider, oh, Enos Slaughter. Slaughter was, in my opinion, highly underrated as a player, and his 521.9 Career MVP score ranks as the 175th best, making him a solid Hall of Fame contender even before I make any adjustments. However, he did spend three years--what would have been his age 27, 28, and 29 seasons--serving in the Army Air Corps. If we run the career MVP score again, but force everybody in history to miss their 27-28-29 seasons, we have created a somewhat equal handicap across the board. Slaughter still scores a 521.9, but now that score is good for 94th place. If we look back at the original list, we see that 94th place belongs to Craig Biggio, with a score of 579.9. Slaughter's adjusted score becomes 579.9.
(Actually, it will be 580.2. I base the adjustment on how much the 92-96 ranked players on the original list gained on the 173-177 players, two spots on either side of the rankings I come up with. It allows for adjustments of players who don't actually change ranks, which happens at the high end, like Musial, Mays, and Ted Williams.)
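The core of the Slaughter adjustment is a rank substitution: find the player's rank once everyone is handicapped, then hand him the score held by that rank on the original list. A minimal sketch with hypothetical players and scores (the real system also smooths the result by averaging the gains of the ranks two spots on either side, which this sketch omits):

```python
# Missed-time adjustment, simplified: rerun the scoring with the missed
# seasons stripped from EVERY career, find the player's rank on that
# handicapped list, and award him the original list's score at that rank.
def adjusted_score(player, original, handicapped):
    """original/handicapped: dicts of name -> career MVP score."""
    orig_ranked = sorted(original, key=original.get, reverse=True)
    hand_ranked = sorted(handicapped, key=handicapped.get, reverse=True)
    rank = hand_ranked.index(player)      # rank once everyone is handicapped
    return original[orig_ranked[rank]]    # pre-handicap score at that rank

# Toy four-player pool: our man climbs from 4th to 3rd once everyone
# loses the same seasons, so he inherits the original 3rd-place score.
original    = {"A": 700.0, "B": 650.0, "C": 600.0, "Slaughter": 500.0}
handicapped = {"A": 620.0, "B": 560.0, "Slaughter": 500.0, "C": 480.0}
print(adjusted_score("Slaughter", original, handicapped))  # -> 600.0
```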
These adjustments, rerunning the system for every player in history with blocks of time taken out, were made for every player who missed time to military service and had an initial career MVP score of at least 250. For players who lost roughly five years to segregation--Jackie Robinson, Roy Campanella, Larry Doby--the system makes a reasonable estimate of their lost value. Unfortunately, it breaks down for players who lost a lot more than that, like Monte Irvin or Satchel Paige.
The second adjustment I need to make is one that hurts, not helps, and is a product of my decisions about how to treat leagues with a designated hitter. The trouble isn't with the DHs, but rather with the pitchers from those leagues; by not having to bat, they are being given an advantage over equal pitchers in the National League...an advantage that doesn't come from the game itself, but from the way I set up the WARP3 system, where replacement level hitting is defined as a .230 equivalent average. Almost no pitcher reaches that level, and so pitchers end up carrying a -5, -10, or even -20 batting runs above replacement--a score which is going to take points off their WARP score. AL pitchers, not having to hit, don't suffer from that drag, and a 1 WARP/year advantage can certainly skew the Hall voting. Accordingly, I've worked out an estimate for how many batting runs a pitcher in a DH league didn't get, based on his established hitting level whenever possible.
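The shape of that leveling adjustment can be sketched as follows. The penalty rate and runs-per-win figure here are illustrative stand-ins, not the actual WARP3 constants; the real system estimates the penalty from each pitcher's established hitting level where it can.

```python
# Hypothetical sketch of the DH-league pitcher adjustment: charge the
# pitcher the batting runs below replacement he never had to lose.
# brar_per_200ip and runs_per_win are illustrative values only.
def level_dh_pitcher(warp3, innings, brar_per_200ip=-10.0, runs_per_win=10.0):
    est_brar = brar_per_200ip * (innings / 200.0)
    return warp3 + est_brar / runs_per_win

# A 200-inning AL season worth 6.0 WARP3 gives back a full win: 6.0 -> 5.0,
# matching the rough 1 WARP/year advantage described above.
print(round(level_dh_pitcher(6.0, 200), 1))  # -> 5.0
```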
The way the system is going to work is by taking the highest rated player (by the Career MVP WARP3) from the available pool, up to the number of players it is allowed to take at a given time. In general, that score is going to be somewhere in the neighborhood of 500. Anybody above 600 I'm going to consider a lock; anybody below 400 I'm going to consider as a clear mistake. In between, there are reasonable arguments to be made.
The information presented below is the player's name, position, Career MVP WARP3, (year elected by real HOF).
Objective Hall Class of '36:
How can I rank Collins ahead of Cobb? Cobb has a 20-run margin on Collins by career WARP3, but Collins has five seasons with a 12+ WARP compared to Cobb's two. While not the hitter that Cobb was, Collins was an outstanding defensive player...and as we'll see, repeatedly, the WARP system gives a lot of weight to defense, and that is going to be the reason behind a lot of surprising ratings.
And Wagner? Wagner's problem wasn't defense, but those difficulty ratings I talked about wanting to have. With all due apology to Dick Cramer, who performed the landmark study in assessing league difficulty over time, I cannot see how he could conclude that the National League was stronger than the American in the decade following 1901. Once the AL proved itself by surviving its first year, the players practically swarmed over to the league that wasn't tainted by the stench of the player/owner battles of the 1890s. If you trace the players from the 1900 National League, you'll find that by 1902 more of those players were in the AL than were in the NL--and they had more of the ABs and equivalent runs from that league. Even after peace was declared, the massive talent shift meant that the American League would be the stronger league for years to come. And that is why Wagner fell behind the others, and part of why Mathewson didn't get into the first class at all.
To be continued...