October 28, 2002
The Player Cards are back!
First, let me emphasize that these player cards are an ongoing project. I fully expect to be able to make future revisions to the cards themselves and the glossary without having to take everything down again. Some of these changes--like a full set of translated statistics for every player, to go with their actual statistics--are already in the works. Others--like extensions to the glossary--will follow the questions from readers.
I'll leave most of the statistical descriptions to the glossary, and spend my time here talking about some of the general principles behind the numbers. Please, please, please, read the glossary. I know there are a lot of blanks to fill in yet, but the basics are covered there.
The key statistic that everything builds toward is called WARP--Wins Above Replacement Player. The Replacement Player I'm using for my comparison is at replacement level across the board--batting, pitching, and fielding. A replacement level hitter is one who would hit for a .230 equivalent average; that implies a winning percentage, other things being equal, of about .350 (i.e., a team with average pitching and fielding that hit for a .230 EQA would go 57-105). A replacement level pitcher is also defined to have a winning percentage of .350; given a standard ERA of 4.50, that would mean a replacement level ERA of 6.11. The combination of replacement level hitting and pitching gives you a winning percentage of .227, or 37-125... not too different from the 1962 Mets. However, this team is also replacement level in the field, a team full of fielders like Dean Palmer and Greg Luzinski and Steve Sax and Jose Offerman. The idea is that replacement level fielding was about equal to the worst regular in the league, maybe a little less. That's going to cost the team as much as the pitching does, so our fully replacement level team is only going to be worth about a .137 winning percentage (22-140). The closest thing in history to a replacement level team was the 1899 Cleveland Spiders, who went 20-134, a .130 winning percentage.
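The exact formulas behind these numbers aren't spelled out here, but a Pythagorean record plus a log5 (odds-ratio) combination--both standard tools--reproduce them closely, so here's a sketch on those assumptions:

```python
# Illustrative sketch of the replacement-level arithmetic above. The article
# doesn't state its exact formulas; the Pythagorean record and the log5
# (odds-ratio) combination are assumed here because they land close to
# the quoted figures.

def log5_combine(p: float, q: float) -> float:
    """Odds-ratio combination of two independent winning percentages."""
    return (p * q) / (p * q + (1 - p) * (1 - q))

# A .350 pitcher against a standard 4.50 ERA, via the Pythagorean relation:
rep_era = 4.50 * ((1 - 0.350) / 0.350) ** 0.5   # ~6.13, vs. 6.11 in the text

# Replacement hitting (.350) combined with replacement pitching (.350):
both = log5_combine(0.350, 0.350)                # ~.225, vs. .227 in the text

print(round(rep_era, 2), round(both, 3))
```

The small gaps (6.13 vs. 6.11, .225 vs. .227) suggest the real system uses slightly different exponents or run environments, but the shape of the calculation is the same.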
When a player is rated for WARP, each component is rated independently. The hitting component of WARP is always found by comparing the player to one with a .230 EQA, regardless of what position he plays (yes, even pitchers). A batter is a batter--there is no position at the plate. Pitchers are rated by comparing them to an adjusted ERA of 6.11 (well, not exactly; the relative pitching/fielding shares of defense will modify that. The pitcher/fielding combination has a replacement ERA value of 7.36). The closest thing to a "position adjustment" in the WARP system comes from the defensive data, not the offensive skills of the guys who play there. An average player at a skill position is considered to be farther above replacement than an average player at a side position. How much depends on the number of plays the position is called upon to make. To look at the Spiders again, they had a team EqA of .229; 5 runs below replacement level. Their pitching rated 27 runs above replacement; their fielding was just 13 above (keep in mind that these batting, pitching, and fielding numbers would be 189, 158, and 210 runs below average). The ratings for the team come to just 3 WARP, which does sort of imply that I would have only expected them to win 24 games.
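The component-to-WARP step can be sketched very simply. The runs-per-win divisor isn't given in the article; the common rule of thumb of 10 runs per win is assumed here, and it puts the Spiders in the right neighborhood:

```python
# Hedged sketch: summing runs-above-replacement components into WARP.
RUNS_PER_WIN = 10.0  # assumption, not from the article; a common rule of thumb

def warp(batting_rar: float, pitching_rar: float, fielding_rar: float) -> float:
    """Wins Above Replacement Player from runs-above-replacement components."""
    return (batting_rar + pitching_rar + fielding_rar) / RUNS_PER_WIN

# The 1899 Spiders' components from the text: -5 batting, +27 pitching, +13 fielding.
print(warp(-5, 27, 13))  # 3.5 -- roughly the "just 3 WARP" quoted above
```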
So two things to be aware of--the replacement level is set so low as to be almost zero, and the fielding input is pretty high. That brings me to another distinction you'll see on the cards, the difference between season-adjusted statistics and alltime-adjusted statistics. Pitching and fielding statistics do not project across time as readily as offensive statistics, because the balance between the two has changed, dramatically, over time. Fielding has never been less important to the outcome of the game than it is today, what with the very high (historically speaking) numbers of strikeouts, walks, and home runs. Go back to the 1880s, though, and not only is virtually every ball in play, but the difference between the best and worst fielders was much wider than it is now. I have players rated at being 50 runs above average in the field in a 110-game season--a rate that would be absolutely impossible to achieve today. If I simply converted those numbers to today's standards (by standard deviations, say), that might convert to 20 runs above average--and I just converted 30 runs the man truly provided in his time out of existence. That isn't entirely right. So the season side doesn't convert the numbers, but lets them stand as they were within the season. The alltime side does convert them.
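One way to read the standard-deviation conversion described above: scale a player's fielding runs by the ratio of the modern spread to his era's spread. The spreads below are illustrative assumptions, chosen only to reproduce the 50-runs-to-20-runs example in the text:

```python
# Hedged sketch of an era conversion by standard deviations. The actual
# spreads used in the system are not given; these values are hypothetical.

def alltime_fielding_runs(season_runs: float, sd_then: float, sd_now: float) -> float:
    """Convert era-specific fielding runs to a modern (all-time) scale."""
    return season_runs * (sd_now / sd_then)

# An 1880s fielder at +50 runs, if the spread then was 2.5x today's:
print(alltime_fielding_runs(50.0, sd_then=25.0, sd_now=10.0))  # 20.0
```

This is exactly the tension the paragraph describes: the season side leaves the +50 alone, the all-time side applies a conversion like this one.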
The other thing going on in the all-time stats is a correction for league difficulty. I'm going to ask you all to wait for the details on that--I'm planning on using that for a research article in the 2003 Prospectus. Unlike the book DTs, where everything gets translated to a single difficulty level, the historical numbers are adjusted to a sloping difficulty standard. The most extreme cases are visible by looking at players from exceptionally weak leagues, like the 1884 Union Association (Fred Dunlap; note how both the hitting and fielding numbers, after adjustment, no longer stand out from the remainder of his career) or the 1882 American Association (Pete Browning). This is an adjustment that has not been made by other systems, to my knowledge.
The pitching statistics represent the first full-scale application of the methods I presented in the 2002 Prospectus, breaking down the team defense into pitching and fielding components. I've done the best I can to separate the team fielding from each pitcher's line. A consequence of that is that pitchers from the 19th century are pretty harshly judged--despite pitching two or three times as many innings as a modern pitcher, they were only responsible for 10-20% of the total defense, compared to about 50% today, pretty much offsetting that advantage.
A more controversial adjustment, I think, is found in the XIP statistic. I've chosen to combine actual innings pitched with decisions--including saves--in order to try to give credit for pitching in higher-leverage situations. Getting a decision certainly implies that you were involved in the game while it was still up for grabs. The result of this, of course, is that closers are typically valued 50% higher than their innings alone would lead you to believe. In light of other research, I think this is a reasonable adjustment to make.
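The exact XIP formula isn't given, but the shape of the adjustment can be sketched: effective innings equal actual innings plus a per-decision bonus. The bonus size below is a pure assumption, tuned so that a typical closer comes out about 50% above his raw innings, as described:

```python
# Hypothetical sketch of a leverage credit in the spirit of XIP; the real
# formula is not published here, and the bonus constant is an assumption.
INNINGS_PER_DECISION = 0.75  # assumed credit per win, loss, or save

def xip(ip: float, wins: int, losses: int, saves: int) -> float:
    """Leverage-adjusted innings: raw IP plus credit for decisions and saves."""
    return ip + INNINGS_PER_DECISION * (wins + losses + saves)

# A closer with 70 IP, a 4-2 record, and 40 saves:
print(xip(70.0, 4, 2, 40))  # 104.5 -- about 1.5x his actual innings
```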
Also controversial will be my decision to post outfield fielding statistics as left field, right field, and center field. In virtually all cases, I don't know the actual number of plays made in each outfield spot. I know the total outfield statistics, and I know the number of games played at each outfield position, and I wrote a program to try to separate the numbers out. I decided to post them that way, but there is a considerable speculative element involved in them.
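A naive version of that splitting program would apportion a player's total outfield plays among the three spots in proportion to his games at each, weighted by how many chances each position typically sees. The weights below (center fielders seeing more chances) are illustrative assumptions, not the actual method:

```python
# Hedged sketch of apportioning combined outfield plays to LF/CF/RF.
# The positional weights are hypothetical; the real program is not described.
POSITION_WEIGHT = {"lf": 1.0, "cf": 1.4, "rf": 1.0}  # assumed relative chance rates

def split_outfield_plays(total_plays: float, games_by_pos: dict) -> dict:
    """Divide a player's total outfield plays among positions by weighted games."""
    weights = {pos: games * POSITION_WEIGHT[pos] for pos, games in games_by_pos.items()}
    scale = total_plays / sum(weights.values())
    return {pos: w * scale for pos, w in weights.items()}

# A player with 900 total outfield plays across 60 LF, 80 CF, and 10 RF games:
shares = split_outfield_plays(900, {"lf": 60, "cf": 80, "rf": 10})
print({pos: round(p, 1) for pos, p in shares.items()})
```

Whatever the real method, the output has this character: the per-position numbers always sum back to the known total, while the individual splits carry the speculative element mentioned above.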
So, please, everybody, sit back and check out the player cards. Let me know what needs changing in the glossary--please keep in mind that we are working on the 2003 book right now, and I may not be able to get to it right away.