I've written before about why I dislike the error and its cousins, owing to their subjective nature. But how much does that subjectivity matter, especially over a long period of time? Is there a practical consequence to the subjectiveness of the error?
So here's what I did. I took a look at the rate of errors per ball fielded by infielders in all of a team's home games, including by the visitors, from 2002 – 2009, and did the same for the team's road games. Each of those rates was then regressed to the mean, and the regressed home rate was divided by the regressed road rate to produce a park factor. Then I averaged all the one-year regressed park factors over the full time span, and here's what I got:
| Team | Error park factor |
| --- | --- |
| New York (AL) | 105 |
| New York (NL) | 103 |
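The calculation described above can be sketched in a few lines of Python. This is only an illustration of the method, not my actual code: the error totals are made up, and the amount of regression (here, adding a ballast of phantom league-average chances) is an assumption on my part, since the write-up doesn't specify how much I regressed.

```python
def regress(rate, n, league_rate, ballast=2000):
    """Regress an observed rate toward the league mean by mixing in
    'ballast' phantom chances at the league-average rate."""
    return (rate * n + league_rate * ballast) / (n + ballast)

def error_park_factor(home_errors, home_chances,
                      road_errors, road_chances,
                      league_rate, ballast=2000):
    """One-year error park factor: regressed home error rate divided
    by regressed road error rate, scaled so 100 is neutral."""
    home = regress(home_errors / home_chances, home_chances,
                   league_rate, ballast)
    road = regress(road_errors / road_chances, road_chances,
                   league_rate, ballast)
    return round(100 * home / road)

# Hypothetical single-season totals for one park:
print(error_park_factor(100, 5600, 95, 5500, league_rate=0.018))  # -> 102
```

Averaging these one-year figures across 2002 – 2009 gives the multi-year factors in the table; the regression step keeps a single fluky season from dominating the average.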
The standard deviation of the group over the eight-year time span was about four percent in either direction; in other words, teams had an error park factor between 96 and 104 about 68% of the time.
Of course, this raises the question – how much of the difference between parks is the scorers, and how much is actual changes in the way balls are hit at infielders? Bear in mind that for something to count as an opportunity here, a fielder has to reach the ball. So a real park effect would have to influence how cleanly a fielder can field a ball once he gets to it.
Now, I have my own suspicions (and familiarity with my work would probably tell you what those suspicions are). But it's just a suspicion – I'm not sure we really know, and I'm not sure that we will ever know for certain.
But it does give one pause – or at least it should give one pause – when using errors to attribute value to players, which is of course something we frequently do for hitters, pitchers and fielders right now. (Anyone who thinks we don't use errors in evaluating hitters, or that we shouldn't: the very act of counting a reached-on-error as an at-bat but not as a time reached safely implicitly, if not explicitly, folds the error into an evaluation of the hitter's prowess.)