A few years ago on this very website, Gary Huckabay briefly discussed how the presentation of information shapes the way we read it. As an exercise, he presented the 2002 Angels’ stats as percentages of plate appearances, noting that certain round numbers might catch our eye if stats were typically presented that way. Everyone knows 100 RBI or a .300 batting average, but there are countless ways to view current and past baseball statistics that dramatically change where those eye-catching round numbers fall relative to player value.

One of the main points is that we’re all working with a learned scale of context. If you’re Clay Davenport developing EqA, you can take advantage of it by mapping your metric onto the batting-average scale that all baseball fans find familiar. Or, if you’re pushing OPS as the next great offensive metric, you just have to talk about it enough, and show OBP+SLG enough, that people start to develop that scale. I would argue that they haven’t. For example, how good is an .800 OPS? League average? All-Star level?

Actually, it could be either. A player with a .400 OBP and a .400 SLG is highly valuable. One with a .320 OBP and .480 SLG–despite having the same OPS–is not quite as valuable, or at least not to the league average team. This, of course, is one of the problems with OPS, but let’s leave that to another article.
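As a quick sketch of that ambiguity: both hypothetical players below post an .800 OPS, but weighting OBP more heavily separates them. The 1.8x weight on OBP is a commonly cited sabermetric rule of thumb, not a figure from this article, so treat it as an illustrative assumption.

```python
def ops(obp, slg):
    # OPS simply sums the two rates, treating a point of OBP
    # and a point of SLG as equally valuable
    return obp + slg

def weighted_ops(obp, slg, obp_weight=1.8):
    # Rule-of-thumb adjustment (assumed, not from the article):
    # a point of OBP is worth roughly 1.8x a point of SLG
    return obp_weight * obp + slg

player_a = (0.400, 0.400)  # high-OBP player
player_b = (0.320, 0.480)  # high-SLG player

# Identical OPS (.800 for both)...
print(ops(*player_a), ops(*player_b))
# ...but the high-OBP player grades out ahead once OBP is weighted
print(weighted_ops(*player_a), weighted_ops(*player_b))
```

The point is not the particular weight, but that OPS collapses two differently valuable skills onto one number.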

Instead, just as an exercise, I want to discuss an alternative method of looking at player statistics taking advantage of standard deviations and the league average or mean. For example, here’s how the distribution of batting averages shook out this past year for all players with at least 200 ABs:

[Figure: histogram of 2005 batting averages, players with 200+ ABs]

And some vital statistics about the distribution:

Min                  .202
Lower Quartile       .252
Mean                 .270
Upper Quartile       .289
Max                  .335
Standard Deviation   .026

Despite a few dips here and there, the distribution is, for the most part, normal, without any tailing on either side. The upper and lower quartiles–the 75th and 25th percentiles of the data set–are almost exactly equidistant from the mean, as are the minimum and maximum values. (This fact in and of itself is a little surprising, since we assume there’s a talent pyramid in baseball with very few stars at the upper end. However, below-average players are weeded out gradually enough, in terms of batting average, that the attrition balances the natural tail up to the elite talent. It bears noting that in a more perfect market, and with a metric that better maps to total player value, the graph above should have a significantly longer tail to the right than to the left.) Now, instead of looking at a player’s raw batting average and applying the learned scale that we all use, let’s look at how a few players fare when we display their batting averages in standard deviations above the mean:

YEAR  BATTER          AVG   AVG+
2005  Derrek Lee     .335   2.54
2005  Ichiro Suzuki  .303   1.29
2005  Bobby Abreu    .286   0.63
2005  Aaron Rowand   .270   0.01
2005  Andruw Jones   .263  -0.27
2005  Wily Mo Pena   .254  -0.62
2005  Phil Nevin     .237  -1.28
2005  David Newhan   .202  -2.65

This doesn’t necessarily tell us anything we don’t already know: **Derrek Lee** had a great season when it comes to batting average and **David Newhan** did not. But we can quantify things a bit better, seeing that Newhan was almost as bad as Lee was good.
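The AVG+ column is a standard z-score, and computing it is a one-liner. A minimal sketch, using the rounded league mean (.270) and standard deviation (.026) quoted above; results can differ from the published column in the second decimal because those inputs are rounded to three digits.

```python
# League context from the distribution above (rounded to three digits)
LEAGUE_MEAN = 0.270
LEAGUE_SD = 0.026

def avg_plus(avg, mean=LEAGUE_MEAN, sd=LEAGUE_SD):
    # Standard deviations above the league mean (a z-score)
    return (avg - mean) / sd

for name, avg in [("Derrek Lee", 0.335),
                  ("Aaron Rowand", 0.270),
                  ("David Newhan", 0.202)]:
    print(f"{name:15s} {avg:.3f} {avg_plus(avg):+.2f}")
```

Running this gives roughly +2.50 for Lee and -2.62 for Newhan, within rounding of the table’s 2.54 and -2.65.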

However, when applied to other situations, this kind of presentation can be not only time-saving but highly informative. For example, let’s jump down to Double-A and take a look at two pitchers from this year, **Yusmeiro Petit**–the key player in the **Carlos Delgado** deal–and **Shawn Kohn**–a relatively unknown 2002 draft pick by the Oakland A’s who spent the season relieving in Midland:

YEAR  PITCHER         TEAM        ORG  LG   LEVEL     IP    K  UBB   K/9  UBB/9
2005  Yusmeiro Petit  Binghamton  NYN  EAS  AA     117.7  130   17  9.94   1.30
2005  Shawn Kohn      Midland     OAK  TEX  AA      84.0   92   17  9.86   1.82

Disregarding their age, park factors, the fact that K/PA and UBB/PA are better measures of pitcher performance, and other information for a minute, the statistics tell us that each player is equally adept at striking out opposing batters and that Petit holds a slight edge when it comes to issuing the free pass. But look what happens if we adjust for league context and display in standard deviations above the mean:

YEAR  PITCHER          K/9  UBB/9  K/9+  UBB/9+
2005  Yusmeiro Petit  9.94   1.30  1.48   -1.36
2005  Shawn Kohn      9.86   1.82  2.18   -0.95

Suddenly, Kohn holds a large edge in K/9, albeit while still trailing in UBB/9. The reason for this is obvious: batters in the Eastern League struck out much more often than those in the Texas League, to the tune of 7.4 to 6.6 per nine innings. Additionally–and likely as a result–the standard deviation is also higher in the Eastern League than Texas, meaning that Petit’s raw advantage over the rest of the league is somewhat muted because the variance in performance is higher: there are significantly more pitchers in the Eastern League who struck out more men per 9 innings than Petit than there are in Texas who outperformed Kohn.
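The mechanics of that flip can be sketched in a few lines. The league means (7.4 and 6.6 K/9) come from the text above; the standard deviations are back-solved from the K/9+ column and are approximations I'm assuming for illustration, not published figures.

```python
# Per-league context: means are from the article; the standard
# deviations are assumptions back-solved from its K/9+ column
LEAGUES = {
    "EAS": {"mean": 7.4, "sd": 1.72},  # Eastern League (assumed SD)
    "TEX": {"mean": 6.6, "sd": 1.50},  # Texas League (assumed SD)
}

def k9_plus(k9, league):
    # Same raw rate, different z-score, depending on league context
    ctx = LEAGUES[league]
    return (k9 - ctx["mean"]) / ctx["sd"]

petit = k9_plus(9.94, "EAS")  # leads in raw K/9...
kohn = k9_plus(9.86, "TEX")   # ...but trails relative to his league
print(f"Petit {petit:+.2f}  Kohn {kohn:+.2f}")
```

The higher mean and wider spread in the Eastern League both pull Petit’s z-score down, which is exactly the reversal in the table above.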

Viewing player performance through this lens, not only are we provided with a default context for all metrics–a player one standard deviation above the mean is in roughly the top 16% of his league, for example–but we can also compare players across leagues without having to provide context for each number. Plus, rather than keying on different round numbers for each metric–a .300 average, a .400 OBP, walks in 10% of PAs, etc.–we now have just one scale to remember.
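That "top 16%" figure falls out of the normal distribution, and it's easy to verify with the error function from the standard library. This assumes the underlying distribution is approximately normal, as the batting-average histogram above suggests.

```python
import math

def frac_above(z):
    # Share of a normal distribution lying more than z standard
    # deviations above the mean: P(Z > z) = 0.5 * erfc(z / sqrt(2))
    return 0.5 * math.erfc(z / math.sqrt(2))

print(f"{frac_above(1):.4f}")  # about 0.159: one SD up is roughly the top 16%
print(f"{frac_above(2):.4f}")  # about 0.023: two SDs up is rarer still
```

So a +1 season puts a player around the 84th percentile of his league, and a +2 season around the 98th.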

This approach to measuring player performance is neither novel nor perfect. However, it’s extremely useful, from time to time, to refresh the way we look at the numbers. The quality of competition changes not only from league to league but from year to year, and if we don’t continually adjust our learned constructs along with those changes, the numbers we use to determine the value of players lose their meaning or, worse, lead us down the wrong path.

Not to mention winning or losing a few arbitration cases along the way.