
July 16, 2012 | Baseball Therapy

It's a Small Sample Size After All

Who said sabermetrics hasn't gone mainstream? We've now reached the point where even mainstream analysts are yelling "small sample size!" at one another. There's always been some understanding that a player who goes 4-for-5 in a game is not really an .800 hitter, but now people are being more explicit in talking about sample size. I consider that a victory. Hooray for sabermetrics!

How big does a sample size need to be before it stops being... small? As I understand it, the most commonly cited study on the topic was written by a man codenamed "Pizza Cutter" almost five years ago at a blog that no longer exists. Mr. Cutter's idea was that he'd look at something called split-half reliability. (BP's Derek Carty did something similar a while ago, picking plate appearances at random.) Mr. Cutter took two equal samples of the same number of PA for a bunch of players and checked to see how well they correlated with one another. The idea was that over time, a statistic becomes more and more "stable," meaning that it becomes a better indicator of a player's true talent level over that time frame. After reading the original Pizza Cutter article, I am amazed that anyone pays attention to this study, given its many methodological flaws.
Let's see if we can make this better. I would propose to duplicate Mr. Cutter's study with much better methodology. As always, if the numbers scare you, you can close your eyes for the next part, and go to "the results."
Warning! Gory Mathematical Details Ahead!

The data were Retrosheet play-by-play logs from 2003-2011. Pitchers batting were eliminated, as were all intentional walks (I counted them as never happening). Only batters who had at least 2000 PA in that time frame were selected; there were 311 such batters. All batter PAs were lined up chronologically and numbered in order, and I took the first 2000 PAs for each batter. This means that I was able to get reliability coefficients on samples up to 1000 PA. For stats that have other denominators, such as batting average (ABs) or grounders (balls in play), I note the inclusion criteria in the chart below.

This time, instead of splitting the sample into evens and odds as Mr. Cutter did, I used a much better methodology: the Kuder-Richardson reliability formula. (For the initiated, I used KR-21, a derivative of Cronbach's alpha. In the couple of cases where the outcome was not binary (SLG, ISO), I used Cronbach.) The baseball statistics in which we are most interested are binary outcomes (strikeout rate is a yes/no question of whether the batter struck out over a series of PA), and Kuder-Richardson specifically assesses the reliability of measures of binary outcomes.

The formula is available elsewhere online, but the basic idea is this: imagine that you had a sample of six PAs for a bunch of hitters. Now imagine that, instead of splitting them 1-3-5 vs. 2-4-6 (i.e., evens and odds), you could split them into every single possible combination available and correlate those two halves. You could see what the correlation between 1-2-3 and 4-5-6 would be, or the correlation between 1-2-4 and 3-5-6. Then, say that you could take the average of all of those correlations. Mathematically, that's what Kuder-Richardson (and Cronbach) does. So, if I have a sample of 500 PAs for a list of batters, this method will tell me what happens when you split that into a pair of 250-PA samples in every possible way.
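For the curious, here is a minimal sketch of what a KR-21 calculation can look like. This is my own illustration, not the article's actual code: the `kr21` function and the simulated batters (with made-up strikeout rates) are assumptions for demonstration, with each row of the matrix being one batter and each column one PA.

```python
# Sketch of a Kuder-Richardson 21 (KR-21) reliability estimate.
# KR-21 treats each PA as a binary "item" and assumes all items are
# equally difficult (that assumption is what separates it from KR-20).

import random

def kr21(matrix):
    """matrix: list of equal-length lists of 0/1 outcomes, one row per batter."""
    k = len(matrix[0])                      # number of PAs per batter
    totals = [sum(row) for row in matrix]   # total "successes" per batter
    n = len(totals)
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / (n - 1)  # sample variance
    return (k / (k - 1)) * (1 - mean * (k - mean) / (k * var))

# Toy data: 300 batters, 250 PAs each, with true strikeout rates spread
# between .10 and .30 so there is real between-batter signal to detect.
random.seed(1)
batters = []
for _ in range(300):
    p = random.uniform(0.10, 0.30)
    batters.append([1 if random.random() < p else 0 for _ in range(250)])

print(round(kr21(batters), 3))  # estimated reliability of K rate over 250 PA
```

Because the toy batters genuinely differ in true talent, the reliability comes out fairly high; shrink the spread of true rates and the estimate drops toward zero, which is the intuition behind "noise swamps signal" in small samples.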
The result will be a much better estimate of how reliable an indicator of a player's true talent level a statistic is over 250 PA. Of course, we know some stats reach higher levels of reliability at lower levels of PA, but it's interesting to note which ones are which and what that says about player evaluation as the season goes along. I looked for the place where reliability passed .70, which is about the only thing that Mr. Pizza Cutter got right. At .70, the signal crosses the halfway point: the square of the correlation (.707 * .707) is 50 percent, so at that level half of the variance in the statistic is signal rather than noise. Of course, with any sort of bright line, there's always the objection that it's a black/white contrast where 50 shades of grey are called for. I don't know what else to say other than "Yeah, I know."

The Results
Hopefully, Colin Wyers won't kill me for using Retrosheet batted-ball classifications.

* In some cases, the magic .70 mark was not reached within the constraints of the data set, so I used the Spearman-Brown prophecy formula to estimate at what point .70 was most likely to occur.
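The Spearman-Brown extrapolation in that footnote is straightforward enough to sketch. The two helper functions below are my own illustration (not the article's code): the standard prophecy formula predicts reliability when a sample is lengthened n-fold, and its algebraic inverse finds how much longer the sample must get to reach a target such as .70. The example numbers are hypothetical.

```python
# Spearman-Brown prophecy formula and its inverse.

def spearman_brown(r_old, n):
    """Predicted reliability when the sample is lengthened by a factor of n."""
    return (n * r_old) / (1 + (n - 1) * r_old)

def length_factor_for(r_old, r_target=0.70):
    """Factor by which the sample must be lengthened to reach r_target."""
    return (r_target * (1 - r_old)) / (r_old * (1 - r_target))

# Hypothetical example: a stat with reliability .50 at 500 PA.
print(spearman_brown(0.50, 2))        # reliability if we double to 1000 PA
print(length_factor_for(0.50) * 500)  # PAs needed to reach .70
```

So a stat that is only at .50 by the end of the data set can still be assigned a projected stabilization point beyond the 1000-PA ceiling of the sample, which is exactly how the starred entries in the chart were produced.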
What it means

Perhaps we need to talk about the five factual outcomes for hitters? I realize that TTO is meant to describe a hitter like Adam Dunn or Jack Cust who has a style of play that emphasizes those three outcomes. However, between 2007-2010, when the two of them were duking it out for the title of TTO king, Cust began to see his rate of singles rise (while his HR rate fell), while Dunn hit comparatively fewer singles and kept his HRs (freakishly) consistent.

Rates of doubles and triples are an odd duck. There's been a certain sabermetric fascination (should I use the word fetish here?) over the past few years with guys who have high doubles numbers, but whom the market overlooks because they don't have sexy HR totals. Those doubles numbers may be illusions. The home run numbers are more likely to be real. Caveat emptor. Or amator.

Ground balls and fly balls stabilize at roughly the same time (and quickly!). Skill in producing line drives is subject to much more noise. Again, Colin Wyers has written over and over that it's hard to trust a classification of a line drive because it's a subjective judgment. But even if we trust that Retrosheet is 100 percent correct, a player's line drive rate will likely vary a lot, while his GB/FB ratio will be quick to stabilize. Some players are GB hitters, some are FB hitters, but line drives just occasionally happen, and it's hard to know why.

Overall, these numbers aren't vastly different from those in the original article by Pizza Cutter, but the methodological improvements that I've made take away some of the concerns that could be raised about the originals. The techniques are a little more obscure, but after five years, it's time for an update. If I see some other older works that might benefit from some methodological sprucing up, especially from this Pizza Cutter guy, I might look into doing just that. (If there's a stat that you wish I had done, leave it in the comments, and I will do my best to get around to it. Let's stick to hitters for now.)
Next time, we'll talk about how these numbers are often misused and what they can and can't be used to show.
Russell A. Carleton is an author of Baseball Prospectus. Follow @pizzacutter4
16 comments have been left for this article.
 
Great article.
How about some swing diagnostic metrics?
O-Swing%
Z-Swing%
Swing% (swings/pitches)
O-Contact%
Z-Contact%
Contact% ((balls in play + fouls) / swings)
Zone%
F-Strike%
SwStr%
I had dreams of doing these initially... I'll see if I can fire up my PFX database later.