
Who said sabermetrics hasn't gone mainstream? We've now reached the point where even mainstream analysts are yelling "small sample size!" at one another. There's always been some understanding that a player who goes 4-for-5 in a game is not really an .800 hitter, but now, people are being more explicit in talking about sample size. I consider that a victory. Hooray for sabermetrics!

How big does a sample size need to be before it stops being… small? As I understand it, the most commonly cited study on the topic was written by a man code-named "Pizza Cutter" almost five years ago at a blog that no longer exists.

Mr. Cutter's idea was to look at something called split-half reliability. (BP's Derek Carty did something similar a while ago, picking plate appearances at random.) Mr. Cutter took two equal samples of the same number of PA for a bunch of players and checked how well they correlated with one another. The idea is that over time, a statistic becomes more and more "stable," meaning that it becomes a better indicator of a player's true talent level over that time frame.

After reading the original Pizza Cutter article, I am amazed that anyone pays attention to this study given its many methodological flaws. Among them:

  • It is written by a man who named himself after an auxiliary kitchen utensil
  • According to Mr. Cutter, who at the time was working with data from 2001-2006, he used consecutive pairs of years (2001-2002, 2003-2004, 2005-2006) for each player. This means that in his sample, Barry Bonds and anyone else who played in all six years would have been in his data set three times. Sloppy.
  • He used an evens-and-odds method to split his sample. In this case, he lined up everyone's PAs in chronological order, numbered them from one to whatever, and then split them into even and odd numbered PAs and calculated a correlation between these two buckets. This is a man who needs more methodological sophistication. It may not be likely, but what if his findings were the result of his even-and-odd method?
  • When looking at batted ball type rates, he did them with the denominator of per PA, rather than per ball in play.
  • He used a case-wise deletion strategy. So, his sample for 100 PA reliability is different from his 200 PA reliability sample.
  • He used 50 PA intervals. I'm going to use 10. Better resolution.
  • He left pitchers-as-batters in the sample. They should really be taken out. At higher levels of PA, they will naturally be selected out, but at low levels of PA, they might be muddying up the sample.
  • Why would a man obscure his real name like that on such an important study? Was he afraid that people would find out who he is?
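For the curious, the evens-and-odds split-half method being criticized above works roughly like this (a sketch with simulated binary outcomes; `split_half_r` and the toy data are illustrative, not Mr. Cutter's actual code):

```python
import numpy as np

def split_half_r(pa_outcomes):
    """Correlation between the odd- and even-numbered PAs, across players.

    pa_outcomes: 2-D array, one row per player, one column per PA,
    binary outcomes in chronological order.
    """
    x = np.asarray(pa_outcomes)
    half_a = x[:, 0::2].mean(axis=1)  # rate over PAs 1, 3, 5, ...
    half_b = x[:, 1::2].mean(axis=1)  # rate over PAs 2, 4, 6, ...
    return np.corrcoef(half_a, half_b)[0, 1]

# simulated hitters with true rates from 10% to 40%, 200 PA each
rng = np.random.default_rng(0)
true_rates = np.linspace(0.10, 0.40, 30)[:, None]
outcomes = (rng.random((30, 200)) < true_rates).astype(int)
print(split_half_r(outcomes))
```

The objection in the bullet above is that this is only one of the many possible ways to cut the sample in half.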

Let's see if we can make this better. I would propose to duplicate Mr. Cutter's study with much better methodology. As always, if the numbers scare you, you can close your eyes for the next part, and go to "the results."

Warning! Gory Mathematical Details Ahead!
I missed doing that.

The data were Retrosheet play-by-play logs from 2003-2011. Pitchers batting were eliminated, as were all intentional walks (I counted them as never happening). Only batters who had at least 2000 PA in that time frame were selected. There were 311 such batters. All batter PAs were lined up chronologically and numbered in order, and I took the first 2000 PAs for each batter. This means that I was able to get reliability coefficients on samples up to 1000 PA.

For stats that had other denominators, such as batting average (ABs) or grounders (balls in play), I note the inclusion criteria in the chart below.

This time, instead of splitting up the sample into evens-and-odds as Mr. Cutter did, I used a much better methodology, the Kuder-Richardson reliability formula. (For the initiated, I used KR-21, a derivative of Cronbach's alpha. There were a couple cases where the outcome was not binary—SLG, ISO—where I used Cronbach.) The baseball statistics in which we are most interested are binary outcomes (strikeout rate is a yes/no question of whether the batter struck out over a series of PA), and Kuder-Richardson specifically assesses measure reliability in binary outcomes.

The formula is available elsewhere online, but the basic idea is this. Imagine that you had a sample of six PAs for a bunch of hitters. Now imagine if, instead of splitting them 1-3-5 vs. 2-4-6 (i.e., evens and odds), you could split them into every single possible combination available and correlate those two halves. So, you could see what the correlation between 1-2-3 and 4-5-6 would be, or the correlation between 1-2-4 and 3-5-6. Then, let's say that you could take the average of all of those correlations. Mathematically, that's what Kuder-Richardson (and Cronbach) does.
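As a rough sketch of the KR-21 computation described above (the strikeout counts are hypothetical toy data, not from the study):

```python
import numpy as np

def kr21(successes, k):
    """Kuder-Richardson formula 21 for k binary trials per player.

    successes: per-player counts of the outcome (e.g., strikeouts)
    k: number of trials (PA) per player, equal across players
    """
    x = np.asarray(successes, dtype=float)
    m = x.mean()               # mean total score across players
    var = x.var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1 - m * (k - m) / (k * var))

# toy data: strikeout totals for five hypothetical hitters, 100 PA each
print(kr21([18, 25, 30, 12, 22], 100))
```

KR-21 assumes the items (PAs) are equally difficult, which is what makes it a cheap stand-in for averaging every possible split-half correlation.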

So, if I have a sample of 500 PAs for a list of batters, this method will tell me what happens when you split that into a pair of 250 PA samples in every possible way. The result will be a much better estimate of how reliable an indicator of a player's true talent level a statistic is over 250 PA. Of course, we know some stats reach higher levels of reliability at lower levels of PA, but it's interesting to note which ones are which and what that says about player evaluation as the season goes along.

I looked for the place where reliability passed .70, which is about the only thing that Mr. Pizza Cutter got right. At .70, the shared variance between the two half-samples crosses the halfway point, since .707 * .707 is about 50 percent. Of course, with any sort of bright line, there's always the objection that it's a black/white contrast where 50 shades of grey are called for. I don't know what else to say other than "Yeah, I know."

The Results



Statistic             Definition                             Stabilized at   Notes
Strikeout rate        K / PA                                 60 PA
Walk rate             BB / PA                                120 PA          IBBs not included
HBP rate              HBP / PA                               240 PA
Single rate           1B / PA                                290 PA
XBH rate              (2B + 3B) / PA                         1610 PA
HR rate               HR / PA                                170 PA
Batting average       H / AB                                 910 AB          Min 2000 ABs
On-base percentage    (H + HBP + BB) / PA                    460 PA
Slugging percentage   (1B + 2 * 2B + 3 * 3B + 4 * HR) / AB   320 AB          Min 2000 ABs, Cronbach's alpha used, Estimate*
ISO                   (2B + 2 * 3B + 3 * HR) / AB            160 AB          Min 2000 ABs, Cronbach's alpha used
GB rate               GB / balls in play                     80 BIP          Min 1000 BIP, Retrosheet classifications used
FB rate               (FB + PU) / balls in play              80 BIP          Min 1000 BIP including HR
LD rate               LD / balls in play                     600 BIP         Min 1000 BIP including HR, Estimate*
HR per FB             HR / FB                                50 FB           Min 500 FB
BABIP                 Hits / BIP                             820 BIP         Min 1000 BIP, HR not included
Hopefully, Colin Wyers won't kill me for using Retrosheet batted ball classifications.

* – In some cases, the magic .70 mark was not reached within the constraints of the data set, so I used the Spearman-Brown prophecy formula to estimate at what point .70 was most likely to occur.
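The Spearman-Brown prophecy step works like this: given an observed reliability at one sample size, it projects the sample size needed to hit a target reliability. A sketch (the r = .40 at 300 BIP input is a made-up illustration, not a figure from the study):

```python
def spearman_brown_n(r_obs, n_obs, r_target=0.70):
    """Sample size at which reliability is projected to reach
    r_target, given reliability r_obs observed at n_obs trials."""
    lengthening = r_target * (1 - r_obs) / (r_obs * (1 - r_target))
    return n_obs * lengthening

# hypothetical: r = .40 observed over 300 BIP projects to reach .70 at:
print(spearman_brown_n(0.40, 300))
```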

What it means
Take a look at the basic outcomes of a PA. The idea of the "three true outcomes" (TTO) has been something of a staple of sabermetric thinking for a while. The idea of the holy triad of strikeout, walk, and home run being "true" was something that came from DIPS theory and applied mostly to pitchers. While they are the three that stabilize for hitters most quickly, it's actually a gentle progression upward to HBP rate and then singles rate.

Perhaps we need to talk about the five factual outcomes for hitters? I realize that TTO is meant to describe a hitter like Adam Dunn or Jack Cust who has a style of play that emphasizes those three outcomes. However, between 2007-2010, when the two of them were duking it out for the title of TTO king, Cust began to see his rate of singles rise (while his HR rate fell), while Dunn hit comparatively fewer singles and kept his HRs (freakishly) consistent.

Rates of doubles and triples were an odd duck. There's been a certain sabermetric (should I use the word fetish here?) over the past few years for guys who have high doubles numbers, but whom the market overlooks because they don't have sexy HR totals. Those doubles numbers may be illusions. The home run numbers are more likely to be real. Caveat emptor. Or amator.

Ground balls and fly balls stabilize at roughly the same time (and quickly!). Skill in producing line drives is subject to much more noise. Again, Colin Wyers has written over and over that it's hard to trust a line-drive classification because it's a subjective judgment. But even granting that Retrosheet is 100 percent correct, a player's line drive rate will likely vary a lot, while his GB/FB ratio will be quick to stabilize. Some players are GB hitters, some are FB hitters, but line drives occasionally happen and it's hard to know why.

Overall, these numbers aren't vastly different from the original article by Pizza Cutter, but the methodological improvements that I've made take away some of the concerns that could be raised about the originals. The techniques are a little more obscure, but after five years, it's time for an update. If I see some other older works that might benefit from some methodological sprucing up, especially from this Pizza Cutter guy, I might look into doing just that.

(If there's a stat that you wish I had done, leave it in the comments, and I will do my best to get around to it. Let's stick to hitters for now.)

Next time, we'll talk about how these numbers are often misused and what they can and can't be used to show.

Thank you for reading

This is a free article. If you enjoyed it, consider subscribing to Baseball Prospectus. Subscriptions support ongoing public baseball research and analysis in an increasingly proprietary environment.

Comments
Great article.

How about some swing diagnostic metrics?

Swing% (swings/pitches)
Contact% ((balls in play + fouls) / swings)
I had dreams of doing these initially... I'll see if I can fire up my PFX database later.
Coming at this from another angle... If you assume every plate appearance is an independent random trial, you can actually compute confidence intervals around (e.g.) a batting average. Roughly, that line of reasoning leads you to the conclusion that 100 at bats gives you a confidence interval of about +-.100. 400 at bats gives you a confidence interval of +-.050, 1600 ABs: +-.025. For Wade Boggs' entire career of about 9000ish at bats, the confidence interval is still +-.010 around his .328 average. And I expect that these confidence intervals are, if anything, too narrow. A plate appearance is not an independent random trial because players run hot and cold, and their abilities improve and decline over time, so there is also unaccounted-for time series variation. Very similar logic would apply to something like a walk rate or a home run rate, which we normally think of as more reliable (and which stabilizes faster according to your methodology). The first conclusion I'd draw is that we are kidding ourselves when we report the third digit of detail on anyone's batting average. We should just say Robinson Cano is hitting 32% (plus or minus 7%). It'd be nice to see somebody tackle the issue of how useful statistical analysis can be in baseball given that we're typically drawing inferences from such imprecise statistics.
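The commenter's back-of-the-envelope interval comes from the normal approximation to the binomial. A sketch (`avg_ci` is an illustrative helper, not from the article):

```python
import math

def avg_ci(hits, ab, z=1.96):
    """95% CI half-width for a rate, treating each AB as an
    independent Bernoulli trial (normal approximation)."""
    p = hits / ab
    return z * math.sqrt(p * (1 - p) / ab)

# Wade Boggs' career: .328 over roughly 9,000 AB
print(round(avg_ci(2952, 9000), 3))  # on the order of +/- .010
```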
I think I've actually seen this done elsewhere. I just can't remember where. Confidence intervals are woefully under-used in baseball... and life.
Very nice article. I think this article may help answer a related question: "How quickly should we forget the past?"

If you'll bear with me, I like to think of player evaluation from a Bayesian standpoint. We could model past OBP performance as a beta distribution where alpha="the number of times a player reaches base" and beta="the number of times they do not reach base" during the same number of plate appearances. If the player reached base 30 times out of the past 90 plate appearances we'd have a beta distribution with alpha=30 and beta=60 and our estimate of OBP would be 1/3.


Let's say that same player reaches base 8 of the next 10 times (the likelihood); our new estimate of the player's OBP (our posterior) would be a new beta distribution (alpha=38, beta=62), theta (or OBP) = .38.

The hard part here is knowing when to forget old performance i.e. how strong should the prior be? Do you think your estimation of proper sample size informs that question?
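The beta-binomial update described in this comment can be sketched in a few lines (the numbers are the commenter's own example):

```python
# prior: reached base 30 times in 90 PA -> Beta(alpha=30, beta=60)
alpha, beta = 30, 60
print(alpha / (alpha + beta))   # prior mean OBP: 1/3

# observe the player reaching base 8 times in the next 10 PA
alpha += 8
beta += 2
print(alpha / (alpha + beta))   # posterior mean OBP: 0.38
```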
Is there some sort of publicly available retrosheet parser? I find it very cool that you can specify data set modifiers like completely ignoring intentional walks. Is this an in-house creation or some modification of publicly available parsers? Either way, this was a great article.
I use the statistical program SPSS for my work, which is commercially available (although it is expensive). The good news is that any spreadsheet-style program can handle Retrosheet data.
SPSS is fine, of course, but for the budget-conscious, this can all be done in R (which is free!). Here's a nice intro from Jim Albert, who co-wrote "Curve Ball: Baseball, Statistics, and the Role of Chance in the Game":
Russell, Great update to a classic column. I had fun comparing this one to the "525,600 minute" version that is at FG. Picking up on Ian in Chicago, how do you weigh projections vs. 2012 performance? Now that most regular starters have 300 PA how do you merge a HR projection with actual? Let's look at Pujols: He's hitting HR's at a 3.9% per PA clip (I didn't remove IBB for sake of speed) compared to the final pre-season PECOTA of 5.4%. How do you weight each going forward? Am I correct in assuming at the 170 PA mark (when r = .7) you'd take 50% of each? Thanks for the work.
The standard way that this has been done is that at r = .7, you take 70% performance and 30% league. As to how to weight performance vs. projection, that's one that I've never really looked at.
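That rule of thumb amounts to a simple weighted average. A sketch (the 3.0 percent league HR rate below is a placeholder for illustration, not a real figure):

```python
def regress(observed, league, r):
    """Weighted blend: r parts observed performance,
    (1 - r) parts league average."""
    return r * observed + (1 - r) * league

# Pujols' 3.9% observed HR rate at the r = .70 mark, blended with a
# placeholder 3.0% league HR rate
print(regress(0.039, 0.030, 0.70))
```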
So, hey, that pizza cutter guy was on to something. Hope he's still got gainful employment somewhere if that baseball statistics hobby/career path doesn't pan out.
Guy's a no-talent hack if you ask me.
Then he's got a future in baseball for sure.
Pardon my ignorance, but what exactly do we mean when we say that "GB rates stabilize..."?

I'm thinking Justin Upton here:
2008: 37.2%
2009: 45.5%
2010: 41.4%
2011: 36.9%
2012: 46.0%

His FB rates have been just as sporadic.

So what does it mean that those rates stabilize at 80 BIP?
I'll talk about that in next week's article. The short version is that you can be fairly certain that a player's GB rate over those 80 PA was reflective of his true talent _over that period of time_. But true talent unto itself can change and does so more rapidly than we'd like to think.
Have you tried bases-per-hit? (Or "power factor," as it's sometimes called.) I've been confused for a while about why we use ISO instead of that.