March 29, 2010 | Baseball Therapy: Credit Where It's Due, Part 1
I usually put a warning in my pieces that gory mathematical details are about to follow. This week, it comes at the beginning of the piece. You've been warned. Then again, depending on what you like, you might be titillated.

Who is to blame for a strikeout? There are several candidates, after all. Was it the pitcher's deception and skill? Was it the batter's inability to read the strike zone? Was it the umpire's obscenely large strike zone? Was it dumb luck? Was it… Canada? It's a deceptively simple question, and the answer is probably some combination of all of the above, but what if we could go deeper than that? What if we could start putting percentages on how much credit and blame can be placed on each player and/or country? I think it's entirely possible, and the results could be rather enlightening.

It's generally assumed that a strikeout is halfway the fault of the batter and halfway the credit of the pitcher. But that's based on a "that makes sense" model rather than any empirical data. In baseball's double-entry accounting system, we find someone on the offense's side to blame for the fact that the team now has one fewer out to work with (and the batter makes the most sense), and someone on the pitching team to thank (the pitcher, obviously). There's a third actor that plays into every event in life, and that's dumb luck. Sometimes things just happen. I propose a more nuanced look at who's really to blame for a strikeout.

Where do strikeouts come from (and more importantly, how do we know)? In what can now safely be called an iconic sabermetric paper, Voros McCracken posited that pitchers were, for the most part, responsible for their strikeouts, walks, and home runs given up, but not for anything that happened when the ball was in play.
Since that time, the DIPS theory, as it became known, has been revised, and the bold certainty of the original theory has given way to a few more qualifiers, but McCracken opened up the discussion of how sabermetricians might separate luck from skill in baseball. In mathematical terms, we're trying to partition (or, as I prefer, "chop up") the variance into its component parts.

McCracken's paper was amazingly simple in its methodology considering its broad-reaching implications. Given a respectable minimum of innings pitched, he looked at the year-to-year correlations in a pitcher's stats and found a high correlation from year to year for home runs, strikeouts, and walks, but not for BABIP. Since the pitcher seemed able to repeat his performance (more or less) from year to year on strikeouts, he was considered to have control over them. BABIP, on the other hand, was considered to be out of his control, as the year-to-year correlation was rather low, suggesting that random chance was driving the findings. Such a conclusion is actually a logical fallacy (it has to do with a misunderstanding of null hypothesis testing), but it's a useful one. One year of BABIP doesn't tell you much about what next year will bring, and that's generally how people think.

This framework of year-to-year repeatability was the first major framework for figuring out credit and blame. There's a small problem with it, though. Year-to-year correlations are very dependent on how many plate appearances/batters faced are included in each season. Consider for a moment that if a baseball season lasted a billion games or so, we'd know exactly what each player was capable of. If a hitter goes 4-for-5 one night, we don't assume that he's an .800 hitter.
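McCracken's repeatability check boils down to a correlation between the same pitchers' rates in consecutive seasons. Here is a minimal sketch of that calculation; the strikeout rates below are made-up illustrative numbers, not real data.

```python
# Sketch of the year-to-year repeatability check: correlate each pitcher's
# strikeout rate in year 1 with his rate in year 2. A high correlation
# suggests a repeatable skill; a low one suggests mostly noise.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical K rates for six pitchers in back-to-back seasons:
k_rate_y1 = [0.24, 0.15, 0.19, 0.28, 0.12, 0.21]
k_rate_y2 = [0.23, 0.16, 0.18, 0.26, 0.14, 0.22]
print(pearson_r(k_rate_y1, k_rate_y2))  # high r, as McCracken found for K rate
```

The same function run on single-season BABIP pairs would come back much lower, which is the pattern that launched DIPS.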
I found that the problem with BABIP is that it takes a long time before we get a reliable number on an individual pitcher, something on the order of 5,000 or so batters faced, but eventually we can tell what a pitcher's skill level is, or at least was, over those 5,000 batters faced. This doesn't mean that there isn't any pitcher talent involved, only that there's a very low signal-to-noise ratio. It doesn't mean there isn't anything interesting to find, only that we have to look a little harder to find it.

Others have used variance partitioning methods that involve looking at the variation among pitcher-seasons, in which they look at the variance that can be observed between pitchers and subtract out the random variance that would be expected given the parameters of the data set. The problem is that the most commonly used formula:

Variance (observed) = Variance (random) + Variance (actual)

... is missing a term. In addition to random variance and actual variance in pitcher abilities, there's another type of variance that can creep in: measurement bias. Suppose that the measure we're using favors one pitcher over another. That would bias the data, but how can that be?

Let's go back to strikeouts. There's one other factor that could be driving a pitcher's (or a batter's) strikeout rate, and I don't think it's been adequately controlled: the quality of the opposition batters/pitchers. Suppose for a moment that a pitcher could face Mark Reynolds all the time. What do you suppose his strikeout rate would look like? It's a built-in assumption that a pitcher has faced a bunch of hitters who, all strung together, aggregate into a league-average profile, and that everyone faces essentially the same suite of hitters. At first blush, it's not a bad assumption, but let's see if it stands up to scrutiny.
For a moment, let’s pretend that a pitcher has no role in his strikeouts, and that whether a plate appearance ends in a strikeout is entirely in the hands of the batter. It’s silly to think that the batter is totally in control, but it’s also silly to assume that the batter would not be involved. In 2009, which pitchers would have had the highest and lowest "expected" strikeout rate (minimum of 250 batters faced)?
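Under that batter-only assumption, a pitcher's "expected" strikeout rate is just the average of his opponents' seasonal K rates, weighted by how often he faced each one. A minimal sketch, with hypothetical batter rates:

```python
# "Batter-only" expected strikeout rate: weight each opposing batter's
# seasonal K rate by the number of times the pitcher faced him.
def expected_k_rate(matchups):
    """matchups: list of (batter_k_rate, times_faced) pairs."""
    total_pa = sum(n for _, n in matchups)
    return sum(rate * n for rate, n in matchups) / total_pa

# A pitcher whose schedule happens to be loaded with high-strikeout batters:
print(expected_k_rate([(0.35, 60), (0.28, 90), (0.15, 100)]))
```

A pitcher who drew more Mark Reynolds types would show a higher expected rate through no doing of his own, which is the bias in question.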
So, if the batter were the sole determinant of whether a plate appearance ended in a strikeout, then Russ Ortiz(!) would have had the third-highest strikeout percentage in baseball. I told you it was silly to pretend that pitchers had no control over their strikeout rate. Some of the variance from year to year in a pitcher's strikeout rate (or any sort of stat) could simply be a change in the quality of the opposition, in addition to the usual variation that comes with non-infinite sample sizes. That spread between high and low is about two percentage points, which isn't gigantic, but it isn't negligible either. Overall, the 2009 sample had a standard deviation of .004, or 0.4 percent, in expected strikeout rate. (Actual observed K rate had a standard deviation of about 5.2 percent.) It may be small, but it is a bias in the data.

A solution to all the world's problems

We have a problem. Several of them, in fact. No, not poverty, disease, and war. More important than that. We need a method that will allow us to look for effects of variables that have a lot of noise around them, and that allows for the interaction of several actors. We also need a method that gives us many more data points than we can get from simply looking at a list of pitcher-seasons. The easiest way to get around the issue of noisy variables is to pump up the size of your data set, but there are only so many pitchers who have thrown seasons with a certain minimum of batters faced. That may not be enough to get reliable results. This problem needs a different framework.

I propose that we use a technique called binary logistic regression, and use it at the plate-appearance level. Here's how it works with strikeouts. I took all plate appearances from 1993-2009 (excluding intentional walks and plate appearances involving the pitcher batting), and coded them as either ending in a strikeout or not.
This gives us a database of 2.9 million data points to work with (although I used only plate appearances in which a batter with at least 250 PA in that season faced a pitcher with at least 250 batters faced, which left me with a mere two million cases). Binary logistic regression looks at outcomes that are denominated in either a yes or a no, and at how various independent variables affect the odds of the answer being "yes" or "no." For each plate appearance, I took the pitcher's seasonal strikeout rate and the batter's as well (our best observable guesses as to their true talent levels for the year). I also took the league strikeout rate. For technical reasons, I converted all of these probabilities into odds ratios, and then took the natural log. (For the curious: it helps to normalize the distribution.) I entered all three predictors into our logistic regression and pushed play. The interesting output for our purposes here is which variables pick up how much variance. Binary logistic regression doesn't give off the same R-squared as a regular linear regression, but there is something like it. For the morbidly curious: the change in the -2 log likelihood ratio if the variable were excluded, listed below.
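The log-odds conversion described above is a one-liner. As a sketch, here it is applied to a hypothetical matchup, combined with the classic odds-ratio (log5-style) weighting of batter, pitcher, and league rates; this is an illustration of the transform, not the fitted weights from the article's regression.

```python
import math

def log_odds(p):
    # Probability -> natural log of its odds ratio, the predictor form
    # used in the regression.
    return math.log(p / (1 - p))

def inverse_log_odds(z):
    # Back from log-odds to a probability.
    return 1 / (1 + math.exp(-z))

# Hypothetical matchup: 25% K-rate batter vs. 22% K-rate pitcher in an
# 18% K-rate league, combined with equal (odds-ratio method) weights:
z = log_odds(0.25) + log_odds(0.22) - log_odds(0.18)
print(round(inverse_log_odds(z), 3))  # -> 0.3
```

The regression, in effect, estimates how much each of those three log-odds terms should actually be weighted instead of assuming equal weights.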
Parceling that out, the batter's K rate overall picks up about 56.0 percent of the explained variance [40289.865 / (40289.865 + 31138.462 + 490.247)], while the pitcher gets 43.3 percent, with the league mean picking up about 0.7 percent. The word "explained" is important to note, in that the overall regression had a Nagelkerke R-squared value of about 6 percent. There's a lot more that goes into a strikeout than knowing the approximate skill levels of the pitcher and batter, such as what actually happened during the plate appearance. But this gives us an idea of how much variance can be accounted for by each of the three main actors in this play, relative to each other. A batter deserves 56 percent of the blame for his strikeout. The other 44 percent can be chalked up to the thought that he was facing Nolan Ryan in his prime, and to the thought that everyone strikes out sometimes.

What then of BABIP? I hope to come back to balls in play later for a more fine-grained look, but I ran a similar regression on the outcome of balls in play (strikeouts, walks, hit by pitches, and home runs were treated as non-events), using the same method as above. The results:
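The parceling arithmetic is worth making explicit: each variable's share of the explained variance is its change in -2 log likelihood divided by the sum of the three changes. Using the strikeout figures quoted above:

```python
# Credit split from the reported -2 log likelihood changes:
# drop each variable in turn, see how much fit is lost, then take shares.
batter, pitcher, league = 40289.865, 31138.462, 490.247
total = batter + pitcher + league

print(round(100 * batter / total, 1))   # -> 56.0 (batter's share, %)
print(round(100 * pitcher / total, 1))  # -> 43.3 (pitcher's share, %)
print(round(100 * league / total, 1))   # -> 0.7  (league mean's share, %)
```

Keep in mind these are shares of the *explained* variance; the roughly 6 percent Nagelkerke R-squared means most of what decides any single plate appearance lies outside these three numbers.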
Hits on balls in play have a lot to do with the batter's prowess, which is expected. What might surprise people is that the number for the pitcher is so high, but one must remember that "pitcher" here includes both the pitcher and his defense. In a future article, I'll look at how to break things down a little more specifically on that account.

I hope to continue this line of research. Next time, we'll go over a few more common events (like home runs, walks, hit batsmen, and mascot interference), and then talk about a framework for looking into events that have more moving parts, such as the aforementioned balls in play. Eventually, I'd like to chop up stats like win probability in a more fine-grained manner. Keep reading. This should be fun.
Russell A. Carleton is an author of Baseball Prospectus. Follow @pizzacutter4
3 comments have been left for this article.

I enjoyed this article, but when I started reading it, I thought I was going to love it. This is a great macro look at the strikeout from the batter vs. pitcher perspective. What I was hoping for was a conclusion or observation that could be applied to an individual plate appearance. When I watch baseball, who wins the duel between pitcher and hitter is my favorite subplot. I do not think that this information will change my perception of who wins that duel. This isn't intended as a negative comment on the article; I just got a little excited hoping that was what I was going to get.