
“Twenty-five hits a year in 500 at-bats is fifty points. Okay? There’s six months in a season, that’s about 25 weeks. You get one extra flare a week, just one, a gork, a ground ball with eyes, a dying quail. Just one more dying quail a week and you’re in Yankee Stadium!”
–Ron Shelton, Bull Durham

So far, we’ve looked primarily at BP’s tool set to give you an idea of the uses and limitations of specific statistics. This week, we’re going to take a slightly more conceptual approach to a question from reader J.B.:

I was wondering if you could write an article on a pitcher’s BABIP? I ask because I’ve seen some discussion lately that Voros’ predominant theory behind DIPS (that pitchers have little or no control on balls in play) is no longer relevant. I’d like to see you explain why this is so.

I’m going to give this a try, but we’ll start by backing up a little. When I was growing up, batting average was the first stat you looked at to determine how well a baseball player hit. If they were available, you’d also look at a guy’s homers and RBI, or maybe how many stolen bases they had, but for the most part, when they printed the hitting leaders (as the batting average leaderboard was known) in the newspaper, it was pretty much accepted that these were the top bats in the league.

All right, I know that I’m using a number of terms that are bound to confuse some of our younger readers (“batting average,” “newspaper,” “printed”), but don’t worry, this will all tie in to the present day pretty soon. Batting average fell from its perch as the über-stat based on criticisms at two separate levels, which we’ll call retrospective and prospective. Retrospectively, we look at how a player’s performance (his statistics) related to helping his team win by scoring or preventing runs. From this point of view, batting average is a bad statistic because it doesn’t give you a complete picture of the player’s offensive contributions. A guy can have a good batting average but not contribute much on offense because he doesn’t get on base often enough or hit for much power (look at Placido Polanco’s .295 batting average last year, for example), or you can have a player like Adam Dunn, who had a .234 batting average last year but made an impact thanks to hitting 40 homers and drawing 112 walks.

From the prospective point of view, the question is: what does a player’s past performance tell us about what he’ll do in the future? The simplest way to gauge a statistic’s prospective value is to look at how well the statistic correlates for players from year to year. The more consistent a statistic is from season to season, the more it can be attributed to a talent or skill the player possesses. The more it fluctuates from year to year, the more it must be attributed to factors outside the player’s abilities, such as luck. It so happens that batting average fluctuates quite a bit compared to measures of a batter’s power or his tendency to draw walks. In part that’s because of the flares, gorks and dying quails that Ron Shelton had Crash Davis talk about in the opening quote: the element of chance that comes into play whenever the batter puts the ball in play.
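The year-to-year correlation test described above is easy to sketch in code. The numbers below are made up purely for illustration (they are not real player data); the point is only that a stat that correlates strongly across seasons behaves like a skill, while one that barely correlates behaves more like luck.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical walk rates for five hitters in consecutive seasons:
walk_rate_2005 = [0.05, 0.08, 0.11, 0.14, 0.16]
walk_rate_2006 = [0.06, 0.09, 0.10, 0.13, 0.17]

# Hypothetical batting averages for the same five hitters:
avg_2005 = [0.250, 0.310, 0.270, 0.290, 0.260]
avg_2006 = [0.296, 0.271, 0.251, 0.291, 0.271]

print(pearson_r(walk_rate_2005, walk_rate_2006))  # strongly positive: stable, skill-like
print(pearson_r(avg_2005, avg_2006))              # near zero: noisy, luck-driven
```

In practice analysts run this over hundreds of player-seasons, but the logic is the same: walk rate repeats, batting average wobbles.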

What does this have to do with J.B.’s question? Well, in 2001 Voros McCracken published an article here at Baseball Prospectus on the flipside of batting average: the batting average a pitcher allows. What he found was that if you eliminated the elements of pitching that the team’s defense has no effect on (walks, hit batsmen, strikeouts) and looked at a pitcher’s batting average on balls in play (BABIP), the statistic fluctuated wildly, even for pitchers considered “hard to hit,” such as in-their-prime Greg Maddux and Randy Johnson. Based on that data, McCracken concluded, “(t)here is little if any difference among major league pitchers in their ability to prevent hits on balls hit in the field of play.”
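For the record, here is the BABIP calculation in its commonly used form: hits other than home runs, divided by at-bats that actually put the ball in the field of play. (The exact bookkeeping in McCracken’s original study differed in some details; this is the formula in general use today, and the season line below is invented for illustration.)

```python
def babip(hits, home_runs, at_bats, strikeouts, sac_flies=0):
    """Batting average on balls in play: (H - HR) / (AB - K - HR + SF)."""
    balls_in_play = at_bats - strikeouts - home_runs + sac_flies
    return (hits - home_runs) / balls_in_play

# Made-up pitcher season: 180 hits and 20 HR allowed, 650 AB against,
# 150 strikeouts, 5 sacrifice flies.
print(round(babip(180, 20, 650, 150, 5), 3))
```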

What this meant, from a prospective point of view, is that you could get a better idea of how a pitcher would perform in subsequent years by looking at his defense-independent statistics and ignoring his BABIP than by looking at his ERA or a component ERA. McCracken devised a statistic called DIPS (Defense Independent Pitching Statistics) ERA to give us a view of the performance to expect from a pitcher going forward, unclouded by a high or low BABIP. The idea was that if a pitcher’s actual ERA was higher than his DIPS ERA, he could be considered unfortunate and likely to bounce back the following season.
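McCracken’s actual DIPS ERA runs through a longer chain of league adjustments, so as a loose illustration of the underlying idea, here is a FIP-style estimator (Fielding Independent Pitching, a simpler later descendant of DIPS) built only from the defense-independent events named above: home runs, walks, hit batsmen, and strikeouts. The additive constant, which puts the result on an ERA-like scale, varies by league and season; 3.10 below is just a placeholder, and the stat line is invented.

```python
def fip_style_era(hr, bb, hbp, k, innings, constant=3.10):
    """FIP-style ERA estimate from defense-independent events only."""
    return (13 * hr + 3 * (bb + hbp) - 2 * k) / innings + constant

# Made-up season line: 200 IP, 18 HR, 55 BB, 6 HBP, 170 K.
print(round(fip_style_era(18, 55, 6, 170, 200), 2))
```

If a pitcher’s actual ERA comes in well above a number like this, the DIPS framework reads the gap as misfortune on balls in play rather than a true decline in skill.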

When McCracken’s DIPS theory hit the mainstream, the reaction was allergic, to put it mildly. Unless it’s Lucy van Pelt talking to Charlie Brown, no one wants to hear a pitcher who’s getting raked tell his manager, “Recent studies show that the pitcher has no influence on whether balls hit in play become outs.” That negative reaction overshadowed the fact that virtually from day one, the theory was studied and tested, with some of the best minds in sabermetrics (Keith Woolner, Tom Tippett, Mitchell Lichtman, Arvin Hsu, Erik Allen, and Tom Tango, to name a few) making contributions and in some cases identifying exceptions to the theory. For example, it was found that knuckleballers show an ability to affect their BABIP, and that extreme groundball and flyball pitchers have different BABIP tendencies.

Based on that and other work, there have been a number of revisions to the DIPS theory since 2001. Most recently, some contributors have applied advanced play-by-play data to the problem, producing metrics such as David Gassko’s DIPS 3.0 (which adjusts for the batted-ball types a pitcher allows, such as fly balls, grounders, and line drives) and Lichtman’s Pitcher’s Zone Rating (or PZR, which uses the zone rating chart of batted-ball destinations), to account for factors that can affect a pitcher’s BABIP.

Does all this mean that the DIPS theory is no longer relevant? Not necessarily. DIPS has evolved, and the brash statement that a pitcher has no control over his BABIP now has a huge jumble of modifiers attached to it. Nonetheless, luck does still play a large role in a pitcher’s batting average on balls in play, so it’s still instructive to look at the defense-independent aspects of pitching performance.

Further Reading

Voros McCracken, “Pitching and Defense: How Much Control Do Hurlers Have?”: The original DIPS article.

Keith Woolner, “Counterpoint: Pitching and Defense”: A commentary on the sample size of McCracken’s original study, and an additional study of a sample of major league veterans.

Voros McCracken and Keith Woolner, “From the Mailbag-Special Edition: Pitching and Defense”: A collection of answers to early questions about DIPS theory.

Dayn Perry, “When Does a Pitcher Earn an Earned Run” in Baseball Between the Numbers: A good intermediate-level summary of DIPS research.

Joe Sheehan, “Prospectus Today: Making Contact”: An example of the practical application of a pitcher’s BABIP in small-sample situations.
