February 2, 2012
Resident Fantasy Genius
Verducci Effect: Fact or Fake?
It’s that time of year again! No, I’m not talking about that rascally Punxsutawney Phil (though his predictions may be more accurate than the ones I’m about to discuss). Rather, it’s that time of year for the “Year After Effect” (or the “Verducci Effect”) to start making the rounds. The theory’s namesake, Sports Illustrated senior writer Tom Verducci, has published his annual warning over at SI; it was discussed on MLB Network’s “Hot Stove” late last week; and the blogosphere has done its part in expressing concerns over the pitchers who made this year’s cut.
For those unfamiliar, Verducci describes his process as such:
I used a rule of thumb to track pitchers at risk: Any 25-and-under pitcher who increased his innings by 30 or more I considered to be at risk. (In some cases, to account for those coming off injuries or a change in roles, I used the previous innings high regardless of when it occurred.) I also considered only those pitchers who reached the major leagues.
Essentially, young pitchers with a large spike in innings pitched are considered to be at risk and make the Verducci Effect list. This year’s class:
2012 Verducci Effect List
Verducci has been tracking his theory for over 10 years now, and has been writing about it for six (as far as I can tell)—plenty of time for analysts to examine its validity. While the “Year After Effect” might make sense in theory, the evidence is stacked strongly against it. My former colleague David Gassko at The Hardball Times, former BP writer Jeremy Greenhouse at Baseball Analysts, J.C. Bradbury at Sabernomics, Michael Weddell in the Baseball Forecaster, and Advanced NFL Stats have all run studies refuting the theory to one extent or another.
The problems with the Year After Effect are multifold. In Verducci’s article this year, he asserts the validity of the effect by saying, “In just the past six years, for instance, I flagged 55 pitchers at risk for an injury or regression based on their workload in the previous season. Forty-six of them, or 84 percent, did get hurt or post a worse ERA in the Year After.” He later says, “Two out of the nine pitchers I red flagged last year actually stayed healthy or improved… more typical, though, were the regressions last year by David Price, Phil Hughes, Mat Latos and Brett Cecil, all of whom I red-flagged.”
One of the problems with this logic is that Verducci doesn’t compare his red-flagged pitchers to any sort of control group. Yes, some of his pitchers regressed or got injured, but how do those rates compare to what non-flagged pitchers do? Pitchers regress and get injured all the time; the real question is not whether these Year After Effect pitchers exhibit this behavior, but whether their behavior differs from other pitchers.
The other enormous flaw with the Year After Effect’s logic is its inherent selection bias and the fact that it ignores regression to the mean, a force trumped only by gravity in strength. You see, for a player to actually make the list in the first place, he must have been allowed to exceed his previous innings totals. And for a player to be given this chance, he likely performed well enough to warrant it, either on the surface or peripherally. Because of what we know about regression to the mean, this performance (or overperformance, really) should be expected to decline the following season. So when Verducci talks about guys like Price and Latos regressing, that’s exactly what we should expect them to do, Year After Effect or not! Are we really going to expect them to improve upon their sub-3.00 ERAs?
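To make the selection-bias point concrete, here’s a small simulation. It is purely illustrative—the talent and noise figures are invented, not drawn from real pitcher data—but it shows the mechanism: if you select only the pitchers who posted shiny ERAs in Year 1 (the ones who would be handed a big innings increase), their Year 2 ERAs will rise on average even though nothing about their workload changed.

```python
import random

random.seed(42)

N = 10_000
# Hypothetical true-talent ERAs for a population of pitchers
# (mean and spread are made-up, illustrative numbers)
talent = [random.gauss(4.00, 0.35) for _ in range(N)]

def observed(t):
    # One season's observed ERA = true talent + season-to-season luck
    return t + random.gauss(0, 0.50)

year1 = [observed(t) for t in talent]
year2 = [observed(t) for t in talent]

# Selection: only pitchers who *looked* great in Year 1 earn the
# innings spike that lands them on a red-flag list
flagged = [i for i in range(N) if year1[i] < 3.20]

def mean(xs):
    return sum(xs) / len(xs)

y1 = mean([year1[i] for i in flagged])
y2 = mean([year2[i] for i in flagged])
print(f"Flagged group: Year 1 ERA {y1:.2f}, Year 2 ERA {y2:.2f}")
```

The flagged group “regresses” substantially in Year 2 despite the simulation containing no injury or workload effect at all—pure regression to the mean. That is exactly the confound a Verducci-style list has to control for before claiming anything.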
While perhaps overkill at this point given all of the work that’s been done on the topic, I thought I’d run my own study on the Year After Effect, approaching the issue from a different angle.
The study I’ve run resembles one I did a couple of years back when examining the “Home Run Derby Hangover Effect.” I’ve taken all pitchers who made Verducci’s list over the past five years (all that have been published, as far as I can tell) and manually matched each player with a comparable player who didn’t make the list. By comparing the performance of a “Verducci List” to a “Comparable List” (a control group), we can see if the guys red-flagged by Verducci perform worse than non-flagged pitchers, as the theory suggests they should.
To avoid biasing the comparables I was selecting, I looked only at player stats, keeping the names of the players out of sight. (I excluded pitchers who threw fewer than 100 major-league innings, as these were often top prospects who received a cup of coffee and for whom it would have been difficult to find a good comp without looking at names; this lowers our sample to 37 pitchers.) I first tried to find a close match from the year in question on innings pitched, followed by ERA, and then, if possible, on strikeout and walk rates. For example, David Price made last year’s “Year After Effect” list. His comparable wound up being Clayton Kershaw:
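For readers who want the flavor of the matching step, here is a toy sketch. I did my matching by hand, so this is an automated approximation of the idea, not my actual procedure; the pitcher names, stat lines, and distance weights below are all hypothetical.

```python
# Hypothetical stat lines: (name, IP, ERA, K rate, BB rate)
verducci_list = [("Pitcher A", 224.0, 2.72, 0.245, 0.068)]
candidate_pool = [
    ("Pitcher B", 233.3, 2.28, 0.270, 0.061),
    ("Pitcher C", 150.1, 4.10, 0.180, 0.090),
    ("Pitcher D", 210.0, 3.50, 0.220, 0.075),
]

def distance(p, q):
    # Sum of absolute differences, scaled so each stat contributes on
    # a roughly comparable magnitude (weights are arbitrary choices)
    return (abs(p[1] - q[1]) / 30       # innings pitched
            + abs(p[2] - q[2])          # ERA
            + abs(p[3] - q[3]) * 10     # strikeout rate
            + abs(p[4] - q[4]) * 10)    # walk rate

# For each red-flagged pitcher, pick the nearest non-flagged comparable
matches = {p[0]: min(candidate_pool, key=lambda q: distance(p, q))[0]
           for p in verducci_list}
print(matches)  # → {'Pitcher A': 'Pitcher B'}
```

A real version would also remove each comparable from the pool once used, so no control pitcher is matched twice.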
All told, the two groups broke down as such:
Pretty darn close. Once all 37 pitchers were matched up and I averaged out their production, I looked at how the two groups performed in the next season (the year the Verducci Effect predicted demise). Here are the results:
We see very little difference between the two groups and, in fact, the Verducci Group actually performs slightly better in the “Year After Effect” season. They lose fewer innings, strike out more batters, and walk fewer opponents. Of course, the differences between the two groups are negligible, and we’re dealing with a small sample size, but this is just one more piece of evidence in the “the Verducci Effect is a myth” pile.
This isn’t, of course, to say that a large spike in innings can’t harm a pitcher. Every pitcher has a different physiology, different mechanics, different conditioning habits, etc., and there are certainly limits to how hard a pitcher should be worked. There just doesn’t seem to be a hard-and-fast rule that applies to everyone, which makes perfect sense when you think about it in this way. Everyone is different, and unless we know all of these different things about the players in question, it doesn’t seem that we can draw any meaningful conclusions.
While Tom Verducci is a terrific beat writer and a great personality (so much so that he may even have a One Hour Photo-esque stalker right here at BP—I actually worry about the repercussions I might suffer myself after writing this), it really is about time he put the Year After Effect to rest. The mounds of evidence he’s swimming in at this point might as well be gold coins to his Scrooge McDuck.