
I’ve spent the past few weeks romping through baseball history in this space, and in the meantime we’ve passed both the one-quarter and one-third marks of the season. Now that the sample sizes have been beefed up, the statistics we’re seeing have started to carry a bit more weight, both on the spreadsheet and in the public mind.

Nonetheless, I’m unconvinced there’s that much of note when it comes to the curious decline of scoring that’s been discussed at length elsewhere on this site and at points beyond. Particularly on my weekly radio hits, I’ve been preaching caution when it comes to jumping to any conclusions on this topic. For one thing, the standings grid still shows a fair share of 0-0 records, as teams haven’t played a full cross-section of opponents. More importantly, recent history has shown that early-season scoring lags tend to iron themselves out on the way to October.

Consider that, through the end of May, National League scoring rates are down a hair, from 4.71 to 4.60 runs per game, while the American League has seen a more precipitous half-run drop, from 4.90 to 4.40 runs per game. A closer look at the monthly splits shows no consistent pattern across the two leagues:

          -------2007-------   -------2008-------
Lg Month    G     R      R/G    G      R     R/G   +/-
AL April   340   1591   4.68   390   1763   4.52  -0.16
AL May     392   1927   4.92   393   1679   4.27  -0.65
AL Rest   1536   7596   4.95
AL Total  2268  11114   4.90   783   3442   4.40  -0.50
NL April   400   1769   4.42   444   2022   4.55  +0.13
NL May     450   1987   4.42   451   2093   4.64  +0.22
NL Rest   1744   8452   4.85
NL Total  2594  12208   4.71   895   4115   4.60  -0.11

April totals include March games, and Rest includes all games after May 31, though I’m omitting the early June 2008 data to avoid the distraction of a small sample size. American League scoring in both April and May of this season was below corresponding 2007 levels, dramatically so in the latter month, when scoring declined nearly six percent off April 2008 levels. National League scoring, on the other hand, was actually up from last year in both months, but it still lags behind last year’s overall rate.
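
For readers who want to check the arithmetic, here is a minimal Python sketch that recomputes the rates and year-over-year changes from the game and run totals in the table above. The totals are simply copied from that table; the table’s +/- column appears to be built from the rounded R/G figures, so the last digit can differ by a hundredth.

# Minimal sketch: recompute R/G and the year-over-year change from the
# monthly game (G) and run (R) totals in the table above.

splits = {
    # (league, month): ((2007 G, 2007 R), (2008 G, 2008 R))
    ("AL", "April"): ((340, 1591), (390, 1763)),
    ("AL", "May"):   ((392, 1927), (393, 1679)),
    ("NL", "April"): ((400, 1769), (444, 2022)),
    ("NL", "May"):   ((450, 1987), (451, 2093)),
}

for (lg, month), ((g07, r07), (g08, r08)) in splits.items():
    rg07, rg08 = r07 / g07, r08 / g08
    print(f"{lg} {month:<5}  2007: {rg07:.2f}  2008: {rg08:.2f}  "
          f"change: {rg08 - rg07:+.2f}")

# The AL's May 2008 rate measured against its own April 2008 rate,
# the "nearly six percent" decline cited above:
al_apr, al_may = 1763 / 390, 1679 / 393
print(f"AL May vs. April 2008: {(al_may - al_apr) / al_apr:+.1%}")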

Perhaps the bigger head-scratcher is that three of the four April and May scoring rates in 2007 lagged far behind where the two leagues ended up over the rest of the year. The AL’s gap of 0.22 runs between April and its overall level was the biggest in that league since 1997, while the NL’s gap of 0.29 runs was the biggest in its league since 2000. Did we ever get a coherent explanation for those gaps in the manner of the ones we’re being offered now? You say it snowed for a few days in Cleveland? Wow.

From 2000 to 2007, early-season scoring levels generally tracked well with the rest of the year, with April and May tending to straddle the overall average and the smaller number of early-season games creating a wider distribution of results:


Lg Month      G      R    R/G  stdev
AL April   2728  13505   4.95   0.25
AL May     3070  15103   4.92   0.16
AL Rest   12337  60849   4.93   0.15
AL Total  18135  89457   4.93   0.16
NL April   3144  14743   4.69   0.29
NL May     3572  16435   4.60   0.29
NL Rest   14015  65543   4.68   0.14
NL Total  19063  87999   4.62   0.11
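
For what it’s worth, here is a rough Python sketch of the kind of calculation I assume lies behind those stdev figures: pool each year’s totals for the aggregate rate, then take the spread of the individual-season rates. The yearly numbers below are invented placeholders for illustration, not the actual 2000-2007 data.

# Rough sketch of the assumed stdev calculation: aggregate R/G from the
# pooled totals, with the spread measured across individual-season rates.
# The (team-games, runs) pairs below are placeholders, not real data.
from statistics import pstdev

al_aprils = [
    (322, 1640), (330, 1605), (336, 1712), (328, 1590),
    (338, 1701), (342, 1658), (346, 1733), (340, 1591),
]

total_g = sum(g for g, _ in al_aprils)
total_r = sum(r for _, r in al_aprils)
yearly_rates = [r / g for g, r in al_aprils]

print(f"aggregate R/G: {total_r / total_g:.2f}")
print(f"stdev of the yearly rates: {pstdev(yearly_rates):.2f}")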

In light of this info, it’s certainly possible that these lower early-season levels portend lower full-season ones. But even so, we’ve seen fluctuations on the order of the AL’s half-run drop a few times in the past decade and a half:


Yr   Lg   R/G    +/-
1993 AL   4.71
1994 AL   5.23  +0.52

1998 NL   4.60
1999 NL   5.00  +0.40

1996 AL   5.39
1997 AL   4.93  -0.46

2000 AL   5.30
2001 AL   4.86  -0.44

The strike, performance-enhancing drugs, smaller parks, and ball juicing were the most-cited explanations for those earlier rises, but when scoring dropped just as rapidly, similarly convenient alibis were drowned out by a chorus of chirping crickets. This time around, however, we’ve got no shortage of explanations. The decline has been variously attributed to drug testing, ball juicing, defective maple bats, smaller ballparks, changing strike zones, bad umpiring, cooler weather, selection effects, a talent exodus from the AL to the NL, a talent influx from the minors into the NL, the Democratic-controlled Congress, and Miguel Cabrera’s rapidly expanding waistline.

Changes in the drug policy are perhaps the most frequently invoked, and in fact, the two-month dip we’ve seen in the AL would make for the most dramatic drop of the drug-testing era if it held up over the full season. But baseball’s drug policy has evolved gradually without exhibiting a consistent effect on scoring. Consider:


Year    NL     AL    Key Policy Changes
2000   5.00   5.30
2001   4.70   4.86
2002   4.45   4.81
2003   4.61   4.86   Survey testing
2004   4.64   5.01   Treatment for first offense
2005   4.45   4.76   10-day suspension for first offense, precursors banned
2006   4.76   4.97   50-game suspension for first offense
2007   4.71   4.90   Amphetamines banned
2008   4.60   4.40   More frequent in-season and off-season testing

Where’s the pattern? As for the current year, one can’t even invoke the effects of the new policy, which basically doubled the number of in-season and off-season tests, because it wasn’t ratified until less than two weeks ago. Some may say that expectations of enhanced testing in the wake of the Mitchell Report are what’s driving the drop, but that’s pure speculation.

Having studied the matter for the past few years, I’m not a big believer in PED-based explanations; I tend to favor the ball-doctoring theory to explain the scoring and power fluctuations throughout the post-1993 era. The magnetic resonance images (MRIs) from Universal Medical Systems show a synthetic rubber ring that’s unaccounted for in MLB’s ball specifications, not to mention other anomalies suggesting wider disparities in the balls used than MLB should be allowing. Furthermore, MLB’s own studies confirm such disparities, namely the use of out-of-tolerance balls, and have found that the flight distances of balls at the extreme ends of official tolerances could differ by as much as 49 feet when struck under identical conditions.

While Joe Sheehan used fly-ball rates to dismiss the possibility that ball changes might be factoring into what we’ve seen this year, the decrease in total bases per hit (what Eric Walker calls Power Factor) from recent historical levels of about 1.60 to 1.56 last year and 1.53 this year suggests this explanation may still be in play. However convenient it may be, until we have more data under our belts, not to mention new scans of 2008 balls that can be compared to last year’s models, it’s premature to haul out the ball-doctoring explanation for this year’s results. (In a brief conversation with UMS president David Zavagno, I was told that such scans are forthcoming; I’m planning a lengthier discussion with him in the near future.)
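
Since Power Factor comes up above, a quick illustration may help: it’s simply total bases divided by hits, so a slide from 1.60 toward 1.53 means each hit is producing measurably fewer bases. The hit totals in this Python sketch are invented for illustration, not actual league data.

# Power Factor as described above: total bases per hit.
# The hit totals below are invented for illustration only.

def power_factor(singles, doubles, triples, homers):
    """(1B + 2*2B + 3*3B + 4*HR) / (1B + 2B + 3B + HR)"""
    hits = singles + doubles + triples + homers
    total_bases = singles + 2 * doubles + 3 * triples + 4 * homers
    return total_bases / hits

# An invented league line of 30,000 singles, 8,500 doubles, 900 triples,
# and 4,800 homers works out to a Power Factor of about 1.56.
print(f"{power_factor(30_000, 8_500, 900, 4_800):.2f}")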

As for alternative possibilities, a couple of weeks ago J.C. Bradbury of the Sabernomics blog offered a weather-based explanation for the drop in home runs (not scoring) based on average April temperatures from the National Climatic Data Center. I’ll count myself among those who bought into that explanation, repeating it on radio and barstool a couple of times, but upon closer scrutiny it falls apart. The NCDC data cited by Bradbury isn’t confined to Major League Baseball’s 28 cities; it’s a nationwide average. A look at a map of the more regionalized temperature variations shows that temperatures were actually above normal in every major league city on the East Coast and in the upper Midwest. In fact, it appears that the only relevant cities where cooler temperatures prevailed in April were Seattle, Oakland, San Francisco, San Diego, Dallas, Minneapolis (where they play in a dome), Kansas City, and St. Louis. Data for May has yet to be released, but any temperature-related explanations have been dealt a blow. In the meantime, Bradbury himself has since dismissed the temperature-change explanation for the drop in home runs.

Having basically lobbed more spitballs than Gaylord Perry on Old-Timers’ Day in this article, I’m not going to send you away with any firm conclusions, because I don’t think there are any to be drawn. Scoring has fluctuated considerably during Bud Selig’s reign, a time of nearly constant change in the game. The crush of coverage that’s developed during that time via electronic media, 24-hour news cycles, and the blogosphere can lead to a rush of attempts to explain the game’s current trends and anomalies, often twisted, particularly by the high-profile talking heads, to fit a preconceived narrative rather than backed by hard data. By October, the larger sample sizes will likely steer us back to scoring levels more in line with recent history. It may not be a catchy answer to say, “Let’s see where the data is at the end of the year,” before we firm up our theories about this scoring drop, but it’s the right one.
