
While looking toward the future with our comprehensive slate of current content, we'd also like to recognize our rich past by drawing upon our extensive (and mostly free) online archive of work dating back to 1997. In an effort to highlight the best of what's gone before, we'll be bringing you a weekly blast from BP's past, introducing or re-introducing you to some of the most informative and entertaining authors who have passed through our virtual halls. If you have fond recollections of a BP piece that you'd like to nominate for re-exposure to a wider audience, send us your suggestion.

As Jered Weaver prepares to serve his six-game suspension, take in this look at trends in HBP rates over time, which originally ran as a "Schrödinger's Bat" column on May 4, 2006.

“The great tragedy of Science–the slaying of a beautiful hypothesis by an ugly fact.”
–British biologist Thomas H. Huxley (1825-1895)

On April 22nd, Rockies setup man Jose Mesa drilled Giants shortstop Omar Vizquel in the back with his first pitch. The next day, Giants starter Matt Morris hit both Matt Holliday and Eli Marrero in the first eight pitches he threw and was tossed from the game, along with manager Felipe Alou and pitching coach Dave Righetti. That was followed by the customary warnings to both teams, in observance of the practice that Major League Baseball adopted in 1994.

Later in the game, Jeff Francis hit Steve Finley and was not ejected, much to the consternation of what was left of the Giants coaching staff. Of course, under the double-warning rule, the umpires still have discretion over whether to eject a pitcher after the warnings have been issued, a discretion that yours truly thinks is not exercised nearly as often as it should be. Finally, Ray King plunked Vizquel again in the 8th, and was ejected along with Rockies skipper Clint Hurdle.

The Mesa/Vizquel feud dates back to 1998, when the two were still teammates with the Indians and Vizquel celebrated a spring training home run off of Mesa by doing a cartwheel afterwards. Things went downhill after the 2002 publication of Vizquel’s book Omar! My Life On and Off the Field, wherein Vizquel said of Mesa’s performance in Game Seven of the 1997 World Series:

"The eyes of the world were focused on every move we made. Unfortunately, Jose's own eyes were vacant. Completely empty. Nobody home. You could almost see right through him. Not long after I looked into his vacant eyes, he blew the save and the Marlins tied the game.”

Well, at least no one can accuse Vizquel of being the model teammate.

Mesa then vowed to hit Vizquel every time he faced him, and he did exactly that on June 12, 2002, in the 9th inning of a 7-3 game when Mesa was pitching for the Phillies. And he hit him the next time the two faced each other, which was two Saturdays ago in Denver.

Mesa is now appealing a four-game suspension handed down by Bob Watson. I kid you not, Rockies GM Dan O’Dowd said on the Rockies radio pre-game show on April 29th that he was surprised Mesa was suspended, and that he didn’t think Mesa was throwing at Vizquel. I know GMs like to stand by their players, but really…

Putting the emotions and politics aside, of the more than 14,600 games that have been played since the beginning of the 2000 season, the April 23rd game marks the 138th time that four or more batters have been hit in the same game. Pondering that fact led me to take up the topic of hit batsmen in this week’s column.

A Pair of Trends

To lead off, it’s always good to have a historical perspective. In that vein, I offer the following graph that shows the number of hit batsmen per 1,000 plate appearances in both the American and National Leagues since 1901.
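For readers who want to reproduce the rate the graph plots, here is a minimal sketch in Python. The file and column names are hypothetical stand-ins, not BP's actual data source; the point is simply how hit batsmen per 1,000 plate appearances is computed for each league-season.

```python
import pandas as pd

# Hypothetical input: one row per league-season with total hit batsmen and
# plate appearances. File and column names are assumptions for illustration.
seasons = pd.read_csv("league_season_totals.csv")  # columns: year, league, hbp, pa

# The rate shown in the graph: hit batsmen per 1,000 plate appearances.
seasons["hbp_per_1000_pa"] = 1000 * seasons["hbp"] / seasons["pa"]

# One line per league, season on the x-axis, mirroring the AL/NL graph.
ax = seasons.pivot(index="year", columns="league", values="hbp_per_1000_pa").plot()
ax.set_ylabel("HBP per 1,000 PA")
```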

There are several interesting aspects to this graph that lead us to ask two primary questions.

First, you’ll notice that the number of hit batsmen has fluctuated fairly widely over time, ranging from a high of 10.67 per 1,000 plate appearances in the American League in 2001 to a low of 2.82, also in the American League, in 1947. The rate at which batters were hit decreased steadily from the turn of the last century through the late 1940s, and then increased for the next twenty years to a peak in 1968. It then decreased again until the early 1980s, but from 1985 it rose quickly through 2001, to a rate at which it has since leveled off.

We humans love causal explanations for apparent trends like this, so the first question that comes to mind is: just what can explain these changes over time?

Secondly, as you can see, batters have historically been hit at slightly different rates in the two leagues, with the American League seeing more hit batsmen from 1909 through 1928, and the National League then doing so until 1950. The leagues then traded the title back and forth until 1970, when the AL began a run in the lead of more than 20 years that lasted until the strike-shortened 1994 season. Since that time the back and forth has returned, with the AL leading seven times and the NL five. The second question then is: what are we to make of these differences between the leagues?

In the remainder of this week’s column we’ll tackle the first question related to the overall historical trends, and leave the second–which deals with league differences–for next week.

The Big Picture Trend

A number of theories have been proposed to explain the historical trends we see in the rate of hit batsmen. Let’s look at them.

On August 16, 1920, Carl Mays of the Yankees hit Ray Chapman of the Indians in the head with a pitch. Chapman died the next day, becoming the only major league player ever fatally injured by a pitch in a game. Although Mays was vilified in some quarters, dirty balls were also held responsible; as a result, umpires began to replace dirtied balls much more often during games.

At first reflection, any baseball fan might assume that this tragic event would have had an immediate impact on the way the game was played, with pitchers more afraid to throw inside and the number of hit batsmen dropping as a result. Additionally, fewer soiled balls in play would theoretically be easier for hitters to pick up, allowing them to duck, dive, or dodge the inside pitch. In either case, we’ll call this the “physical hazard” theory for the reduction in hit batsmen.

While it’s a nice theory, you can see from the graph that the longer trend in the reduction of batters hit had been operative in the American League since 1911, and in the National League stretching all the way back to 1901. In fact, contrary to the theory that the Chapman beaning may have had a dampening effect, a closer examination of the period between 1919 and 1925 reveals that hit batsmen per 1,000 plate appearances actually went up from the year following the beaning (1921) through 1923, before resuming its downward trend.

Year    AL      NL
1919    6.80    6.28
1920    6.49    5.76
1921    6.76    5.12
1922    7.22    5.62
1923    7.35    5.62
1924    6.94    4.99
1925    5.67    4.90

So the physical hazard theory seems to have little validity. From this, one might then reason that if that monumental event didn’t signal a change then it’s unlikely that any other isolated incident or play would have, either.

So what about a broader theory that takes into account a cost/benefit valuation of hitting batters? For example, it could be that pitchers adjusted the frequency with which they hit opposing batters based on the cost of doing so. In times when runs are scarce, hitting a batter costs relatively more than when runs are plentiful, since there is a greater probability that the batter would have been put out had he not been hit. The result would be fewer hit batsmen in depressed offensive environments, and more in inflated ones. It sounds like a reasonable idea, and we’ll dub it the “offensive context theory.”

We can test this theory by taking a look at the cost of hitting a batter in terms of the Win Expectancy Framework (WX) for both the American and National Leagues since 1901. The framework allows us to estimate how much a hit by pitch is worth in terms of wins and we can then graph the results for both leagues.
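The framework itself isn't reproduced here, but the shape of the calculation can be sketched. What follows is an illustrative outline under assumed representations (the state encoding, names, and lookup table are mine, not the actual WX implementation): look up the batting team's win probability before and after each hit by pitch, and average the change.

```python
from typing import Dict, List, Tuple

# A game state for win-expectancy lookup: (inning, is_bottom, outs, base_state, run_diff).
# base_state encodes the runners, e.g. "1--" for a runner on first. These names and
# encodings are assumptions for illustration, not the article's actual WX framework.
State = Tuple[int, bool, int, str, int]

def hbp_win_value(events: List[Tuple[State, State]],
                  win_expectancy: Dict[State, float]) -> float:
    """Average change in the batting team's win expectancy across hit-by-pitch events.

    Each event is (state_before, state_after); win_expectancy maps a game state to
    the batting team's probability of winning, estimated from historical games.
    """
    deltas = [win_expectancy[after] - win_expectancy[before]
              for before, after in events]
    return sum(deltas) / len(deltas)
```

Run separately for each league and season, this yields the win value of a hit by pitch that the article graphs for both leagues.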

As you might have guessed, the increase in Win Expectancy for each hit batsman was high in the Deadball Era at over 3%, and then decreased from the early 1920s until the late 1930s as offensive levels rose, reaching a low point just over 2.6%. The values then began to climb again, reaching over 3% in the 1960s, and, after a brief spike in 1989, fell as offensive levels rose again.

So, does the offensive context theory hold water? If you were to overlay these two graphs you would find little in common. For example, the rate of hit batsmen in the Deadball Era declined steadily, even though the cost remained fairly constant until the offensive explosion of 1920. Offensive levels then began to decline in the late 1930s, making the cost of hitting a batter rise, although we find that hit batsmen rates continued to decline into the late 1940s. And again, as the cost of hitting batters rose in the 1950s and from 1993 on, more batters were being hit. In fact, the WX value of a hit by pitch turns out to have almost zero correlation with the rate at which batters are hit. Another beautiful theory spoiled by some ugly facts.
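That correlation claim is a simple check once the two yearly series are in hand. A minimal sketch, assuming the series have already been assembled and aligned by season:

```python
import numpy as np

def hbp_value_rate_correlation(wx_per_hbp: np.ndarray, hbp_rate: np.ndarray) -> float:
    """Pearson correlation between the yearly win value of an HBP and the HBP rate.

    Both arrays are aligned by season (one entry per year for a given league).
    A value near zero means offensive context doesn't track how often batters are hit.
    """
    return float(np.corrcoef(wx_per_hbp, hbp_rate)[0, 1])
```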

Okay, offensive levels don’t seem to drive HBP rates, but what if an increased rate of hitting batters has the effect of depressing offense, and vice versa? We’ll label this the “intimidation theory.” After all, offensive levels rose as batters were being hit less often throughout the 1920s, and run-scoring dropped as batters were being hit more often in the 1960s. Many former players, especially those who had the “pleasure” of facing Don Drysdale and Bob Gibson, tend to favor this theory.

Unfortunately, the intimidation theory has the same underlying problem as the one that preceded it. While the examples cited in the previous paragraph seem to make sense, the theory fails to explain why hit batsmen declined throughout the Deadball Era, and why in the offensive eras of the 1950s and post-1993 the rate of hitting batters has actually increased.

Another popular theory, and one that we’ll tackle in next week’s column, is that hit batsmen have been on the rise since the 1973 introduction of the designated hitter, because the pitcher no longer faces the consequences of hitting opposing batters himself. This is the so-called “moral hazard theory.” A quick glance at the first graph militates against this idea, however, since the HBP rate actually began to decline in 1969, and continued to do so through the first eleven years of the DH. In addition, the rate rose and fell in both leagues, rather than affecting only the AL as you would expect.

A couple of years ago, J.C. Bradbury of the excellent blog Sabernomics, along with Doug Drinen, studied the issue of HBP differences using play-by-play data. One of the conclusions they came to was that talent dilution as a result of the 1993 expansion contributed to the rise in hit batsmen post-1993. The theory is that a greater percentage of pitchers with less experience produces more accidental hit batsmen. At first glance this “expansion theory” makes a lot of sense. Take a look at the following table, which lists each expansion event along with the rate in the year prior to expansion and in the first year of the expanded league.

Pre-expansion        Post-expansion       Diff
AL 1960   5.76       AL 1961   5.22      -0.54
NL 1961   5.48       NL 1962   6.11      +0.63
AL 1976   5.18       AL 1977   5.42      +0.24
NL 1992   5.48       NL 1993   6.66      +1.18
NL 1997   9.02       NL 1998   8.38      -0.64
AL 1997   7.78       AL 1998   8.77      +0.99
In all but two instances, the rate of hitting batters went up in the league to which baseball added teams. It should be noted that in the first four expansions the league that did not expand also saw its rate increase, which you might expect, since expansion in one league also dilutes talent in the other.

What this table doesn’t show–though it's captured in the graph–is that the overall trends in each case were not really affected. When expansion came to the AL in 1961 and the NL in 1962, hit batsmen were already on the rise. When the AL expanded in 1977, the rates were declining and continued to do so afterward. In both 1993 and 1998 the rates had already been increasing since 1985, so while expansion may have egged on the increase, it clearly wasn’t the only factor. In other words, expansion did not change the direction of trends that were already underway. As a result, the expansion theory doesn’t appear to work as a general explanation, and in any case it can’t shed any light on the trends prior to 1961, when both leagues had eight teams.

Finally, there have been articles in the popular press over the past few years that argue that a confluence of factors is responsible for the increasing rate at which batters are being brushed back. For example, a 2003 article from USA Today argued that a 2000 directive from Major League Baseball to change how umpires called strikes (in order to conform more closely to the rule-book definition) was the primary culprit. The “new strike zone theory” contends that adhering to the traditional definition has resulted in calling more strikes on the inside corner, and that pitchers are taking advantage of the fact, with hitters being plunked more often as they dive out over the plate in an attempt to hit what used to be strikes off the outside corner. Unfortunately for the new strike zone theory (at least as a single explanation), the increase in batters being plunked can be traced to almost 15 years before the “new” strike zone was implemented.

In addition, if you’re looking for single causes, one might imagine that the double-warning rule instituted in 1994 would have had a dampening effect on hit batsmen. After a warning, pitchers might be wary of throwing at or near hitters when doing so would almost certainly get them ejected. However, although the rate went down slightly in the AL in 1994, it did not in the NL, and after that it continued its upward trend in both leagues.

Another factor mentioned in the article, however, appears to be more promising. The article speculates that a generation of pitchers accustomed to pitching to hitters with aluminum bats don’t go inside as often, since doing so is less effective when hitters can still fist a ball off their hands for a hit using a bat that doesn’t shatter. As a result of this “aluminum theory,” hitters have adjusted by looking for pitches over the outside corner, diving at the ball, and standing closer to the plate. When this style of hitting is coupled with pitchers who, at the professional level, finally do try to pitch inside but do it poorly, you end up with many more batters being hit.

What is satisfying about this theory is that it accounts for the recent rise in HBP rates in both leagues and seems to have timing on its side. Although the first patent for a metal bat was granted in 1924, Worth didn’t introduce the first aluminum bat until 1970, and it wasn’t until the late 1970s that bats by Worth (and, especially, Easton) significantly increased the popularity of aluminum. Seeing the rates begin to climb five to ten years later would therefore seem to be in line with that timing.

Systemic Theories

In the end, theories like the aluminum bat theory are the kinds of systemic explanations that seem to be needed to explain shifts in the game such as those related to hit batsmen. Instead of looking for single incidents, as the physical hazard or strike zone theories do, or very subtle causes, like the offensive context or intimidation theories, what we should probably be looking for are systemic changes in how the game is played, changes that may even originate well before players reach the professional level. While I don’t have any immediate answers for the forty-year decline in the first part of the last century, or the increase during the following twenty years, I think those lines of inquiry will prove to be more promising, and the theories they produce less likely to be the victims of a few inconvenient facts.


Dan Fox

 

BillJohnson
8/04
Interesting to revisit this after some of the HBP unpleasantness of the last week. One historical note: While Don Drysdale was every bit the head hunter he was made out to be, Bob Gibson's career HBP rate was only slightly above league average at the time, at about 6.3 HBP per 1000 plate appearances. (Drysdale's rate approached twice that.) Of course, the players _thought_ Gibby was throwing at them, and they should know. This leads to an alternative interpretation of those curves: might there be era-to-era variations in the probability that a pitch thrown in the direction of a batter's body actually hits him?
kmbart
8/04
A very interesting look at a difficult topic. It would seem to me that the two main factors that determine whether a pitch hits a batter are the direction it's thrown in (more toward the batter = more likely to hit him) and its velocity (faster = less time to get out of the way). If we assume that HBPs are predominantly unintentional events, and that pitchers do not intend to throw AT the batter (near, sure), then my first-glance impression would be that, as overall talent/skill levels rose over time due to an expanding field of talent from which to draw players, HBP rates fell, reaching their nadir around integration, which marked a significant expansion in the talent pool.

Beyond that, while I obviously don't have average pitch speeds for the last century, I think most would find it fair to assume that pitchers have thrown, on average, faster and faster over time, resulting in less and less time for batters to get out of the way. Even if that trend was in place before the low HBP point in the 1940s, there is always a trade off between the effect of the two "causes," and maybe the '40s were when the overall HBP-decreasing effect of increasing pitcher accuracy began to be trumped by the HBP-increasing effect of rising pitch speeds. Then the "metal bat theory" posited above could also have had an effect from the 1980s until today.

Thanks for re-posting this one, good stuff to chew on.
mikefast
8/04
What drives HBP rate far more than pitch speed is pitch type and pitcher/batter handedness.

Pitches thrown inside from a same-handed pitcher are far more likely to hit the batter in the torso--sinkers are the classic pitch type for this, but four-seam fastballs, changeups, and curveballs also occasionally qualify.

Pitches thrown inside from an opposite-handed pitcher are more likely to hit the batter in the feet or lower legs--sliders are the classic pitch type for this, and sometimes cutters and four-seam fastballs, though this doesn't happen as often as batters getting hit in the torso by a same-handed pitcher.

Hm, it strikes me I should probably get my research together and publish something on this.
kmbart
8/04
Thanks, Mike. In that case, maybe increased LOOGY and ROOGY use is another driver.
BillJohnson
8/04
Interesting idea, but it doesn't seem to be supported by the facts. A semi-random check of many (but not all) LOOGYs active in the last 15 years reveals that most have HBP rates ranging from about 7 per 1000 batters faced to 12 -- in other words, more or less the same as league average for that time, maybe just a tad lower. Outliers range from 3/1000 to close to 20/1000 (I had no idea that Trever Miller was such a "gunner"). The possibility remains open that LOOGY use increases HBP rates indirectly, by reducing the RH/LH at-bats that, by Mike's reasoning, rarely lead to HBPs. However, the LOOGYs themselves don't seem to be particularly guilty, by and large.
mikefast
8/05
The HBP rate for same-handed batter and pitcher is about twice the rate for opposite-handed. That's for the time period 2007-2010.

From that you could figure out whether LOOGY/ROOGY usage is a significant part of the trend.
Oleoay
8/08
Dan had a lot of great stuff.
caernavon
8/04
...it could be the case that pitchers adjusted their frequency of hitting opposing batters based on their recognizing the costs of doing so. In times where runs are scarce, hitting a batter would cost relatively more than when runs are plentiful, since there is a greater probability that the batter would have been put out had they not been hit.

Isn't the opposite just as likely to be true? In a depressed runs environment, the cost of hitting a batter is lessened, as it will be harder for the hit batter's teammates to drive him in. And in an environment where runs are plentiful, hitting a batter could be very costly, as his teammates are more likely to knock in the hit batter.