A strong bench would seem to be one of those indispensable elements of a successful team. After all, if you need to generate some offense late in a game, you need players on the bench you can count on. (Not to mention, of course, the need to give players a break and have replacements for injuries. For this article I’m just looking at pinch hitting.)
The 2005 Phillies seemed to have just what they needed, in the form of four quality outfielders: Bobby Abreu (.286/.405/.474) in right, Pat Burrell (.281/.389/.504) in left, and Kenny Lofton (.335/.392/.420) and Jason Michaels (.304/.399/.415) sharing time in center. This must have been a manager’s dream, right? Lofton or Michaels on the bench, able to come in and get on base to keep a rally going? Let’s see how it worked out.
Not surprisingly, both Lofton and Michaels had plenty of pinch hitting appearances that season: 13 for Lofton, and 32 for Michaels. Combined, they hit a whopping .175/.267/.250 in those 45 plate appearances. Surely, this isn’t what Charlie Manuel had been hoping for. So, what happened?
Naturally, we can’t conclude much based on 45 plate appearances. So, let’s try a larger sample. How about all players who pinch hit in 2005? Overall, pinch hitters hit .228/.306/.336 that season, while the same players averaged .254/.319/.395 when not pinch hitting. Clearly the 140-point drop in batting average for Lofton and Michaels included a lot of bad luck, but there is still a very significant decline in performance for players when pinch hitting. In fact, this calls into question the usefulness of pinch hitting as a strategy: if players perform that much worse coming off the bench, the player being replaced had better be really bad.
Considering only pinch hitting situations in which the player being replaced is not a pitcher, our worst fears appear to be confirmed. In 2005, the average player being replaced hit .250/.315/.392, and the average pinch hitter had season stats of .257/.322/.402. So, on the surface, managers are bringing in superior hitters to help the offense. However, the pinch hitters averaged just .224/.306/.328 in these situations, which is significantly worse than the season averages of the players they were replacing.
To better quantify the effect on offenses, I will switch from traditional batting stats to wOBA, a statistic that translates directly to run production (a change of 0.100 in wOBA corresponds to 0.087 runs per plate appearance). The statistic is scaled such that players with average hitting profiles will have wOBA values comparable to their OBPs. Redoing the analysis above in terms of wOBA, the average position player being replaced had a wOBA of .322, the average pinch hitter was .329 for the season, and the actual performance of the pinch hitter was .298. In other words, instead of increasing a team’s scoring, the act of pinch hitting appears to have decreased it. Expanding the sample to include 2000-2005, the numbers remain virtually the same: .322, .329, and .304, respectively.
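The wOBA-to-runs conversion cited above is easy to apply directly. As a sketch (the helper function and the 600-PA example are my own illustration, not from the article):

```python
# Rule of thumb from the text: a 0.100 change in wOBA is worth about
# 0.087 runs per plate appearance.
RUNS_PER_WOBA = 0.087 / 0.100  # runs per PA, per full unit of wOBA

def runs_from_woba_diff(woba_diff, plate_appearances):
    """Estimate the run impact of a wOBA gap over a number of PAs."""
    return woba_diff * RUNS_PER_WOBA * plate_appearances

# The position-player sample above: the replaced starters projected at
# .322, while the pinch hitters actually produced .298, a 24-point gap.
per_600_pa = runs_from_woba_diff(0.322 - 0.298, 600)
print(round(per_600_pa, 1))  # about 12.5 runs lost per 600 PA at that gap
```

No team gives 600 PA to its pinch hitters, of course; the point is simply that the per-PA cost is large enough to matter.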
Of course, there are a lot of extra factors at play here. I haven’t accounted for platooning, which is one of the biggest reasons teams replace position players. Likewise, managers will tend to bring reserves into blowouts to get them playing time, thus intentionally bringing in an inferior player. Nor have I considered the identity of the opposing pitcher, whether he is a starter or reliever, the ballpark, groundball/flyball tendencies, or home field advantage.
To create a more thorough analysis, I’ve created a baseball model that includes all of these variables. The goal of the model is to be able to determine the odds of any particular outcome in a matchup between any batter and pitcher, in any setting. While naturally one can’t do this with 100% accuracy for any particular matchup, we’re interested in overall trends here, and thus the model should suffice quite nicely. Running some sanity checks on the model, all appears to be well. For example, no one will be surprised to learn that Coors is the best hitters’ park in the majors (by a huge margin), that the home team hits about 10 points better (in wOBA), that platoon effects are real, or that pitchers are more effective as relievers than as starters.
So, what about pinch hitters? Indeed, an average hitter being used as a starter will have a wOBA of .341, while the same average hitter will have a wOBA of .320 when pinch hitting. That’s a 21-point drop in wOBA (or, more precisely, a 6% drop), corrected for everything I can think of. For those who have read The Book, this is a familiar result. And indeed, we still find that the largest cause of this is a much higher strikeout rate (23% higher than when starting), though it’s also true that fewer batted balls fall for hits. (The current data also confirm the assertion made in The Book that all pinch hitters are equally penalized; there is no such thing as a “pinch hitting specialist” who can be expected to hit just as well as, or even better than, he would as a starter.)
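For the record, the 6% figure follows directly from the two wOBA values; a quick check using nothing beyond the article's own numbers:

```python
# Relative size of the 21-point pinch hitting penalty:
starter_woba, ph_woba = 0.341, 0.320
pct_drop = (starter_woba - ph_woba) / starter_woba
print(f"{pct_drop:.1%}")  # prints 6.2%, i.e., roughly the 6% cited
```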
Thus, we’re set to address the question raised above: is pinch hitting good or bad, and by how much? I’ll use the same 2000-2005 data set from above, but this time will only consider pinch hit situations in which the batting team is no more than four runs down and no more than two runs up. (This eliminates the cases in which managers knowingly bring inferior players into blowouts.) There were 21,784 such pinch hit chances in those six years, of which 12,083 involved a pitcher being replaced. Surely, even with a 21-point wOBA loss, a pinch hitter should be better than a pitcher, right? Yes. According to the model, the pinch hitter should have hit 132 points better than the pitcher would have hit; in reality he hit “just” 130 points better, nearly dead on the model’s estimate. (Had we omitted the pinch hit penalty, we would have expected the hitter to be 152 points better than the pitcher.) OK, we can breathe a sigh of relief. Pinch hitting for a pitcher is generally a very safe move.
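Using the same 0.087-runs-per-0.100-wOBA conversion, the league-wide value of pinch hitting for pitchers can be roughed out from the figures above (the totals below are my own back-of-the-envelope arithmetic, not numbers from the article):

```python
# Back-of-the-envelope value of pinch hitting for the pitcher,
# using the conversion 0.100 wOBA ≈ 0.087 runs per PA.
RUNS_PER_WOBA = 0.087 / 0.100

chances = 12083    # PH-for-pitcher chances, 2000-2005 (from the text)
woba_gain = 0.130  # actual improvement over the pitcher, in wOBA

total_runs = chances * woba_gain * RUNS_PER_WOBA
print(round(total_runs))      # ~1367 runs gained league-wide over six years
print(round(total_runs / 6))  # ~228 runs per season across all of MLB
```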
Also, a small fraction of pinch hitters had the platoon disadvantage, while the position player being replaced would have had a platoon advantage. Since the majority of these are instances in which a new pitcher was brought in to face the pinch hitter, I will ignore these in this analysis.
The remaining 9,025 pinch hit chances have been broken into those that created a platoon advantage (i.e., a switch hitter or lefty replacing a righty when facing a RHP, or vice versa), and the rest. Let’s first look at the moves designed to create a platoon advantage, which account for 78% of these chances. On average, the model predicts a modest improvement of 19 points in wOBA for the pinch hitter, compared with the player he is replacing. Again, this is less than what one would expect if ignoring the “pinch hitting penalty,” but it still amounts to an extra run being created every 60 or so times such a move is made.
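The "extra run every 60 or so moves" claim is just the runs conversion applied to the 19-point gain; a quick check:

```python
# One pinch hit appearance per move; 0.100 wOBA ≈ 0.087 runs per PA.
RUNS_PER_WOBA = 0.087 / 0.100

runs_per_move = 0.019 * RUNS_PER_WOBA  # 19-point wOBA gain per PH chance
moves_per_run = 1 / runs_per_move
print(round(moves_per_run))  # about 60 moves per extra run
```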
The remaining 22% of the time, the pinch hitter replaced a position player who bats from the same side. In these cases, the difference between the expected wOBA of the pinch hitter and that of the player he replaced, according to the model, is 11 points. Of course, this is just an average, with some substitutions being better and others worse. In fact, a significant fraction (about 1/3) of such pinch hitters are actually worse than the players being replaced, once one accounts for the significant pinch hitting penalty. (If one weren’t aware of the pinch hitting penalty, one would expect the pinch hitters to average 30 points better than the replaced players, making nearly all of these substitutions look profitable.)
To wrap this study up, I’ve tabulated pinch hitting performances for all managers with at least 100 pinch hitters in my non-pitcher sample. To read the first line of the table, Bob Boone managed 428 games from 2000-2005, and in those games there were 101 pinch hitters in situations meeting our sample criteria. According to the model, his pinch hitters should have averaged .034 better in wOBA than the players being replaced, while in actuality they were .011 better. The final six columns contain the pinch hitting stats broken into situations in which the substitution created a platoon advantage and those in which both hitters bat from the same side.
[Table: pinch hitting results by manager, with column groups for All Pinch Hitters, PH for Platooning, and PH Same Hand; per-manager rows not preserved.]
First, a check on the model. Comparing its predictions with the actual results, one finds that the differences are entirely explainable by random effects in the actual data. Thus, at least for the purposes of this study, the model can be treated as the absolute truth.
Looking over this table, what do we see? Well, first of all, it’s certainly positive that the vast majority of MLB managers generally make their teams better when bringing in a pinch hitter; only Buck Martinez and Ozzie Guillen managed to bring in hitters worse than the position players being replaced. (Interestingly, both lucked out, in that the pinch hitters outperformed the model.) These two even managed the impressive feat of bringing in worse pinch hitters even when the switch gave their teams the platoon advantage, thus negating one of baseball’s best-known matchup tactics.
What overall trends are there in the table? It’s clear that managers who pinch hit more frequently rank lower in the list. In fact, the only managers to pinch hit for a position player more than once every two games were John Gibbons, Tom Kelly, and Buck Martinez; all rank in the bottom third. In other words, there are only a limited number of good pinch hit opportunities.
There is an equally strong correlation between number of games managed and the effectiveness of pinch hitting. The cause-effect link, however, is unclear: it’s equally likely that veteran managers become more effective at using pinch hitters as it is that poor tacticians get canned. Given that the difference between Jerry Manuel and Buck Martinez is under two runs per 162 games, it seems unlikely that pinch hitting mishaps are enough to get a manager fired. More likely, astute managers eventually pick up on the fact that pinch hitters simply don’t perform as well as one would expect, and adjust their strategies accordingly.
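To see how small these manager-to-manager differences are in run terms, Bob Boone's line from the table can be converted with the same rule of thumb (the arithmetic below is my own illustration using the figures quoted earlier):

```python
# Bob Boone, 2000-2005: 428 games, 101 qualifying pinch hitters,
# expected +.034 wOBA vs. actual +.011 (figures from the table).
RUNS_PER_WOBA = 0.087 / 0.100  # runs per PA, per full unit of wOBA

games, ph_count = 428, 101
expected_gain, actual_gain = 0.034, 0.011

shortfall_runs = ph_count * (expected_gain - actual_gain) * RUNS_PER_WOBA
per_162 = shortfall_runs * 162 / games
print(round(per_162, 2))  # about 0.76 runs below expectation per 162 games
```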
What can we conclude from all this? Simply put, when a pinch hitter comes in, you should expect him to do significantly worse than if he were a starter, so a substitution that would look good on paper may actually cost you runs. Managers beware!