January 25, 2010
Profiling a Manager, Part 2
Your team has a bit of a problem. Namely, it’s the eighth inning and you are behind by two runs as you take the field to play defense. Worse, your starter is tired and you need to make a call to the bullpen. The question now is whom you should summon. After all, bringing a pitcher in now affects his availability for tomorrow. Should you bring in your ace set-up reliever, try to keep the deficit at two, and hope your offense can come back? Should you bring in the lesser reliever, figuring there’s no use wasting such a valuable resource on a game that, more likely than not, you will lose? Decisions, decisions.
This is the anti-save situation: It’s late, it’s close, but, unlike the actual save situation, you are losing. If this were a regular save situation, there would be a predictable rhythm to how it would unfold. If it’s the eighth inning, you bring in the "set-up guy." If it’s the ninth, you employ your "closer." In other words, it’s a rote formula that anyone could use. What to do, though, when it’s the other way around and you’re not holding a lead, but chasing one? How a manager handles this situation says a lot about his style. Is he the sort who likes to go for broke, or does he prefer to save his resources for a rainy day? No one bothers to think about this sort of situation, as they don’t hand out medallions for keeping the score close so that the offense at least has a chance. As a reliever, if you come in and pick up a win, you are derided as a vulture!
Figuring out how a manager handles this situation turned out to be harder than I expected, because, once again, a manager is limited to what resources he has been given. Some managers are blessed with a bevy of outstanding relievers who can handle any situation. Some have to make do with bullpens that are a threat to explode on a nightly basis. Furthermore, reliever performance is a fickle thing due to the small sample sizes inherent in pitching to only a few batters every couple of nights.
Warning: Gory Methodological Details
The first part, defining anti-save situations, was easy enough. I looked for situations in the seventh and eighth innings in which a team was pitching and down by either one or two runs at the start of the inning. (My database stretched from 2003-2009.) The second part, figuring out how the manager was playing the situation, turned out to be tougher.
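For the curious, the filter itself is simple enough to express in a few lines. This is a toy sketch, not my actual database query; the function name and inputs are mine:

```python
def is_anti_save(inning, runs_behind_at_inning_start):
    """Hypothetical helper: True when the pitching team is in an
    'anti-save' situation -- the seventh or eighth inning, trailing
    by exactly one or two runs as the inning begins."""
    return inning in (7, 8) and runs_behind_at_inning_start in (1, 2)
```

Tied games, blowouts, and ninth innings all fall outside the definition; only the late-and-close deficits count.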
At first, I figured that I would look at the raw run averages (RA) of the pitchers brought into these situations. Leaving aside for a moment the problem of statistical reliability, a manager who brings in a guy with a 3.00 RA clearly values the situation more than the manager who brings in the guy with the 5.00 RA. But what happens when a manager doesn’t have a 3.00 RA guy to bring in, and (unlikely, but possible) the 6.00 RA guy is actually his best bet?
Instead, I went for an ordinal ranking method. For each team-year, I ranked their relievers from No. 1 to whatever based on seasonal FIP (that is, the rankings for 2009 were based on the relievers’ FIPs at the end of the 2009 season). This way, if a manager brings in his best guy in this situation, he’s not penalized if the guy is a bum, and he gets some recognition for at least trying. The problem is that I’ve taken data that are roughly interval and turned them into a much less-useful ordinal variable. Worse, I’m going to do something I often told my stats classes not to do: take the average of ordinal data! It’s not ideal, but this is a tough nut to crack.
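The ranking step looks roughly like this (again, a sketch with made-up names, not the production code):

```python
def rank_relievers_by_fip(fips):
    """Given {reliever: seasonal FIP} for one team-year, return
    {reliever: ordinal rank}, where rank 1 is the lowest (best) FIP."""
    ordered = sorted(fips, key=fips.get)  # ascending FIP = best first
    return {name: i + 1 for i, name in enumerate(ordered)}
```

So a pen of {"Smith": 2.90, "Jones": 3.80, "Brown": 4.50} ranks Smith first and Brown third, regardless of how good those FIPs are in absolute terms, which is exactly the point: each reliever is judged only against his own bullpen mates.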
In anti-save innings in which more than one reliever appeared, I took the best-rated reliever of the bunch. Because I looked at both the seventh and eighth innings, a manager could have two "anti-save" innings per game. Most managers saw around 50-60 such innings over a season. I looked at the average ranking, again relative to the other bullpen options available, that the manager employed in these anti-save innings.
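Putting the two rules together (best rank per inning, then average across innings), the per-manager score works out to something like this hypothetical sketch:

```python
def anti_save_score(innings, ranks):
    """innings: a list of anti-save innings, each a list of the relievers
    who appeared in that inning. ranks: {reliever: ordinal rank within his
    own bullpen}. Take the best (lowest) rank used in each inning, then
    average those bests across all anti-save innings."""
    best_per_inning = [min(ranks[p] for p in used) for used in innings]
    return sum(best_per_inning) / len(best_per_inning)
```

A score near 1.00 means the manager habitually goes to his best arm in these spots; a score north of 4.00 means the mop-up crew is getting the call.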
The manager who most jealously guarded an anti-save deficit during the 2009 season? Clint Hurdle (average ranking = 1.75). Before he, you know, got fired. Lest you think that was the reason, though, in second place was his successor, Jim Tracy (2.11), who took the Rockies to the playoffs and won National League Manager of the Year. Then again, in third place was Bob Melvin (2.23), prior to his untimely dismissal from the Diamondbacks. Maybe there’s something to this theory, as now-former Indians manager Eric Wedge came in sixth. For the record, Manny Acta, who also got the axe in Washington, finished in the middle of the list.
The manager who was most lackadaisical about trying to keep the deficit close was Dave Trembley, followed by Cecil Cooper and Ken Macha. I’m not sure what the message is there with those three. But for the morbidly curious, here’s the list, complete with the average ranking of the reliever brought in, relative to his own bullpen:
Manager            Average Reliever Ranking
Clint Hurdle       1.75
Jim Tracy          2.11
Bob Melvin         2.23
Ron Washington     2.33
Jim Riggleman      2.34
Eric Wedge         2.41
Lou Piniella       2.47
Jim Leyland        2.58
Charlie Manuel     2.60
Jerry Manuel       2.63
Ozzie Guillen      2.65
Joe Maddon         2.71
Bob Geren          2.82
Bobby Cox          3.21
John Russell       3.22
Manny Acta         3.22
Cito Gaston        3.22
Bud Black          3.35
Freddi Gonzalez    3.35
Joe Torre          3.35
Dusty Baker        3.38
Joe Girardi        3.39
A.J. Hinch         3.53
Tony LaRussa       3.54
Mike Scioscia      3.56
Don Wakamatsu      3.69
Ron Gardenhire     3.82
Terry Francona     3.94
Trey Hillman       4.00
Bruce Bochy        4.04
Ken Macha          4.15
Cecil Cooper       4.17
Dave Trembley      4.28
As in my last article, I looked for the reliability of this new toy stat using intra-class correlation (AR(1) rho). It came up as a disappointing .253 over four years (2006-2009). That’s high enough that it makes me think that there’s something there, but that we’d need a few more years’ worth of data to get a good read on it.
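For readers who want to tinker at home: the AR(1) intraclass correlation I used requires a mixed-model package, but a rough hand-rolled stand-in is to pool every consecutive-year pair of scores for managers who appear in both years and compute a plain Pearson correlation. This is my simplification for illustration, not the exact method:

```python
from statistics import mean

def year_to_year_r(scores_by_year):
    """scores_by_year: {year: {manager: anti-save score}}. Pools all
    (year t, year t+1) pairs for managers present in both years and
    returns the Pearson correlation of the pooled pairs -- a crude
    stand-in for the AR(1) intraclass correlation."""
    xs, ys = [], []
    years = sorted(scores_by_year)
    for y0, y1 in zip(years, years[1:]):
        for mgr, score in scores_by_year[y0].items():
            if mgr in scores_by_year[y1]:
                xs.append(score)
                ys.append(scores_by_year[y1][mgr])
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5
```

A value near 1.0 would mean managers repeat their anti-save behavior year after year; the .253 I actually found says the signal is there but faint.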
Then again, the variability may be due to the fact that often, roles within a bullpen seem to be based on incumbency, rather than performance, or at least that there is a bit of lag between diminishing performance and being politely asked to leave a set-up role. There’s also the small sample size problem with relievers, which can lead to large variations in performance metrics. Relievers face roughly 200-300 batters during a season. Sometimes weird things happen, and the guy who in reality is the second-best pitcher in that pen looks like the fifth-best. Sometimes, it works the other way around. So, the manager, in his own mind, may think that he is calling for his second-best reliever, but my model would say that the man he’s called for is one of the scrubs. This is the greatest difficulty of psychological research: I can’t look inside the manager’s head. I can only look at the results and make a reasonable guess as to what he was thinking. Still, I think this situation is an overlooked window into how a manager approaches the game.
Does the Anti-Save Explain Home-Field Advantage?
One interesting side note that occurred to me: Could these anti-save situations help to explain the presence of home-field advantage in baseball? Consider that the home team has a small structural advantage here. If it’s the eighth inning, the home team pitches in the top of the eighth, meaning that if they are behind, their offense still has two at-bats in which to gather the needed runs to tie or go ahead. For the visitors, it’s the bottom of the eighth, meaning that they have only one more time to bat.
Using a lesser reliever might actually be a sensible strategy in the long run, especially on the road. There is a price to be paid for bringing in a good reliever, which is his possible unavailability the next day. Even if the good reliever holds the other team scoreless, his efforts might be in vain if his team doesn’t score any runs. So, it makes more sense for the manager to take this risk when his team has a greater number of times at bat. On the road, it might make more sense to hold the good reliever back and give up today for a better chance at tomorrow.
I looked to see, by inning and by home/visitor status, what the average ranking of the pitcher brought in was. There was almost no difference, and what difference existed was in the direction of the visitors bringing in a slightly better pitcher. In the seventh inning, the visitors, on average, brought in their No. 3.28 reliever, while the home team brought in their No. 3.31 reliever. In the eighth inning, the visitors went with their No. 3.11 reliever, while the home team went with their No. 3.13. Road managers appear no more likely to want to give up than home managers.
Given the rationale above, I’m left to wonder if this represents inefficiencies in general bullpen management. Certainly, there’s a cultural taboo against "giving up," especially within pro sports, and sending out a less effective reliever may be seen as giving up. But if the goal is to win as many games as possible, it is at least conceivable that there might be a set of circumstances that would leave giving up today and conserving resources for tomorrow as the preferable strategy. (Perhaps a team with a bad offense?) Given the structural advantage that the home team enjoys in batting last, this set of circumstances would be more likely to happen on the road. There should be a difference in those scores.
The near-identical rankings observed make me believe that managers, as a whole, are not making these kinds of rational cost-benefit calculations, either explicitly or implicitly. Instead, they appear to be responding to some sort of cultural expectation, probably rooted in the idea that there is dishonor in giving up on a game. The eighth inning with a deficit of X runs calls for the nth-best pitcher, whether it’s the top or the bottom of the inning. It’s not about winning or losing, but saving face. That cultural expectation is certainly strong, but if it’s getting in the way of winning as many games as possible, is it not a better idea to buck the culture?