So, it turns out that all those John Hughes movies that made me think of
beautiful L.A. days were actually set in a fictional suburb of Chicago. I had
no idea. I mean, would it have killed John to put someone in earmuffs at some
point, or at Soldier Field (Bueller aside) or talking about the power of
mini-Ditkas? I genuinely thought those movies–well, most of them,
anyway–were set in L.A. and environs.
One of the ongoing debates in sabermetrics is how to best deploy your top
reliever. Most analysts dislike strictly limiting a team’s top reliever to
save situations, or to tie games at home when a save is not possible.
There are too many high-leverage situations that don’t fall into that narrow
band–and too many low-leverage situations that do–to make that the optimal
usage pattern, but the idea that the ninth inning is all-important has taken
hold within the game.
This notion leads to some completely illogical choices, a particular subset of
which I want to address here today. I’m not going to argue for radical changes
to how teams assign relief innings, or for a return to the ace-reliever model of
the 1970-1985 period. All I want to see is a little thought going into the
process, a little deviation from the established norm.
Assume, for the moment, that teams choose their closer and their set-up man in
a manner that funnels the best pitcher to the former and the second-best to
the latter. In practice, teams often don’t align their talent this way,
usually because they overvalue service time or previous closing experience. It
is fair to say that they intend to have their best pitcher on the mound in
save situations. We’ll go with that for now.
What the rigid assignment of roles often does is create opportunities for the
other team. Nearly every day, a manager uses his second-best (or worse)
reliever to face the middle of the opposing lineup, while holding back his
best reliever to face the bottom of the order. Joe Torre is probably going to
the Hall of Fame, and his use of Mariano Rivera in the
postseason has shown that he understands leverage. But last week in
Arlington, he nearly gave the Rangers a game by allowing Aaron
Small (gee, who saw this coming?) and Kyle
Farnsworth to pitch through the top and middle of the Ranger order.
It’s the last decision on which I want to focus. With the Yankees holding an
8-3 lead, two on and one out in the eighth, Torre went to the mound to relieve
Small and bring in Farnsworth, with the #2 spot in the Rangers’ order due up.
Farnsworth would face three batters and leave a two-out, bases-loaded
situation–now a save opportunity–to Rivera. Rivera eventually escaped with a
lead and hung on in the ninth to preserve an 8-7 win.
The problem is the decision to use Farnsworth rather than Rivera against the
Rangers’ best hitters, Michael Young and Mark
Teixeira. Allowing them to face an inferior pitcher, even in an 8-3
game, gives them a chance to further the rally. Rivera is being paid twice
what Farnsworth is; shouldn’t he be facing the best hitters late in the game,
rather than the bottom of the lineup?
Phil Garner provides an even better example. Twice in three days
last week, Garner used Dan Wheeler to pitch against the other
team’s best hitters in the eighth, then brought in Brad Lidge
for three relatively easy outs in the ninth. Lidge may not be off to a great
start, but he’s still the Astros’ closer, and Garner wasn’t making this
decision based on the two pitchers’ recent performances. He was only
considering the score and the inning, and by not looking at the opposing
hitters, he aligned his talent in an incongruous fashion.
About nine months ago, I looked
at this phenomenon in a more general fashion. I found that the difference in
quality of batters faced between set-up men and closers was small; however,
Tom Fontaine reported that the top and middle of the order were more likely to
bat in the eighth inning, while the bottom of the order tended to bat in the
ninth. The closer-centric bullpen means that the better reliever is going to
face the worse hitters a significant portion of the time.
On a macro level, these decisions wash out a bit. On a micro level, when
you’re using your second-best reliever to get out Albert
Pujols and Jim Edmonds, while using your best on the
likes of Yadier Molina and Aaron Miles,
you’re costing yourself. Even if you think the ninth inning is somehow harder
than the other eight–a dubious, but popular, perception–can you really argue
that extra difficulty outweighs the gap between 1000-OPS guys and 700-OPS guys?
At what point do the soft factors yield to the hard ones?
This isn’t about turning Lidge or Rivera or Billy Wagner into
a Carter-era ace reliever who pitches 110 innings and often comes in as early
as the seventh inning. It’s just about thinking a little more deeply about who
you want on the mound in the game’s most important at-bats, and when those
at-bats occur. Even in a framework of reasonably rigid roles, would it be that
difficult to swap the innings of the set-up man and the closer based solely on
where the other team’s lineup falls? For example, instead of getting Wheeler
up in the top of the eighth inning last week in St. Louis, so that he could
face Pujols and company, Garner would get Lidge up. Same idea–he pitches one
inning–but now he’s facing the best hitters. Wheeler would pitch the ninth.
It’s actually a small change; you’re asking the same work from your pitchers,
just reversing the order in which they appear.
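The swap described above can be put in back-of-the-envelope terms. In this sketch, every number–the lineup-slot OBPs and the reliever skill multipliers–is an illustrative assumption, not measured data, and each inning is simplified to three plate appearances:

```python
# Toy model of the bullpen swap: compare expected baserunners allowed
# when the set-up man faces the top of the order and the closer faces
# the bottom, versus the reverse. All inputs are assumed for illustration.

top_obp, bottom_obp = 0.360, 0.300        # assumed OBP of top/bottom lineup slots
closer_factor, setup_factor = 0.90, 1.00  # assumed skill multipliers on hitters' OBP

def expected_baserunners(obp, factor, batters=3):
    # Simplified: exactly three plate appearances per inning,
    # each reaching base independently at obp * factor.
    return batters * obp * factor

# Conventional alignment: set-up man vs. the top, closer vs. the bottom.
conventional = (expected_baserunners(top_obp, setup_factor)
                + expected_baserunners(bottom_obp, closer_factor))

# Swapped alignment: closer vs. the top, set-up man vs. the bottom.
swapped = (expected_baserunners(top_obp, closer_factor)
           + expected_baserunners(bottom_obp, setup_factor))

print(f"conventional: {conventional:.3f}, swapped: {swapped:.3f}")
```

Under these assumptions the swapped alignment allows fewer expected baserunners over the two innings, because the better pitcher’s advantage is applied where the most reachable hitters are. The effect per game is small; the point is only that it runs in one direction, and the swap costs nothing.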
There has to be more to relief usage, even within a framework of fairly
limited roles, than score and inning. There has to be more than mapping usage
to a scoring rule that does little more than pump up one number on one guy’s
stat line.
Aligning the bullpen in the above manner makes sense, but one reason it won’t
happen is that relievers now get paid based on one number. It would help if
“saves” were the next statistic to fall out of public favor. Over the past 25
years, the practitioners of performance analysis and the guys like me who
stand on their shoulders have done a very good job of reducing the importance
placed on batting average, on RBI and even on pitcher wins. However, rarely
did those statistics ever drive the usage of players the way the save
statistic has. In a span of 20 seasons, the save went from a way of properly
crediting an important contributor to team success–the reliever who pitched
well in preserving a close victory–to a set of criteria for using the team’s
best reliever. There can be no question that teams would not have arrived at
this particular model for relief usage without the development of the rule.
The closer mindset–this whole mythology about the importance and the
difficulty and the personal qualities required to pitch the ninth inning–is a
farce. There were ninth innings for a hundred years before Bruce
Sutter, and no one had any problems with letting the guy who had
pitched the seventh and eighth preserve a one-run lead in the ninth. Check out
some of the box scores from the 1970s and early 1980s sometime. Again, I’m not
advocating that kind of usage, but it’s important to realize that the closer
myth isn’t even as old as the DH is.
What’s particularly amusing is that there’s a lot of commonality among the
people who propagate the closer myth and those who would argue that starting
pitchers are babied nowadays. It’s not starters–who pitch in a completely
different sport than their predecessors did 40 years ago–who have been
coddled, it’s relievers. Everyone has to have a “role” and know their role and
pitch only in their role with sufficient warning and god forbid they have to
warm up twice in a week without getting into the game. Workloads are way down,
and working conditions are way up. I can give you a lot of good reasons why no
one throws 15 complete games a year anymore, but I can’t give you one good
reason why Trevor Hoffman is used the way he is.