August 25, 2010
Last week, we took a look at how to evaluate a pitcher in terms of runs allowed. Now we want to talk about how to convert those runs into value in terms of wins.
What we don’t care about are “pitcher wins” in the sense that is recorded in the box score. That definition of “win” pretty much attributes everything the team does in terms of hitting and defense to the pitcher on the mound.
We can do better than that, I think. Just remember—when I talk about a pitcher’s win value from here on out, I’m talking about contributing to wins as part of a team, not as if he is his team.
An important point when converting pitcher runs to wins is that we want to consider all of the runs a pitcher allows, not just those runs considered “earned.”
I’ve talked in the past about why I don’t like errors and, by extension, the distinction between earned and unearned runs. But there’s another side to it—that it distorts the scale of accomplishment.
What we need to know when evaluating a pitcher’s performance in terms of wins is not only his own runs allowed but how many runs his opponent allows (or in the abstract, how many runs the average pitcher would allow). What we’re interested in is the differential.
And so we have to ask ourselves what causes an unearned run. There are really two causes:
It’s the second cause that really interests us here: the ERA scale tends to “compress” the differences between two pitchers more than it should.
So when talking about a pitcher’s runs and how they relate to wins, you have to use all runs, not just earned runs (or an estimate of runs on an ERA scale).
So let’s consider a team’s chances of winning a game—one single game, mind you—presuming an average opponent. We can use Pythagenpat to figure out an assumed win percentage, like so:
OK. So for any starter, we can pretty readily measure his RA and IP (or, as we did last week, his presumed RA and IP given average defensive support and bequeathed runners scoring at average rates). rIP, the innings covered by the bullpen, should just be the total expected IP for the game minus the starter’s IP. (Because of extra innings and the home half of the ninth when the home team leads, the average IP per game isn’t exactly nine, but it’s pretty close.)
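To make the idea concrete, here is a minimal Python sketch of that single-game win percentage, blending the starter’s and bullpen’s runs allowed and feeding the result into Pythagenpat. This is my own illustration, not the article’s exact formula: the `lg_runs` default (league-average runs scored per game) is an assumption, and all RA figures are runs allowed per nine innings.

```python
def pythagenpat_win_pct(runs_scored, runs_allowed):
    """Expected win percentage via Pythagenpat.

    The exponent adapts to the run environment:
    x = (total runs per game) ** 0.287.
    """
    x = (runs_scored + runs_allowed) ** 0.287
    return runs_scored ** x / (runs_scored ** x + runs_allowed ** x)


def game_win_pct(starter_ra, starter_ip, relief_ra, relief_ip, lg_runs=4.6):
    """Win probability for one game with this starter.

    starter_ra and relief_ra are runs allowed per nine innings;
    lg_runs is a hypothetical league-average offense (an assumption
    on my part, not a figure from the article).
    """
    # Total runs the team is expected to allow in this game
    team_ra = (starter_ra * starter_ip + relief_ra * relief_ip) / 9.0
    return pythagenpat_win_pct(lg_runs, team_ra)
```

For example, a starter allowing 3 runs per nine over 6 innings, backed by league-average relief for the other 3, should put his team comfortably over .500 for that game.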
So our mystery variable is ReliefRA. Now, if we think that a starting pitcher has no control over the quality of his relievers, we can just use the league average RA of relief pitchers and go from there.
The trouble is, we shouldn’t think that.
There are two ways a starting pitcher can control the quality of his relief pitchers. The first is how far he is able to pitch into games. Looking at 2003 through 2009:
A starter who gets blown out of the game early is going to get the mop-up guys: your long relievers, the guys the manager hasn’t used in over two weeks, and so on. At about 6 IP (roughly the average length of a start), you see a relief RA of 4.42, slightly better than the 4.52 RA relievers averaged over that span. And if you can really go deep into the game and give seven or eight innings of solid performance, you can get solidly above-average relief support.
The other thing a starter can do to influence the quality of his relief support is pitch well during his time on the mound. Looking at games where the starter went between five and six innings:
Again, pitching better gets you a higher quality of bullpen support—taking a shutout into the middle innings gets you relievers who are a little above average, while taking a blowout into the later innings gets you the mop-up guys.
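To put rough numbers on the effect, here is a small Python sketch of the blended team RA for a game. The 4.42 and 4.52 figures are the 2003–09 relief averages quoted above; the 4.20 figure for deep starts is purely an illustrative assumption, since the article only describes that support as “solidly above-average.”

```python
def blended_ra(starter_ra, starter_ip, relief_ra, game_ip=9.0):
    """Team runs allowed per game: the starter's innings at his RA,
    the remaining innings at the bullpen's RA (all RAs per nine)."""
    relief_ip = game_ip - starter_ip
    return (starter_ra * starter_ip + relief_ra * relief_ip) / 9.0


# Same starter quality (3.50 RA), different relief scenarios:
naive = blended_ra(3.50, 6.0, 4.52)  # naive: league-average relief
short = blended_ra(3.50, 6.0, 4.42)  # actual relief RA behind ~6 IP starts
deep  = blended_ra(3.50, 8.0, 4.20)  # 8 IP start; 4.20 is hypothetical
# deep comes out around 3.58 runs per game vs. about 3.81 for short:
# going deeper both replaces relief innings and draws better relievers.
```

The gap between `naive` and `short` is the part a league-average-relief assumption misses; the gap between `short` and `deep` is why start length matters beyond the starter’s own RA.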
What I’m emphasizing here is that there is more to evaluating a starting pitcher than his RA would suggest—a pitcher can, with his performance, control the quality of his relievers to an extent that has a real effect on his value to a team.
Well, we’ve looked at how a starting pitcher’s performance can affect his relievers’ performance. This has implications for how we measure reliever performance, and we’ll explore that next.