
Last week, we took a look at how to evaluate a pitcher in terms of runs allowed. Now we want to talk about how to convert those runs into value in terms of wins.

What we don’t care about are “pitcher wins” in the sense that is recorded in the box score. That definition of “win” pretty much attributes everything the team does in terms of hitting and defense to the pitcher on the mound.

We can do better than that, I think. Just remember—when I talk about a pitcher’s win value from here on out, I’m talking about contributing to wins as part of a team, not as if he is his team.

Making Scale

An important point when converting pitcher runs to wins is that we want to consider all of the runs a pitcher allows, not just those runs considered “earned.”

I’ve talked in the past about why I don’t like errors and, by extension, the distinction between earned and unearned runs. But there’s another side to it—that it distorts the scale of accomplishment.

What we need to know when evaluating a pitcher’s performance in terms of wins is not only his own runs allowed but how many runs his opponent allows (or in the abstract, how many runs the average pitcher would allow). What we’re interested in is the differential.

And so we have to ask ourselves what causes an unearned run. There are really two causes:

  1. How many errors there are. (This is a function of a pitcher’s environment, whether or not he tends to get ground balls, etc.)
  2. How well he prevents baserunners from scoring once they’ve gotten aboard (mainly this is a function of his ability to prevent home runs and strike out batters).

It’s the second cause that really interests us here. The same skills that keep runners from scoring also prevent unearned runs, so stripping unearned runs out of the ledger shrinks the gap between good and bad pitchers: the ERA scale tends to “compress” the differences between two pitchers more than it should.

So when talking about a pitcher’s runs and how they relate to wins, you have to use all runs, not just earned runs (or an estimate of runs on an ERA scale).
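
To make that compression concrete, here is a toy arithmetic sketch in Python; every number in it is invented for illustration rather than taken from real league data.

# Toy numbers only: a league allowing 4.60 runs per nine but 4.20 earned runs
# per nine, and a good pitcher whose run prevention shows up in unearned runs too.
league_ra, league_era = 4.60, 4.20
good_ra, good_era = 3.50, 3.25

print(round(league_ra - good_ra, 2))    # 1.10 runs per nine better on the all-runs scale
print(round(league_era - good_era, 2))  # only 0.95 runs per nine better on the earned-run scale

The gap a win estimate cares about is the all-runs gap; working on the earned-run scale understates it.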

So let’s consider a team’s chances of winning a game—one single game, mind you—presuming an average opponent. We can use Pythagenpat to figure out an assumed win percentage, like so:

RPG^X/(RPG^X + (StarterRA * sIP + ReliefRA * rIP)^X)

RPG is the average runs per game, StarterRA is the RA of the starting pitcher, sIP is his IP, and so on for the relievers. X is equal to:

(RPG + (StarterRA * sIP + ReliefRA * rIP))^.285

OK. So for any starter, we can pretty readily measure his RA and IP (or, as we did last week, his presumed RA and IP given average defensive support and bequeathed runners scoring at average rates). rIP should just be the total expected IP for the game minus the starter’s IP. (Because of extra innings and the home half of the ninth when the home team leads, the average IP per game isn’t exactly nine, but it’s pretty close.)
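
As a quick illustration, here is a minimal Python sketch of that single-game estimate. The sample inputs are purely illustrative, and the StarterRA * sIP + ReliefRA * rIP term is treated as an innings-weighted blend of the two run averages, divided back down by the game’s innings so that it sits on the same per-game scale as RPG.

# A minimal sketch of the single-game Pythagenpat win estimate described above.
# RA figures are runs per nine innings; the blended RA is divided by the game's
# total innings so it is directly comparable to RPG.
def single_game_win_pct(rpg, starter_ra, starter_ip, relief_ra, game_ip=9.0):
    relief_ip = game_ip - starter_ip                 # relievers cover whatever is left
    blended_ra = (starter_ra * starter_ip + relief_ra * relief_ip) / game_ip
    x = (rpg + blended_ra) ** 0.285                  # Pythagenpat exponent
    return rpg ** x / (rpg ** x + blended_ra ** x)

# Illustrative inputs: a 3.50 RA starter going seven innings, league-average
# relief RA of 4.52, in a 4.6 runs-per-game environment.
print(round(single_game_win_pct(4.6, 3.50, 7.0, 4.52), 3))  # about 0.595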

So our mystery variable is ReliefRA. Now, if we think that a starting pitcher has no control over the quality of his relievers, we can just use the league average RA of relief pitchers and go from there.

The trouble is, we shouldn’t think that.

Going Deep

There are two ways a starting pitcher can control the quality of his relief pitchers. The first is how far he is able to pitch into games. Looking at 2003 through 2009:

Graph of ReliefRA by StarterIP

A starter who gets blown out of the game early is going to get the mop-up guys: your long relievers, the guys the manager hasn’t bothered to use in over two weeks, and so on. At about six IP (roughly the average length of a start), you see a relief RA of 4.42, while relievers over that span had an average RA of about 4.52. And if you can really go deep into the game and give seven or eight innings of solid performance, you can get solidly above-average relief support.
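
If you wanted to reproduce that comparison from game logs yourself, the grouping might look something like the sketch below. The file name and the columns (starter_ip, relief_runs, relief_ip) are hypothetical stand-ins, not a real dataset.

import pandas as pd

# Hypothetical game-level data: one row per start, with the starter's innings and
# the runs and innings charged to his team's bullpen in the same game.
games = pd.read_csv("starts_2003_2009.csv")

# Bucket each start by (rounded) innings pitched and compute the bullpen's RA
# behind starts of each length.
games["ip_bucket"] = games["starter_ip"].round()
relief_ra_by_start_length = games.groupby("ip_bucket").apply(
    lambda g: 9 * g["relief_runs"].sum() / g["relief_ip"].sum()
)
print(relief_ra_by_start_length)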

The other thing a starter can do to influence the quality of his relief support is pitch well during his time on the mound. Looking at games where the starter went between five and six innings:

Graph of ReliefRA by StarterRA

Again, pitching better gets you a higher quality of bullpen support—taking a shutout into the middle innings gets you relievers who are a little above average, while taking a blowout into the later innings gets you the mop-up guys.
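
The second comparison is the same kind of grouping with start length held roughly constant; continuing the hypothetical games frame from the sketch above (with an additional hypothetical starter_runs column):

# Restrict to starts of five to six innings, then bucket by the starter's own RA
# in that game and look at the bullpen's RA behind each bucket.
mid_starts = games[games["starter_ip"].between(5, 6)].copy()
mid_starts["starter_ra_bucket"] = (
    9 * mid_starts["starter_runs"] / mid_starts["starter_ip"]
).round()
relief_ra_by_starter_ra = mid_starts.groupby("starter_ra_bucket").apply(
    lambda g: 9 * g["relief_runs"].sum() / g["relief_ip"].sum()
)
print(relief_ra_by_starter_ra)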

What I’m emphasizing here is that there is more to evaluating a starting pitcher than his RA would suggest—a pitcher can, with his performance, control the quality of his relievers to an extent that has a real effect on his value to a team.

What's Next

Well, we’ve looked at how a starting pitcher’s performance can affect his relievers’ performance. That should have implications for how we measure reliever performance. We’ll explore that next.

rawagman
8/25
This is pretty thought-provoking stuff, Colin. Thanks. Of course, we are assuming that the team in question has solid relievers to speak of at all. There would still be some outliers (2010 Diamondbacks), but I think the concept holds true: if you pitch well, you'll get your team's better relievers, provided the manager has been handling his bullpen in something resembling optimal fashion over the course of the season.
beeker99
8/25
I am thoroughly loving this whole series, Colin. You are a fantastic writer, and convey this stuff incredibly clearly and simply.

I wonder - does current bullpen usage (call it the LaRussian bullpen model) affect the Reliever RA by Starter IP graph? That is, what happens if nearly every team is using its best reliever in high leverage situations earlier in the game, rather than waiting for a save opportunity in the 9th? Would a graph of Reliever RA by Starter IP for, say, 1972-1978 look significantly different?

Of course, even today, not every team uses its best reliever as the closer, so maybe that balances it out?
cwyers
8/25
First, thanks for the kind words.

Yeah, this is something that doesn't apply equally over all of baseball history. I hope to get deeper into that aspect of things soon.
cakuffner
8/25
"Again, pitching better gets you a higher quality of bullpen support—taking a shutout into the middle innings gets you relievers who are a little above average, while taking a blowout into the later innings gets you the mop-up guys."

True, but isn't there a team offense component to this, too? I mean, look at Dustin Moseley's start for the Yankees last night. He went six innings and gave up two runs, a perfectly decent effort by any measure. But the Yankees were leading 11-2 when he left, so in came Chad Gaudin, who wound up yielding three runs in two innings (Kerry Wood then pitched a scoreless ninth). Had the score been 3-2, we probably would have seen an inning each from David Robertson, Joba Chamberlain, and Mariano Rivera. How does that affect your assumptions? Thanks!
cwyers
8/25
There is an offensive component, but we're trying to isolate out the value contributed by a pitcher's performance.
ScottBehson
8/25
Very interesting. There's going to be a lot of "noise" in measuring this, as cakuffner states. I would like to see how Johan Santana and Josh Johnson fare when your numbers are complete.
markpadden
8/25
The main assumption (used in your conclusions after the second graph) is that there is no correlation between quality of starting pitching and quality of relief pitching from team to team.

The problem is that there is a pretty strong correlation. This year, for example, it's around 0.44. Some of this is the fact that the teams that spend more and/or scout better for SPs tend to do so for RPs as well. Also, park effects play a major role. Padre starting pitchers will have low RAs on average, and so will their relief pitchers, e.g., which makes it look like there is some causality between starterRA and relieverRA that may not exist.
stimetsr
8/25
The first effect noted here made perfect sense to me... the more precious relievers are saved for games where the starter has done well. When I read the second one though, the first thought that popped into mind was that starters do well against bad offensive teams, and less well against the better teams. When they leave the game, the relief corps should enjoy/suffer the same effects, right? If an offense has its "hitting shoes" on one night, it really doesn't seem to matter who the other team throws out there, they will get hit. That's why relievers hide...
studes
8/25
Great stuff, Colin, though I agree with evo34. Looking at this data by team would get rid of the multicollinearity, or whatever you call it.

BTW, I think there is an error in your formula. You use RPG for offense, but RA for defense. I assume that's Run Average and not Runs Allowed, but you multiply the RA by the total number of innings pitched for both starters and relievers.

That would give you total runs allowed, not runs allowed per game. If I'm reading it correctly.
cwyers
8/25
In this case, we're applying Pythag on a per-game basis, so it works out to the same thing.

I'll see if I can provide some graphs for the second effect with controls for team/park effects.
studes
8/25
Got it. Thanks.
studes
8/25
By the way (and I understand this is off the point), does the Pythag formula work well for determining the probability of winning a specific game? I don't believe I've seen it used that way before. Just curious...
studes
8/25
Well, never mind. Stupid question. If it works for 162 games, it should work for one. I'm just slow, is all.