May 6, 2004
Taking One for the Team
When Does it Make Sense to Sacrifice?, Part 1
One of the most striking discoveries of the statistical research done in baseball over the last 20 years is that outs are more valuable than bases. This breakthrough means that stolen bases only help when the success rate is above a certain break-even point. Furthermore, it means that "sacrifices" are an extremely bad idea if you're trying to score runs, which we'd like to assume everyone is trying to do--even that team in Los Angeles.
This line of thinking derives almost entirely from the grid of expected runs in an inning based on the outs and runner situation, originally developed by John Thorn and Pete Palmer in The Hidden Game of Baseball:
Outs    None    1st    2nd    3rd  1st&2nd  1st&3rd  2nd&3rd  Loaded
  0    0.454  0.783  1.068  1.277    1.380    1.639    1.946   2.254
  1    0.249  0.478  0.699  0.897    0.888    1.088    1.371   1.546
  2    0.095  0.209  0.348  0.382    0.457    0.494    0.661   0.798
Looking at the table, the most obvious feature is that expected runs decrease much more quickly going down (as outs increase) than they increase going across (as runners move around the bases).
Where do these numbers come from? Originally, this grid was the summation of years of data for the seasons up to and including 1983. These antiquated numbers are the first problem with basing conclusions on this chart. It should be news to no one that things have changed a great deal since then; players are bigger, parks are smaller, and run scoring is much higher than it was 20 years ago. Thus, it's important to use information from more recent seasons. Here are the numbers from 2003:
Outs    None    1st    2nd    3rd  1st&2nd  1st&3rd  2nd&3rd  Loaded
  0    0.531  0.919  1.177  1.380    1.551    1.869    2.023   2.474
  1    0.282  0.535  0.706  1.032    0.909    1.211    1.428   1.544
  2    0.109  0.237  0.341  0.384    0.454    0.518    0.541   0.797
As offensive levels have increased over the last 20 years, the cost of an out has risen accordingly--and as a result, sacrifices make less sense, from a tactical perspective, than ever before. Looking specifically at sacrifice situations--a runner on first and no outs, first and one out, or second and no outs--it's possible to calculate exactly how detrimental sacrificing is to the offense. In the original table, the cost of a sacrifice is the run expectation before the sacrifice minus the run expectation after it. Specifically, with a runner on first and no one out, 0.783 runs are expected; subtract the expectation with a runner on second and one out, 0.699, and we see that the sacrifice "costs" 0.084 runs. Repeating this calculation with the 2003 data, a sacrifice in this situation now costs 0.213 runs (0.919 - 0.706), more than twice as much. For the other sacrifice situations--a runner on 1st and one out, and a runner on 2nd and no outs--the cost has increased from 0.130 to 0.194 and, interestingly, decreased from 0.171 to 0.145. Regardless, sacrificing still looks like a bad idea.
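The arithmetic above is easy to script. Here is a minimal sketch in Python; the table values are the 2003 figures quoted above, and the state encoding is my own shorthand, not anything standard:

```python
# Slice of the 2003 run-expectancy table, indexed by (outs, runners on base).
RE_2003 = {
    (0, "1st"): 0.919, (0, "2nd"): 1.177,
    (1, "1st"): 0.535, (1, "2nd"): 0.706, (1, "3rd"): 1.032,
    (2, "2nd"): 0.341, (2, "3rd"): 0.384,
}

def sacrifice_cost(before, after):
    """Run expectation lost (positive) or gained by trading an out for a base."""
    return RE_2003[before] - RE_2003[after]

# The three common sacrifice situations discussed in the text:
print(round(sacrifice_cost((0, "1st"), (1, "2nd")), 3))  # runner on 1st, no outs
print(round(sacrifice_cost((1, "1st"), (2, "2nd")), 3))  # runner on 1st, one out
print(round(sacrifice_cost((0, "2nd"), (1, "3rd")), 3))  # runner on 2nd, no outs
```

All three costs come out positive, confirming that, on league-average numbers, the bunt surrenders expected runs in every standard sacrifice situation.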
Of course, it's possible that in some situations, sacrificing does make sense. The information above is simply the league average for the year, but when making the decision to sacrifice, each manager is presented with much more specific information than the table provides. In particular, depending on the particular batters who are due up in the lineup, the manager can adjust his run expectation in the inning and better determine if sacrificing would increase that expectation, overall. Our objective now is to find those points in the lineup where sacrificing begins to pay dividends.
To this end, let's look at the three common sacrifice situations mentioned above: a runner on 1st with no outs, a runner on 1st with one out, and a runner on 2nd with no outs.
To get an estimate of the expected offensive output, we'll employ a Strat-O-Matic style analysis. Each hypothetical player will be assigned rates for singles, doubles, triples, home runs, and BB/HBP per plate appearance. Then, we can estimate from these rates the expected outcome of each plate appearance, and adjust the run expectation accordingly.
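In code, such a Strat-O-Matic-style profile is just a table of per-plate-appearance event rates. A sketch of the idea follows; the rates and run values are hypothetical placeholders for illustration, not real data from any player or situation:

```python
# Hypothetical event rates per plate appearance for one batter
# (illustrative numbers only; real rates would come from actual stats).
batter = {"1B": 0.160, "2B": 0.045, "3B": 0.005, "HR": 0.030, "BB": 0.090}
batter["OUT"] = round(1.0 - sum(batter.values()), 3)  # everything else is an out

def expected_runs(rates, run_values):
    """Weight each outcome's run value by its probability of occurring."""
    return sum(rates[event] * run_values[event] for event in rates)

# Hypothetical run values of each outcome in some base/out state; in the
# analysis below these come from the 2003 run-expectancy table.
run_values = {"1B": 0.9, "2B": 1.2, "3B": 1.4, "HR": 2.1, "BB": 0.5, "OUT": 0.2}
print(round(expected_runs(batter, run_values), 3))
```

The rest of the article amounts to filling in `run_values` from the run-expectancy table for the specific base/out state in question and comparing the results with and without a sacrifice.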
Constants and Assumptions
The first example will deal with a runner on 1st and one out; we'll call the next two men due up in the lineup Batter One and Batter Two. For each batter, there are nine possible outcomes to the plate appearance: the five event types mentioned above, plus a sacrifice, an out, and subdivisions of the single and double for hits that advance the runner an extra base. Thus, our results will be labeled:
B11B * (R1-2B) - Batter One hits a single that advances the runner only one base
Before we get started, we'll make several initial assumptions:

- Double plays are ignored.
- The runner will not attempt to steal while the sacrifice is on.
- Sacrifice attempts succeed 100% of the time.
- Runners advance extra bases at league-average rates, regardless of who is running.
Assumptions inevitably detract from the accuracy of any analysis, but in these cases the small loss of accuracy is repaid by a much simpler analysis. The gap between the best player at avoiding the double play and the worst isn't large enough to meaningfully affect the data, and double plays are rare enough to set aside for now. The next two assumptions follow from the strategy itself. Because a bunted third strike is an out, a batter who waits for the runner to attempt a steal can quickly find himself in a count where sacrificing is highly dangerous; stealing is thus an even riskier proposition when the sacrifice is on, and we will not consider it. And while a 100% success rate is clearly unrealistic, we're dealing with major league ballplayers, most of whom should be able to drop down a decent bunt; adjustments can be made later for the expected success rate, but for initial simplicity we'll assume everyone can do his job. Finally, whether a runner takes the extra base depends on his individual speed, the type of hit (infield singles almost never advance even the fastest runner), and the number of outs. We can correct for the out situation, but deciphering whether a runner advanced on his own merit or because of the quality of the hit is nearly impossible, and modeling it yields almost no additional insight.
The constants for the base-running values are taken from the 2003 season and are as follows:
.251 = Percentage of runners who advance from first to third with less than two outs
A Simple Example
As an example of how this will proceed, let's take a simple situation where Batter One is a pitcher who bats .100, never walks, and always hits singles. If we simply want to know whether the team's expected runs would increase if he sacrificed, we would use the following formula:
0.341 - (B11B * ((R1-3B * 1.211) + (R1-2B * 0.909)) + B1OUT * 0.237)
B11B is the probability that Batter One (B1) will single
The constants are the expected runs for the rest of the inning taken from the 2003 Run Expectation table:
0.341 = Man on 2nd, two outs (the result of a successful sacrifice)
1.211 = Men on 1st and 3rd, one out (single, with the runner advancing to 3rd)
0.909 = Men on 1st and 2nd, one out (single, with the runner stopping at 2nd)
0.237 = Man on 1st, two outs (the batter is retired)
Plugging in the numbers yields the following:
.341 - (.100 * ((.251 * 1.211) + (.749 * .909)) + .900 * .237) = 0.029
Thus, in this case, sacrificing makes just the slightest bit of sense, to the tune of about three one-hundredths of a run (0.029). By substituting x and (1 - x) for .100 and .900 and setting the equation to zero, we can determine the batting average at which sacrificing is exactly as valuable as swinging away in this situation: .139. Thus, if you're presented with a hitter who only hits singles and sacrifices successfully 100% of the time, and the league-average lineup is coming up behind him, you should sacrifice only if he hits worse than .139. This is overly simplistic, but illustrative.
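Setting the expression to zero and solving for x is simple algebra. Here is a quick check in Python, using the 2003 constants from the text (with .251 as the first-to-third advancement rate quoted earlier):

```python
# Constants from the 2003 run-expectancy table (see above):
RE_SAC = 0.341       # man on 2nd, two outs (after a successful sacrifice)
RE_OUT = 0.237       # man on 1st, two outs (batter retired)
RE_1ST_3RD = 1.211   # men on 1st and 3rd, one out
RE_1ST_2ND = 0.909   # men on 1st and 2nd, one out
ADV_1_TO_3 = 0.251   # runners taking first-to-third on a single, < 2 outs

# Expected runs following a single by the singles-only batter:
e_single = ADV_1_TO_3 * RE_1ST_3RD + (1 - ADV_1_TO_3) * RE_1ST_2ND

# Net gain from sacrificing for a .100 hitter:
edge = RE_SAC - (0.100 * e_single + 0.900 * RE_OUT)
print(round(edge, 3))

# Solve RE_SAC = x * e_single + (1 - x) * RE_OUT for the breakeven average x:
breakeven = (RE_SAC - RE_OUT) / (e_single - RE_OUT)
print(round(breakeven, 3))
```

Both numbers match the text: a .100 singles-only hitter gains about 0.029 runs by bunting, and the breakeven average works out to .139.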
Adding Batter Two
The next step is to substitute more accurate numbers than the raw numbers from the run-expectancy table based on Batter Two. Batter Two should factor into the decision to have Batter One sacrifice just as much as Batter One, because Batter Two must be the type of batter who would benefit from a sacrifice.
What kind of player makes a good candidate for Batter Two? Intuitively, we can say that sluggers are not the type of hitter it pays to sacrifice in front of. Why? Consider the following: if Batter One sacrifices and Batter Two then homers, triples, walks, or makes an out, the resulting runners-and-outs situation is exactly the same as if Batter One had simply made an out. Only when Batter Two singles or doubles does the sacrifice make a difference. But let's confirm this with a short proof.
Keeping Batter One as a singles machine, we sum the following to determine what to expect from Batter Two, assuming that Batter One got out:
B21B * ((R1-3B * .518) + (R1-2B * .454))
This list is simply the probability of each outcome of Batter Two's plate appearance multiplied by the run expectation of the situation that outcome produces. Again, the constants are the run expectations from the 2003 table, based on the runner-out situation after the at-bat, with any runs that actually score added to the remaining expectation. For instance, if Batter Two hits a home run, the situation becomes bases empty with two outs, yielding a run expectation of 0.109; but two runs have scored, so the constant is 2.109.
Similarly, summing the following yields the expected runs after Batter One sacrifices:
B21B * ((R2-R * 1.237) + (R2-3B * .518))
Plugging in the known values for runner advancement and subtracting the sum after a Batter One out from the sum after a sacrifice gives us:
((B21B * 1.064) + (B22B * 1.341) + (B23B * 1.384) + (B2HR * 2.109) + (B2BB * 0.454)) - ((B21B * 0.474) + (B22B * 0.989) + (B23B * 1.384) + (B2HR * 2.109) + (B2BB * 0.454))
which reduces to
B21B * 0.590 + B22B * 0.352
This equation effectively says that a team's run expectation increases by 0.590 runs per single and 0.352 runs per double that Batter Two hits, if Batter One successfully sacrifices instead of getting out. From this, we can extrapolate that hitters who hit a higher percentage of singles and doubles will benefit more from a sacrifice in front of them than will hitters with lower percentages. Thus, sluggers do not make good candidates for Batter Two.
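The cancellation can be checked mechanically. The coefficients below are the run values from the two sums above, after a successful sacrifice (runner on 2nd, two outs) versus after a Batter One out (runner on 1st, two outs):

```python
# Run value of each Batter Two outcome in the two scenarios from the text.
after_sacrifice = {"1B": 1.064, "2B": 1.341, "3B": 1.384, "HR": 2.109, "BB": 0.454}
after_out       = {"1B": 0.474, "2B": 0.989, "3B": 1.384, "HR": 2.109, "BB": 0.454}

# Subtracting term by term shows everything but singles and doubles cancels.
gain = {hit: round(after_sacrifice[hit] - after_out[hit], 3)
        for hit in after_sacrifice}
print(gain)
```

Only the 1B and 2B entries survive, with coefficients 0.590 and 0.352, matching the reduced equation.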
Going back to the original problem, how can we apply this finding to the decision that the manager faces when trying to decide whether or not Batter One should sacrifice? Quite simply, we must weigh the expected benefits of the sacrifice that we determined above against the likelihood that Batter One actually gets on base instead of getting out. If we go through the same process for Batter Two after a Batter One single, we get the following run expectations:
B21B * 1.449
Multiply these by Batter One's batting average (B11B)--remember, he hits nothing but singles--then add the chances that Batter One gets out (1 - B11B) times the expected runs if he gets out. This addition yields the expected runs if Batter One swings away. Then subtract from this the expected runs when he sacrifices. If the result is positive, sacrificing is a bad idea. Negative, it's a good idea. In numerical form, that would look like this:
Probability of Batter One getting out times the expected runs if Batter One gets out:
Add to that the probability of Batter One getting a hit and the expected runs:
Then subtract the expected runs after a sacrifice:
That very long string of numbers represents a much better estimation of the benefits of sacrificing. Unfortunately, it's too long and cumbersome to be of much practical use. Instead, we can draw some conclusions by substituting real players for Batter Two.
Below are the AVG, OBP, and SLG for several players in a season. The second column, called Breakeven, is the batting average that our theoretical Batter One--who hits nothing but singles--would have to hit above to make sacrificing detrimental to run expectation:
Player            Year  Breakeven   AVG   OBP   SLG
Barry Bonds       2001      .075   .328  .517  .863
Mike Cameron      2002      .120   .239  .342  .442
Todd Helton       2000      .162   .372  .470  .698
Alfonso Soriano   2002      .170   .300  .336  .547
Alex Cora         2003      .183   .249  .288  .338
Ichiro Suzuki     2001      .243   .350  .384  .457
Interestingly, Barry Bonds and Ichiro Suzuki are the low and high ends of the spectrum, both during MVP seasons. Thus, we can conclude that, in this simple case, no matter who is coming up next, any batter hitting below .075 should always sacrifice, while any batter hitting better than .243 should never sacrifice. If nothing else, this conclusion lends further credibility to the idea that pitchers should almost always sacrifice if given the opportunity.
It's also enlightening to note that players with vastly different statistical lines share very similar breakeven points. In fact, looking at all players from 2000-2003, the coefficient of determination is very low for each individual metric. (The coefficient of determination, or r-squared, measures the strength of a relationship on a scale from 0 to 1: 0 represents no relationship at all, 1 a perfect one, and anything above 0.5 is generally considered meaningful.) Specifically, for AVG it is 0.1625, for OBP 0.0173, and for SLG 0.1080. As none of these comes close to indicating a real relationship, we must concede at this point that using any of baseball's traditional metrics individually to determine the effectiveness of sacrificing will yield inconclusive results.
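For reference, r-squared is just the square of the Pearson correlation coefficient. A small self-contained sketch follows; the sample data is made up purely to illustrate the calculation and is not the 2000-2003 player data used in the text:

```python
def r_squared(xs, ys):
    """Coefficient of determination: squared Pearson correlation of xs and ys."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov * cov / (vx * vy)

# Perfectly linear hypothetical data yields an r-squared of 1; per the text,
# real AVG-vs-breakeven data comes out around 0.16 -- essentially no signal.
avgs = [0.250, 0.275, 0.300, 0.325]
line = [2 * a + 0.1 for a in avgs]
print(round(r_squared(avgs, line), 6))
```

Running the same function over (metric, breakeven) pairs for real players is how the 0.1625 / 0.0173 / 0.1080 figures above would be reproduced.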
Finally, even given our extremely light-hitting Batter One, the upper range of Breakeven--where sacrificing increases run expectation--suggests there are cases around the league where sacrificing may be beneficial. Enhancing the equations should let us estimate where that threshold lies.
Next time: looking at one-run situations, enhancing the equations, extrapolating results, and drawing conclusions...