December 26, 2013
Rest an Extra Day to Keep the Doctor Away?
Last time we met, we contemplated the curious case of the fifth starter. He is, somewhat by definition, worse than the other four guys who might otherwise be starting tonight’s game. Yet there he is, standing out there for the next 3 1/3 innings until he inevitably gets chased after giving up his sixth run. Why not just skip this exercise in futility and let the other (better) guys pitch the game? Last week, we saw that pitchers didn’t suffer much from going on three days’ rest; it was a high pitch count in a pitcher’s previous outing that was the real problem. If pitchers have, historically, performed just as well on three days’ rest as on four, why is baseball so afraid to go back to the four-man rotation?
I’m a believer that if something exists in baseball, there must be a reason for it. It might not be a good reason, but there’s got to be something. Maybe it’s that the five-man rotation, while it does bleed away value in starts given to a glorified long reliever, is actually a hedge against injury.
A team’s ace starter in the standard five-man system makes about 34 starts a year. In a four-man rotation, he’d probably notch 40—assuming that he was healthy enough to finish out the season. Maybe teams went to a five-man rotation because of a simple cost-benefit analysis. Yes, you give starts to an inferior starter, but lowering the injury risk for the other four guys by not overworking them is worth it. We know that pitchers who threw a lot of pitches last year are at risk of being injured this year. So adding a fifth starter is, hypothetically, a way to hedge against injury risk for four separate spots on the roster. Could it be that the potential lost productivity from injuries is greater than the price of giving extra starts to no. 5 (and let’s be honest, nos. 6, 7, and 8)?
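The cost-benefit framing above can be put on the back of an envelope. This is a sketch with illustrative assumptions, not numbers from the article’s data set: the 1 run/9 IP gap between aces and fifth starters is quoted later in the piece, and the “runs saved” figure ignores injury risk entirely, which is exactly the cost the rest of the article tries to price.

```python
# Back-of-envelope sketch of the rotation-size trade-off.
# All specific numbers here are illustrative assumptions.

GAMES = 162

def starts_per_slot(rotation_size, games=GAMES):
    """Approximate starts each rotation slot receives over a full season."""
    return games / rotation_size

extra_ace_starts = starts_per_slot(4) - starts_per_slot(5)  # ~8 extra starts
print(f"Extra starts per front-line starter in a four-man rotation: {extra_ace_starts:.1f}")

# Hypothetical benefit: if a fifth starter is about a run worse per nine
# innings, each start shifted to a better pitcher is worth roughly a run.
runs_saved = extra_ace_starts * 1.0  # assumed 1 run per reclaimed start
print(f"Illustrative runs saved per rotation slot: {runs_saved:.1f}")
```

The question the rest of the article asks is whether the injury risk added by those extra short-rest starts eats up that gain.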
Warning! Gory Mathematical Details Ahead!
I looked for cases in which a pitcher went 20 days or more without pitching in a big league game. At that point, we know it’s more than just skipping a start. However, to guard against guys who came up to make a spot start and were then sent back down, I required that a pitcher had started at least 50 games before I began looking at his data points. By then, he has established himself as a starter who’s good enough to get a season and a half’s worth of starts.
Additionally, I looked for cases in which a pitcher’s last appearance came before the beginning of September (again, with a minimum of 50 previous games started). Before you begin filling up the comments section with objections, let me make them all for you. We have no way of knowing that what happened was an injury. Certain players may have been doing just fine health-wise but were demoted to the minors. Some young players (Matt Harvey comes to mind) suffered a major injury but would not appear in my sample. I am painfully aware of the limitations of this method. If you don’t want to call them injuries, just call them “mysterious disappearances.”
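The selection rules above can be sketched as a simple flagging function. The function name and record layout are my own invention, not the article’s actual data pipeline, and the two criteria (a 20-plus-day gap, or a final appearance before September) are combined into one flag here for illustration:

```python
from datetime import date

# Sketch of the sample-selection rules described above. Field names and
# the function itself are hypothetical, not the article's actual code.

def is_mysterious_disappearance(prior_starts, gap_days, last_appearance):
    """Flag a pitcher-season as a 'mysterious disappearance': he must be an
    established starter (50+ career starts), and then either sit out 20+
    days or make his final appearance before September."""
    if prior_starts < 50:
        return False          # spot starters and call-ups don't qualify
    if gap_days >= 20:
        return True           # more than just a skipped start
    if last_appearance.month < 9:
        return True           # season ended early, for whatever reason
    return False

print(is_mysterious_disappearance(120, 25, date(1995, 9, 14)))  # True
print(is_mysterious_disappearance(30, 25, date(1995, 9, 14)))   # False
```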
One thing that we do know is that pitchers who have spent time on the disabled list in the prior year have a better than 40 percent chance of returning there this year. (The rate for pitchers who have not been previously injured is around three percent.) Even after a completely healthy year, a pitcher who had a DL trip two years ago still carries a better than 30 percent rate of taking some time off to rehab an injury. Keeping a pitcher from sustaining his first injury has major implications for his future health.
I tried to model what factors might predict whether a pitcher would sustain his first career “mysterious disappearance.” I used a Cox regression, a method most often used to model how likely a person is, over time, to die or have some other unfortunate event happen to them. (I have previously used it here and here.) It controls for the fact that once a person has died, he no longer produces data, while those who survive continue to do so. It also controls for the fact that some people die (or some pitchers mysteriously disappear) early on for no real reason other than bad luck.
In my previous work, I found that when a starter crosses over a pitch count of 120, he starts to reach much greater levels of risk for an injury later in the season, and that this “scar” stays with him all season long. I entered four predictors into the Cox regression, in addition to the time variable (the career number of starts that the pitcher has made). The first was the number of times (so far) this season that he has started on three days’ rest. Then, using the pitch count estimator I created a few weeks ago, I entered the number of times (so far) in a season that the pitcher’s estimated pitch count had risen above 120, the number of estimated pitches he had thrown so far, and the number of estimated pitches he threw in the previous season.
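The structure behind a Cox model, given the four covariates above, can be sketched in a few lines: each covariate scales a shared baseline hazard multiplicatively, through the exponential of a weighted sum. The coefficients and the example pitcher-season below are placeholders of my own, not the article’s fitted values:

```python
import math

# Minimal sketch of the proportional-hazards idea behind a Cox regression.
# h(t | x) = h0(t) * exp(beta . x), so exp(beta . x) is the pitcher's
# hazard relative to the baseline. Coefficients here are placeholders.

def relative_hazard(coefs, covariates):
    """exp(beta . x): hazard relative to the baseline hazard h0(t)."""
    return math.exp(sum(b * x for b, x in zip(coefs, covariates)))

# Placeholder coefficients for the four covariates named above:
# short-rest starts, 120+ pitch games, season pitches (hundreds),
# prior-season pitches (hundreds).
beta = [0.25, 0.10, -0.01, -0.01]
x = [2, 1, 28, 30]  # a hypothetical pitcher-season

print(f"Hazard ratio vs. baseline: {relative_hazard(beta, x):.2f}")
```

A positive coefficient raises the hazard of a “mysterious disappearance”; a negative one lowers it. The baseline hazard itself drops out of the ratio, which is what lets the model sidestep estimating it.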
The overall Cox regression showed that increasing all four of these covariates actually lessened the chances that a pitcher would suffer his first “mysterious disappearance.” That is, throwing lots of pitches made for healthier pitchers. Common sense tells us that there may be selection bias here: those who are allowed to throw a lot of pitches are the ones whom their managers believe capable of doing so at lesser risk. However, common sense also tells us that we may have a gigantic but heterogeneous data set that gives us the illusion of clarity. As we have seen before, the rate at which pitchers start on three days’ rest has varied (mostly declined) over the years.
I re-ran the regression by decades (1950s, ’60s, ’70s, ’80s, ’90s, and 2000-2012), and the results shook out differently as time went on. In the 1950s and ’60s, the marginal effect of pitching on three days’ rest and an extra hundred pitches (both “costs” that must be considered in the decision to throw a pitcher on three days’ rest) was actually to reduce the chances of a “mysterious disappearance.” As the 1970s rolled into the 1980s, and rates of starts on three days’ rest started to wane, the effect was more muddled. By the ’90s, the arrow was pointing in the direction of an increase in the rate of injury for three days’ rest. A similar process happened with the variable coding for games in which the pitcher’s count exceeded 120, again following a trend in which fewer pitchers were asked to do this.
Within the data set for 2000-2012, a pitcher who pitched on three days’ rest carried a much greater risk of injury (or mysterious disappearance, if you prefer). The function isn’t linear (the coefficients enter exponentially, as a ratio of hazards over time), so it defies the easy “one time is worth X percent increased chance of injury” encapsulation. And with so few cases of pitchers being used on three days’ rest, it’s hard to know whether the low frequency of the event is causing some strange effects in the regression.
However, if we believe the regression (the coefficient on one appearance on three days’ rest is .287; the coefficient for an additional pitch is -.0006), an extra 100-pitch outing on three days’ rest would contribute something like .227 to the cumulative hazard function—about half of what I found in this article as the contribution of a DL stint last year, which we know to be a huge risk factor for further injury. And because a four-man rotation would require four pitchers to consistently take the ball on three days’ rest, the chances that one of them would be bitten by the injury bug are increased. Looking only at recent years, we would conclude that it’s a horrible idea to have a pitcher go on three days’ rest, as he would normally have to do in a four-man rotation. But, looking back, it wasn’t always this way.
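The arithmetic behind the .227 figure uses just the two coefficients quoted above; the conversion to a hazard ratio at the end is the standard reading of a Cox log-hazard term, added here for illustration:

```python
import math

# Reproducing the back-of-envelope calculation above, using the two
# coefficients the regression reported for 2000-2012.
B_SHORT_REST = 0.287   # per start on three days' rest
B_PER_PITCH = -0.0006  # per estimated pitch thrown

# One 100-pitch outing on three days' rest:
contribution = B_SHORT_REST + 100 * B_PER_PITCH
print(f"Added to the cumulative hazard: {contribution:.3f}")     # ~0.227

# Read as a Cox log-hazard term, that one outing multiplies the
# baseline hazard by:
print(f"Equivalent hazard ratio: {math.exp(contribution):.2f}")  # ~1.25
```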
Should Someone Have Eaten that Chicken?
We could also assume that because the zeitgeist of an earlier time called for starting pitchers to throw long games and work more frequently, there was a selection bias for those pitchers who could handle that sort of workload, both in terms of who got scouted and signed and who survived through the minors with that sort of expectation. Because baseball has moved away from this sort of strategy, there’s no longer that pressure to find that sort of workhorse body. No one runs a four-man rotation anymore, so when a team tries one, it leads to more injuries, because pitchers aren’t trained for that sort of workload, because no one runs a four-man anymore. Anyone else hungry for some chicken and eggs?
We might be stuck with the five-man rotation mostly because teams now select, train, and promote players to exist within an ecosystem that contains the five-man rotation. If one team (the Rockies, for instance) tried to go to a four-man rotation, and expected those pitchers to reach the same 100-pitch threshold that is commonly expected of starters today, they would essentially have to stock their rotation almost entirely with products of their own system who had been trained to do that. Where else would they get spare parts when they needed one? Perhaps the five-man rotation is inefficient in the sense that if we could re-create the entire baseball ecosystem, we wouldn’t design things this way, but it’s a conceit to believe that we can.
And about that team that decided to go with a four-man rotation: if they did try, they’d have to draft and sign players who could handle it. I’m left to wonder whether that would put them at some sort of disadvantage. Perhaps the real driver of the five-man rotation is that when you have an extra day of rest, and have to start only 34 games rather than 40 over the course of a season, you can pitch differently, and perhaps more effectively. It is true that fifth starters are about a run worse per nine innings than aces, and from that perspective, it makes no sense to give a fifth starter the ball. But that assumes that the sort of aces who would exist in a world where three days’ rest was normal would be the same as those who exist now. That’s an assumption that needs more examination.