Last week, there was an article making the rounds about a baseball study that was published in an actual scientific journal! The study, published in the Proceedings of the National Academy of Sciences (PNAS) and written by researchers Alex Song, Thomas Severini, and Ravi Allada of Northwestern University, looked at major-league games from 1992-2011, specifically at the effects on teams of travel across time zones.

It makes sense that travel might impact a player’s (and a team’s) performance. MLB teams are located in four different time zones, and when a team goes from New York to Los Angeles they might find themselves in a position where their bodies think it’s 7:00 pm and time to start tonight’s game, but the clock thinks that it’s 4:00 pm. The clock usually wins that debate.

While I always appreciate “real” scientists taking up baseball as a medium, we also need to make sure that the work is high quality. I am not enough of an expert on circadian rhythms and sleep adjustment to pass judgment on their contention that a human being needs one day per time zone crossed to fully adjust to the new time, but that sounds reasonable and they have some references, so I will accept it as valid.

The problem is everything else that they did.

The full article (which can be read here) says that the researchers used game logs from Retrosheet (sounds good so far) and figured out how many time zones teams had recently traveled over and how much time had passed since they did so. Based on the idea of one day per time zone of adjustment (that is, going from New York to Los Angeles crosses through three time zones, and so would take three days to fully adjust to), they classified teams into two groups—either 2-3 hours offset or 0-1 hours offset.

On the first day of their trip out to LA, our team traveling from NYC would be three hours offset, but by the third day, they would be down to one hour. The researchers did explicitly account for scheduled off days, which the schedule-maker mercifully usually puts in front of a big trip. However, they then did something rather strange, or at least explained what they did in a strange way.

I assume that they wanted to make sure that they were controlling for the fact that there are seven teams in the Pacific Time Zone (the five California teams, Seattle, and Arizona, which doesn’t observe daylight saving time, but during the summer is effectively on Pacific time), one in the Mountain Time Zone (Rockies), and everyone else is in either the Central or Eastern zones. It means the Dodgers (and Angels, Padres, A’s, Giants, and Mariners), in order to play their 81 road games, have to cross time zones a bit more often. In fact, the Eastern divisions in both leagues consist of teams entirely based in the Eastern time zone, and the Central divisions consist of teams in the Eastern and Central zones, which are only an hour apart.

It’s going to mean a lot of games in the same time zone or only one hour removed (which the researchers don’t count as “jet lagged”) for those teams. The West Coast teams, when traveling to a road game not in their division, will almost always be crossing two time zones. It means that they will be over-represented in the “jet lag” group. We have to worry about the fact that “jet lagged” might just be a comment on the relative health of those West Coast teams over the two decades under study.

So, to (sort of) address this they included parameter estimates to adjust for each franchise’s performance both at home and away. (Good North Siders that they are—this is Northwestern in Evanston—they used the Cubs as their reference group.) What’s not clear is whether they paneled those effects by year. It seems that they just asked for a parameter that adjusted for performance over the entire 20-year timeframe. There’s a big hole in that.

For example, one of their outcome variables was the number of hits that a team notched in a game. (Indeed, all of their outcome variables were team-level game totals.) A franchise term might help to adjust for the fact that the Dodgers traveled a lot during those 20 years, but there’s more important information that comes with (Dodgers = 1) than that. There’s going to be some variation year-to-year—especially over a 20-year sampling window—in the number of hits that an individual Dodgers team could expect based on who was wearing Dodger blue that year. There were lean years and great years for just about all teams in that time frame. By simply asking for a franchise effect over 20 years, they’re missing a lot of important information.
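To make the contrast concrete, here is a rough sketch of the two specifications in statsmodels formula notation. The data frame, column names, and the use of plain OLS are all illustrative stand-ins for making the structural point; they are not the paper’s actual models, which aren’t fully specified.

```python
# Illustrative contrast between a single franchise effect and team-season
# effects. "games" is a hypothetical team-game table with columns hits,
# lagged, team, and season; plain OLS is used only to keep the sketch short.
import pandas as pd
import statsmodels.formula.api as smf

def franchise_effect_models(games: pd.DataFrame):
    # What the paper appears to have fit: one effect per franchise,
    # pooled over the entire 1992-2011 window.
    pooled = smf.ols("hits ~ lagged + C(team)", data=games).fit()

    # What the critique asks for: a separate effect for each team-season,
    # so a 100-win club and a 100-loss club from the same franchise
    # aren't forced to share one parameter.
    games = games.assign(team_season=games["team"] + "_" + games["season"].astype(str))
    by_season = smf.ols("hits ~ lagged + C(team_season)", data=games).fit()
    return pooled, by_season
```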

There’s also a lack of accounting for the fact that over those 20 years, MLB changed a great deal. In 1992 (when their sample began), teams scored an average of 4.12 runs per game. By 2000, that had reached a peak of 5.14 runs. In 2011 (when their sample ended), it had fallen to 4.28. That matters for any model that pools two decades together. On top of that, during their sampling frame, MLB switched from a balanced schedule in the 1990s to an unbalanced schedule in 2001. In the balanced format, teams played the other teams in their league an approximately equal number of times, regardless of their division. In the unbalanced format, teams spent more time playing teams in their own division and, by extension, their own time zone.

We also didn’t see a direct adjustment for one well-known time zone practice. On the day before a team travels, they will often send the next night’s starting pitcher ahead to the next city to get settled. On top of all that, while the story got play as “jet lag affects baseball players,” their findings were not particularly strong. Because they wanted to control for home field advantage, they ran their analyses first for only the home team, then the visitors. They also ran separate analyses for offensive and defensive events. They also ran separate analyses based on whether the team had traveled east along the by-ways or were stabbing westward.

The problem was that while they did have findings that were significant, findings were not consistently significant across specific outcome variables (they tested several, including walks, strikeouts, singles, doubles, triples, home runs, stolen bases, and oddly sacrifice flies). When you run that many tests, it’s not surprising that a few come out significant. It’s called building Type I error. The hypothesis that teams might be affected by jet lag is reasonable. It might even be true, but there are so many problems running through this study that I don’t think we have conclusive proof.

So …

Warning! Gory Mathematical Details Ahead!

I used data from 2012-2016, and we’re going to do this at the plate appearance level, rather than the team-game level. Similar to the Northwestern researchers, I calculated how “lagged” a team was, based on the time zone of their most recent stop and the number of days that had passed since then for them to adjust. I coded all hitters and pitchers as either “lagged” (adjustment offset was 2-3 hours) or “not lagged” (adjustment offset was 0-1 hour).
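For the curious, here is a minimal sketch of what that classification looks like in code, assuming one day of adjustment per time zone crossed. The time zone labels and the “days since arrival” bookkeeping are illustrative choices, not the actual Retrosheet fields.

```python
# A minimal sketch of the lag classification described above, assuming
# one day of adjustment per time zone crossed.

TZ_OFFSET = {"Eastern": 0, "Central": 1, "Mountain": 2, "Pacific": 3}

def hours_offset(game_tz, acclimated_tz, days_since_arrival):
    """Hours of circadian offset remaining, given the time zone of the game,
    the zone the team is still acclimated to, and how many full days have
    passed since arriving."""
    zones_crossed = abs(TZ_OFFSET[game_tz] - TZ_OFFSET[acclimated_tz])
    return max(zones_crossed - days_since_arrival, 0)

def is_lagged(game_tz, acclimated_tz, days_since_arrival):
    """'Lagged' means 2-3 hours of remaining offset; 0-1 hours is 'not lagged'."""
    return hours_offset(game_tz, acclimated_tz, days_since_arrival) >= 2

# A New York team playing in Los Angeles the day after arriving:
print(is_lagged("Pacific", "Eastern", days_since_arrival=1))  # True: 2 hours left
```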

For each batter-pitcher matchup, I also created a control variable based on their seasonal stats using the log-odds ratio method. For those unfamiliar with the technique, this uses the batter’s seasonal rate for a statistic, the pitcher’s, and the league’s context to create a control variable on a PA-by-PA basis for what we might expect. We can then enter any other variables into the mix that we want. If any of them come out significant, we know that they are predicting the outcome of a plate appearance over and above what our estimate just based on a “random roll of the dice” model might predict.
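Here’s a rough sketch of that log-odds ratio calculation, with made-up rates just for illustration: the batter’s log-odds plus the pitcher’s log-odds, minus the league’s, gives an expected log-odds for that specific plate appearance, which then goes into the model as a control.

```python
import math

def log_odds(p):
    """Probability -> log-odds (logit)."""
    return math.log(p / (1 - p))

def matchup_control(batter_rate, pitcher_rate, league_rate):
    """Log-odds ratio method: batter log-odds plus pitcher log-odds,
    minus the league log-odds, gives the expected log-odds for this
    particular plate appearance."""
    return log_odds(batter_rate) + log_odds(pitcher_rate) - log_odds(league_rate)

# Illustrative numbers: a 25% strikeout hitter against a 30% strikeout
# pitcher in a league where 21% of plate appearances end in a strikeout.
control = matchup_control(0.25, 0.30, 0.21)
expected = 1 / (1 + math.exp(-control))
print(round(expected, 3))  # about 0.35, which becomes the control covariate
```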

In addition to controlling for batter-pitcher matchup (and indirectly, for the league context), I controlled for whether the batter and pitcher were of the same handedness because a) we know that’s important and b) it’s really easy to do.

I looked at several possible outcomes from a plate appearance (strikeout, walk, HBP, single, extra-base hit, home run, out in play, different types of batted balls, and whether there was an on-base event of any kind). Similar to their methodology, I analyzed teams when they were on offense separately from when they were on defense. I looked at the lagged variable in general, then for eastward travel, then for westward travel. I looked at it when teams were at home and teams were on the road.
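Each of those cells boils down to a regression along these lines: a logistic regression predicting one outcome from the matchup control, the handedness control, and the lag indicator. The data frame and column names below are hypothetical; this shows the general shape, not the exact production code.

```python
# A sketch of one cell of the analysis: logistic regression for a single
# outcome (here, "strikeout") with the matchup control, same-handedness,
# and the lagged indicator. "pa" and its columns are hypothetical.
import pandas as pd
import statsmodels.api as sm

def fit_lag_model(pa: pd.DataFrame, outcome: str):
    """Fit outcome ~ matchup control + same-handedness + lag indicator
    and return the fitted results, so we can check the 'lagged' term."""
    X = sm.add_constant(pa[["matchup_control", "same_hand", "lagged"]])
    return sm.Logit(pa[outcome], X).fit(disp=False)

# results = fit_lag_model(road_offense_pa, "strikeout")
# print(results.params["lagged"], results.pvalues["lagged"])
```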

And yes, I got a few significant findings. When you run that many analyses, you’re going to get some of those. Here, listed in a nice table, are the outcomes that had significant findings for the “jet lag” variable, independent of the control variables.

Teams on Offense

Jet lag overall: Fly balls (down)
At home: No findings
On the road: Groundballs (up), Fly balls (down)
When traveling west: Walks (down), Outs in play (up), Balls in play (up), Fly balls (down)
When traveling east: Outs in play (down), Balls in play (down)

Teams Pitching/On Defense

Jet lag overall: HBP (up), Fly balls (down)
At home: No findings
On the road: HBP (down), Fly balls (down)
When traveling west: Groundballs (up), Fly balls (down)
When traveling east: HBP (down), HR (up)
When the starter is pitching: HBP (down), Balls in play (up), Fly balls (down)
When a reliever is pitching: Doubles/Triples (down), Outs in play (down), Balls in play (down), Fly balls (down)

It turns out to be something of a random assortment of things that come up, and that’s a problem. There’s not a lot of coherent story to be had here. Maybe there’s something about jet lag affecting fly balls, but I’m struggling to come up with a coherent rationale behind it other than “that’s a lot of Type I error.”
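To put a rough number on that, here’s the back-of-the-envelope math, with illustrative counts and the simplifying assumption that the tests are independent (they aren’t, quite).

```python
# Back-of-the-envelope Type I error math for a battery of tests at alpha = .05.
n_tests = 10 * 12        # ~10 outcomes x ~12 situational splits, roughly
alpha = 0.05
expected_false_positives = n_tests * alpha
p_at_least_one = 1 - (1 - alpha) ** n_tests

print(expected_false_positives)   # ~6 spurious "findings" expected by chance alone
print(round(p_at_least_one, 4))   # ~0.998: all but guaranteed to find "something"
```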

For the most part, we have scattershot findings that, read together, are a bunch of gibberish. And honestly, that’s what the original study found as well. Despite how the story was covered, I don’t think there’s reason to believe that jet lag negatively affects player performance in MLB.

I See You’ve Done This Before

The jet lag hypothesis makes sense on its face, but not everything that makes sense at first is actually true. Anyone who has taken a cross-country (or trans-oceanic) flight can likely relate to the experience of jet lag, but major-league players might have a secret weapon in the fight against jet lag: they’ve done this before. It’s entirely possible that MLB players experience jet lag physiologically the same way that all the rest of us mortals do, but they also have every incentive to learn how to adapt to playing while jet lagged. Maybe it’s as simple as the fact that while they do cross time zones, their clocks are only offset by a couple of hours. They can deal.

The evidence shows that there isn’t really a coherent jet lag effect, despite how the story was covered in the media. And that’s a little concerning in these rather … interesting times. It’s a lesson in reading all the way through something to make sure you’ve got good information.

tearecrules
2/01
This is something that's been studied in the NFL as well. West Coast teams have like a .270 winning percentage when visiting the East Coast. The theory is that it's not necessarily the travel that is the problem, but the start time. West Coast teams playing the early game (1pm start time) do awful (close to a .200 winning percentage), but in the late game or on Sunday or Monday night they're much closer to the normal road team winning percentage. The early games just move everything about the day, so players are waking up earlier than they would in their home time zone, warming up earlier, etc. The later games let them maintain a schedule more in line with their normal time zone.

Since MLB games are usually starting later in the day, the players would still have the opportunity to "sleep in" and get a normal night's rest, etc.
LlarryA
2/01
I've seen that as well. IIRC, East Coast teams going west don't have nearly the drop-off, which makes some sense if the issue is more a matter of routine than just sleep. NFL teams have been trying a couple of other things to adjust for this. One is going east a day earlier; another is staying east for the week when they have two East Coast games in a row. Both of those would help the players (and coaches, for that matter) restore their usual gameday routine and take the actual time spent traveling out of the schedule. As more teams start doing these things, we may get some data on how effective it is.
tearecrules
2/01
The study also covered a 40 year period. Coast-to-coast travel for millionaires flying on a billionaire's dollar is probably a different experience in the last 10 years covered than in the first 10 years. I'd be curious if some of the recent improvement from WC teams is due to improved travel planning. If keeping guys on schedule gets/prevents some additional points and results in an extra win on the East Coast it would be a huge deal for a 16 game season.
lichtman
2/01
Russell nice article and I am glad you critiqued this paper. I was not very impressed with it for exactly the reasons you articulate.

I surely would like to see your results converted to expected runs, i.e., how the change in the various events would affect runs per game, using linear weights.

I'd also like to see simply W/L records and RS and RA numbers for jet lagged and non-jet lagged teams overall and for west to east and east to west, adjusted for H/R, team strength, schedule, and era of course.

Just listing a bunch of significant variables with no numbers is not very helpful or interesting to be honest. I mean we don't really care about GB or FB rates other than how that might affect run scoring.

Also you talk about them building Type I errors. However they also used Benjamini–Hochberg false discovery rates. Isn't that supposed to account for that?
pizzacutter
2/01
They did use B-H, but when you do adjustments like that, it doesn't eliminate false positives. It just means that you're being a bit more careful. The problem is that their conclusion that "jet lag has an effect" seems to be based on "We got _some_ significant findings, so they must mean something!" rather than "We got some significant findings, although even with our post-hoc corrections against Type I, there's still going to be a few false positives in here, and even if these are all real findings, they don't tell an actual baseball narrative that makes any sense."
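For anyone following along, here's a rough sketch of what B-H actually does, with toy p-values. It caps the expected share of false discoveries at some level q; it doesn't certify that every surviving result is real.

```python
# Rough sketch of the Benjamini-Hochberg step-up procedure with toy p-values.
def benjamini_hochberg(p_values, q=0.05):
    """Return indices of hypotheses rejected at false discovery rate q."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # Largest rank k with p_(k) <= (k / m) * q; reject the k smallest p-values.
    k = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * q:
            k = rank
    return sorted(order[:k])

pvals = [0.001, 0.009, 0.04, 0.06, 0.11, 0.21, 0.33, 0.48, 0.72, 0.91]
print(benjamini_hochberg(pvals))  # [0, 1]: the 0.04 doesn't survive the correction
```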

Also, I looked at a suite of outcome events (single, XBH, HR, K, BB, etc.) and nothing really shook out significantly. There's just not a lot of LWTS to be had here.
lichtman
2/01
Very good, thanks for the reply.
forrest
2/01
Might be worthwhile to look into rookie v. veteran effects, if experience is a factor. Did you have a chance to compare those groups?
vincentvdvinne
2/02
Thank you Russell! Great follow-up on an intriguing but not very satisfying paper. As a BP reader with a day-job investigating the effects of disruptions of the biological clock on physiology and health I was excited to see the original paper and then very frustrated about the lack of any stats beyond the basic boxscore to address the question.

Although I agree with your assessment that the authors seemed to be looking for anything that would give them a significant result, their first conclusion that transcontinental travel has an adverse effect on baseball performance seems to be still valid. This has been published before using the 91-93 seasons (Recht et al., 1995, Nature). This study showed that East coast teams won 63% of games when hosting West coast teams that traveled transcontinentally in the previous 2 days. West coast teams hosting East coast teams won 56% of games while home teams facing teams that didn't travel transcontinentally won 54% of games. These results were confirmed in a follow-up study (Winter et al., 2009) and in the present study.

Overall, I think we can conclude that jetlag impacts baseball performance (winning percentage) but that we still don't understand what aspects of player performance are responsible for this. Hopefully someone will be able to use more advanced baseball stats to address this question in the near future.
vincentvdvinne
2/02
I asked the authors, and they did indeed use a single parameter over the full 20 years to model the team strength.