
This week’s question comes from Joel Wirth:


Plenty is made about players’ pre- and post-All-Star Break numbers, but what
I’m interested in is “cool” and “hot” weather splits.
Who does significantly better/worse in June, July, and August compared to
April, May, and September/October? Thanks.

Thanks for the question, Joel.

We often hear about players who start slowly attributing it to the colder
weather. And indeed, science gives us good reasons to believe that offense
should rise as the temperature does. Robert K. Adair’s fine book
The Physics of Baseball
has a good explanation of why balls carry farther in
warm air than in cold, for example.

However, is it the temperature that causes the difference, or simply a
player getting used to his in-season regimen and routine? Since the cooler
days tend to occur at the beginning and the end of the season, it will be
hard to separate the climatic effect from personal adjustments and work
habits.

Still, let’s suppose for a moment that some players are more
sensitive to temperature than others. Now, if offense as a whole is
up during warm weather, we shouldn’t be surprised that any individual player
plays better in the heat. As a baseline, we would expect a certain amount of
increase from any player. What we’re really interested in is whether players
tend to gain more (or less) than average from year to year.

Thanks to groups like Retrosheet
and The Baseball
Workshop, we have recorded game-time temperature data for most games of the
past few years, and can use this to see whether certain players gain more or
less offense than average when the temperature rises. As it turns out, the
mean game-time temperature over the past decade has been around 72 degrees,
so I’ll break up the data into two categories:

Cold games (72 degrees or less)

Hot games (73 degrees or more)

(For the nitpickers: all game temperatures are recorded as integers, so we
don’t have to worry about where a 72.5-degree game ought to fit.)

I looked at all players active in 1999 and 2000, and selected those who had
at least 100 plate appearances in both hot and cold games in each season
(ignoring games for which temperature data wasn’t available). I computed the
OPS (on-base percentage plus slugging average) for each player in each
temperature condition, and took the ratio of the Cold OPS to the Hot OPS.
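As a rough sketch of that computation (the per-game field names and data layout below are illustrative assumptions on my part, not the actual code run against the Retrosheet/Baseball Workshop data):

```python
# A minimal sketch of the Cold/Hot OPS ratio (C/H) computation.
# The per-game field names here are assumptions for illustration only.

def ops(games):
    """On-base percentage plus slugging average over a set of game batting lines."""
    h   = sum(g["h"]   for g in games)
    bb  = sum(g["bb"]  for g in games)
    hbp = sum(g["hbp"] for g in games)
    ab  = sum(g["ab"]  for g in games)
    sf  = sum(g["sf"]  for g in games)
    tb  = sum(g["tb"]  for g in games)
    obp = (h + bb + hbp) / (ab + bb + hbp + sf)
    slg = tb / ab
    return obp + slg

def cold_hot_ratio(games, cutoff=72):
    """C/H: OPS in games at or below the cutoff divided by OPS in warmer games."""
    cold = [g for g in games if g.get("temp") is not None and g["temp"] <= cutoff]
    hot  = [g for g in games if g.get("temp") is not None and g["temp"] >  cutoff]
    return ops(cold) / ops(hot)
```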

For example:

During 1999, Phil Nevin had an 816 OPS in cold games, and a 948 OPS
in hot games. His ColdOPS to HotOPS ratio (or C/H) was 816/948 = 0.861. In
other words, his production declined about 14% in cold weather.

During 2000, Nevin posted a 748 OPS in cold games, and a 1037 OPS in hot
games, for a C/H of 0.721, an OPS decline of nearly 28% in cold games.
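Plugging Nevin’s published splits straight into that ratio reproduces the figures above (OPS written here as raw three-digit values, e.g. 816 for .816):

```python
for year, cold_ops, hot_ops in [(1999, 816, 948), (2000, 748, 1037)]:
    ch = cold_ops / hot_ops
    print(f"{year}: C/H = {ch:.3f}  (about a {1 - ch:.0%} OPS decline in cold games)")
# 1999: C/H = 0.861  (about a 14% OPS decline in cold games)
# 2000: C/H = 0.721  (about a 28% OPS decline in cold games)
```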

I ran similar calculations for each qualifying player (224 in all), and
plotted their 1999 C/H versus their 2000 C/H to see if a distinct linear
trend emerged that would indicate that players respond differently to
temperature fluctuations, and charted the results in the graph below:

[Graph: 1999 C/H vs. 2000 C/H for the 224 qualifying players, with the best-fit trend line in black]

As it turns out, not only is there no consistent temperature effect, but if
anything the reverse holds: players who do unusually well in hot weather in
one year are slightly more likely to be below average the next year (a
correlation of -0.15165). The black line shows the best-fitting linear trend
in the data, and it slopes slightly downward rather than upward, as it would
if heat-preferring players existed. I wouldn’t jump to the opposite
conclusion (that being unusually good one year makes you more likely to be
unusually bad the next), especially with such a low correlation. The linear
relationship between 1999 and 2000 C/H performance explains less than 2.3%
of the overall variance, and should probably be attributed to chance at this
juncture.
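For anyone who wants to replicate the year-to-year comparison, the correlation, trend line, and variance-explained figure amount to something like the following sketch (it assumes the 224 paired C/H values have already been computed as above):

```python
import numpy as np

def year_to_year_trend(ch_year1, ch_year2):
    """Correlation, best-fit line, and variance explained between two seasons'
    C/H ratios, one pair per qualifying player."""
    x = np.asarray(ch_year1, dtype=float)
    y = np.asarray(ch_year2, dtype=float)
    r = np.corrcoef(x, y)[0, 1]             # roughly -0.15 for 1999 vs. 2000
    slope, intercept = np.polyfit(x, y, 1)  # the plotted best-fit trend line
    return r, slope, intercept, r ** 2      # r**2: share of variance explained (~2.3%)
```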

Of course, there are several factors we didn’t include in the analysis. For
example, players whose teams play in domed stadiums see a nearly constant
temperature for half their games. If that temperature is 72 degrees, a large
portion of their “Cold” games are shaped by that one park’s specific
features (a dome that plays as a pitchers’ park would unduly depress the
Cold OPS figures) rather than by a genuine mix of hot and cold games in that
park. A hot game in Denver does not have the same effect on OPS as a hot
game in Oakland, even if the temperatures are identical.

We haven’t considered players who’ve changed teams, divisions, or leagues,
and for whom the mix of parks in which they played is different from year
to year. Nor have we considered how different climatic patterns affect the
mix of hot/cold games in the same park from year to year. We’re also not
considering how temperatures may change during a game. Plate appearances in
the late innings of night games probably tend to be cooler by a couple of
degrees, whereas late innings in day games might warm up a bit. We’re
assuming that all plate appearances during the game can be assigned to the
game-time reported temperature without significantly distorting the
relationship we’re investigating.

In fact, there still could be a measurable temperature-sensitive
characteristic for players that can be shown in their C/H ratio, but which
only emerges over longer periods of time. The random fluctuations in
comparing just two years of data may not be sufficient for the pattern to
emerge. But the short answer, for now, is that we haven’t found any evidence
that players consistently under- or over-perform expectation in games of
varying temperature.


Keith Woolner is an author of Baseball Prospectus.
