
Part Two

Baseball: an uncertain game

Sabermetrics is all about making accurate estimates of processes that occur on the baseball field and are, in some sense, random. In my previous post, I looked at the process of aging, particularly from one age to the next, in terms of WAR. For the most part, we think of player aging in terms of either the curve that is created by averaging across all players of a given age or, alternately, in terms of the aging patterns of players that are “nearest” in some sense to the player of interest.

These methods, while useful in many contexts, are suboptimal when it comes to predicting the performance of a given player. In the case of the average curve, which underlies the so-called “delta method”, the aging pattern of the average player does not map well onto players who have not been particularly close to average. The k-nearest neighbors method, which incorporates aging patterns of historical seasons that are “near” the previous season of the player of interest, is complicated by the problem of quantifying “nearness” between seasons by very different players. Moreover, neither method can quantify the probability of a given player’s next-season performance; each yields only a point prediction. For these reasons, I’ve embarked on a quest to design a simple model that can describe player aging probabilistically, i.e. a model under which the probability of reaching a given WAR can be computed.

The usefulness of such a model can be seen by returning to an example from before:

Say you’re a GM in February 2018. You get a call from the Dodgers, and they are ready to part with Yasiel Puig in the last year of his contract. In exchange, they’re looking to pull three prospects from you. The consensus is that, over the lifetime of the trade, you’d expect to come out slightly better in the long run with team control of the prospects than with one year of Puig. But at the same time, you can see that your window is closing, and a 3.0 WAR season from Puig in 2018 would bounce your projected record from a shot at a wild card slot to pennant contention. Puig would be going into his age-27 season in 2018, and in the previous season he had 3.8 WAR. Do you take the trade? Ultimately, what any GM would want in this situation is a prediction of the likelihood that Yasiel Puig delivers a 3.0 WAR season in 2018. Do we have an 80% chance of making the target numbers? One in four? Five percent? This is a capability that hasn’t appeared in the public baseball literature to date, and one that we could learn a lot from.

With all of this in mind, we can get started.

The model: understanding aging as a random percentage change in player win contributions

In order to formulate this model, we jump to the conclusion of the previous post. In short, in aging from one season to the next, a player’s percent change in win production appears to be a random process with a fairly predictable distribution. The percent change of a given player appears to be roughly chosen from a family of distributions, which are in turn specified as a function of both player age and the observed player value in the antecedent season. This can be visualized, roughly, in the following plot:

We start the model specification by assuming that the percent change in WAR is drawn from a Gaussian distribution, with a mean and variance that are both functions of player age and previous season WAR. From there, we will quantify and fit a model for these two parameters, and use it to predict and describe probabilities of future performances.

The model: gory details

The mathematical details that undergird the method are shown in this section. Feel free to skip ahead to the next section, which summarizes the results of this section, at any point.

To formalize the assumption from the previous section: we assume that the percent change going into the \(i\)th season, written \(\left( \frac{\Delta \mathrm{WAR}}{\mathrm{WAR}_{i - 1}} \right)_i\), can be described as a draw from a Gaussian distribution with mean \(\mu\) and variance \(\sigma^2\): \[
\left( \frac{\Delta \mathrm{WAR}}{\mathrm{WAR}_{i - 1}} \right)_i \sim
\mathcal{N} (\mu, \sigma^2)
\]

In order to capture the behavior shown above, I assume that two mathematical functions, each dependent on age and last season’s \(\mathrm{WAR}\), describe the mean and variance of the probability distribution of any given percent change in a player’s ability: \[\mu \approx \mu_\mathrm{model}(\mathrm{age}, \mathrm{WAR}_{i - 1})\] and \[\sigma^2 \approx \sigma^2_\mathrm{model}(\mathrm{age}, \mathrm{WAR}_{i - 1})\] This is a rather arbitrary step, but using intuition and some trial and error, I arrived at a functional form: \[
\mu_\mathrm{model} = \alpha_0 + \exp(-\alpha_1~\mathrm{WAR}_{i - 1} - \alpha_2
~\mathrm{WAR}_{i - 1}^2) (\alpha_3~\mathrm{age}^2 + \alpha_4~\mathrm{age}
+ \alpha_5)
\]
and \[
\sigma^2_\mathrm{model} = \beta_0 + \exp(-\beta_1~\mathrm{WAR}_{i - 1}
- \beta_2~\mathrm{WAR}_{i - 1}^2) (\beta_3~\mathrm{age}^2 + \beta_4
~\mathrm{age} + \beta_5)
\]
where \(\alpha_0, \dots, \alpha_5\) and \(\beta_0, \dots, \beta_5\) are as-yet unspecified parameters of the model.
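As a concrete sketch, the two model functions can be written directly in code. (This is a minimal Python sketch; the function names and the list-of-parameters layout are my own choices, not anything fixed by the model.)

```python
import numpy as np

def mu_model(age, war_prev, a):
    """Mean percent change in WAR, given age, previous-season WAR,
    and parameters a = [a0, ..., a5]."""
    return a[0] + np.exp(-a[1] * war_prev - a[2] * war_prev**2) * (
        a[3] * age**2 + a[4] * age + a[5]
    )

def sigma2_model(age, war_prev, b):
    """Variance of the percent change in WAR; the same functional form
    with its own parameters b = [b0, ..., b5]."""
    return b[0] + np.exp(-b[1] * war_prev - b[2] * war_prev**2) * (
        b[3] * age**2 + b[4] * age + b[5]
    )
```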

In the plot below, I show two rows:

In the top row, I’ve taken historical data of percent change in \(\mathrm{WAR}\), put it into bins, and computed the mean and variance (the two descriptors of a Gaussian distribution) of the data in each bin, each represented by a dot. In the bottom row, the resulting model is shown. The model for the mean and variance is fit by maximum likelihood estimation (MLE), which computes the parameters \(\alpha_0, \dots, \alpha_5\) and \(\beta_0, \dots, \beta_5\) that best describe the historical data under the model. We can see that the model, in the bottom row, describes well the effects we see in the data in the top row.
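That fitting step can be sketched as follows, assuming the historical seasons are available as arrays of ages, previous-season WAR values, and observed percent changes. (The data below is synthetic, purely to make the sketch runnable; the real fit would use the 1955–2018 qualified-hitter data.)

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, age, war_prev, pct_change):
    """Negative Gaussian log-likelihood of the observed percent changes,
    with mean and variance given by the two six-parameter model functions."""
    a, b = params[:6], params[6:]
    mu = a[0] + np.exp(-a[1] * war_prev - a[2] * war_prev**2) * (
        a[3] * age**2 + a[4] * age + a[5]
    )
    var = b[0] + np.exp(-b[1] * war_prev - b[2] * war_prev**2) * (
        b[3] * age**2 + b[4] * age + b[5]
    )
    if np.any(var <= 0):  # keep the optimizer away from invalid variances
        return np.inf
    return -np.sum(norm.logpdf(pct_change, loc=mu, scale=np.sqrt(var)))

# Illustrative synthetic data standing in for the historical seasons.
rng = np.random.default_rng(0)
age = rng.uniform(22, 38, size=500)
war_prev = rng.uniform(0.5, 8.0, size=500)
pct_change = rng.normal(-0.1, 0.4, size=500)

x0 = np.concatenate([np.zeros(6), [0.2, 0, 0, 0, 0, 0]])  # start at a valid variance
result = minimize(neg_log_likelihood, x0, args=(age, war_prev, pct_change),
                  method="Nelder-Mead", options={"maxiter": 5000})
```

The optimizer's `result.x` then plays the role of the fitted \(\alpha\) and \(\beta\) parameters.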

The plots in the bottom row use a set of optimum parameters, calculated by using the data for all qualified hitters from 1955 to 2018:

\(i\) \(\alpha_i\) \(\beta_i\)
\(0\) \(-0.45742\) \(0.04599\)
\(1\) \(0.56816\) \(0.70118\)
\(2\) \(-0.00014\) \(-0.00172\)
\(3\) \(-0.10206\) \(0.03003\)
\(4\) \(5.00562\) \(3.15901\)
\(5\) \(-0.03484\) \(-0.01492\)

Finally, we can generate a prediction for a future \(\mathrm{WAR}\) by drawing from the distribution, adding one to the result, and multiplying by the previous season’s \(\mathrm{WAR}\). Alternatively, we can calculate the percent change in \(\mathrm{WAR}\) required to reach some desired \(\mathrm{WAR}\), and evaluate the Gaussian cumulative distribution function at that percent change, using the mean and variance given by \(\mu_\mathrm{model}\) and \(\sigma^2_\mathrm{model}\).
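Both uses of the distribution are short in code. (A sketch: `mu` and `var` stand in for the outputs of \(\mu_\mathrm{model}\) and \(\sigma^2_\mathrm{model}\), and a positive previous-season WAR is assumed.)

```python
from scipy.stats import norm

def predict_war(war_prev, mu, var, percentile=None):
    """Next-season WAR: the mean prediction, or a chosen percentile of the
    distribution of outcomes (assumes war_prev > 0)."""
    pct = mu if percentile is None else norm.ppf(percentile / 100, loc=mu, scale=var**0.5)
    return war_prev * (1 + pct)

def prob_reaching(war_prev, war_target, mu, var):
    """P(next-season WAR >= war_target): find the percent change needed,
    then evaluate the Gaussian survival function there."""
    pct_needed = war_target / war_prev - 1
    return norm.sf(pct_needed, loc=mu, scale=var**0.5)
```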

Applying the model: making predictions, evaluating probabilities

In practice, the model will be used to generate a mean and variance, from which an appropriate prediction will be made, as seen in this diagram:

This allows us to make predictions of the value a player is going to provide in a given season, either as the most likely outcome (taking the mean) or as a percentile estimate, incorporating the variance to give, say, the 85th-percentile WAR you would expect.

The real innovation is that you can query the model about the likelihood that a player reaches any given \(\mathrm{WAR}_\mathrm{query}\). This is of interest in the case of the Yasiel Puig example discussed earlier. In such a case, the probability can be calculated by a modified process:

Using this process, we can calculate the probability of a player’s season that is already in the historical record, or of one that has not happened yet.

In the next figure, we do so, calculating the prediction for hitter WAR for 2018 using the model and data from 2017.

If the model were to exactly predict the 2018 season, every point would lie on the line. We can see that there is a strong correlation between the predicted WAR and the actual WAR that occurred in 2018. One thing to note is that the model fails to predict the highest-WAR seasons.

Likewise, in the next plots, we can see the 2017 predictions:

and 2016 predictions:

each of these based on the previous season’s data.

We can also go back to the example of trading some prospects away for one season of Yasiel Puig. He had 3.8 WAR in his previous season. He’s going into his age-27 season, and you need 3.0 WAR for the trade to put your club in the playoffs. Using the model and the historical data, we would have predicted that Puig would deliver 3.07 WAR. Moreover, we could have calculated that there was a 51.4% chance that he would deliver at least 3.0 WAR in that season. Is that risk worth it? That’s in the eye of the beholder, but it goes without saying that this type of data could be very useful in the decision-making process for anyone from the front office to fantasy GMs.

Future extensions

In addition to the obvious use cases, there are a lot of ways we could apply the results. The possibility that I find most exciting, and the original inspiration for the project, would be to use the results as a prior distribution for a Bayesian method. The basic idea of Bayesian statistical methods is that they combine observed data with a “prior” estimate of a probability distribution of interest (in some sense, a belief statement) to produce an improved “posterior” estimate of the distribution, refining the belief statement with real observations. There are good resources for understanding the value of such approaches through simple examples, but, in short, Bayesian methods allow one to refine beliefs by optimally incorporating data, even if the initial beliefs are imperfect. The great potential here is to sharpen estimates of player skill, whether by using methods such as Kalman filtering to make better guesses at players’ “true talent” or by imbuing other methods, like k-nearest neighbors, with probabilities. Either could be done with a Bayesian formulation that uses this model as a prior distribution. In all, this model represents a public-domain probabilistic model for player aging, the first I know of, and I look forward to the paths it opens up for new frontiers of baseball analysis.
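As a tiny illustration of that direction, here is a conjugate Gaussian update that treats the aging model’s output as a prior on a player’s percent change and sharpens it with a noisy in-season observation. (All the numbers are placeholders invented for the example; nothing here is fit to real data.)

```python
def posterior(prior_mean, prior_var, obs, obs_var):
    """Conjugate Gaussian update: combine a Gaussian prior on a player's
    percent change with a noisy observation of it."""
    w = prior_var / (prior_var + obs_var)   # weight placed on the observation
    post_mean = prior_mean + w * (obs - prior_mean)
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var

# Prior from the aging model (placeholder numbers), updated with an
# early-season observation suggesting a smaller decline than expected.
m, v = posterior(prior_mean=-0.15, prior_var=0.09, obs=0.05, obs_var=0.16)
```

The posterior mean lands between the prior belief and the observation, and the posterior variance is smaller than either input, which is exactly the “refined belief” behavior described above.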

Thank you for reading

This is a free article. If you enjoyed it, consider subscribing to Baseball Prospectus. Subscriptions support ongoing public baseball research and analysis in an increasingly proprietary environment.
