Seemingly out of nowhere, it has become “I was wrong” season for the Baseball Prospectus fantasy team. First, Craig Goldstein wrote about undervaluing Starling Marte, and then J.P. Breen wrote about undervaluing Yovani Gallardo. Both articles do an excellent job analyzing what each author missed regarding the specific player. What I hope to look at today is not what was missed about a specific player, but rather what parts of human behavior cause us to err when forecasting player production.

In order to do so, let us take a look at forecasting and what humans actually do when they forecast. My favorite definition of forecast (the verb) is from Merriam-Webster and it goes, “to predict after looking at the information available.” I like this definition because it is convenient for my article. I also like it because it highlights that our forecasts are dependent on “the information available.” Relatedly, in Thinking, Fast and Slow, our main human, Daniel Kahneman, writes, “An essential design feature of the associative machine is that it represents only activated ideas.” Put differently, we cannot take into account that which we cannot imagine. I am throwing around a lot of combinations of words right now, so please allow me to simplify all this:

When forecasting, we often limit ourselves to using the information available and what we are able to imagine.

Whether we undervalue or overvalue a player, we often do so because we underestimate the chances of an unexpected outcome. In doing so, we necessarily overestimate the chances of the expected outcomes. Usually this mistake carries no cost because expected outcomes are, by definition, more likely to occur than unexpected ones. However, the more frequently we make this mistake, the greater our chances of being on the wrong side of an unexpected outcome.

Based on what I have observed, underestimating the chances of an unexpected outcome manifests itself primarily in two ways in fantasy baseball: (i) overestimating perceived trends and (ii) overestimating rookie performances. These errors are then compounded when combined with (iii) confirmation bias.

1. Overestimating perceived trends
Our brains love trends and patterns. In fact, they are wired not only to recognize trends, but to create them. This is often a good thing (for instance, in recognizing a threat, such as a drunk driver), but it can also create blind spots. We often see this in the veteran discount for older players who many fear will cease to be productive, or in the premium paid for prospects based on the assumption that they will continue to improve linearly. We also see this with the price paid for consistency, or perceived consistency; for example, the player you can bank on to play 160 games, until you cannot.

In J.P.’s article, he (and every other person, including myself) was somewhat blinded by the trend of Gallardo’s diminishing stuff. To our pattern-making minds, Gallardo was an easy valuation heading into 2014 because everything pointed in the same direction. Therefore, we dedicated no additional mental resources to analyzing Gallardo, and we missed the easily overlooked emergence of his sinker.

2. Overestimating rookie performances
As mentioned earlier, we often forecast using only the information at hand and what we are able to imagine. Kahneman calls this phenomenon “What You See Is All There Is.” With rookies, especially those coming off of stellar, partial-season campaigns like Danny Salazar, Wil Myers, Trevor Rosenthal, and Michael Wacha, this forecasting flaw causes us to overestimate the likelihood of the evident and the imaginable. Having only seen these players perform excellently, we are going to have a tough time imagining them playing poorly. For me, this was most notable in assessing the aforementioned Danny Salazar. Other than coming off of a somewhat recent Tommy John surgery, he had everything I wanted in a pitcher—the velocity, control, command, and three plus pitches. On top of that, I had never seen Salazar fail. And on top of that, when I looked back at information on Salazar, I did not find anything that would lead me to believe that he would not be great. When I tried to imagine a worst-case scenario for Salazar (beyond getting hurt), I could only imagine minor regression in stuff and command, which would lead to merely good rather than great production. What I could not imagine was Salazar dropping below average. Salazar pitching so poorly was not an “activated idea” for me; thus, I underestimated the chances of such an outcome.

Because I failed to place any weight on an unimagined outcome, I consequently overestimated the likelihood of every other outcome. Even though I was looking at the future probabilistically in thinking that Salazar had a chance to be good—maybe even great—by eliminating the chances of the most negative outcome, I had inflated my forecast for his 2014 production. As a result, I ended up overpaying for him to underperform on two of my three teams.
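The arithmetic behind that inflation is simple enough to sketch. The outcome labels, values, and probabilities below are invented purely for illustration (they are not anyone's actual projections of Salazar); the point is only that dropping an unimagined bad outcome and redistributing its probability mass mechanically raises the forecast.

```python
# Hypothetical outcome values for a player's season, in arbitrary "value" units.
# These numbers are made up for illustration, not drawn from any projection system.
OUTCOME_VALUES = {"great": 2.0, "good": 1.0, "average": 0.0, "poor": -2.0}

def expected_value(probabilities):
    """Probability-weighted value across the outcomes defined above."""
    return sum(p * OUTCOME_VALUES[outcome] for outcome, p in probabilities.items())

# A forecast that entertains the full range of outcomes, bad ones included.
full_range = {"great": 0.25, "good": 0.35, "average": 0.25, "poor": 0.15}

# The same forecast after the unimagined "poor" outcome is zeroed out and its
# probability mass is spread over the remaining, rosier outcomes.
blind_spot = {"great": 0.30, "good": 0.40, "average": 0.30, "poor": 0.0}

print(expected_value(full_range))  # the tail drags the forecast down
print(expected_value(blind_spot))  # erasing the tail inflates the forecast
```

Nothing about the player changed between the two forecasts; only the forecaster's imagination did, and that alone is enough to turn a fair price into an overpay.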

3. Confirmation bias
Overestimating trends and rookie performances is one thing, but the situation worsens when we take confirmation bias into account. Confirmation bias is the tendency to favor information that confirms our beliefs or hypotheses. In other words, once I conclude that Salazar will not be bad, I am more likely to choose to read articles singing his praises than articles saying he is overrated. Combine this with a world that now allows us to avoid nearly all disconfirming information (we choose the analysis we want to hear in many aspects of our lives; see: media, social media), and we are primed to, as J.P. described it, not see the forest through the trees.

Now that we know why we err in so many instances, we need a plan to overcome these obstacles. But how do we make ourselves imagine what we do not imagine at first? How do we see the forest through the trees before we make our valuations? What I have found in looking at this process this season is that it starts with being tenacious about finding disconfirming information. To be better forecasters, we need to be obsessed with proving ourselves wrong. Put more practically, do not end your analysis until you have found disconfirming information or at least explored as many avenues for potential disconfirming information as possible. Whether this involves reviewing past concerns from scouting reports (very helpful in finding disconfirming info regarding successful rookies) or simply asking what would need to be true in order for our assumptions to be false, we need to be actively checking our blind spots—especially considering that we are predisposed to buy into our own beliefs and hypotheses.

The last part I would like to point out is that forecasting player production has less to do with thinking a player will be good or bad and more to do with entertaining all the possible outcomes. What I mean by this is that thinking that Danny Salazar was going to perform well was not an error in forecasting. Thinking that there was no chance that Salazar was going to be below average was a huge error in forecasting. As the always excellent Shane Parrish (Farnam Street Blog) quotes Will Bonner’s Mobs, Messiahs, and Markets: Surviving the Public Spectacle in Finance and Politics, “You don't win by predicting the future; you win by getting the odds right.” By searching for disconfirming information, we can better find our blind spots, and thus improve at getting the odds right. This way, we can leave predicting the future to bolder pundits and simply attempt to improve our forecasting in order to improve our chances at winning.

Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011. Print.

Parrish, Shane. "You Don’t Win by Predicting the Future; You Win by Getting the Odds Right." Farnam Street. 7 Oct. 2012. Web. 21 Aug. 2014.