According to the usual narrative, sabermetrics was invented by Bill James in the late '70s. Or possibly by *Baseball Prospectus* in the mid-'90s. Or maybe by Michael Lewis in 2002. Or whenever teams started to hire bloggers. For the longest time, those teams toiled under a veil of ignorance until, thankfully, Al Gore invented the internet and *then* they were able to see the light.

That's how it happened, right?

This one was inspired by a question from last Wednesday's episode of Effectively Wild. A listener asked what Ben and Sam thought were the most important sabermetric discoveries in … well, the history of sabermetrics. He stipulated that the one about OBP being more important than batting average was too obvious, so it didn't count. Sam suggested that DIPS was the other obvious one. For too long, teams relied on batting average and ERA when they should have been looking at OBP and FIP. Or did they?

**Warning! Gory Mathematical Details Ahead!**

It's hard to know whether teams valued certain stats just by looking at those stats over the years. For example, if OBP was higher in the '90s, that might simply reflect the run environment of the era rather than anything teams valued. So how can we tell whether teams actually valued OBP? Let's ask the simplest question of all: Did players with high OBPs come back the next year? But then how do we get around the era problem? A .330 OBP, even a few years ago, would have marked a perfectly average player. Nowadays, he might make the All-Star team.

However, if we do this year-by-year, we can sidestep that whole problem. In deciding which players to invite back for the 1962 season, the general managers of the day had only the stats available through 1961 to make their decision, and they were probably weighting that most recent year the most heavily. So I gathered all players who had at least 250 PA in a year and calculated their batting average and OBP for that year, then checked whether they came back the following year for another 250 PA. I started with 1954–55 and went all the way up to 2013–14.

I created a series of logistic regressions (came back vs. did not) with both AVG and OBP entered into a stepwise equation. I wanted to see which variable would enter the equation first, while allowing for the fact that the other might also enter. The results? Starting in 1954, batting average was the better predictor of whether a player came back. From that point until the turn of the millennium, OBP enters the regression in only a handful of years. But in 2004, things change a bit. OBP is the better predictor of return for the first time since 1995. And from 2007 to 2014, the only years in which OBP *doesn't* outperform batting average as a predictor of return are 2010 and 2011. (In '11, it still enters the regression as significant alongside batting average.)
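For readers who want to see the shape of the method, here's a minimal sketch of one season's worth of the analysis. This is not my actual code or data: the hitters are synthetic, the "came back" flag is simulated, and the forward step is approximated by comparing single-predictor log-likelihoods (the predictor with the higher log-likelihood is the one a stepwise procedure would enter first).

```python
import numpy as np

def fit_logit(x, y, iters=25):
    """Fit a one-predictor logistic regression by Newton-Raphson.
    Returns the maximized log-likelihood."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)
        hess = X.T @ (X * (p * (1 - p))[:, None])
        beta += np.linalg.solve(hess, grad)
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Synthetic stand-in for one season's hitters (the real data would be
# the actual AVG, OBP, and came-back flags for everyone with 250+ PA).
rng = np.random.default_rng(0)
n = 400
avg = rng.normal(0.260, 0.025, n)
obp = avg + rng.normal(0.070, 0.020, n)      # OBP correlated with AVG
p_return = 1.0 / (1.0 + np.exp(-40 * (obp - obp.mean())))
came_back = (rng.random(n) < p_return).astype(float)

# Forward step: the predictor with the higher single-variable
# log-likelihood is the one that "enters the equation first."
z = lambda v: (v - v.mean()) / v.std()       # standardize for stable fitting
ll_avg = fit_logit(z(avg), came_back)
ll_obp = fit_logit(z(obp), came_back)
print("first in:", "OBP" if ll_obp > ll_avg else "AVG")
```

Run once per year-pair and you get the sequence of "which stat entered first" calls described above.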

So far, that sounds about right. What about pitching?

I ran a similar analysis for all pitchers who faced at least 250 hitters in one season, calculating their ERA and FIP for that year. Success was an invitation back to face another 250 hitters the next year. It looks as though teams were unaware of the power of OBP until the 2000s, even though it was well known decades earlier. Were they similarly unaware of FIP, which didn't actually exist before the 2000s?

Actually, yeah. It turns out that for the last 60 years, FIP and ERA have battled back and forth as the better predictor of whether a pitcher would come back. In fact, from 1951 to 2014, ERA beat FIP as a predictor 32 times; FIP beat ERA 31 times. Limited to starters (more than 75 percent of games were starts), ERA does a little better (34–24; there are years when neither seems to predict!), as it also does with relievers alone (31–21), but there are plenty of years before FIP was an actual thing when it served as the better predictor of whether a pitcher was coming back.

**Nothing New Under the Sun**

Let's rewrite the narrative a little bit. Around the time of *Moneyball*, teams really were pretty much in thrall to batting average, even though OBP is a better stat. I guess there really was something about walks that kept teams from figuring out their true value. To this day, a walk doesn't count as a “real” at-bat because in the 1910s, someone decided that since the batter didn't “do” anything, he shouldn't get credit. Of course, sometimes it takes all the will a batter has to let a pitch go by. Now it seems that teams really have figured it out. Thanks, Michael Lewis.

But with respect to pitching, teams probably weren't as clueless as we all believe. Over the years, FIP has been a decent predictor of whether a pitcher would come back, even in the dark ages of the '60s, '70s, and '80s. That means that on some level, teams must have known that a lot of what happened on batted balls was heavily influenced by dumb luck. Or, in other words, DIPS theory was at least partially in practice in MLB before it was fully articulated. So was DIPS a revolution or simply the articulation of something that we already knew implicitly?

There's nothing new under the sun. Catcher framing was a subject of discussion in 1982. People talked about win probability decades ago. There's a lot of institutional knowledge stored up in baseball; just because it hasn't been presented in a gory mathematical formula doesn't mean that it isn't there, even if the people acting on it themselves don't know that they're doing it.

We need a more accurate version of the narrative of the sabermetric revolution. Yes, we've come up with some fun numbers. We've formalized some ideas that previously didn't have solid form. We've probably pushed the envelope in ways that people weren't expecting. But in the same way, it's also possible that we've just been catching up to what was already known by baseball types.