According to the usual narrative, sabermetrics was invented by Bill James in the late '70s. Or possibly by Baseball Prospectus in the mid-'90s. Or maybe by Michael Lewis in 2002. Or whenever teams started to hire bloggers. For the longest time, those teams toiled under a veil of ignorance until, thankfully, Al Gore invented the internet and then they were able to see the light.

That's how it happened, right?

This one was inspired by a question from last Wednesday's episode of Effectively Wild. A listener asked what Ben and Sam thought were the most important sabermetric discoveries of … well, the history of sabermetrics. He said that the one about OBP being more important than batting average was too obvious, so that didn't count. Sam suggested that DIPS was the obvious other one. For too long, teams relied on batting average and ERA, when they should have been looking at OBP and FIP. Or did they?

Warning! Gory Mathematical Details Ahead!
It's hard to know whether teams were valuing certain stats by looking at those stats over the years. For example, if OBP was higher in the '90s, it might have been that, well, OBP was higher back then. How can we tell if teams actually valued OBP? Let's ask the simplest question of all: Did the player with a high OBP come back the next year? But then how do we get around the era problem? For example, a .330 OBP, even a few years ago, would have marked a perfectly average player. Nowadays, he might make the All-Star team.

However, if we do this year-by-year, we can sidestep that whole problem. In deciding which players to invite back for the 1962 season, the general managers of the day had only the stats available up to 1961 to make their decision and were probably weighting that year the most heavily. And so I gathered together all players who had at least 250 PA in a year and calculated their batting average and OBP for that year. I then looked to see whether they came back in the following year for another 250 PA. I started in 1954–55 and went all the way up to 2013–14.

I created a series of logistic regressions (came back vs. did not) with both AVG and OBP entered into a stepwise equation. I wanted to see which variable would enter the equation first, while allowing for the possibility that the other might also enter. The results? Starting in 1954, batting average was the better predictor of whether a player came back. From that point until the turn of the millennium, OBP enters the regression in only a handful of years. But in 2004, things change a bit. OBP is the better predictor of return for the first time since 1995. And then from 2007 to 2014, the only years in which OBP doesn't outperform batting average as a predictor of return are 2010 and 2011. (In '11, OBP does still enter the regression as significant alongside batting average.)
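For the methodologically curious, here's a minimal sketch of what that year-by-year stepwise logistic regression might look like. It's written in Python with statsmodels; the `hitters` DataFrame and its column names (`year`, `avg`, `obp`, `returned`) are hypothetical stand-ins for the data described above, not the code actually used for this article.

```python
# Minimal sketch: forward-stepwise logistic regression, run one season at a time.
# Assumes a hypothetical DataFrame `hitters` with columns 'year', 'avg', 'obp',
# and 'returned' (1 if the player got another 250 PA the following season).
import statsmodels.api as sm

ENTRY_P = 0.05  # p-value a variable must beat to enter the model

def stepwise_order(df, candidates=("avg", "obp"), outcome="returned"):
    """Forward selection: return the candidate stats in the order they enter."""
    remaining, entered = list(candidates), []
    while remaining:
        best_var, best_p = None, 1.0
        for var in remaining:
            X = sm.add_constant(df[entered + [var]])
            fit = sm.Logit(df[outcome], X).fit(disp=0)
            p = fit.pvalues[var]
            if p < best_p:
                best_var, best_p = var, p
        if best_var is None or best_p >= ENTRY_P:
            break  # nothing left clears the entry threshold
        entered.append(best_var)
        remaining.remove(best_var)
    return entered

# Which stat enters first in each season-pair?
# for year, group in hitters.groupby("year"):
#     print(year, stepwise_order(group))
```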

So far, that sounds about right. What about pitching?

I ran similar analyses for all pitchers who faced at least 250 hitters in one season and calculated their ERA and FIP for that year. Success was an invitation back for another 250 hitters in the next year. It looks as though teams were unaware of the power of OBP until the 2000s, even though it was well known decades earlier. Were they similarly unaware of FIP, which didn't actually exist before the 2000s?

Actually, yeah. It turns out that for the last 60 years, FIP and ERA have battled back and forth as the better predictor of whether a pitcher would come back. In fact, from 1951 to 2014, ERA beat FIP as a predictor 32 times; FIP beat ERA 31 times. Limited to starters (more than 75 percent of their games were starts), ERA does a little better (34–24; there are years when neither seems to predict!), as it also does with relievers alone (31–21). But there are plenty of years, well before FIP was an actual thing, when it served as the better predictor of whether a pitcher was coming back.
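Under the same hypothetical setup, the ERA-vs-FIP scorecard amounts to asking, season by season, which stat enters the stepwise model first and keeping a tally. Here's a sketch, reusing `stepwise_order` from the snippet above and assuming a hypothetical `pitchers` DataFrame with `year`, `era`, `fip`, and `returned` columns:

```python
from collections import Counter

def tally_first_entrants(df, candidates=("era", "fip")):
    """Count, across season-pairs, which stat (if any) enters the model first."""
    wins = Counter()
    for _, season in df.groupby("year"):
        order = stepwise_order(season, candidates=candidates)
        wins[order[0] if order else "neither"] += 1
    return wins

# print(tally_first_entrants(pitchers))
# Per the article's version of this tally, ERA came out on top 32 times and
# FIP 31 times across the 1951-2014 season-pairs.
```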

Nothing New Under the Sun
Let's rewrite the narrative a little bit. Around the time of Moneyball, teams were pretty much in thrall to batting average, even though OBP is a better stat. I guess there really was something about walks that prevented teams from figuring out their true value. Walks to this day are not counted as “real” at-bats because in the 1910s, someone decided that since the batter didn't “do” anything, he shouldn't get credit. Of course, sometimes it takes all the will a batter has to let a pitch go by. Now it seems that teams really have figured it out. Thanks, Michael Lewis.

But with respect to pitching, teams probably weren't as clueless as we all believe. Over the years, FIP has been a decent predictor of whether a pitcher would come back, even in the dark ages of the '60s, '70s, and '80s. That means that on some level, teams must have known that a lot of what happened on batted balls was heavily influenced by dumb luck. Or, in other words, that DIPS theory was at least partially in practice in MLB before it was fully articulated. So was DIPS a revolution or simply the articulation of something that we already knew implicitly?

There's nothing new under the sun. Catcher framing was a subject of discussion in 1982. People talked about win probability decades ago. There's a lot of institutional knowledge stored up in baseball; just because it hasn't been presented in a gory mathematical formula doesn't mean that it isn't there, even if the people acting on it themselves don't know that they're doing it.

We need a more accurate version of the narrative of the sabermetric revolution. Yes, we've come up with some fun numbers. We've formalized some ideas that previously didn't have solid form. We've probably pushed the envelope in ways that people weren't expecting. But in the same way, it's also possible that we've just been catching up to what was already known by baseball types.

ErikBFlom
5/26
Shouldn't the "The Hidden Game of Baseball" by Thorn and Palmer (Apr 1984) have been in your suspect list?
pizzacutter
5/26
Yeah, probably.
Lagniappe
5/26
Indeed it did. The Book (Tango, Lichtman, and Dolphin; 2009) also reset some thinking.
Shauncore
5/26
More of a methodological question for BP: it seems like there is agreement, at least with yourself, Russell, that DIPS is important, yet BP uses runs allowed for WARP, no?
pizzacutter
5/26
I've never been involved with how WARP is calculated.
gilpdawg
5/26
I was under the impression that PWARP is based on FRA, and soon will be based on DRA.
garylynd
5/27
IMHO - It was Bill James. Nobody really started cribbing numbers until he did in the late '70s. Everything since has been built on his investigations.
Dodger300
5/27
I agree.

Bill James may not have impacted the decisions that MLB teams made at the time. Nonetheless, he certainly influenced the research going forward, which eventually did lead to the changes we see in the game.
therealn0d
5/27
Bill James is undeniably a very important figure in sabermetrics, but this statement completely ignores the contributions of others like FC Lane, George Lindsey, and Earnshaw Cook. Lane looked at 1,000 hits of different types and assigned what percentage of a run each was responsible for, way back in 1916 (in an effort to displace batting average). George Lindsey devised the blueprint for win expectancy and linear weights in the early 1960s, even before Cook wrote his book Percentage Baseball. Bill James benefits from being a very engaging writer and is a true heavyweight in the field, but by no means was he the first to start "cribbing the numbers."
rdcramer3
5/27
As I experienced it, the SABRmetrics revolution was actually an evolution.
tribefan204854
5/27
As someone who looked forward to the annual Bill James book in the 70's, it certainly changed the way I thought about the game.
therealn0d
5/27
In 1916 Ferdinand Cole Lane wrote that batting average was "worse than worthless." To support this view he collected data to determine what percentage of a run each hit type was responsible for. His results are pretty similar to the weights you'll see in something like wOBA.
frampton
5/27
What about "The Great American Novel" by Philip Roth (1973)? He postulated that the ideal lineup was by descending OBA!
jfranco77
5/27
I don't think we can answer this without answering an even more fundamental question - what IS sabermetrics?

I would argue that it did start with Bill James because I see the consumption of advanced stats and thinking BY FANS as the key component of sabermetrics. Yes, some of that fan thinking may have eventually worked its way into front offices where it wasn't there before... maybe.

I tend to think the development of MLB front office thinking was a little more linear than "holy crap, we shouldn't use RBIs!" and that makes it a lot harder to draw a line in the sand if you're thinking of it from the front office perspective.
beeker99
5/27
Maybe most teams were not looking at OBP over AVG in the early 2000s, but IIRC Gene "Stick" Michael talked about the importance of OBP way back in 1992 when he was running the Yankees during George's suspension (hence the acquisition of Mike Stanley)...
andymcg
5/28
Judging by this, I'm guessing you believe Columbus discovered America.