
This was my third year attending the annual convention of the Society for American Baseball Research—in this case, the 43rd such event. It is one of the social highlights of the year for a community that essentially suffered a diaspora at birth—it’s never been easier for baseball researchers to communicate, but every so often it’s vital to actually bring them together under one roof, and the SABR convention is one of the best ways of doing that.

There are panel discussions, keynotes, presentations, posters, and committee meetings. There are also discussions in hallways and on escalators and in line at cheesesteak vendors and in bars… well, okay, mostly in bars. And those ad hoc interactions are at least as important as the formal events, if not more so. I’ve tried to recap the formal events, at least the ones I found of suitable interest. But it doesn’t really do enough to capture the sense of what the thing is. So let’s talk a bit. I don’t mean so much talk about SABR, although I’ll do that plenty. I mean let’s talk like we’re at the bar, with room to meander and ruminate and think about larger things. Now, obviously, I’m going to be doing most of the talking here to start, but a few of my victims from the hotel bar on Saturday night can tell you that’s pretty typical of being at SABR too.

I think there are a lot of people out there who aren’t exactly sure what to make of SABR. There’s the public at large, which equates SABR with sabermetrics, despite the fact that it’s a very small part of the organization’s mission. On the flip side is the larger sabermetric community and its fellow travelers, who often have a hard time seeing how SABR is or could be relevant to the discussion in the Internet age. And there’s the leadership of SABR itself, which is unsure of how to make SABR more relevant to the modern generation of sabermetricians without driving off the current members in the process. And they have to, because SABR faces an existential crisis if it does not—the organization is aged and literally dying, and if younger people are not brought into the fold, eventually it will simply run out of members.

Bill James named sabermetrics after SABR, in homage to the organization. But most SABR members are not metricians—the organization has a much stronger focus on baseball history than on baseball statistics. And most practitioners of sabermetrics work privately and either self-publish their results or publish through organizations like ours or through other websites. The brief heyday of SABR’s “By The Numbers” newsletter as a hotbed of baseball research has passed. Data comes from people like Sean Lahman and the devoted volunteers at Retrosheet. Lahman and the Retrosheet board were both at SABR, but neither receives material support from SABR, and both could function without it.

SABR is facing a crisis, and a healthy field of inquiry named after it is too good an opportunity to pass up as it tries to preserve itself for the future. But there is room to ask whether or not sabermetrics has any use for SABR, and if so, whether or not the role SABR is trying to take on is one that’s useful for the field.

Let’s start with the role SABR seems to have taken on: public advocacy for sabermetrics. It has set up the annual analytics conference. It has partnered with Rawlings to try to bring modern defensive metrics into the Gold Glove discussion. It’s easy to see why this approach appeals to SABR. It lets SABR put its name on the field’s progress as a whole, even where it hasn’t directly contributed to any of it. It doesn’t require any of the actual researchers to change how they go about things, nor does it require SABR to get involved at a more fundamental level.

The question is, is it needed? And I think one has to conclude that it really isn’t. If sabermetrics has a problem these days, it isn’t reach. There is a Brad Pitt movie about how the underdog stats geeks took over the world. There are TV shows that discuss the sabermetric viewpoint. There are websites devoted to espousing sabermetric player measures, and they’re far from obscure. They get cited during actual baseball broadcasts.

And it’s not clear that SABR is particularly well equipped to be the PR arm of the sabermetricians. It’s been a largely private organization for most of its existence; most people know of it through sabermetrics, rather than the other way around. Sabermetricians have a larger following in the media than SABR does.

The last bastion of the old guard in the media is the newspaper writers and the like in the Baseball Writers Association of America. But let’s be honest: they face many of the same demographic challenges that SABR does. With the continuing downward spiral of the newspaper industry and the rise of blogging, the BBWAA is trying to get younger, and it’s turning to writers who grew up with a sabermetric viewpoint. (BP has BBWAA-credentialed writers of its own.) It’s a long road until those sorts of writers become the majority, but the same is true of SABR, and it’s not at all clear that SABR has figured out how to make that transition smooth for its own organization, much less another.

So if SABR is inserting itself somewhere that isn’t a real area of need for the field of sabermetrics, it can be tempting to conclude that there isn’t a role for it to play. But before we do that, let’s take a look at the problems with the field of sabermetrics and see if there are some that SABR is well suited to correcting.

The first problem with the field of sabermetrics we should probably address, because we’re already wandering past it, is that not enough people are asking the question, “What are the problems facing the field of sabermetrics?” A little introspection is healthy. A little outright perspective is good, too. (And sabermetrics needs to do a better job of accepting criticism from outside the field.) But I don’t see much of a role for SABR there.

So having gotten that out of the way, what other problems are there? A very big problem is brain drain. As sabermetrics becomes more popular, it also loses many of its best and brightest to teams and to other fields of study (one of the most famous sabermetricians is largely famous for his work on predicting election outcomes, not his baseball research). Could SABR offer incentives to help keep researchers in the public domain? The answer is likely no: there’s far less money in public baseball research than there is in professional baseball, and it’s not realistic or fair to expect SABR to find a way to make that less so. (It should be noted that SABR is offering scholarships to young researchers to encourage new people to enter the field; it is in fact attempting to do something here.)

There is another problem, though, that if not exactly related, at least is exacerbated by the constant turnover in the field. It’s that sabermetrics, in many ways, is a field with a shallow connection to history—both its own history and the history of baseball in general. And that’s a problem.

There are really two distinct phases of the sabermetric movement: the “books and letters” phase and the Internet phase. What’s rather incredible—I do not mean this in the modern meaning of the word, which means “wonderful,” but the literal meaning, “difficult to believe or comprehend”—is how little of either is being preserved.

The biggest name from the “books and letters” phase was of course Bill James, who mostly resorted to self-publishing until 1982, when Ballantine became his publisher. All of James’ abstracts are out of print now. The Hidden Game Of Baseball, by John Thorn and the criminally underrated Pete Palmer, is similarly out of print. Many lesser writers, getting by largely on self-publishing via mimeograph, are even less well preserved.

And the situation actually doesn’t improve until fairly late in the Internet era. A lot of the early history of sabermetrics on the Internet comes from USENET groups, which have been preserved by Google (at least, until Google gets tired of them), but it’s like wandering a forest without a map or compass much of the time. A fair number of sabermetricians got their start on a forum later renamed Fanhome, but that site is lost pretty much entirely. Much of the early sabermetric output on the Internet has been lost entirely, and what remains is poorly catalogued, if it’s catalogued at all.

Why does this matter? As an illustration, consider Total Average, created by Tom Boswell in the late 1970s, back when he was more interested in dabbling in sabermetrics than mocking it. Total Average was a summation of bases (total bases, walks, and steals) divided by a player’s outs. At the same time, Barry Codell independently derived the same idea, which he called Base-Out Percentage.

Total Average/Base-Out Percentage isn’t a bad metric. Certainly at the time it was conceived, it was better than a lot of what was out there (but not as good as OPS, which is a big reason why it hasn’t stuck around). But a funny thing has happened: people keep independently “discovering” bases per out. And it seems to be legitimately independent, too, not simply appropriating the ideas of someone else, if for no other reason than there are really quite a few better ideas to steal. But when you lack good records, and good cataloguing of those records, nobody knows the idea has been done before. And more importantly, they don’t know all the research that shows why the idea isn’t worth keeping around.
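As a quick sketch, the bases-per-outs idea described above fits in a one-line function. The outs approximation here (at-bats minus hits) and the sample stat line are illustrative assumptions, not the exact accounting Boswell or Codell used:

```python
# A minimal sketch of Total Average / Base-Out Percentage as described above:
# bases gained (total bases + walks + steals) divided by outs made.
# The outs approximation and the sample line below are illustrative
# assumptions, not the original authors' exact definitions.

def total_average(total_bases: int, walks: int, steals: int,
                  at_bats: int, hits: int) -> float:
    outs = at_bats - hits  # rough: ignores caught stealing, GIDP, sacrifices
    return (total_bases + walks + steals) / outs

# A hypothetical season line: 250 TB, 60 BB, 20 SB in 500 AB with 150 H
print(round(total_average(250, 60, 20, 500, 150), 3))
```

Even this toy version shows why the idea keeps getting reinvented: it takes one line to express, which makes it easy to derive independently and hard to know it has been done before.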

It’s true that the history of sabermetrics contains a lot of dead-ends and ideas that have been supplanted. But preserving those doesn’t matter only if you think we’ve reached the endpoint of the sabermetric movement. If we’re still developing new things and improving old things, that history is vital. It tells us what others have tried, it tells us how they came to their conclusions, it tells us why they did what they did, and it tells us what flaws others saw in it. Without that, every time a researcher wants to build something, he or she has to start almost from scratch. And that means a lot of time and effort spent making the same mistakes someone has already made. Best-case scenario, the mistakes are caught and someone just wasted their time. Worst-case scenario, the mistakes aren’t caught and people waste even more time believing things that have already been debunked.

There’s also a disconnect between sabermetrics and the history of the game. The problem is that baseball was not birthed fully formed; it developed over time. The rules of the game, as well as the unwritten rules (not the silly ones everyone talks about, but the vital ones that seem to dictate how managers behave) evolved in response to real learning about the game. If you look at the finished product, and not all the history behind it, you lose all the education that went into that—and for a field that’s about the accumulation of knowledge, that’s a deep loss.

One of the things that confounded sabermetricians early on was the bunt. Looking at the run expectancy tables made it clear that the sacrifice bunt was almost always a terrible idea, unless you were a pitcher hitting. But upon closer scrutiny, it turns out that the bunt is not as terrible as raw run expectancy would lead you to believe:

[A] sac bunt attempt obviously does not lead to an out and a base runner advance 100% of the time (or even close to 100%); in fact the average result from a sac bunt attempt is not even equivalent to an out and a base runner advance. Also, the average result varies a lot with the speed and bunting skill of the batter and whether and by how much the defense is anticipating the bunt or not (among other things).

Because you can have a single or a reach on error on a sacrifice attempt, and because those aren’t recorded as sacrifices, the bunt is actually a better percentage play than the change in run expectancy would indicate. And because of game theory, even suboptimal bunting may be of value in that it affects how the defense is lined up against you.
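The quoted point can be made concrete with a toy expected-value calculation. Every run-expectancy figure and probability below is an assumed, illustrative number, not measured data; the point is only that a bunt *attempt* averages over several outcomes, not just the textbook out-plus-advance:

```python
# A toy sketch of the argument above: a sac bunt attempt is a mix of
# outcomes, so its expected value differs from the single "out plus
# advance" entry in a run-expectancy table. All RE values and
# probabilities here are assumed for illustration, not measured.

# Approximate run expectancies by (runners, outs) -- assumed values
RE = {
    ("1--", 0): 0.85,  # runner on first, no outs (before the bunt)
    ("-2-", 1): 0.68,  # runner on second, one out (textbook sacrifice)
    ("12-", 0): 1.44,  # first and second, no outs (bunt single / error)
    ("1--", 1): 0.51,  # runner on first, one out (failed attempt)
}

# Assumed outcome mix for a bunt attempt by a competent bunter
outcomes = [
    (0.70, RE[("-2-", 1)]),  # clean sacrifice
    (0.15, RE[("12-", 0)]),  # bunt single or reach on error
    (0.15, RE[("1--", 1)]),  # out recorded, runner held at first
]

ev_attempt = sum(p * re for p, re in outcomes)
print(f"Textbook sacrifice RE: {RE[('-2-', 1)]:.3f}")
print(f"Bunt attempt EV:       {ev_attempt:.3f}")
# The attempt's EV sits above the pure-sacrifice entry because the
# favorable outcomes (hits, errors) are folded into the average.
```

Under these made-up numbers, the attempt's expected value lands between the pure sacrifice and the pre-bunt state, which is the shape of the argument: the raw run-expectancy table alone overstates the cost of bunting.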

Early sabermetricians simply couldn’t understand why managers would bunt, given the evidence they had, so they concluded that the bunt was an egregious wrong. Later analysis is much more forgiving of the bunt (even if it would probably conclude that it’s overdeployed). People of a certain bent are likely to be struck by how this resembles the principle of Chesterton’s Fence. As G.K. Chesterton wrote:

In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

This paradox rests on the most elementary common sense. The gate or fence did not grow there. It was not set up by somnambulists who built it in their sleep. It is highly improbable that it was put there by escaped lunatics who were for some reason loose in the street. Some person had some reason for thinking it would be a good thing for somebody. And until we know what the reason was, we really cannot judge whether the reason was reasonable. It is extremely probable that we have overlooked some whole aspect of the question, if something set up by human beings like ourselves seems to be entirely meaningless and mysterious. There are reformers who get over this difficulty by assuming that all their fathers were fools; but if that be so, we can only say that folly appears to be a hereditary disease. But the truth is that nobody has any business to destroy a social institution until he has really seen it as an historical institution. If he knows how it arose, and what purposes it was supposed to serve, he may really be able to say that they were bad purposes, or that they have since become bad purposes, or that they are purposes which are no longer served. But if he simply stares at the thing as a senseless monstrosity that has somehow sprung up in his path, it is he and not the traditionalist who is suffering from an illusion.

If sabermetric inquiry cannot figure out why major league managers behave, almost to a man, in a certain fashion, that means nothing more or less than a failure of inquiry. And the sabermetrician needs to go back and study the matter further until he understands why. That reason may well end up being wrong. But until you find out what it is, how will you know that?

Many questions currently being discussed among sabermetricians—how to evaluate managers, how bullpens should be organized and used, how often teams should employ defensive shifts—are questions that could greatly benefit from similar amounts of depth. On the subject of shifts in particular, there is very little questioning of why teams line up in the defensive alignment they do. It may be that proponents of radical changes in defensive alignments are correct. But it may be that there’s a lot of hidden wisdom in the way the defense is traditionally aligned, and until you know what that is, you risk losing it altogether.

As an organization, SABR has been only loosely connected with the development of sabermetrics; most of the important work in the field has been done without it. In terms of preserving the game’s history, though, few have done the kind of work SABR has done. (Even in terms of preserving the history of sabermetrics itself, SABR’s archive of its “By The Numbers” newsletter is probably the single greatest record of the work of sabermetricians not named James or Palmer prior to the dawn of the Internet era.)

SABR is an organization with deep roots in history that needs to find ways to be relevant to the here-and-now to survive. Sabermetrics is a field that’s very relevant now but that has underdeveloped roots in history, both its own and the history of what it studies. The two could complement each other beautifully. It remains to be seen whether or not they will.

Another asset SABR could offer is diversity of thought and experience. SABR members come from many fields, including the humanities and social sciences that are virtually absent from sabermetric research. Yes, sabermetric writers often wear blinders when it comes to these resources.

Also, most BP writers, like this one, would benefit from an experienced editor. The grammar and editing are OK, but most of them write as if they are paid by the word. Sometimes the writers take a long and winding path to making their points, and other times they don't even get there. SABR also has its windy writers, and its quality varies, too. However, SABR does include talent that can provide valid criticism of sabermetrics, if only they felt it would be received politely.

Honestly, I think the social sciences are pretty well represented in sabermetrics, particularly econometrics. I could be wrong, though.

As to the second point... I do tend to meander a bit, yes. Some readers like that, some don't. I have been fortunate (or unfortunate, depending on which camp you're in) to have editors that are patient with me there.
I certainly wasn't thinking of econometrics as a social science. That's a little like calling sabermetrics a social science. A person hearing that might say "Well, in a way as it deals with people, but most of its tools are quantitative." I was thinking more about sociology and psychology and their branches. For example, if statistical analysis can't explain why a manager continues to call bunts, maybe a psychologist could. I think most of us can agree that baseball is played by humans who often aren't rational. If by econometrics you meant somebody like Kahneman, who tries to explain why people tend to be irrationally risk averse, then I have to agree.

As for the point about writing, I was sincere, but probably a bit harsh, but sincere. Reading my previous paragraph reminds me that everyone needs an editor!
A fair amount of social science is quantitative, isn't it? I'm pretty sure if you throw a rock into a random group of sociologists these days, you're more likely to hit a quantitative guy than a qualitative, for instance.

Now, if you're talking about the need for a qualitative sabermetrics, that's a different kettle of fish. Russell Carleton had an essay in the most recent Baseball Prospectus annual that may be right up your alley.
Much wisdom here. The same reasoning might be applied far more broadly in society, too.
Great article.

By the way, it's a crime that The Book can't be bought for a Kindle. I'm not sure why I thought of that reading this article, but it helps to document the history of sabermetrics to have the literature available electronically.
Am I correct to assume that you are talking about "The Book: Playing the Percentages in Baseball," by Tom Tango?

I own the Kindle version. My Amazon account states that I purchased it on September 12, 2011.

If it is no longer available electronically, there must be some politics involved. I guess it is up to me to safeguard that version for history.
Amazon says "This title is not currently available for purchase" when you look at the Kindle version, which I believe was put out by the previous publisher, Potomac Books.
It used to be available on the Kindle and eventually will be again. It is available on the iPad. There is a link on my blog.
Interesting commentary, Colin. I wonder about some of your implicit assumptions about the sabermetric movement. In particular, I wonder if it could ever be organized in the way you envision. Still, it's a goal worth pursuing.

For the past two years, I have attended the Analytics Conference instead of the SABR convention, and I have enjoyed it a great deal. Being more of a geek than a history guy, I get more out of it, and I think its potential is greater than you give it credit for here.

Thanks for the thoughts.
Pretty much, to be a sabermetrician, you just have to identify as such. It's a blessing and a curse (more the former than the latter, I'd say). But it does make organization difficult at best, yes. At most, what you can do is say, "I think we ought to move in that direction," and hope somebody moves with you. I still think it's useful to ask what direction we ought to move in, though. The trick to remember is that it's almost all hobbyists and volunteers, so you try to be careful to frame it in such a way that you're not trying to obligate others to spend their free time in this way or that way. I like to think I've succeeded at that more than I haven't since I started doing metasabermetrics, but I'm sure my record isn't spotless there.
I just went through a "Chesterton's Fence" situation, and I wish I had read that passage before I went through it.

I bought my first house about a year ago. Since I bought it, there has been a hose fed through a small crevice and under my deck. I never understood its purpose, and had made small efforts to pull it out, but didn't really care.

A couple of weeks ago I bought a hose reel after my dog had chewed on part of my hose. I had my wife come out and look at it. When she did, she told me to pull out that hose from under the deck. I did so. About twenty minutes later I realized that the only place my hose reel would fit is on the other side of the deck. The previous owner had one and had used that hose for this purpose. It was very difficult to get a hose back through that spot and fed under the deck.

If I had stopped and realized the previous owner probably had a good reason for putting that hose there, I would have saved myself a couple of hours of frustration.
Very thoughtful article. I particularly like the Chesterton's Fence principle. Without having heard of it before, I actually try to apply it to my own research, particularly in areas where perception and reality do not agree. For me, it is important to understand in such cases not only that perception does not agree with reality but to also understand how the incorrect perception arises. As a recent example, I am currently applying the principle to my study of knuckleball trajectories, trying to understand how they actually behave and what leads to the perception that they behave differently.
Great article, Colin. It hits on something I've been thinking about as a member of SABR (under 30, no less!).

Have you looked at The Wayback Machine on the Internet Archive site for
"If sabermetric inquiry cannot figure out why major league managers behave, almost to a man, in a certain fashion, that means nothing more or less than a failure of inquiry. And the sabermetrician needs to go back and study the matter further until he understands why."

As a group, I've found sabermaticians to be quite dismissive (bordering on arrogant) of some of the conventional 'wisdom' employed everyday on a baseball field.
Given that the first piece of "conventional wisdom" that usually gets mentioned is that analytically minded researchers are all losers who have nothing to contribute because they have never played the game themselves and just look at box scores and play on their calculators, there's a sense of vindication whenever they manage to demonstrate that conventional wisdom is sometimes a bunch of hooey. SDCNs are human, too.
It's a spectrum. You have some people who are incapable of accepting that conventional wisdom is wrong, and some who are incapable of seeing the value of conventional wisdom at all. Understanding is a three-edged sword, though -- my side, your side, and the truth. The truth is really somewhere in the middle.
And one of the best features of these new thinkers is that they are challenging conventional wisdom, and the best of them are also challenging their own conclusions. The next steps might look at the "whys" of certain trends and conclusions, and that is why I suggest the social scientists might play a role. For example, in the debate about the need for a single closer role, the sociologist might be able to say whether some players just don't have the "gumption" (technical term) to get that final out, and what can be done about it.
Inspired by a few things that have run on BP recently: what about (or how much is MLB) using sabermetricians to investigate/quantify possible rule changes? Like "changing this about the balk rule would decrease run scoring by 0.1 R/G" or similar? That strikes me as a potentially valuable role that I haven't heard much about.
Re: Bunting isn't always a bad idea
Ron Johnson wrote the equivalent of that MGL article in the late 90s, and continued to point it out all over the newsgroups through the early 00s. In the tighter saber community, we knew that (for the informed, the earth was never considered flat). Of course, perhaps that highlights the larger point that we don't have a Wikipedia.

Which we (SABR members) have discussed with people good at those sort of things.