Every once in a while, a quote about sabermetrics pops up that makes me pause for a moment, and last week we got one from none other than Orioles center fielder Adam Jones. Responding to a question about how to make baseball “cool” again, Jones went off on a rather interesting tangent:

"To me it just means getting rid of all these stats. Everything they’re throwing at us nowadays: You hit a home run, this was the exit velocity—who gives a crap? The ball was a home run. You can hit a ball 110 mph off the bat and you’re out. I can’t discredit the actual data, but, to most of us in this game, it’s complete eyewash. But somebody from Harvard or Yale or Tufts or one of those schools is going to get a job in baseball that is not even their field, but they love the game. But when you step between these lines, education means absolutely nothing. That’s the part you can’t measure. But they’re trying."

I’ll sidestep the implied “make baseball cooler by getting rid of the nerds!” angle. I happen to know a couple of people in front offices who went to those very schools. But I think there are a couple of critiques in that paragraph that are worth addressing.

First off, Jones is correct. Or at least he’s correct within his frame of reference. It makes no difference to him after the fact whether the ball left his bat at 90 mph or 110 mph. It matters whether it flew over the wall. What, after all, is he going to do with knowing the exit velocity? This is tautological, but the numbers that the public sees were produced to answer the questions that the public might have.

People like putting numbers on things that they previously didn’t have numbers for and they like making leaderboards from those numbers. While the numbers are nice, the message of the exit velocity leaderboard is one that people mostly knew already: hitting the ball hard is a good thing. It’s cool that we can now say exactly how hard the ball was hit, but where is the actionable intelligence?

That’s not to say a stat like exit velocity is useless to a player. The fact that we know (and can prove … not that we doubted it) that, all else equal, more exit velocity is better gives us a benchmark from which to work. In fact, the exit velocity readings that teams are more likely to look at (and feed to their players) don’t happen during a game. They happen during batting practice. The Statcast system works as well at 5:00 pm as it does at 7:00 pm.

Now, suppose Jones is in the batting cage, tinkering with his swing. He tries a few swings the way that he’s been swinging, and then a few with a new wrinkle. In batting practice the results don’t count, but he can get a decent sample of swings under each condition. After his time in the cage, he can go back and get a readout on the exit velocity for each of those swings. Maybe the new swing is giving him a little extra oomph, not the sort of thing that one can pick up with the naked eye, but something that a radar system can. It might not be much, but every little bit helps. Armed with that information, he can decide whether to use that wrinkle in the game that night.
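As a sketch of what that cage workflow might look like, here’s a minimal Python comparison of the two swing conditions. All of the readings and the `mean_gain` helper are hypothetical, invented for illustration; a real system would log many more swings and account for measurement noise.

```python
import statistics

# Hypothetical batting-practice exit velocities in mph (invented numbers,
# not real Statcast data): eight swings with the current swing, eight with
# the new wrinkle.
current_swing = [92.1, 95.4, 90.8, 93.6, 94.2, 91.9, 93.0, 92.5]
new_wrinkle = [94.0, 96.3, 93.1, 95.7, 94.8, 93.9, 95.2, 94.4]

def mean_gain(baseline, tweak):
    """Average exit-velocity gain (mph) of the tweaked swing over baseline."""
    return statistics.mean(tweak) - statistics.mean(baseline)

gain = mean_gain(current_swing, new_wrinkle)
print(f"Average gain from the new swing: {gain:+.1f} mph")
```

A mile or two per hour is exactly the kind of signal the naked eye misses; with a bigger sample, a proper significance test would tell you whether the gain is real or just cage noise.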

Not all players want that information (some do!), and frankly, trying to force it on them would mostly annoy them. Teams tend to take a “you can have whatever you want” approach to the subject, with the emphasis on what you want. Jones might be the sort of person who just doesn’t process information that way. As sabermetricians, we tend to forget that not everyone understands the world through numbers. Some people process things visually, some kinesthetically, some in other ways. This is neither good nor bad. People are different. Maybe his brain just processes information in a different way. I know that I’m personally horrible at visual-spatial processing, for instance.

From the front office perspective (and for the fans who like to view the world through numbers), the numbers that Jones is talking about are very useful. A player’s average exit velocity is an important (but not complete) datum. It provides an easy-to-collect method to both get information on a range of players and compare them to one another. Front office workers and field players have different needs.

There’s another critique in Jones’ statement that’s valid, and an honest warning. “You can hit a ball 110 mph off the bat and you’re out.” It’s the sort of statement that seems rather obvious and that we’d all nod our heads to. After all, hitting the ball hard is useless if you don’t hit it to begin with, hit it with a bit of loft, hit it fair, and hit it where they ain’t. And before we all develop a case of not-me-itis, let’s just say that sometimes other people might forget to see the complexity in all of it.

It’s easy to fall prey to the tyranny of a leaderboard. Statcast is the new toy that everyone loves, and its exit velocity leaderboard is fun to scan, but exit velocity is only one input in the complex system that makes a player good (or not). It’s possible to have mediocre exit velocity and still be a very good player. The trouble with a leaderboard is that it reduces the world to one dimension, which is appealing precisely because it makes the world simpler than it actually is. I’m sure that most people reading this practice responsible leaderboard consumption. Right?

Jones has to be fully steeped in the complexity of hitting. It is, after all, how he pays his bills. He knows that there’s more to life than just whaling on the ball. For some hitters, that’s their game. Others have a different approach. But when people fall into the trap of leaderboard worship—or worse, “one weird trick” territory; remember when we all thought that building a team entirely out of guys atop the on-base percentage leaderboard was the only way to go?—it has a tendency to flatten the world out. At that point, someone like Jones finds himself judged by one number, a number that he knows isn’t the whole story.

But I think the biggest critique that Jones levels is hidden deep in the weeds, and it’s one that I don’t think sabermetrics, as a field, has ever really dealt with. Again, quoting Jones: “You hit a home run, this was the exit velocity—who gives a crap? The ball was a home run.”

The field of sabermetrics is built around the idea of process. Exit velocity is such a cool statistic, specifically because it’s divorced from the outcome that it produced. After all, we know just as well as Jones that sometimes you hit a screaming liner right at someone, and sometimes you dink a little dribbler that just happens to find a hole. The thing is that we’re trained to look at the world through the lens of expected value. We know that the screaming line drive is more likely to become a hit than the little dribbler, so we value it more. That’s the logical way of doing things, but what about the emotional experience of it?
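The expected-value lens described above reduces to simple arithmetic. The hit probabilities below are hypothetical round numbers, not actual league rates, but they show why we value the liner more even though any single ball can go either way.

```python
# Hypothetical hit probabilities by contact quality (illustrative numbers,
# not real league data).
hit_prob = {
    "screaming liner": 0.65,
    "little dribbler": 0.20,
}

def expected_hits(contact_type, balls_in_play):
    """Expected number of hits from `balls_in_play` balls of this quality."""
    return hit_prob[contact_type] * balls_in_play

# Over 100 balls in play, the liner is worth more than three times as much
# in expectation, even though any single liner can be caught and any single
# dribbler can find a hole.
print(expected_hits("screaming liner", 100))
print(expected_hits("little dribbler", 100))
```

The dispassionate analyst bets on the liner every time; the hitter lives through each individual outcome, which is exactly the tension the rest of the piece is about.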

When I worked as a therapist, we were required to engage in what were known as EBTs or empirically-based treatments. There had to be evidence behind the kind of treatment that we did. This was fine, because it was nice to be able to say to a family that the treatment I was going to do was effective in 80 percent of cases, and perhaps that really was the case. The problem is what happens when you implement the treatment, and you end up with one of the 20 percent of people for whom it doesn’t work. At that point, your expected value means nothing.

And yes, please spare me the lectures about how expected value should be the guiding principle for decision-making. Sure, it’s true from a dispassionate, logical point of view. But what do you say to that one person in five? At some point, you have to produce a result. There’s someone going back to the dugout feeling awful because the line on the scorecard says "L6" rather than "1B." Even if there’s a rational understanding that sometimes you do everything right and it doesn’t work, it still stings. The usual sabermetric response is “get over it,” and yes, if you’re looking at the world entirely through a rational lens, that’s the correct answer.

What if you’re talking to someone who isn’t in that frame of reference? What if you’re talking to someone who you’d like to listen to your data-driven proposals later? It’s nice to have all these numbers, but if we want them to be more than intellectual curiosities, we need to realize that some of the people we’re trying to reach are coming at the world from a different point of view and are going to have different needs than we have. That’s not an analytical problem to solve. That’s a problem that calls for a little bedside manner.

To Jones’ quote, I don’t think that “all these stats” are going to disappear anytime soon, but I could see how he might not be interested. I could see how the points of entry into the system might not be meeting the needs that he was bringing to the table. So that’s my challenge today: without compromising on methodological rigor, how do we frame things so that they meet the needs of people beyond the readership of Baseball Prospectus?

Really great article. It’s the transition from great metrics that are useful data to presenting that data in the context and manner most useful to the person consuming it. This is the challenge of dashboards: they need to reduce the world to the one, two, or however many dimensions each user wants to see, surfacing the most useful data for that person, while ensuring that the dimensions across all of the different views hang together under a consistent methodology.
It seems to me that, while presenting data in the context that is most useful to its consumer is important, there is only so much tinkering with 'presentation' that can be done in order to sell the player on the usefulness of the data. Baseball at its highest level is unbelievably competitive, and it seems rational to believe that every player will always seek whatever slim margin of improvement he can gain from any source. But we know that isn't true. There have been many players through the years who have refused even simple data, like scouting reports, simply because they believe their unbiased physical reactions will produce a better result than cluttering their minds with expectations. Over time, the trend seems to be toward incorporating more data into players' preparation, but there are obviously still holdouts (like Jones). I'm not sure that anything you do, presentation-wise, will change the minds of the holdouts. It seems to me that holdouts will change their thinking only when they realize they are ceding a competitive edge to their opponents by refusing to use all of the tools available to them.
As Jones has entered his thirties, he has lost a step. Gravity always wins, but part of how older players extend their career arc during physical decline is detailed knowledge of the game. If I were Baltimore's management, I would read quotes like that as a lack of curiosity. I'd place a higher value on players who told me they would take inputs from all reputable sources as best they could. Let's invert the situation-- imagine if Jones revealed that he was going to prepare for games solely by poring over data. How would fans and colleagues react to him deciding that the physical action of batting and fielding practice was too old-fashioned?
"The problem is what happens when you implement the treatment, and you end up with one of the 20 percent of people for whom it doesn’t work. At that point, your expected value means nothing." It only means nothing if the patient only has one shot at treatment, right? If she has multiple opportunities to receive treatment, she's lost some time and money and undergone additional suffering, but she has gained information: a treatment her therapist thought was very likely to work did not work. Her therapist then should be in a better position on his second attempt to treat her. (Otherwise, shouldn't therapists just tell patients that the likelihood of success of every proposed plan of treatment is 50/50?) Similarly, Statcast-driven expected value only means "nothing" to Jones if it involves the last plate appearance of his career. If we assume he has many more to go, then the expected-value data remains valuable to him even if the outcome of that particular PA was negative and (to continue the narrative analogy) unlucky, right?
One of the best articles I have read on this site. Thank you.