Let’s get defensive. Probably the simplest defensive metric available that’s of any practical utility is Defensive Efficiency Rating, a Bill James creation. Simply put, DER is:

DER = (balls in play - hits on balls in play) / balls in play

If you want to calculate it from official offensive statistics, you can figure it as:

DER = 1 - (H - HR) / (AB - SO - HR + SF)

In other words, it is one minus Batting Average on Balls In Play (BABIP). It’s actually quite a clever concept, and it does a fairly good job of measuring defense. The big problem is simply that it tells us a lot about a team, but very little about the players on the team.
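As a quick sketch of that calculation (in Python rather than the R used for the article's fits, and with invented team totals purely for illustration):

```python
def team_der(h, hr, ab, so, sf):
    """Defensive Efficiency Rating as 1 - BABIP, using the common
    BABIP denominator of AB - SO - HR + SF."""
    balls_in_play = ab - so - hr + sf
    hits_in_play = h - hr
    return 1 - hits_in_play / balls_in_play

# A team allowing 1400 hits (150 of them homers) over 5500 AB,
# with 1100 strikeouts and 45 sac flies:
print(round(team_der(1400, 150, 5500, 1100, 45), 3))  # -> 0.709
```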

Now, let’s think about this for a minute. By all rights, individual player defense should add up to team defense. Let us, for the moment, tackle a portion of the issue for the sake of clarity: For right now, we’re only looking at ground-ball defense. And let’s (for the time being) ignore balls fielded by the pitcher and catcher. Throw them out on both sides of the equation. And pretend for just a moment that all four of our infielders are newly minted Cal Ripken Jr. clones, and they play every game. Now, let’s take our DER and try to break it up into components:

DER = (PM1B + PM2B + PMSS + PM3B) / (CH1B + CH2B + CHSS + CH3B)

“PM” stands for “plays made,” and “CH” stands for chances. In short, what we’re trying to do is split up balls in play and “credit” them to the individual fielders to produce each player’s individual DER. Can we do this?

The simple part (assuming one has play-by-play data) is figuring out each fielder’s plays made. In the infield, a play is made when the player fielding the ball is credited with either a putout or an assist. (In some rare circumstances, mostly when a missed-catch error is committed, this occurs even when there is no out recorded on the play.) For the outfield, a play is made when the player fielding the ball is credited with a putout. An outfield assist is generally what we refer to as a “baserunner kill,” an important part of the puzzle, but something that’s better measured by an arm rating than by including it in a range-based fielding metric. We’ll swing back around to looking at outfielder arms eventually, though.
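The play-made rule above can be written down directly. A minimal sketch (the function name and inputs are hypothetical, not from the article's actual code):

```python
def is_play_made(position, got_putout, got_assist):
    """Play-made rule for the fielder who fielded the ball: an
    infielder gets credit for a putout or an assist; an outfielder
    only for a putout (outfield assists are arm plays, better handled
    by a separate arm rating)."""
    if position in {"1B", "2B", "3B", "SS"}:
        return got_putout or got_assist
    if position in {"LF", "CF", "RF"}:
        return got_putout
    return False  # pitcher and catcher are excluded for now, per the text

# A shortstop with only an assist still made a play; a left fielder
# with only an assist (a baserunner kill) did not.
print(is_play_made("SS", False, True))  # -> True
print(is_play_made("LF", False, True))  # -> False
```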

So we can apportion out fielding plays made between fielders pretty readily. What can we do about splitting up balls in play into fielding chances? For that, we need to look at where the ball is hit on the field.

The Danger Zone

Typically, when looking at balls in play, hit location is indicated by dividing the field into zones, like so:

This particular zone diagram is adapted from the Project Scoresheet scorekeeping system; different data providers adopt different zone systems, and the specifics of each zone aren’t really important right now. The location of where a ball is hit on the field is recorded based upon whatever zone it’s hit in. For ground balls, it’s typically recorded as the zone where it passed through the infield, regardless of where the ball eventually ends up.

So when we want to determine something about a ball in play, using its zone location and batted-ball type, we can compare it to its peer group. But let’s consider a specific example:

Suppose we’re interested in the ball indicated by the blue dot in that diagram. In a zone-based system, that particular fielding play would end up being compared to the play indicated by the red dot on the far left, but not the play represented by the red dot right next to it on the field.

Of course, we can always divide the field into smaller and smaller zones to address this issue. But then you end up slicing your sample thinner and thinner, making yourself more susceptible to random variation. And you’re always going to end up with an arbitrary distinction between which batted balls are peers and which aren’t.

So, rather than dividing the field into zones, what can we do? Instead, let’s compare every batted ball to all other batted balls. Let’s simply weigh the closest ones more heavily.

Say we describe every batted ball, not by a zone, but by an angle, where home plate is the origin, a straight line out to second base is zero degrees, first base is 45 degrees, and third base is -45 degrees. If we have a ball hit at 10 degrees (or just a bit to the right of second base), we can compare it to a ball hit at zero degrees and 20 degrees equally. A ball at -5 degrees can also be compared, but we put less emphasis on it in determining what it should be.
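If the raw data gives an (x, y) hit location rather than an angle, the conversion is a one-liner. A sketch assuming coordinates with home plate at the origin, +y pointing from home through second base, and +x toward the first-base side (the function name is my own):

```python
import math

def spray_angle(x, y):
    """Spray angle in degrees: 0 is straight out to second base,
    +45 is the first-base line, -45 is the third-base line."""
    return math.degrees(math.atan2(x, y))

print(round(spray_angle(0.0, 100.0)))   # up the middle -> 0
print(round(spray_angle(70.7, 70.7)))   # first-base line -> 45
print(round(spray_angle(-70.7, 70.7)))  # third-base line -> -45
```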

To achieve this, we can use a tool known as local regression, also called a Loess or Lowess regression.
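The article's fits were done in R with the locfit package; as a rough illustration of the same idea in Python, here is a lowess smoother from statsmodels run over simulated ground balls (the data, the peak location, and the 0.2 smoothing span are all invented for the example):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)

# Simulated ground balls: spray angle in degrees, plus whether the
# shortstop converted each one into an out. The true conversion
# probability (a toy choice) peaks around -20 degrees.
angles = rng.uniform(-45, 45, 2000)
p_true = 0.8 * np.exp(-((angles + 20.0) / 12.0) ** 2)
made_play = (rng.random(2000) < p_true).astype(float)

# Local regression: a smoothed out-conversion rate at every angle,
# weighting nearby balls more heavily than distant ones.
fit = lowess(made_play, angles, frac=0.2)
smoothed_angle, smoothed_rate = fit[:, 0], fit[:, 1]
```

The smoothed curve should recover a peak near -20 degrees without ever drawing a zone boundary.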

Finding Chances

Since we’re dispensing with the idea of zones, we can also dispense with the notion that some fielding chances are “in zone” and fieldable and some are “out of zone” and unfieldable. All batted balls in play are fieldable; it’s simply a matter of responsibility.

So, to figure out who is responsible for each portion of the field, we are going to throw out hits and simply look at plays made for the time being. Essentially, we are trying to give out one chance per batted ball. We can certainly use fractional chances, though: some balls might be 50 percent the responsibility of the shortstop and 50 percent the responsibility of the second baseman, for instance. Using some local regressions, we can apportion out chances like so:

Blue represents first base, purple second base, orange shortstop, and red third base.
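Concretely, one way to build those responsibility shares, sketched under the assumption that we already have each position's locally smoothed plays-made rate by angle (the numbers below are invented):

```python
import numpy as np

def chance_shares(smoothed_rates):
    """Apportion one chance per ball: at each angle, a position's
    share is its locally smoothed plays-made rate divided by the sum
    across positions, so the shares always total one."""
    total = sum(smoothed_rates.values())
    return {pos: rate / total for pos, rate in smoothed_rates.items()}

# Toy smoothed rates at three angles for two middle infielders:
rates = {"SS": np.array([0.60, 0.30, 0.05]),
         "2B": np.array([0.10, 0.30, 0.55])}
shares = chance_shares(rates)
```

At the middle angle, where both rates are equal, each fielder gets exactly half a chance.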

That’s still a little messy, especially along the foul lines: not everything adds up to one exactly as we would like. That’s not entirely unexpected. The local fit comes with a standard error at each fitting point, and since we know what the total should be, we can “push” everything in the right direction and get everything to add up correctly.

So we’ve got what we came here for, right? I mean, we can figure out a DER for each fielder that aggregates to team DER, at least for ground balls.

What we’re missing here is that not all ground-ball chances are equal. So, before we can compare two players, we need to know not just the number of chances they had, but also the difficulty of those chances. What we want to look at is the expected outs made on each ball in play as well:

This time, not everything does-or should-come close to adding up to one. Particularly, you can see that a ball hit straight up the middle is rarely, if ever, converted into an out.
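In code, the expected-outs side is just the unnormalized sum of the positions' smoothed out-conversion rates. A sketch with invented numbers:

```python
import numpy as np

# Locally smoothed out-conversion rates for each infield position at
# three sample angles: down the third-base line, straight up the
# middle, and in the shortstop hole (all numbers invented).
out_rates = {"3B": np.array([0.55, 0.01, 0.10]),
             "SS": np.array([0.05, 0.15, 0.45]),
             "2B": np.array([0.00, 0.14, 0.02]),
             "1B": np.array([0.00, 0.01, 0.00])}

# Expected outs on a ball at each angle: the plain sum, with no
# normalization. The ball up the middle stays well under 1.0.
exp_outs = sum(out_rates.values())
print(exp_outs)
```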

Doing the Split

All of the above works for figuring out responsibility on hits. But a caution applies when applying it to outs (or errors, for that matter). Once a ball is fielded, it is no longer a chance for any other fielder, and it shouldn’t be counted against them. So, for any ball a player fields, he is credited with one chance, regardless of location.

Since we are crediting the fielder with the entire chance, we also need to credit that fielder for the entire expected out as well, not just the expected out for his position:

This is key for two reasons: it makes sure everything properly reconciles at the team level, and it avoids over-crediting a “ball hog” for stealing plays from his teammates.
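Putting the two crediting rules together for a single ball in play (a sketch; `chance_share` and `exp_out_total` stand in for lookups built from the local fits, and the 70/30 numbers are invented):

```python
def credit_ball(angle, fielder, chance_share, exp_out_total):
    """Account for one ball in play. chance_share(angle) returns each
    position's fractional responsibility (summing to one), and
    exp_out_total(angle) the total expected outs. The fielder who
    makes the play gets the whole chance and the whole expected out;
    on a hit, both are split by responsibility."""
    total = exp_out_total(angle)
    if fielder is not None:
        return {fielder: {"chances": 1.0, "plays": 1.0,
                          "exp_outs": total}}
    return {pos: {"chances": s, "plays": 0.0, "exp_outs": s * total}
            for pos, s in chance_share(angle).items()}

# Toy lookups for a ball in the shortstop hole:
share = lambda a: {"SS": 0.7, "3B": 0.3}
xo = lambda a: 0.6
print(credit_ball(-25, "SS", share, xo))   # SS fielded it: full credit
print(credit_ball(-25, None, share, xo))   # hit: split 70/30
```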

Putting it Together

So, for every ball in play, we figure both a player’s chances and their expected outs. A player’s individual DER is:

PM / CH

And their expected DER is:

Expected Outs / CH

From here, we can compute a “normalized” individual DER for each player that controls for the difficulty of chances each player received. We can also subtract expected outs from plays made to figure a fielder’s plays above (or below) average.
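Those summary figures are simple to compute once the per-ball accounting is done. A sketch, where taking the ratio of actual to expected DER is one plausible normalization scheme (the article doesn't specify the exact form) and the stat line is invented:

```python
def fielder_line(pm, ch, exp_outs):
    """Summary line for one fielder: individual DER, expected DER,
    a normalized DER (here the simple ratio of the two), and plays
    above average."""
    der = pm / ch
    xder = exp_outs / ch
    return {"DER": der,
            "xDER": xder,
            "nDER": der / xder,
            "plays_above_avg": pm - exp_outs}

# A fielder making 430 plays on 500 chances worth 440 expected outs:
line = fielder_line(pm=430, ch=500, exp_outs=440)
print(line)  # DER .86 against an expected .88: 10 plays below average
```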

This gives us a framework for evaluating the contributions of individual fielders. We’re not quite ready to apply it to individual fielders, though. First, we need to account for a few other factors, such as baserunners, the number of outs, number of strikes, and park factors. We’ll attend to those next week.

Notes and Asides

All regressions and graphs for the article were produced using GNU R with the locfit package, with a smoothing value of 0.2.

Several existing fielding metrics served as an inspiration for this system, primarily Mitchel Lichtman’s Ultimate Zone Rating and Shane Jensen’s SAFE.

What kind of batted ball data do you have to generate these angles? I don't know a ton about the UZR underpinnings either. Does it all come from the project scoresheet data, and you're converting the zones to angles?

Just out of curiosity, where does angle of batted ball data come from?

One thing I like about SAFE is that I think it was important, for infielders at least, that ground balls down the line went for more bases on average. I was never as sure about using SLGBIP vs BABIP for outfielders because I was told that differing park effects made the sample sizes too small to work with and too difficult to sort out.

How is the outfield DER going to be calculated? Is there any chance you are going to add expected run value of various hit locations/angles to the metric?

As for expected run values - yes, certainly, that's something that we're going to progress toward.

I guess what I am trying to say is that I like the idea of having a fielding metric that is based on smooth functions and not discrete bins. That seems to mirror reality better even if, as mgl says, for large enough sample sizes, there won't be a significant difference in the results.

The reason I like your attempt at DER better than Jensen's SAFE (Here is where I'm not completely sure about how SAFE is calculated. I could have misinterpreted Jensen's explanation and therefore this paragraph would be wrong. Let me know if that's the case) is that DER gives fielders credit for exactly the number of chances they had. So that if by random chance, in a particular season, a fielder has many fewer chances than one would expect, he's not penalized/rewarded for more than what he really did. On the other hand, in SAFE, each player is measured using a density estimate rather than actual chances. If I'm correct here, SAFE is telling us anticipated value if the player received an average number and distribution of chances. That feels more like a projection than actual value of past performance.

Just a thought.

We're also going to do a LOT of work on park factors here, and do so in a way that takes into account different types of scoring bias in the batted ball data.

But before we can do any of that, we have to establish the sort of baselines that I mentioned above. So this is just sort of the introduction, to get everyone familiar with the tools that we're going to be using. There is still a lot of work to be done here.

Say a ball is hit between the first baseman and second baseman in the overlapping angle of the two, but neither makes the play. They both had a chance to make the play, though, so do you credit them both with a chance, or with a partial chance? How is that handled?

And I don't know that we have good information on how hard a ball is hit. So maybe we control for the identity of the batter instead. It's something to look at.

The only hit f/x data released to the public was for April 2009. We are hoping for more.

Peter Jensen has studied Gameday's ball locations, compared them to BIS and Stats, and although Gameday is slightly less accurate than those, it's close, and all have an error of around 6 feet in x or y in the outfield.

A year ago I published park factor studies that showed recorded line drive rates varied +/- 20% from park to park in the majors. The 'hard' or 'soft' designation had an even larger variance, and I have chosen not to use those. So yes, it's difficult to tell how hard the ball was hit.

For sure, if they are just using Gameday, they don't have great speed data.

To me, defensive metrics are sorta like the origins of our galaxy. With what we presently know, there is no answer to the question that will satisfy a majority of the people.

Every at bat and every play in baseball is unique. Some things just may not be quantifiable.

One part is fielder positioning. This includes the shift, but also includes things like holding runners, individual players tendencies, etc.

One part is the speed of the ball. A sharply hit ball might be more (or less!) likely to be a hit than a slow ball. The type of field/turf might affect that as well. Not all balls hit at -10 are equal.

Agreed that positioning is quite important. Not just playing closer to the line or further, but up to cut off the run or back at double play depth. So it's not just up to the fielder as positioning is adjusted by game situation.

I imagine in the future as data gathering gets more mature, we'd also be able to correlate the fielder's position at the time the ball is hit. Not sure if we have that now. So for example if the SS is playing closer to 2B and the 3B is closer to the line, both the GB chances and the GB expected outs lines would shift for each fielder.

However IMO this would be in the future and I think baselining and generating some raw data for evaluation using this concept is fine for now. Can't wait to see it.

And we don't have the positioning data to figure that out, no. But we do have plenty of baseball sense that tells us in what situations this typically occurs - really, there are a few things (what bases have runners on them, how many outs there are, number of strikes, etc.) that give us a pretty good gauge of where the fielders should be positioned.

And we can take the data we have and test some of these and measure how much of an effect there is.

As a general rule of thumb - corner infielders tend to shift up and down the line, middle infielders tend to shift laterally - this is illustrated here and here. (That said, certainly there are times when the corner men shift over to their left or right, or the middle infielders play closer in or further back.) So you need a slightly different process for adjusting corner men than middle infielders. We'll look at all that.

However, if you want to calculate DER from official batting stats, that's the closest you're going to come. If you want to do it from official pitching stats, it looks something like:

(IP*3-SO)/(IP*3-SO+H-HR)

If you have extra data like sacrifices and reaching on errors and so on you can do a better job. But especially with historical data, sometimes you really don't have that. Not that this is especially relevant here - we have full play by play data for this.
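For anyone who wants to sanity-check the pitching-stats version above, a quick sketch (the team totals are invented, and IP is assumed to be whole innings; thirds of innings would need converting first):

```python
def der_from_pitching(ip, so, h, hr):
    """DER from official pitching stats:
    (IP*3 - SO) / (IP*3 - SO + H - HR)."""
    outs_on_bip = ip * 3 - so
    return outs_on_bip / (outs_on_bip + h - hr)

# 1450 IP, 1100 SO, 1400 H, 150 HR:
print(round(der_from_pitching(1450, 1100, 1400, 150), 3))  # -> 0.722
```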

For presentation purposes, though, please choose a different word than "Chances" to describe opportunities to field a ball. "Chances" already has an official definition, and it ain't that. The opportunities for confusion are legion. If "opportunities" is too long, how about just "opps"?