Believe it or not, most of our writers didn't enter the world sporting an address; with a few exceptions, they started out somewhere else. In an effort to up your reading pleasure while tipping our caps to some of the most illuminating work being done elsewhere on the internet, we'll be yielding the stage once a week to the best and brightest baseball writers, researchers and thinkers from outside of the BP umbrella. If you'd like to nominate a guest contributor (including yourself), please drop us a line.

Graham Goldbeck is a data analyst at Sportvision, the company behind PITCHf/x, HITf/x, COMMANDf/x, and FIELDf/x. In the past, Graham was a writer for the website Beyond the Boxscore and worked as a baseball operations intern for the Oakland Athletics and Tampa Bay Rays.

“If you’re struggling with your command, try taking a little off your fastball so you can locate it better.”
—Countless pitching coaches, announcers, general baseball proverb

Fastball command is commonly acknowledged to be one of the most important abilities a pitcher can possess. If you can’t throw the ball where you want, it’s difficult to succeed in baseball, especially as you reach the higher levels where batters’ eyes are more finely tuned. Along with velocity, movement, and deception, command is a crucial part of predicting future success.

The problem is that command has always been a very difficult attribute to measure quantitatively. Radar guns have been measuring pitch speed for over 50 years now, and Sportvision’s PITCHf/x system has been recording movement data in all 30 MLB ballparks since 2008. Deception may never be measurable quantitatively, though as The Book Blog’s Mitchel Lichtman has noted, if you can control for all the other components of a pitch, you can get a decent approximation of it. Command has likewise seemed to be an elusive ability to measure, but I feel that has now changed with the introduction of Sportvision’s COMMANDf/x.

Using the existing PITCHf/x camera technology, COMMANDf/x captures the location of the catcher’s mitt at the time the pitch is released. In and of itself, I think this is valuable information that can be used to measure skills like catcher game-calling. But when COMMANDf/x is used in conjunction with PITCHf/x ball location data, we can begin to get a grasp of a pitcher’s ability to hit the catcher’s glove, as measured by the distance between the location of the catcher’s glove and the pitched baseball in the same plane (we will refer to this as command delta). This new data opens up many new areas for analysis, including catcher framing and general pitcher command. It can also help us check the validity of commonly held baseball beliefs like the one expressed in the quote above.
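To make the definition concrete: command delta is just the straight-line distance between the glove target and the pitch location, measured in the same plane. A minimal sketch, assuming PITCHf/x-style coordinates in feet (horizontal x, vertical z); the function name and arguments are mine, not Sportvision’s:

```python
import math

def command_delta(glove_x, glove_z, ball_x, ball_z):
    """Distance, in inches, between the catcher's glove target and the
    pitch location, measured in the same vertical plane at the plate."""
    # Coordinates are assumed to be in feet; multiply by 12 for inches.
    return 12 * math.hypot(ball_x - glove_x, ball_z - glove_z)
```

A pitch that misses the target by half a foot both horizontally and vertically, for example, works out to a command delta of about 8.5 inches.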

To start, I took all fastballs thrown by pitchers in 2010 that MLBAM classified as pitch types FF (four-seam fastball), FT (two-seam fastball), and SI (sinker). I then performed a quick spot check on the COMMANDf/x data to see if it passed the smell test. Below are the top five, the bottom five, and the league average for starters’ fastball command delta in 2010, for both the average and the standard deviation (lower numbers are better):


Average Command Delta (in)
Top five: Jamie Moyer, Roy Halladay, Livan Hernandez, Andy Pettitte, Kyle Kendrick
League Average
Bottom five: Kevin Correia, Jonathan Sanchez, Barry Zito, Clayton Kershaw, Justin Verlander


Standard Deviation Command Delta (in)
Top five: Jamie Moyer, Livan Hernandez, Roy Halladay, Nick Blackburn, Andy Pettitte
League Average
Bottom five: Clayton Kershaw, Barry Zito, Brandon Morrow, Daisuke Matsuzaka, Justin Verlander

That looks like a reasonable list based on anecdotal evidence and walk rates, which are about all we have to go on as approximations for command in lieu of COMMANDf/x. The low-standard-deviation pitchers are guys known to rely on locating their fastballs and regarded as fairly consistent, while the high-standard-deviation pitchers are more your guys with electric stuff… and Barry Zito.

Returning to our study, the next step is to figure out a way to separate a pitcher’s faster pitches from his slower ones, so we can compare command in each situation. Unfortunately, there is no way for us to tell for sure that a pitcher is intentionally taking something off his fastball, save for asking him before each pitch what he’s planning to do. In reality, the same could be said for COMMANDf/x—that the only way we know for sure where the pitcher intends to throw the ball would be to ask him before he threw it. But we’ve figured out that the catcher’s glove is a fairly good representation of pitcher intent, so we should be able to figure out a way to estimate when a pitcher is taking something off his pitches as well.

My first thought was to group the fastballs by pitcher, game, and four-seam/two-seam (I’m treating sinkers and two-seam fastballs as the same pitch for this purpose) and then split them into quartiles by speed, yielding an average speed and an average command delta for the fastest 25% of fastballs, the next 25%, the next 25%, and the slowest 25%. Splitting by pitcher is obvious (pitchers are different and need to be compared to themselves), and the four-seam/two-seam distinction ensures we are comparing similar fastballs to each other (since a two-seam fastball is generally a mile or two slower than the same pitcher’s four-seamer, a disproportionate number of two-seamers would otherwise end up in the slowest quartile, which we want to avoid).

I also feel that grouping by game is necessary to make sure we are observing real speed differences. If we just split up the quartiles on the whole season, we would likely end up with a disproportionate amount of fastest pitches in the summer, and if a pitcher is pitching hurt for a couple starts his bottom 25% quartile will be riddled with injury pitches that won’t be representative of his true command. By comparing only pitches within a game to each other, we should be getting the best apples-to-apples comparison. Though slicing the data so thin does increase the variability in our average command delta numbers, we can alleviate this some by taking the weighted average based on pitches (more on that later).
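The grouping described above can be sketched with pandas. This is a rough illustration under assumed column names (pitcher, game_id, fb_type, speed, delta), not the actual COMMANDf/x schema:

```python
import pandas as pd

def quartile_deltas(pitches: pd.DataFrame) -> pd.DataFrame:
    """Split each pitcher/game/fastball-type group into speed quartiles and
    return pitch count, average speed, and average command delta per quartile."""
    df = pitches.copy()
    # Treat sinkers and two-seamers as the same pitch type.
    df["fb_type"] = df["fb_type"].replace({"SI": "FT"})
    # Rank speeds within each group, then cut the ranks into four equal-sized
    # buckets (4 = fastest 25%, 1 = slowest 25%).
    df["quartile"] = (
        df.groupby(["pitcher", "game_id", "fb_type"])["speed"]
          .transform(lambda s: pd.qcut(s.rank(method="first"), 4,
                                       labels=[1, 2, 3, 4]))
    )
    return (df.groupby(["pitcher", "game_id", "fb_type", "quartile"],
                       observed=True)
              .agg(n=("speed", "size"),
                   avg_speed=("speed", "mean"),
                   avg_delta=("delta", "mean"))
              .reset_index())
```

Ranking before cutting sidesteps ties in raw speed, so every quartile within a game gets roughly the same number of pitches.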

Now that we have all that info, we can take the differences in speed and command delta between adjacent quartiles to see if pitchers really do improve their command on slower pitches. Once we have those differences for each pitcher in each game, we take the pitch-weighted average of each difference across games and pitchers, which yields what we are looking for: how, in the aggregate, fastball command varies with pitch speed. Those numbers are below:
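The pitch-weighted averaging step is a plain weighted mean; a minimal sketch (the function name is mine):

```python
import numpy as np

def weighted_avg(values, weights):
    """Average of per-game quartile differences, weighted by pitch counts so
    that games contributing more pitches count for more."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(np.sum(values * weights) / np.sum(weights))
```

For example, a +1.0-inch difference observed over 30 pitches and a -1.0-inch difference observed over 10 pitches average out to +0.5 inches, not zero.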

[Table: pitch-weighted average differences between adjacent speed quartiles (Top25-Top50, Top50-Bot50, and Bot50-Bot25) in command delta (in) and speed (mph)]
So in the table above, we see no change in command going from the fastest 25% of fastballs to the next-fastest 25%, despite a drop of about 1 mph in velocity. The next step down looks almost identical: no change in command for another decrease of roughly 1 mph. For the slowest pitches, though, pitchers actually lose almost a third of an inch of command for about a 1.5 mph decrease in velocity.

Those results seemed a bit strange to me, and they merited some further consideration. After thinking about it a bit more, I believe the problem lies in selecting the bottom 25% fastballs and assuming that those were the pitches that pitchers took a little bit off their fastball. It’s more likely that those slower fastballs occurred later in the game when a pitcher was tired (likely a negative effect on command), and some may have been fastballs that slipped out of the pitcher’s hand (definite negative effect on command). Maybe there is a better way to approximate pitchers intentionally throwing fastballs slower.

That’s when I came across this article by Lucas Apostoleris and the excellent chart in the middle detailing the change in fastball speed from average by count. Notice that if you separate by strikes alone, a clear trend emerges: pitchers seem to change speeds based on the number of strikes in the count. Maybe grouping by strikes would be better than separating by speed.

With that in mind, I repeated essentially the same process as before, the only differences being substituting number of strikes for pitch speed quartiles and comparing the averages for each strike count to the pitcher’s overall weighted average (for both speed and command), rather than quartiles to one another.
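This second pass can be sketched the same way; again, the column names (pitcher, strikes, speed, delta) are assumptions, not the real schema:

```python
import pandas as pd

def strike_count_splits(pitches: pd.DataFrame) -> pd.DataFrame:
    """Average fastball speed and command delta at each strike count,
    expressed as differences from the pitcher's overall average. Note that
    a mean over all of a pitcher's pitches is already pitch-weighted."""
    overall = pitches.groupby("pitcher")[["speed", "delta"]].mean().reset_index()
    by_count = (pitches.groupby(["pitcher", "strikes"])[["speed", "delta"]]
                .mean().reset_index())
    # Attach each pitcher's overall averages, then difference.
    merged = by_count.merge(overall, on="pitcher", suffixes=("", "_avg"))
    merged["d_speed"] = merged["speed"] - merged["speed_avg"]
    merged["d_delta"] = merged["delta"] - merged["delta_avg"]
    return merged[["pitcher", "strikes", "d_speed", "d_delta"]]
```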

[Table: change from each pitcher’s overall average in fastball speed (mph) and command delta (in) at 0, 1, and 2 strikes]
These look like more reasonable numbers than before. There appears to be little difference in command between zero-strike and one-strike counts, despite the quarter-mph difference in velocity. At two strikes, though, pitchers seem to throw about half a mile per hour harder than average, while losing around an inch of command.

So how should we answer our original question? If a pitcher’s two-strike fastball is considered his “true fastball,” and he takes a little off it in zero- and one-strike counts, then his command does appear to improve. I don’t think that’s quite the spirit of the question, though, which more likely refers to additions to or subtractions from a pitcher’s average fastball. From that perspective, taking something off the fastball doesn’t appear to have much of an effect on command.

This has been a fairly short—yet I hope informative—use of the COMMANDf/x data. I think there is a lot of potential for this dataset to answer many interesting questions in the future.