Believe it or not, most of our writers didn't enter the world sporting an address; with a few exceptions, they started out somewhere else. In an effort to up your reading pleasure while tipping our caps to some of the most illuminating work being done elsewhere on the internet, we'll be yielding the stage once a week to the best and brightest baseball writers, researchers and thinkers from outside of the BP umbrella. If you'd like to nominate a guest contributor (including yourself), please drop us a line.

Jeff Sackmann is the co-founder of College Splits. He also writes and blogs about tennis. He is on Twitter at @CollegeSplits and @JeffSackmann.

When signing a reliever who has just had a career year under the tutelage of master pitching coach Dave Duncan, a team must take into consideration Duncan’s influence—and the possibility that the pitcher’s success will evaporate in a different environment. But how many clubs take into account the same variable when evaluating potential draft picks?

In the wild and woolly world of Division I baseball, there’s plenty of variance in everything, from field characteristics to strength of schedule to coaching styles. Some head coaches, such as UCLA’s John Savage and Fullerton’s Dave Serrano, are considered excellent developers of young pitching, but aside from pitch-count-focused studies, coaching quality is rarely a factor in quantitative discussions of college pitching prospects.

Given the differences between the best and worst schools for pitcher development, we may be missing something important. Over the last four years, one top D-I program has seen its returning pitchers improve their walk rates by 12.0 percent, their strikeout rates by 14.8 percent, and their ground ball out percentages by 6.1 percent. By contrast, returning pitchers at another powerhouse school have suffered walk rate inflation of 23.8 percent, a tiny uptick in strikeout rate of 0.7 percent, and a decrease in ground ball out percentage of 6.9 percent.
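To make the direction of these numbers concrete: "improving" a walk rate means the rate goes down, while improving strikeout and ground-ball-out rates means they go up. The sketch below, with invented rates rather than any program's actual data, shows the year-over-year percentage change being measured:

```python
# Hypothetical illustration of the year-over-year metric described above.
# The function and the example rates are invented for clarity; they are not
# taken from the article's dataset.

def pct_change(year1_rate: float, year2_rate: float) -> float:
    """Percentage change in a rate from one season to the next."""
    return 100.0 * (year2_rate - year1_rate) / year1_rate

# A returning pitcher who walked 10.0% of batters one year and 8.8% the
# next cut his walk rate by about 12 percent -- an improvement, even
# though the sign of the change is negative.
print(round(pct_change(0.100, 0.088), 1))  # → -12.0
```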

Let’s put these numbers in context. On average, returning pitchers improve their strikeout rate by roughly four percent and their ground ball out percentage by about three percent, while their walk rate stays the same. The shift to more wood-like aluminum bats before the 2011 season complicates the analysis but, in the end, seems to have had a negligible effect on pitcher development patterns. And for the purposes of comparing programs, remember that every program faced the same change in bats last year!

The first school—the one with a parade of pitchers piling success on success—is Arizona State, where the coaching hierarchy has changed but the presence of current top dog Tim Esmay has not. The program at the opposite end of the spectrum is Louisiana State. Despite winning the 2009 national championship, the Tigers have seen pitchers consistently decline from one season to the next. These aren’t minor differences. Some D-I coaches are producing substantial, consistent improvements in the peripheral stats of their charges, while others are overseeing alarming steps backward.

The impressive performance of Sun Devil pitchers and their coaches isn’t even the very best college baseball has to offer. That honor goes to UC Davis, where long-time pitching coach Matt Vaughn is beginning his first year in the head job. Vaughn has guided his returning pitchers to an 8.5 percent improvement in walk rate, a 21.0 percent increase in strikeout rate, and a 9.0 percent increase in ground ball out percentage. Other high-level programs with notable pitcher development results are Nebraska, Pepperdine, and UC Irvine, where returning pitchers have posted marked improvements in all three of these components. Savage’s UCLA pitchers haven’t improved at the same rate, but the Bruins do rank in the top quartile of D-I schools.

Of course, as with anything concerning college baseball, the number crunching is a bit tricky. To generate these results, I took the last five full years (2007-11) of Division I pitching totals and, for each pair of consecutive seasons, identified hurlers who faced at least 100 batters each year. This low threshold includes virtually every pitcher who played both years, eliminating at least some of the selection bias inherent in counting only those pitchers who were worthy of playing time on two consecutive college rosters.

That gives us about 6,000 pairs of player-seasons, or roughly 20 returning pitchers for each D-I program. For LSU and UC Davis, we have exactly 20 each; most top-tier programs have a sample between 15 and 30. Weighting each pair of player-seasons by the lowest number of batters faced, I averaged the walk rate, strikeout rate, and ground ball out percentage for each school. (At the college level, I am able to track batted ball type only for outs, which is reasonably predictive of batted ball type for all balls in play in the low minors.)
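The steps above can be sketched in a few lines of code. Everything here is hypothetical: the field layout, the school name, and the sample records are invented to illustrate the filtering-and-weighting procedure (minimum 100 batters faced in both seasons, each pair weighted by its lower batters-faced total), not to reproduce the article's actual dataset.

```python
# Minimal sketch of the aggregation described above, using strikeout rate
# as the example component. All records below are invented.

MIN_BF = 100  # minimum batters faced in EACH of the two seasons

# (school, bf_year1, bf_year2, k_rate_year1, k_rate_year2)
pairs = [
    ("State U", 180, 250, 0.20, 0.24),
    ("State U",  90, 140, 0.15, 0.18),  # dropped: under 100 BF in year 1
    ("State U", 300, 220, 0.25, 0.25),
]

def weighted_k_rate_change(rows, school):
    """Weighted mean percent change in K rate for one school's returnees."""
    num = den = 0.0
    for sch, bf1, bf2, k1, k2 in rows:
        if sch != school or min(bf1, bf2) < MIN_BF:
            continue                        # not a qualifying pair
        w = min(bf1, bf2)                   # weight by the lower BF total
        num += w * 100.0 * (k2 - k1) / k1   # percent change in K rate
        den += w
    return num / den

print(round(weighted_k_rate_change(pairs, "State U"), 1))  # → 9.0
```

The same loop, run once per component (walk rate, strikeout rate, ground ball out percentage) and once per school, produces the program-level figures quoted throughout the piece.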

When taking a closer look at the results, what pops out is the lack of emphasis on control. Only 109 programs (out of close to 300) have seen the average returning pitcher decrease his walk rate. Barely more than half of those schools (61) have seen the average returning pitcher improve his walk rate by more than five percent.

Much of the blame here lies with physical development and an increased focus on velocity. The first isn’t something the coaching staff controls; the second may or may not be. At Arkansas, for instance, the average returning pitcher has upped his walk rate by a staggering 30.3 percent. But all is not lost for the Razorbacks: the sharp increase in walks is matched by a 34.5 percent jump in strikeouts.

The issue of physical development (and, perhaps, mental development) raises the question of just how much credit or blame to assign to a college coaching staff. So much is out of a coach’s control, and even more depends on the type of pitchers recruited in the first place. LSU’s track record helping pitchers improve over the past several years is abysmal—including in their 2009 championship season. Certainly the Tigers have recruited some first-class arms, such as Anthony Ranaudo, Louis Coleman, and Austin Ross. Perhaps certain types of players are less amenable to college coaching, and LSU accidentally signed up a lot of those guys.

Another question is more important. To what degree should we consider the quality of college pitching coaches when evaluating draft prospects? It’s difficult to answer, and because of the small samples involved, it’s even more difficult to test. By the time the current crop of college pitching has become successful (or not) at the professional level, many of today’s college coaches will have moved on. And from any given program, very few pitchers go on to any degree of pro success.

Even when considered intuitively, the question is a bit of a head-scratcher. Player A goes to LSU, has a sparkling freshman season, puts up solid numbers as a sophomore and a junior, and continues to impress scouts. Player B goes to Pepperdine, struggles with control as a freshman, steadily improves, and posts better numbers as a draft-eligible junior than the hurler at LSU. Do we give Player A more credit because he has continued to perform despite LSU’s track record of poor pitcher development, or less credit because he has missed chances for improvement at a crucial age? Is Player B more pro-ready because he has proven himself coachable, or has he maxed out his talent at age 21?

Fortunately, for any individual player, we have a lot more data to inform our evaluation. But as we’ve seen, coaching quality is a factor that influences peripherals from one year to the next just as much as park characteristics influence other pitching statistics. No analysis of a draft prospect would ignore the effects of a pitcher’s home field. We should treat coaching quality as a quantifiable—if complex—factor as well.

Was this analysis of every D-I program or just the "top" ones? If the latter, how did you decide "top" or "major"? Good stuff, by the way.
Thanks for the report, Jeff. Glad this data is finally available. Is the full list of programs available somewhere online? Would love to see some of the other programs you analyzed.