
The things that we, the public, know about baseball in 2017 are truly staggering. With Statcast we can measure launch angle and exit velocity. With PITCHf/x and Brooks Baseball we can figure out which slider has the most break, who repeats their release point the best, and nearly every tiny data point about the flight of a pitch. With Baseball Prospectus, Baseball-Reference, FanGraphs, Retrosheet, and more we have the most comprehensive dataset of player performance and statistical analysis in the history of athletic endeavor.

But what do we really know about how a player like Matt Carpenter goes from 13th-round pick to superstar, why a talent like Brandon Wood never succeeds, or how a leg kick and a change of mind transform some players into superstars while others see no shift at all? I’m not certain there’s any part of baseball that’s less understood in the public sphere than player development. Each franchise employs a dedicated, highly-trained staff to assist players in reaching their potential and improving their skills. From coaching in instructs all the way up to the majors, conditioning and training specialists, and even psychological and “mental skills” assistance, teams are putting a huge amount of effort into transforming potential into performance.

When it comes to specifics, that’s where things get kind of foggy.

What we in the public sphere don’t know much about is exactly how teams attempt to do this. Team “ways” are codified in little red books, and coaches talk about particular drills from time to time, but player development offers nowhere near the data for public study that major-league performance or amateur scouting reports do. Some might say player development does not lend itself to “hard” data the same way other facets of the game do, and the unstructured information gets lost out in the ether even when it does slip past a team’s web of silence.

As a result, when Josh Donaldson becomes an MVP or Domonic Brown becomes a washout, most of the credit or blame falls to the player himself and his work ethic or makeup. Occasionally the player development apparatus receives a shoutout, but more often than not we’ll chalk it up to the luck of the draw. After all, it’s incredibly difficult to separate the work of the player from the work of the organization in any individual case, isn’t it?

While some teams gain reputations as player development hotbeds—the Cardinals and Giants and, um, … well, I’m sure there’s someone else!—player development comes across as a bit of a black box in terms of what works and what doesn’t, what teams do and what they don’t, and how to measure success other than World Series championships.

I think we can start to fix that.

Last year, Russell A. Carleton wrote a call to start looking more deeply at player development from an analytical standpoint. It was a great article in which he asked why folks don’t apply the same rigor and data-driven research to the world of player development that they do to major-league performance, or scouting, or the other parts of the game.

I don’t exactly picture myself riding in on a white horse to answer these questions, but I do think I have a rather unique perspective on how this can and should be done. In addition to being interested in (read: obsessed with) baseball and sabermetrics, it so happens that in my day job I’m what’s called an “instructional designer.” And instructional designers could be a bridge between player development and the systematic approaches that made sabermetrics so valuable.

Instructional designers like myself specialize in learning, and how people learn. Some of us came up from training and HR backgrounds and fell face-first into the field looking for answers. Others carry advanced degrees in “Instructional Systems” or “Instructional Technology”—degrees that come with a background in educational psychology, systems thinking, and business.

Our jobs are, usually, to create training that does what it’s supposed to do. Instructional designers design training programs and implement them, usually in business, higher education, or military fields. We use our backgrounds in educational psychology, adult learning, and communication to facilitate the acquisition of knowledge, skills, and attitudes (k/s/a for short) in learners. That certainly includes skills that apply to baseball, from physical skills like throwing motions to cognitive skills like knowing how enormous of a lead to take on Jon Lester. (And if you’re wondering just how many and how varied baseball skills are, here’s a piece from last year documenting just how many types of baseball learning exist.)

Instructional designers facilitate learning systematically, accounting for as many of the variables as we can that go into something as opaque as learning—trust me, there are a lot of them. But by designing instruction in a way that’s systematic, formal, and based on best practices, we can develop interventions that work, that are repeatable, and that allow the measurement of success.

Training and learning without instructional design is like baking a cake without a recipe. Sometimes you try it and everything turns out fine. Sometimes you’re left with a giant pile of scrambled eggs and sugar, wondering what the hell just happened.

A good instructional designer does a few things that separate him or her from your average coach or teacher—the “design” aspect of the job description. A great instructional designer works in concert with subject-matter experts like the top-tier coaches and analysts that teams already employ, and puts those people’s wisdom and experience to use in the most efficient way possible. This allows for structure, repeatability, and reliability.

Instructional design concepts are many, and far too involved to discuss in depth in a too-long article online. But there are a couple worth discussing on a broad scale that could add real value to the worlds of player development and coaching.

Objective-Based Design

More than anything else in training design, I believe in the power of structured, defined, performance-based objectives. It is quite a bit harder to reach your end goal when you’re not sure what that goal is, or when you set a goal that is out of your control. A budding young outfielder may say, “I want to become a better baseball player.” A no. 3 starter might say, “I want to pitch well enough to get Cy Young votes.”

Those are wonderful, admirable goals but not useful learning objectives. There are three major qualities that make for a good learning objective: they must be specific, measurable, and observable. By creating objectives that meet these three criteria, you train to an endpoint, rather than entering the world of guesswork. A player stating his desire to raise his batting average by 20 points is terrific, but even that’s not a true performance goal. It relies too much on the work of others, and doesn’t demonstrate a specific behavior in the same way another objective might.

Saying that you want to know when to swing at fastballs is good too, but you can’t observe that the learner has acquired that skill. On the other hand, stating that a player wants to lower his swing rate by 10 percent on pitches below the strike zone, or decrease his throwing errors by 20 percent in the upcoming season, can be an effective high-level performance objective. Those are things you can observe and measure, and they are specific enough to judge success or failure.
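As a rough illustration of what “specific, measurable, observable” buys you, here is a minimal sketch (in Python, using pandas) of how an analyst might track that swing-rate objective from pitch-level data. The data source and column names (plate_z, sz_bot, swung) are hypothetical stand-ins for whatever a team’s pitch-tracking feed actually provides; this is a sketch of the idea, not anyone’s actual system.

```python
import pandas as pd

def below_zone_swing_rate(pitches: pd.DataFrame) -> float:
    """Share of pitches below the strike zone at which the hitter swung.

    Assumes hypothetical pitch-level columns:
      plate_z - pitch height as it crosses the plate (feet)
      sz_bot  - bottom of the batter's strike zone (feet)
      swung   - True if the batter offered at the pitch
    """
    below = pitches[pitches["plate_z"] < pitches["sz_bot"]]
    if below.empty:
        return float("nan")
    return below["swung"].mean()

def objective_met(baseline: pd.DataFrame, current: pd.DataFrame,
                  target_reduction: float = 0.10) -> bool:
    """Check the stated objective: cut the below-zone swing rate by 10 percent."""
    before = below_zone_swing_rate(baseline)
    after = below_zone_swing_rate(current)
    return after <= before * (1 - target_reduction)
```

Because the objective is observable, checking progress is a comparison between two windows of data rather than a matter of opinion.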

Terminal vs. Enabling Objectives

The next step is taking those high-level objectives and turning them into smaller, more manageable steps. In the business, we call those smaller steps enabling objectives that help build toward our final goal or terminal objective. Without them, learning can look a lot like the old South Park joke about the elves that steal underwear:

  1. Steal underwear
  2. ???
  3. Profit!!

As we all know, the middle stuff matters. Nevertheless, without design, that’s how some people approach training or performance gaps. They start with an end goal (“get better at baseball”) and then try to get there without systematically breaking down what that means into the smallest possible units.

Over at Statistically Speaking, Carleton went about this process from a mathematical perspective almost six years ago. It’s the right idea, but he approached it differently than I would; I’d start from an instructional design perspective. Let’s say our end goal is a good one: Gonny Jomes wants to be a better defender by improving his range in left field, and he wants to record more putouts as proof. How could we break that down into manageable parts in order to help him achieve this objective?

It requires a lot of research, but the quick-and-dirty answer is that you spend time doing a task analysis of what goes into range. You may be able to improve your range by running faster, by taking a more direct path to a batted ball, or by getting a better jump on the ball. So right there, you have three enabling objectives that tie to your terminal objective.
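One way to keep that breakdown honest is to write the hierarchy down explicitly, so every enabling objective points back at the terminal objective and carries its own measurable criterion. Here’s a toy sketch of that structure in Python; the objectives and criteria are hypothetical placeholders for our fictional left fielder, not anything a team actually uses.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Objective:
    """A performance objective: specific, measurable, observable."""
    behavior: str                      # what the player does
    criterion: str                     # how success is measured
    enabling: List["Objective"] = field(default_factory=list)

# Terminal objective for Gonny Jomes, broken into the three enabling
# objectives from the task analysis above.
range_goal = Objective(
    behavior="Record more putouts in left field",
    criterion="Putout total exceeds last season's over a full year",
    enabling=[
        Objective("Run faster to balls in play", "Sprint speed gain vs. baseline"),
        Objective("Take a more direct route", "Route efficiency above a set threshold"),
        Objective("Get a better jump", "First-step time under a set threshold"),
    ],
)
```

The point of the structure is traceability: every drill or intervention should map to one of these nodes, and every node should have a way to tell whether it moved.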

But that’s not enough either. Let’s look at the “run faster” objective, which we can make a good performance objective by restating it as “increase my running speed when running toward a ball in play in left field.” There are a couple of ways this can be addressed. You can increase acceleration, getting up to top speed faster. You can increase top speed, so that once you do accelerate, your overall speed is better.

It may take some time, but I’d drill down to the smallest possible unit of behavior that you can test … one of them in this case might be something like “given the crack of the bat, take a first step in the appropriate direction 95 percent of the time” or “given the crack of the bat, take a first step in under one second 90 percent of the time.” Then we build from there.
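Once an objective is phrased at that level, checking it is mostly just counting. Here is a minimal sketch, assuming a hypothetical per-play log with a correct_direction flag and a first_step_time in seconds; both the log format and the thresholds simply restate the example objectives above.

```python
from typing import Dict, List

def first_step_report(plays: List[dict]) -> Dict[str, bool]:
    """Evaluate the two sample enabling objectives against a play log.

    Each play is a hypothetical record like:
      {"correct_direction": True, "first_step_time": 0.84}
    """
    if not plays:
        return {}
    n = len(plays)
    right_direction = sum(p["correct_direction"] for p in plays) / n
    quick_first_step = sum(p["first_step_time"] < 1.0 for p in plays) / n
    return {
        "direction_95pct": right_direction >= 0.95,   # right way 95% of the time
        "under_1s_90pct": quick_first_step >= 0.90,   # under one second 90% of the time
    }
```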

Of course, it’s important to keep in mind that some aspects of physical skill building can’t be broken down into teachable skills. At some point, physical talents and limitations come into play—we can’t just insist that Jeff Mathis swing the bat as hard as Bryce Harper does. So part of the overall process is separating the things we can train from the things we can’t, and focusing on the former. But there’s more that we can change than you might think! There are also costs and opportunities attached to training initiatives, so there’s a complex set of circumstances in place. The goal is to find and train the skills with the biggest and best returns. And data can help us do that.

Instructional design is just one way that we can try to apply principles from the worlds of business or science to sports. And judging from what I’ve heard from league sources, teams are beginning to move in this direction with psychology specialists working on mental skills and performance enhancement. Beyond instructional design, there are other outsider, quasi-analytical concepts that lend themselves to player development: human performance technology, design theory (ask Jeff Quinton!), and human resources. By casting an analytical eye on what goes into player development, new avenues of success could open up. Finally cracking just a piece of the player development code—can we find a systematic way to improve player skills across a significant sample?—could be a very big thing, especially for the first team to find a way to make it work.

Special thanks to Russell A. Carleton and Rob Neyer for assistance with this piece.

BruceSchwindt
2/14
Excellent and very informative article Bryan.
jeshleman13
2/15
I think the challenge to analyzing player dev, why it "gets lost out in the ether," is that there are so many approaches to apply; your case for ID is strong, and at its best, a team would have specialists for different player types/needs. I assert that combining process-led approaches like ID with theoretical underpinning in soc/psych would prove valuable to a franchise. But I have no idea if teams think this way.

I've tried to research what teams do in this realm and it's . . . scant.

lipitorkid
2/15
This is perfect. I'd like to add one more component to the "objective" part of this process. Models of success and understanding those models of success. This concept was perfectly addressed in the 2017 BP Annual Arizona Diamondbacks article written by Nick Piecoro.

Unless we are creating something entirely new, our objectives are built upon a concrete model of what we want something to be. Unfortunately we can misunderstand how the model objective came to exist. For example, Mike Trout might tell people he is hitting down on the ball even though video does not support this explanation.

I'm really looking forward to more articles about this topic. Thanks for spotting a new white rabbit.