A first foray into one of baseball's murkiest subjects.
Pitch sequencing has long lurked as a sort of terra incognita in sabermetric analysis. It’s something that all baseball folks agree is important, but it has proved mostly impenetrable to strictly quantitative approaches. There’s an intuitive sense that sequencing must be one of the crucial determinants of pitcher success, and although we can seemingly identify a good sequence when we see one, applying a universal criterion of good sequencing across all pitches (or pitchers) is far more difficult. The rest of this article will be devoted to applying just such a criterion and determining whether it is of any practical utility in understanding pitching generally.
There are at least two schools of thought about pitch sequencing. On the one hand, there seems to be an appreciation for sequences that mix up locations, speeds, and breaks in unpredictable ways, on the grounds that those kinds of sequences ought to be the most challenging for a hitter. On the other hand, Mitchel Lichtman (aka MGL) has argued forcefully on the basis of game theory that the ideal sequencing would be something like weighted randomness (weighted, that is, by the quality of each pitch). MGL’s argument says that if a pitcher tried too hard to mix things up, for instance by purposefully never throwing two of the same pitch in a row, he would end up tipping the next pitch to the batter, putting himself at a serious disadvantage.
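MGL's game-theory point can be illustrated with a toy simulation. The sketch below assumes a hypothetical two-pitch pitcher (a 60/40 fastball/changeup mix; the names and weights are invented for illustration, not drawn from any real data) and compares two strategies: independent weighted randomness versus a "mix it up" rule that forbids back-to-back repeats. A batter who simply guesses the most likely next pitch given the previous one does no better than the base rate against the random pitcher, but predicts the anti-repetition pitcher perfectly, since with only two pitches the no-repeat rule fully determines what comes next.

```python
import random
from collections import Counter, defaultdict

random.seed(0)

# Hypothetical pitch mix: weights stand in for "pitch quality" in MGL's sense.
PITCHES = ["fastball", "changeup"]
WEIGHTS = [0.6, 0.4]

def iid_sequence(n):
    """Weighted randomness: each pitch drawn independently of the last."""
    return random.choices(PITCHES, weights=WEIGHTS, k=n)

def no_repeat_sequence(n):
    """'Mix it up' rule: never throw the same pitch twice in a row."""
    seq = [random.choices(PITCHES, weights=WEIGHTS)[0]]
    while len(seq) < n:
        nxt = random.choices(PITCHES, weights=WEIGHTS)[0]
        if nxt != seq[-1]:  # reject repeats
            seq.append(nxt)
    return seq

def batter_guess_accuracy(seq):
    """Fraction of pitches a batter predicts by always guessing the
    most common follow-up to the previous pitch (a scouting report
    built from the sequence's own transition counts)."""
    table = defaultdict(Counter)
    for prev, nxt in zip(seq, seq[1:]):
        table[prev][nxt] += 1
    correct = sum(table[prev].most_common(1)[0][1] for prev in table)
    return correct / (len(seq) - 1)

n = 100_000
acc_random = batter_guess_accuracy(iid_sequence(n))      # ≈ 0.6 (base rate)
acc_norepeat = batter_guess_accuracy(no_repeat_sequence(n))  # 1.0 (fully tipped)
print(acc_random, acc_norepeat)
```

With more than two pitches the no-repeat pitcher isn't perfectly predictable, but the same mechanism still leaks information: conditioning on the previous pitch narrows the batter's guess, which is exactly the tipping effect MGL describes.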