May 16, 2001
Aim For The Head
Expected vs. Actual Wins
This week's question comes from Chuck Hildebrandt, who writes:
Being a lifelong Detroit Tiger fan, I was studying the 1984 season in Total Baseball when I was startled by something I saw in the National League. I noticed that the New York Mets, managed by Davey Johnson, were outscored by their competition, 652-676, yet finished the season with a 90-72 record.
Thanks for the question, Chuck.
First, let's limit ourselves to teams that played at least 100 games in a season, as some teams in the early days of professional baseball played incomplete schedules. In order to find a team's expected number of wins, we'll use the Pythagenport formula--Clay Davenport's refinement of Bill James's original Pythagorean formula. The formula is:
Win% ~= R^E / (R^E + RA^E), where E = 1.5 * log10((R + RA)/G) + 0.45
Given an estimated expected winning percentage, we can compute the difference between a team's actual and expected records either based on winning percentage or difference in wins.
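As a quick sketch of the arithmetic (the function names are mine, not from any standard library), the Pythagenport calculation looks like this in Python:

```python
import math

def pythagenport_winpct(r, ra, g):
    """Pythagenport expected winning percentage:
    Win% = R^E / (R^E + RA^E), with E = 1.5 * log10((R + RA)/G) + 0.45."""
    e = 1.5 * math.log10((r + ra) / g) + 0.45
    return r ** e / (r ** e + ra ** e)

def wins_above_expectation(wins, losses, r, ra):
    """Actual wins minus Pythagenport expected wins (DIFW in the table below)."""
    g = wins + losses
    return wins - pythagenport_winpct(r, ra, g) * g

# The 1984 Mets: 90-72 despite being outscored 676-652.
print(round(wins_above_expectation(90, 72, 652, 676), 1))  # about +11.7
```

Plugging in the 1984 Mets' numbers reproduces the differential discussed below.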
(To answer this question, I'm using the free downloadable statistics database at www.baseball1.com. The database is currently unavailable due to server problems, but is expected to be back online soon.)
Chuck, you have a keen eye. The 1984 Mets turned out to be the second-biggest overachievers ever, at 11.7 wins above expectation, beaten only by the 1905 Tigers, who went 79-74 while scoring 512 runs and allowing 602. They "should" have gone about 66-88, but instead managed to be five games over .500, and 12.8 wins over expectation.
On the underachieving side, there are two teams that lost 13 or more games beyond expectation. The worst underachievers were the 1993 Mets, whose 672 R/744 RA differential should have been good for a 73-89 record. They instead posted a gruesome 59-103 record, or 14.3 wins below expectation. The other unlucky team was the 1986 Pirates, who scored 663 runs and allowed 700, which should have earned them a 77-85 record, but instead they went 64-98, a 13-game differential. The worst underachievers who actually outscored their opponents were the 1907 Reds, who scored 526 runs and allowed 519. They projected to a 79-77 record, but actually went 66-87.
Here's a list of all teams with differentials of 10 or more games:
G500 = Games over .500
Pyth = Pythagenport expected winning percentage
P_W  = Pythagenport expected wins
P_L  = Pythagenport expected losses
DIF% = Difference between actual and expected winning percentage
DIFW = Difference between actual and expected wins
Of course, teams play 162 games today, versus 154 or fewer in years past, so it's a little easier to run up a larger differential over more games. If we look just at differences in winning percentage, there are nine teams that were 75 points or more off expectation. The '93 Mets still top the list, at 89 points below expectation, but a new team, the 1981 Reds, turns out to be the biggest overachiever, exceeding its expected winning percentage by 87 points (going 66-42, .611 versus a projection of 56.6-51.4, .524). Other teams with 75+ point differentials but not a 10-game overall difference include the 1884 Chicago White Stockings (later known as the Cubs) as 78-point underachievers, and the 1894 New York Giants as 76-point underachievers.
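The point-differential arithmetic for the 1981 Reds can be checked directly from the records quoted above (a trivial sketch, using only numbers from the text):

```python
# Strike-shortened 1981 Reds: 66-42 actual vs. a 56.6-51.4 projection over 108 games.
actual_pct = 66 / 108       # .611
expected_pct = 56.6 / 108   # .524
points = round((actual_pct - expected_pct) * 1000)
print(points)  # 87 points of winning percentage above expectation
```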
Let's consider Chuck's second question: "What do such differentials really say about a manager's influence on a team, versus dumb luck?" Strategic blunders by the manager can certainly influence a team's record, but the magnitude of this effect over the course of a season is hard to estimate. A manager probably has a more important influence on his team in playing the right lineup, managing the pitching staff, keeping his bench fresh, and so on, than in specific game tactics.
A team that underachieves its projection as badly as the teams we're talking about probably lost more than its share of one-run games, which can be caused in part by a lousy bullpen. The 1999 Royals, who are in the table above, had a terrible bullpen, possibly one of the worst ever. When no one is getting the other guys out, it's hard to blame all of that on the manager.
Of course, the arguments above aren't very sabermetric. Let's ask a slightly different question: are teams that underachieve or overachieve likely to continue doing so the next season? This doesn't necessarily answer the question about the manager's impact, because a manager's job is somewhat more at risk following a season that didn't meet expectations, but it's a place to start.
I took the list of all teams with 100+ games and compared each team's DIFW in one season to the next (assuming the franchise still existed), then computed the correlation between the two differentials. If the correlation were close to 1.0, teams would tend to show the same kind of differential (above or below expectation) the following season. If it were close to -1.0, the reverse would be true: teams that overachieve one year would be more likely to underachieve the next. A value close to zero means there's no relationship between the two, that nothing from the team's "luck" carries over to the next season. The actual correlation was +0.05, which is close enough to zero to suggest that there's no relationship.
We can refine the question a little more: since teams that overachieve are more likely to retain their manager, we can focus only on teams that were significant overachievers, using five or more games above expectation as the threshold. Plotting their win differentials in the following season yields the chart below, which shows no real trend or pattern, further suggesting that the manager has little consistent impact on whether a team over- or underachieves its expected Pythagenport projection.
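The five-win overachiever cut is a simple filter. In this sketch, each row is (team, year, DIFW, next-season DIFW); the same-season DIFWs for the Mets, Tigers, and Reds come from the text, while the next-season values are invented for illustration:

```python
# (team, year, DIFW, next-season DIFW); next-season values are hypothetical.
rows = [
    ("NYM", 1984, 11.7, 1.2),
    ("DET", 1905, 12.8, -0.5),
    ("PIT", 1986, -13.0, 2.1),
    ("CIN", 1981, 9.4, -2.0),
]
overachievers = [r for r in rows if r[2] >= 5.0]       # five-win threshold
follow_up = [r[3] for r in overachievers]              # next-season DIFWs
mean_next = sum(follow_up) / len(follow_up)
print(f"{len(overachievers)} overachievers, mean next-season DIFW = {mean_next:.2f}")
```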
A few readers wrote in with comments about last week's question on good-hitting pitchers:
Regarding whether Wes Ferrell's Hall of Fame case is enhanced by his offensive production, Kevin Morse writes: "He's certainly more deserving than his Vet Committee-elected brother Rick."
Brian Simpson asks: "What about Orel Hershiser? I seem to remember him being a fairly good hitter before he got hurt." Indeed, Hershiser was pretty good with the stick for a pitcher. His best season was 1993, when he posted a 784 OPS (.356/.373/.411), but that was his only season with an OPS over 600 in 50 or more plate appearances.
Mike Ritzema writes: "I read your article and I was wondering where Darren Dreifort would place. I saw his two bombs against Chicago last year and it gives me hope that he'll hit for his money, too." Dreifort's two bombs helped him to just a 520 OPS last year (.210/.246/.274), his best year to date.
David (no last name given) inquires: "Interesting article about historical pitchers hitting performances. Can those numbers be converted to some familiar sabermetric figures--runs above average, games won v. average, etc. Basically, how does a good hitting pitcher affect a team's ability to win?"
Great question. The upper limit seems to be about 20 runs for the very best hitting seasons by pitchers, as shown in the following chart (PMLV is the number of runs contributed on offense above what a league-average pitcher would have hit, adjusted for park and league):
In today's game, pitchers don't throw as many innings or complete games, and rarely would get a chance to bat often enough to clear 20 runs of value. Hershiser's 1993 was worth about 13.5 runs, and that's about the top end for the past couple of decades.