
Last week, the Tampa Bay Rays signed Grant Balfour to be their closer for 2014 (and presumably 2015), committing to pay him $12 million over the next two seasons. It’s not an expensive closer contract, as these things go. But for the cost-conscious Rays, it seemed a little strange. The team also re-signed Juan Carlos Oviedo (formerly Leo Nunez) and traded for Heath Bell over the winter. Another sabermetric darling team, the Oakland A’s, signed Eric O’Flaherty last week and, earlier in the winter, traded for Josh Lindblom and Jim Johnson.

Wait a minute: these are the two franchises that have had books written about how they embrace advanced analytics. It was the A’s who practically invented the Billy Taylor/Huston Street model of “developing a closer” (i.e., getting someone a bunch of saves) and then flipping him for other pieces. I thought the official sabermetric orthodoxy was that teams shouldn’t spend any of their precious resources chasing relievers. Isn’t the wisdom that when a trade involves a reliever and something else, the team that gave up the reliever won? Bullpen guys are too volatile! For them, the traditional metrics used to evaluate pitchers (ERA, saves) are either unreliable or outright junk stats. When you look at relievers through the lens of WAR(P), they don’t produce anywhere near what elite starters or position players do, so why pay them similarly? Teams would be better off getting a couple of fire-balling pre-arbitration guys and some guys with checkered records, and spending the money saved elsewhere. Then they can hope that a couple of them have a BABIP-driven amazing season. Why blow money or prospects on a guy who’s going to pitch only 70 innings at most?

I’d argue that WAR(P), as we have defined it, doesn’t do a very good job of describing relievers. The disconnect can be summed up by looking first at this chart and then at this one. In case you don’t want to click through, the first chart is a listing of the top WARs of 2013, while the second is the top win probability added (WPA) scores of 2013. The top 30 of the WAR chart doesn’t contain any relievers at all. The WPA chart alternates between elite starters and back-end relievers, mostly closers. There’s a lesson in here, if you’re careful to look for it.

WAR answers (or attempts to answer) the question “What is Smith worth over and above our common baseline, replacement level?” It does that by specifically trying to isolate the contributions that Smith made independent of any context. The reason that RBI totals are a bad way to compare players is that batters who happen to play on teams where they hit behind guys who are always on base will have big numbers. Those whose managers stick them in the leadoff spot and those who are just stuck on bad teams will have lower numbers. WAR also ignores any information about when in the game the event happened. To WAR, a single is a single is a single, no matter whether it was to lead off the first or to drive in the winning run of Game 7 of the World… sorry, bad Edgar Renteria flashbacks.

For position players, you can make the case that it all sort of evens out. You can’t really leverage a specific hitter to a specific situation (pinch hitting aside). Hitters take their appointed turn in the order, no matter the circumstances. If it’s the bottom of the ninth, two on, two out, down by one, and the no. 7 spot is due up, the cleanup hitter can’t just say “I got this one.” Hitters have little control over what situations they will find themselves in; about the best prediction going forward is that they will have some big situations, some little situations, and some good old average situations to deal with. You might make the same sort of argument with starting pitchers as well. Relievers, on the other hand…

In the modern bullpen, it’s generally known ahead of time who will pitch in what situation. There is plenty to say about the way the modern bullpen is constructed, both good and bad, but let’s lay that aside for now. Closers will pitch in the ninth inning with their teams up by one to three runs, whether we like that or not. There are other relievers who only suck up low-leverage innings when it’s 10-3. That brings us to the WPA chart. We know that Greg Holland, who finished second in MLB in WPA last year behind Clayton Kershaw, did so because he was placed into a lot of high-leverage situations where there was a lot of win probability available. It would be a mistake to assume, because of that fact (and that fact alone), that Greg Holland was the best reliever in baseball last year. (Then again, it wouldn’t be a silly statement either!)

WPA has its problems (the biggest being that it credits or debits everything that happens in an inning to the pitcher, even things over which he has little control), and it isn’t a very good tool for evaluating individual pitchers. Had Holland done the same work in low-leverage situations, WAR(P) would still have recognized him for it, but WPA would not have. Holland had a good year, no doubt, but more importantly, he illustrates a point. Because teams have a lot more control over which relievers are placed into which situations, having a good reliever (or a reliever having a fluky good season) for those high-leverage situations can have a big impact on a team’s chances of winning games. I suppose this isn’t really news; we’ve just confirmed it with #GoryMath.

To flip the coin around, because Holland did his work in high-leverage situations, WAR(P) does not recognize his accomplishments as much as WPA does, and here WPA is more sensitive to a key aspect of how relievers are actually used. (In fairness, Baseball-Reference’s version of WAR has an adjustment for leverage when calculating reliever WAR scores, although for some technical reasons I still think it undervalues relievers’ actual contributions).
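For readers who want to see the mechanics, here is a minimal sketch in Python of that kind of leverage adjustment, assuming the half-leverage convention the public WAR implementations use (a reliever is credited with leverage halfway between his average entry LI and a neutral 1.0); the numbers in the example are hypothetical.

```python
def half_leverage_multiplier(avg_gmli: float) -> float:
    """Credit a reliever with leverage halfway between his average
    game-entry leverage index (gmLI) and the neutral value of 1.0."""
    return (1.0 + avg_gmli) / 2.0


def leverage_adjusted_war(context_neutral_war: float, avg_gmli: float) -> float:
    """Scale a context-neutral WAR figure by the half-leverage multiplier."""
    return context_neutral_war * half_leverage_multiplier(avg_gmli)


# Hypothetical closer: 1.5 context-neutral WAR with an average entry LI of 2.0
# gets credited with 1.5 * 1.5 = 2.25 leverage-adjusted WAR.
print(leverage_adjusted_war(1.5, 2.0))  # 2.25
```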

Lately, there seems to have been a shift in the free agent market. As more and more teams use the WAR framework to evaluate players going forward (and believe me, most of them do in some way or another), we’ve seen a lot of free agent signings where the general consensus has been “Yeah, that’s about right according to WAR.” In the same way that the A’s found a flaw in batting average, and in how it failed to match up with the realities of the game, back when the market was still pricing players by it, maybe we’re just seeing teams start to take advantage of the flaws in WAR. Stop me if you’ve read this book before.

Betting on relievers is most certainly risky, but the point isn’t to avoid risk; the point is to manage it properly. The starters on the WPA leaders list make (or will eventually make) much more than the relievers on the list will, but the starters are also a safer bet to produce that sort of WPA year after year. Relievers are a lower-cost, high-risk, high-reward bet, but when you live in a “small market,” sometimes those are the only bets you can afford.

It is true that it’s hard to get a handle on which relievers are good and which ones are not. However, we can certainly agree that there are some who are better at the craft than others, even if the numbers don’t always show it over 70 innings, and quality costs a little more. And with teams finally moving away from judging a reliever by his saves total, more fully understanding statistical reliability, and doing some deep sub-atomic studies using PITCHf/x and other mystical voodoo things, it’s a lot clearer who is a better investment. Yes, because of the small sample size, relievers will have big error bars around their range of expected outcomes, no matter what, but it’s worth the due diligence to at least make sure the midpoint of that error bar is as high as you can get it. It will take some luck to get the full benefit of the reliever, but it takes some luck to get anything fun in life.

Why are smart teams spending money on relievers? Well, for the same reason that smart teams spend money on anything. There’s a case to be made that relievers aren’t properly valued by the metrics. In addition, the conventional wisdom is that relievers aren’t worth paying very much, and maybe that’s depressing the market unfairly. If you want to make a case against a specific player (Heath Bell? Really?) that’s reasonable, but as an asset class, relievers might just have come around to having an expected value that’s more than they cost.

Grasul
1/27
Great article. It's great when you guys consider the limitations and assumptions of advanced metrics as part of any given topic. The balance makes for a more thorough evaluation.
TheRedsMan
1/27
Fangraphs' model of WAR, which has no leverage adjustment, has Balfour as a ~7-win player over the last 6 years. The Rays just paid him like he's a 1-win player.

Sabermetrics orthodoxy is "don't pay extra for past leverage". It's not "don't pay for production if that production comes from a reliever".

I don't see the conflict here.
paulcl
1/27
Fangraphs WAR adjusts for leverage for relievers: it gives them credit for half the difference between their leverage index and an LI of 1.0. For example, a closer with an LI of 2.0 will have his WAR multiplied by 1.5.

(http://www.fangraphs.com/blogs/war-and-relievers/)
TheRedsMan
1/27
Go figure; you learn something new everyday. Thanks for the correction.
pizzacutter
1/27
When I speak of the traditional sabermetric orthodoxy, I refer more to the idea that any idiot with a right arm can close a game. In that case, why would teams pay all this extra money for a veteran (often a "proven closer") when they could simply have some league-minimum guy handle it (and pick up 30 saves in the process)?
TheRedsMan
1/27
I don't know anybody who thinks that "any idiot with a right arm can close a game".

I do know a lot of people who think that "any reliever who can get outs can get outs in a close game" and who think "it's stupid to use your best reliever to protect a 3 run lead in the 9th while refusing to use him to protect a 1 run lead in the 8th."
lichtman
1/28
Right, exactly. In any case, the "any idiot" narrative is not a very helpful one. Smart teams know the value of a closer (and other relievers) based upon a credible rate (context neutral runs allowed per inning or per whatever) projection.

And it IS correct to use half leverage. The value of a closer is his wins above replacement times around 1.5 or so (assuming that he pitches in an average 2.0 LI).

The (silly) meme that, "Any idiot can save a game with a 3 run lead in the 9th," has nothing to do with the value of a closer based on a credible projection for him.

Kimbrel is probably 2.5 runs per 9 better than a replacement reliever. Valverde is probably .5 runs better. That makes Kimbrel worth 16 runs (72 IP) times 1.5 (half leverage) more than Valverde. That's 10 million dollars per year more value, even though Valverde is one of those "idiots" who can save a game with a 3 run lead.

(Of course, it's not that "any idiot" can save that game. It is that his team wins the game with the "idiot" on the mound 94% of the time and with the elite closer, it is 97%. Of course, the value of a great closer is mostly in games where his team is NOT up by 3 runs, hence the concept of leverage.)
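Working through that arithmetic explicitly (a rough sketch of the numbers above; the runs-per-win and dollars-per-win conversions are my assumptions for illustration, not figures from the comment):

```python
RUNS_PER_WIN = 10.0      # rule-of-thumb runs-to-wins conversion (assumption)
DOLLARS_PER_WIN = 4.5e6  # rough 2014 free-agent price of a win (assumption)


def marginal_dollar_value(runs_per_9_gap: float, innings: float, leverage_mult: float) -> float:
    """Dollar value of one reliever's edge over another, after applying
    the half-leverage multiplier to the run gap."""
    runs = runs_per_9_gap * (innings / 9.0) * leverage_mult
    return runs / RUNS_PER_WIN * DOLLARS_PER_WIN


# Kimbrel ~2.5 runs/9 better than a replacement reliever, Valverde ~0.5:
# a 2.0 run/9 gap over 72 innings is 16 runs, times 1.5 is 24 runs,
# or roughly $10-11 million at the assumed price of a win.
print(marginal_dollar_value(2.5 - 0.5, innings=72, leverage_mult=1.5))  # ~10.8e6
```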
stephenwalters
1/27
Good insights, Russell, as usual. But if the argument is that WPA is a better indicator of relievers' prospective value than WAR(P), I'd respectfully disagree.

WPA reflects the fact that managers tend to reserve the highest-leverage situations (9th innings of reasonably close games) for one guy, and exclude others from that role. Even if the excluded ones are only trivially less effective than the designated save collector (or, possibly, as or more effective!), their WPA values will tend to be lower as a result. That's why the list of WPA leaders includes few pure "setup" men and a bunch of closers.

So I'd agree with RedsManRick that sabermetric orthodoxy ("don't pay for past leverage") emerges intact from this list comparison. It's possible, of course, that managers have identified these specialists because they possess special skill in coping with the stress of higher leverage, but if the argument is that that's why we need to pay "established closers" more, then we're back in the age-old debate about whether this presumption of special skill is valid, and whether closers can be developed.

If "smart" teams are willing to pay more for relievers this winter, perhaps the answer lies in their very volatility. A team that thinks it is poised on the brink of winning it all might be risk averse; they don't want to take a chance that an otherwise-great season goes down the drain 'cause their 'pen has a collective off year, and so they wind up paying premium prices for low-variance arms.
pizzacutter
1/27
I'm looking at that relationship in the reverse. Yes, WPA inflates numbers for closers, specifically because the 9th up by a run is a high leverage situation, and most certainly, you don't pay for past success or past leverage. But, if you sign a guy with the intent of putting him in those high leverage situations, WAR isn't doing as much as it should to highlight the out-sized impact that he can have on games.

My argument isn't about whether we should use WPA to evaluate individual relievers. It's about the fact that WAR doesn't properly reflect what relievers do.
lichtman
1/28
WAR that uses "half leverage" DOES properly evaluate short relievers for projection purposes. WPA overvalues them because it uses full leverage. It also includes too much noise that has no predictive value (which is why it is better use a context neutral WAR plus an adjustment - half - for leverage).

BTW, the reason for the "half adjustment" for leverage is "chaining." You'll have to poke around on The Book blog and other places on the web for an explanation of that. Basically, if you choose to leverage (increase his value) a good reliever by using him in high leverage situations, all your other relievers move down the food chain.
newsense
1/27
Another way to think about this is that WPA is not a good predictor of future WPA. The best predictor of future WPA is something like FIP or FRA combined with the predicted leverage role, the latter being under team control.
TangoTiger1
1/27
Well-said.
pkiguy22
1/27
Great article. I have noticed (as mentioned above) that teams on the fringe of being a contender tend to spend more on a closer/reliever. Meanwhile, the team on the decline will be more willing to let their established/veteran relievers go because there is little gained by paying them "market value".
doctawojo
1/27
For what it's worth, the A's spent a supplemental round pick on Huston Street a decade ago. The A's haven't ever bought a Papelbon on the market, but they have, in their way, been paying blood and treasure for guys they thought they could trust to close for a long time now.

This has little to do with the point of the article; it's just to say that wherever the A's are in their valuation of relievers, and whether where they are is ten years ahead of the times or ten years behind, I think they've been there for quite a good while.
dlinde
1/27
Why should relievers be credited for being used in high leverage situations? They don't have any impact on the game preceding them. Yes, high leverage relievers get a WPA boost, but, unless you believe in clutch, we'd expect the same level of performance from them in low leverage and, vice versa, the same performance from low leverage relievers in high leverage situations. In this sense, a context neutral metric like WAR is more reflective of value, no?

Is it possible certain relievers disproportionately face the tops of lineups? If so, we could be getting a skewed sense of their efficacy. A 3 FIP vs. opponents averaging a .340 wOBA is obviously more valuable than a 3 FIP vs. opponents averaging a .310 wOBA. I assume WAR fails to recognize this?
misterjohnny
1/27
I was about to make a similar comment. The fundamental question that is unanswered is: Is there such a thing as "clutch" relievers? Or can the same pitcher get a strikeout with the bases empty and up 4 runs as with the winning run on second base?
pizzacutter
1/27
The point isn't to figure out how to assign credit to the pitcher. It's to realize that, from the team's perspective, there is a real incentive to make sure you have good pitchers to staff those high leverage situations, because, by definition, they have a greater impact on the game than low leverage situations. Teams control not only whom they sign, but what role he will fill. WAR is only somewhat sensitive to that leverage, and the market seems to be drawing more in line with WAR. So, there's room to exploit an inefficiency.
jdeich
1/27
If I'm reading this article correctly, WAR is only impacted by LI in the case of relief pitchers?

As in, if Felix Hernandez pitches a perfect 8th inning up 2-1, he's credited with "standard" WAR for that inning, but if a reliever pitches the same perfect 8th inning, he gets "enhanced" WAR due to high LI?

Conversely, if a reliever gives up a home run, he gets a larger magnitude of WAR adjustment than the batter did? Does it matter if the batter is a pinch hitter?

It seems like LI's impact should be symmetric, even if the symmetry is "LI doesn't count for anyone". One extra win created by the offense should equal one extra win surrendered by the pitching/defense, regardless of the names given to roles.
TangoTiger1
1/27
Generally speaking, over the course of a season, the LI of every starting pitcher will hover around 1 (say, 0.95 to 1.05, with outliers just outside that). Hence, it's not worth the effort to try to get that precise.

That said, you can definitely argue that maybe it should be that precise.
jdeich
1/27
I picked Felix Hernandez because he's likely to continue having an above-average LI (career = 1.04, as high as 1.13 when the Mariners are not especially horrible). He's going to be in lots of close games:

1) He is good at the pitching of the baseballs, and the opponents' run total will have a low average and low variance.
2) Seattle's offense is generally awful, and their run total will have a low average and low variance.

Also, he pitches deeper into games (6.6 IP/GS) than the average AL pitcher (5.9 IP/GS), and most high-LI situations occur in the 7th or later.

It's an even larger effect historically: Bob Gibson had 4 straight years with LIs of 1.10 to 1.19 when many games ended 2-1 or 1-0.

I don't see how it would be "not worth the effort" when the system is already applied to a subset of pitchers. LI is calculated for all pitchers. Wouldn't this just entail removing an "if reliever, then ..." decision?
TangoTiger1
1/27
Thank you for your diligence.

Actually, it's a bit more complicated! With relievers, because the leverage is "bequeathed" by the manager, we only give the reliever a portion of the leverage, reasoning that the manager has to give the leverage to SOME reliever. Hence, if a reliever enters with an LI of 1.8, we give the reliever an LI of 1.4, for purposes of figuring his contributions.

But for starting pitchers, it's not the same thing. They directly have a hand in creating their own leverage, as you properly note. If Felix or Cliff Lee or some other pitcher gets an LI of 1.1 because they pitch deep in games, they may deserve most if not all of that LI.

So, it's a bit more nuanced.

In terms of "worth it": it's easy for us to sit here and tell David at Fangraphs and Sean at Baseball Reference "uh, do it!". It's not our effort being measured here.

It's definitely a valid point, and should be given its due consideration.

Great job!
schlicht
1/27
Is it possible that pitchers for a team with a weak offense would tend toward higher leverage scores over the course of a season?

What I'm thinking is that if the team's average margin of victory is small relative to the league average, then pitchers on that team will tend to pitch in higher leverage situations.
Although this would likely be balanced by lower leverages due to larger margins of defeat.
lichtman
1/28
"Although this would likely be balanced by lower leverages due to larger margins of defeat."

You just answered your own question, no? A team's average leverage for the season has to be 1.0, by definition. I think.
TangoTiger1
1/28
The league average LI is 1 by definition. That won't apply at the team level necessarily.
lichtman
1/28
OK, makes sense. A team could conceivably have all their games go 0-0 into extra innings.

Here is an interesting thought:

Say that a pitcher like Felix (or any good starter) always goes into the 9th inning and if the game is close, he pitches and if the game is not, he leaves and a reliever comes in.

That starter would obviously have an average leverage of quite a bit above 1.0. Maybe 1.1 or 1.2 for the season. Maybe higher. And since he is an above-average pitcher, he should have a lot more value (WAR) than regular WAR (with no leverage adjustment) would suggest.

But, let's say that his team's regular closer is someone like Kimbrel or Mariano, and he is pitching in the 9th instead of them. He is costing his team wins, so to give him more value because of his average seasonal leverage is a bit of a contradiction. Of course, on the team level, it will all balance out, since the closer will get less value (WAR) because he is giving up some high-leverage situations to that starter. And that is not to mention the fact that the starter's high-leverage situations in the 9th are occurring the 3rd and 4th (and more) times through the order, when he is not nearly as good as he is overall!

So it is NOT really correct to multiply any starter's overall WAR by his average leverage for the season since the manager is "de-leveraging" his starter by allowing him to pitch in higher leverage situations (later in the game) when he is least effective (because of the TTOP and fatigue).
lichtman
1/28
It is not a question of clutch or even whether a reliever "should" get credit for pitching in high leverage situations, like a closer (and set-up guy) does.

Forget about the word "credit." You pay a player for the value he provides to your team (compared to some other player, in most cases, it is the "replacement player"). It doesn't matter how that value comes to fruition. If a manager decides that he is going to put a certain pitcher into high leverage situations (like he would a closer), it just so happens that the pitcher's value per inning gets multiplied. That is THE definition of leverage, whether you understand the concept or not.

Here is a perfect example, which should completely answer this question or solve this "problem" if any of you are having a problem with this concept.

Let's say that you can get Barry Bonds in his prime, but you are only allowed (or choose) to play him in games where you are up by at least 7 runs. What is a fair salary?

And what about if you only play him in games where the score is within 1 run or tied. How much is he worth PER GAME?

That's all there is to this concept when it comes to relievers. It actually applies to all players, but most players play in average leverage situations overall. You can actually do the same thing (even though most people don't), and technically you have to, for pinch hitters, pinch runners, and defensive replacements. Whatever their hitting, running, and defensive value is, you have to multiply it by the average LI that they play in. That is their value for salary and trade purposes, even if their "talent in a vacuum" is overvalued.

manoadano
1/27
You mention that you think the Baseball-Reference and Fangraphs fudge factor of giving relievers half credit (0.5*LI) for their leverage still undervalues relievers. Can you explain why? Do you disagree with the concept of chaining?

Also, it's strange that you would mention Josh Lindblom instead of Luke Gregerson.
pizzacutter
1/27
BRef's WAR adjustment is (WAR * (1 + average gmLI) / 2), but that's going to be dragged down by "need to pitch" outings that the team doesn't care about. In addition, they use game-entry leverage, in a well-conceived attempt to make sure that a pitcher can't just cook up his own high-leverage situations by being bad at his job, but the fact is that in a ninth-inning situation, if the closer goes 1-2-3, the leverage actually increases as those outs go on the board. All told, he faced much greater leverage than we give him credit for. I also don't understand the derivation of "let's go halfway back to 1" other than "let's just split the difference." Further, that adjustment will be agnostic to whether the pitcher was good in mopup/bad when it counted vs. bad in mopup/good when it counted vs. the same performance in either situation.
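To put some (entirely invented) numbers on the first objection, here is a quick sketch of how a handful of low-leverage "need to pitch" outings drag down a closer's average entry leverage, and with it the B-Ref-style multiplier:

```python
# Hypothetical season of game-entry leverage values (gmLI) for a closer:
# mostly standard save situations, plus a few blowout innings where the
# team just needed someone to pitch. All values are invented for illustration.
save_situations = [1.8] * 55   # typical high-leverage entries
mop_up_outings = [0.3] * 10    # low-leverage, score-irrelevant innings

entries = save_situations + mop_up_outings
avg_gmli = sum(entries) / len(entries)

# B-Ref-style adjustment: WAR * (1 + average gmLI) / 2
print(round(avg_gmli, 2))            # ~1.57, pulled down from 1.8
print(round((1 + avg_gmli) / 2, 2))  # multiplier ~1.28 instead of 1.40
```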
TangoTiger1
1/27
The derivation is the shortcut approximation of a much longer calculation called "chaining", where you refigure how the bullpen would do when one of the guys is removed from the chain and everyone's role is readjusted, with the new guy coming in at the bottom of the chain.

Whether Mariano is on the roster, is injured or is retired, the leverage opportunities will still exist. And someone pretty good will pick up some of that slack.
TangoTiger1
1/27
Russell is wrong on multiple levels here.

WAR properly accounts for the leverage issue, by giving the pitcher credit for halfway between standard leverage (1.0) and actual leverage (whatever he got, say 1.8). And that's done for the very reason Russell notes: that a manager can indeed leverage his reliever in the future.

BUT, we don't give full credit, because the next best reliever could do almost as good a job. Hence, once you "chain" all this, you give 1.4 credit in terms of leverage, and that's what WAR does.

Secondly, his view of "sabermetric orthodoxy" is not at all accurate. No one suggests "that any idiot with a right arm can close a game", in any circle. It's a straw man.

I posted my views on my blog.
gpurcell
10/24
"we don't give full credit, because the next best reliever could do almost as good a job."

Which simply is not true!
lichtman
1/28
Also, for projection purposes, which is what this article is all about, you should not be using any kind of WAR which already includes leverage, unless you have to.

Obviously no credible forecaster uses past WAR for WAR projections other than as a quick and dirty method, but we'll assume that we are (using WAR to project future WAR).

Once you get your projection, you then estimate how that reliever is going to be used and THEN apply the leverage adjustment. You don't use the leverage adjusted WAR from the past in order to project leverage adjusted WAR in the future, unless, again, that's all you have or you don't know how he is going to be used in the future, so you just assume that he'll be used exactly like in the past.

The reason is similar to why you don't use WPA to project future WPA. Let's say that last year a closer had an average LI of 1.7. But let's say that the average closer has a LI of 2.0. Let's also say that this player is going to a new team, so we can take the team and manager out of the equation. Don't use his leverage adjusted WAR to project his leverage adjusted WAR in the future! Use his non-adjusted WAR and then adjust it using 2.0 and not 1.7!
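A minimal sketch of that procedure, taking the average-closer LI of 2.0 from the comment and treating the projection inputs as hypothetical:

```python
def project_leverage_adjusted_war(projected_neutral_war: float, expected_li: float) -> float:
    """Project context-neutral WAR first, then apply the half-leverage
    adjustment using the role the reliever is EXPECTED to fill,
    not the leverage he happened to see last season."""
    return projected_neutral_war * (1.0 + expected_li) / 2.0


# Hypothetical reliever: his average LI was 1.7 last year, but his new team
# will use him as a standard closer, so we adjust with 2.0 rather than 1.7.
projected_neutral_war = 1.2  # hypothetical context-neutral projection
print(project_leverage_adjusted_war(projected_neutral_war, expected_li=2.0))  # 1.8
```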
ericmvan
1/28
Another caveat with using WPA is that it can be extraordinarily sensitive to defensive support. One might, for instance, wonder how Koji Uehara failed to lead MLB in reliever WPA. The answer is that on July 6, Brandon Snyder, playing 3B, had an absolutely trivial game-ending force-out at second and threw the ball into CF, allowing the tying run to score and costing Uehara 0.5 WPA. So there went 11% of his season WPA, lost in 0.4% of his batters faced. Who knows who else suffered a similar fate -- or benefited from a play that turned a meltdown into a shutdown?

Ideally we'd have a WPA that divided credit and blame between the pitcher and defenders on every play, using UZR's methodology. And yes, failing that, there is and has always been an argument for using errors in assigning WPA. A crude binary and unidirectional implementation of our preferred methodology is still better than no adjustment at all.
lichtman
1/28
Right, even Pizza mentions this. It is like using RA9 rather than FIP in WAR. RA9 and WPA include things the pitcher has little to no control over, like defense, luck, and sequencing.
lichtman
1/28
WPA is an interesting stat. If you want to use it, or some derivation of it for a retrospective award, it is fine. For anything prospective, like a projection, it is terrible. It is not biased though, other than defense (which makes it biased I guess!), although, like Eric says, you can try and adjust it for defense.
ericmvan
1/29
WPA without adjusting for defense (for hitters as well as pitchers) is too noisy to be very useful, but an adjusted WPA would be a great tool for prospective estimates of situational performance differences, which we know are real.
ericmvan
1/29
I just remembered this: John Farrell had a great comment at last week's Boston SABR chapter meeting that underscores how wrong it is to believe that pitching the 9th is exactly like pitching the 7th or 8th.

If you're down by 3 runs, a 2-run rally in the bottom of the 8th is worth +.078 WPA. In the 9th inning, it's worth -.047.* That suggests that there ought to be a significant difference in hitters' approaches. And some pitchers will be better suited to facing hitters as their approach changes.

Dan Brooks had earlier showed a chart that showed a huge 2013 increase in Koji Uehara's use of the splitter, which has always easily been his best pitch. Farrell was asked about that, and said that Uehara has a great sense of hitters' approaches, and that hitters are more aggressive in the 9th, which allowed him to throw the splitter more: a hitter who might lay off it in the 8th is more likely to chase it in the 9th.

There's always more to this game than we think there is. (Jim Rice's career OPS by inning, starting with the 5th: 935, 871, 854, 798, 733.)

*There's arguably a flaw in the logic of WPA for successful 9th-inning rallies. If you come back from 4 or 5 run downs, the hits and walks early in the rally get almost no WPA compared to the tying and winning hits, yet their success is ultimately just as necessary. The solution would be to take some portion (maybe all) of the total WPA of the rally and redistribute it according to changes in RE. That reflects the reality of the rally being an all-or-nothing affair, something which every participant understands and which greatly affects strategy on both sides of the ball.
TangoTiger1
1/29
Unfortunately Eric, the logic won't hold. I had this discussion ten years ago, and it'll break down eventually if you follow that line of reasoning.

If you want to have this discussion, please start a thread on my forum. I don't want to get into a tangent here.

http://tangotiger.com/index.php/boards/viewforum/2/
drawbb
2/03
I'm sorry, I don't understand how any of those WPA figures you quoted could be correct. How could a rally be worth negative WPA?