AP Hockey Story of the Day: December 16 – Might four-man defensive units work?

The NBA’s Sacramento Kings, who recently fired their head coach despite a not-horrific start by their standards, are rumored to be looking to try a new system that, were it successful, could completely change the sport’s dynamic. You can read about it here, but essentially the Kings are looking to play a four-man zone-style defense, which would involve one player consistently focusing on offense – a cherry-picker, if you will. Now, as the article points out, the reality of the situation may mean a five-man defensive unit with one man designated as a breakout player who immediately sprints up the court whenever the opposition takes a shot. It’s a very interesting idea, and it should be noted that this isn’t the first time the Kings have done something innovative.

So how does this apply to hockey? And what are my thoughts on it? Well, my first impression is that a set-up like this would involve a four-on-four at the defensive end (since surely the opposition would adjust and stick a player on the cherry-picker) and a five-on-five at the offensive end. At least in hockey (and I would imagine in basketball as well), offense is easier to generate with the extra room at four-on-four, so it would seem like all the team is doing is making it easier for the opposition to score, while the only advantage would be potentially getting some one-on-ones up the court on the breakout. We can’t know for sure without seeing it, though, in either sport.

The big question this presents, though, is why NHL teams don’t use their farm clubs more to test set-ups like this and see what works. Whether it’s a four-man defense, or playing with 3D and 2F, or playing with “midfielders”, or whatever else, we can’t know what might change the game until we try it. Somebody’s gotta be the first mover. That’s what Sacramento is trying to be, as the Kings look for any possible advantage after years of poor performance. After all, they can’t be much worse.

AP Hockey Story of the Day: December 9 – Birnbaum, Tango, and Shot Quality Revisited Again Again

Phil Birnbaum, who is one of the great non-hockey analytics writers out there, has taken a number of stabs at the shot quality question in hockey over the last couple of years, and today weighed in on the Tom Tango controversy.

The issue – whether to weight goals significantly higher than other shots in corsi analysis – is one I’ve stayed relatively quiet on, and that’s simply because, like any good jury, I want to see all the evidence presented before coming to a decision. We’re still not at the point where I’m totally confident evaluating the worth of Tango’s statistic, or the merits of shot quality overall, but I think that’s partially because the answer depends on the question we’re trying to answer. Does shot quality matter? Absolutely. Does it render large-sample shot differential metrics useless? Nope. Can it be used to improve on what we have? I think it can.

As Birnbaum has often pointed out, we know that shot quality impacts shooting percentages because we see it in score effects. Is it possible a team could play a system in which it more resembled a team down a goal than a team in a tied state, thus impacting shot differentials and shooting percentages? It’s possible, although it’s important to note that there are psychological factors involved in score effects, as well as the other team playing a certain way. It’s not just one team that impacts them.

It also seems quite plausible that teams make the conscious choice to forego shot attempts in order to try for better shots. I think the Ducks are a team that has done this the past couple of years, and the Leafs may be as well. Those changes aren’t enough to undermine the league-wide idea that more shots = more goals, but on a team level they could matter. This is where the sniff test comes into play. We may not have the statistics to prove such decision-making exists, but that just means we have to try harder to find them.
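For readers who haven’t followed the debate closely, the statistic in question amounts to a weighted shot differential: goals count for more than other attempts. A minimal sketch of the idea follows – note that the weight of 4 is my own illustrative assumption, and the “right” weight (or whether one exists) is exactly what’s being contested.

```python
def weighted_shots(goals, other_attempts, goal_weight=4.0):
    """Weighted shot total: goals count for more than other attempts.

    goal_weight is an illustrative assumption; the appropriate value
    is precisely what the Tango/Birnbaum debate is about.
    """
    return goal_weight * goals + other_attempts


# Team A: 3 goals on 50 attempts; Team B: 1 goal on 60 attempts.
# Raw corsi favours Team B, the weighted version still does here,
# but a larger goal weight would flip the comparison.
print(weighted_shots(3, 47))  # 59.0
print(weighted_shots(1, 59))  # 63.0
```

The entire controversy is whether the extra weight on goals adds signal (shot quality) or just noise (shooting-percentage variance).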

So where does this leave us? Ultimately, dealing with goal data is still very difficult because the variance involved in goaltending and shooting combines into something that tells us very little reliably. But it’s time the discussion shifted from the dismissive “we’ve done a regression and shown that shot quality doesn’t impact the numbers much, so it’s not worth pursuing” to “what can we do to limit the variance involved and get meaningful data out of teams’ ability to convert shots into goals?”. I don’t think the Tango statistic adds all that much – although I’m waiting to see a version that accounts for score effects – but I also think that’s because it’s so basic and doesn’t really do anything to account for the variance involved in goal scoring.

What’s next? Without tracking data, I’m not really sure. But I definitely wouldn’t want to be somebody staking their reputation or career prospects on how good a team is or isn’t based on corsi in cases where there’s the possibility that system effects or even a changing environment based on shots and carry-ins becoming more of a policy target (see Goodhart’s law) are skewing those numbers and failing to give a truly accurate representation of a team’s even-strength play.

Birnbaum is somebody I will be following closely through all this. As I will everybody else who is refusing to accept “shot quality doesn’t make much difference” as proven fact. After all, absence of evidence does not equal evidence of absence.

AP Hockey Story of the Day: December 1 – “We can’t get much worse”

DC’s own Tony Kornheiser, of Pardon The Interruption fame, came out yesterday and said that he feels the best next step for the dysfunctional Washington Redskins is to embrace analytics, to try and do to football what Billy Beane did for baseball. His reasoning is one which I think should be used to convince front office personnel more often, even if it’s a risky approach: “How much worse could it get?”

If you’re a team in any sport that hasn’t made the playoffs in a number of years and doesn’t exactly look poised to take the world by storm, isn’t it worth doing something unconventional to turn the tide (non-McDavid-year category)?

Take the Florida Panthers, for example, or the Carolina Hurricanes, or the Edmonton Oilers. You haven’t made the playoffs in a while (at least in a full season), so it’s not as if a radical change of approach is going to hurt your brand at this point. What is holding you back from going all in (and I mean all in) on analytics?

I’ve always said that analytics is an organizational attitude more than it is a single system or a single metric. Going all in on analytics doesn’t mean firing your scouting department or building an altar to the corsi gods. It means scrutinizing every decision you make, seeking information through data wherever it can be found, and trying to gain an edge in a very competitive environment with so much parity.

So yes, the Redskins should go all in on analytics. So should every other team, but for a team that’s been woeful for so long, there’s really no risk or downside. Worst case, you’re still bad but you were probably going to be bad anyway. Best case, you win championships, and maybe even change the game.

AP Hockey Story of the Day: November 18 – On the Defensive Shell

Garik16 from Hockey Graphs, Lighthouse Hockey, and Islander Analytics wrote a good piece today on the defensive shell (a topic that’s been on my list to address for a while), following up on David Johnson’s initial look into the subject a couple of years ago. I highly encourage you to read both stories, but the general conclusion from today was that the shell doesn’t actually help a team because the opponent’s scoring rate – what you’re trying to minimize – actually increases. I had a couple of thoughts on the issue, because while the material is interesting, I don’t necessarily agree with the conclusions.

First of all, it’s important to note that score effects are the result of a combination of four very different forces (more on this in a future post).

1. Players naturally playing harder/more aggressively when trailing

2. Coaches coaching trailing players to push ahead and take risks

3. Players naturally being risk averse and not going their hardest when leading

4. Coaches coaching leading players to make the safe play (ie a contain or prevent defense)

Generally speaking, items 2 and 4 are the ones a coach can impact. A coach can tell his players to keep pushing while up a goal late in the third period (coaches generally claim that they do, and players often echo this notion), but items 1 and 3 will still naturally produce score effects. Therefore, the idea that getting hemmed into one’s own zone while leading late is simply a poor strategy that should be discarded – like the 1-3-1 forecheck or the overload power play – is misguided.

Of course, there IS an element that is coaching-driven, but the problem you run into here is that the impact of such strategizing will tend to get swallowed in heaps of variance, since you’re adjusting one of the four factors, but the other three stay the same (the other team won’t, for example, suddenly agree to not take chances since you’re still pushing ahead).

So when I see something like this graph, where the author is using shooting percentage Down 1 as a proxy for results against a defensive shell, I worry that the intended effect isn’t being isolated.

[Graph: shooting percentage when Down 1 vs. Tied, used as a proxy for results against a defensive shell]

We know that the shooting percentage on average Down 1 is greater than Tied, but if one removed items 1, 2 and 3, leaving only the strategized intentional defensive shell, would that still be the case? Maybe, but we can’t say that from just this data.

AC Thomas’ more precise graphs on shooting percentage as the game goes on don’t rid this analysis of confounding variables, either.

[Graph: AC Thomas’ shooting percentage by score state as the game progresses]

Sure, the shooting percentage when trailing is shown to be higher than that with the score tied right up until the end of regulation. But there’s a pretty logical reason for that as well. Teams tied near the end of regulation are playing for overtime. They are far more likely to dump shots from the blue line than to pinch to create opportunities. And both teams are far more worried about not allowing a goal against than scoring a goal for. There is massive loss aversion in play here; shooting percentage is bound to drop. You can’t compare the two situations because looming overtime (or more specifically the loser point) is a confounding variable.

So yes, scoring rates are higher against teams protecting leads than against teams late in tie games, but that doesn’t mean the defensive shell, as a strategic maneuver, is responsible. Impending overtime, as well as natural factors that are difficult for coaches to control, could just as easily be responsible.

On Russell Martin and the Differences Between Evaluating Baseball and Hockey Contracts

Today, the Toronto Blue Jays signed Canadian catcher Russell Martin to a huge five-year, $82 million contract. I’m no expert on baseball analytics, but I know enough to find concepts to apply to hockey where possible. I have, however, seen many people making fun of this deal as a massive overpayment. They’ve called it “McCann Money”, the implication being that Martin is by no means the player Brian McCann is. Now, I don’t know how big the gap between the two players is, and frankly I have neither the time nor the will to find out, but there is one factor fans – especially those who primarily follow hockey and thus appear on my feed – may not be taking into account that I’d like to address.

Hockey has its similarities and differences to baseball, and one of the most important differences is the salary cap. Hockey’s hard cap is an important differentiating factor because it levels the playing field amongst teams, for one, but it also changes contract evaluations because the deal has to be evaluated (at least for cap teams) based on whether it brings the best possible return for the team in a $70 million world.

Baseball is different not just because the teams spend different amounts of money, but because the amounts teams spend are variable, and can be influenced by the very return they procure from that money. A successful team can lead to playoff revenue which means the team can afford a higher payroll. A successful and marketable player can lead to additional ticket and concession sales, which again leads to more available cash.

With Martin this is particularly important because he’s a Canadian boy. More than that, he’s a Canadian star. Russell Martin may not be Brian McCann, but by catcher standards, he’s a star. And it’s not like the Jays are crawling with stars. Martin will undoubtedly very rapidly take over the team lead in jersey sales, will be a poster boy, will attract Canadians to the ballpark, will do advertisements. That all brings revenue to the team, which helps offset the cost of his contract in a way that, say, the Montreal Canadiens signing Daniel Briere never could. The Canadiens didn’t get to spend to a higher cap because Briere brought in additional revenue, but the Jays can.

So this is a caution – without too much in-depth knowledge of the situation – to Toronto fans who may be used to the hockey way of thinking. There’s more to contracts in baseball, and ultimately, whether or not Martin is a $15 million/year player, this contract will probably prove fruitful for the franchise.

AP Hockey Story of the Day: November 13 – Building An Analytics Team

This piece from Trey Causey is absolutely spot on. If you’re involved in an organization in any sport, you need to give this to your President/GM/Owner. This is how analytics will help your team win, and luckily, you have a major first mover advantage – especially in something like hockey – because while teams are now using analytics, nobody is using it quite like this yet.

One point I’ll expand on quickly when it comes to hockey is the idea of time horizons. Coaches tend to worry more about immediate payoffs than GMs, because their jobs are more likely to be in immediate jeopardy if the wins don’t come. But that isn’t the way it should be. Coaches need to understand and employ analytics, but they also need to be given assurances that they will be judged based on process, rather than results. At least in the near term. All parts of the organization need to be moving in the same direction, and only then can output be optimized. If a GM isn’t willing to take that approach with a certain coach, then hire a coach with whom you’re comfortable enough to do so.

Why Splitting Back-To-Backs Between Goalies May Not Always Be The Right Call

There’s an important difference between always taking the middle ground in an argument and recognizing nuance where many find none. Analytics are a case in which it is important to remember – whether the issue is corsi, or PDO, or fighting, or anything else – that because of the imperfection of our metrics, our limited understanding of the psychological factors at play, and our limited understanding of just what goes on behind closed doors, what the numbers tell you isn’t always entirely accurate. This nuance is something I’ve tried to emphasize with this blog over the past few months, and will continue to push. There isn’t a middle ground just because somebody says there should be…there’s a middle ground because of the number of factors in play that simply haven’t been taken into account by any model we have at our disposal right now.

Recently, there’s been some criticism of Boston Bruins coach Claude Julien for his decision to play Tuukka Rask on both halves of back-to-backs – he’s done so twice already this year, and may do so again tonight and tomorrow night against division foes. It’s easy to look at something like goaltending performance in back-to-backs, and say “case closed.” But it’s important to scrutinize every claim that is made, whether it’s by an NHL GM, a coach, a journalist, or a statistician. Eric Tulsky, obviously one of the best in the business, found in 2012 that over a sample of two seasons, goaltenders see a drop of about 1% in save percentage when starting a second game in two nights – at least versus starting a fresh goaltender in that second game in 48 hours. Here’s the critical table from that piece at Broad Street Hockey.

[Table: save percentage for goalies starting the second game of a back-to-back vs. fresh goalies, from Tulsky’s Broad Street Hockey piece]

It’s not unreasonable to believe, from this evidence, that on average starting a fresh goalie in the second half of a back-to-back leads to a save percentage about 1% higher. It’s important, though, not to extrapolate as far as to say that starting the fresh goalie is always the right call in these situations. To show why, I compiled what I felt to be a good estimate of each NHL goaltender’s current true-talent save percentage. To do this, I took their complete numbers – regular season and playoffs – since the 2011-12 NHL season. That way, the sample size (for the majority) is still large enough, but it’s not confounded by numbers from a decade ago that may or may not have relevance. A Marcel model may have been preferable, but I felt this was a pretty good estimate. For goalies that didn’t have at least 40 NHL games in that span, and that had at least twice as many AHL games as NHL games, I used Stephan Cooper’s AHL-to-NHL save percentage translation (approximately 8% as of 2012) to find the best guess for their true-talent NHL save percentage. Adjusted AHL numbers are used for the goalies marked with an “*”.

[Table: estimated true-talent save percentage for each NHL team’s current starter and backup, with the difference in the far-right column]

The column to the far right is the difference between the estimated true-talent save percentage of the team’s current starter (based on games played this year) and backup. As you can see, slightly more than half of the teams have a difference of greater than 1%. For a team like New Jersey, for example, starting Kinkaid – rested – over Schneider – tired – on the second half of a back-to-back would mean losing ~1.5% on the team’s save percentage for that game. So it’s certainly not as easy a call for every team as it is for, say, the Flyers going from Mason to Emery.
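To make the arithmetic concrete, here’s a minimal sketch of that comparison. The 1% penalty is the league-average figure from Tulsky’s study; the save percentages are illustrative assumptions meant to mimic a Schneider/Kinkaid-sized gap and a Mason/Emery-sized gap, not values pulled from the table.

```python
# Which goalie projects better in game two of a back-to-back?
# All save percentages below are illustrative assumptions.

FATIGUE_PENALTY = 0.010  # ~1% average drop for a goalie playing both halves


def better_option(starter_sv, backup_sv, penalty=FATIGUE_PENALTY):
    """Compare the tired starter's projected save percentage to the rested backup's."""
    return "starter" if starter_sv - penalty > backup_sv else "backup"


# Large talent gap (~1.5%): the tired starter still projects better.
print(better_option(0.925, 0.910))  # starter
# Small talent gap: the rested backup projects better.
print(better_option(0.918, 0.912))  # backup
```

The point of the sketch is simply that when the starter-backup talent gap exceeds the fatigue penalty, the league-average advice flips.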

Now obviously there are other factors to take into account. If you assume that a backup needs to play 20 games in a season, it makes more sense to have him play during back-to-backs, because the drop-off in save percentage will be greater if you’re sitting a rested starter. But if Alain Vigneault feels his team really needs a slump-busting win one Saturday night, and that the loss in overall save percentage from starting Lundqvist now rather than a game down the road is worth it for team morale and momentum, then it’s not necessarily the wrong move. Or, even more obviously, say Colorado is playing a back-to-back to end the season and desperately needs two wins to make the playoffs. I think it’s safe to say that starting Varlamov in both of those games is the right call. Same idea for back-to-backs in the playoffs, if any came to be.

The point is, fans, statisticians (I prefer to call them/us analysts), and journalists will often take analytical findings like corsi or the idea of splitting back-to-backs – concepts that are true on average or on a league-wide level – and forget that on the micro stage, for particular teams, those results might not hold. If you don’t have team-specific information, then the right call would generally be to split the starts, but we do have that extra knowledge, so if you’re a coach or a GM, you have to put that to use.

So is Julien making the right call to play Rask two nights in a row? If he feels that wins now are more important than wins later, for some reason, then yeah. Otherwise, unless he expects his starter to see 75 starts this year, he might have been wise to play the long game and give Svedberg some action. The point is though, the decision is more complex than simply looking at a league-wide table.

Finally, since I’m sure some have interest in this, I’ve sorted the table above to find the best starting and backup goaltenders by combined save percentage over the past 3+ years. Here are the results. For AHL translations, Stalock and Johnson’s results were impacted the most by the league that was chosen.

[Tables: best starting and backup goaltenders by combined save percentage, 2011-12 to present]

AP Hockey Story of the Day: November 5 – On Skill Development as an Inefficiency

There’s a fascinating look at skill development here that reminds me of an interesting anecdote I read in Sports Illustrated a while back. When people think of sports inefficiencies these days they think of analytics, numbers, Moneyball, etc. But those are just the most prominent modern manifestations. Back in the late 19th century, Baltimore Orioles manager Ned Hanlon began bringing his team down south prior to the season to work on their skills and their ability to execute plays like the squeeze bunt and the hit and run. He had players field grounders and fly balls for hours every day, and his teams, more ready for the season than ever before, won three straight National League pennants, leading other managers to copy his practice and develop what has become known as Spring Training. Hanlon had exploited baseball’s first true inefficiency: fundamentals.

That New Yorker article from James Surowiecki talks about a similar development in basketball, almost a century later, but it leads me to wonder: how much skill development is truly being done at the NHL level? I remember people talking about how Pat Quinn was a poor fit for the young Oilers because he wasn’t a teacher (I hope I’m remembering that right; feel free to correct me). You hear about players working on their skating or shooting over the summer, but how much of that is team-driven? Are there coaches who take the time to teach young players some of the fundamentals they may have missed learning in juniors, or are those players simply benched when they make mistakes? This isn’t an issue I know much about, so I’d be curious to hear the opinions of others.

Are skill development and fundamentals inefficiencies that could still be exploited at the NHL level, or are we past that point?

Reconciling Analytics With Intangibles No Simple Matter: On Michel Therrien and NHL Coaching

My story of the day from yesterday turned into a more extended column, so I decided to post it this morning.

There has always been a conflict between the values of analytically inclined hockey people (let’s call them analysts) and old-school minds (purists). I’ve written before about how my experiences have allowed me to gain some insight into both schools of thought, and in many ways fuse them into my views on the sport, but I wanted to bring up a couple of issues with regards to the questions surrounding Michel Therrien and his coaching following an absolute drubbing — both in terms of possession numbers and the score — at the hands of the Calgary Flames. Friend of the blog Andrew Berkshire wrote a great recap of the game and criticized Therrien for the team’s possession struggles, which is justified. But some other analysts tend to simplify the game down to a variety of semi-predictive statistics without considering other circumstances. I wanted to use this situation to share some more general thoughts on the use of analytics.

First of all, there is an analytics debate, but it’s not the one that so often gets fought on message boards and social media sites. Using analytics is as much the proper way to go about maximizing a hockey team as using a trainer is the proper way to prevent and treat injuries. Not “buying into” analytics isn’t an acceptable view, because analytics aren’t something for you to “buy into” any more than, say, the colour blue or the concept of gravity. As John Oliver would say, asking whether you buy into analytics is like asking: do you buy that 15 is greater than 5? Or that owls exist? Or that there are hats? Analytics involve scrutinizing data to find trends. Not buying a certain metric or conclusion is fine. Not buying analytics means being satisfied with incomplete information. It’s not a valid position.

But there is an analytics debate, and one that will continue to be important surrounding all the numbers that have begun and will continue to be thrown out there. That debate is over which numbers have meaning, and maybe more importantly, how do you reconcile what meaningful numbers tell you with the underlying — and usually valid — truths about the sport which are as yet impossible to properly quantify?

It’s a relevant question with Therrien, because if it were up to a number of analytics writers online, coaching would involve starting the season with lines that, drawing on with-or-without-you (WOWY) numbers, would maximize past Corsi percentage, then letting the players go out and play all 82 games, making changes only if the Bayesian Corsi outlook shifted dramatically.

The problem is that such a thing isn’t practical. We’ve already seen the dangers of committing too much to maximizing principles when dealing with humans rather than material products in baseball, and in hockey, where chemistry is so important for player confidence and compete level (drink!) there is a downside to managing players like assets rather than humans.

For example, slumps can turn into massive droughts if a player begins to clutch the stick too tight, or stops going to the dirty areas. Those are material factors that hang outside the realm of variance, and sometimes players need to sit a game out or to try a fresh look with new linemates in order to catalyze the process of slump-busting.

These principles also come into effect when it comes to personnel decisions. The Astros have gotten into trouble on a reputational level by failing to promote talented youngsters in order to preserve contract years, or even by pressuring those players to sign for cheap long-term. Ryan Lambert wrote an interesting piece on why NHL teams should do something similar. While I agree that NHL teams ought to pay more attention to maximizing assets and doing data-focused cost-benefit analysis, the idea that the Buffalo Sabres should send an NHL-ready Connor McDavid back to juniors, should they earn the privilege of drafting him, is a case of failing to consider other implications.

Twenty-nine teams would start the 2015-16 season with the phenom on their roster, and in Lambert’s hypothetical the Sabres wouldn’t. What NHL prospect would ever want to get drafted by the Sabres again? I’m sure the Erie Otters would be thrilled with the decision, but how about every other team in the CHL? Suddenly, the Sabres aren’t so popular around the league that, you know, produces more than half of the future NHL talent. Another Eric Lindros moment wouldn’t be out of the question, and suddenly by trying to maximize the returns, the Sabres have managed to handicap themselves for years to come.

It’s an element of cost-benefit analysis that most analysts ignore because it is very difficult to quantify, but that doesn’t make it any less important.

So back to Therrien, who has fans asking both why he has Dale Weise on the top line and why he isn’t making more changes to the lines for a shake-up. Well, you can’t really have it both ways. Sure, the opening-night lines look like those you might want to start the playoffs with, but that doesn’t mean you can go 82 games with them. Coaches don’t get enough credit for knowing their players — their personalities, their motivations, their complexities — and how to bring the best out of them.

Now of course, this has nothing to do with Therrien’s ability to draw a good possession team out of a talented roster, it’s just a general lesson that it isn’t always as straightforward as reading (admittedly impressive and important) numbers off of a chart. Nothing, when dealing with human beings — and a game as complex as hockey — ever is.

AP Hockey Story of the Day: October 31

Yesterday I discussed a great piece on Bill James written by Joe Posnanski, somebody I read consistently and whose style I try to emulate in much of my more conceptual writing. But Joe blogged today about the most discussed play of Wednesday’s Game 7 of the World Series and made a point that I vehemently disagree with. Here’s the story.

Here’s the relevant passage:

“But my point is this: You don’t get a second choice in real life. You choose once and that’s it. And the reveal — you chose poorly — becomes the reality. And so when you look back at something that didn’t work, you now know that anything else, even the stupidest possible choice, MIGHT have worked. The only thing we know for an absolute fact is that the choice made failed.

In this case, we know how the World Series ended. It ended with Giants pitcher Madison Bumgarner entirely overmatching Royals catcher Salvador Perez, who hit a foul pop-up to end the game. That’s what happened, and it is unchangeable and, so, in the end, unjustifiable. If given the option to go back in time, the one thing you KNOW WILL NOT WORK is to let Perez hit.

It always entertains me when some coach or manager makes a move that doesn’t work and then grumps, “if I was given that exact same situation again, I’d do it again.” No you wouldn’t. It didn’t work. You’re telling me if time was reversed, and another chance was given, that Grady Little wouldn’t pull Pedro? Don Denkinger wouldn’t call Jorge Orta out? The Portland Trailblazers wouldn’t take Michael Jordan or Kevin Durant?” (bolding my own)

The bolded passage is the one I have a problem with. Sure, you know the decision to hold Gordon didn’t work out. But sport, like life, is probabilistic. There is a set percent chance that the next batter, Salvador Perez, would have found a way to drive Gordon in from third. We don’t know exactly what that percentage is – we never could – but we can make an estimate of it based on Perez’s batting numbers, Bumgarner’s pitching numbers, park effects, defense, etc. The specific percentage isn’t important for this argument; the point is that it exists.

Going into the decision of whether to send Gordon home, the third base coach has to have a vague idea of a) the percent chance of Gordon making it home safely, and b) the percent chance of Perez driving in a runner on third. In theory, if “a” is greater than “b”, you send the runner. Of course, the third base coach, Mike Jirschele, couldn’t possibly have calculated all that in a split second, but that’s the advantage of being an experienced third base coach (something that analytics folk often don’t recognize). When you are put in situations like that over and over, your instincts get better and better, and more often than not, just on feel, you’ll make the right call.

Now, it’s also possible that Jirschele didn’t take Perez into account at all – that he simply asked, “is there a greater than 50% chance that Gordon makes it home right here?” and decided the answer was no. That would mean leaving critical information out of the equation, which isn’t ideal, but it would still rely on that baseball sense and experience.
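Written out, the coach’s comparison reduces to a couple of lines. The probabilities below are invented purely for illustration; nobody knows the true values, which is exactly why the call comes down to feel and experience.

```python
# The third-base coach's split-second comparison, as probabilities.
# Both numbers are invented for illustration only.

p_gordon_scores_if_sent = 0.25  # (a) chance Gordon beats the relay home
p_perez_drives_him_in = 0.20    # (b) chance Perez ties it with Gordon held at third

# In theory: send the runner iff (a) > (b).
decision = "send" if p_gordon_scores_if_sent > p_perez_drives_him_in else "hold"
print(decision)  # "send" under these assumed numbers; flip them and it's "hold"
```

Notice that nothing in the comparison depends on what actually happened next; the decision is judged entirely on the two estimates available at the moment of the call.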

But back to probabilities. Posnanski’s assertion is that you could replay that situation hundreds of times, and each time Perez would make an out and the game would be over, making the decision to hold Gordon wrong in a vacuum. But that’s just not how sports work. Say Perez’s chance of getting a hit off Bumgarner there was .200. That’s not unrealistic; after all, Bumgarner was dealing. It means that one time in five, holding the runner, the Royals would have tied the game.

Now let’s get back to Posnanski’s statement: “If given the option to go back in time, the one thing you KNOW WILL NOT WORK is to let Perez hit.” But you don’t know that. In fact, you approximate the chance of it working at 20%. Even if somebody gave you a look into the future and showed you a vision of Perez popping up, that would only be one potential reality. In one out of every five replays, Perez would drive the runner home. So it’s flawed thinking.

If a football coach goes for it on 4th and 1 needing a touchdown and doesn’t get it that doesn’t make it the wrong call. If a hockey coach pulls his goalie on the power play down a goal with a minute left and the other team scores, that doesn’t make it the wrong call. You can’t judge the result because results are variable; process is not. If the aforementioned probability “a” had been higher than “b”, then over a long series of similar situations, Jirschele would have come out on top. We know in the playoffs there aren’t really iterated games, but that doesn’t make probabilistic modeling any less valid.
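The idea that a sound process can still lose any single game is easy to demonstrate with a quick simulation. The 55% success rate here is an arbitrary stand-in for any favourable-odds call (going for it on 4th and 1, pulling the goalie, sending the runner).

```python
import random

random.seed(42)


def one_game(p_success):
    """One playoff moment: the favourable-odds call either works or it doesn't."""
    return random.random() < p_success

# A call that succeeds 55% of the time is the right process over repetitions...
p = 0.55
trials = [one_game(p) for _ in range(100_000)]
print(sum(trials) / len(trials))  # converges toward 0.55

# ...but in any single game it still fails 45% of the time,
# and that one failure is all the result ever shows you.
```

Results are variable; process is not. The long-run rate is recoverable only by imagining the repetitions, never from the one game that was actually played.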

We don’t really know whether or not the Royals made the right call – although there is certainly evidence from many that it was the right one – but we do know that the result of the game isn’t the judge of that. The result only muddies the process.