Today, John Tortorella had a question about power plays.
It’s certainly a reasonable thing to wonder. It also seems like the kind of thing that, you know, your analytics person or department could quite easily figure out. But hey, the “let those Twitter guys figure it out” approach works as well I guess.
Anyway, I took the bait, partly because I am a power play fiend and partly because I’ve become more and more interested in faceoff impact. So I figured out the numbers. It’s important to note that NHL scorekeeping can be sketchy: each scorer may have a different definition of a faceoff win, which can lead to some problems. But at this point, it’s what we have, so let’s take a look.
Over at Hockey Graphs, I discussed the NHL’s faceoff rule change for the 2015-2016 season and whether it has thus far had the intended effect on league goal scoring.
Chart courtesy of @kikkerlaika
Regression is a dangerous word.
That’s especially true because, with the increase in the popularity of measures like PDO, fans have become prone to yelling the term in a (figurative) crowded theatre and then running away. Regression is the beginning of the discussion, not the end. Teams don’t all regress to the same values, or at the same rate. Basically, tread with caution.
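Since PDO keeps coming up, a minimal sketch of what it actually measures may help. All of the numbers below are hypothetical; the point is just that PDO is on-ice shooting percentage plus on-ice save percentage, and league-wide it must average 100, which is why extreme values tend to drift back toward it.

```python
def pdo(goals_for, shots_for, goals_against, shots_against):
    """PDO = on-ice shooting percentage + on-ice save percentage, scaled to 100.

    League-wide this must average 100, which is why extreme team values
    tend to regress toward it -- though, as noted above, not every team
    regresses to exactly 100, or at the same rate.
    """
    shooting_pct = goals_for / shots_for            # e.g. 0.100
    save_pct = 1 - goals_against / shots_against    # e.g. 0.935
    return 100 * (shooting_pct + save_pct)

# A team running hot early in the season (hypothetical numbers):
# 10.0% shooting plus a .935 save percentage.
print(round(pdo(30, 300, 20, 310), 1))  # prints 103.5
```

A team sitting at 103.5 is more likely to be riding good percentages than to be sustainably that good, but that observation starts the analysis rather than finishing it.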
I wrote about the Edmonton Oilers, the Chicago Cubs and an imbalanced draft strategy over at Hockey Graphs. You can read that here.
Corsi has a lot of flaws. First, it’s not an accurate measure of possession. Corsi is just shot attempts, so it doesn’t actually measure how often a team has the puck on its stick, its time in the offensive zone, or anything else like that. Second, not all shots are created equal: Corsi treats a feeble wrister from the point with no traffic the same as a point-blank one-timer from the slot. Finally, it doesn’t take into account compete level or chemistry. I’m not sure why people try to use Corsi to evaluate teams.
I have come up with a far superior way to evaluate them. I call it Inceptum. Inceptum is a little difficult to explain, but the important thing is that it does a good job of predicting what will happen over the rest of the season. So if you want to know whether the team you support is as good as (or better than) its record, don’t look at goal differential, and don’t look at Corsi, which isn’t even real possession. Look at Inceptum, which has been shown to predict rest-of-year results better than any box-score measure.
Goal differential after 10 games, for example, explains 23% of the variance in end-of-season goal differential, while Inceptum explains 32%! My metric is certainly not perfect, and one always has to take into account contextual factors and the eye test (luck plays a big role as well), but it’s one of the best evaluative tools we now have.
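For readers unfamiliar with "variance explained": it is just the squared Pearson correlation between the early-season number and the final number. A sketch with entirely made-up values, only to show the mechanics of the 23%-style figure above (Inceptum's own formula is, of course, not shown here):

```python
def r_squared(xs, ys):
    """Squared Pearson correlation: the share of variance in ys
    'explained' by a linear relationship with xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov ** 2 / (var_x * var_y)

# Hypothetical: goal differential after 10 games for six teams,
# versus their end-of-season goal differential.
early = [8, -3, 5, 0, -6, 2]
final = [35, -10, 12, 4, -20, 15]
print(round(r_squared(early, final), 2))
```

With real league data the early-season signal is far noisier than in this toy sample, which is exactly why the 23% figure for goal differential is as low as it is.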
I don’t know if we’ll ever see a power play quite like that of this decade’s Washington Capitals. We can’t attach a firm end date to it, because at this rate it could extend to the end of Alex Ovechkin’s career, but we know its peak began with the hiring of Adam Oates as Caps head coach back in 2012. Oates had run a successful 1-3-1 power play for the New Jersey Devils with Ilya Kovalchuk as his trigger-man, but even that came nowhere close to the heights he achieved with the man advantage in his two seasons in DC. Barry Trotz, to his credit, has kept the same formation (what’s that old adage about things that ain’t broke?) with only minor tweaks, and last year the power play continued to succeed.
Now there’s a lot to discuss about the formation and its success (I like to think of the Caps’ PP as a work of art more than anything else), but for the sake of this post I’m going to focus on Alex Ovechkin. Never has there been a more criticized future first-ballot Hall of Famer, nor arguably a more controversial elite goal scorer. It should already be a given that Ovechkin is the best power play goal scorer of all time: he sits fifth overall in power play goals per game despite playing in a significantly lower-scoring era than predecessors like Mike Bossy and Mario Lemieux. I would argue that by the time he retires, he will also likely be the greatest goal scorer of all time, period. Recently, in the latter stages of Ovechkin’s goal scoring peak, the man advantage has been the sniper’s bread and butter. Since Oates brought the 1-3-1 to town, Ovi has scored 48% of his goals on the power play, compared to 33% before that. He scored 25 power play goals last year, six ahead of the next-highest total, Joe Pavelski’s 19. You have to go back another five goals to reach the player in third, Claude Giroux with 14, which indicates how great a season the Sharks’ center/winger had, but that’s a story for another day.
I was thinking today about the skills it takes to analyze hockey properly, and it took me back to classroom learning. As somebody who hated memorization, it was always a relief when a teacher explained that we didn’t need to know something specific for the test. Providing a periodic table, or a t-table, or allowing us to write our own “cheat sheets” for a test felt like a measure of sympathy from the professor: “I know this stuff is hard as hell; I’ll cut you a break and relieve you of a little studying.”
The truth, though, is that allowing the use of these materials, or going as far as holding open-book tests, has practical legs. In the real world, whether in science or math (really, in most fields), it is more important to be able to find and interpret information than to know it offhand. If you need to perform a chi-squared test, for example, in the internet age you can find the procedure very easily. The more experience you have in the field, the less you will need to rely on guides to perform such calculations, but until that point there is no need to memorize information that can easily be found.
This is the fifth part of a five part series. Check out Part 1, Part 2, Part 3, Part 4 here. You can view the series both at Hockey-Graphs.com and APHockey.net.
To quickly recap what I’ve covered in the first four parts of this series, I have updated the work that’s been done on Pythagorean Expectations in hockey, and am looking to find out whether teams that have the best lead-protecting players are able to outperform those expectations consistently.
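For reference, the standard Pythagorean form looks like the sketch below. The exponent and the points conversion here are illustrative assumptions, not necessarily the values fitted earlier in the series, and real NHL point totals also need adjusting for overtime loser points.

```python
def pythagorean_points(goals_for, goals_against, exponent=2.0, games=82):
    """Bill James-style Pythagorean expectation applied to hockey.

    The exponent is an assumption here: published hockey fits tend to land
    near 2, but the exact value used in this series isn't restated in the
    recap above.
    """
    win_pct = goals_for ** exponent / (
        goals_for ** exponent + goals_against ** exponent
    )
    # Rough conversion to standings points: 2 points per expected win,
    # ignoring the loser-point adjustment discussed elsewhere in the series.
    return 2 * games * win_pct

# Hypothetical season line: 240 goals for, 210 against.
print(round(pythagorean_points(240, 210), 1))  # prints 92.9
```

A team whose actual (adjusted) point total sits well above this expectation is the kind of team this series is probing: lucky, or genuinely good at closing out tight games?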
The first step is to figure out how to assess a player’s ability to protect leads. To do this, for every season I isolated every player’s Corsi Against/60, Scoring Chances Against/60, Expected Goals Against/60 (courtesy of War-On-Ice) and Goals Against/60 when up a goal at even strength. I then found a team’s lead-protecting ability for the year in question by weighting those statistics for each player by the amount of ice time they wound up playing that year. Players who didn’t meet a certain ice-time threshold were given what I felt was a decent approximation of replacement-level ability. For example, here is the expected lead-protecting performance of the 2014-2015 Anaheim Ducks in each of those categories.
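The weighting step can be sketched roughly like this. The stat values, the ice-time threshold, and the replacement-level value below are all hypothetical placeholders, not the actual cutoffs used in the series:

```python
def team_lead_protection(players, stat="ca60", min_toi=200.0, replacement=60.0):
    """TOI-weighted team average of one lead-protecting stat
    (e.g. Corsi Against/60 while up a goal at even strength).

    Players below the ice-time threshold get a replacement-level value;
    both the 200-minute cutoff and the 60.0 replacement CA/60 are
    illustrative placeholders.
    """
    total_toi = sum(p["toi"] for p in players)
    weighted = sum(
        p["toi"] * (p[stat] if p["toi"] >= min_toi else replacement)
        for p in players
    )
    return weighted / total_toi

# Toy two-player roster: one regular, one player under the threshold.
roster = [
    {"name": "Top-pair D", "toi": 1400.0, "ca60": 52.0},
    {"name": "Depth F", "toi": 150.0, "ca60": 48.0},  # replaced by 60.0
]
print(round(team_lead_protection(roster), 1))  # prints 52.8
```

The same weighting would be repeated for each of the four stats (CA/60, SCA/60, xGA/60, GA/60) to produce a team-level expectation like the Ducks example above.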
This is the fourth part of a five part series. Check out Part 1, Part 2, Part 3, Part 5 here. You can view the series both at Hockey-Graphs.com and APHockey.net.
So now, four parts into this five-part series, is probably a good time to discuss my original hypothesis and why I started this study. As I mentioned in my previous post, baseball has already gone through its Microscope Phase of analytics, in which every broadly accepted early claim was put to the test to see whether it held up to strict scrutiny, and whether there were ways of adding nuance and complexity to each theory for more practical purposes. One of the first discoveries of this period was that teams outperforming their Pythagorean expectation could be a sustainable talent, to an extent. Some would still argue that the impact is minimal, but it’s difficult to argue that it’s not there. What is this sustainable talent? Bullpens. Teams that have the best relievers, particularly closers, are more likely to win close games than those that don’t. One estimate I’ve heard put the impact at roughly 1 win per season above expectations for teams with elite closers. That’s still not a lot, but it’s significant. My question is: does such a thing exist in hockey?
Of course, there are no “closers,” strictly speaking. But there are players who close out games more often than others, and there are players whom coaches trust in late game lead-protecting situations more than others. Does a team with players who thrive in such situations have a higher chance of exceeding their expectation?
This is the third part of a five part series. Check out Part 1, Part 2, Part 4, Part 5 here. You can view the series both at Hockey-Graphs.com and APHockey.net.
Since the last post was getting a little long, I decided to hold off on releasing the full Pythagorean results. At the link you will find a table of every team since the lost season, sorted by the difference between its adjusted point total and its Pythagorean expectation. Essentially, the teams with the highest numbers in the right-most column are likely to have been the most fortunate, and those at the bottom were possibly unlucky. If you look at the 2014-2015 results below, you will see which teams should be a little worried about their chances, and which may be ready for a rebound. Tomorrow, I will address what the point of this whole study was, and we’ll look at some more data.