While a ton of e-ink was spilled in my last post, perhaps the most succinct thought on PP prediction came from good friend of the blog, holiday park.
“The basic problem is that relatively little time in the game is spent on special teams, and goals are only scored in a fraction of those times: the result is that teams don’t vary that much in either PP or PK efficiency. Insofar as any correlation-based method of analysis is about explaining variation in one variable in terms of variation in another, you’re kind of stuck if your dependent variable doesn’t vary all that much.”
To make sure both holiday park and I aren’t full of shit, let’s apply the correlation findings from that article to the 2012-2013 season, which wasn’t included in my previous analysis.
Methods
Let’s “pretend” we are starting from the half-way point of the 12-13 season, when all teams have played approximately 24 games. We take the stats that best predict PP success and make our best prediction of the year-end GF/60. We then wait a theoretical half season, compare our 1st-half predictions with the 2nd-half results, and see how we did.
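The “regressed 33% to the mean” step used throughout is simple enough to sketch in a few lines. A minimal illustration (the team values here are hypothetical, not pulled from Table 1):

```python
def regress_to_mean(observed, league_mean, amount=0.33):
    """Pull an observed rate `amount` of the way back toward the league mean."""
    return (1 - amount) * observed + amount * league_mean

# Hypothetical example: a team posting 95.0 FF/60 in the first half, with a
# league average of 90.0, gets credited with a rate two-thirds of the way
# from the mean to its observed value (roughly 93.35).
regressed = regress_to_mean(95.0, 90.0)
```

The idea is that half a season of FF/60 is part signal, part noise, so our best guess for the second half sits between what a team did and what an average team does.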
Misses and shots were corrected for scorer bias as indicated in the previous study.
Results
Table 1 below shows FF/60 regressed 33% to the mean at the half-way point, FF/60 regressed 33% to the mean for the 2nd half of the season, and GF/60 and Pts for the 2nd half.
We compare our first-half regressed FF/60 with our dependent variables of interest using correlations, to show how predictive FF/60 was.
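The comparison itself is just a Pearson correlation between the first-half predictor and the second-half outcome. A sketch with made-up numbers (the real values live in Table 1):

```python
import numpy as np

# Hypothetical regressed first-half FF/60 and observed second-half GF/60
# for five teams; these values are illustrative only.
regressed_ff60 = np.array([95.0, 88.0, 102.0, 91.0, 85.0])
second_half_gf60 = np.array([6.1, 5.4, 7.0, 5.8, 5.2])

# Pearson r: how strongly first-half possession rates track
# second-half scoring rates.
r = np.corrcoef(regressed_ff60, second_half_gf60)[0, 1]
```

A value near 1 would mean first-half shot generation told us almost everything about second-half scoring; a value near 0 would mean it told us nothing.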
Lastly, a table of 12-13 results for all games, 1-48. All data 5v4, non-empty net.
Discussion
A little disheartening, but not entirely surprising. Our ability to predict PP success was terrible. Using the best predictor of PP success, we came out with a correlation between regressed FF/60 and GF/60 of basically 0. Why? Because Sh% dominates GF/60 over such a small sample of 24 games. We previously showed that Sh% is almost entirely random, and therefore we basically have no capacity to predict GF/60 in 5v4 play.
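To see why random Sh% swamps everything, here’s a toy simulation (every number in it is an assumption for illustration, not data from this article): 30 teams with identical shooting talent each take 200 5v4 shots per half-season. Any half-to-half correlation in their goal totals is pure luck.

```python
import numpy as np

rng = np.random.default_rng(2013)

# Assumed inputs: 30 teams, 200 5v4 shots per half-season, and a
# shared true Sh% of 6% -- i.e., zero real talent differences.
n_teams, shots_per_half, true_sh = 30, 200, 0.06
goals_first = rng.binomial(shots_per_half, true_sh, size=n_teams)
goals_second = rng.binomial(shots_per_half, true_sh, size=n_teams)

# With identical true talent, a "hot" first half carries no information
# about the second, so r just bounces around 0 from seed to seed.
r = np.corrcoef(goals_first, goals_second)[0, 1]
```

Over only ~200 shots, binomial noise alone spreads observed Sh% across several percentage points, which is exactly the spread that dominates 24-game GF/60.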
To drive this home, and I fucking hate doing this, let’s look at the top 5 teams by shooting percentage over the first half of the year and see how they did in the 2nd half.
As a whole, they regressed more than we predicted: we expected them to sustain roughly 5-7%, but over the 2nd half they actually shot below league average. That much variance is entirely expected, though, given that I’ve selected only 5 teams and we are only using 24 games.
For whatever reason, writing this article brings to mind all the posts, leading up to the playoffs and just after, that compare PP/PK success between teams. From this and the previous article, we can conclusively say that using raw PP/PK numbers to substantiate any such argument is basically blatant lying. Unless we start applying stats that have real reliability, we aren’t likely to gain much ground.
Conclusion
Given the above, the model performed badly at predicting GF/60 from the half-way point last season. At best, we can assume that FF/60 is probably driving PP success, but over a small sample (24 games) it is only marginally reliable. The biggest issue is that GF/60 is heavily dominated by shooting percentage, which we showed last post to be almost entirely random.
Our analysis here showed a much lower correlation than expected: basically 0 vs. the anticipated 0.34. I still expect the 13-14 season to show at least a modest correlation (around 0.34) between regressed Fenwick For/60 and GF/60. Regardless, predicting PP success (and, given its even lower correlation coefficients, PK success) is difficult, if not impossible. Laugh at your friends who speak of teams as “strong” or “elite” citing a top PP%.