In prior posts, we examined how accurately different analysts projected football players’ performance, finding that the average projection was more accurate than any individual analyst’s. In this post, we present a Shiny app for examining the accuracy of historical fantasy football projections. The app lets you compare accuracy across analysts, positions, seasons, league scoring settings, and types of averaging, and it includes an interactive scatterplot.
The app is located here:
How to Examine Historical Accuracy of Fantasy Football Projections
- Click “Settings”, then “General Settings”.
- Select a previous season or week (so we know how projections compared to actual performance).
- Change the league settings to tailor the projected/actual points to your league settings.
- Choose the calculation type: average (mean), weighted average, or robust average. For more info on these calculation types, see here.
- Choose the analysts to include and, if you selected a weighted average, how much to weight each analyst in the average projections.
- Click “Save Settings”.
- Click the “Accuracy” tab.
Note: there are other settings you can modify, as well. For a description of these settings, see here.
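To make the calculation types concrete, here is a minimal sketch of the three ways of combining analysts’ projections. The data and function name are hypothetical, and the median is used as one common robust-average choice; the app’s exact robust calculation may differ (see the link above).

```python
import statistics

def combine_projections(projections, method="mean", weights=None):
    """Combine one player's projections from several analysts.

    method: "mean", "weighted" (requires weights, e.g. based on
    historical accuracy), or "robust" (here, the median -- one common
    robust choice; the app's exact robust average may differ).
    """
    if method == "mean":
        return statistics.mean(projections)
    if method == "weighted":
        return sum(p * w for p, w in zip(projections, weights)) / sum(weights)
    if method == "robust":
        return statistics.median(projections)
    raise ValueError(f"unknown method: {method}")

# Three hypothetical analyst projections for one player:
projs = [250.0, 260.0, 300.0]
print(combine_projections(projs))                               # mean → 270.0
print(combine_projections(projs, "weighted", [0.5, 0.3, 0.2]))  # weighted average
print(combine_projections(projs, "robust"))                     # median
```

Note how the one high projection (300) pulls the mean up, while the median (robust average) and a weighted average that down-weights that analyst are less affected.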
The page displays two scatterplots. The top scatterplot of projected versus actual points is generated with ggplot2 and displays a LOESS smoother with a confidence interval, along with the R-squared value for a linear (not LOESS) fit.
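For readers curious what the reported statistic is, here is a minimal sketch of computing R-squared for a simple linear fit of actual on projected points. The data are hypothetical, and the function name is an illustration, not the app’s code.

```python
import numpy as np

def linear_fit_rsq(projected, actual):
    """R-squared of a simple linear regression of actual on projected points."""
    x = np.asarray(projected, dtype=float)
    y = np.asarray(actual, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)  # least-squares line
    fitted = slope * x + intercept
    ss_res = np.sum((y - fitted) ** 2)      # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)    # total sum of squares
    return 1 - ss_res / ss_tot

# Hypothetical projected vs. actual fantasy points for five players:
proj = [120, 180, 240, 300, 200]
act = [100, 190, 230, 280, 210]
print(round(linear_fit_rsq(proj, act), 3))
```

For a simple linear regression like this, R-squared equals the squared correlation between projected and actual points.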
The bottom scatterplot of projected versus actual points is interactive. You can select which positions to display via the legend, and hovering over a dot shows how many points that player was projected to score and how many he actually scored. For instance, in 2014, Robert Griffin greatly under-performed expectations, DeMarco Murray exceeded expectations, and Tom Brady scored close to expectations.
The table examines the accuracy of historical projections by position with several accuracy metrics:
- mean error (ME): closer to zero is better (positive values mean the projections are under-estimates, negative values mean the projections are over-estimates)
- root mean squared error (RMSE): lower is better
- mean absolute error (MAE): lower is better
- mean percentage error (MPE): closer to zero is better (positive values mean the projections are under-estimates, negative values mean the projections are over-estimates)
- mean absolute percentage error (MAPE): lower is better
- mean absolute scaled error (MASE): lower is better
- R-squared (RSQ): higher is better
R-squared is a measure of relative fit, whereas the other metrics are measures of absolute fit. Note: the high percentage estimates of error (MPE and MAPE) reflect the fact that a number of players scored very few points, which skews percentage-based estimates of error.
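The metrics above can be sketched in a few lines. This follows the sign convention implied by the descriptions (error = actual − projected, so positive ME/MPE means under-estimation); the data are hypothetical, and MASE and R-squared are omitted here for brevity.

```python
import math

def accuracy_metrics(projected, actual):
    """Error metrics with error = actual - projected, so positive ME/MPE
    mean the projections were under-estimates."""
    errors = [a - p for p, a in zip(projected, actual)]
    n = len(errors)
    me = sum(errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    # Percentage errors divide by actual points, so players who scored
    # very few points inflate MPE/MAPE -- the skew noted above.
    mpe = sum(e / a for e, a in zip(errors, actual)) * 100 / n
    mape = sum(abs(e / a) for e, a in zip(errors, actual)) * 100 / n
    return {"ME": me, "RMSE": rmse, "MAE": mae, "MPE": mpe, "MAPE": mape}

# Three hypothetical players; the third scored few points:
proj = [200.0, 150.0, 100.0]
act = [180.0, 160.0, 40.0]
print(accuracy_metrics(proj, act))
```

In this toy example, the third player’s low actual score (40 points against a 100-point projection) dominates MPE and MAPE even though his absolute error is only 60 points, illustrating why the percentage metrics run high.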
- The average of analysts was more accurate than the individual analysts, consistent with the principle of the wisdom of the crowd. For more info, see here.
- The weighted average was slightly more accurate than the mean or robust average. For more info, see here. Note, however, that the default weights were calculated based on historical accuracy, so it remains to be seen whether these weights will apply to future projections. If the best analysts are consistently more accurate than other analysts, the weighted average will likely continue to outperform the mean. If, on the other hand, analysts don’t reliably outperform each other, the mean might be more accurate.
- The weighted average explained about 60% of the variation in players’ actual performance. That means that the projections are somewhat accurate but have much room for improvement. Nevertheless, the projections are likely more accurate than pre-season rankings.
- Projections were more accurate for some positions than others. Projections were most accurate for QBs and WRs. Projections were least accurate for Team Defenses (DST) and individual defensive players (IDP). For more info, see here.
- Projections over-estimated players’ performance by about 5–6 points on average across most positions (based on mean error). It will be interesting to see whether this pattern holds in future seasons; if it does, we could correct for this over-estimation in players’ projections. In a future post, I hope to explore the types of players for whom this over-estimation occurs.
But don’t take my word for it. Test it out yourself and see what you find. And let me know if you find something interesting!