In prior posts, we demonstrated how to download projections from numerous sources, calculate custom projections for your league, and compare the accuracy of different sources of projections (2013, 2014, 2015, 2016). In the latest version of our annual series, we hold the forecasters accountable and see who had the most and least accurate fantasy football projections over the last 5 years.
The R Script
To compare the accuracy of the projections, we use two metrics: R-squared (the proportion of variance in players’ actual fantasy points explained by the projections) and MASE (mean absolute scaled error). We also examine mean error to check for systematic over- or under-estimation.
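As a rough sketch of how these metrics can be computed (a plain-Python stand-in for the R script; the site’s exact implementation may differ, and R-squared here is computed as the squared correlation between projected and actual points):

```python
from statistics import mean

def mase(actual, projected, naive):
    """Mean absolute scaled error: the projection's mean absolute error,
    scaled by the mean absolute error of a naive benchmark (e.g., last
    season's fantasy points). Lower is better; values below 1 beat the
    naive benchmark."""
    mae_projected = mean(abs(a - p) for a, p in zip(actual, projected))
    mae_naive = mean(abs(a - n) for a, n in zip(actual, naive))
    return mae_projected / mae_naive

def r_squared(actual, projected):
    """Squared correlation between projected and actual points: the
    proportion of variance in actual points the projections explain."""
    mean_a, mean_p = mean(actual), mean(projected)
    cov = sum((a - mean_a) * (p - mean_p) for a, p in zip(actual, projected))
    var_a = sum((a - mean_a) ** 2 for a in actual)
    var_p = sum((p - mean_p) ** 2 for p in projected)
    return cov ** 2 / (var_a * var_p)
```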
Whose Predictions Were the Best?
| Projection Source | 2012 R² | 2012 MASE | 2013 R² | 2013 MASE | 2014 R² | 2014 MASE | 2015 R² | 2015 MASE | 2016 R² | 2016 MASE | Average R² | Average MASE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Fantasy Football Analytics: Average | .670 | .545 | .567 | .635 | .618 | .577 | .626 | .553 | .645 | .535 | .625 | .569 |
| Fantasy Football Analytics: Robust Average | .667 | .549 | .561 | .636 | .613 | .581 | .628 | .554 | .644 | .536 | .623 | .571 |
| Fantasy Football Analytics: Weighted Average | | | | | | | .626 | .553 | .645 | .535 | .636 | .544 |
- Fantasy Football Analytics: Average (or Weighted Average)
- Fantasy Football Analytics: Robust Average
- Yahoo (ProFootballFocus)
Notes:
- CBS estimates were averaged across Jamey Eisenberg and Dave Richard.
- FantasyFootballNerd projections include only their free projections (not their full subscription projections).
- We did not calculate the weighted average prior to 2015.
- The accuracy estimates may differ slightly from those provided in prior years because a) we now use standard league scoring settings (you can see the league scoring settings we used here) and b) we are only examining the following positions: QB, RB, WR, and TE.
- The weights for the weighted average were based on historical accuracy (1-MASE). For the analysts not included in the accuracy calculations, we calculated the average (1-MASE) value and subtracted 1/2 the standard deviation of (1-MASE).

The weights in the weighted average for 2016 were:
- CBS Average: .344
- Yahoo Sports: .400
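As a rough illustration of how such accuracy-based weights combine individual sources into one weighted average (the CBS and Yahoo weight values come from the list above; the player projections and the helper function are hypothetical, not FFA’s actual code):

```python
def weighted_projection(projections, weights):
    """Combine per-source projections for one player using
    accuracy-based weights (e.g., weights derived from 1 - MASE).

    projections: dict mapping source -> projected fantasy points
    weights: dict mapping source -> weight
    """
    total_weight = sum(weights[s] for s in projections)
    return sum(projections[s] * weights[s] for s in projections) / total_weight

# Weights from the 2016 list above; player projections are made up.
weights = {"CBS Average": 0.344, "Yahoo Sports": 0.400}
projections = {"CBS Average": 280.0, "Yahoo Sports": 300.0}
print(round(weighted_projection(projections, weights), 1))  # prints 290.8
```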
Here is a scatterplot of our average projections in relation to players’ actual fantasy points scored in 2016:
- Projections that combined multiple sources of projections (FFA Average, Weighted Average, Robust Average) were more accurate than all single sources of projections (e.g., CBS, NFL.com, ESPN) every year. This is consistent with the wisdom of the crowd.
- FFA projections were more accurate than projections from FantasyPros. This may be because we include more sources of projections.
- The simple average (mean) was more accurate than the robust average. The robust average gives extreme values less weight in the calculation of the average. This suggests that outliers may reflect meaningful sources of variance (i.e., they may help capture a player’s ceiling/floor) and may not just be bad projections (i.e., error/noise).
- The weighted average was equally accurate compared to the simple average. Weights were based on historical accuracy. If the best analysts are consistently more accurate than other analysts, the weighted average will likely outperform the mean. If, on the other hand, analysts don’t reliably outperform each other, the mean might be as or more accurate. Given the mean and weighted average were equally accurate each year, the evidence suggests that analysts don’t consistently outperform (or underperform) each other.
- The FFA Average explained 57–67% of the variation in players’ actual performance. The projections are somewhat accurate but leave much room for improvement: roughly one-third to two-fifths of the variance in actual points is unexplained by the projections. Nevertheless, the projections are likely more accurate than pre-season rankings.
- The R-squared of the FFA average projection was .67 in 2012, .57 in 2013, .62 in 2014, .63 in 2015, and .65 in 2016. This suggests that players are more predictable in some years than others.
- There was little consistency in performance across time among sites that provide a single set of projections (CBS, NFL.com, ESPN). In 2012, CBS was the most accurate single source of projections, but it was the least accurate in 2013. Likewise, ESPN was among the least accurate in 2014 but among the most accurate in 2015. This suggests that no single source reliably outperforms the others. While some sites may do better than others in any given year because of fairly random variability (i.e., chance), it is unlikely that they will continue to outperform the other sites.
- Projections were more accurate for some positions than others. Projections were much more accurate for QBs and WRs than for RBs. Projections were the least accurate for Ks, DBs, and DSTs. For more info, see here. Here is how positions ranked in accuracy of their projections (from most to least accurate):
- QB: R² = .73
- WR: R² = .57
- TE: R² = .55
- LB: R² = .53
- RB: R² = .48
- DL: R² = .45
- K: R² = .38
- DB: R² = .37
- DST: R² = .24
- Projections over-estimated players’ performance by about 4–10 points every year across most positions (based on mean error). It will be interesting to see if this pattern holds in future seasons. If it does, we could account for this over-expectation in players’ projections. In a future post, I hope to explore the types of players for whom this over-expectation occurs.
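To make the simple-average vs. robust-average comparison above concrete, here is a toy sketch with hypothetical numbers. A trimmed mean is used as a generic stand-in for a robust average; FFA’s actual robust average down-weights extreme values rather than dropping them, and is computed in the R scripts from prior posts:

```python
from statistics import mean

def trimmed_mean(values, trim=1):
    # A simple robust average: sort and drop the `trim` lowest and
    # highest projections before averaging. (Stand-in only: FFA's
    # robust average down-weights outliers instead of discarding them.)
    vals = sorted(values)
    return mean(vals[trim:len(vals) - trim])

# Hypothetical projections for one player from six sources,
# including one optimistic outlier:
sources = [250, 255, 260, 262, 265, 310]
simple = mean(sources)          # keeps the outlier's upside signal
robust = trimmed_mean(sources)  # discards the extremes
print(simple, robust)
```

If the outlier carries real information about a player’s ceiling, the simple average retains it, which is one plausible reason the mean edged out the robust average in the results above.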
Fantasy Football Analytics had the most accurate projections over the last five years. Why? We average across sources. Combining sources of projections removes some of their individual judgment biases (error) and gives us a more accurate fantasy projection. No single source (CBS, NFL.com, ESPN) reliably outperformed the others or the crowd, suggesting that differences between them are largely due to chance. In sum, crowd projections are more accurate than individuals’ judgments for fantasy football projections. People often like to “go with their gut” when picking players. That’s fine; fantasy football is a game, so do what is fun for you. But crowd projections are the most reliably accurate of any source. Do with that what you will! Don’t just take my word for it, though: examine the accuracy yourself with our Projections tool and see what you find. And let us know if you find something interesting!