# Expected Points by Position Rank in Fantasy Football

In this post, I calculate the expected fantasy points scored by players based on their position and position rank. This post is modeled after a post by Chase Stuart (see here and here), where he calculated players’ expected fantasy points as a function of historical performance for each position and position rank. For my fantasy football auction draft optimizer tool in Shiny, see here.

### How the Expected Points Were Calculated

I downloaded historical projected position ranks and actual fantasy points scored from 1999 to 2012. The historical data for average draft position (used for projected position rank) came from myfantasyleague.com. The historical data for actual fantasy points came from Pro-Football-Reference and FantasyPlaymakers. I calculated fantasy points based on standard fantasy football scoring settings from FantasyPros. Then, for each position rank, I computed a robust average of actual fantasy points across years using the Hodges-Lehmann estimator: the median of all pairwise means, which is robust to outliers.
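The post’s scripts are in R, but as a minimal sketch of the Hodges-Lehmann estimator described above (in Python, with made-up numbers):

```python
from itertools import combinations_with_replacement
from statistics import median

def hodges_lehmann(values):
    """Hodges-Lehmann one-sample estimator: the median of all pairwise
    means (Walsh averages, including each value paired with itself).
    Robust to outliers, unlike the ordinary mean."""
    walsh_averages = [(a + b) / 2
                      for a, b in combinations_with_replacement(values, 2)]
    return median(walsh_averages)

# Hypothetical yearly points for one position rank, with one outlier season
print(hodges_lehmann([150, 155, 148, 152, 240]))  # 152.5 (the mean would be 169)
```

The ordinary mean gets pulled toward the 240-point outlier season, while the Hodges-Lehmann estimate stays near the typical seasons.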

### R Scripts

The R script for downloading the historical ADP data is located here (note that the data were first exported to .xml by going here):

https://github.com/dadrivr/FantasyFootballAnalyticsR/blob/master/R%20Scripts/Historical/Historical%20ADP.R

The R script for downloading the historical fantasy points scored is located here:

https://github.com/dadrivr/FantasyFootballAnalyticsR/blob/master/R%20Scripts/Historical/Historical%20Actual.R

The R script for calculating the best fitting line for each position and creating the plots is located here:

https://github.com/dadrivr/FantasyFootballAnalyticsR/blob/master/R%20Scripts/Posts/eVORP.R

### Plots

The plots below show the expected fantasy points for each position and position rank with the best-fitting line overlaid.

For a brief summary of the position scatterplots:

Running backs, wide receivers, and tight ends show the greatest loss in value after the early picks. Also, variations in quarterbacks, running backs, wide receivers, and tight ends were much more predictable (R-squared values around .80) than variations in kickers and defenses (R-squared values around .50).

### Conclusion

- Variations in QBs, RBs, WRs, and TEs are fairly **predictable** in terms of fantasy points
- Variations in Kickers and Defenses are fairly **unpredictable** in terms of fantasy points
- RBs, WRs, and TEs show **steep early decreases** in expected value after the first picks
- QBs show **stable decreases** in expected value
- Kickers and Defenses show **flat levels** of expected value

- Spend your first picks on RBs, WRs, and TEs (their variations in fantasy points are fairly predictable, and their values decrease exponentially after the best ones are off the board).
- After drafting RBs, WRs, and TEs, draft a QB. QBs’ variations in fantasy points are fairly predictable, and their values don’t decrease as fast as RBs, WRs, and TEs, so you can still get a fairly solid QB after the top QBs are off the board.
- Wait until after drafting RBs, WRs, TEs, and QBs before drafting Kickers and Defenses, because their variations in fantasy points are less predictable and lower-ranked Ks and Ds show only small decreases in expected value.
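As a rough illustration of why the shape of the drop-off matters for draft order, here is a sketch (in Python, with made-up coefficients; the actual fits are in the linked R scripts) comparing the cost of waiting at a log-shaped position (RB) versus a linear one (QB):

```python
import math

# Hypothetical fitted curves matching the shapes in the post
# (logarithmic for RB/WR/TE, linear for QB); coefficients are illustrative.
def rb_points(rank):
    return 300 - 70 * math.log(rank)

def qb_points(rank):
    return 340 - 6 * rank

# Points lost by taking the 10th-ranked player instead of the 1st
rb_dropoff = rb_points(1) - rb_points(10)   # ~161 points
qb_dropoff = qb_points(1) - qb_points(10)   # 54 points
print(round(rb_dropoff), round(qb_dropoff))
```

Under these (made-up) curves, waiting on RBs costs roughly three times as much as waiting on QBs, which is the logic behind spending early picks on RB/WR/TE.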

R-squared is not a measure of predictability; it’s a measure of how much variation a model accounts for relative to an intercept-only (mean) model.

If the mean model is very good (low std. error), then who cares what the R-squared is? Looking at the graphs, it seems kickers are very predictable; they are all just about equal.

Would probably be more useful to compare std. errors across positions, which would control for how some positions have more variance to explain than others.

^Agreed, but also the case for the defense, really. Your point that those two groups’ expected value declines barely at all is really the key point justifying saving them for last.

Thanks for the comments, guys. I used R-squared because it’s a measure of the proportion of the variance explained, and it tells us how informative variations in position rank are for explaining variations in fantasy points. Based on the lower R-squared, knowing that a kicker is ranked 1st is not very informative for knowing whether he will score more points than the 5th ranked kicker. Plus, as the above commenter notes, the expected value declines barely for kickers. Both the R-squared and expected value decline are informative for determining when to draft different positions. The R-squared metric lines up with what we would intuitively expect would be the most “predictable” positions based on variations in position rank: QB, RB, WR, and TE > K and Def.

Totally agreed that the low R^2 is a reason not to care too much about which kicker you draft, just meant that this was a different matter than saying “kickers are unpredictable”

Rhetoric aside, I noticed you have a lot of downloaded projections from various sites, but would be interested in comparing models that generate point projections (or making better ones). Do you know the easiest way to grab point histories, age and any other variables (height, weight, combine speed, etc.) for players and teams?

This is a fun read, thanks!

Thanks, Nigel, and good points. I’ve updated the phrasing in the post to “variations in kickers’ fantasy points are less predictable.”

I think it’s a great idea to examine and improve the forecasting models for projections of fantasy points. I’m not too familiar with the black box that the companies are using behind the scenes. It would be interesting to generate a model on our own and compare its accuracy to the leading projections (e.g., Accuscore: http://accuscore.com/fantasy-sports/nfl-fantasy-sports). As a starting point, the model might include past performance, the age curve, and the age × position interaction. We could also look at player metrics that you mention, such as height, weight, and combine speed. Many of these data are available on pro-football-reference.com. Let me know if you’d like to give this a try!

I’m new to R and RStudio. I’ve literally started just a few days ago. This is one of the most interesting applications of it so far! Thanks for this!

Quick question

Why do you suppose that the QBs’ point totals are fairly predictable? (Namely, why is the regression linear, while the RBs’ and WRs’ are smoothed?) Thanks!

Excellent question, Prateek. Several model comparisons suggested that linear models fit better than logarithmic models for QBs, Ks, and DEFs, whereas the opposite was true for RBs, WRs, and TEs. In addition, visual examination of the scatterplots supported this distinction.
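A minimal sketch of how such a linear-vs-logarithmic comparison works (in Python with simulated data; the post’s actual comparisons were done in R on the real data):

```python
import numpy as np

# Simulated (rank, points) data with a logarithmic shape, like RB/WR/TE
rng = np.random.default_rng(0)
rank = np.arange(1, 25)
points = 320 - 60 * np.log(rank) + rng.normal(0, 10, rank.size)

def r_squared(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

# Linear model: points ~ rank
lin_coefs = np.polyfit(rank, points, 1)
r2_linear = r_squared(points, np.polyval(lin_coefs, rank))

# Logarithmic model: points ~ log(rank)
log_coefs = np.polyfit(np.log(rank), points, 1)
r2_log = r_squared(points, np.polyval(log_coefs, np.log(rank)))

# The log model should fit this log-shaped data better
print(r2_log > r2_linear)
```

The same comparison with linearly-shaped data (as for QBs) would favor the linear model.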

Glad I came across this article. The only issue I would raise with the findings is that the data set is too large. For example, in the early 2000s there was none of this RBBC (running back by committee), making it easier to predict the lower-ranked runners’ performance. I would expect that the R-squared value would be lower for more recent years. I’m sure this has happened with other positions as well, meaning that if you wanted to use the findings for this season, they could be misleading. Another thought for this point I just had is that WR and QB point totals have gone up in recent years, so the older data is deflating their current value.

On another note, I would be interested to see it broken up into position tiers, i.e., RBs 1–5, 6–10 (although I’m not sure there is enough data to have significant findings, especially if you take my earlier suggestion and limit it to more recent years). The only reason for doing this is to see how it would impact draft strategy. For example, based on your current findings, the WR-WR draft strategy has no merits, as their point totals are much lower, with less predictability than RBs (and I know from my own analysis that the standard deviation of point totals is lower for WRs compared to RBs as well, so nobody bring that up :p).

But good analysis!

“Another thought for this point I just had is that WR and QB point totals have gone up in recent years, so the older data is deflating their current value”

*In terms of expected points*

Thanks WLU for your thoughts. It’s an interesting hypothesis that RBs’ predictability may be lower now than in the past because of running back by committee. I’m less worried about mean changes in points, because we are looking at the predictability of variations within position, so a mean level change shouldn’t affect that much. I’ll try to look into the predictability of positions within recent years. The problem is that outliers will have a greater impact in a smaller data set. Thanks for your interest and ideas!

Isaac – slightly off topic but was hoping you could help with a discussion my league is currently having. We are trying to determine what method to use to set the initial waiver order.

There are 3 suggestions:

1. Reverse of last season’s regular season standings

2. Reverse of original 1st round draft position (we have a lottery for all non-playoff teams so this doesn’t match #1)

3. Reverse of actual 1st round draft position (some people traded picks prior to the draft so this doesn’t match #2)

I’d prefer #2 if forced to choose from those 3 options, but have a better idea and need some help. I’d like to use an expected value by draft position curve (without regard to position). I’d then sum up the expected value of all the picks each team has and set waivers from least to most expected value. I’ve looked all over but haven’t been able to find this data/curve online, so I was hoping you could use your data set to create this curve.

Hope you can help out. Thanks.
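The idea described above could be sketched like this (in Python; the decay curve and pick assignments are made up for illustration — the real curve would come from the historical data):

```python
# Hypothetical expected value by overall draft position (ignoring position),
# using a made-up smooth decay; a real curve would be fit to historical data.
def pick_value(overall_pick):
    return 200 * overall_pick ** -0.4

# Each team's draft picks (team C traded for an extra pick)
teams = {"A": [1, 24], "B": [5, 20], "C": [9, 12, 16]}

# Waiver order: the team with the least total expected draft value goes first
totals = {team: sum(pick_value(p) for p in picks)
          for team, picks in teams.items()}
waiver_order = sorted(totals, key=totals.get)
print(waiver_order)  # ['B', 'C', 'A']
```

Note that under this scheme, an early pick like #1 dominates the total, so a team with extra mid-round picks can still land later in the waiver order than one with a top pick.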

Thanks for the link Isaac. Think I’ll pull together the historical data and develop the curve.

Hey Chris,

Here’s a post on expected value by draft position in the NFL draft. It’s not for fantasy, but it might give you some ideas how to do this:

http://harvardsportsanalysis.wordpress.com/2011/11/30/how-to-value-nfl-draft-picks/

If you want to set the waiver order based on expected values of draft positions, you should just go ahead and set it by draft position, because the expected values by draft position should mirror draft position within margin of error when collapsing across positions (QB, RB, WR, etc.). In contrast, as shown by my blog post, the expected values will differ according to position rank when looking at different positions, but you seem to be collapsing across positions, so that shouldn’t matter.

Does that make sense?

-Isaac

Just wanted to thank you for posting the R code. I am just starting fantasy and trying to learn R. You have helped me do both at the same time!

Hey Isaac, gotta say how much I’ve learned from your site; I really appreciate all the hard work you put in. I had an idea that I’d be very interested to get your take on. I understand your post on how projections are more useful than rankings. However, because there are vastly more rankings available, there is more data. I’ve taken each position and identified the experts who are consistently 1/2 SD above the mean in their rankings for each position (using FantasyPros accuracy percentages). So I have lists of “position experts,” and I’ve combined their ’14 rankings to make my master ranking lists (with extra weight given to experts who have more years’ consistency and who are >.75 or >1 SD over the mean in multiple years). These rankings have naturally forming “tiers” of sorts, although I had not been able to relate them to VOR yet, so I haven’t joined them into a master list. However, given the expected positional values across each position in this post, I could potentially assign VOR values. This would allow us to isolate data from WR experts, RB experts, etc., who have great accuracy. BTW, I use about 8 experts for each position. What are your thoughts?

btw I use standard scoring in my leagues (besides two w/ close friends). Also I realize that this method may be less accurate to determine the first round of a draft (these players can be outliers). That is not where I typically have difficulty making picks though. And obviously adding position expected values to position rankings compounds uncertainty. But if the position experts are really good, it may still prove valuable.

Hey Will

This is an interesting possibility to go from rankings to projections. As you noted in my post on why projections are better than rankings, it is much easier to go from projections to rankings than vice versa. You could use the expected values as a starting point if you wanted to translate the rankings to expected fantasy points. I would definitely *not* do this if your leagues use any non-standard scoring settings (the rankings for standard leagues become irrelevant). You mentioned that your leagues mostly use standard scoring settings, so you might calculate the expected values and see how closely they mirror the actual projections. It’s also worth considering that these expected values are based on averages over many years; the actual expected values in any given year could differ considerably from these values. This seems like a lot of work for an experimental approach, but I’m interested in the results, so feel free to share your findings. In general, I think projections will be easier and give you more mileage than rankings. There are lots of rankings out there, but we are compiling a large data set of projections, too. Our apps now have 17 sources of projections (more than any other website), and the data are much richer than rankings data.

Hope that helps!

-Isaac

Hey Isaac,

I’m an economics student at the Richard Stockton College of New Jersey, and I am writing my econometrics paper essentially on what you have compiled here. Unfortunately, I am terrible at using computer programs, so I was hoping you could guide me through how to plug your R script links above into R, or preferably how to transfer them into Excel, Minitab, or SPSS so I can work with the data. As of now I can’t figure out how to view the data. My paper is on the “zero RB theory,” or the thought that it is better to draft WRs before RBs. Thank you! You can reach me at [email protected]

Hey Kevin,

You can open/run the scripts in RStudio (see more info here: http://fantasyfootballanalytics.net/2013/03/isaac-petersen.html). I’ll try to put a post together in the next couple of weeks detailing how to run the scripts from the webpage. We have a GitHub repo with lots of R scripts that you can download for free. R can easily output the data into .csv and .txt files that can be read by Excel and SPSS (I’m not sure about Minitab, but I’d bet there’s a format R can output that Minitab can read)!

Hope that helps!

-Isaac

Hey Kevin,

Just added a post on how to download and run our R scripts (and data):

http://fantasyfootballanalytics.net/2014/10/download-run-r-scripts.html

Cheers!

-Isaac

Thank you so much!

Just a thought, after reading your conclusions…

You mention to always draft RBs, WRs, or TEs first before QBs, since the drop-off is faster. However, based on the graphs, doesn’t it only make sense to draft a RB, WR, or TE until the point where the slope of the graph is flatter than the QB graph? At that point, it may make sense to draft 1 QB next before going back to RBs, WRs, and TEs, because the drop-off from that point onwards for QB is actually more than for RB, WR, and TE.

Hi Lawrence,

Yes, I agree with you. I think the biggest point is that you should spend your earliest picks on RB/WR/TE.

-Isaac

Hi Isaac,

How do you take into account the lack of repeatability in positional performance year on year? I understand that it’s possible to calculate how RB3 will do year on year or how quickly the points fall off from RB1 to RB10, but the probability of correctly identifying which RBs will feature in that top 10 is quite low. For example, if you look at RBs over the past 5 years, the likelihood of one of the top 3 appearing in the next year’s top 10 is less than 20%. That’s where I feel most rankings fall down, as they are almost universally based on last year’s finishing positions, with slight variations. I’ve done mocks where participants will brag about having two top 10 backs, but of course what they mean is two of last year’s top 10, and the chances of both of those repeating as top 10 backs the following year is less than 10%.

That said, cheers for some very thought-provoking work!