Well dear readers, it’s the end of an era. This will be the last Josh Allen “leap” article based on my statistical improvement models. After nearly two years it’s time to put this project to rest. Before we get to the charts, let’s stroll down memory lane...
- Impatient with waiting to see Allen’s sophomore season, I gathered a bunch of stats to compare a decade of rookie quarterbacks with their sophomore selves and created a “normal” range of improvement to try to predict Allen’s second year.
- I checked those predictions after his second season and that went well.
- Encouraged by my previous success, I did the same thing comparing year-two QBs versus their year-three selves and plotted Allen’s likely improvement.
- I checked in midseason and things weren’t looking good for my model.
After year two I looked really smart. After year three, my model and I are probably gonna look really bad. You could say I might be eating crow, but we all know I can’t get close enough to them to do that. It’s time to see how badly Allen broke my model...
Yards per game
As a quick reminder on how to read these, the bottom two bars are Allen’s actual stats for the listed year. The three bars with “average” in the title represent the range that should be considered normal change from year two to three. In other words, if Allen’s 2020 stat for a particular chart is between the low and high average marks it would be a success for the model’s prediction.
For best- and worst-case stats, that’s the amount of change the listed QB had, but applied to Allen’s stats. For instance, Stafford went from roughly 178 to 314 yards per game, a change of 136.6 yards. We add that change to Allen’s 2019 total of 193.1 to arrive at our best case of 329.7.
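To make the chart construction concrete, here’s a minimal sketch of that arithmetic. Only Allen’s 193.1 baseline and the 136.6 (Stafford) and 58.7 (Luck) yards-per-game changes come from the article; the remaining deltas are invented placeholders, and defining the low/high “average” bars as the mean plus or minus one standard deviation is my assumption, not necessarily the author’s exact method.

```python
# A minimal sketch of the chart arithmetic described above. Allen's 193.1
# baseline and the 136.6 (Stafford) and 58.7 (Luck) deltas come from the
# article; the other eight deltas are invented placeholders, and the
# mean +/- one standard deviation rule for the "average" bars is my
# assumption, not the author's confirmed method.
from statistics import mean, stdev

allen_2019_ypg = 193.1  # Allen's year-two yards per game

# Year-two-to-year-three yards-per-game changes for a decade of QBs
deltas = [136.6, 58.7, 12.0, -5.0, 3.5, 20.1, -14.2, 7.8, 0.4, 9.9]

best_case = allen_2019_ypg + max(deltas)   # Stafford's leap applied to Allen
worst_case = allen_2019_ypg + min(deltas)  # biggest drop applied to Allen
low_avg = allen_2019_ypg + (mean(deltas) - stdev(deltas))
high_avg = allen_2019_ypg + (mean(deltas) + stdev(deltas))

# 193.1 + 136.6 = 329.7 yards per game
print(f"best case: {best_case:.1f}")
```

Under this reading, a 2020 result landing between the low and high average marks would count as a success for the model, and anything above the high mark is beyond normal improvement.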
In this measure, Josh Allen’s 2020 result falls almost perfectly in the middle between the high-average mark and the best case. That means Allen improved significantly more than anything that should be considered normal. Among the decade of quarterbacks that went into the model, Allen was the second-most-improved QB in this metric. Andrew Luck had previously held second place behind Stafford at 58.7 yards of improvement.
Yards per attempt
This stat shows the same result. Allen was quite a bit higher than the range that could be considered average or normal. The narrative is essentially identical to yards per game: Andrew Luck again sat behind Stafford, with a 1.0 yards-per-attempt improvement to Stafford’s 2.0, and Allen again takes over second place with his 1.2 yards-per-attempt improvement.
Completion percentage
There’s no worrying about second place on this chart. Allen takes over the top spot from Carson Wentz by improving 10.4 percent in completion percentage. Wentz improved 9.4 percent, a full point behind Allen. Since I only took a ten-year sample of QBs to make sure I was capturing the modern era of passers, I can’t say this is unprecedented in the history of the league, but I can say it’s a leap no recent player has matched, putting Allen in a class of his own for this category.
This will be important for the summary. Stafford and Luck improved 4.1 percent and 1.5 percent respectively, with Stafford’s year three higher at 63.5 percent completions. Both of their year-two completion percentages were comparable to Allen’s year two.
Touchdown rate
We return to the earlier narrative. Allen improved quite a bit more than should have been expected but didn’t quite set a new mark with his 2.2 percent improvement in touchdown rate. Second place in this category had been Sam Bradford at 2.1 percent, meaning Allen takes over THIS second spot too.
Interception rate
This is the only chart the model can claim to have gotten right. Josh Allen saw only slight improvement and fell within the normal range of change. On the flip side, this was a stat Allen was already excellent in, so dramatic change really wasn’t possible.
Summary
Any analyst using data needs to be careful with a project like this. I’d like to think I started things off on the right foot. Specifically, when it comes to outliers, many analysts seem to suggest fans should write off the possibility of a player becoming one. Right from the start I wanted to include best-case scenarios to push back on that attitude. My approach was intended to be, “Not only do outliers exist, but here’s their name for each stat.” Every chart was meant to convey “This should be expected, but this is what we’re hoping for.”
I added more detail on how Allen finished this time around specifically because, in four of these five metrics, Allen was either the most improved or second-most improved. While Andrew Luck and a few other quarterbacks saw a fair amount of improvement between years two and three, Josh Allen willed his way to drastic improvement across the board. No one else came close.
My model was not only wrong this year, it’s laughably so. And the craziest thing is that I don’t think Allen invalidates the data. I’d bet that if you plugged other QBs’ stats into the model, guys like Sam Darnold, Lamar Jackson, and Baker Mayfield, I’d look smart again. In fact, let’s go for it with Mayfield, who had some buzz around him this season. Remember, my model predicted most QBs would remain mostly stagnant from year two to year three.
Mayfield’s yards per game dropped by about 17 yards. Yards per attempt went up by 0.1, completion percentage went up 3.4 percent, touchdown rate went up 1.2 percent, and interception rate fell by 2.3 percent. So we have average regression, stagnation, average improvement, better than normal improvement, and excellent improvement. That’s pretty successful for the model.
If I were writing for a Cleveland Browns blog my conclusion would be “See, I told you he wasn’t likely to improve drastically in all these areas.” Go ahead and look at Lamar Jackson and Sam Darnold’s stats from this year. My model indicated regression is pretty normal. If I wrote for those blogs I could have written “I told ya so” as my conclusion with a mic drop.
I’m probably starting to sound like “other” analysts who can’t admit when they’re wrong. I absolutely am defending the model; I’m confident it does a good job of telling us what normal or expected change looks like. Here’s the key difference, though: my defense of the model matters for those of you who were on the Allen bandwagon all along.
If my model sucks then you can’t point at those charts above and make a good conclusion. I’m telling you my model doesn’t suck so that you can take those charts and shove them up the *** of any lingering Allen doubters. Put simply, if the model does a good job showing “normal change” it therefore becomes a good tool for saying “THIS CHANGE ISN’T NORMAL.”
And that does mean the model can get it wrong. In Allen’s case it couldn’t have done any worse. The predictions were wrong and so was I. Now I dare you to find anyone who is happier to have missed the mark than I am.