Perhaps the greatest rivalry in professional sports is Team Analytics vs. Team Eye Test. It never fails. Write something with numbers and the film scourers will argue “but the intangibles.” Write about the film and the data crew will counter with the player’s stats. As someone who regularly does both, what if I told you that the two processes are almost always the same thing?
To prove they’re really more similar than different, let’s take a quick tour through the usual processes for fans and pundits. I say “usual” because extreme versions of both processes exist, but they’re generally rare.
For the data- and analytics-focused approach the first step is being asked a question. For this approach I’ll use my penalty harm stat as an example. That began with someone asking “Didn’t Doug Marrone’s teams get a lot of penalties too?” during a “Rex Ryan lacks discipline” conversation.
With the question asked, the next step is to see if existing data can answer it. In this case, penalty counts and per-game rates were readily accessible and answered the question easily. Sometimes the question evolves. “Sure, but it seems like Ryan’s team gets hit worse.”
So we add penalty yards into the mix and deepen our understanding of the situation. And sometimes a wrench gets thrown in the gears: “Well yeah, but yards don’t tell the whole story. What about timing, and so on?” In the case of penalty harm, I took a broader look at all of the available data and aggregated multiple data points into a proprietary formula to help answer that question.
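The real penalty harm formula is proprietary, so the sketch below is a toy illustration only. The function name, inputs, and weights are all invented for the example; the point is just the shape of the idea: several observable penalty data points get combined into one score.

```python
# Toy illustration only: the actual penalty harm formula is proprietary.
# This shows the general shape of aggregating several observable
# penalty data points into a single weighted score.

def toy_penalty_harm(count, yards, first_downs_given):
    """Combine raw penalty stats into one illustrative 'harm' number."""
    # Hypothetical weights: yards matter more than raw count,
    # and handing the opponent a free first down hurts extra.
    return 0.5 * count + 0.1 * yards + 1.5 * first_downs_given

# Two teams with the same penalty count can grade out very differently.
team_a = toy_penalty_harm(count=8, yards=55, first_downs_given=1)   # 11.0
team_b = toy_penalty_harm(count=8, yards=90, first_downs_given=4)   # 19.0
```

Same eight flags, very different damage, which is exactly the distinction a raw penalty count misses.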
The Eye Test
This process starts in the same way. Someone (sometimes ourselves) asks a question. Once the question is out in the air, we go back to the available film and watch it. Simple as that. Almost, anyway.
Film junkies tend to get lost in the tape and spend a lot of time there. To help us along there’s a lot of pausing, repeating the same play over and over, and most importantly taking notes.
They don’t sound the same? Prove it
Analytics are really just numerous eye tests
Nearly every single piece of data used by the analytics community is the result of an eye test. Let’s use yards per attempt (Y/A) as our example here. Every single pass that’s thrown is seen and verified by the officiating crew, both teams playing, and millions of fans. We’ve all collectively seen what occurred and agreed upon the basic fact that “Josh Allen threw to John Brown for 26 yards and a touchdown.”
Yes, there are occasional anomalies like the now-infamous “interception” against the Los Angeles Rams. And yes, those need to be considered. Sometimes we’re lucky enough to have stats like adjusted catch rate that account for drops. Other times we don’t. Ultimately, though, an anomaly like that “interception” gets washed out by the good data. If Allen continues his current turnover trends, that single data point will be meaningless soon enough.
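That washing-out effect is easy to see in miniature. The numbers below are made up for illustration, not real play-by-play data; the point is that one fluke observation dominates a tiny sample but barely dents a season-sized one.

```python
def yards_per_attempt(gains):
    """Average yards over all attempts: one number built from many eye tests."""
    return sum(gains) / len(gains)

small_sample = [26, 7, 12]             # a handful of plays (hypothetical gains)
large_sample = [26, 7, 12] + [8] * 97  # a season's worth of attempts

# One anomalous -40 "play" swings a tiny sample wildly...
print(yards_per_attempt(small_sample + [-40]))           # 1.25 (was 15.0)

# ...but barely moves a large, independently-verified data set.
print(round(yards_per_attempt(large_sample + [-40]), 2)) # 7.73 (was 8.21)
```

That is the whole statistical case for big samples: anomalies don’t need to be deleted, they just need company.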
The Eye Test is nothing more than data sets
But wait! Analytics doesn’t have data on a lot of things. Things like quarterback throwing mechanics, the stance of a lineman, the balance of a running back. Actually, all of those things have data. Remember the last thing I said about the eye test process: notes.
I’ve shared mine from time to time, and close readers likely realized that whenever I hit the film I’m looking for specific items. For a quarterback it might be ball placement, pocket navigation, decision making, etc. Every play. Over and over again. When it comes time to summarize I might write a sentence like “Goff navigated the pocket well despite pressure.” Here’s the catch: that conclusion is based on using each play as a data point.
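To make that concrete, here’s what a stack of film notes looks like when you treat it as a data set. The notes themselves are hypothetical, and the one-word grades are a simplification of real note-taking, but the mechanics are the same: one observation per play, tallied into a summary.

```python
from collections import Counter

# Hypothetical film notes: one entry per play, grading one trait
# (pocket navigation). Each note is a data point, like a row of stats.
pocket_notes = [
    "good", "good", "poor", "good", "good",
    "good", "poor", "good", "good", "good",
]

tally = Counter(pocket_notes)
good_rate = tally["good"] / len(pocket_notes)  # 0.8

# A sentence like "Goff navigated the pocket well despite pressure"
# is really just a prose summary of this per-play tally.
```

The eye tester rarely writes the arithmetic down, but it’s happening all the same.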
So why the conflict?
To be perfectly clear, at their foundations the two processes are the same. Both are collections of observable data points gathered to support a broader conclusion. The key difference is where each falls on the spectrum. Analytics relies on much larger data sets that have been independently verified by a collective of observers. The eye test is typically one person’s observations of a smaller data set.
Like most things in life, a very vocal minority on the extremes can give either position a black eye. Some people are too rigid with the data. If I cite interception totals at the end of the year and someone points out the fake one against the Rams, I should be open to that discussion. Anyone who isn’t is too rigid with the data, and that gives analytics a bad name.
On the flip side, we’ve all seen eye testers watch a couple of highlights (or lowlights) and base their entire analysis on them. “I love that guy’s balance and ability to move a pile” is a silly argument if the running back is averaging 2.6 yards per carry.
So now what?
I think most of us know that both are valuable, so there’s no need for empty platitudes of “THEY’RE BOTH GREAT.” In reality, most of us skew one way or the other, and that’s the reason for this article. We’re all human, and we subconsciously choose a side whenever more than one is presented.
By understanding the roots of analytics and the eye test, my hope is that all of you come to see that they’re more similar than not, deepening the knowledge in our already top-notch community.
So here’s the platitude. Once you see that both processes are fundamentally the same, there’s no reason to pick a “side.” Preferences and leanings won’t ever go away and there’s no reason they should. But remember that analytics is nothing more than tons and tons of eye tests slapped together. And eye tests are nothing more than focused data sets.
Better together. Trust the process(es).