🎮 Your Weekly Pitch: Proof that video game critics are just bad critics
The data shows that it’s the users, and not the critics, you should be listening to.
Written by @alexc_journals, a not-so-secret gamer at heart
A 5 minute read
1 WEEK TILL L-DAY
In 1 week, we writers of Case By Case will be starting law school. Wild.
As a heads up, Alex will be running CBC independently, and schedules are likely to get crazy. Please bear with us through any last-minute changes to our little labor of love.
THE LEDE
The biggest misconception in games is that they’re about immediate enjoyment.
They’re (often) not.
Rather, they’re an investment in your future—in downloadable content, in bug patches, in everything that makes them living, breathing works of technical art.
As a consumer making that investment, chances are that when you buy a (probably very expensive) game, you do some quick googling first. After all, you want to know that your money is being well spent, right? So you go looking for the experts, a.k.a. game critics.
But it turns out that, unlike with movies or books, you just can’t trust the critics.
They’re embarrassingly, notoriously wrong most of the time.
This means that you, dear player, are probably making some very expensive decisions based on objectively bad information by ‘experts.’
The Guiding Question
Why are user reviews and critic reviews so mismatched in video games? Why is it an open secret that you shouldn’t listen to the critics, a.k.a. the ‘experts’?
The Answer
This is an older article, but it proves a clear point. Critics and users often have wildly different reactions to games—enough so that you might wonder if they’re even looking at the same game.
There are a few reasons for the radical disconnect:
Timing
Critic reviews are published when a game comes out, and they’re very rarely revisited. User scores, on the other hand, flow in constantly over months and years; they can even be edited and updated if a user changes their mind over time.
This is what makes reviewing games so much harder than other mediums. If a game gets a bad patch, closes servers, or otherwise languishes, user reviews immediately reflect that—and vice versa for those rare titles that struggle at release but turn things around afterward.
When critics don’t revisit their reviews, they’re almost immediately out of date.
Eating the whole enchilada
Even though critics get early access to games, they often aren’t playing the full game.
That’s like reviewing Titanic after watching only the first 20 minutes, then calling it “historically inaccurate” because the boat didn’t sink.
The boat sank. Obviously.
I just didn’t get to experience it, even though it’s my job to judge the piece as a whole.
To their credit, it’s a lot harder to fully experience most video games than a movie. A movie is a static thing that takes a few hours to watch. A game, on the other hand, could have 50+ hours of gameplay, completely different endings, and an entirely open world with no streamlined story. Playing something fully can be near impossible. But that doesn’t mean critics are playing enough of a game to make a review actually valuable.
The experts can’t play well
Critics often review games based on their own taste. That probably sounds obvious, but it’s an especially acute problem when it comes to video games.
The more time you spend analyzing a subject, the better your eventual analysis. But game critics often play games outside their area of expertise. How helpful is it to have someone who only plays turn-based combat games suddenly review a first-person shooter? Chances are, they’ll complain about “poor mechanics” that are exactly what first-person shooter players are looking for.
For a deeper dive into games journalists and some of their shortcomings, check out this hilarious review of critics by videogamedunkey (Warning: there’s a lot of cursing but he makes some valid points echoed by players in the gaming community).
Let’s look at one example: No Man’s Sky
Because it’s been out for at least a few years and incorporated significant changes via updates and patches, we figured No Man’s Sky would be the perfect test case.
Take a look at the critics’ reviews in the 3rd column. After the first month of release, there were literally no changes: not a single new review, even though the game has gone through 12 major updates and 27 smaller patch updates since its release. Most of those early critic reviews were essentially “meh.”
Users on the other hand consistently added new reviews over 5 years, with a significant portion shifting from negative to positive.
Compare this to reviews on Steam, where you can see recent vs. all reviews (~3,000 recent reviews at 90% positive vs. ~157,000 all-time reviews at 68% positive). No Man’s Sky is listed on Steam as “Very Positive” under recent reviews—likely a result of constant development by the game’s creators—and a far cry from the critics’ “meh.”
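A quick back-of-the-envelope check shows why the all-time score lags so far behind: the recent reviews are a tiny slice of the total, so even a strongly positive recent stretch barely moves the overall number. Here’s a minimal sketch using only the approximate Steam figures quoted above (the exact counts will drift as new reviews come in):

```python
# Back-of-the-envelope: how much do the recent reviews move the all-time score?
# Figures are the approximate Steam counts quoted above; recent reviews are
# treated as a subset of the all-time total.
recent_n, recent_pos = 3_000, 0.90     # ~3,000 recent reviews, 90% positive
total_n, total_pos = 157_000, 0.68     # ~157,000 all-time reviews, 68% positive

# Back out the positivity of the older reviews alone.
older_n = total_n - recent_n
older_pos = (total_n * total_pos - recent_n * recent_pos) / older_n

print(older_n)                # 154000
print(round(older_pos, 3))    # 0.676 -- the old reviews dominate the all-time score
```

In other words, the ~154,000 older reviews sit at roughly 68% positive on their own, which is why a “Very Positive” recent streak can coexist with a mediocre-looking all-time rating.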
Why This Story is Worth Pursuing
The problem here is this:
Even though user reviews are typically more thorough, genre-specific, and updated in a timely manner, critic reviews are still the first thing most people see before they buy a game.
It’s the biggest number…
taking up the most space…
on the best real estate on the site.
But that number is objectively pretty useless for consumers.
In fact, you’re likely making outright bad decisions if you purchase a game based on critic reviews—even though that’s not how critic reviews are marketed.
The lesson here is that a wise gamer doesn’t listen to the ‘experts’—and neither should you.
The News Peg
This is a hyper-convenient pitch, since you could peg it to the release of any major game. In fact, here’s a list of every major game coming up this year, broken down by month.
What We Don’t Have Answers To
This would make a great data journalism piece (in fact, it was inspired by this data scientist on Reddit). However, you’d have to validate the hypothesis with more games. A great place to start would be to compare 5 heavily patched PC games against 5 lightly patched ones.
Good test subjects: Doom Eternal, Fallout New Vegas, XCOM 2, Assassin's Creed Odyssey, For Honor, Rainbow 6 Siege, Monster Hunter World, Stardew Valley, Splatoon 2, and The Witcher 3.
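For whoever picks this up, the core comparison boils down to a signed gap per game: current user score minus critic score at launch, averaged within each group. Here’s a minimal sketch of that calculation; every score below is an invented placeholder, not real Metacritic or Steam data, and the game labels are generic stand-ins:

```python
# Hypothetical sketch of the proposed analysis. All scores are invented
# placeholders -- real figures would come from Metacritic/Steam data pulls.
from statistics import mean

# Each entry: (critic score at launch, current user score), both on a 0-100 scale.
heavily_patched = {
    "Game A": (70, 84),
    "Game B": (65, 80),
}
lightly_patched = {
    "Game C": (82, 80),
    "Game D": (75, 74),
}

def mean_gap(games):
    """Average signed gap: positive means users now rate the games
    higher than critics did at launch."""
    return mean(user - critic for critic, user in games.values())

# The hypothesis predicts a larger positive gap for heavily patched games.
print(mean_gap(heavily_patched))  # 14.5
print(mean_gap(lightly_patched))  # -1.5
```

If the hypothesis holds, the heavily patched group should show a clearly larger (more positive) mean gap than the lightly patched group; if the two gaps look similar, patching probably isn’t the driver of the critic–user disconnect.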
Diverse Sources Worth Interviewing
This pitch was inspired by Andrew, the data scientist behind this Reddit post. He helped us refine our hypothesis and outline what data would be critical to answering our question. As you dive into the data, use him as a sounding board to validate your data and what it might be telling you.
PUBLICATIONS TO PITCH TO
Polygon; contact pitches@polygon.com or matt.leone@polygon.com at the Features desk.
The Verge; contact andrew.webster@theverge.com or kevin.nguyen@theverge.com at the Features desk.
Inverse; contact jacob@inverse.com for general pitches.
OTHER PITCHES IN THIS SPACE
Written reviews often seem more polarized than reviews with no comments. Why? Would parsing that language and comparing it to the language of video game critics show us any differences in the things people vs. critics care about?
Some video games are almost exclusively known for their DLCs (like Fallout New Vegas or For Honor with their seasonal updates). But at what point does a DLC take over and fundamentally change the DNA of a video game? In those cases, should critics be expected to re-review the whole game with the new DLC? Would that make the gaps in their reviews any smaller?
ICYMI
🌏 The unbreakable link between fashion and the climate crisis—how clothes contribute to climate change more than you might think
SEE YOU ON THE WIRE
Want the next great pitch before anyone else?
Want to share this pitch with another journalist who might want to run with this story?
See something we should add or correct, or have a tip for a future story? Publishing this story and want us to share it?
Have ideas on how we can improve as a service for journalists?