A pair of recent cheating scandals—one in the “speedrunning” community of gamers, and one in medical research—call attention to an alarming contrast.
In the competitive pursuit of speedrunning, gamers vie to complete a given video game as quickly as humanly possible. It’s a sport for the nerdier among us, and it’s amazingly popular: Live streams and recordings of speedruns routinely rack up seven-figure view counts on Twitch and YouTube. So when one very prominent speedrunner—a U.S. YouTuber with more than 20 million subscribers who goes by the nom de game “Dream”—was accused in December 2020 of faking one of his world-record runs of the block-building game Minecraft, the online drama exploded like a batch of TNT.
Specifically, Dream reported that he’d finished Minecraft in just over 19 minutes, faster than all but four players had ever managed it, because of an incredible stretch of good luck. Moderators at the website speedrun.com, who preside over such world-record attempts, begged to differ. According to their impressively detailed probability analysis, Dream’s luck was just too good. He was the equivalent of a roulette player who gets their color 50 times in a row: You don’t just marvel at the good fortune; you check underneath the table.
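To get a feel for why a streak like that demands scrutiny rather than applause, it helps to put rough numbers on the roulette analogy. The snippet below is a minimal sketch, assuming a European wheel where 18 of the 37 pockets match your chosen color; it is not the moderators' actual Minecraft math, only an illustration of the scale involved.

```python
# Back-of-the-envelope odds for the roulette analogy: a sketch assuming a
# European wheel, where 18 of the 37 pockets match your chosen color.
p_single = 18 / 37         # chance of hitting your color on one spin (~0.486)
p_streak = p_single ** 50  # chance of hitting it 50 spins in a row

print(f"one spin:    {p_single:.3f}")
print(f"50 in a row: {p_streak:.2e}")  # ~2.3e-16, about 1 in 4 quadrillion
```

Odds that long aren't luck; they're evidence.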
Recriminations followed. Dream and the moderators fired tweets and videos back and forth, and the moderators received a tsunami of social-media abuse from the more outraged members of Dream’s fan base. Dream commissioned his own scientist to produce a rebuttal to the moderators’ probability analysis. If you had more than a passing interest in video games, you couldn’t miss the story: It was covered on every major video-gaming site, in tech magazines, and all over YouTube.
And then, on May 30 this year, Dream admitted it: The run wasn’t real. He had, he claimed, inadvertently set some software running that enhanced his luck in the game, thus breaking the rules and disqualifying his speedrun.
Whether you believe the “inadvertently” part is up to you. The important thing is that the system worked: Dream’s ill-gotten time, which was rightly struck from the books, became the latest in a long line of scam achievements exposed by moderators using sophisticated tools to uphold the field’s standards. Whether they’re employing audio-spectrum analysis, picking through every keypress to make sure that the run is legit, or simply using their long experience to spot a questionable performance, members of this community of technical experts have put in strenuous work to make life harder for those who break the rules.
Scientists should pay attention.
Two weeks before Dream’s confession, and halfway around the world, another fraud scandal had just come to a conclusion. Following a long investigation, Japan’s Showa University released a report on one of its anesthesiology researchers, Hironobu Ueshima. Ueshima had turned out to be one of the most prolific scientific frauds in history, having partly or entirely fabricated records and data in at least 84 scientific papers, and altered data and misrepresented authorship on dozens more. Like Dream, Ueshima would eventually come clean and apologize—but only after a data sleuth had spotted strange anomalies in his publications. Many of his papers have already been expunged from the scientific literature.
If you haven’t heard about this historic low point for scientific publishing, I don’t blame you. Aside from the specialist website Retraction Watch, which exists to document these kinds of events, not one English-language media outlet covered it. (There were a few stories in the Japanese press.) The case garnered little social-media interest; there was no debate over the lessons learned for science.
Does it strike you as odd that so many people tuned in to hear about a doctored speedrun of a children’s video game, while barely a ripple was made—even among scientists—by the discovery of more than 80 fake scientific papers? These weren’t esoteric papers, either, slipped into obscure academic journals. They were prominent medical studies, the sort with immediate implications for real-life patients in the operating room. Consider two titles from Ueshima’s list of fraudulent or possibly fabricated findings: “Investigation of Force Received at the Upper Teeth by Video Laryngoscopy” and “Below-Knee Amputation Performed With Pericapsular Nerve Group and Sciatic Nerve Blocks.” You’d hope that the mechanisms for purging fake studies such as these from the literature—and thus, from your surgeon’s reading list—would be pretty strong.
Alas, that’s not often the case. The scientific community has long looked the other way when fraud allegations fly. That Ueshima’s university conducted such an extensive investigation of his work and published it for all to see is unusual. Skeptics and whistleblowers who spot potential fraud in researchers’ work are routinely ignored or stonewalled, and sometimes attacked, by universities or journal editors who don’t have the time or inclination to dig into potentially forged (and potentially dangerous) studies.
For example, it took 12 years for any action to be taken against the world’s most prolific scientific fraudster, Yoshitaka Fujii (coincidentally, another Japanese anesthesiologist), even after very convincing analyses of his dodgy data were published. Like Dream’s speedrun, Fujii’s data were just too good to be true: The fraud-spotters wrote, with admirable literalness, that they were “incredibly nice!”
Ironically, scientists who study what they claim are the pernicious effects of video games have been particularly lax about policing allegations of misconduct within their community; at the very least, they may be less diligent than gamers themselves. One researcher who recently left the field (and academia altogether) wrote about his exasperating experience trying to alert multiple journals and a university to obvious “gibberish” data in several video-game-violence papers: It did not go well; most of the relevant scientific authority figures reacted with little more than a shrug. “The experience has led me to despair for the quality and integrity of our science,” he wrote. “If data this suspicious can’t get a swift retraction, it must be impossible to catch a fraud equipped with skills, funding, or social connections.”
The methods used to produce fake speedruns and fake science have some surprising similarities, however different the institutional responses might be. The most basic way to put together a fraudulent speedrun is by manipulating the video that you need to submit to the moderators. This is usually done by “splicing”: If you mess up the start of a level, then nail the boss battle, but do the opposite on another attempt, you can stitch together the two good halves into one perfect—but bogus—video. Even when the joins are poorly done, as in a now-infamous botched attempt to claim a five-minute run of Super Mario Bros., only those paying proper attention will spot them.
Scientists engage in similar shenanigans with the images in their papers. Spliced, duplicated, touched-up, recolored, and otherwise Photoshopped images from microscopes or blots are rife in scientific publications, and are overlooked by peer reviewers with worrying regularity. The eagle-eyed microbiologist Elisabeth Bik, considered the world expert in spotting “problematic” scientific images, routinely reports her concerns about images to the relevant universities or journals—and often goes completely unheard.
The rigorous format in which the speedrunning community asks players to provide video proof of their runs is itself significant. For many games, you need to show not just a recording of your screen, but also a video of your hands on the controller or keyboard, so moderators can ensure that it really was a human—and not a script or a bot—that clinched the all-important record.
Science has been much slower to adapt, even after countless scandals. Researchers provide images for their papers entirely at their own discretion, and with no official oversight; when they aren’t faked, they might still contain cherry-picked snapshots of experiments that don’t represent the full range of their results. The same applies to numerical data, which are often—consciously or unconsciously—chosen or reported to make the best case for a scientist’s hypothesis, rather than to show the full and messy details. Only a few journals require scientists to do the equivalent of posting the screen-and-hands recording: sharing all their data, and the code they used to analyze it, online for anyone to access.
Speedrunning, like science, may be done in groups (say, one of the game’s levels per team member). In both contexts, the actions of one fraudulent member taint the achievements of their colleagues. In a 2006 group speedrun of the first-person-shooter Half-Life 2, one player illegitimately altered the game’s code to make his run faster, betraying the trust of his teammates. Similarly, in science fraud, a credible allegation can come as a huge shock to all the members of a research group—except, perhaps, for the guilty party.
Meanwhile, gamers have developed clever tools to reassess and recertify older speedruns. Savvy moderators for the racing game Trackmania United Forever have, in the past few weeks, demonstrated a new kind of analysis that uses the number of times a player changes their car’s direction to show that many revered world-record times were in fact impossible for human hands to achieve—in other words, that they were the result of cheating. The faked Trackmania speedruns have now been wiped from the record. Science has its own advanced fraud-detection methods; in theory, these could be used to clean out the Augean stables of research publishing. For example, one such tool was used to show that the classic paper on the psychological phenomenon of “cognitive dissonance” contained numbers that were mathematically impossible. Yet that paper remains in the literature, garnering citations, without so much as a note from the journal’s editor.
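To see how simple such a check can be, consider the GRIM test, a consistency check popularized by fraud-spotters in psychology. Whether or not it was the exact method applied to the cognitive-dissonance paper, it shows the flavor of these tools: a mean computed from n integer-valued responses can only take the values k/n for whole-number totals k, so a reported mean that matches none of them is impossible as printed. Here is a minimal sketch; the sample figures are hypothetical.

```python
import math

def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Can `reported_mean` arise as the mean of `n` integer-valued responses?

    The true mean of n integers must equal k/n for some whole-number total k,
    so we test the integer totals on either side of the reported mean.
    """
    target = round(reported_mean, decimals)
    lo = math.floor(reported_mean * n)
    hi = math.ceil(reported_mean * n)
    return any(round(k / n, decimals) == target for k in (lo, hi))

# Hypothetical example: a paper reports a mean of 5.19 from n = 28 integer
# ratings. No total of 28 whole numbers produces that mean (145/28 = 5.18,
# 146/28 = 5.21), so the reported figure is impossible as printed.
print(grim_consistent(5.19, 28))  # False
```

A check like this takes seconds to run, which makes the years of editorial silence all the harder to excuse.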
Another parallel between fraud in speedrunning and science concerns the fraudsters’ motivations. If you ask a speedrunner why they think people fake their runs, they might say it’s about clout. Getting your username at or near the top of the leaderboard, even for an old or obscure game, gains you respect from your peers. Some runners are willing to do an awful lot to that end. Scientists might think they’re above clout—and perhaps they should be—but the evidence points the other way. There are no literal leaderboards in science, but there are plenty of other signifiers of clout: the sheer number of publications on your CV; the number of times you’ve been cited; the reputation of the journals where you publish; the grant money you can acquire; the fame of being the discoverer of something important. All of these appear to be brain-breakingly powerful incentives for certain scientists, who flout the rules despite knowing better.
Apart from a minority of professional gamers, speedrunning is a hobby, and the community is moderated by volunteers. Science is, well, science: a crucially important endeavor that we need to get right, a prestige industry employing hundreds of thousands of paid, dedicated, smart people, submitting their research to journals run by enormously profitable publishing companies.
Perhaps the very status of science is what makes its practitioners reluctant to pursue fraudsters: Not only do scientists find it difficult to imagine that their peers or colleagues could be making up the data, but questioning a suspect data set could result in anything from extended frustration and social awkwardness to the destruction of someone’s career. You can see why so many scientists, who hope for a quiet life where they can pursue their own research, aren’t motivated to grasp the nettle.
But the consequences of ignoring fraud can be drastic too, and whole evidence bases, sometimes for medical treatments, can be polluted by fraudulent studies. The entire purpose of the scientific endeavor is brought into question if its gatekeepers—the reviewers and editors and others who are supposed to be the custodians of scientific probity—are so often presented with evidence of fraud and so often fail to take action.
If unpaid Minecraft moderators can produce a 29-page mathematical analysis of Dream’s contested run, then scientists and editors can find the time to treat plausible fraud allegations with the seriousness they deserve. If the maintenance of integrity can become such a crucial interest for a community of gaming hobbyists, then it can be the same for a community of professional researchers. And if the speedrunning world can learn lessons from so many cases of cheating, there’s no excuse for scientists who fail to do the same.