
e-flux conversations

Why Can Everyone Spot Fake News But The Tech Companies?

Reporting for BuzzFeed, Charlie Warzel wonders why Facebook, YouTube, and Twitter repeatedly fail to stop the spread of damaging conspiracy theories on their platforms in the aftermath of major tragedies like school shootings, even as numerous journalists have easily exposed these theories with a few keystrokes. Warzel acknowledges that the task of monitoring such massive systems is complex, but the repeated failure of these mega-rich companies to take any meaningful action suggests either an excessive faith in algorithms and machine learning, or a breathtaking incompetence. Read an excerpt from the article below, or the full piece here.

This cycle, in which journalists, researchers, and others spot hoaxes and fake news with the simplest of search queries long before the platforms themselves, repeats itself after every major mass shooting and tragedy. Just a few hours after news broke of the mass shooting in Sutherland Springs, Texas, Justin Hendrix, a researcher and executive director of NYC Media Lab, spotted search results inside Google’s “Popular on Twitter” widget rife with misinformation. Shortly after an Amtrak train crash involving GOP lawmakers in January, the Daily Beast’s Ben Collins quickly checked Facebook and discovered a trove of conspiracy theories inside Facebook’s trending news section, which is prominently positioned to be seen by millions of users.

By the time the Parkland school shooting occurred, the platforms had apologized for missteps during a national breaking news event three times in four months, in each instance promising to do better. But at their next opportunity to do better, they failed again. In the aftermath of the Parkland school shooting, journalists and researchers on Twitter were the first to spot dozens of hoaxes, trolls impersonating journalists, and viral Facebook posts and top “trending” YouTube posts smearing the victims and claiming they were crisis actors. In each instance, these individuals surfaced this content, most of which is a clear violation of the platforms’ rules, well before YouTube, Facebook, and Twitter did. The New York Times’ Kevin Roose summed up the dynamic recently on Twitter, noting, “Half the job of being a tech reporter in 2018 is doing pro bono content moderation for giant companies”…

All of this raises a mind-bendingly simple question that YouTube, Google, Twitter, and Facebook have not yet answered: How is it that the average untrained human can do something that multibillion-dollar technology companies that pride themselves on innovation cannot? And beyond that, why is it that — after multiple national tragedies politicized by malicious hoaxes and misinformation — such a question even needs to be asked?

Image via NPR.