From “War of the Worlds” to Benghazi

A recent article by Adrian Chen about fake news in *The New Yorker* begins with my favorite myth: that a 1938 radio broadcast of Orson Welles’s *War of the Worlds* caused a mass panic. (It very likely did not.)

Next, Chen pivots to a more contemporary concern about the truthfulness of news content: the election of Trump, and the role Facebook and Twitter played in it. Much has been written about this topic. (Here are some of my favorites: Stratechery, Nieman Lab, Wired, Bloomberg.)

What the hot takes I’ve read so far seem to miss is that we’re treating this as a computer science problem: the assumption that, since technology created the problem, technology can fix it, too.

Chen:

> It’s possible, though, that this approach comes with its own form of myopia. Neil Postman, writing a couple of decades ago, warned of a growing tendency to view people as computers, and a corresponding devaluation of the “singular human capacity to see things whole in all their psychic, emotional and moral dimensions.” A person does not process information the way a computer does, flipping a switch of “true” or “false.” One rarely cited Pew statistic shows that only four per cent of American Internet users trust social media “a lot,” which suggests a greater resilience against online misinformation than overheated editorials might lead us to expect. Most people seem to understand that their social-media streams represent a heady mixture of gossip, political activism, news, and entertainment. You might see this as a problem, but turning to Big Data-driven algorithms to fix it will only further entrench our reliance on code to tell us what is important about the world—which is what led to the problem in the first place. Plus, it doesn’t sound very fun.

As Chen explains later in the piece, automated solutions to the “fake news problem” also lend themselves to manipulation (e.g., people reporting news they don’t like as fake) and to claims of bias directed at the tech companies themselves.

While I agree about the dangers of automated solutions to the fake news problem, I think the tech-rooted discussion also misses a larger issue with social media and the ways it’s changing how we interact with the world: the algorithms themselves, and the *types* of news they promote.

Facebook and Twitter are optimized for engagement, a bias that shapes what you see when you use those platforms.

Alexis C. Madrigal:

> Facebook’s draw is its ability to give you what you want. Like a page, get more of that page’s posts; like a story, get more stories like that; interact with a person, get more of their updates. The way Facebook determines the ranking of the News Feed is the probability that you’ll like, comment on, or share a story. Shares are worth more than comments, which are both worth more than likes, but in all cases, the more likely you are to interact with a post, the higher up it will show in your News Feed. Two thousand kinds of data (or “features” in the industry parlance) get smelted in Facebook’s machine-learning system to make those predictions.

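To make that mechanism concrete, here’s a minimal sketch of engagement-weighted ranking. Everything in it is an illustrative assumption: the `Post` fields, the weights, and the three-feature scoring are mine, not Facebook’s. All the piece tells us is that shares outweigh comments, which outweigh likes, and that the real system blends some two thousand features.

```python
from dataclasses import dataclass

# Illustrative weights only: the article says shares outweigh comments,
# which outweigh likes. These exact numbers are made up for the sketch.
W_SHARE, W_COMMENT, W_LIKE = 3.0, 2.0, 1.0

@dataclass
class Post:
    title: str
    p_like: float     # predicted probability you'll like the post
    p_comment: float  # predicted probability you'll comment on it
    p_share: float    # predicted probability you'll share it

def engagement_score(post: Post) -> float:
    """Collapse the predicted interactions into one ranking score."""
    return (W_SHARE * post.p_share
            + W_COMMENT * post.p_comment
            + W_LIKE * post.p_like)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed by expected engagement, highest first."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Nuanced policy analysis", p_like=0.10, p_comment=0.02, p_share=0.01),
    Post("Outrage bait", p_like=0.30, p_comment=0.20, p_share=0.15),
])
for post in feed:
    print(f"{engagement_score(post):.2f}  {post.title}")
```

Notice what a scorer like this never asks: whether a story is true, only how likely you are to interact with it. That’s the bias I mean.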
Spreading false information on these platforms is a problem with a feasible solution. But the larger effects Madrigal describes are the more worrisome ones, and they have far less obvious answers.