From “War of the Worlds” to Benghazi

A recent New Yorker article by Adrian Chen about fake news begins with my favorite media myth: that a 1938 radio broadcast of Orson Welles’s *War of the Worlds* caused a mass panic. (It very likely did not.)

Next, Chen pivots to a more contemporary concern about the truthfulness of news content: the election of Trump, and the role Facebook and Twitter played in it. Much has been written about this topic. (Here are some of my favorites: Stratechery, Nieman Lab, Wired, Bloomberg.)

What the hot takes I’ve read so far have in common is that they frame this as a computer science problem. That is, since technology created the problem, technology can fix it, too.

Chen:

It’s possible, though, that this approach comes with its own form of myopia. Neil Postman, writing a couple of decades ago, warned of a growing tendency to view people as computers, and a corresponding devaluation of the “singular human capacity to see things whole in all their psychic, emotional and moral dimensions.” A person does not process information the way a computer does, flipping a switch of “true” or “false.” One rarely cited Pew statistic shows that only four per cent of American Internet users trust social media “a lot,” which suggests a greater resilience against online misinformation than overheated editorials might lead us to expect. Most people seem to understand that their social-media streams represent a heady mixture of gossip, political activism, news, and entertainment. You might see this as a problem, but turning to Big Data-driven algorithms to fix it will only further entrench our reliance on code to tell us what is important about the world—which is what led to the problem in the first place. Plus, it doesn’t sound very fun.

As Chen explains later in the piece, automated solutions to the “fake news problem” also lend themselves to manipulation (e.g., people reporting news they don’t like as fake) and to claims of bias directed toward the tech companies themselves.

While I agree about the dangers of automated solutions to the fake news problem, I think the tech-rooted discussion also misses a larger issue with social media and the ways it’s changing how we interact with the world: the algorithms themselves, and the *types* of news they promote.

Facebook and Twitter are optimized for engagement, and that optimization is itself a bias that shapes what you see when you use those platforms.

Alexis C. Madrigal:

Facebook’s draw is its ability to give you what you want. Like a page, get more of that page’s posts; like a story, get more stories like that; interact with a person, get more of their updates. The way Facebook determines the ranking of the News Feed is the probability that you’ll like, comment on, or share a story. Shares are worth more than comments, which are both worth more than likes, but in all cases, the more likely you are to interact with a post, the higher up it will show in your News Feed. Two thousand kinds of data (or “features” in the industry parlance) get smelted in Facebook’s machine-learning system to make those predictions.
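
To make that mechanism concrete, here’s a minimal sketch of engagement-weighted ranking in Python. The weights, post names, and probabilities below are hypothetical stand-ins of my own, not Facebook’s actual values; per Madrigal, the real system feeds some two thousand features into a machine-learning model to make these predictions.

```python
# A toy engagement-weighted feed ranker. All weights and numbers here are
# hypothetical; they only illustrate the ordering logic Madrigal describes:
# shares outweigh comments, which outweigh likes.

WEIGHTS = {"share": 3.0, "comment": 2.0, "like": 1.0}  # hypothetical weights

def engagement_score(predictions):
    """Combine predicted interaction probabilities into a single ranking score.

    predictions maps an action name to the predicted probability that
    this user will take that action on this post.
    """
    return sum(WEIGHTS[action] * p for action, p in predictions.items())

posts = [
    {"id": "news-story",   "predictions": {"share": 0.05, "comment": 0.10, "like": 0.30}},
    {"id": "friend-photo", "predictions": {"share": 0.01, "comment": 0.25, "like": 0.60}},
]

# Sort the feed so the posts you're most likely to interact with come first.
ranked = sorted(posts, key=lambda p: engagement_score(p["predictions"]), reverse=True)
for post in ranked:
    print(post["id"], round(engagement_score(post["predictions"]), 2))
```

The point of the sketch is that nothing in the score measures truth or importance; a post rises simply because you’re predicted to react to it.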

The spread of false information on these platforms is a problem with a feasible solution. But the larger effects Madrigal talks about are the more worrisome ones, and they have far less obvious answers.

“With Big Data Comes Big Responsibility”

I think we desperately need to pay more attention to the companies that manipulate us and sell our data while disclosing these practices in the middle of rarely read terms-of-service agreements.

Om Malik is on the same page:

Forbes tells us that even seemingly benign apps like Google-owned Waze, Moovit or Strava are selling our activity and behavior data to someone somewhere. Sure they aren’t selling any specific person’s information, but who is to say that they won’t do it in the future or will use the data collected differently.

And this uncertainty should be sparking a debate.

It is important for us to talk about the societal impact of what Google is doing or what Facebook can do with all the data. If it can influence emotions (for increased engagements), can it compromise the political process? What more, today Facebook has built a facial recognition system that trumps that of FBI — think about that for a minute.

As for me, the NSA revelations have prompted me to change my digital ways. I removed almost all of my information from Facebook. It took hours. I then deleted my Google account, although I maintain one under a pseudonym so I can easily log in to websites that require it. I also log in to Waze with a pseudonym. (Fake name generator, you are awesome.)

These are imperfect solutions, and I am still engaging with these companies and giving them my data; I recognize that. And I still interact on Instagram and Twitter. But I feel as though this is as far as I am willing to go, and I am now engaging with these companies in a more deliberate manner, which is what we need more of.

Read *With Big Data Comes Big Responsibility*.

Maybe Social Won’t Stay After All

This excerpt from a story marking the 25th anniversary of the World Wide Web, featuring Kevin Kelly, one of the founders of Wired, makes me more optimistic about the future of online communication and collaboration:

[The early online message board called] The Well allowed the users great freedom to start their own topics, to write whatever they wanted to write and had a lot of interesting people, and so it become sort of an online salon or virtual community very, very quickly because we kind of, again, let the users direct everything. And from that experience very early I saw that this was, one thing, a sharing economy. People were just going overboard to help each other in a way that we hadn’t seen in a long time.

And the second thing is, is that almost immediately the virtual citizens demanded to meet face to face. We did monthly Well meetings. This was technology that wasn’t kind of like industrial or steam engine-like and mechanical and alien. This was more organic. This was more human-like. This was more like an Amish barn raising, and that was a big thing for us in shifting our idea of what technology could be itself. […]

Right now I think it’s a little bit of a phase, like an adolescent phase. I think young people tend to do things with obsessions and I think we’re kind of – the Internet was in its teenage adolescent phase and we became obsessed with some of this and I think as this generation gets older, I think they’ll be less obsessed with this and they’ll even out and round out and have actually more face to face interaction than they are right now.

What do you think? Will social media go away? Or morph into something different?

I think that will only happen when the costs (time, privacy, profile maintenance) outweigh the benefits (easy contact with friends and interests).

Why I’m Getting Closer to Closing My Facebook Account

OK, you know I probably won’t.

But this is gross:

Corporations may have more control over online speech today than the courts. Executives determine which videos, pictures and comments are permitted and what art is allowed. Their rules govern billions of posts across the globe each day.

“Our job is to manage the rules that determine what content is unacceptable on Facebook and also, obviously, what is acceptable,” [Facebook lawyer Judd] Hoffman says. His team determines what more than 1 billion people and businesses can and can’t say and do on Facebook. …

And Facebook bans copyright infringement and all sorts of speech that, in public, is protected by the First Amendment — things like nudity, hate speech, bullying and pornography.

From *Facebook’s Online Speech Rules Keep Users On A Tight Leash* {npr}.

And then there’s this bit from a recent Douglas Rushkoff interview {all things considered}:

In my life, it’s sort of the experience of being on Facebook and seeing everyone from my past suddenly back in my present, you know, and the inability to distinguish between people who may have been friends of mine in second grade and people who I’ve met just yesterday and people who are actually significant relationships. You know, that sense, that collapse of my whole life into one moment, where every ping, every vibration of my phone might just pull me out of whatever it is I’m doing into something else that seems somehow more pressing on the moment.

I won’t be closing my account any time soon, but I will be reconsidering how I use the service. And I’ll try to use it less.

It comes down to something Rushkoff talks a lot about, and that I want to try to do more of: being intentional about the media you consume, whether it’s Facebook or TV or blogs or books. The idea is to not be a passive consumer and to not take things at face value.

More on Douglas Rushkoff and Facebook on this blog.

That Ain’t Smart, That’s Creepy: Credit Score Edition

Another installment of my ongoing series about technology, privacy, and smartphones: *That Ain’t Smart, That’s Creepy*.

This one’s about how lenders are beginning to use social-networking data as a criterion for assessing an individual’s creditworthiness…

Applicants who type only in lower-case letters, or entirely in upper case, are less likely to repay loans, other factors being equal, says Douglas Merrill, founder of ZestFinance, an American online lender whose default rate is roughly 40% lower than that of a typical payday lender.

…and other assorted creepiness:

An online bank that opens in America this month will use Facebook data to adjust account holders’ credit-card interest rates. Based in New York, Movenbank will monitor messages on Facebook and cut interest rates for those who talk up the bank to friends. If any join, the referrer’s interest rate will drop further. Rates and fees will also drop if account holders spend prudently. Efforts to define customers “in a richer, deeper fashion” might eventually include raising rates for heavy gamblers, says Brett King, Movenbank’s founder.

*Using Social Media to Determine Creditworthiness* {the economist; via andrew sullivan}.
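
Out of curiosity, here’s a toy sketch of what that kind of rate-adjustment logic might look like. Every number, rule, and name below is hypothetical and my own; The Economist piece doesn’t disclose how Movenbank actually computes anything.

```python
# A hypothetical model of social-data-driven rate adjustment, loosely based
# on the behaviors The Economist describes. None of these numbers or rules
# come from the article; they only illustrate the idea.

BASE_RATE = 0.18  # hypothetical starting credit-card interest rate

def adjusted_rate(talked_up_bank, referrals_joined, spends_prudently, heavy_gambler):
    """Return an interest rate nudged up or down by social/behavioral signals."""
    rate = BASE_RATE
    if talked_up_bank:                 # praised the bank on Facebook
        rate -= 0.01
    rate -= 0.005 * referrals_joined   # a further cut for each friend who joins
    if spends_prudently:               # prudent spending lowers the rate, too
        rate -= 0.01
    if heavy_gambler:                  # the article floats raising rates for gamblers
        rate += 0.02
    return max(rate, 0.05)             # keep the rate above some floor

print(f"{adjusted_rate(True, 2, True, False):.3f}")   # enthusiastic referrer: 0.150
print(f"{adjusted_rate(False, 0, False, True):.3f}")  # heavy gambler: 0.200
```

That every input here is behavioral surveillance rather than anything on a credit report is exactly what makes it creepy.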