You are what you consume: Facebook v. Twitter

You are what you read, watch, and listen to. The content you consume changes how you think about the world, and determines what topics you’re aware of and concerned about. Over the past century, countless thinkers have explored this idea from a variety of perspectives.

Marshall McLuhan focused on the media type (e.g. books vs. television), and asserted that the medium through which content is delivered changes how the content is encoded by the creator and decoded by the recipient. More recently, Nicholas Carr argued that digital media erodes our ability to focus and follow complicated arguments. Eli Pariser coined the term *filter bubble* to describe the way social media is designed to show us content we already agree with—clustering us into like-minded groups infrequently exposed to ideas that challenge our existing attitudes and beliefs.

But what if social media, the same technology that helped create today’s highly polarized political environment, could be used to reverse the trend? What if you could assemble a custom feed of diverse thinkers representing an eclectic range of voices from across the political spectrum, or across whatever topics you’re into? And since your thoughts are influenced by the content you consume, this could make your thinking more inclusive of a range of views. It’s a personalized news feed curated directly by you, rather than by Facebook’s engagement algorithms.

That’s how I use Twitter. I follow an eclectic mix of artists, journalists, comedians, entrepreneurs and startup influencers, and political thinkers from both sides. When I open my Twitter homepage, I’m exposed to views I agree with and views I do not. It’s a way to take me out of my bubble every once in a while, and to remind me that “the other side” often has good points to make and deeply held beliefs to defend.

I suppose I could use Facebook to achieve a similar result. But in my experience, this isn’t how that service is used. Facebook is more for private, personal news and achievements; people seem acutely self-conscious when posting there. Twitter is more free-form, public, and informal. Twitter starts with the assumption that you’ll follow people you might not know (e.g. famous people); Facebook is based on precisely the opposite premise.

And really, you could achieve this kind of thought diversity by reading different books, picking up magazines from “the other side” every once in a while, etc. But the cost of engagement is lower on Twitter; all you have to do is click the “follow” button.

From “War of the Worlds” to Benghazi

A recent New Yorker article by Adrian Chen about fake news begins with my favorite myth: that a 1938 radio broadcast of Orson Welles’s *War of the Worlds* caused a mass panic. (It very likely did not.)

Next, Chen pivots to a more contemporary concern about the truthfulness of news content: the election of Trump, and Facebook/Twitter’s role in it. Much has been written about this topic. (Here are some of my favorites: Stratechery, Nieman Lab, Wired, Bloomberg).

What the hot takes I’ve read so far seem to miss is that we’re framing this as a computer science problem: since technology created the problem, technology can fix it, too.

Chen:

It’s possible, though, that this approach comes with its own form of myopia. Neil Postman, writing a couple of decades ago, warned of a growing tendency to view people as computers, and a corresponding devaluation of the “singular human capacity to see things whole in all their psychic, emotional and moral dimensions.” A person does not process information the way a computer does, flipping a switch of “true” or “false.” One rarely cited Pew statistic shows that only four per cent of American Internet users trust social media “a lot,” which suggests a greater resilience against online misinformation than overheated editorials might lead us to expect. Most people seem to understand that their social-media streams represent a heady mixture of gossip, political activism, news, and entertainment. You might see this as a problem, but turning to Big Data-driven algorithms to fix it will only further entrench our reliance on code to tell us what is important about the world—which is what led to the problem in the first place. Plus, it doesn’t sound very fun.

As Chen explains later in the piece, automated solutions to the “fake news problem” also lend themselves to manipulation (e.g. people reporting news they don’t like as fake) and to claims of bias directed toward the tech companies themselves.

While I agree about the dangers of automated solutions to the fake news problem, I think the tech-rooted discussion also misses a larger issue with social media and the ways it’s changing how we interact with the world: the algorithms themselves, and the *types* of news they promote.

Facebook and Twitter are optimized for engagement, a bias that shapes what you see when you use those platforms.

Alexis C. Madrigal:

Facebook’s draw is its ability to give you what you want. Like a page, get more of that page’s posts; like a story, get more stories like that; interact with a person, get more of their updates. The way Facebook determines the ranking of the News Feed is the probability that you’ll like, comment on, or share a story. Shares are worth more than comments, which are both worth more than likes, but in all cases, the more likely you are to interact with a post, the higher up it will show in your News Feed. Two thousand kinds of data (or “features” in the industry parlance) get smelted in Facebook’s machine-learning system to make those predictions.
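To make that mechanism concrete, here’s a minimal sketch of engagement-weighted ranking along the lines Madrigal describes. The weights, feature names, and numbers are all hypothetical; the real system reportedly weighs thousands of signals, but the core idea is just “sort by predicted interaction”:

```python
# A toy sketch of an engagement-optimized feed. All weights and
# probabilities here are made up for illustration; the real system
# combines thousands of "features" in a machine-learning model.

SHARE_WEIGHT = 3.0    # shares are worth more than comments...
COMMENT_WEIGHT = 2.0  # ...which are worth more than likes
LIKE_WEIGHT = 1.0

def engagement_score(post: dict) -> float:
    """Combine predicted interaction probabilities into one rank score."""
    return (LIKE_WEIGHT * post["p_like"]
            + COMMENT_WEIGHT * post["p_comment"]
            + SHARE_WEIGHT * post["p_share"])

def rank_feed(posts: list[dict]) -> list[dict]:
    """The more likely you are to interact, the higher a post appears."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    {"id": "vacation-photos", "p_like": 0.6, "p_comment": 0.1, "p_share": 0.02},
    {"id": "outrage-article", "p_like": 0.3, "p_comment": 0.3, "p_share": 0.25},
])
print([post["id"] for post in feed])  # ['outrage-article', 'vacation-photos']
```

Notice what the sketch never asks: whether a post is true, or good for you. A post you’ll angrily share outranks one you’ll quietly appreciate.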

Spreading false information on the platform itself is a problem that has a feasible solution. But the larger effects Madrigal talks about are the more worrisome ones, and have far less obvious answers.

When Free Data Ain’t Free

Wired has a smart take on an idea that sounds good at first: unlimited data usage on your phone for certain apps.

T-Mobile has announced plans that allow access to Twitter, Instagram, and others for free. (Well, included with your monthly charges.)

Virgin Mobile has plans with unlimited access to just Facebook, Twitter, Instagram, or Pinterest for a flat monthly fee.

But as with net neutrality, this bundling/unbundling (depending on how you look at it) could stifle innovation:

In [Fred] Wilson’s comparison, zero rating makes apps more like TV by effectively turning specific services into channels. Under the Sprint deal, you get the Facebook channel, the Twitter channel, and so on. To get the full-on open internet—which we used to simply call the internet—you must pay more. For Wilson, this amounts to a kind of front-end discrimination analogous to efforts to undermine net neutrality on the back-end. Some apps or services get preferential treatment, while others are left to wither through lack of equal access.

As Wilson explains, this makes zero rating an existential threat to what he sees as a period of more egalitarian access that allowed the internet economy to flourish. “There was a brief moment in the tech market from 1995 to now where anyone could simply attach a server to the internet and be in business,” Wilson writes in response to a commenter. “That moment is coming to an end.”

Read:
Free Mobile Data Plans Are Going to Crush the Startup Economy {by marcus wohlsen; wired}.

“With Big Data Comes Big Responsibility”

I think we desperately need to pay more attention to the companies that are manipulating us and selling our data while disclosing these practices in the middle of rarely read terms-of-service agreements.

Om Malik is on the same page:

Forbes tells us that even seemingly benign apps like Google-owned Waze, Moovit or Strava are selling our activity and behavior data to someone somewhere. Sure they aren’t selling any specific person’s information, but who is to say that they won’t do it in the future or will use the data collected differently.

And this uncertainty should be sparking a debate.

It is important for us to talk about the societal impact of what Google is doing or what Facebook can do with all the data. If it can influence emotions (for increased engagements), can it compromise the political process? What more, today Facebook has built a facial recognition system that trumps that of FBI — think about that for a minute.

As for me, the NSA revelations have prompted me to change my digital ways. I removed almost all of my information from Facebook. It took hours. I then deleted my Google account, although I maintain one under a pseudonym so I can easily log in to websites that require it. I also log in to Waze with a pseudonym. (Fake Name Generator, you are awesome.)

These are imperfect solutions, and I am still engaging with these companies and giving them my data; I recognize that. And I still interact on Instagram and Twitter. But I feel as though this is as far as I am willing to go, and I am now engaging with these companies in a more deliberate manner. Which is what we need more of.

Read:
With Big Data Comes Big Responsibility {om malik}.

Friday Link List

It’s been a while since I’ve done one of these, so there’s going to be a mix of newbies and oldies.

1. The Builder’s High {rands in repose; via daring fireball}

Is there a Facebook update that compares to building a thing? No, but I’d argue that 82 Facebook updates, 312 tweets, and all those delicious Instagram updates are giving you the same chemical impression that you’ve accomplished something of value. Whether it’s all the consumption or the sense of feeling busy, these micro-highs will never equal the high when you’ve actually built.

2. Lessons in Learning How to Program {Inc.}

Learning to program is a humbling experience. It forces you to see how well you actually understand things in the real world. Do I really know sandwiches well enough to explain them to someone (or, in the case of a computer, something) who has never heard of them? Yeah, probably. But what about more complicated things?

3. Absence of Like {nicholas carr}

The choice is not between Like and Dislike but rather between Like and Absence of Like, the latter being a catch-all category of non-affiliation encompassing not only Dislike but also Not Sure and No Opinion and Don’t Care and Ambivalent and Can’t Be Bothered and Not in the Mood to Deal with This at the Moment and I Hate Facebook — the whole panoply, in other words, of states of non-affiliation with particular things or beings. By presenting a clean binary choice — On/Off; True/False — the Like button serves the overarching goal of bringing human communication and machine communication into closer harmony.
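Carr’s “catch-all category of non-affiliation” is easy to picture in code. Here’s a toy sketch (the state names come straight from his list; everything else is hypothetical) of the panoply of human states that a binary Like button flattens into a single bit:

```python
# A toy model of Carr's point: many human states, one machine-friendly bit.
from enum import Enum, auto

class Affiliation(Enum):
    LIKE = auto()
    DISLIKE = auto()
    NOT_SURE = auto()
    NO_OPINION = auto()
    DONT_CARE = auto()
    AMBIVALENT = auto()
    CANT_BE_BOTHERED = auto()
    NOT_IN_THE_MOOD = auto()
    I_HATE_FACEBOOK = auto()

def like_button(state: Affiliation) -> bool:
    """Everything that isn't Like collapses into Absence of Like."""
    return state is Affiliation.LIKE

print(like_button(Affiliation.AMBIVALENT))  # False: the same bit as Dislike
```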