Facebook lets companies target job ads by age. Is that really so bad?

The New York Times and ProPublica published a story a couple of weeks ago detailing the extent to which employers can target their recruiting ads on Facebook.

Here’s the lede:

A few weeks ago, Verizon placed an ad on Facebook to recruit applicants for a unit focused on financial planning and analysis. The ad showed a smiling, millennial-aged woman seated at a computer and promised that new hires could look forward to a rewarding career in which they would be “more than just a number.”

Some relevant numbers were not immediately evident. The promotion was set to run on the Facebook feeds of users 25 to 36 years old who lived in the nation’s capital, or had recently visited there, and had demonstrated an interest in finance. For a vast majority of the hundreds of millions of people who check Facebook every day, the ad did not exist.
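For concreteness, here is roughly what an audience spec like the one described above might look like to an advertiser. This is a hypothetical sketch; the field names are illustrative, not Facebook's actual Ads API:

```python
# Hypothetical audience-targeting spec for the ad described above.
# Field names are illustrative, not Facebook's actual Ads API.
audience = {
    "age_min": 25,
    "age_max": 36,
    "locations": ["Washington, DC"],  # lived there or recently visited
    "interests": ["finance"],
}

def sees_ad(user: dict) -> bool:
    """For anyone outside every filter, the ad simply does not exist."""
    return (audience["age_min"] <= user["age"] <= audience["age_max"]
            and user["location"] in audience["locations"]
            and any(i in user["interests"] for i in audience["interests"]))
```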

Setting aside the issue of legality (age-based ad targeting may violate the Age Discrimination in Employment Act of 1967, the article notes), I’m struggling to see how this is necessarily a bad thing. After all, if a recruiter is biased against older workers, they’re not likely to hire an older candidate once they infer her age from her resume. Many hiring managers have a range of expected ages for a given position based on the required experience, the age range of others on the team, and so on. I’m not arguing that this is a good thing; everyone deserves a fair shake, no matter their age. But blaming Facebook for a preexisting human bias doesn’t advance the cause either.

Further, the issue at hand is only that companies are targeting their ads for open positions at people in certain age ranges. Many companies require all open positions to be posted on their careers page, and the majority of such positions also appear on job boards such as Indeed.com. It’s not as though the jobs are being hidden; rather, certain people aren’t being actively targeted.

Of course this is easy for a 30-something well-versed in technology to say. But the outrage over this issue robs job seekers of their agency and gives Facebook too much credit. If your idea of a job search is to browse your Facebook feed for ads from companies that want to recruit you, I don’t think you’re going to have much luck. It’s far more effective to seek out companies and career opportunities that match your objectives than to wait for them to come to you via Facebook ads.

Image via Flickr.

You are what you consume: Facebook v. Twitter

You are what you read, watch, and listen to. The content you consume changes how you think about the world, and determines what topics you’re aware of and concerned about. Over the past century, countless thinkers have explored this idea from a variety of perspectives.

Marshall McLuhan focused on the media type (e.g., books vs. television), and asserted that the medium through which content is delivered changes how the content is encoded by the creator and decoded by the recipient. More recently, Nicholas Carr argued that digital media erodes our ability to focus and follow complicated arguments. Eli Pariser coined the term *filter bubble* to describe the way social media is designed to show us content we already agree with, clustering us into like-minded groups rarely exposed to ideas that challenge our existing attitudes and beliefs.

But what if social media, the same technology that helped create today’s highly polarized political environment, could be used to reverse the trend? What if you could assemble a custom feed of diverse thinkers representing an eclectic range of voices from across the political spectrum, or from whatever field interests you? Since your thoughts are influenced by the content you consume, such a feed could make your thinking more inclusive of a range of views. It’s a personalized news feed curated directly by you, rather than by Facebook’s engagement algorithms.

That’s how I use Twitter. I follow an eclectic mix of artists, journalists, comedians, entrepreneurs and startup influencers, and political thinkers from both sides. When I open my Twitter homepage, I’m exposed to views I agree with and views I don’t. It’s a way to step out of my bubble every once in a while, and a reminder that “the other side” often has good points to make and deeply held beliefs to defend.

I suppose I could use Facebook to achieve a similar result. But in my experience, that isn’t how the service is used. Facebook is more for private, personal news and achievements; people seem acutely self-conscious when posting there. Twitter is more free-form, public, and informal. Twitter starts with the assumption that you’ll follow people you don’t know (famous people, for instance); Facebook is based on precisely the opposite premise.

And really, you could achieve this kind of thought diversity by reading different books, picking up magazines from “the other side” every once in a while, and so on. But the cost of engagement is lower on Twitter: all you have to do is click the “follow” button.

From “War of the Worlds” to Benghazi

A recent New Yorker article by Adrian Chen about fake news begins with my favorite myth: that a 1938 radio broadcast of Orson Welles’s *War of the Worlds* caused a mass panic. (It very likely did not.)

Next, Chen pivots to a more contemporary concern about the truthfulness of news content: the election of Trump, and the role Facebook and Twitter played in it. Much has been written about this topic. (Here are some of my favorites: Stratechery, Nieman Lab, Wired, Bloomberg.)

What the hot takes I’ve read so far seem to miss is that we’re treating this as a computer science problem. That is, since technology created the problem, the thinking goes, technology can fix it, too.

Chen:

It’s possible, though, that this approach comes with its own form of myopia. Neil Postman, writing a couple of decades ago, warned of a growing tendency to view people as computers, and a corresponding devaluation of the “singular human capacity to see things whole in all their psychic, emotional and moral dimensions.” A person does not process information the way a computer does, flipping a switch of “true” or “false.” One rarely cited Pew statistic shows that only four per cent of American Internet users trust social media “a lot,” which suggests a greater resilience against online misinformation than overheated editorials might lead us to expect. Most people seem to understand that their social-media streams represent a heady mixture of gossip, political activism, news, and entertainment. You might see this as a problem, but turning to Big Data-driven algorithms to fix it will only further entrench our reliance on code to tell us what is important about the world—which is what led to the problem in the first place. Plus, it doesn’t sound very fun.

As Chen explains later in the piece, automated solutions to the “fake news problem” also lend themselves to manipulation (e.g., people reporting news they don’t like as fake) and to claims of bias directed at the tech companies themselves.

While I agree about the dangers of automated solutions to the fake news problem, I think the tech-rooted discussions also miss a larger issue with social media and the way it’s changing how we interact with the world: the algorithms themselves, and the *types* of news they promote.

Facebook and Twitter are optimized for engagement, a bias that shapes what you see whenever you use those platforms.

Alexis C. Madrigal:

Facebook’s draw is its ability to give you what you want. Like a page, get more of that page’s posts; like a story, get more stories like that; interact with a person, get more of their updates. The way Facebook determines the ranking of the News Feed is the probability that you’ll like, comment on, or share a story. Shares are worth more than comments, which are both worth more than likes, but in all cases, the more likely you are to interact with a post, the higher up it will show in your News Feed. Two thousand kinds of data (or “features” in the industry parlance) get smelted in Facebook’s machine-learning system to make those predictions.
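To make the mechanism concrete, here’s a toy sketch of engagement-weighted ranking. The weights and the three predicted probabilities are hypothetical stand-ins; as Madrigal notes, the real system learns its predictions from roughly two thousand features:

```python
# Toy engagement-ranking sketch. Weights and probabilities are hypothetical;
# the real system learns predictions from ~2,000 features per post.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    p_like: float     # predicted probability you'll like it
    p_comment: float  # predicted probability you'll comment
    p_share: float    # predicted probability you'll share

# Shares count more than comments, which count more than likes.
W_LIKE, W_COMMENT, W_SHARE = 1.0, 4.0, 16.0  # illustrative values

def score(p: Post) -> float:
    return W_LIKE * p.p_like + W_COMMENT * p.p_comment + W_SHARE * p.p_share

def rank_feed(posts: list[Post]) -> list[Post]:
    # Whatever you're most likely to engage with floats to the top.
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    Post("Sober policy explainer", p_like=0.05, p_comment=0.01, p_share=0.005),
    Post("Outrage bait", p_like=0.20, p_comment=0.10, p_share=0.06),
])
print(feed[0].text)  # "Outrage bait": engagement, not accuracy, sets the order
```

Nothing in a scoring function like this asks whether a post is true; it only asks whether you’ll react to it.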

Spreading false information on the platform itself is a problem that has a feasible solution. But the larger effects Madrigal describes are the more worrisome ones, and they have far less obvious answers.

When Free Data Ain’t Free

Wired has a smart take on an idea that sounds good at first: unlimited data usage on your phone for certain apps.

T-Mobile has announced plans that allow access to Twitter, Instagram, and others for free. (Well, included with your monthly charges.)

Virgin Mobile has plans with unlimited access to just Facebook, Twitter, Instagram, or Pinterest for a flat monthly fee.

But much like efforts to undermine net neutrality, this bundling/unbundling (depending on how you look at it) could stifle innovation:

In [Fred] Wilson’s comparison, zero rating makes apps more like TV by effectively turning specific services into channels. Under the Sprint deal, you get the Facebook channel, the Twitter channel, and so on. To get the full-on open internet—which we used to simply call the internet—you must pay more. For Wilson, this amounts to a kind of front-end discrimination analogous to efforts to undermine net neutrality on the back-end. Some apps or services get preferential treatment, while others are left to wither through lack of equal access.

As Wilson explains, this makes zero rating an existential threat to what he sees as a period of more egalitarian access that allowed the internet economy to flourish. “There was a brief moment in the tech market from 1995 to now where anyone could simply attach a server to the internet and be in business,” Wilson writes in response to a commenter. “That moment is coming to an end.”
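Mechanically, zero rating is just a carve-out in the carrier’s metering logic. A minimal sketch, with hypothetical hostnames and byte counts:

```python
# Toy metering sketch: bytes to zero-rated hosts don't count against the cap.
# Hostnames and byte counts are hypothetical.
ZERO_RATED = {"facebook.com", "twitter.com", "instagram.com", "pinterest.com"}

def billable_bytes(traffic: dict[str, int]) -> int:
    """Sum only the traffic that counts against the subscriber's data cap."""
    return sum(n for host, n in traffic.items() if host not in ZERO_RATED)

month = {
    "facebook.com": 2_000_000_000,     # browse all you want
    "scrappy-startup.io": 50_000_000,  # every byte counts
}
print(billable_bytes(month))  # 50000000: only the startup's traffic bills
```

The asymmetry Wilson worries about falls out directly: the incumbent’s traffic is free to the user, while the newcomer’s burns through the cap.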

Read: “Free Mobile Data Plans Are Going to Crush the Startup Economy,” by Marcus Wohlsen, Wired.