What Facebook Did to American Democracy

And why it was so hard to see it coming

Alexis C. Madrigal, Oct 12, 2017 – The Atlantic

In the media world, as in so many other realms, there is a sharp discontinuity in the timeline: before the 2016 election, and after.

Things we thought we understood—narratives, data, software, news events—have had to be reinterpreted in light of Donald Trump’s surprising win as well as the continuing questions about the role that misinformation and disinformation played in his election.

Tech journalists covering Facebook had a duty to report what was happening before, during, and after the election. Reporters tried to see past their often liberal political orientations and the unprecedented actions of Donald Trump to understand how 2016 was playing out on the internet. Every component of the chaotic digital campaign has been reported on, here at The Atlantic and elsewhere: Facebook’s enormous distribution power for political information, rapacious partisanship reinforced by distinct media information spheres, the increasing scourge of “viral” hoaxes and other kinds of misinformation that could propagate through those networks, and the Russian information-ops agency.

But no one delivered the synthesis that could have tied together all these disparate threads. It’s not that this hypothetical perfect story would have changed the outcome of the election. The real problem—for all political stripes—is understanding the set of conditions that led to Trump’s victory. The informational underpinnings of democracy have eroded, and no one has explained precisely how.

* * *

We’ve known since at least 2012 that Facebook was a powerful, non-neutral force in electoral politics. In that year, a combined University of California, San Diego and Facebook research team led by James Fowler published a study in Nature, which argued that Facebook’s “I Voted” button had driven a small but measurable increase in turnout, primarily among young people.

Rebecca Rosen’s 2012 story, “Did Facebook Give Democrats the Upper Hand?,” relied on new research from Fowler et al. about the presidential election that year. Again, the conclusion of their work was that Facebook’s get-out-the-vote message could have driven a substantial chunk of the increase in youth voter participation in the 2012 general election. Fowler told Rosen that it was “even possible that Facebook is completely responsible” for the youth voter increase. And because young people vote Democratic at a higher rate than the general population, the net effect of Facebook’s GOTV effort would have been to help the Dems.

The research showed that a small design change by Facebook could have electoral repercussions, especially with America’s electoral-college format in which a few hotly contested states have a disproportionate impact on the national outcome. And the pro-liberal effect it implied became enshrined as an axiom of how campaign staffers, reporters, and academics viewed social media.

In June 2014, Harvard Law scholar Jonathan Zittrain wrote an essay in New Republic called, “Facebook Could Decide an Election Without Anyone Ever Finding Out,” in which he called attention to the possibility of Facebook selectively depressing voter turnout. (He also suggested that Facebook be seen as an “information fiduciary,” charged with certain special roles and responsibilities because it controls so much personal data.)

In late 2014, The Daily Dot called attention to an obscure Facebook-produced case study on how strategists defeated a statewide measure in Florida by relentlessly focusing Facebook ads on Broward and Dade counties, Democratic strongholds. Working with a tiny budget that would have allowed them to send a single mailer to just 150,000 households, the digital-advertising firm Chong and Koster was able to obtain remarkable results. “Where the Facebook ads appeared, we did almost 20 percentage points better than where they didn’t,” testified a leader of the firm. “Within that area, the people who saw the ads were 17 percent more likely to vote our way than the people who didn’t. Within that group, the people who voted the way we wanted them to, when asked why, often cited the messages they learned from the Facebook ads.”

In April 2016, Robinson Meyer published “How Facebook Could Tilt the 2016 Election” after a company meeting in which some employees apparently put the stopping-Trump question to Mark Zuckerberg. Based on Fowler’s research, Meyer reimagined Zittrain’s hypothetical as a direct Facebook intervention to depress turnout among non-college graduates, who leaned Trump as a whole.

Facebook, of course, said it would never do such a thing. “Voting is a core value of democracy and we believe that supporting civic participation is an important contribution we can make to the community,” a spokesperson said. “We as a company are neutral—we have not and will not use our products in a way that attempts to influence how people vote.”

They wouldn’t do it intentionally, at least.

As all these examples show, though, the potential for Facebook to have an impact on an election was clear for at least half a decade before Donald Trump was elected. But rather than focusing specifically on the integrity of elections, most writers, myself included (observers like Sasha Issenberg, Zeynep Tufekci, and Daniel Kreiss were exceptions), bundled electoral problems inside other, broader concerns like privacy, surveillance, tech ideology, media-industry competition, or the psychological effects of social media.

The same was true even of people inside Facebook. “If you’d come to me in 2012, when the last presidential election was raging and we were cooking up ever more complicated ways to monetize Facebook data, and told me that Russian agents in the Kremlin’s employ would be buying Facebook ads to subvert American democracy, I’d have asked where your tin-foil hat was,” wrote Antonio García Martínez, who managed ad targeting for Facebook back then. “And yet, now we live in that otherworldly political reality.”

Not to excuse us, but this was back on the Old Earth, too, when electoral politics was not the thing that every single person talked about all the time. There were other important dynamics to Facebook’s growing power that needed to be covered.

* * *

Facebook’s draw is its ability to give you what you want. Like a page, get more of that page’s posts; like a story, get more stories like that; interact with a person, get more of their updates. Facebook ranks the News Feed by the probability that you’ll like, comment on, or share a story. Shares are worth more than comments, and both are worth more than likes, but in all cases, the more likely you are to interact with a post, the higher it will appear in your News Feed. Two thousand kinds of data (or “features,” in industry parlance) get smelted in Facebook’s machine-learning system to make those predictions.
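
To make that mechanic concrete, here is a minimal sketch of engagement-weighted ranking in Python. The specific weights, probabilities, and post names are illustrative assumptions, not Facebook’s actual values; the only things taken from the description above are that the system predicts per-action probabilities from many features and that shares outweigh comments, which outweigh likes.

```python
# Illustrative sketch of engagement-based feed ranking.
# The weights and probabilities are invented for this example; only the
# ordering (share > comment > like) comes from the description above.

from dataclasses import dataclass

# Assumed relative weights: shares count more than comments,
# and both count more than likes.
WEIGHTS = {"like": 1.0, "comment": 4.0, "share": 8.0}

@dataclass
class Post:
    post_id: str
    # In the real system these probabilities would come from a
    # machine-learning model fed thousands of "features"; here they
    # are simply supplied by hand.
    p_like: float
    p_comment: float
    p_share: float

    def engagement_score(self) -> float:
        """Expected engagement value of showing this post to this user."""
        return (WEIGHTS["like"] * self.p_like
                + WEIGHTS["comment"] * self.p_comment
                + WEIGHTS["share"] * self.p_share)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order candidate posts by predicted engagement, highest first."""
    return sorted(posts, key=Post.engagement_score, reverse=True)

if __name__ == "__main__":
    candidates = [
        Post("friend-photo", p_like=0.30, p_comment=0.05, p_share=0.01),
        Post("partisan-news-story", p_like=0.10, p_comment=0.08, p_share=0.12),
        Post("brand-update", p_like=0.02, p_comment=0.01, p_share=0.005),
    ]
    for post in rank_feed(candidates):
        print(f"{post.post_id}: score={post.engagement_score():.2f}")
```

The design choice to notice is that the score rewards whatever provokes interaction; nothing in it measures whether a post is accurate, fair, or good for the person reading it.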

What’s crucial to understand is that, from the system’s perspective, success means correctly predicting what you’ll like, comment on, or share. That’s what matters. People call this “engagement.” There are other factors, as Slate’s Will Oremus noted in a rare story about the News Feed ranking team, but it’s unclear how much weight those factors actually receive, or for how long, as the system evolves. For example, one change that Facebook highlighted to Oremus in early 2016, taking into account how long people look at a story even if they don’t click it, was dismissed in a May 2017 technical talk by Lars Backstrom, the VP of engineering in charge of News Feed ranking, as a “noisy” signal that is “biased in a few ways,” making it “hard to use.”

Facebook’s engineers do not want to introduce noise into the system, because the News Feed, this machine for generating engagement, is Facebook’s most important technical system. Its success at predicting what you’ll like is why users spend an average of more than 50 minutes a day on the site, and why even one of the creators of the “like” button worries about how well the site captures attention. News Feed works really well.

But as far as “personalized newspapers” go, this one’s editorial sensibilities are limited. Most people are far less likely to engage with viewpoints that they find confusing, annoying, incorrect, or abhorrent. And this is true not just in politics, but in the broader culture.

That this could be a problem was apparent to many. Eli Pariser’s The Filter Bubble, which came out in the summer of 2011, became the most widely cited distillation of the effects Facebook and other internet platforms could have on public discourse.

Pariser began researching the book when he noticed that conservative people, whom he’d befriended on the platform despite his left-leaning politics, had disappeared from his News Feed. “I was still clicking my progressive friends’ links more than my conservative friends’—and links to the latest Lady Gaga videos more than either,” he wrote. “So no conservative links for me.”
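
Pariser’s anecdote describes a feedback loop: what you click shifts what you’re shown, which shifts what you click next. Below is a toy simulation of that loop. The click rates and the simple re-weighting rule are made-up assumptions for illustration, not Facebook’s actual mechanics, but they show how one category of link can all but vanish from a feed.

```python
# Toy simulation of a personalization feedback loop. Illustrative only:
# the click rates and re-weighting rule are assumptions, not Facebook's.

import random

random.seed(0)

# How often this hypothetical user clicks each kind of link when shown.
CLICK_RATE = {"progressive": 0.30, "conservative": 0.10, "celebrity": 0.50}

# The feed starts by showing all three kinds of links equally.
weights = {kind: 1.0 for kind in CLICK_RATE}

def show_feed(weights, n_items=20):
    """Sample a day's feed of n_items according to the current weights."""
    kinds = list(weights)
    return random.choices(kinds, weights=[weights[k] for k in kinds], k=n_items)

for day in range(30):
    for kind in show_feed(weights):
        if random.random() < CLICK_RATE[kind]:
            weights[kind] += 0.1   # clicked: show more of this
        else:
            weights[kind] *= 0.97  # ignored: show a little less

total = sum(weights.values())
for kind, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{kind}: {w / total:.0%} of the feed")
```

Run for a month of simulated feeds, the rarely clicked category shrinks toward a sliver of what is shown, which is the “no conservative links for me” effect in miniature.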

Throughout the book, he traces the many potential problems that the “personalization” of media might bring. Most germane to this discussion, he raised a simple question: if every one of the billion News Feeds is different, how can anyone understand what other people are seeing and responding to?

“The most serious political problem posed by filter bubbles is that they make it increasingly difficult to have a public argument. As the number of different segments and messages increases, it becomes harder and harder for the campaigns to track who’s saying what to whom,” Pariser wrote. “How does a [political] campaign know what its opponent is saying if ads are only targeted to white Jewish men between 28 and 34 who have expressed a fondness for U2 on Facebook and who donated to Barack Obama’s campaign?”

This did, indeed, become an enormous problem. When I was editor in chief of Fusion, we set about trying to track the “digital campaign” with several dedicated people. What we quickly realized was that there was both too much data, in the noisiness of all the different posts by the various candidates and their associates, and too little. Targeting made it impossible to track the actual messaging the campaigns were paying for. On Facebook, the campaigns could show ads only to the people they targeted, so we couldn’t see the messages that were actually reaching people in battleground areas. From the outside, knowing what ads were running on Facebook was a technical impossibility, and one the company had fought to keep intact.

Pariser suggests in his book that “one simple solution to this problem would simply be to require campaigns to immediately disclose all of their online advertising materials and to whom each ad is targeted.” That could happen in future campaigns.

Imagine if this had happened in 2016. If there were data sets of all the ads that the campaigns and others had run, we’d know a lot more about what actually happened last year. The Filter Bubble is obviously prescient work, but there was one thing that Pariser and most other people did not foresee: that Facebook would become completely dominant as a media distributor.

* * *

About two years after Pariser published his book, Facebook took over the news-media ecosystem. They’ve never publicly admitted it, but in late 2013, they began to serve ads inviting users to “like” media pages. This caused a massive increase in the amount of traffic that Facebook sent to media companies. At The Atlantic and other publishers across the media landscape, it was as if a tide were carrying us to new traffic records. Without hiring anyone else, without changing strategy or tactics, without publishing more, suddenly everything was easier.

Traffic to The Atlantic from Facebook.com increased, but at the time, within The Atlantic’s analytics, most of the new traffic did not look like it was coming from Facebook. It showed up as “direct/bookmarked” or some variation, depending on the software. It looked like what I had called “dark social” back in 2012. But as BuzzFeed’s Charlie Warzel pointed out at the time, and as I came to believe, it was primarily Facebook traffic in disguise. Between August and October of 2013, BuzzFeed’s “partner network” of hundreds of websites saw a 69 percent jump in traffic from Facebook.

At The Atlantic, we ran a series of experiments that showed, pretty definitively from our perspective, that most of the stuff that looked like “dark social” was, in fact, traffic coming from within Facebook’s mobile app. Across the landscape, it began to dawn on people who thought about these kinds of things: Damn, Facebook owns us. They had taken over media distribution.
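
For readers curious what such an experiment involves, here is a rough sketch of the classification step: bucketing pageview logs by referrer and user-agent. The log format and the user-agent markers are assumptions made for illustration (Facebook’s in-app browser is generally reported to include tokens like “FBAN” or “FBAV” in its user-agent string); this is not The Atlantic’s actual methodology.

```python
# Rough sketch: estimating how much "direct" traffic actually comes from
# Facebook's in-app browser. Field names and user-agent markers are
# assumptions for illustration, not The Atlantic's actual method.

# Markers commonly reported to appear in the Facebook in-app browser's
# user-agent string.
FACEBOOK_APP_MARKERS = ("FBAN", "FBAV", "FB_IAB")

def classify_hit(referrer: str, user_agent: str) -> str:
    """Bucket a single pageview into a traffic source."""
    if "facebook.com" in referrer:
        return "facebook-web"           # ordinary referred Facebook traffic
    if not referrer:
        if any(marker in user_agent for marker in FACEBOOK_APP_MARKERS):
            return "facebook-app"       # "dark social" that is really Facebook
        return "direct-or-dark-social"  # genuinely unattributable
    return "other-referred"

def summarize(hits):
    """Count pageviews per traffic-source bucket."""
    counts = {}
    for referrer, user_agent in hits:
        bucket = classify_hit(referrer, user_agent)
        counts[bucket] = counts.get(bucket, 0) + 1
    return counts

if __name__ == "__main__":
    sample_hits = [
        ("https://www.facebook.com/", "Mozilla/5.0 ..."),
        ("", "Mozilla/5.0 ... [FBAN/FBIOS;FBAV/123.0]"),
        ("", "Mozilla/5.0 (Macintosh) ..."),
        ("https://twitter.com/", "Mozilla/5.0 ..."),
    ]
    print(summarize(sample_hits))
```

The key idea is simply that a hit with no referrer is not necessarily “direct”: if the client identifies itself as an in-app browser, the visit can still be attributed.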

Why? This is a best guess, proffered by Robinson Meyer as it was happening: Facebook wanted to crush Twitter, which had drawn a disproportionate share of media and media-figure attention. Just as Instagram borrowed Snapchat’s “Stories” to help crush Snapchat’s growth, Facebook decided it needed to own “news” to take the wind out of the newly IPO’d Twitter.

The first sign that this new system had some kinks came with “Upworthy-style” headlines. (And you’ll never guess what happened next!) Things didn’t just go kind of viral; they went ViralNova, a site which, like Upworthy itself, Facebook eventually smacked down. Many of the new sites, like Upworthy, which was cofounded by Pariser, had a progressive bent.

Less noticed was that a right-wing media was developing in opposition to and alongside these left-leaning sites. “By 2014, the outlines of the Facebook-native hard-right voice and grievance spectrum were there,” The New York Times’ media and tech writer John Herrman told me, “and I tricked myself into thinking they were a reaction/counterpart to the wave of soft progressive/inspirational content that had just crested. It ended up a Reaction in a much bigger and destabilizing sense.”

The other sign of algorithmic trouble was the wild swings that Facebook Video underwent. In the early days, just about any old video was likely to generate many, many, many views; the numbers were insane. Just as an example, a Fortune article noted that BuzzFeed’s video views “grew 80-fold in a year, reaching more than 500 million in April.” Suddenly, all kinds of video (good, bad, and ugly) were doing one, two, three million views.

As with news, Facebook’s video push was a direct assault on a competitor, YouTube. Videos changed the dynamics of the News Feed for individuals, for media companies, and for anyone trying to understand what the hell was going on.

Individuals were suddenly inundated with video. Media companies, despite having no business model for it, were forced to crank out video somehow or risk their pages and brands losing relevance as video posts crowded others out.

And on top of all that, scholars and industry observers were used to looking at articles to understand how information was flowing. Now, by far the most-viewed media objects on Facebook, and therefore on the internet, were videos with no transcripts and no centralized repository. In the early days, many successful videos were simply “freebooted” (i.e., stolen) from other places, or reposts. All of this served to confuse and obfuscate the transport mechanisms for information and ideas on Facebook.

Amid this messy, chaotic, dynamic situation, a new media rose up through the Facebook burst to occupy the big filter bubbles. On the right, Breitbart became the center of a new conservative network. A study of 1.25 million election news articles found that “a right-wing media network anchored around Breitbart developed as a distinct and insulated media system, using social media as a backbone to transmit a hyper-partisan perspective to the world.”

Breitbart, of course, also lent its chief, Steve Bannon, to the Trump campaign, creating another feedback loop between the candidate and a rabid partisan press. Through 2015, Breitbart grew from a medium-sized site with a small Facebook page of 100,000 likes into a powerful force shaping the election, with almost 1.5 million likes. In the key metric for Facebook’s News Feed, its posts got 886,000 interactions from Facebook users in January. By July, Breitbart had surpassed The New York Times’ main account in interactions. By December, it was doing 10 million interactions per month, about 50 percent of Fox News, which had 11.5 million likes on its main page. Breitbart’s audience was hyper-engaged.

There is no precise equivalent to the Breitbart phenomenon on the left. Rather, the big news organizations are classified as center-left, basically, with fringier left-wing sites showing far smaller followings than Breitbart commands on the right.

And this new, hyperpartisan media created the perfect conditions for another dynamic that influenced the 2016 election: the rise of fake news.

[Figure: Sites by partisan attention (Yochai Benkler, Robert Faris, Hal Roberts, and Ethan Zuckerman)]

* * *
