Facebook’s Crisis Management Algorithm Runs on Outrage

Last year, a Facebook user in Sri Lanka posted an angry message to the social network. “Kill all the Muslim babies without sparing even an infant,” the person wrote in Sinhala, the language of the country’s Buddhist majority. “F—ing dogs!”

The post went up early in 2018, in white text and on one of the playful pink and purple backgrounds that Facebook Inc. began offering in 2016 to encourage its users to share more with one another. The sentiment about killing Muslims got 30 likes before someone else found it troubling enough to click the “give feedback” button instead. The whistleblower selected the option for “hate speech,” one of nine possible categories for objectionable content on Facebook.

For years, nonprofits in Sri Lanka had warned that Facebook posts were playing a role in escalating ethnic tensions between the Sinhalese Buddhist majority and the country’s Muslim minority, but the company ignored them. It took six days for Facebook to respond to the hate speech report. “Thanks for the feedback,” the company told the whistleblower, who posted the response to Twitter. The content, Facebook continued, “doesn’t go against one of our specific Community Standards.”

The post stayed online, part of a wave of calls for violence against Muslims that flooded the network last year. False rumors circulated widely on Facebook claiming Muslims were putting sterilization pills in Buddhists’ food. In late February 2018 a mob attacked a Muslim restaurant owner in Ampara, a small town in eastern Sri Lanka. He survived, but there were more riots in the midsize city of Kandy the following week, resulting in two deaths before the government stepped in, taking measures that included ordering Facebook offline for three days.

The shutdown got the company’s attention. It appointed Jessica Leinwand, a lawyer who served in the Obama White House, to figure out what had gone wrong. Her conclusion: Facebook needed to rethink its permissive attitude toward misinformation. Before the riots in Sri Lanka, the company had tolerated fake news and misinformation as a matter of policy. “There are real concerns with a private company determining truth or falsity,” Leinwand says, summing up the thinking.

But as she began looking into what had happened in Sri Lanka, Leinwand realized the policy needed a caveat. Starting that summer, Facebook would remove certain posts in some high-risk countries, including Sri Lanka, but only if they were reported by local nonprofits and would lead to “imminent violence.” When Facebook saw a similar string of sterilization rumors in June, the new process seemed to work. That, says Leinwand, was “personally gratifying”—a sign that Facebook was capable of policing its platform.

But is it? It’s been almost exactly a year since news broke that Facebook had allowed the personal data of tens of millions of users to be shared with Cambridge Analytica, a consulting company affiliated with Donald Trump’s 2016 presidential campaign. Privacy breaches are hardly as serious as ethnic violence, but the ordeal did mark a palpable shift in public awareness about Facebook’s immense influence. Plus, it followed a familiar pattern: Facebook knew about the slip-up, ignored it for years, and, when exposed, tried to downplay it with a handy phrase that Chief Executive Officer Mark Zuckerberg repeated ad nauseam in his April congressional hearings: “We are taking a broader view of our responsibility.” He struck a similar note with a 3,000-word blog post in early March that promised the company would focus on private communications, attempting to solve Facebook’s trust problem while acknowledging that the company’s apps still contain “terrible things like child exploitation, terrorism, and extortion.”

If Facebook wants to stop those things, it will have to get a better handle on its 2.7 billion users, whose content powers its wildly profitable advertising engine. The company’s business depends on sifting through that content and showing users posts they’re apt to like, which has often had the side effect of amplifying fake news and extremism. Facebook made Leinwand and other executives available for interviews with Bloomberg Businessweek to argue that it’s making progress.

Unfortunately, the reporting system they described, which relies on low-wage human moderators and software, remains slow and under-resourced. Facebook could afford to pay its moderators more, hire more of them, or impose far more stringent rules on what users can post—but any of those steps would cut into the company’s revenue and profits. Instead, it has adopted a reactive posture, making rules only after problems appear. The rules are helping, but critics say Facebook needs to be much more proactive.

“The whole concept that you’re going to find things and fix them after they’ve gone into the system is flawed—it’s mathematically impossible,” says Roger McNamee, one of Facebook’s early investors and, now, its loudest critic. McNamee, who recently published a book titled Zucked, argues that because the company’s ability to offer personalized advertising is dependent on collecting and processing huge quantities of user data, it has a strong disincentive to limit questionable content. “The way they’re looking at this, it’s just to avoid fixing problems inherent with the business model,” he says.

Source: https://www.bloomberg.com/features/2019-facebook-neverending-crisis/?srnd=technology-vp