SAN FRANCISCO — In June, Mark Zuckerberg spoke about “community” with a gathering of influential Facebook users in Chicago.
It was an important moment for the 33-year-old chief executive of the social media company. He was promoting Facebook Groups, a product that millions of people on Facebook used to talk about shared interests, debate, discuss and maybe debate some more.
This type of activity, he believed, was one of the keys to his sprawling company’s future. The goal of Facebook, he told his audience, which included many Groups leaders, was to “give people the power to build community and bring the world closer together.”
Inside Mr. Zuckerberg’s company, however, there was already growing concern among employees that some of that content was having the opposite effect. Foremost among the offending material: posts and memes touching on hot-button issues like race, gender and sexuality that had been secretly created by Russian organizations with ties to the Kremlin to influence the 2016 presidential election.
Now there is an ongoing debate among Facebook employees over how to handle this so-called organic content, or posts from users that are not advertisements and can be freely shared across Facebook, according to a dozen current and former Facebook employees. These people spoke on condition of anonymity because they were prohibited by nondisclosure agreements from talking about the company.
On one side are employees who idealize Facebook’s lofty goals of unfettered speech and do not think the company should be in the business of censoring what its users have to say. On the other side are workers who worry that the company’s hands-off approach has already caused problems — ones that will grow worse if nothing is done.
“The algorithms Facebook sets up that prioritize and control what’s shown in its powerful newsfeed are a lot more consequential than the ads,” said Zeynep Tufekci, an associate professor at the University of North Carolina at Chapel Hill who closely follows social media and technology. “The company seems stuck between exercising this massive power as it wishes, but also terrified about the conversation about its power and how to handle it.”
Next week, Facebook’s general counsel will be among several tech industry executives expected to testify at a series of congressional hearings about the role the technology industry played in Russian interference in last year’s election.
Facebook has acknowledged that an organization with ties to the Kremlin purchased $100,000 worth of ads related to the election and has promised to crack down on such advertising.
Since Facebook disclosed the existence of those ads and posts with Russian ties last month, the company has tried to tamp down fears that it abetted interference in the election. It has also added rules meant to improve disclosure of political advertising, in an attempt to show users exactly who is behind the ads that run through their News Feeds.
And on Friday, the company began a test of new features designed to give users a better understanding of the people and organizations buying advertising on Facebook. That included providing users with a searchable database of ads being served to them.
But misleading ads were only a small component of the misinformation campaign.
Investigators believe the Internet Research Agency, a so-called troll farm that has been linked to the Kremlin, amassed enormous followings for various Facebook Pages that masqueraded as destinations for discussion about all sorts of issues, from the Black Lives Matter movement to gun ownership.
Aided by Facebook’s finely tuned ad-targeting tools, the Russian firm would pay to place posts in the News Feeds of users. The ad product, called a “promoted post,” was designed to look little different from the rest of the content flowing through the News Feed.
Users who responded positively to the advertisements were prompted to subscribe to related Facebook Pages or Groups run by the Russians. If they did, nonpaid, “organic” posts would begin to appear in their News Feeds. From there, the posts spread, shared and reshared across each user’s network of friends.
The tactic was effective. Some of the pages, like “Blacktivists,” which focused on racial issues, had more than 360,000 users who “liked” the page — even more than the main “Black Lives Matter” page.
Facebook is not the only big internet company wrestling with the issue. But at Mr. Zuckerberg’s company, the problem has been particularly troublesome, given how easy it is to spread messages to tens of millions of Facebook users.
Whether something is removed from Facebook is often dictated by its terms of service, which define standards of behavior on the network. Those standards prohibit posting nudity and threats of violence. But misleading users, even lying to them outright, isn’t necessarily against the rules. And that’s hard to police.
So far, Facebook has focused on the issue of authenticity and identity on the platform. The company removed hundreds of ads last month, not because of the content they contained, but because the Russians running the pages behind them did not disclose their real identities.
“We want people to be able to come to Facebook to talk about issues of public interest, and we know that means people will sometimes disagree and the issues they discuss will sometimes be controversial,” Monika Bickert, head of product policy and counterterrorism at Facebook, said in a statement. “That’s O.K. But it’s important to us that these conversations happen in an authentic way, meaning we have to be speaking as ourselves, with our real identities.”
That line of reasoning may not hold up for long, as Facebook is forced to deal with policy questions outside the United States. In Myanmar, Facebook is caught between the government and a persecuted Muslim minority, the Rohingya, who face a misinformation campaign on the platform, often in posts from top government leaders.
Facebook has said little publicly about the situation, but pressure on the company to respond is intensifying.
One of the solutions discussed internally at Facebook has been “whitelisting,” in which algorithms would decide which content makers would be allowed to publish or advertise on Facebook, according to two people familiar with the company’s internal deliberations. They have also discussed “blacklisting,” in which the algorithms would decide which content makers could not post.
But in closed-door meetings at Facebook’s Menlo Park, Calif., headquarters and in Washington, Facebook employees have expressed concern that such a move could have unintended consequences for publications and content makers that are not spreading false or divisive news.
Others worry that acting too hastily could establish precedents that would lead to a situation, for example, where human rights activists using Facebook to coordinate protests in Syria would be forced to identify themselves. They also worry that any effort to quash certain content in the United States could only aid censors in other countries.
As for a technical solution, some hope artificial intelligence can help Facebook sift fact from fiction. But today’s A.I. technology is not advanced enough to do the work on its own.
Perhaps unsurprisingly, Mr. Zuckerberg’s solution is to double down on his community concept. He has said publicly that strengthening social bonds on Facebook will lead to a positive outcome, despite whatever reservations his employees and the general public may have.
“Every day, I say to myself, ‘I don’t have much time here on Earth, how can I make the greatest positive impact?’ Some nights I go to bed and I’m not sure I made the right choices that day,” Mr. Zuckerberg said at the June conference. “I can tell you, those doubts don’t go away, no matter who you are. But every day you just get up and try to make the world a little better.”
Follow Mike Isaac on Twitter @MikeIsaac. Nicole Perlroth contributed reporting.