Violent extremists, racists and Russian-backed propaganda machines have all used social media platforms to disseminate and perpetuate their ideologies. This year, after revelations that Russian-backed ads and fake news attempting to influence the 2016 presidential election were distributed on these platforms, tech behemoths like Facebook and Google have faced increasing criticism for not more appropriately monitoring content.
So how did this happen?
“One of the problems in the industry is that … we came from a, shall we say, a more naive position — right? — that illegal actors and these actors would not be so active,” says Eric Schmidt, the billionaire Executive Chairman of Alphabet, speaking about Google at the Halifax International Security Forum in November.
“But now, faced with the data and what we’ve seen from Russia in 2016 and with other actors around the world, we have to act” to remove the content, Schmidt says.
But the extent of that naivete is debatable. Case in point: It took Google-owned YouTube from 2009 until recently to heed calls to take down videos of jihadist propaganda featuring the (now-deceased) well-known ISIS recruiter Anwar al-Awlaki. For years, Google argued in part that it was a technology platform and therefore not responsible for policing content. YouTube made a change this year so that such content would be automatically removed, according to The New York Times.
Pressed on the subject, Schmidt acknowledged the problem.
“[W]ith respect to YouTube for example, there have been cases where people have uploaded either things which violate our terms of service or things which have been used incorrectly. In the case of YouTube, we have very, very detailed terms of service and after a while, we, after enough people mention it, we typically look at them and decide,” Schmidt says.
“In this case, I think it was fairly clear, perhaps it was overdue.”
Billionaire LinkedIn co-founder Reid Hoffman has a similar take on some founders’ early naivete.
During a talk with Hoffman at an Anti-Defamation League event in November, Recode tech journalist Kara Swisher criticized Silicon Valley’s “lack of responsibility” with such content and “its tarnishing of the American democratic system.”
Like Schmidt, Hoffman, who knows many of the players, says there wasn’t ill intent.
“A bunch of these folks that built these systems didn’t actually think about hostile attack on them, didn’t actually think about what Russians might do in order to hack in, or trying in their own minds to figure out, okay, How are we as democratic and inclusive as possible, and how do we build the algorithms that way?” says Hoffman.
“Yeah, they thought cat videos. Great. Adds a little joy to your life, right?” says Hoffman.
“I think all of that has been retroactive. Has been, ‘Oh s—, people are putting up skinning cat videos, we should do something about that.’ Right?”
Blame it on optimism, says Hoffman.
“To some degree in the early days with technologists, what it is is a little bit like a Rorschach test. If you’re an optimist, you see optimism. If you’re a pessimist, you see pessimism, and so forth. [Mark Zuckerberg’s] an optimist, so the kinds of things in the early days … Talking about Zuck, look, we can connect people and they can share experiences,” Hoffman says of Facebook.
“All of these things bringing in the human fabric of life, of seeing other people as people. Seeing pictures of them on holiday, or with their kids and that kind of stuff, that was the stuff that was being discussed. It’s all the positive use cases.”
Perhaps Zuckerberg was living in a “Facebook bubble” before Facebook. But naivete doesn’t excuse the blind spot, admits Hoffman.
“With power comes responsibility. With great power comes great responsibility,” he says.
Source: Tech CNBC
Google, Facebook founders were naive about propaganda, extremist content, say Silicon Valley titans