Facebook has formed a special ethics team to prevent bias in its A.I. software

More than ever, Facebook is under pressure to prove that its algorithms are being deployed responsibly.
On Wednesday, at its F8 developer conference, the company revealed that it has formed a special team and developed dedicated software to ensure that its artificial intelligence systems make decisions as ethically as possible, without biases.
Facebook, like other big tech companies with products used by large and diverse groups of people, is more deeply incorporating AI into its services. Facebook said this week that it will start offering to translate messages that people receive via the Messenger app. Translation systems must first be trained on data, and the ethics push could help ensure that Facebook’s systems are taught to give fair translations.
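Facebook has not described how such checks work in practice. As a purely illustrative sketch, one common approach is to compare a model's quality across groups of users before launch; the `translate` and `quality_score` functions and the group labels below are hypothetical stand-ins, not anything Facebook has disclosed.

```python
# Hypothetical sketch: measuring a translation model's quality per user group.
# "translate" and "quality_score" are placeholders for whatever model and
# metric a team would actually use; this is not Facebook's code.
from collections import defaultdict
from statistics import mean

def per_group_quality(eval_set, translate, quality_score):
    """eval_set: iterable of (source_text, reference_translation, group_label)."""
    scores = defaultdict(list)
    for source, reference, group in eval_set:
        hypothesis = translate(source)
        scores[group].append(quality_score(hypothesis, reference))
    # Average score per group, so large gaps between groups stand out.
    return {group: mean(vals) for group, vals in scores.items()}
```

A noticeably lower average for one group would be a signal to revisit the training data before shipping the feature.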
“We’ll look back and we’ll say, ‘It’s fantastic that we were able to be proactive in getting ahead of these questions and understand before we launch things what is fairness for any given product for a demographic group,'” Isabel Kloumann, a research scientist at Facebook, told CNBC. She declined to say how many people are on the team.
Facebook said these efforts are not the result of any changes that have taken place in the seven weeks since it was revealed that data analytics firm Cambridge Analytica misused personal data of the social network’s users ahead of the 2016 election. But it’s clear that public sentiment toward Facebook has turned dramatically negative of late, so much so that CEO Mark Zuckerberg had to sit through hours of Congressional questioning last month.
Every announcement is now under a microscope.
Facebook stopped short of forming a board focused on AI ethics, as Axon (formerly Taser) did last week. But the moves align with a broader industry recognition that AI researchers have to work to make their systems inclusive. Alphabet’s DeepMind AI group formed an ethics and society team last year. Before that, Microsoft’s research organization established a Fairness, Accountability, Transparency and Ethics group.
The field of AI has had its share of embarrassments, such as the incident three years ago in which Google Photos was found to be categorizing black people as gorillas.
Last year, Kloumann’s team developed a piece of software called Fairness Flow, which has since been integrated into FBLearner Flow, Facebook’s widely used internal tool for training and running AI systems. The software analyzes a dataset, taking its format into consideration, and produces a report summarizing it.
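Facebook has not published Fairness Flow's internals. As a hedged sketch of the kind of per-group summary such a tool could produce, the field names below (group, label, prediction) and the metrics chosen are illustrative assumptions, not Facebook's actual schema or methodology.

```python
# Hedged sketch of a per-group fairness report; not Facebook's Fairness Flow.
# Column names and metrics are illustrative assumptions only.
from collections import defaultdict

def fairness_report(rows):
    """rows: iterable of dicts with 'group', 'label' (0/1), 'prediction' (0/1)."""
    by_group = defaultdict(list)
    for row in rows:
        by_group[row["group"]].append(row)
    report = {}
    for group, items in by_group.items():
        n = len(items)
        positives = sum(r["prediction"] for r in items)
        correct = sum(r["prediction"] == r["label"] for r in items)
        report[group] = {
            "count": n,
            "predicted_positive_rate": positives / n,
            "accuracy": correct / n,
        }
    return report

# Example: a large gap in predicted_positive_rate between groups A and B
# is the kind of disparity a report like this would surface for review.
print(fairness_report([
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "A", "label": 0, "prediction": 1},
    {"group": "B", "label": 1, "prediction": 0},
    {"group": "B", "label": 0, "prediction": 0},
]))
```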
Kloumann said she’s been working on the subject of fairness since joining Facebook in 2016, right as people in the tech industry began talking about it more openly and as societal concerns emerged about the power of AI.
She said her group has “a bunch of collaborations” with teams inside the company and has worked with outside organizations, like the Better Business Bureau’s Institute for Marketplace Trust and the Brookings Institution.
Facebook doesn’t plan to release the new Fairness Flow software to the public under an open-source license, but the team could publish academic papers documenting its findings, Kloumann said.
At the same time, Facebook knows it can do more in terms of hiring AI researchers with a diversity of ideas and backgrounds to try to minimize bias in its software. The company’s AI research group has been opening labs far away from Facebook’s Silicon Valley headquarters — most recently in Montreal.
“This is something that I see us doubling down, for sure,” said Joaquin Quiñonero Candela, the company’s director of applied machine learning.
Source: Tech CNBC