Google promises not to use A.I. for weapons or surveillance, for the most part

Google says it won’t use its artificial intelligence technology for weapons or surveillance, with a few caveats, according to a list of ethical principles published by CEO Sundar Pichai.
The company will still work with the government and military in other areas, including cybersecurity and training, and it will avoid only surveillance that violates “internationally accepted norms,” Pichai writes. Google also won’t work on technologies that are likely to cause harm, unless it decides that “the benefits substantially outweigh the risks.”
The guidelines come after months of internal controversy stemming from Google’s partnership with the Pentagon to use AI to analyze drone footage. Several thousand employees signed a petition urging Pichai to keep Google out of the “business of war,” and about a dozen resigned in protest. Google eventually said it would not renew the contract when it expires next year.
Throughout the firestorm, Google executives reportedly promised to publish a list of ethical principles to guide the company’s future projects. Pichai writes that the document sets out “concrete standards” that will inform Google’s research, product development, and business decisions.
In addition to outlining the AI applications it won’t pursue, Google says it believes AI should “avoid creating or reinforcing unfair bias” and incorporate privacy safeguards.
Here is the full list of applications Google says it won’t use its AI technologies for, as written by Pichai:
1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
3. Technologies that gather or use information for surveillance violating internationally accepted norms.
4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.
We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.
You can read the full set of principles in Pichai’s blog post.
Source: Tech CNBC