As Washington clashes with tech, Google's AI chief Jeff Dean says tech must 'engage' with lawmakers

Google is placing a lot of importance on engaging with governments across the world, even amid tensions around artificial intelligence, privacy and fake news, according to one of the company’s top engineers.
“It’s important to engage with governments around the world in how they’re thinking about AI — to help inform them,” Jeff Dean, head of Google AI and one of Alphabet’s top computer scientists, told CNBC’s Josh Lipton on Tuesday.
“Obviously tech companies have a lot of expertise in this space. We can offer advice and our views of where the technology is going — what impact that might have in the broader society. And I think you want governments to be thinking carefully about what the implications of these technologies will be five, 10 years down the road. That seems like a helpful dialogue to have.”
Google kicked off its annual developer confab, I/O, on Tuesday, where it announced artificial intelligence integrations for everything from writing emails to computer chips.
Meanwhile, in Washington, Donald Trump’s administration has been planning a summit on artificial intelligence with representatives from Google, plus Amazon, Facebook, Intel and 34 other major U.S. companies, according to The Washington Post.
Dean is a company veteran, having joined Google in 1999 when it had 25 employees. He has led Google Brain, the company’s artificial intelligence research group, since 2012. Google’s top brass, including Sundar Pichai and Sergey Brin, have long maintained that artificial intelligence is the future of the company.
But critics of artificial intelligence have also warned that it could displace workers. Dean counters that Google’s aim is actually to make people more productive and to improve their experience with technology.
“We want to make users more productive and a lot of time is spent in email writing things that are relatively straightforward. A really smart person looking over your shoulder would probably be able to guess about what you’re going to write in the next sentence. We want to make the system able to do this for you,” Dean said.
There are a lot of positive ways that regulators and technologists can work together, Dean said, particularly in the healthcare field, which can “benefit everyone in the world.”
Google’s technologies are finding signals in visual data that doctors didn’t even realize were there, such as clues about heart health from retinal scans rather than blood draws, Dean said.
“The healthcare space is a very complicated one for a variety of reasons: It’s much more regulated than some other kinds of industries, for good reason,” Dean said.
“You want to deploy these sorts of systems in a safe and methodical manner. But I think it’s also important to get these benefits out as soon as regulatorily possible. It is still pretty early. I think the fact that machine learning is really starting to work on practical problems is a relatively recent phenomenon, and I think we’ll see more and more of this in healthcare and other industries.”
The sheer volume of information available online has created challenges in Silicon Valley, particularly around “fake news,” election interference and offensive content.
Dean said that, at Google, humans are still the ultimate arbiters of whether content is offensive.
“It’s obviously a difficult problem because the volume of information there makes it sort of impractical to completely have humans look at every possible piece of information. So you really do need this symbiosis between intelligent machine-learning algorithms and other kinds of systems,” Dean said.
Google employees have also reportedly argued over the company’s role in providing the Pentagon with technology to improve drone targeting.
“It’s important to understand what kinds of uses we want machine learning to be used for,” Dean said. “As a company, we’re debating what kinds of work we want to be doing. I also think we’re releasing open-source tools … and many of the uses that other people will put that for are going to be great, but some of them may be things we might not be comfortable with. That’s one of the things about technology. The underlying technology itself is sort of neutral. It’s how people use that and make decisions.”
Microsoft chief Satya Nadella said this week that “the ethics around AI, privacy, security” puts Microsoft’s cloud ahead of Google’s.
“Microsoft is in a lot of the same businesses that Google is in,” Dean said. “They run a search engine [Bing] that gets advertising revenue, and so on. We, obviously, get more revenue from advertising. But I don’t think there’s any big mismatch here. I think really what cloud customers care about is, Can they get their problem solved on any particular provider’s cloud products? And I think we have a lot of advantages there.”
— CNBC’s Paayal Zaveri contributed to this report.