In the intensifying battle over voice-powered technology, Google is making its virtual assistant sound more human and less robotic.
The speech-activated Google Assistant now relies on software from DeepMind, the artificial intelligence research group under Alphabet. According to a blog post published on Wednesday, the Assistant uses a version of DeepMind’s WaveNet system for American English and Japanese.
It’s a timely shift. Two weeks ago Apple released an upgraded version of the Siri virtual assistant, which is available on iPhones, iPads, Macs and other devices. The news also comes as Google introduces new versions of its Pixel smartphones, as well as speakers and earbuds that will let users talk to the Google Assistant.
The blog post, from DeepMind research scientists Aäron van den Oord and Tom Walters and Google speech software engineer Trevor Strohman, said that the WaveNet-powered Assistant “is the first product to launch” using Google’s second-generation AI chip, the tensor processing unit, or TPU. Google also uses graphics cards from Nvidia to train certain AI systems.
Google acquired DeepMind in 2014 and has used its technology to help lower the cost of cooling its data centers.
DeepMind first unveiled WaveNet a year ago, but it was “too computationally intensive to work in consumer products,” according to the blog post.
Source: Tech CNBC