Apple’s artificial intelligence strategy remains focused on running AI workloads locally on devices, rather than relying heavily on cloud-based resources as competitors Google, Amazon and Microsoft do.
While this approach fits Apple’s core business of selling devices and the company’s emphasis on user privacy, it could put Apple at a competitive disadvantage with many app makers, who prefer the more flexible cloud-based approach of its rivals as they look to add AI features.
Alongside announcements about iOS, the Mac and Siri, Apple made two big AI announcements at its Worldwide Developers Conference in San Jose last week.
First, it introduced a new framework called Create ML that app makers can use to train AI models on Macs.
Apple developers can try out Create ML inside an app many of them already know well: Xcode, Apple’s own tool for building software for its devices. They can also use Swift, Apple’s programming language, rather than having to pick up one more closely associated with AI development, such as Python. To keep things simple, the software even supports drag and drop when the time comes to train models on a batch of data.
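As a rough illustration, here is a minimal sketch of what that workflow can look like in a Swift playground on a Mac; the folder of labeled training images and the output path are hypothetical, and Apple’s actual APIs offer more options than shown here.

```swift
import CreateML
import Foundation

// A minimal sketch of on-Mac training with Create ML in a Swift playground.
// The directory layout is hypothetical: one subfolder per label, each
// containing example images (e.g. TrainingImages/Cat, TrainingImages/Dog).
let trainingDirectory = URL(fileURLWithPath: "/Users/me/TrainingImages")

// Train an image classifier from the labeled folders.
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDirectory)
)

// Save the trained model as a .mlmodel file that can be dropped into an Xcode project.
try classifier.write(to: URL(fileURLWithPath: "/Users/me/PetClassifier.mlmodel"))
```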
Second, it announced updates to Core ML, software first introduced last year for easily incorporating AI models into apps for iPhones and other Apple devices. The updated software produces smaller AI models, so they take up less space on devices once they’ve been embedded into apps.
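On the app side, the sketch below shows roughly what using such a model with Core ML and Apple’s Vision framework can look like; the "PetClassifier" model name and the way the image arrives are assumptions made for illustration.

```swift
import CoreML
import Vision
import CoreGraphics

// A minimal sketch of using a Core ML model inside an iOS app.
// "PetClassifier" is a hypothetical model: dropping a .mlmodel file into an
// Xcode project generates a Swift class with that name automatically.
func classify(_ image: CGImage) throws {
    let visionModel = try VNCoreMLModel(for: PetClassifier().model)

    // Vision handles resizing the image to whatever the model expects.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        if let best = (request.results as? [VNClassificationObservation])?.first {
            print("Prediction: \(best.identifier), confidence: \(best.confidence)")
        }
    }

    // All of this runs on the device; no image data is sent to a server.
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
}
```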
For years, plenty of developers have taken a different path — hosting models in public clouds operated by Amazon, Google, Microsoft and other companies.
Many developers choose to train their models in clouds, too. They can pay to rent banks of powerful machines for as long as they need to get models performing at a level they’re happy with.
With Core ML it is possible to optimize a cloud-trained model for Apple devices — but with Create ML, the Mac basically doubles as a server.
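As a rough sketch of the first half of that point: a model trained elsewhere and converted to Core ML’s .mlmodel format (for example, with Apple’s coremltools converters) can be compiled and loaded on the device at runtime. The file path below is hypothetical.

```swift
import CoreML
import Foundation

// A minimal sketch, assuming a cloud-trained model has already been converted
// to the .mlmodel format and downloaded to the device; the path is hypothetical.
let downloadedModel = URL(fileURLWithPath: "/path/to/DownloadedModel.mlmodel")

// Compile the model on-device, then load it. Predictions afterward run locally.
let compiledURL = try MLModel.compileModel(at: downloadedModel)
let model = try MLModel(contentsOf: compiledURL)
print(model.modelDescription)
```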
That approach has some advantages, Apple believes.
“User privacy is fully respected,” Apple’s Francesco Rossi told developers in one conference session last week. “By running machine learning models on-device, we guarantee that the data never leaves the device of the users.”
Additionally, apps using Core ML won’t be affected by network issues — they’ll keep working fast, because the computing work happens on users’ devices, Rossi said.
Apple’s approach has some fans. “It addresses two major pain points: training of models on the cloud is expensive, and getting them to work on mobile devices is difficult at best,” Alex Jaimes, vice president of AI and data science at automotive start-up Nauto, told CNBC in an email.
“If Apple can provide tools that make it feasible to train models locally at reasonable speeds, it could further increase its hardware footprint because for individual developers training models locally on a single machine is much more cost effective than doing so on the cloud.”
What’s more, making it easier to add AI to apps for iOS could lead to greater engagement, “luring consumers and feeding that ecosystem,” Jaimes wrote. It could also bring revenue growth for Apple’s app stores — part of the company’s growing services business.
The trouble is, Create ML is coming out late. Google introduced its open-source AI framework, TensorFlow, almost three years ago.
“TensorFlow is leagues and leagues ahead of [Create ML], which currently looks like a toy compared to it,” Reza Zadeh, CEO of startup Matroid, told CNBC over email. Google has been busy adding Swift support to TensorFlow, and it recently introduced ML Kit, which works on both Android and iOS and can operate on devices or from Google’s cloud.
“No serious developer or researcher is even considering” using the Create ML technology, Zadeh said.