
Apple developing dedicated AI chip called Apple Neural Engine

Apple is reportedly developing a dedicated chip for integration with devices like iPhone that can handle artificial intelligence tasks, such as facial and speech recognition, according to a report on Friday.

Referred to internally as "Apple Neural Engine," the silicon is Apple's attempt to leapfrog rivals in the burgeoning AI market, which has surged over the past year with products like Amazon's Alexa and Google Assistant. According to a source familiar with the matter, the chip is designed to handle complex tasks that would otherwise require human intelligence to accomplish, Bloomberg reports.

Though Apple devices already sport forms of AI technology — the Siri virtual assistant and basic computer vision features — a dedicated chip would further improve the user experience. In addition, offloading AI-related computational processing from existing A-series SoCs could improve the battery life of portable devices like iPhone and iPad. If it comes to fruition, the strategy would be similar to chips introduced by competing manufacturers, including Google and its Tensor Processing Unit.

Apple has tested Apple Neural Engine in prototype iPhones, and is thinking about offloading core applications including Photos facial recognition, speech recognition and the iOS predictive keyboard to the chip, the report says. The source claims Apple plans to open up third-party developer access to the AI silicon, much like APIs for other key hardware features like Touch ID.

Whether the chip will be ready in time for inclusion in an iPhone revision later this year is unknown, though today's report speculates Apple could announce work on Apple Neural Engine at WWDC next month.

Apple's interest in AI, and related augmented reality tech, is well documented. CEO Tim Cook has on multiple occasions hinted that Apple-branded AR solutions are on the horizon. The company has been less forthcoming about its ambitions for AI.

That cloak of secrecy is slowly lifting, however. At a conference last year, Apple Director of Artificial Intelligence Research Russ Salakhutdinov said employees working on AI research are now allowed to publish their findings and interface with academics in the field. Some believe the shift in company policy was designed to retain high-value talent, as many researchers prefer to discuss their work with peers.

Just weeks after the IP embargo lifted, Apple published its first AI research paper focusing on advanced methods of training computer vision algorithms to recognize objects using synthetic images.

Apple has been aggressively building out its artificial intelligence and augmented reality teams through acquisitions and individual hires. Last August, for example, the company snapped up machine learning startup Turi for around $200 million. That purchase came less than a year after Apple bought another machine learning startup, Perceptio, and natural language processing firm VocalIQ to bolster in-house tech like Siri and certain facets of iOS, MacOS, tvOS and CarPlay.

Earlier this year, Apple joined the Partnership on AI as a founding member, with Siri co-founder and Apple AI expert Tom Gruber named to the group's board of directors.

Most recently, Apple in February revealed plans to expand its Seattle offices, which act as a hub for the company's AI research and development team. The company is also working on "very different" AI tech at its R&D facility in Yokohama, Japan.



78 Comments

wigby 15 Years · 692 comments

On the surface, this seems to be a direct response to Google's TensorFlow announcements. In reality, Apple has been working on such a chip for a few years now and has been forced to reveal something on stage at WWDC so they don't appear to be falling behind.

But the reality is that AI is still like a game of Monopoly and each player only holds one property so far. There are so many moves to go that comparing Siri to Echo or Google or Cortana right now is pointless - they all suck.

MacPro 18 Years · 19845 comments

Hope in a few years' time it'll be able to do a lot more than facial and speech recognition. I'll be needing an augmented brain in about ten years I'm sure ... :(

ireland 18 Years · 17436 comments

Although maybe true, you forgot to mention the word "reportedly" in the title of this piece.

The_Martini_Cat 12 Years · 485 comments

I hope Apple reveled its Seattle office plans on Saturdaynalia.

ericthehalfbee 13 Years · 4489 comments

I bet they already have it. It'll be part of their new GPU coming out in the next iPhone this fall.