
On-device processing key to iPadOS Scribble's success, hints Apple SVP Craig Federighi


Apple's handwriting recognition for the Apple Pencil is built on analyzing strokes, an interview with Craig Federighi reveals, while new features such as iPadOS' Scribble depend on a massive amount of on-device machine learning processing.

Introduced as part of iPadOS 14, Scribble enables users to fill out text fields and forms using the Apple Pencil, without needing to type anything. It accomplishes this by processing handwriting entirely on the device rather than in the cloud, and by using machine learning to improve its accuracy.
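Scribble requires no work from app developers for standard text controls, but iPadOS 14 also exposes a public UIScribbleInteraction API for fine-tuning where handwriting is accepted. Here is a minimal sketch, assuming a hypothetical note-taking view with a signature box that should keep raw ink instead of converting it to text:

```swift
import UIKit

// A sketch, not Apple's implementation: Scribble handles standard text
// controls automatically, and UIScribbleInteraction (iPadOS 14+) lets a
// view fine-tune that behavior. The view controller and the
// "signatureArea" region below are illustrative assumptions.
class NoteViewController: UIViewController, UIScribbleInteractionDelegate {
    private let textView = UITextView()
    private let signatureArea = CGRect(x: 0, y: 400, width: 320, height: 120)

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.frame = view.bounds
        view.addSubview(textView)
        // Attach the interaction so Scribble consults our delegate.
        textView.addInteraction(UIScribbleInteraction(delegate: self))
    }

    // Decline Scribble inside the signature box, so Pencil strokes there
    // stay as ink instead of being converted to text.
    func scribbleInteraction(_ interaction: UIScribbleInteraction,
                             shouldBeginAt location: CGPoint) -> Bool {
        !signatureArea.contains(location)
    }
}
```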

Speaking to Popular Mechanics, Apple SVP of software engineering Craig Federighi explains how the Apple Pencil's handwriting recognition was developed. It all started with data gathering: asking people around the world to write things down.

"We give them a Pencil, and we have them write fast, we have them write slow, write at a tilt. All of this variation," said Federighi. "If you understand the strokes and how the strokes went down, that can be used to disambiguate what was being written."

Combining stroke-based recognition with character and word prediction means a lot of processing has to take place. Because speed is of the essence, cloud-based handwriting recognition was ruled out, pushing Apple toward a system built on on-device processing.

"It's gotta be happening in real time, right now, on the device you're holding," insists Federighi., "which means that the computational power of the device has to be such that it can do that level of processing locally."

Apple's expertise in chip design has led to the new iPad Air 4 shipping with the A14 Bionic, Apple's fastest self-designed SoC, packing 11.8 billion transistors, a 6-core CPU, a new 4-core graphics architecture, and a 16-core Neural Engine capable of up to 11 trillion operations per second. Apple has even added CPU-based machine learning accelerators, which make machine learning tasks run up to 10 times faster.
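Third-party apps reach that silicon through Core ML, which can schedule model work across the CPU, GPU, and Neural Engine. A minimal sketch, with a placeholder model path:

```swift
import CoreML

// A sketch of how apps tap the same silicon: MLModelConfiguration lets
// Core ML distribute work across the CPU, GPU, and Neural Engine.
// The model URL is a placeholder for any compiled .mlmodelc bundle.
let config = MLModelConfiguration()
config.computeUnits = .all   // allow CPU, GPU, and the 16-core Neural Engine

let modelURL = URL(fileURLWithPath: "/path/to/Model.mlmodelc") // placeholder
if let model = try? MLModel(contentsOf: modelURL, configuration: config) {
    // Core ML dispatches each layer to the best available compute unit.
    _ = model
}
```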



8 Comments

AutigerMark 6 Years · 65 comments

How else would something like that be done?  Server side processing would be too slow.  It’s got to be near instantaneous.

ph382 8 Years · 43 comments

AutigerMark said:
How else would something like that be done?  Server side processing would be too slow.  It’s got to be near instantaneous.

The new Siri translate app hits a server somewhere. I'd bet that Alexa uses a server.

I'm glad that Apple is improving the technology. If I'm driving in the middle of nowhere, and give a voice command I'd like it to work. Same for turning off the lights at home. This is why the A14 has a neural engine twice as big as the A12. An interesting question about Apple Silicon is how much of a neural engine will be included.

iCave 5 Years · 10 comments


ph382 said:
The new Siri translate app hits a server somewhere.

Actually it doesn't.  The whole translation happens on-device, offline.

dysamoria 12 Years · 3430 comments

How is the performance on an original iPad Pro?