
Apple didn't start with a goal for Cinematic Mode, but worked hard to get it right

Credit: Apple

In a recent in-depth interview, two Apple executives shed more light on the goals and creation of the new Cinematic Mode on the iPhone 13 lineup.

Kaiann Drance, Apple's vice president of Worldwide Product Marketing, and Johnnie Manzari, a designer on Apple's Human Interface Team, spoke with TechCrunch about Cinematic Mode and how the company ran with the idea despite not having a clear goal.

"We didn't have an idea [for the feature]," said Manzari. "We were just curious — what is it about filmmaking that's been timeless? And that kind of leads down this interesting road and then we started to learn more and talk more with people across the company that can help us solve these problems."

The feature relies heavily on Apple's new A15 Bionic chip and its Neural Engine. According to Drance, bringing a high-quality depth effect to video is much more difficult than it is for photos.

"Unlike photos, video is designed to move as the person filming, including hand shake," said Drance. "And that meant we would need even higher quality depth data so Cinematic Mode could work across subjects, people, pets, and objects, and we needed that depth data continuously to keep up with every frame. Rendering these autofocus changes in real time is a heavy computational workload."

Before work on Cinematic Mode began, the executives said, the team spent time researching cinematography techniques to learn more about focus transitions and other optical characteristics. Manzari said the team started with "a deep reverence and respect for image and filmmaking through history."

When developing Portrait Lighting, Apple's design team studied classic portrait artists like Andy Warhol and painters like Rembrandt.

The process was similar for Cinematic Mode: the team spoke with some of the best cinematographers and camera operators in the world, then went on to work with directors of photography and other filmmaking professionals.

"It was also just really inspiring to be able to talk to cinematographers about why they use shallow depth of field," said Manzari. "And what purpose it serves in the storytelling. And the thing that we walked away with is, and this is actually a quite timeless insight: You need to guide the viewer's attention."

Of course, the Apple designer realized that these techniques require a high level of skill and aren't something the average iPhone user can pull off easily.

That, Manzari said, is where Apple came in. The company worked through the technical problems and addressed them with techniques like gaze detection. Some of the problems were solved with machine learning, which is why the mode leans heavily on the iPhone's baked-in Neural Engine.
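Apple hasn't detailed how gaze detection actually drives Cinematic Mode's focus decisions, but the general idea of letting a detected subject steer the camera's focus point can be sketched with Apple's public Vision and AVFoundation frameworks. The Swift snippet below is a minimal, illustrative sketch, not Apple's implementation; the SubjectFocusDriver class and its "largest face wins" heuristic are assumptions made purely for the example.

```swift
import AVFoundation
import Vision

// Illustrative sketch only: point the camera's focus at the most prominent
// detected face in each frame. This is NOT Apple's Cinematic Mode pipeline.
final class SubjectFocusDriver {
    private let device: AVCaptureDevice

    init(device: AVCaptureDevice) {
        self.device = device
    }

    /// Call with each video frame; focuses on the largest detected face, if any.
    func updateFocus(with pixelBuffer: CVPixelBuffer) {
        let request = VNDetectFaceRectanglesRequest { [weak self] request, _ in
            guard let self = self,
                  let faces = request.results as? [VNFaceObservation],
                  // Pick the largest face as the "subject" (a simplification).
                  let subject = faces.max(by: { $0.boundingBox.width < $1.boundingBox.width })
            else { return }

            // Vision boxes are normalized with a bottom-left origin; the focus
            // point of interest uses a top-left origin, so flip the y-axis
            // (device orientation handling is omitted in this sketch).
            let center = CGPoint(x: subject.boundingBox.midX,
                                 y: 1.0 - subject.boundingBox.midY)

            do {
                try self.device.lockForConfiguration()
                if self.device.isFocusPointOfInterestSupported {
                    self.device.focusPointOfInterest = center
                }
                if self.device.isFocusModeSupported(.continuousAutoFocus) {
                    self.device.focusMode = .continuousAutoFocus
                }
                self.device.unlockForConfiguration()
            } catch {
                // Ignore configuration failures in this sketch.
            }
        }

        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([request])
    }
}
```

A real implementation would also have to anticipate focus moves before they happen and track subjects smoothly from frame to frame, which is where the on-device machine learning running on the Neural Engine comes in.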

Manzari said this type of feature development represents the best that Apple has to offer.

"We feel like this is the kind of thing that Apple tackles the best. To take something difficult and conventionally hard to learn, and then turn it into something, automatic and simple."