An interview with a number of Apple vice presidents involved with the engineering of the iPhone 13 camera system has been published, providing more insight into the decisions behind the improvements for the 2021 releases.
Published on Monday, the "iPhone 13: Talking to the Camera Engineers" episode of the Stalman Podcast features a trio of Apple representatives. The group is headed up by Kaiann Drance, VP of Worldwide iPhone Product Marketing, along with VP of Camera Software Engineering Jon McCormack, and VP of Camera Hardware Engineering Graham Townsend.
For the iPhone 13, Apple has brought its Sensor Shift OIS to the entire lineup, along with improvements to low-light photography and the new Photographic Styles and Cinematic Mode features. On the Pro models, there's a new Macro mode, along with support for ProRes video.
The half-hour podcast starts off with Townsend discussing the benefits of Apple designing its own camera hardware, including how the hardware team can work closely with its software counterparts "starting from an early design phase." The lens, sensor, and other hardware are "specifically designed to complement the firmware and the software processing" of the device.
"Since we own the entire stack, from photons to jpeg if you will, we can choose the optimal place in the pipeline to deliver specific benefits," Townsend adds. For example, the Sensor Shift is powerful enough to stabilize a single second of video, with it helping provide the raw and accurate imaging data that the software team can expand on.
The new Macro mode in the iPhone 13 Pro is enabled in part by the autofocus system Apple uses, Townsend confirmed; without it, "you get into having a dedicated macro camera." "That to us is just not as efficient as being able to use the same camera for these two separate but somehow linked purposes."
Machine learning has progressed considerably, especially with the amount of processing power the A15 now provides, according to McCormack. "This really speaks to the amount of processing power in the iPhone, and in fact we've got so much processing power now that we're able to take these same computational photography techniques and introduce them in the video world to bring computational videography."
"Really, we are now applying all of the same machine learning magic we learned in stills to video." McCormack says the iPhone now "segments each frame in real-time, and we process the sky and the skin and foliage individually, and this takes our already industry-leading video and makes it even better by giving us better clarity and more detail in different parts of the image.
Comments
With all this technology, and I still get the damn green dot lens flare on my phone.
Truly amazing. The image processing going on in our phones is better than all the dedicated equipment I’ve ever owned.
Lens flares and mirrored hotspots from light sources are part of the physics of glass. Even in feature films shot with the best cameras available you'll see mirrored hotspots from in-frame light bulbs, etc.
Eh. Ever since the moaning about it started with the 6 (which was outright cute by comparison), it's never bothered me, or most normals I'd wager. It's part of the functional form of the device, the very thing other complainers say Apple doesn't cater to enough! My phone is in my hand, in my pocket, or lying somewhere unused, usually face down. That it doesn't lie perfectly flat on its back just has no practical ramification.
I had an iPhone 12 mini and upgraded to an iPhone 13 mini.
I vote for square image sensors arranged in a row, with the flash placed above them. That would let the flash light the scene from above each sensor whether you're shooting landscape, portrait, or square images.