An interview with several Apple vice presidents involved in engineering the iPhone 13 camera system has been published, providing more insight into the decisions behind the improvements in the 2021 releases.
Published on Monday, the "iPhone 13: Talking to the Camera Engineers" episode of the Stalman Podcast features a trio of Apple representatives. The group is headed up by Kaiann Drance, VP of Worldwide iPhone Product Marketing, along with VP of Camera Software Engineering Jon McCormack, and VP of Camera Hardware Engineering Graham Townsend.
For the iPhone 13, Apple has brought Sensor Shift OIS to the entire lineup, along with improvements to low-light photography, Photographic Styles, and Cinematic Mode. On the Pro models, there's a new Macro mode, along with support for ProRes video.
The half-hour podcast starts off with Townsend discussing the benefits of Apple designing its own camera hardware, including how the hardware team can work closely with their software counterparts "starting from an early design phase." The lens, sensor, and other hardware are "specifically designed to complement the firmware and the software processing" of the device.
"Since we own the entire stack, from photons to jpeg if you will, we can choose the optimal place in the pipeline to deliver specific benefits," Townsend adds. For example, the Sensor Shift system is powerful enough to stabilize even a single second of video, helping provide the raw, accurate imaging data that the software team can build on.
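As a purely illustrative sketch (not Apple's implementation, which physically moves the sensor), the idea behind sensor-shift stabilization can be modeled as translating each frame opposite to the measured handshake so the scene stays registered:

```python
import numpy as np

def stabilize_frame(frame, shake_dx, shake_dy):
    """Toy model of sensor-shift OIS: translate the image opposite
    to the measured handshake so scene content stays registered.
    (Illustrative only -- real OIS moves the physical sensor, and
    the shake values would come from gyroscope measurements.)"""
    return np.roll(frame, shift=(-shake_dy, -shake_dx), axis=(0, 1))

# A single bright pixel displaced 2 px right and 1 px down by shake.
frame = np.zeros((8, 8), dtype=np.uint8)
frame[4, 5] = 255  # scene content landed here instead of at (3, 3)
stabilized = stabilize_frame(frame, shake_dx=2, shake_dy=1)
print(np.argwhere(stabilized == 255))  # content restored to (3, 3)
```

Real systems perform this compensation continuously, thousands of times per second, rather than per rendered frame.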
The new Macro mode in the iPhone 13 Pro is enabled in part by the autofocus system Apple uses, Townsend confirmed; without it, "you get into having a dedicated macro camera." "That to us is just not as efficient as being able to use the same camera for these two separate but somehow linked purposes."
Machine learning has progressed considerably, especially with the amount of processing power the A15 now provides, according to McCormack. "This really speaks to the amount of processing power in the iPhone, and in fact we've got so much processing power now that we're able to take these same computational photography techniques and introduce them in the video world to bring computational videography."
"Really, we are now applying all of the same machine learning magic we learned in stills to video." McCormack says the iPhone now "segments each frame in real-time, and we process the sky and the skin and foliage individually, and this takes our already industry-leading video and makes it even better by giving us better clarity and more detail in different parts of the image."
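The per-region processing McCormack describes can be sketched in simplified form: a segmentation map labels each pixel by class, and a different adjustment is applied to each region. The class labels, gain values, and use of a simple brightness gain below are all placeholders for illustration; a real pipeline would tune noise reduction, sharpening, and color per region using masks produced by a neural network.

```python
import numpy as np

# Hypothetical per-pixel class labels (a real segmentation map
# would come from a neural network running on each frame).
SKY, SKIN, FOLIAGE = 0, 1, 2

# Placeholder per-class tuning: a simple brightness gain per region.
GAINS = {SKY: 0.9, SKIN: 1.0, FOLIAGE: 1.2}

def process_by_region(luma, seg_map):
    """Apply a different adjustment to each segmented region of a frame."""
    out = luma.astype(np.float32)
    for cls, gain in GAINS.items():
        out[seg_map == cls] *= gain
    return np.clip(out, 0, 255).astype(np.uint8)

# A flat gray frame whose bottom half is labeled as foliage.
luma = np.full((4, 4), 100, dtype=np.uint8)
seg = np.zeros((4, 4), dtype=np.uint8)  # top half labeled sky
seg[2:, :] = FOLIAGE
print(process_by_region(luma, seg))  # sky rows dimmed, foliage rows boosted
```

Doing this for every frame of a video stream, rather than a single still, is what demands the kind of processing headroom McCormack attributes to the A15.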