The Portrait mode in Apple's iPhone lineup was difficult to pull off, and the move into computational photography prompted managerial debate and carried a degree of professional risk for those involved.
The majority of iPhone users will be familiar with the Portrait mode on their devices, which lets the camera capture a subject in sharp focus while blurring out foreground and background elements. According to the Harvard Business Review profile of Apple's management of innovation, the journey to achieve "bokeh" was a tricky one.
It was the kind of innovative technological move that Apple is renowned for. That renown, though, comes as much from how Apple manages its people as it does from the technology experts it employs.
In 2009, senior leader Paul Hubel had the idea of enabling iPhone users to take portrait-style photographs that include bokeh. A Japanese word that refers to purposefully out-of-focus elements in a photograph, bokeh is typically employed to create a blurry and pleasing background, allowing a viewer to pay attention to the in-focus subject.
Shots that use bokeh were typically limited to high-priced cameras, such as single-lens reflex models that can control a lens's depth of field with fine precision, a level of control that smartphone camera systems generally could not match.
Hubel reckoned that a dual-lens camera design could take advantage of computational photography to create a composite image, one that mimicked the bokeh of SLR cameras. The concept was welcomed by Apple's camera team, whose stated purpose was "More people taking better images more of the time."
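To picture the basic idea (an illustrative sketch only, not Apple's actual pipeline), a depth-aware composite can be thought of as blending a sharp capture with a blurred copy of itself, with each pixel weighted by how far its estimated depth sits from the subject. The Pixel type, the bokehComposite function, and its parameters below are hypothetical:

```swift
// A minimal sketch of depth-weighted compositing, assuming a per-pixel
// depth estimate is already available (for example, derived from the
// disparity between two lenses). Illustrative only; not Apple's pipeline.
struct Pixel {
    var r: Double, g: Double, b: Double
}

/// Blend a sharp image with a blurred copy of itself, keeping pixels
/// near the focal plane sharp and pushing distant pixels toward the blur.
func bokehComposite(sharp: [Pixel],
                    blurred: [Pixel],
                    depth: [Double],          // one depth value per pixel, in metres
                    focusDistance: Double,    // estimated depth of the subject
                    falloff: Double) -> [Pixel] {
    precondition(sharp.count == blurred.count && sharp.count == depth.count)
    return (0..<sharp.count).map { i in
        // 0 near the subject, approaching 1 far from it.
        let blurWeight = min(1.0, abs(depth[i] - focusDistance) / falloff)
        let s = sharp[i], b = blurred[i]
        return Pixel(r: s.r * (1 - blurWeight) + b.r * blurWeight,
                     g: s.g * (1 - blurWeight) + b.g * blurWeight,
                     b: s.b * (1 - blurWeight) + b.b * blurWeight)
    }
}
```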
Though initial efforts were promising, "failure cases" frustrated the team, such as when the algorithm could not determine the line where the main subject ended and the blurred background began. When foreground elements sat close to the subject but were meant to be blurred, the algorithm could mistakenly blur some of them while keeping others in sharp focus.
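That kind of corner case can be pictured with a hard depth cutoff: two hypothetical pixels belonging to the same near-subject object can land on opposite sides of the threshold, so one stays sharp while its neighbour is blurred. The numbers below are made up purely for illustration:

```swift
// Hypothetical illustration of a "failure case": with a hard depth
// threshold, two pixels of the same near-subject object can be split
// across the sharp/blurred boundary.
let focusDistance = 1.2            // subject assumed to sit at 1.2 m
let threshold = 0.15               // hard cutoff around the focal plane
let objectDepths = [1.34, 1.36]    // two pixels of one foreground object

let keptSharp = objectDepths.map { abs($0 - focusDistance) < threshold }
print(keptSharp)                   // [true, false]: the object is split in two
```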
One person involved in development said that Hubel was "out over his skis" during much of the production of the feature. In effect, this meant that if the feature didn't work, or didn't work well, Hubel's team was in danger of losing credibility and weight with the rest of the iPhone development effort, and with Apple's management itself.
Because Apple holds a strict engineering standard of zero artifacts, defined as "no unintended alteration in data introduced in a digital process by an involved technique and/or technology," these so-called "corner cases" prompted multiple "tough discussions" between Apple's teams, and ultimately forced the feature to be delayed a year to allow time for fixes.
Engineering teams invited design and marketing leaders to agree on quality standards for the feature, which also raised the question of "What makes a beautiful portrait?" To set those standards, works by great portrait photographers were analyzed for recurring themes, such as how the edge of the face softens while the eyes remain sharp, and those themes were then built into the algorithm.
Another problem arose in previewing a portrait photo, as the effect was originally designed to be applied to an image after it was taken. The Human Interface design team insisted there be some form of "live preview" before the shot, to help the user make adjustments.
By working with the video engineering team responsible for sensor control and camera operations, the camera hardware team was able to push forward and build the live preview. The feature was then introduced to the public with the iPhone 7 Plus, and formed a central pillar of Apple's marketing efforts.
The modern version of Portrait mode in the camera app can, in some cases, rely heavily on machine learning. For single-camera iPhones such as the iPhone XR and the iPhone SE, advances in computational photography have allowed the feature to keep working despite the reduced number of source images.
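A loose sketch of how a single-lens device might approach this, assuming a machine-learning model supplies a per-pixel "person" confidence mask in place of dual-lens depth (the mask values and the blurWeights helper below are hypothetical, not Apple's implementation):

```swift
// A minimal sketch of the single-lens case, assuming an ML model has
// produced a per-pixel "person" confidence mask instead of true depth.
// Hypothetical values only; not Apple's actual model or pipeline.
func blurWeights(fromPersonMask mask: [Double]) -> [Double] {
    // High confidence that a pixel belongs to the subject means little blur.
    mask.map { confidence in 1.0 - min(max(confidence, 0.0), 1.0) }
}

let mask = [0.98, 0.91, 0.12, 0.03]       // example segmentation confidences
print(blurWeights(fromPersonMask: mask))  // roughly [0.02, 0.09, 0.88, 0.97]
```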