In a document published on Friday, Apple detailed how it has used machine learning advances to significantly improve people recognition in iOS 15, including in situations where a face isn't clearly visible.
The company lists "improved recognition for individuals" as a new feature in the iOS 15 version of the Photos app, though the web page is sparse on details. However, a new post on Apple's machine learning blog reveals that the Photos app can identify people in a variety of scenarios, including when their faces aren't clearly visible to the camera.
One of the methods Apple uses to achieve this is to match faces and upper bodies of specific people in images.
"Faces are frequently occluded or simply not visible if the subject is looking away from the camera. To solve these cases we also consider the upper bodies of the people in the image, since they usually show constant characteristics—like clothing—within a specific context. These constant characteristics can provide strong cues to identify the person across images captured a few minutes from each other," Apple writes.
The company takes a full image as an input, and then specifically identifies detected faces and upper bodies. It then matches the faces to the upper bodies to improve individual recognition in situations where traditional facial recognition would be impossible.
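The idea can be illustrated with a toy sketch. Apple has not published its model architecture, embedding format, or matching thresholds, so everything below is a hypothetical illustration: it prefers face-embedding similarity when a face is detected, and falls back to an upper-body (clothing) embedding when the face is occluded, mirroring the fallback the blog post describes.

```python
# Hypothetical sketch of face/upper-body person matching.
# Apple's actual model, embeddings, and thresholds are not public;
# all names and numbers here are illustrative assumptions.
from math import sqrt


def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))


def identify_person(obs, gallery, face_thresh=0.8, body_thresh=0.9):
    """Return the best-matching known person for one observation.

    `obs` and each gallery entry are dicts with optional "face" and
    "upper_body" embedding vectors. Face similarity is preferred;
    when no face embedding is available (face occluded or turned
    away), fall back to the upper-body embedding, which works within
    a short time window because clothing tends to stay constant.
    """
    best_name, best_score = None, 0.0
    for name, ref in gallery.items():
        if obs.get("face") and ref.get("face"):
            score, thresh = cosine_similarity(obs["face"], ref["face"]), face_thresh
        elif obs.get("upper_body") and ref.get("upper_body"):
            score, thresh = cosine_similarity(obs["upper_body"], ref["upper_body"]), body_thresh
        else:
            continue  # no comparable embedding for this candidate
        if score >= thresh and score > best_score:
            best_name, best_score = name, score
    return best_name


# Usage: a person whose face is hidden is still matched via clothing.
gallery = {"Alice": {"face": [1.0, 0.0], "upper_body": [0.0, 1.0]}}
print(identify_person({"upper_body": [0.0, 1.0]}, gallery))  # matches via upper body
```

Note the stricter fallback threshold: upper-body appearance is a weaker cue than a face, so a real system would demand higher similarity (and likely a time or "moment" constraint) before accepting a clothing-only match.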
As always, the mechanism uses on-device machine learning to ensure privacy. Apple has also taken steps to ensure the process minimizes memory and power consumption.
"This latest advancement, available in Photos running iOS 15, significantly improves person recognition. As shown in Figure 8, using private, on-device machine learning we can correctly recognize people with extreme poses, accessories, or even occluded faces and use the combination of face and upper body to match people whose faces are not visible at all," Apple writes.
The blog post contains much more detail on training the machine learning model, for anyone interested.