
Apple acquires machine learning startup to improve iPhone photos

According to a report on Thursday, Apple recently acquired Spectral Edge, a UK-based startup focused on improving smartphone photography through machine learning technology.

Citing government documents made public today, Bloomberg reports Apple recently took control of the company and appointed lawyer Peter Denwood as a director. All other board members attached to the startup were removed, per the documents.

While Apple has not confirmed the Spectral acquisition, the tech giant has in the past followed a similar blueprint when purchasing smaller firms.

Spectral started life in 2011 as an academic project at the University of East Anglia before being spun out into a startup in 2014.

The firm developed and refined a mathematical technique for improving smartphone photos, an area where Apple is constantly seeking new tech to edge out the competition. Spectral's technology captures and blends an infrared shot with a standard shot to enhance a photograph's overall depth, detail and color. The process relies on machine learning and can be integrated into both hardware and software.
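Spectral Edge has not published its implementation, but the general idea of RGB-infrared fusion can be sketched in simplified form. The weighting scheme, detail-layer blend and NumPy/SciPy approach below are illustrative assumptions, not the company's actual method.

```python
# Illustrative sketch of RGB + near-infrared (NIR) detail fusion.
# This is NOT Spectral Edge's algorithm; it only demonstrates the general idea
# of borrowing fine detail from an aligned NIR frame to sharpen an RGB frame.
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_rgb_nir(rgb, nir, sigma=2.0, strength=0.5):
    """Blend high-frequency NIR detail into the luminance of an RGB image.

    rgb: float array, shape (H, W, 3), values in [0, 1]
    nir: float array, shape (H, W), aligned to the RGB frame, values in [0, 1]
    """
    # Split RGB into a rough luminance channel and per-pixel chroma ratios.
    luma = rgb.mean(axis=2)
    chroma = rgb / (luma[..., None] + 1e-6)

    # The NIR "detail layer" is whatever survives after removing low frequencies.
    nir_detail = nir - gaussian_filter(nir, sigma)

    # Add a fraction of that detail back into luminance, then re-apply chroma.
    fused_luma = np.clip(luma + strength * nir_detail, 0.0, 1.0)
    return np.clip(chroma * fused_luma[..., None], 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rgb = rng.random((480, 640, 3))   # stand-in for a captured RGB frame
    nir = rng.random((480, 640))      # stand-in for an aligned NIR frame
    out = fuse_rgb_nir(rgb, nir)
    print(out.shape, out.min(), out.max())
```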

"Right now there is no real solution for white balancing across the whole image [on smartphones] — so you'll get areas of the image with excessive blues or yellows, perhaps, because the balance is out — but our tech allows this to be solved elegantly and with great results," Rhodri Thomas, CEO of Spectral Edge, told TechCrunch last year. "We also can support bokeh processing by eliminating artifacts that are common in these images."

With a number of patents under its belt, Spectral in 2018 raised a $5.3 million Series A funding round and announced an initial customer in NTT.

Apple will likely fold Spectral's IP portfolio into its own work in AI-based photography, a segment that is becoming increasingly important for smartphone manufacturers. Companies looking to squeeze high-quality photos out of their handsets have turned to machine learning techniques in a bid to overcome the physical constraints of miniature sensor arrays, and to great effect.

This fall, Apple introduced the iPhone 11 and iPhone 11 Pro, both of which pack in special machine learning silicon and software designed to enhance photographic capabilities. Night Mode, for example, takes a set of images captured in quick succession, aligns them to correct for errant movements, applies algorithms to detect and discard areas with blur, adjusts contrast and colors, and de-noises the output to arrive at a final image. Another new technology, Deep Fusion, compares, combines and processes long and short exposure images to generate a highly detailed photo.
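Apple has not documented how Night Mode works internally, but the burst-merge recipe described above can be sketched roughly as follows. The sharpness-based weighting and Gaussian denoising step are illustrative assumptions, and frame alignment is omitted for brevity.

```python
# Simplified sketch of a multi-frame "night mode" style merge.
# This is not Apple's implementation; it only illustrates the burst-merge idea
# described above: weight down blurry frames, average the rest, then denoise.
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def merge_burst(frames, denoise_sigma=1.0):
    """Merge a burst of grayscale frames (list of (H, W) float arrays in [0, 1])."""
    frames = np.stack(frames)  # shape (N, H, W)

    # Score each frame's sharpness: variance of the Laplacian is a cheap proxy,
    # since blurrier frames carry less high-frequency energy.
    sharpness = np.array([laplace(f).var() for f in frames])
    weights = sharpness / sharpness.sum()

    # Weighted average favours the sharper exposures.
    merged = np.tensordot(weights, frames, axes=1)

    # A light Gaussian blur stands in for the final denoising pass.
    return gaussian_filter(merged, denoise_sigma)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    scene = rng.random((240, 320))
    burst = [np.clip(scene + 0.1 * rng.standard_normal(scene.shape), 0, 1)
             for _ in range(8)]
    print(merge_burst(burst).shape)
```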