According to a report on Thursday, Apple recently acquired Spectral Edge, a UK-based startup focused on improving smartphone photography through machine learning technology.
Citing government documents made public today, Bloomberg reports Apple recently took control of the company and appointed lawyer Peter Denwood as a director. All other board members attached to the startup were removed, per the documents.
While Apple has not confirmed the Spectral acquisition, the tech giant has in the past followed a similar blueprint when purchasing smaller firms.
Spectral started life in 2011 as an academic project at the University of East Anglia before being spun out into a startup in 2014.
The firm developed and refined a mathematical technique for improving smartphone photos, an area where Apple is constantly seeking new tech to edge out competition. Spectral's technology captures and blends an infrared shot with a standard shot to enhance a photograph's overall depth, detail and color. The process relies on machine learning and can be integrated into both hardware and software.
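Spectral Edge's actual algorithm is proprietary, but the basic idea of borrowing detail from an infrared capture can be sketched in a few lines. The function below is purely illustrative: it extracts high-frequency structure from an IR frame and uses it to boost the luminance of an aligned RGB photo. The blur kernel, weighting, and color handling are all assumptions, not the company's method.

```python
import numpy as np

def fuse_infrared(rgb, ir, detail_weight=0.5):
    """Blend high-frequency detail from an infrared capture into an
    RGB photo. Illustrative only -- Spectral Edge's real technique is
    proprietary and far more sophisticated.

    rgb: float array of shape (H, W, 3), values in [0, 1]
    ir:  float array of shape (H, W), values in [0, 1]
    """
    # Luminance of the visible-light image (Rec. 709 weights)
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])

    def blur(img, k=5):
        # Crude low-pass filter via a k x k local mean
        pad = np.pad(img, k // 2, mode="edge")
        out = np.zeros_like(img)
        for dy in range(k):
            for dx in range(k):
                out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    # The residual after low-pass filtering is the "detail" signal
    ir_detail = ir - blur(ir)
    new_luma = np.clip(luma + detail_weight * ir_detail, 1e-6, 1.0)

    # Rescale the RGB channels to the enhanced luminance
    scale = new_luma / np.maximum(luma, 1e-6)
    return np.clip(rgb * scale[..., None], 0.0, 1.0)
```

In a production pipeline the IR and RGB frames would first be registered and the fusion learned rather than hand-tuned, which is where the machine learning mentioned above comes in.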
"Right now there is no real solution for white balancing across the whole image [on smartphones] — so you'll get areas of the image with excessive blues or yellows, perhaps, because the balance is out — but our tech allows this to be solved elegantly and with great results," Rhodri Thomas, CEO of Spectral Edge, told TechCrunch last year. "We also can support bokeh processing by eliminating artifacts that are common in these images."
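For context on what Thomas is contrasting with, a conventional "whole image" white balance applies one correction globally. The classic gray-world method below, a standard textbook technique and not Spectral Edge's, scales each channel so its mean matches the overall mean; by construction it cannot fix a scene where different regions need different corrections.

```python
import numpy as np

def gray_world(rgb):
    """Classic global gray-world white balance: scale each color
    channel so its mean matches the image-wide mean. One gain per
    channel is applied everywhere, so regional casts survive.

    rgb: float array of shape (H, W, 3), values in [0, 1]
    """
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-6)
    return np.clip(rgb * gains, 0.0, 1.0)
```

After correction the three channel means are equal, but a patch that was too blue and a patch that was too yellow have both received the same single correction.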
With a number of patents under its belt, Spectral in 2018 raised a $5.3 million Series A funding round and announced an initial customer in NTT.
Apple will likely fold Spectral's IP portfolio into its own work in AI-based photography, a segment that is becoming increasingly important for smartphone manufacturers. Companies looking to squeeze high quality photos out of their handsets have turned to machine learning processes in a bid to overcome the physical constraints of miniature sensor arrays, and to great effect.
This fall, Apple introduced iPhone 11 and iPhone 11 Pro, both of which pack in special machine learning silicon and software designed to enhance photographic capabilities. Night Mode, for example, takes a series of images captured in quick succession, aligns them to correct for errant movements, applies algorithms to detect and discard areas with blur, adjusts contrast and colors, and de-noises the output to arrive at a final image. Another new technology, Deep Fusion, compares, combines and processes long and short exposure images to generate a highly detailed photo.
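The Night Mode steps described above (capture a burst, align, reject blurred frames, merge, de-noise) can be sketched as a toy pipeline. Everything here is a stand-in: Apple's real system does per-tile alignment, learned blur detection, and much more elaborate merging.

```python
import numpy as np

def merge_burst(frames, blur_threshold=0.01):
    """Toy multi-frame merge in the spirit of the Night Mode pipeline
    described above. Every step is a simplified stand-in.

    frames: list of float arrays of shape (H, W), values in [0, 1]
    """
    ref = frames[0]

    def align(frame):
        # Stand-in for alignment: try integer shifts within +/-2 px
        # and keep the one that best matches the reference frame.
        best, best_err = frame, np.inf
        for dy in range(-2, 3):
            for dx in range(-2, 3):
                shifted = np.roll(frame, (dy, dx), axis=(0, 1))
                err = np.mean((shifted - ref) ** 2)
                if err < best_err:
                    best, best_err = shifted, err
        return best

    def sharpness(frame):
        # Mean gradient magnitude as a crude blur detector
        gy, gx = np.gradient(frame)
        return np.mean(np.abs(gy) + np.abs(gx))

    aligned = [align(f) for f in frames]
    # Discard frames that look blurred; fall back to all if none pass
    kept = [f for f in aligned if sharpness(f) >= blur_threshold] or aligned

    merged = np.mean(kept, axis=0)  # averaging suppresses sensor noise
    # Final de-noising pass: a simple 3x3 box filter
    pad = np.pad(merged, 1, mode="edge")
    denoised = sum(pad[dy:dy + merged.shape[0], dx:dx + merged.shape[1]]
                   for dy in range(3) for dx in range(3)) / 9.0
    return np.clip(denoised, 0.0, 1.0)
```

Averaging N aligned frames cuts random noise roughly by a factor of sqrt(N), which is why stacking short exposures can stand in for one long, shake-prone exposure.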