
Apple details Siri's machine learning upgrades for better listening on HomePod

Apple on Monday updated its Machine Learning Journal with a post by the company's Siri speech and audio software engineering teams, explaining how machine learning helps the HomePod hear people under tougher acoustic conditions than iPhones and iPads typically face.

Siri on the HomePod had to be upgraded to cope with loud music, ambient noise, and distant talkers, the journal entry notes. Accordingly, the HomePod employs not just far-field microphones but "mask-based multichannel filtering using deep learning" to strip out echo and background noise, plus "unsupervised" learning to separate multiple sound sources and use only the one containing the "Hey Siri" trigger phrase.
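Apple's post doesn't include code, but the general idea can be sketched in a few lines: a neural network predicts a time-frequency mask that suppresses echo and noise in the microphone signals, and a later stage splits the result into candidate streams, keeping whichever one most looks like it contains the trigger phrase. The snippet below is an illustrative Python sketch under those assumptions, not Apple's implementation; the mask, the separated streams, and the trigger scores are all stand-ins for what the real system would compute.

```python
import numpy as np

def apply_tf_mask(noisy_stft, mask):
    """Suppress echo and noise by scaling each time-frequency bin of the
    multichannel STFT with a mask (in practice, predicted by a deep network)."""
    return mask * noisy_stft

def pick_trigger_stream(streams, trigger_scores):
    """After source separation, keep only the stream most likely to
    contain the "Hey Siri" trigger phrase."""
    return streams[int(np.argmax(trigger_scores))]

# Toy data: 6 mics x 257 frequency bins x 100 frames of complex STFT values.
rng = np.random.default_rng(0)
stft = rng.standard_normal((6, 257, 100)) + 1j * rng.standard_normal((6, 257, 100))
mask = rng.uniform(0.0, 1.0, size=(6, 257, 100))  # stand-in for a learned mask

enhanced = apply_tf_mask(stft, mask)

# Stand-in separated sources and hypothetical trigger-phrase detector scores.
candidate_streams = [enhanced[0], enhanced[1]]
scores = [0.91, 0.12]
selected = pick_trigger_stream(candidate_streams, scores)
print(selected.shape)  # (257, 100): the stream handed on to Siri's recognizer
```

In the real system, as the journal entry describes, this kind of masking and stream selection would run continuously on-device rather than on random test data.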

The entry goes into considerable technical detail, being oriented mainly toward professionals in the machine learning field. It does, however, mention that multichannel signal processing runs "continuously" on the HomePod's A8 processor, even in low-power states, and can adjust to both changing environments and moving talkers.

Apple suggests that while "other state-of-the-art systems" use multi-microphone processing, they typically focus only on echo and noise reduction.

The HomePod is a relative latecomer to the smart speaker market, having launched just this February; Amazon's Echo debuted in 2014, and the Google Home shipped in 2016. Apple, though, has taken a different tack than many vendors, concentrating on sound quality with technologies like beamforming. A HomePod will automatically tune itself to match its position in a room.

Siri, though, has been criticized as limited next to Amazon's and Google's voice assistants, for instance natively supporting only Apple Music among streaming services. The HomePod hardware is also expensive at $349, and Apple is rumored to be working on a cheaper model to be more competitive.

The journal update coincides with Apple's appearance at the 32nd Conference on Neural Information Processing Systems in Montreal, Canada. The company has tried to open itself more to the academic community, presumably to appeal to potential recruits and to appease the researchers it already has, who previously complained about Apple's restrictions on publishing papers.

The Machine Learning Journal is one such compromise. It began in July 2017 with a paper discussing neural net training with synthetic images, and has since gone on to cover a variety of topics, such as face detection and differential privacy.

A year ago the company's director of AI research, Ruslan Salakhutdinov, spoke about the company's self-driving car project, the ultimate purpose of which is still shrouded in mystery. The company may or may not be working on a self-designed car; it at least briefly abandoned the idea in favor of pure platform development.