When iOS 10 arrives this fall — most likely in September — Siri's voice should sound a little more natural, thanks to machine learning technology Apple is implementing, according to an interview published on Wednesday.
The company is swapping out some licensed technology for a deep neural network (DNN), according to Backchannel, which spoke with several key Apple executives. The head of advanced development for Siri, Tom Gruber, noted that while the assistant's responses are still being stitched together from a central database of recordings, machine learning will smooth out sentences and make Siri sound more human.
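The approach Gruber describes — stitching responses together from a database of recordings, with a model smoothing the result — is broadly known as unit-selection synthesis. The toy sketch below illustrates the general idea under stated assumptions: each word has several candidate recordings (represented here only by a pitch value), and a search picks the sequence that minimizes the discontinuity at each join. The function name, data, and simple pitch-difference cost are all hypothetical stand-ins; Apple's actual system and its learned cost models are not public.

```python
# Toy unit-selection sketch: for each word, several recorded "units" exist
# (numbers standing in for pitch values), and the synthesizer picks the
# sequence minimizing the pitch jump at each join -- a crude stand-in for
# the learned join-cost models a neural network could supply.
# All names and data here are hypothetical.

def select_units(units_per_word):
    """Pick one unit per word, minimizing total pitch discontinuity
    between consecutive units (a small dynamic-programming search)."""
    # prev_costs[u] = best total join cost of any path ending in unit u
    prev_costs = {u: 0.0 for u in units_per_word[0]}
    back = []
    for word_units in units_per_word[1:]:
        costs, choices = {}, {}
        for u in word_units:
            # Cheapest predecessor for this unit, by pitch difference.
            best_prev = min(prev_costs, key=lambda p: prev_costs[p] + abs(u - p))
            costs[u] = prev_costs[best_prev] + abs(u - best_prev)
            choices[u] = best_prev
        back.append(choices)
        prev_costs = costs
    # Trace back the cheapest path.
    last = min(prev_costs, key=prev_costs.get)
    path = [last]
    for choices in reversed(back):
        path.append(choices[path[-1]])
    return list(reversed(path))

# Example: three words, each with candidate recordings at different pitches.
print(select_units([[100, 140], [120, 200], [115, 180]]))  # → [100, 120, 115]
```

A real system scores candidates on many features at once (duration, energy, spectral match), which is where a trained network can outperform hand-tuned costs.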
Siri's robotic-sounding voice has often been spoofed and criticized. Recently, singer Barbra Streisand called Apple CEO Tim Cook to complain about how Siri pronounces her last name — in response, Cook promised to fix the problem in a coming update, which may have been a reference to the Siri upgrade.
The interview — which also featured executives such as senior VPs Eddy Cue and Craig Federighi — noted that Apple in fact moved Siri's voice recognition to a neural net-based system in July 2014, but didn't publicize the change until now. The technology is said to have drastically improved Siri's ability to understand commands.
Federighi commented that Apple has "a lot" of people working on machine learning technology, spanning not just Siri but features like palm rejection for the Apple Pencil. There is no central machine learning group, however.