Apple on Wednesday added three new articles to its Machine Learning Journal, part of an effort to bring some of its research -- and researchers -- out of the shadows.
The pieces include "Improving Neural Network Acoustic Models by Cross-bandwidth and Cross-lingual Initialization," "Inverse Text Normalization as a Labeling Problem," and "Deep Learning for Siri's Voice: On-device Deep Mixture Density Networks for Hybrid Unit Selection Synthesis." Each is credited to the "Siri Team" rather than to any individual authors.
The third article notably includes voice samples, comparing Siri's voices in iOS 9 and iOS 10 to this fall's iOS 11. One of the smaller but still significant improvements in iOS 11 is a more natural-sounding voice for the AI assistant.
Until relatively recently, Apple was notorious for keeping a muzzle on its AI and machine learning teams, preventing researchers from publishing papers and maintaining a presence in the scientific community. While the Machine Learning Journal remains anonymous, credited papers by Apple researchers have begun appearing elsewhere.
Greater freedom for researchers could allow Apple to better build on past work, as well as attract and retain talented staff. The company's past secrecy policies may have scared off some prospective hires.