
Future iPhones may identify users by the sound of their voice

Beyond recognizing voice commands, future iPhone software could use the sound of someone's voice to identify who is speaking, allowing the system to apply custom-tailored settings and grant access to personal content.

The concept was revealed this week in a new patent application published by the U.S. Patent and Trademark Office and discovered by AppleInsider. Entitled "User Profiling for Voice Input Processing," it describes a system that would identify individual users when they speak aloud.

Apple's application notes that voice control already exists in some forms on a number of portable devices. These systems are accompanied by word libraries, which offer a range of options for users to speak aloud and interact with the device.

But these libraries can grow so large that they hinder the processing of voice input. Long voice inputs in particular can be time-consuming for users and taxing on a device's resources.

Apple proposes to resolve these issues with a system that would identify users by the sound of their voice and determine the corresponding instructions based on that user's identity. By identifying the user of a device, an iPhone could let that person navigate hands-free and accomplish tasks more efficiently.

The application includes examples of highly specific voice commands that a complex system might be able to interpret. For example, the spoken command "call John's cell phone" includes the keyword "call" as well as the variables "John" and "cell phone."
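
As a rough illustration of that kind of keyword-and-variable parsing, consider the short Swift sketch below. The keyword set and the ParsedCommand type are invented for the example; the patent application does not specify any data structures.

```swift
import Foundation

// Hypothetical parse result: a known keyword plus the variable
// words that follow it, as in "call John's cell phone".
struct ParsedCommand {
    let keyword: String
    let variables: [String]
}

// A tiny assumed keyword library; a real device's word library
// would be far larger, which is the problem the filing targets.
let keywords: Set<String> = ["call", "play", "find"]

func parse(_ utterance: String) -> ParsedCommand? {
    let words = utterance.lowercased().split(separator: " ").map(String.init)
    guard let first = words.first, keywords.contains(first) else { return nil }
    return ParsedCommand(keyword: first, variables: Array(words.dropFirst()))
}

if let command = parse("call John's cell phone") {
    // Prints the keyword "call" and the remaining variable words
    print(command.keyword, command.variables)
}
```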

In a more detailed example, a lengthy command is cited as a possibility: "Find my most played song with a 4-star rating and create a Genius playlist using it as a seed." Also included is natural language voice input, with the command: "Pick a good song to add to a party mix."

"The voice input provided to the electronic device can therefore be complex, and require significant processing to first identify the individual words of input before extracting an instruction from the input and executing a corresponding device operation," the application reads.

To simplify this, an iPhone would maintain words that relate specifically to the device's current user. For example, certain media or contacts could be tied to a particular user, allowing two individuals to share an iPhone or iPad with distinct personal settings and content.
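
How such per-user word libraries might narrow the recognizer's search is sketched below in Swift. The profiles and the identifySpeaker stub are purely illustrative stand-ins for the acoustic voice identification the patent describes.

```swift
import Foundation

// Hypothetical per-user profile; the fields and names are invented
// for illustration, not taken from the patent application.
struct UserProfile {
    let name: String
    let contacts: [String]
    let playlists: [String]
}

let profiles: [String: UserProfile] = [
    "user-a": UserProfile(name: "Alice", contacts: ["John", "Mom"], playlists: ["Workout"]),
    "user-b": UserProfile(name: "Bob", contacts: ["Dave"], playlists: ["Party Mix"]),
]

// Stand-in for the acoustic speaker identification the filing
// describes; a real implementation would analyze the voice sample.
func identifySpeaker(from sample: Data) -> String {
    return "user-a"
}

let speakerID = identifySpeaker(from: Data())
if let profile = profiles[speakerID] {
    // Match variables such as "John" against this user's small
    // vocabulary rather than the device's full word library.
    let activeVocabulary = profile.contacts + profile.playlists
    print("Vocabulary for \(profile.name): \(activeVocabulary)")
}
```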

In recognizing a user's voice, the system could also become dynamically tailored to that person's needs and interests. In one example, a user's musical preferences would be tracked; simply asking aloud for a song recommendation would identify the user by voice and factor in those tracked interests.
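
Combined with the filing's earlier "most played song with a 4-star rating" example, that tracked data might be queried along these lines. The per-user music library below is invented for illustration; none of the names come from the application.

```swift
import Foundation

// Invented per-user music data, standing in for the play counts
// and ratings the system is described as tracking per speaker.
struct Song {
    let title: String
    let rating: Int      // star rating, 1 through 5
    let playCount: Int
}

let musicByUser: [String: [Song]] = [
    "user-a": [Song(title: "Song A", rating: 4, playCount: 120),
               Song(title: "Song B", rating: 5, playCount: 300)],
]

// Mirrors the filing's example query: the user's most played song
// with a 4-star rating, usable as the seed for a generated playlist.
func seedSong(forUser user: String) -> Song? {
    return musicByUser[user]?
        .filter { $0.rating == 4 }
        .max { $0.playCount < $1.playCount }
}

if let seed = seedSong(forUser: "user-a") {
    print("Playlist seed: \(seed.title)")  // Playlist seed: Song A
}
```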

The proposed invention made public this week was first filed in February of 2010. It is credited to Allen P. Haughay.

Another voice-related application discovered by AppleInsider in July described a robust hands-free system that could be more responsive and efficient than current options. That method involves cutting down on the verbosity, or "wordiness," of audio feedback, dynamically shortening or removing redundant information that might be audibly presented to the user.

Both The Wall Street Journal and The New York Times reported earlier this year that Apple was working on improved voice navigation in the next major update to iOS, the mobile operating system that powers the iPhone and iPad. And a later report claimed that voice commands would be "deeply integrated" into iOS 5.

The groundwork was laid for Apple's anticipated voice command overhaul when the company acquired Siri, an iPhone personal assistant application heavily dependent on voice commands, in April 2010. With Siri, users can assign tasks to their iPhone using complete sentences, such as "What's happening this weekend around here?"

In June, it was claimed that those rumored voice features weren't ready to be shown off at Apple's annual Worldwide Developers Conference. However, it was suggested that the feature could debut this fall as part of the anticipated fifth-generation iPhone.

Further supporting this, evidence of Nuance speech recognition technology has been discovered buried within developer beta builds of iOS 5.