Apple proposes acoustic separation for iPhone conference calls
In the second of two interesting patent filings revealed this week, Apple discusses techniques for improving the iPhone's ability to serve as a multi-party communication environment, in which participants on conference calls can be assigned to virtual positions in order to improve clarity.
When a conference call is initiated, participants would be presented with a graphical user interface on the iPhone for managing each participant's virtual location.
"The visual indication for at least one of the participants can be assigned to a different one of the visually distinct regions, thereby causing an audio sound associated with the participant to be spatially adapted to originate from a virtual location corresponding to the visually distinct region," Apple said in the filing.
"To assist the user of the device in determining and distinguishing the different participants in the multi-party call, directional audio processing can be utilized so that the different sources of audio for the call can be directionally placed in a particular location with respect to the headset. As a result, the user of the device hears the other participants in the multi-party call as sound sources originating from different locations."
In one implementation, Apple said the assignment to the default positions is automatic, based either on the participants' geographic locations or on the order in which they joined the multi-party call.
"Next, a participant position screen is displayed," Apple continued with its explanation. "The participant position screen can enable a user to alter the position of one or more of the participants to the multi-party call. Here, the participant position screen is displayed such that a user of the portable communication device can manipulate or otherwise cause one or more of the positions associated with the participants to be changed. In doing so, the user, in one embodiment, can cause the physical movement of a representation of a participant on the participant position screen. Here, a decision determines whether a reposition request has been made. When the decision determines that a reposition request has been made, the associated participant is moved to the specified position."
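The flow described here, with default positions assigned in join order and a reposition request moving a participant to a user-specified slot, can be sketched as a small state holder. The class and method names below are my own shorthand, not anything named in the filing:

```python
class ConferencePositions:
    """Tracks virtual positions for conference-call participants.

    Defaults are assigned in join order, echoing one implementation
    described in the filing; a reposition request from the position
    screen moves a participant to a specified slot.
    """
    def __init__(self, slots):
        self.slots = list(slots)   # e.g. ["left", "center", "right"]
        self.positions = {}        # participant -> slot

    def join(self, participant):
        # Assign the next free default slot in join order.
        used = set(self.positions.values())
        for slot in self.slots:
            if slot not in used:
                self.positions[participant] = slot
                return slot
        raise ValueError("no free positions")

    def reposition(self, participant, slot):
        # Honor a reposition request made on the position screen.
        if slot not in self.slots:
            raise ValueError(f"unknown position: {slot}")
        self.positions[participant] = slot
```

For instance, the first two callers to join would land in "left" and "center" by default, and dragging the first caller's icon to the right side of the screen would translate into a `reposition` call.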
All the participants on an iPhone conference call could also share media items such as "songs, albums, audiobooks, playlists, movies, music videos, photos, computer games, podcasts, audio and/or video presentations, news reports, and sports updates."
In particular, the patent filing contains considerable discussion of multi-party voice calls with concurrent audio playback. "One aspect of the invention pertains to a wireless system that supports both wireless communications and media playback," Apple said. "The wireless communications and the media playback can be concurrently supported. Consequently, a user is able to not only participate in a voice call but also hear audio playback at the same time."
In such instances, another graphical user interface would be presented on the iPhone's screen to allow each user to "blend" the two audio sources to their individual liking, independent of one another.
"The display screen includes a blend control. The blend control allows a user of the portable electronic device to alter the blend (or mixture) of audio from audio playback and audio from a voice call. [...] The blend control includes a slider that can be manipulated by a user towards either an audio end or a call end. As the slider is moved towards the audio end, the audio playback output gets proportionately greater than the voice call output. On the other hand, when the slider is moved towards the call end, the voice call output gets proportionally greater than the audio playback output. For example, the position of the slider can represent a mixture of the audio playback output and the voice call output with each amplified similarly so that the mixture is approximately 50% audio."
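The slider behavior Apple describes is, in effect, a linear crossfade between the two sources. A minimal sketch (the function name and the 0-to-1 slider convention are assumptions for illustration):

```python
def blend(audio_sample, call_sample, slider):
    """Mix media-playback audio and voice-call audio per a slider in
    [0.0, 1.0], where 0.0 is the call end and 1.0 is the audio end.
    At 0.5 both sources are weighted equally, i.e. roughly 50% audio."""
    if not 0.0 <= slider <= 1.0:
        raise ValueError("slider must be in [0, 1]")
    return slider * audio_sample + (1.0 - slider) * call_sample
```

Moving the slider toward the audio end raises the playback's share of the mix proportionately, and toward the call end raises the voice call's share, matching the proportional behavior quoted above.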
"The audio for each can be altered such that the audio from the incoming call and the audio from the media playback are perceived by a listener (when output to a pair of speakers, either internal or external) as originating from different virtual locations. The different virtual locations can be default positions or user-specified (during playback or in advance). [...] The sender or recipient of the audio sounds pertaining to a media item can be permitted to separately control the volume or amplitude of the audio sounds pertaining to the media item. As a result, the mixture or blend of the audio sounds pertaining to media items as compared to audio sounds pertaining to the voice call can be individually or relatively controlled."
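Putting the two ideas together, distinct virtual locations for the call and the media stream plus independently controlled volumes, amounts to a per-stream gain-and-pan mix. A rough sketch under that assumption (the data layout here is illustrative only):

```python
def mix_streams(streams):
    """Mix mono streams into stereo. Each stream is a dict with
    'samples' (a list of floats), an independent 'volume', and stereo
    'gains' (left, right) derived from its virtual location."""
    length = max(len(s["samples"]) for s in streams)
    left = [0.0] * length
    right = [0.0] * length
    for s in streams:
        gl, gr = s["gains"]
        vol = s["volume"]
        for i, x in enumerate(s["samples"]):
            left[i] += vol * gl * x
            right[i] += vol * gr * x
    return left, right
```

Because each stream carries its own volume and its own location-derived gains, the voice call and the media playback can be adjusted individually, as the filing describes, while still being perceived as coming from different places.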
The September 2006 filing, titled "Audio processing for improved user experience," is credited to Apple employees Michael Lee and Derek Barrentine.