
Apple looks to take multi-touch beyond the touch-screen

With its competitors struggling to catch up with multi-touch technology introduced last year as part of the iPhone, Apple is already conceptualizing new versions of the technology that would fuse a variety of secondary inputs with today's touch-based gestures to produce more efficient data input operations.

A new 30-page patent filing by Wayne Westerman and John Elias, co-founders of the FingerWorks firm acquired by Apple during the development of the original iPhone, details a handful of these newly proposed inputs under the title "Multi-Touch Data Fusion."

The pair of engineers note that while the fingertip chording and movement data generated by today's multi-touch input devices provide a versatile set of user controls, fusing in information from other sensing modalities can significantly enhance a device's interpretive abilities or improve its overall ease of use.

Among the secondary input means outlined in the filing (and detailed below) are voice fusion, finger identification fusion, gaze vector fusion, biometrics fusion, and facial expression fusion.

Voice Fusion

In the case of Voice Fusion, it's proposed that voice input be applied to actions on a multi-touch device that are poorly served by manual input while manual input handles tasks poorly served by voice.

For example, the filing presents a scenario where modifications to a mechanical drawing require resizing, rotation, and a color change. Using voice commands to resize and rotate objects in the drawing would prove problematic because the intended size and rotation would be difficult to describe verbally. Therefore, those operations are best suited to manipulation by manual multi-touch gestures.

On the other hand, "using multi-touch to select a color is typically less efficient than using voice because the color has to be selected by traversing a list," Westerman explained. "Alternatively or additionally, voice input may be used to insert text in the object."
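
The filing doesn't describe an implementation, but the division of labor it sketches (continuous manipulation by touch, discrete selection and text entry by voice) amounts to routing each input modality to the operations it serves best. Below is a minimal illustrative sketch in Swift; the event and object types are invented for this example rather than taken from the patent.

```swift
// Hypothetical fused input events: continuous gestures from the touch
// surface, discrete commands from a speech recognizer.
enum FusedInput {
    case gesture(scale: Double, rotation: Double)   // pinch/rotate deltas
    case voice(command: String)                     // recognized phrase
}

struct DrawingObject {
    var scale = 1.0
    var rotation = 0.0          // radians
    var color = "black"
    var label = ""
}

// Route each modality to the operations it serves best: touch drives
// resizing and rotation, voice drives color selection and text entry.
func apply(_ input: FusedInput, to object: inout DrawingObject) {
    switch input {
    case let .gesture(scale, rotation):
        object.scale *= scale
        object.rotation += rotation
    case let .voice(command):
        if command.hasPrefix("color ") {
            object.color = String(command.dropFirst("color ".count))
        } else if command.hasPrefix("label ") {
            object.label = String(command.dropFirst("label ".count))
        }
    }
}

var part = DrawingObject()
apply(.gesture(scale: 1.5, rotation: .pi / 4), to: &part)
apply(.voice(command: "color red"), to: &part)
print(part)  // scale 1.5, rotation ~0.785, color "red"
```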

Finger Identification Fusion

Today's multi-touch sensors are efficient at detecting the presence of human fingers, but they cannot tell which specific fingers are making the contacts. While that limitation doesn't pose a problem for existing applications, there are cases where precise finger identification is imperative.

"Finger painting, where each finger has an assigned color, stroke, or other characteristic, is a simple example of an application that would be significantly enhanced compared to the state-of-the-art by using finger identification with multi-touch data fusion," Westerman wrote. "For example, if the index finger of the left hand is assigned the color red and the other fingers are assigned different colors the application must be able to determine when the index finger of the left hand is in contact with the surface in order to paint red. Conversely, the application must be able to determine when the red-assigned finger is not in contact with the surface. The fusion of finger identification data with multi-touch movement data allows the application to function without error."


To help identify individual fingers, the engineers propose the use of a digital video camera, like Apple's built-in iSight, to provide a view over the multi-touch surface. The camera data would determine where the fingers of each hand are relative to the multi-touch XY coordinates, while the multi-touch sensor determines when the fingers make contact with the surface.
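
One plausible way to fuse the two streams, though the filing doesn't spell it out at this level, is to label each sensor contact with whichever camera-tracked fingertip is nearest to it. The Swift sketch below is hypothetical throughout: the types, the distance threshold, and the finger-painting color assignment are all invented for illustration.

```swift
enum Finger: Hashable { case leftIndex, leftMiddle, rightIndex, rightMiddle }

struct Point { var x, y: Double }

func distance(_ a: Point, _ b: Point) -> Double {
    ((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y)).squareRoot()
}

// Camera data: where each identified fingertip hovers, mapped into the
// touch surface's XY coordinate space. Sensor data: where contacts are.
// Fusion: label each contact with the nearest camera-tracked finger.
func identifyContacts(camera: [Finger: Point],
                      contacts: [Point],
                      maxError: Double = 15.0) -> [(Finger, Point)] {
    contacts.compactMap { contact in
        let nearest = camera.min { distance($0.value, contact) < distance($1.value, contact) }
        guard let (finger, position) = nearest,
              distance(position, contact) <= maxError else { return nil }
        return (finger, contact)
    }
}

// Finger painting: only the left index finger paints red.
let colors: [Finger: String] = [.leftIndex: "red", .rightIndex: "blue"]
let tracked: [Finger: Point] = [.leftIndex: Point(x: 100, y: 200),
                                .rightIndex: Point(x: 300, y: 210)]
for (finger, at) in identifyContacts(camera: tracked, contacts: [Point(x: 102, y: 198)]) {
    print("paint \(colors[finger] ?? "?") at (\(at.x), \(at.y))")  // paint red ...
}
```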

Gaze Vector Fusion

Similarly, iSight cameras could also serve to record gaze vector data, where operations on a computer screen are determined in part by the direction of the user's gaze or the position of his head.


For example, if the user wishes to bring forward a window in the lower left corner of a screen, which is currently underneath two other windows, the user would direct his gaze to the window of interest and then tap a specific chord on the multi-touch surface.
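
In effect, gaze supplies the implicit selection and the chord supplies the explicit trigger. A minimal sketch of that pairing follows; the window model and the three-finger "raise" chord are assumptions made for the example, not details from the filing.

```swift
struct Rect {
    var x, y, width, height: Double
    func contains(_ px: Double, _ py: Double) -> Bool {
        px >= x && px <= x + width && py >= y && py <= y + height
    }
}

struct Window { let title: String; let frame: Rect }

// Hypothetical "raise window" chord: three fingers down simultaneously.
let raiseChordFingerCount = 3

// Gaze supplies the target; the chord supplies the command. Windows are
// ordered front to back, so the first hit under the gaze point wins
// (the visible sliver of a buried window is still the frontmost surface
// at that spot).
func windowToRaise(gazeX: Double, gazeY: Double,
                   fingersDown: Int,
                   windows: [Window]) -> Window? {
    guard fingersDown == raiseChordFingerCount else { return nil }
    return windows.first { $0.frame.contains(gazeX, gazeY) }
}

let stack = [Window(title: "Safari", frame: Rect(x: 50, y: 40, width: 600, height: 500)),
             Window(title: "Mail", frame: Rect(x: 0, y: 0, width: 400, height: 300))]
if let target = windowToRaise(gazeX: 30, gazeY: 20, fingersDown: 3, windows: stack) {
    print("raise \(target.title)")  // raise Mail
}
```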

Biometrics Fusion

The filing also suggests the use of biometric input, such as hand size, fingerprints, body temperature, heart rate, skin impedance, and pupil size.

"Typical applications that might benefit from the fusion of biometric data with multi-touch movement data would include games, security, and fitness related activities," Westerman wrote. "Hand characteristics such as size, shape, and general morphology can be used to identify an individual for the purpose of allowing access to secured areas, including computer systems. While hand characteristics alone would not provide a sufficient level of identity verification, it could be the first door through which a user must pass before other security measures are applied."

Facial Expression Fusion

A final means of secondary input outlined in the filing, which seems years off, would again employ a digital video camera like an iSight to determine operations based on a user's facial expression, including signs of frustration.


For example, he notes that a novice user may experience frustration from time to time when learning how to perform some task with an electronic device. "Say that the user is trying to scroll through a document using a two-finger vertical movement (gesture). Scrolling, however, is not working for him because he is unknowingly touching the surface with three fingers instead of the required two," he wrote. "He becomes frustrated with the 'failure' of the device. However, in this case, the system recognizes the frustration and upon analyzing the multi-touch movement data concludes he is trying to scroll with three fingers. At this point, the device could bring the extra-finger problem to the attention of the user or it could decide to ignore the extra finger and commence scrolling. Subsequent emotional data via facial recognition would confirm to the system that the correct remedial action was taken."
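
Put in code terms, the scenario fuses an emotion signal from the camera with contact-count data from the touch sensor and then picks a remedy. The sketch below is a loose illustration; the fused-state fields and remedy options are invented, and real frustration detection would be far more involved.

```swift
// Hypothetical fused state: emotion classification from the camera plus
// raw contact count and motion from the touch sensor.
struct FusedState {
    var userLooksFrustrated: Bool
    var fingersDown: Int
    var verticalMotion: Bool
}

enum Remedy { case none, scrollAnyway, explainGesture(String) }

// The scroll gesture expects exactly two fingers. When frustration is
// detected and the movement otherwise looks like a scroll, either ignore
// the extra finger or tell the user what went wrong.
func remediate(_ state: FusedState, forgiving: Bool = true) -> Remedy {
    guard state.userLooksFrustrated, state.verticalMotion else { return .none }
    switch state.fingersDown {
    case 2:
        return .none  // gesture is already correct; frustration lies elsewhere
    case 3:
        return forgiving ? .scrollAnyway
                         : .explainGesture("Scrolling needs two fingers, not three.")
    default:
        return .none
    }
}

let state = FusedState(userLooksFrustrated: true, fingersDown: 3, verticalMotion: true)
print(remediate(state))  // scrollAnyway
```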

The filing by Westerman and Elias is dated Dec. 27, 2007.



37 Comments

dr_lha 18 Years · 236 comments

I can see it now, in Office 2020:

Clippy: You look frustrated, can I help you?
User: YArrrgghhhhhh!!!!!

eai 19 Years · 407 comments

What about hearing the user shout at the computer? Detecting tone of voice etc

wbrasington 17 Years · 381 comments

Finally!
A way to do cut/copy/paste on the iPhone.
The user uses voice commands and clearly pronounces..... "Control - A, Control - VEE"

Oh, so simple.....

gordoncomstock 19 Years · 112 comments

I know we all want to be touchy-feely with the iPhone, but how about a simple external modifier key? Put it on the opposite side of the iPhone from the volume buttons. Heck make it two keys like Shift and Command!

There's no efficient equivalent of the "click and hold" feature of a mouse on the iPhone. When copy and paste arrives, I'm betting that it's no picnic.

99.9% of the time, when operating an iPhone, you're holding it in one hand and tapping with the other. Give that other hand a conveniently placed button! Apple's already realized that the iTouch needs external buttons --it's not always convenient to have EVERYTHING handled by onscreen touch control. I don't think the iPod experience on the iPhone is as good as a click wheel endowed iPod.

gc

SpamSandwich 19 Years · 32917 comments

All very interesting, but hopefully patents applied for that they never intend to use in real products. The only recent idea I've been taken with is the idea of a multi-touch mouse or a keyboard that adds some kind of multi-touch and yet retains the feel of a keyboard. At least then the fingers can remain in a good position to continue to do work. Any time the hands even leave the keyboard to go to the mouse is wasted movement.