Apple exploring 3D gestures to control devices from a distance
Apple's interest in hands-off control of a device like an iPhone, iPad or Mac was revealed this week in a new patent application made public by the U.S. Patent and Trademark Office. Entitled "Real Time Video Process Control Using Gestures," the filing, discovered by AppleInsider, relates to remotely controlling and editing video recordings on a mobile device.
Such editing could be done with gestures on a touchscreen, much as is already possible on the iPhone and iPad. But within the application, Apple also makes mention of hand gestures that can be performed without touching the device.
The filing notes that a device could be controlled with hand gestures accomplished in either two or three dimensions, and these could be interpreted through infrared sensors, optical sensors, or other methods. These gestures could be used as a replacement for, or even in concert with, traditional touchscreen-based gestures.
"As with the touch based gestures applied on or near the touch sensitive input device, the hand gestures can be interpreted to provide instructions for real time processing of the video by the video capture device," the filing reads.
Apple's goal is to simplify and minimize the need for user input, partly because recording devices like the iPhone and iPad have become so small. The filing notes that placing a finger on a touch-sensitive display can cause a video capture device to move, and that movement is then translated into the video recording.
With Apple's method, a remote camera could be controlled wirelessly from a second, separate device. The iPhone and iPad are specifically mentioned in the filing as potential options for a "control device."
One image accompanying the application shows a video being recorded on an iPhone. That video is then transmitted wirelessly, via Bluetooth, to an iPad, where the user can view the video in real-time and make adjustments.
Given the volume of data that must be wirelessly transmitted, Apple's solution is to automate real-time video processing as much as possible, identifying objects and individual people's faces captured in a video. The filing even states that a system could help to determine how entities captured in the video relate to one another.
In one example provided, a video of two tennis players playing against each other could be analyzed to have a "negative correlation," as one player is hitting the ball while the other is not.
"Therefore, by determining the relative correlation between these two players, an implicit association can be assigned to each," the application reads.
Using this kind of data, the image could be framed according to user specifications. For example, after recognizing a specific face, a video capture device could zoom in and track that individual in real time, with minimal or no input from the user.
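The automatic framing step amounts to a geometry problem: center a crop window on the recognized face and keep it inside the frame. The sketch below is my own (the function name, parameters, and zoom factor are assumptions, not details from the filing):

```python
# Hypothetical sketch: given a detected face bounding box, compute a
# zoomed crop rectangle centered on it and clamped to the frame, so the
# output video "tracks" that individual with no user input.

def framing_crop(face, frame_w, frame_h, zoom=3.0):
    """face = (x, y, w, h) in pixels. Returns a crop rect (x, y, w, h)
    centered on the face, `zoom` times the face size, kept fully
    inside a frame of frame_w x frame_h pixels."""
    fx, fy, fw, fh = face
    cw = min(frame_w, int(fw * zoom))
    ch = min(frame_h, int(fh * zoom))
    # Center the crop on the face's center point.
    cx = fx + fw // 2 - cw // 2
    cy = fy + fh // 2 - ch // 2
    # Clamp so the crop stays within the frame.
    cx = max(0, min(cx, frame_w - cw))
    cy = max(0, min(cy, frame_h - ch))
    return (cx, cy, cw, ch)

print(framing_crop((600, 300, 100, 100), 1920, 1080))  # (500, 200, 300, 300)
```

Run per frame against a face detector's output, this keeps the subject centered; smoothing the crop position across frames would avoid jitter, but that is beyond this sketch.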
Apple's proposed invention, published this week by the USPTO, was originally filed in April 2010. It is credited to inventors Benjamin A. Rottler and Michael Ingrassia Jr.