
Apple researching how to use compressed LiDAR data in AR & 'Apple Car'

On an iPhone 12 Pro, the LiDAR Scanner is the darker circle in the camera bump

New research shows Apple is not only working to add LiDAR to "Apple Car" and its augmented reality efforts, but also how to make devices utilize the image data faster.

It has felt as if LiDAR is a technology waiting for an application. There's no LiDAR app on iPhone, and no LiDAR settings either. So even though there is much you can do with it already, it has always felt as if Apple has big plans for the future.

That expectation appears to be borne out by a trio of newly-revealed patent applications. Starting with "Geometry Encoding of Duplicate Points," the three are really all concerned with compressing and transmitting extensive LiDAR data in the fastest, most efficient way.

This first one analyzes the image data that LiDAR captures, then looks for ways to reduce that data without losing important information.

It's similar in spirit to image compression formats like JPEG. A blue sky captured as several thousand identical blue dots can effectively be written down as one blue dot, plus a note that a thousand more follow it.
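That "one dot plus a count" idea is classic run-length encoding. A minimal sketch of the concept (illustrative only, not Apple's actual method):

```python
def rle_encode(pixels):
    """Collapse runs of identical values into (value, count) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(value, count) for value, count in runs]

def rle_decode(runs):
    """Expand (value, count) pairs back into the original sequence."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

sky = ["blue"] * 1000 + ["white"] * 3
encoded = rle_encode(sky)
print(encoded)                        # [('blue', 1000), ('white', 3)]
assert rle_decode(encoded) == sky     # lossless round trip
```

Two list entries stand in for 1,003 pixels, which is the kind of saving the patent applications are chasing for LiDAR points rather than pixels.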

"An encoder is configured to compress spatial information for points included in a three-dimensional (3D) volumetric content representation using an octree, predictive tree, or other geometric compression technique," says Apple of this LiDAR compression.

"For points of the 3D volumetric content that are spatially located as same or similar locations in 3D space, such duplicated points, may be signaled using a duplicate point count," it continues. "The duplicate point count may be used instead of explicitly signaling (duplicated) spatial information in the predictive tree for the duplicated points, as an example."

Part of this patent application regards how this compression could be calculated. Then part of it is about how a system could interpret the compression and produce a fuller result.
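The duplicate-point-count idea can be sketched in the same encoder/decoder shape the application describes. This is a hypothetical simplification, assuming exact coordinate matches stand in for the patent's "same or similar locations":

```python
from collections import Counter

def encode_points(points):
    """Store each distinct 3D point once, alongside a duplicate count,
    instead of repeating its spatial coordinates."""
    counts = Counter(points)
    return list(counts.items())       # [((x, y, z), count), ...]

def decode_points(encoded):
    """Reproduce the full point cloud from points and their counts."""
    out = []
    for point, count in encoded:
        out.extend([point] * count)
    return out

cloud = [(1, 2, 3), (1, 2, 3), (1, 2, 3), (4, 5, 6)]
encoded = encode_points(cloud)
print(encoded)                        # [((1, 2, 3), 3), ((4, 5, 6), 1)]
assert sorted(decode_points(encoded)) == sorted(cloud)
```

The real filing applies this within octree or predictive-tree structures, but the payoff is the same: duplicated coordinates are signaled once with a count rather than stored repeatedly.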

This patent application is by David Flynn, as is a second, "Geometry Encoding Using Octrees And Predictive Trees." The second filing goes into more specific detail, but its aim is the same as Flynn's other application.

"Various types of sensors, such as light detection and ranging (LIDAR) systems, 3-D-cameras, 3-D scanners, etc. may capture data indicating positions of points in three dimensional space, for example positions in the X, Y, and Z planes," says the patent application.

"Also, such systems may further capture attribute information in addition to spatial information for the respective points, such as color information (e.g. RGB values), intensity attributes, reflectivity attributes, motion related attributes, modality attributes, or various other attributes," it continues.

The issue is that a "point cloud" containing all of this data "may include thousands of points, hundreds of thousands of points, millions of points, or even more points." Apple says that "such volumetric data may include large amounts of data and may be costly and time-consuming to store and transmit."
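A back-of-the-envelope calculation shows why. The per-point sizes below are illustrative assumptions, not figures from the patent, but they convey the scale:

```python
num_points = 5_000_000               # a cloud with millions of points
bytes_per_position = 3 * 4           # X, Y, Z as 32-bit floats
bytes_per_color = 3                  # RGB, one byte per channel
bytes_per_intensity = 2              # e.g. 16-bit reflectivity value

total = num_points * (bytes_per_position + bytes_per_color + bytes_per_intensity)
print(f"{total / 1e6:.0f} MB uncompressed")   # prints "85 MB uncompressed"
```

At tens of megabytes per capture, repeated many times per second, uncompressed point clouds quickly become impractical to store or stream.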

So any proposal that reduces the amount of information that must be sent will speed up the responsiveness of the system.

As ever, the patent application tries to be as broad as possible and gives few specific examples. But there are multiple references to a vehicle.

"For example, a vehicle equipped with a LIDAR system, a 3-D camera, or a 3-D scanner may include the vehicle's direction and speed in a point cloud captured by the LIDAR system, the 3-D camera, or the 3-D scanner," it says.

The point it's making there is less that LiDAR could be used in "Apple Car" than that specific applications will produce much more data to process.

It's the same with Apple AR, and according to the third newly-revealed patent application, also video.

The third, "Video-Based Point Cloud Compression with Predicted Patches," is concerned with applying similar compression methods to video, when that video also has "associated spatial information."

This patent application is by Jungsun Kim, Khaled Mammou, and Alexandros Tourapis. The latter's previous related work includes using real-time LiDAR surface tracking as part of a system for recording touch sensations.
