Most people don’t realize that the all-time fastest-selling consumer electronics device is, at its heart, a 3-D depth camera. The Microsoft Kinect was the first major release of a device that could understand its surroundings, detect movement and gestures, and even identify real-world objects – and it was really just a companion to an already popular gaming console.
The overwhelming success of the Kinect sparked a steady rise of interest in what 3-D and depth cameras could do for consumer electronics, a rise that carried into 2014. Even before the end of 2013 there were already some major shifts as large companies like Apple suddenly entered the space. Others, like Intel and NVIDIA, soon announced what most of the mobile industry had been anticipating: that they, too, had been working on 3-D depth cameras. Their focus wasn’t gaming, but rather taking the same technology mobile – smaller, and embedded into future mobile devices.
Apple Acquires PrimeSense
Whenever Apple does something related to the AR industry, everyone in it takes notice. Word got out that Apple had been quietly filing patents for things like semi-transparent iPads and other computer-vision device elements, but its purchase of PrimeSense for a reported $360 million was the biggest endorsement of camera technology since Apple acquired Polar Rose, a facial-recognition startup, in 2010.
PrimeSense supplied the technology behind the original Kinect project (then code-named “Natal”) before Microsoft opted for its own in-house technology, but that didn’t stop PrimeSense from iterating and, most importantly, developing for a mobile future. One of its most recent sensors before the acquisition, the Capri, was already being tested on mobile devices – and the company claimed to have an even smaller model in the works.
The most likely candidate for implementation is the Apple TV. Maybe it’s the Kinect’s similar role and meteoric success that invite the comparison, but Apple has shown it has its own way of doing things (note the lack of a Google Glass competitor). PrimeSense technology could even show up in the next iPad.
Intel Unveils RealSense

At a press conference at the 2014 Consumer Electronics Show, Intel’s Senior Vice President for perceptual computing, Mooly Eden, took the stage to reveal a 3-D camera the size of his index finger – part of Intel’s new “RealSense” initiative to take perceptual computing (gesture, voice and facial recognition, among other areas) into the mainstream. Meanwhile, Intel’s CEO, Brian Krzanich, was kicking off the show with his own keynote full of announcements and reveals of next-generation technology, including augmented reality and wearable computing.

In his presentation, Eden revealed that companies like Lenovo, Dell, Acer, Asus, HP and Fujitsu were already working on integrating the technology into their devices. These companies mostly deal in laptops, but Ultrabooks are essentially mobile devices with keyboards, running Windows and, according to Krzanich’s keynote, even a native Android operating system. Unlike the Kinect, Eden noted, this is a “close range” technology, which could reinvent the way consumers use electronic devices to interact with the world, from entertainment to enterprise. Intel also announced the integration of 3-D augmented reality tracking by Metaio – yet another step showing the company is taking depth and virtual interaction seriously.
SoftKinetic joins forces with NVIDIA and Makerbot
SoftKinetic recently announced partnerships with both NVIDIA, one of the leading mobile chipmakers, and Makerbot, the most famous 3-D printing company. SoftKinetic occupies some prime real estate in the 3-D camera world: they offer both the hardware and the middleware for their powerful DepthSense time-of-flight USB camera.
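The “time-of-flight” principle behind a camera like the DepthSense is simple to state: the sensor emits light and measures how long it takes to bounce back, and since light’s speed is known, the round trip yields a distance per pixel. A minimal sketch of that arithmetic (function name and values are illustrative, not SoftKinetic’s API):

```python
# Time-of-flight principle: emitted light travels to the object and back,
# so distance = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to the object in meters, given the pulse's round-trip time."""
    return C * round_trip_seconds / 2.0

# A pulse returning after ~6.67 nanoseconds traveled ~2 m round trip,
# so the object is about 1 m from the camera.
print(round(tof_distance(6.67e-9), 2))  # → 1.0
```

The nanosecond scale of these timings is why time-of-flight sensing historically required specialized hardware, and why shrinking it into a USB-sized (and eventually mobile-embedded) package is notable.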
Much like RealSense (and unlike Kinect), this approach makes for a shallower (i.e. closer to the user) form of gesture recognition, which is what NVIDIA may be shipping in its Tegra Note 7.
SoftKinetic has already done something similar with Ubisoft for the Just Dance PS4 game, but, as mentioned earlier, the entertainment focus seems to be shifting toward general consumer interest, with software and hardware that would let mobile users control their devices from “afar.” Hands not free? Soiled from cooking or working? Just wave your fingers in front of the device. For Makerbot, SoftKinetic’s 3-D scanning would allow a more streamlined 3-D model creation pipeline, from virtual inception to actual production. At first glance this partnership doesn’t seem quite as significant as the one with NVIDIA, but consider being able to scan nearly any object, environment or even face for later recognition or reference.
What does it mean for Augmented Reality?
Content creation has always been a bottleneck for AR: 3-D modeling can be expensive and time-consuming. Even large companies with extensive 3-D libraries need those files converted and optimized for mobile. To put it plainly: some 3-D models of heavy products like cars or machinery can run to gigabytes in size – not very mobile-friendly.
With on-the-fly 3-D scanning, companies could scan in objects and components to later use for AR training and maintenance scenarios, similar to those produced by Mitsubishi Electric and Volkswagen. Scanning is a great asset, but the true value in 3-D cameras comes from their ability to help the software better recognize and understand its surroundings. Before depth, AR experiences had trouble determining if an object was in the foreground or background. This may seem like a mundane achievement, but it becomes far more significant when the goal is to overlay relevant and contextual information for the user.
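The foreground/background problem described above comes down to a per-pixel depth comparison: with a depth camera, the renderer knows how far away the real scene is at each pixel and can hide virtual content that should be occluded. A minimal, hypothetical sketch of that decision (names are illustrative, not any vendor’s API):

```python
# Depth-based occlusion: a virtual object's pixel is drawn only where the
# object is closer to the camera than the real scene at that pixel.
# Depth values are in meters; smaller means closer. All names hypothetical.

def composite_pixel(scene_depth_m: float, virtual_depth_m: float) -> str:
    """Decide which layer wins at a single pixel."""
    return "virtual" if virtual_depth_m < scene_depth_m else "real"

# A virtual label placed 1.5 m away is hidden behind a real wall at 1.0 m,
# but visible in front of a real table at 2.0 m.
print(composite_pixel(1.0, 1.5))  # → real (the wall occludes the label)
print(composite_pixel(2.0, 1.5))  # → virtual (the label sits in front)
```

Without a depth camera, an AR system has no `scene_depth_m` to compare against, so virtual content is simply painted on top of the camera feed – which is exactly why overlays used to float unconvincingly in front of everything.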