The End of 3D Modeling: Project Tango at Google I/O

Details about Google's Project Tango, 3D vision for mobile devices, have been available since February. At Google I/O, project lead Johnny Lee demonstrated the latest device, featuring Nvidia's Tegra K1, putting some flesh on the bones of an undeniably cool idea.

Project Tango reference design with Tegra K1
The latest iteration of a Project Tango device: record your reality and make it what you want it to be. Google is working with 40 active partners to put computer vision to work in consumer devices.

The early assessments from Google I/O this year are that the company got a lot of work done, but maybe not so much magic. The mainstage presentations covered the upcoming L version of Android, which will get its official cute dessert name later this year as products using it come to market. The straightforward L code name suits this year's Google I/O, which covered the ever-widening territory of Google's interests and mobile's capabilities. The company showed off its new tools for developing apps for wearables, drivables, and watchables (TVs, tablets, etc.).

That’s not to say there was no magic. Computer scientist Johnny Lee of Project Tango demonstrated the latest iteration of a tablet with machine vision. The new device is the culmination of four design iterations as the team worked to reduce the size of the camera module and produce a practical, affordable device. The tablet-sized device is based on Nvidia’s Tegra K1 with a 128GB SSD and 4GB of RAM. It also features accelerometers and two cameras (one wide-angle, one traditional), which combine to capture image, depth, and motion data. According to published reports, the development kit will cost $1,024 and includes Google’s SDK.

No place like home: Matterport was able to build a 3D model from data collected by a Project Tango device. (Source: Matterport)

At Google I/O, Lee demonstrated that the device can not only see but also build. His demos showed the device being walked around Google HQ, up and down several flights of stairs, and through Lee’s house, constructing a rough 3D model of each environment in real time. Capturing the data and applying post-processing could then yield a realistic 3D model of the space. The model Lee showed was built by developer Matterport, which specializes in machine vision software. The room they used was far from pristine; it was a space in transition, complete with packing boxes, tools, cans, and drapes.
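The real-time reconstruction Lee demonstrated rests on a simple principle: each depth frame is a set of 3D points in the camera's frame, and the device's motion tracking supplies the pose needed to place those points in a shared world frame. Here is a minimal sketch of that accumulation step in plain NumPy; this is an illustration of the general technique, not Tango's actual SDK, and the intrinsics, poses, and depth values are made up for the example.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into camera-frame 3D points
    using a pinhole camera model with focal lengths fx, fy and
    principal point (cx, cy)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop pixels with no depth reading

def transform(points, R, t):
    """Move camera-frame points into the world frame using pose (R, t)."""
    return points @ R.T + t

# Illustrative data: two depth frames of a flat wall 2 m away,
# captured from two poses 0.5 m apart (hypothetical values).
depth = np.full((4, 4), 2.0)
pts_cam = depth_to_points(depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0)

pose1 = (np.eye(3), np.zeros(3))
pose2 = (np.eye(3), np.array([0.5, 0.0, 0.0]))   # camera slid to the right

# Accumulate both frames into one world-frame point cloud.
cloud = np.vstack([transform(pts_cam, *p) for p in (pose1, pose2)])
```

A real pipeline would add noise filtering and surface meshing on top of the accumulated cloud, which is roughly what Matterport's post-processing contributes.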

Creating a usable model of an area with consumer hardware is decidedly cool, and that model can be just the starting point. It can provide the bones for different environments: a complete remodel, an idealized fantasy land, a spaceship.

Even though the focus is on the consumer market, it’s not surprising that Autodesk and Trimble are among the active participants in the Tango project. The world of 3D modeling is changing fast, and it’s changing forever.