Ukrainian startup achieves a breakthrough in scanning, imaging, and measuring with off-the-shelf parts
By Jon Peddie
2r1y, a startup with talent from Silicon Valley and Ukraine, was founded in 2012 by a group led by Richard Neumann and Serge Yefimov. They have spent the last couple of years building a perceptual imaging technology: think of it as what a computer system would need in order to see and perceive the world around it. This type of vision system will be critical for robotics, remote medical procedures, global security, smarter homes, consumer devices, and many other areas where machine sight and vision matter.
The problem, however, is big; too big. Sensing everything around us at high resolution is more than big data; it is gigantic, unwieldy data. So the developers set an original design goal: find a way for a finite amount of light to be used as effectively as possible to illuminate a subject. The result was a technology that precisely and dynamically controls how light is placed into an environment.
2r1y believed the market needed a different kind of illumination than what was then available for 2D gesture-recognition systems, the kind used in smart TVs and gaming to capture human motion in HD at 30 fps. However, flooding a subject with kilowatts of visible light is neither a desirable user experience nor an energy-efficient alternative. Using a comparable amount of invisible near-infrared (NIR) light brings its own problem: in anything more than very small amounts, a few milliwatts, NIR can be extremely dangerous to humans, not least because it is invisible and therefore does not trigger the eye's protective blink reflex.
Originally the concept seemed simple: figure out a way to provide a rather coarse spot or segmented NIR illumination to enhance the performance of 2D gesture-recognition systems. The team built a test system with three banks of 100 LEDs on curved backplanes. It proved the idea worked, but more accuracy would be needed to make the system viable. That test system now sits on a shelf in 2r1y's lab: proof of concept, but not practical.
The company next sought a way to precisely place light only where needed: on a person, a hand, or just a fingertip. This allows an extremely small amount of light to be used in the most effective way possible. The added benefit is that the total energy required is a fraction of that needed by other illumination methods.
After a lot of research and experimentation, they found a mechanism for both controlling and directing the illumination. But then the project hit a snag: they couldn't find a camera sensitive enough to meet their needs. Through trial and error, they evaluated several sensors and ultimately settled on one from OmniVision.
Points of light
The core technology begins with a single light source, creating a single point of illumination. At a distance of 4 meters, the point is less than 4 mm in diameter, and each point of light uses less than 88.4 nW (nanowatts)!
The system has control over the amplitude and pulse width (time illuminated) of each point. Points are generated in rapid succession and optically directed over an area during a single frame exposure; think of it like a phased-array radar, but with light. This effectively creates a dynamically configurable, homogeneous illumination of the subject or objects. Currently the company can generate 24 million sequential points per second, enough to enable 720p at 60 fps. (The team promises the next generation will deliver higher resolution and frame rates.)
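Some back-of-the-envelope arithmetic helps put those figures in scale. The sketch below uses only the numbers quoted above; the point-to-pixel comparison is our own assumption, since 2r1y has not said how illumination points map to sensor pixels.

```python
# Back-of-the-envelope arithmetic from 2r1y's published figures.
spot_diameter_m = 0.004            # < 4 mm spot ...
distance_m = 4.0                   # ... at 4 m
divergence_mrad = spot_diameter_m / distance_m * 1e3
print(f"beam divergence: ~{divergence_mrad:.1f} mrad")       # ~1.0 mrad

points_per_second = 24e6           # 24 million sequential points/s
frame_rate = 60                    # 720p at 60 fps
points_per_frame = points_per_second / frame_rate
print(f"points per frame: {points_per_frame:,.0f}")          # 400,000

# A 720p frame has 1280 * 720 = 921,600 pixels, so the illumination
# evidently does not need one point per camera pixel per frame.
```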
With this ultra-modulated and steerable point light source, they could actually begin to test the functionality. The device worked better than they expected. They could successfully place millions of points of light wherever they wanted. They began with the basics of low-level illumination and worked up to more accurate “painting” of light until they could illuminate just the fingers on a hand. They could illuminate a face and block light from the eyes. They could track multiple moving subjects at once and measure distances.
One night Yefimov, their chief 3D imaging technologist, suggested they see what they could do with this much control over light. Using some ideas salvaged from the original prototype, they began to play with 3D imaging. At first the results were rough, but as they worked with the system, they began to perfect the methodology and the technology. They achieved a full facial scan at 1 meter with 1.7-mm accuracy in 50 milliseconds; time for some champagne.
Using the core module, 2r1y has begun to explore the possibilities of their "directed illumination." The basic function starts with a low level of illumination, just sufficient to identify an area of interest, either manually or with AI. Like a theatrical follow spot, the illumination can then be concentrated on a smaller area, say, a person, or on something smaller still, such as a hand or just a fingertip. As the subject moves or changes orientation, the illumination follows and morphs to match.
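In code, that follow-spot behavior amounts to a simple two-phase control loop. The sketch below is ours alone; the illuminator, camera, and detector interfaces are hypothetical, since 2r1y has not published an API.

```python
# A minimal sketch of the "follow spot" loop described above.
# The illuminator/camera/detector interfaces are hypothetical.

def follow_spot_loop(illuminator, camera, detect_region_of_interest):
    """Find a subject under low-level light, then keep the points on it."""
    # Phase 1: flood the scene with just enough light to find a subject.
    illuminator.set_uniform(level=0.05)                # low-level illumination
    roi = None
    while roi is None:
        frame = camera.capture()
        roi = detect_region_of_interest(frame)         # manual pick or AI

    # Phase 2: concentrate points on the region and follow it as it moves.
    while True:
        illuminator.illuminate(roi, level=1.0)         # points only inside roi
        frame = camera.capture()
        roi = detect_region_of_interest(frame) or roi  # morph to the subject
```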
Points can be used to make precise distance measurements to one or more subjects, which turns the system into a metrology device as well. Points can also be used to extract noninvasive biometrics such as pulse. And the technology can generate structured-light patterns for extracting high-density point clouds for 3D imaging.
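Structured-light depth recovery generally reduces to simple triangulation. The function below is the textbook relation for a standard projector-camera rig, offered for context only; 2r1y has not disclosed its actual method.

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Textbook structured-light triangulation: a projected point imaged
    with focal length f (pixels) from a projector-camera baseline b
    (meters), observed at disparity d (pixels), lies at depth z = f*b/d."""
    return focal_px * baseline_m / disparity_px

# Example: f = 1000 px, b = 5 cm, d = 50 px  ->  z = 1.0 m
print(depth_from_disparity(1000.0, 0.05, 50.0))
```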
And because every point is unique, multiple functions can be performed simultaneously (in the same frame of exposure). Current consumer devices based on structured-light and pulse-modulated time-of-flight (TOF) methods offer depth maps with resolutions of only fractions of a megapixel. Even Kinect v2, which leads the pack at roughly 0.2 megapixels (a 512 × 424 sensor), cannot attain the density and speed demanded by emerging markets, or match what 2r1y has accomplished: data densities 5 to 10 times greater than other systems.
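For scale, here is our own arithmetic on that comparison, taking Kinect v2's depth resolution as the baseline for "other systems":

```python
# Our arithmetic, not 2r1y's published math.
kinect_v2_px = 512 * 424                  # 217,088 depth pixels, ~0.22 MP
low, high = 5, 10                         # claimed density advantage
print(f"{kinect_v2_px * low / 1e6:.1f} to {kinect_v2_px * high / 1e6:.1f} MP")
# -> 1.1 to 2.2 MP, in line with the multi-megapixel depth maps
#    Neumann describes next.
```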
CEO Neumann is quick to point out that this is their starting point. The data density is now limited by the resolution of the image sensor and the processing power of the GPU/CPU. You could, he states, use this same methodology to generate a 2-, 5-, or 10+ megapixel depth map. Neumann is proud of the fact that they went from a whiteboard to a functioning, market-ready camera in 88 days. The company is now showing demos to select organizations.
What do we think?
This is breakthrough technology. If 2r1y can do what they say they can (we haven't seen a demo yet) with commercial off-the-shelf (COTS) consumer electronics parts, this will genuinely be disruptive technology. The low-power directed-beam capability would fit into handhelds and be useful for automotive applications. Auto companies are already experimenting with arrays of HD projectors on the front of test vehicles to shape the light around the car ahead, so the road is lit up but the interior of the car ahead is not. With the added benefit of precise distance measurement, 2r1y's technology could serve as an early-warning device as well. We think 2r1y has a tiger by the tail.