Laval Virtual extends its vision

The future is already here, it’s just not evenly distributed. – William Gibson

We’re on the road to nowhere: come take a ride. Maybe in the future, we’ll just pretend to go somewhere. (Source: JPR)

The fortunes of the Laval Virtual conference reflect the interest in its headline technologies: Virtual Reality, Augmented Reality, and whatever lies in between. It's booming. Last year the conference, held in Laval, France, an easy train ride southwest of Paris, saw a boost in attendance to 15,500, and this year attendance jumped to 17,000 according to early estimates.

Exhibits at Laval Virtual have historically been dominated by European companies in the visualization fields, and there are plenty of them, but this year Microsoft took a big role with the HoloLens. The show also saw participation from startups, systems integrators, and everything in between.

The CAD community is an early-adopting market for immersive technologies. CAVEs have long been used as tools to share design ideas or to let people experience virtual spaces before they become reality. David Nahon is the champion for VR at Dassault Systèmes, and he has been encouraging the company to invest more in VR and AR. His team has developed its own version of a 3D paint tool like Google's Tilt Brush, and it's captivating. There is something really wonderful about being able to create an environment around yourself using VR.

But obviously, Dassault has a much deeper interest in design, and the company is taking its first steps into the VR world by enabling direct links between CATIA and VR headsets. Nahon says the company first showed this at CES this year in Las Vegas using an HTC Vive HMD. He says whatever is on the screen can be seen in the headset with just two button pushes. One of the capabilities he sees as valuable is fast rendering in the headset, so people can evaluate finishes for a product. The value of immersive technologies, says Nahon, "is all about insights." VR is hot right now, and it enables more companies to use immersive technologies to get to that all-important insight. For remote collaboration, you can't beat a good VR setup.

Jon is interacting with Mimesys co-founder Jules Rousseau who is across the aisle in another part of the booth. They are commenting on a model and taking advantage of Oculus’ controls. (Source: JPR)

To that end, Mimesys presented one of the best shared environments we have seen. The system combines VR and AR with sensors, and the company has developed a tool set, including a shared whiteboard, that enables people to communicate in a 3D environment. The company builds on Unity and incorporates Leap Motion, Kinect, or RealSense sensors to create a virtual shared workspace.

Shared VR and AR workspaces are a major subset of virtual reality products, and at Laval several companies showed very useful approaches, but Mimesys has one of the most complete products, with varied tools. Their goal is to make remote collaboration as natural as possible. The company won the Laval Virtual Grand Prize this year, and given the history of the conference (this was its 21st year) and the sophistication of the jury, that is quite an accomplishment.

Similarly, MiddleVR showed a variety of tools for enabling shared environments. Unlike Mimesys, however, MiddleVR is well established in industrial visualization as well as VR and AR. Its motto is "improve reality." The company is five years old, completely bootstrapped, and now has more than 220 clients. MiddleVR teamed with French research lab Clarte to develop Improov3 for industry, which joins the company's flagship collaborative environment tools. Like Dassault's software, Improov3 can bring CAD data into VR to evaluate designs for manufacturability. The system supports FBX, OBJ, 3DS, STL, VRML, IGES, and STEP formats as well as CAD and BIM data (CATPart, CATProduct, CGR, JT, 3DXML, IFC).

Sebastien Kuntz, founder and president of MiddleVR, says there are still silos inhibiting the design and manufacturing process: designers aren't really in contact with the way people will use their products. MiddleVR has been developing systems that improve the design of manufacturing environments. For instance, Kuntz describes a system the company is working on to evaluate ship interiors, including submarines, for livability, durability, and workability. Can people move around comfortably and safely? Improov3 has been developed to help bridge the gap between design and use, and it is also being used to design and evaluate assembly line ergonomics.

Kuntz says that training is still the number one use for VR. MiddleVR customers are using VR systems to train workers in dangerous environments or people working with high-voltage equipment. Kuntz believes that no single tool will overtake the others. Rather, he sees CAVEs continuing to be used in environments where people can come together to evaluate a design, and even large-screen displays can give people enough of a sense of some designs to make decisions. The emphasis for MiddleVR is ease of use, making the process better.

Some of the biggest news wasn’t necessarily at the show. At Laval, JPR was included in a panel of “visionaries” to discuss ideas for what comes next. The panel included analysts, university researchers, and industrial R&D experts. It was a great combination of the supremely practical with the absolutely whimsical. The panel met off site in an intensive two-day workshop in preparation for delivering keynotes at the conference.

Laval seminar: top row: Philippe David (SNCF), Zhungke Wu (Beijing), Mark Pallot, Alexandre Godin (Airbus), Olivier Decalf (Thales), David Defianas (Peugeot), Kathleen Maher (JPR), Jon Peddie (JPR). Bottom row: Masahiko Inami (University of Tokyo), Simon Richir (ENSAM, Arts et Métiers ParisTech), Olivier Boulanger (Renault), Alvaro Cassinelli (SinergiaTech, Fablab, Uruguay)

The job before us was to help identify trends and technologies for future Laval conferences. Very quickly this group identified broad regions of interest tying the technologies together: augmenting humans, digital interfaces, and integrating the digital world with the real world for smart cities, homes, environments.

I’m not totally sure about the rules of bubble jumping but it involves jumping stilts and huge inflated bubbles so who cares? (Source: Super Human Sports Society)

Masahiko Inami of the University of Tokyo embodies the practical and the playful. Throughout his career, he has worked on developing technologies to augment human capabilities, and, he freely admits, he has wanted to be a superhero since he was a small child. Inami's work encompasses practical applications such as augmented automotive systems using "x-ray vision." The researchers combined rear-view cameras with projection systems, allowing drivers to "see through" the car to what's behind them without obstruction. The effect is accomplished by covering the back seats with projection-friendly reflective material, so as the driver looks back she sees the view from the rear-view camera projected on the seats.

On the more playful side, Inami and his team at the Living Lab in Tokyo are working on superhuman sports: encapsulating players in giant bubbles so they can smash into each other, or giving players access to sensors to see behind each other or interact with digital objects. Although there is a sense of play in the work of the Living Lab Tokyo, the team is also experimenting with the idea of ubiquitous computing, getting messages from the environment: being able to know where someone is looking, or whether they are happy or unhappy with something. That is as useful for a car to know as it is for an opponent in a game.

Researcher Alvaro Cassinelli, who also worked at the Living Lab Tokyo and at Keio University, works in the field of ubiquitous computing as well. He led an experiment to imbue any object with electronic capabilities. His group developed the banana phone and the pizza-box computer to demonstrate their idea of the invoked computer: the capability exists around us and can be invoked by gesture, picking up the banana to talk on the "phone," or opening the pizza box to find a projected keyboard and screen. This powered environment is accomplished with a stage where projectors, sensors, speakers, motion capture, and networks work together to empower objects and people. In an interview, he described a restaurant where one might open a napkin to see the menu projected on it. This work won an award at Laval in 2011, and it is ongoing. It suggests a future where we might not carry so many devices but can still access the capability. And, better yet, that capability is constantly improving, thanks to regular updates in the cloud.

Cassinelli has since moved to Uruguay, where he is taking a much more practical approach in a country with much less digital industry and connectivity. His team has founded Fablab Uruguay, which they are using to enable people to experiment and build new products. As a result of his move from high-tech, always-on Tokyo to Uruguay, where technology is expensive and, as author William Gibson might say, unevenly distributed, Cassinelli cautioned the panel to slow down and think about the people using the tools. As we pushed forward in our grand schemes of augmenting everything and everyone, Cassinelli pointed out that there are still huge parts of the world where technology is not a part of daily life and might not even be welcome.

I think that's where this discussion became very interesting. The devices that bring us technology, whether VR glasses, an AR helmet, a telephone, a computer, or a car, are going to become irrelevant. What is left is the information itself and the sensors that gather it, which are already multiplying around us like replicating viruses.

In fact, it was the work of the industrial researchers from transportation companies SNCF, Airbus, Renault, and Peugeot, as well as the wide-ranging work being done in security by Thales, that brought the lessons home. They're thinking about machine-to-machine collaboration as well as machine-to-people work. For them, then, improving the capabilities and the information coming from sensors is really the point.

Philippe David from SNCF demonstrated the capabilities of French startup Chronocam. The developers of this imager throw away the idea of conventional cameras capturing streams of images. After all, pictures, videos, and photos are for humans; machines don't care about pictures, they care about data. The Chronocam is a CMOS sensor that captures motion data through autonomous pixels, presenting relevant data to the machine only as the information changes. In contrast, traditional sensors, which pick up a series of images, collect redundant data that all has to be processed by the machine, and that slows it down. Ironically, the Chronocam sensor is modeled after the human eye, which is always absorbing information. As a result, the Chronocam can capture data at a rate above 100,000 fps, it has very high dynamic range (up to 140 dB) and low power consumption, and data is always being processed according to what's changing in the scene. Intel invested $15 million in the company in 2016, and Intel has subsequently bought Mobileye, underscoring the importance of computer vision to Intel and to the future.
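To make the contrast with frame cameras concrete, here is a minimal illustrative sketch of the event-based idea: each pixel reports a change event (position, timestamp, polarity) only when its brightness shifts past a threshold, so a mostly static scene produces almost no data. This is a hypothetical model for illustration, not Chronocam's actual sensor interface or API.

```python
# Sketch of event-based sensing: emit events only for changed pixels,
# rather than shipping every pixel of every frame to the machine.
# (Hypothetical model; real event sensors work per-pixel in hardware.)

def frame_to_events(prev_frame, curr_frame, timestamp, threshold=10):
    """Compare two grayscale frames; return (x, y, t, polarity) events
    for pixels whose brightness changed by at least `threshold`."""
    events = []
    for y, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for x, (p, c) in enumerate(zip(prev_row, curr_row)):
            if abs(c - p) >= threshold:
                polarity = 1 if c > p else -1  # brighter or darker
                events.append((x, y, timestamp, polarity))
    return events

# A 3x3 scene in which only the center pixel brightens between frames.
frame_a = [[50, 50, 50], [50, 50, 50], [50, 50, 50]]
frame_b = [[50, 50, 50], [50, 90, 50], [50, 50, 50]]

events = frame_to_events(frame_a, frame_b, timestamp=0.001)
print(events)  # [(1, 1, 0.001, 1)] -- one event instead of nine pixels
```

A frame camera would deliver all nine pixel values again even though eight of them are unchanged; the event stream carries only the single change, which is why this style of sensor can sustain very high temporal resolution at low data rates.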

Alex Godin from Airbus has been pursuing the promise of light field photography, which might also function like the human eye, enabling reactive focus for AR and VR. Godin has seen several systems, including the Avegant system, which is currently being demonstrated, and the Magic Leap system, which is being dragged kicking and screaming out of its Florida labs. Such a capability can make AR and VR systems more comfortable and useful by providing variable focus that matches the way human eyes work, and eye tracking can help systems know what the viewer wants to see.

The industrial researchers are not necessarily betting on any particular technology. Rather, they were encouraging more research into these areas by demonstrating the work they are doing now. As Philippe David put it, what we really need to know are the use cases for the technology.