Delta gets on board with Misapplied Sciences’ multiview display tech.
It’s four years since Display Daily published an article (subscription required for viewing) that set out Misapplied Sciences’ ideas for directing multiple images from a single display to particular viewers. Now the technology is being tried by Delta Air Lines at Concourse A of Detroit Metropolitan Airport in the US.
The display technology comes into its own after users scan their boarding pass; those enrolled in the Delta Digital Identity program can instead activate it using facial recognition. (For travelers who are not enrolled, the system uses overhead cameras to identify them when they register and then tracks the resulting “person object” without knowing any personal details, Delta told Gizmodo.) Those who are enrolled are recognized automatically and shown personalized information in an appropriate language: gate numbers and other useful travel guidance, such as the correct carousel for baggage collection.
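As a rough mental model of that flow (the names, data structures, and itinerary lookup below are my own assumptions for illustration, not Delta’s or Misapplied’s actual implementation), the registration-and-targeting logic might look something like this:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical itinerary store standing in for Delta's back end.
ITINERARIES = {
    "BP-1234": {"flight": "DL 123", "gate": "A38", "carousel": "4"},
}

@dataclass
class PersonObject:
    """An anonymously tracked viewer: the cameras follow a track ID, not a name."""
    track_id: int
    position: tuple                    # (x, y) location on the concourse floor
    itinerary: Optional[dict] = None   # attached at registration, if any

def register(track_id: int, position: tuple,
             boarding_pass: Optional[str] = None) -> PersonObject:
    """Bind an itinerary to a tracked person object at the scan point."""
    viewer = PersonObject(track_id, position)
    if boarding_pass in ITINERARIES:
        viewer.itinerary = ITINERARIES[boarding_pass]
    return viewer

def message_for(viewer: PersonObject) -> str:
    """Compose the per-viewer text that the display steers toward this person."""
    it = viewer.itinerary
    if it is None:
        return "Scan your boarding pass to personalize this screen"
    return (f"{it['flight']} departs from gate {it['gate']}; "
            f"bags arrive at carousel {it['carousel']}")

viewer = register(track_id=7, position=(12.0, 3.5), boarding_pass="BP-1234")
print(message_for(viewer))  # DL 123 departs from gate A38; bags arrive at carousel 4
```

The privacy point from the Gizmodo quote is captured in the split: identity is consulted once, at registration, while everything afterwards works from the anonymous track ID.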
The idea was demonstrated at CES 2020 with around 12 people/positions, but Delta said at the time that it was “applicable to hundreds of people.” The plan was to roll the system out in 2020, but obviously COVID meant the airline had other things to think about.
Misapplied said the technology can:
- Target advertising to each viewer’s needs, interests, behavior, and surroundings. (Oh, joy!)
- Individualize media, messaging, and lighting effects in public entertainment venues to each viewer.
- Provide wayfinding.
- Help with traffic control in the form of messaging and signals on roadways that are specific to each zone, lane, and vehicle.
- Adjust the content for each viewer’s distance, viewing angle, and sight lines.
- Help with people flow management, especially in the case of an evacuation event.
Misapplied has also suggested that the technology could be very helpful in sports or in retail: anywhere there are a lot of people. In a stadium, for example, the system could direct a spectator to their seat.
The company claims that its pixels can control the color of the emitted light in “up to a million angles.” The firm has its own processor to drive the pixels, but the views themselves, up to hundreds of them, are created on a PC before being sent to the processor.
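To make that geometry concrete, here is a minimal sketch (my own construction, not Misapplied’s processor logic) of the core multiview idea: for each pixel, compute the angle toward each viewer, quantize it into one of the pixel’s angular bins, and load that viewer’s rendered view into that bin.

```python
import math

def angle_bin(pixel_x: float, viewer_x: float, viewer_dist: float,
              n_bins: int, fov_deg: float = 180.0) -> int:
    """Quantize the pixel-to-viewer direction into one of n_bins emission angles."""
    theta = math.degrees(math.atan2(viewer_x - pixel_x, viewer_dist))
    frac = (theta + fov_deg / 2) / fov_deg   # map [-fov/2, +fov/2] onto [0, 1)
    return max(0, min(n_bins - 1, int(frac * n_bins)))

def pixel_view_table(pixel_x: float, viewers: dict, n_bins: int) -> dict:
    """For one pixel, decide which rendered view each angular bin should carry.

    viewers maps a view id to (x, distance) in meters; the display sits at
    distance 0. Bins with no viewer behind them can simply stay dark, which
    is where the potential power saving comes from.
    """
    return {angle_bin(pixel_x, x, d, n_bins): view_id
            for view_id, (x, d) in viewers.items()}

# Two travelers standing 0.6 m apart, 4 m from the display.
viewers = {"view_for_anna": (0.0, 4.0), "view_for_ben": (0.6, 4.0)}
print(pixel_view_table(pixel_x=0.0, viewers=viewers, n_bins=1_000_000))
# -> two distinct bins, so each traveler sees only their own content
```

Even if the “million angles” are spread over a two-dimensional field (say 1,000 × 1,000 directions), each bin would be a fraction of a degree wide, far finer than the several degrees separating two adjacent travelers standing a few meters from the screen.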
The company is short on details of the actual display technology used. It looks as though the number of pixels in each view is quite limited in the Delta configuration and that the displays are based on discrete LEDs, but this is just my speculation.
The technology reminded me of something I had seen before. I had a vague memory that Microsoft had, at one time, been working on steerable optics, and I found a 2010 YouTube video from Microsoft Applied Sciences, posted on the MSAppliedSciences channel. (That’s just one letter away from the name Misapplied Sciences. Oh, and Albert Ng was an intern at Microsoft Research before starting Misapplied.) The video showed a very crude version of the idea, with just two people, using the Wedge display technology that I’ve been reporting on since I saw Adrian Travis demonstrate it at the UK EID show in 2000. The aim was to offer 3D using multiple views. (In a “small display world” coincidence, he was showing his latest project in the I-Zone at Display Week just a few weeks ago.) Travis moved from his start-up, Cambridge 3D Display, to work at Microsoft for some time.
A paper on Travis’ work provides some clues, I think, as to how Misapplied might be directing the light.
The idea of steerable optics is, I believe, a very important one. At the moment, all of our displays pump out light in all directions; in fact, we grade them on how well they spread light in a Lambertian way. But the brute reality is that every photon that doesn’t land on a retina is wasted. To quote one of my favorite phrases: “A display without a person has no function.” At the moment, our displays bathe the world in light, which is not only extremely wasteful, but in many situations that wasted light is reflected back onto the display and reduces the image quality (as Pete Putman so clearly showed, and as Barco and others have researched).
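A back-of-the-envelope figure makes the waste vivid. Assume an ideal Lambertian pixel, a 4 mm pupil, and a viewer 2 m away, directly on axis (my numbers, chosen only for illustration); the fraction of the pixel’s total flux that actually enters that pupil is then:

```latex
% Total flux from a Lambertian emitter of radiance L and area A:
%   \Phi_{\mathrm{total}} = \pi L A
% On-axis flux into a small solid angle \Omega:  \Phi_\Omega = L A \,\Omega
% A pupil of radius r at distance d subtends  \Omega \approx \pi r^2 / d^2
\[
\frac{\Phi_\Omega}{\Phi_{\mathrm{total}}}
  = \frac{L A \,\Omega}{\pi L A}
  = \frac{\pi r^2 / d^2}{\pi}
  = \frac{r^2}{d^2}
  = \frac{(2\,\mathrm{mm})^2}{(2\,\mathrm{m})^2}
  = 10^{-6}.
\]
```

On those assumptions, roughly one photon in a million does useful work for a single viewer; the rest is the “bathing the world in light” described above, and some of it comes straight back to degrade contrast.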
That got me thinking about a cinema fitted with this technology. I sometimes find myself in a cinema that has only a few viewers. Suppose you could show the image just to them? How about showing different age-appropriate visuals to different viewers?
At the moment, the firm has pretty big pixels, suitable for large video walls. However, imagine the use of microLEDs with metasurface optics, exploiting the processes that the advanced semiconductor industry has developed. Could we get to small pixels and let each viewer see only their own screen, without the huge amounts of wasted power? I suspect such a vision is a long way off, but it’s intriguing. And it might just be nearer than I think!
This article was reproduced with the permission of Bob Raikes, publisher of Display Daily.