Researchers Render Images Based on Reflections From an Eye

Researchers have found a way to use the subtle light reflections on the human eye to reconstruct the scene a person is observing. The study, out of the University of Maryland, shows how the team captured multiple views of a scene by imaging the eyes of a moving person.

A modified rendering setup for the NeRF (neural radiance fields) algorithm. In the traditional NeRF approach, rays are cast from the camera origin along the viewing direction. In this modified setup, rays instead bounce off the cornea of an eye: the reflected ray originates at the point, denoted O', where the initial camera ray intersects the cornea, and the new ray direction, denoted d', is the reflection of the original viewing direction across the cornea's surface normal n. The observed image is a composition of the iris texture and the reflected scene, which poses challenges for standard NeRF training because of the highly detailed iris texture. (Source: arXiv)
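The reflected direction described above follows the standard mirror-reflection formula, d' = d - 2(d · n)n for a unit normal n. A minimal sketch of that computation (the function name and test vectors are our own illustration, not code from the paper):

```python
import numpy as np

def reflect_ray(d, n):
    """Reflect a viewing direction d across a surface normal n.

    Both inputs are 3D vectors; n is normalized internally.
    Returns d' = d - 2 (d . n) n, the mirror reflection of d.
    """
    d = np.asarray(d, dtype=float)
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)  # ensure unit length
    return d - 2.0 * np.dot(d, n) * n

# A camera ray hitting the surface head-on bounces straight back:
d = np.array([0.0, 0.0, -1.0])  # viewing direction toward the cornea
n = np.array([0.0, 0.0, 1.0])   # cornea normal at the hit point O'
print(reflect_ray(d, n))        # [0. 0. 1.]
```

In the paper's setup, the hit point O' and the normal n come from an estimated corneal geometry; here they are supplied directly for illustration.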

However, the process is complex and not without challenges. One of these is cornea localization: identifying the precise position and orientation of the cornea is a noisy, error-prone process, which can degrade the quality of the 3D reconstruction.

Another challenge is the complexity of the iris texture. Each iris has a unique, highly detailed pattern that is superimposed on the reflections and complicates their interpretation. Finally, the researchers noted that the reflections captured in each image tend to be low resolution, a further obstacle to detailed reconstruction.

To address these issues, the researchers proposed several techniques. The first is cornea pose optimization, which corrects for noise in the estimated position and orientation of the cornea. The second is iris texture decomposition, which separates the iris's own texture from the scene reflected in it, making the reflections easier to interpret. Finally, they introduced a radial texture regularization loss, an additional training term that encodes a prior on the structure of iris texture, helping the model disentangle it from the reflected scene.
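To make the idea of a radial texture prior concrete, here is a toy regularizer of our own devising (not the paper's exact loss): it scores an iris texture map sampled on a polar grid and penalizes variation along the angular direction, so a texture that depends only on radius incurs zero penalty.

```python
import numpy as np

def radial_texture_regularizer(tex):
    """Toy radial-smoothness penalty on an iris texture map.

    `tex` is a (R, A) array: texture values on a polar grid with R
    radial bins and A angular bins.  The penalty is the mean squared
    difference between angularly adjacent samples (with wrap-around),
    which pushes the texture toward being a function of radius alone.
    Illustrative only; the paper's actual loss may differ in form.
    """
    diff = tex - np.roll(tex, shift=1, axis=1)  # neighbor along the angular axis
    return float(np.mean(diff ** 2))

rng = np.random.default_rng(0)
radial_only = np.repeat(rng.random((8, 1)), 16, axis=1)  # varies only with radius
angular_noise = rng.random((8, 16))                      # varies with angle too
print(radial_texture_regularizer(radial_only))   # 0.0
print(radial_texture_regularizer(angular_noise) > 0.0)   # True
```

During training, a term like this would be added to the rendering loss so the optimizer prefers explaining angular detail as reflected scene content rather than as iris texture.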

Qualitative synthetic results from the research show reasonable reconstructions from challenging measurements. The method reconstructs the 3D geometry of the scene by visualizing the accumulation of the learned radiance field with respect to the camera poses, where accumulation is defined as the integral of density along each camera ray. (Source: arXiv)
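In practice, NeRF-style renderers approximate that integral by quadrature over discrete samples along each ray. A minimal sketch of the standard accumulation computation (variable names are ours):

```python
import numpy as np

def accumulation(sigmas, deltas):
    """Approximate the accumulated opacity of one camera ray.

    Given densities `sigmas` at samples along a ray and the distances
    `deltas` between consecutive samples, the per-sample opacity is
    alpha_i = 1 - exp(-sigma_i * delta_i), and the ray's accumulation
    is sum_i T_i * alpha_i, where the transmittance
    T_i = prod_{j<i} (1 - alpha_j) is the probability the ray reaches
    sample i.  Values near 1 indicate solid geometry; near 0, empty space.
    """
    sigmas = np.asarray(sigmas, dtype=float)
    deltas = np.asarray(deltas, dtype=float)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    return float(np.sum(trans * alphas))

print(accumulation([0.0, 0.0, 0.0], [0.1, 0.1, 0.1]))    # 0.0  (empty ray)
print(accumulation([50.0, 50.0, 50.0], [0.1, 0.1, 0.1]))  # close to 1 (opaque)
```

Rendering this accumulation value per pixel produces the geometry visualizations described above.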

One of the most significant advantages of this method is that it doesn’t require a moving camera to reconstruct the 3D scene. Instead, it relies on the motion of the person whose eye reflections are being used. The researchers have indicated that their method has been effective with real-world data.

The researchers aim to create new possibilities in 3D scene reconstruction by using accidental or unexpected visual signals from the reflections in a person’s eyes. They believe their research has the potential to broaden the field of 3D reconstruction, and they hope their work will inspire more exploration in this area.

Reference

Alzayer, H., Zhang, K., Feng, B., Metzler, C., & Huang, J.-B. (2023). Seeing the world through your eyes. https://doi.org/10.48550/arXiv.2306.09348