Research Projects

Grant
Virtual Reality Simulation of Mobility

09/2020 – 08/2024, Monocular Visual Confusion for Field Expansion, NIH/NEI R01 – EY031777

We are developing a new VR walking simulator using the Unity game engine on a head-mounted display (HMD). The simulator presents a crowded airport terminal or shopping mall scene populated with multiple pedestrians. The subject is asked to fixate on a lead pedestrian while other pedestrians continuously appear at various bearing angles and walking courses.

Doyon J, Hwang A, Jung J-H. (2023). Understanding Viewpoint Changes in Peripheral Prisms for Field Expansion by Virtual Reality Simulation. Biomed Opt Express. Under review. Preprint. Optica Open. DOI

Hwang A, Peli E, Jung J-H. (2023). Development of Virtual Reality Walking Collision Detection Test on Head-mounted display. Proc. SPIE 12449, 124491J. DOI

Grant
Augmented Reality for Field Expansion (Vision Multiplexing)

09/2020 – 08/2024, Monocular Visual Confusion for Field Expansion, NIH/NEI R01 – EY031777

07/2018 – 12/2019, Field Expansion for Acquired Monocular Vision using Multiplexing Prism, Fight for Sight Grant #GA18003

Visual confusion is the core mechanism by which field expansion enables patients with visual field loss to detect collisions during mobility. Although binocular visual confusion using a unilateral device (one eye only) is common, binocular rivalry reduces detection performance during mobility. We develop new monocular visual confusion devices (optical see-through devices) and investigate the visual mechanism of monocular visual confusion, considering contrast reduction, stereoscopic depth cues, and the coherence of motion flows. This mechanism could explain visual perception in augmented reality (AR) and see-through displays.

Grant
Computational Model of Visual Perception and Binocular Vision

09/2020 – 08/2024, Monocular Visual Confusion for Field Expansion, NIH/NEI R01 – EY031777
09/2019 – 08/2024, Visual field expansion through innovative multi-periscopic prism design, NIH/NEI R01 – EY023385 (Co-Investigator)

Visual confusion is the core mechanism by which field expansion enables patients with visual field loss to detect collisions during mobility. Although binocular visual confusion using a unilateral device (one eye only) is common, binocular rivalry reduces detection performance during mobility. We develop new monocular visual confusion devices (optical see-through devices) and investigate the visual mechanism of monocular visual confusion, considering contrast reduction, stereoscopic depth cues, and the coherence of motion flows. This mechanism could explain visual perception in augmented reality (AR) and see-through displays.

Grant
Active Confocal Imaging for Visual Prostheses

01/2016 – 01/2021, Active confocal imaging for visual prostheses, US Department of Defense, W81XWH-16-1-0033
02/2015 – 10/2015, Active confocal imaging for visual prostheses, Grant from the Promobilia Foundation, Sweden #14222

There are encouraging advances in prosthetic vision for the blind, including retinal and cortical implants and other "sensory substitution devices" that use tactile or electrical stimulation. However, they all have low resolution, a limited visual field, and can display only a few gray levels (limited dynamic range), severely restricting their utility. To overcome these limitations, image processing or the imaging system could emphasize objects of interest and suppress background clutter. We propose an active confocal imaging system based on light-field technology that will enable a blind user of any visual prosthesis to efficiently scan, focus on, and "see" only an object of interest while suppressing interference from background clutter.
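The clutter-suppression idea can be illustrated with a minimal depth-gating sketch (the function name, array shapes, and threshold are illustrative assumptions, not the actual confocal pipeline): pixels whose estimated depth lies near the object of interest are kept, and everything else is zeroed out before the image is sent to the low-resolution prosthesis.

```python
import numpy as np

def suppress_clutter(image, depth, target_depth, tolerance):
    """Depth-gating sketch (hypothetical): keep pixels whose estimated
    depth is within `tolerance` of the object of interest; zero out the
    rest so background clutter does not reach the prosthesis display."""
    mask = np.abs(depth - target_depth) <= tolerance
    return np.where(mask, image, 0)

# Example: a 2x2 image with a per-pixel depth map; gate at depth 1.0 m.
img = np.array([[10, 20],
                [30, 40]])
depth = np.array([[1.0, 5.0],
                  [1.1, 9.0]])
focused = suppress_clutter(img, depth, target_depth=1.0, tolerance=0.2)
```

In the actual system the depth estimate comes from light-field (confocal) imaging rather than an explicit depth map, but the gating principle is the same.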

Grant
Light-Field Imaging and Computational Photography

01/2016 – 01/2021, Active confocal imaging for visual prostheses, US Department of Defense, W81XWH-16-1-0033
12/2012 – 12/2013, Study on effect of super multi-view condition in three-dimensional display to accommodation response and improvement of optical vision rehabilitation device, National Research Foundation of Korea, 2012R1A6A3A03038820 (PI)

We have proposed several light-field 3D imaging systems (i.e., integral imaging) that use a micro-lens array to capture and represent the whole set of 3D rays within a single shot of elemental images. These systems permit real-time capture and reconstruction of 3D volume data, as well as confocal image generation. We have developed computational photography algorithms for 3D depth extraction, volume reconstruction, and modification using light-field technology. This project develops and evaluates a novel front-end optical and video-processing system, based on light-field imaging and computational photography, that can be used with any mobile system to remove background clutter and thereby improve object detection and recognition.
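The computational refocusing behind such light-field processing can be sketched with the classic shift-and-sum method (a minimal illustration under assumed array shapes and a hypothetical `slope` parameter, not our actual algorithms): each elemental view is shifted in proportion to its lens offset and the views are averaged, so objects at the selected depth reinforce while out-of-plane content blurs away.

```python
import numpy as np

def refocus(elemental, slope):
    """Shift-and-sum synthetic refocusing sketch.

    elemental: array of shape (U, V, H, W) -- a U x V grid of elemental
               views, each H x W pixels.
    slope:     pixel shift per unit lens offset; choosing `slope`
               selects the focal plane of the output image.
    """
    U, V, H, W = elemental.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2  # center of the lens grid
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round((u - cu) * slope))
            dv = int(round((v - cv) * slope))
            # np.roll shifts the view; a real pipeline would crop the
            # wrapped borders instead of rolling them around.
            out += np.roll(elemental[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Trivial usage: a uniform light field refocuses to the same uniform image.
lf = np.ones((3, 3, 4, 4))
img = refocus(lf, slope=1.0)
```

Depth extraction then amounts to searching over `slope` for the focal plane that maximizes local sharpness; the confocal images used for clutter suppression are generated the same way.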