We are developing a new VR walking simulator in the Unity game engine for head-mounted displays (HMDs). The simulator will present a crowded airport terminal or shopping mall scene populated with multiple pedestrians. The subject will be asked to fixate on a lead person while additional pedestrians continuously appear at various bearing angles and walk along various courses.
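A simulator like this would typically be scripted in Unity C#, but the underlying stimulus geometry can be sketched independently. The following minimal Python sketch (the function name and parameters are illustrative, not from the project code) shows how a pedestrian's velocity can be chosen so that, given an initial bearing angle and distance, the pedestrian is on a collision course with a walker moving straight ahead at constant speed:

```python
import math

def pedestrian_velocity(bearing_deg, distance, walker_speed, time_to_collision):
    """Return the (vx, vz) velocity that puts a pedestrian, first seen at
    `bearing_deg` relative to the walker's heading (+z) and `distance`
    meters away, on a collision course with a walker moving at
    `walker_speed` m/s along +z, colliding after `time_to_collision` s.
    All names and parameters here are hypothetical illustration."""
    theta = math.radians(bearing_deg)
    # Pedestrian start position in the walker's initial reference frame.
    x0, z0 = distance * math.sin(theta), distance * math.cos(theta)
    # Collision point: where the walker will be at the collision time.
    xc, zc = 0.0, walker_speed * time_to_collision
    return ((xc - x0) / time_to_collision, (zc - z0) / time_to_collision)
```

A pedestrian moving at this constant velocity keeps a constant bearing relative to the walker, which is the classic geometric signature of an impending collision.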
Doyon J, Hwang A, Jung J-H (2023). Understanding Viewpoint Changes in Peripheral Prisms for Field Expansion by Virtual Reality Simulation. Biomed Opt Express, under review; preprint on Optica Open. DOI
Hwang A, Peli E, Jung J-H (2023). Development of Virtual Reality Walking Collision Detection Test on Head-Mounted Display. Proc. SPIE 12449, 124491J. DOI
Visual confusion is the core mechanism by which field expansion devices help patients with visual field loss detect collisions during mobility. Binocular visual confusion, produced by a unilateral device (worn over one eye only), is the common approach, but the resulting binocular rivalry reduces collision detection performance during mobility. We are developing new monocular visual confusion devices (optical see-through devices) and investigating the visual mechanism of monocular visual confusion, considering contrast reduction, the provision of stereoscopic depth cues, and the coherency of motion flow. This mechanism could also explain visual perception in augmented reality (AR) and other see-through displays.
There are encouraging advances in prosthetic vision for the blind, including retinal and cortical implants and other "sensory substitution devices" that use tactile or electrical stimulation. However, they all have low resolution and a limited visual field, and can display only a few gray levels (limited dynamic range), severely restricting their utility. To overcome these limitations, image processing or the imaging system itself could emphasize objects of interest and suppress background clutter. We propose an active confocal imaging system based on light-field technology that will enable a blind user of any visual prosthesis to efficiently scan, focus on, and "see" only an object of interest while suppressing interference from background clutter.
We have proposed several light-field 3D imaging systems (i.e., integral imaging) that use a microlens array to capture and represent the whole set of 3D rays within a single shot of elemental images. These systems permit real-time capture and reconstruction of 3D volume data, as well as confocal image generation. We have developed computational photography algorithms for 3D depth extraction, volume reconstruction, and modification using light-field technology. This project develops and evaluates a novel front-end optical and video processing system, based on light-field imaging and computational photography, that can be used with any mobile system to remove background clutter and thereby improve object detection and recognition.
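The confocal idea can be illustrated with the standard shift-and-sum refocusing used in integral imaging. The sketch below (a minimal assumption-laden illustration, not the project's actual pipeline) shifts each elemental image by an amount proportional to its lens offset and a chosen depth slope, so that content at the target depth adds coherently while clutter at other depths blurs out:

```python
import numpy as np

def refocus(elemental, depth_slope):
    """Synthetic-aperture (shift-and-sum) refocusing of an integral-imaging
    capture.

    elemental   : 4-D array (rows, cols, H, W) of elemental images taken
                  behind a microlens array
    depth_slope : per-lens pixel shift selecting the in-focus depth plane
                  (an assumed, simplified parameterization)
    """
    rows, cols, H, W = elemental.shape
    out = np.zeros((H, W), dtype=float)
    for r in range(rows):
        for c in range(cols):
            # Shift each elemental image opposite to its lens offset,
            # scaled by the chosen depth plane, then accumulate.
            dy = int(round((r - rows // 2) * depth_slope))
            dx = int(round((c - cols // 2) * depth_slope))
            out += np.roll(elemental[r, c], (dy, dx), axis=(0, 1))
    # Average so that in-focus content keeps its original intensity.
    return out / (rows * cols)
```

Sweeping `depth_slope` produces a focal stack; keeping only the plane containing the object of interest is the confocal suppression of background clutter described above.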