The Collaborative-Research Augmented Immersive Virtual Environment Laboratory

This project addresses the need for a specialized virtual-reality (VR) system for studying and enabling communication-driven tasks among groups of users who share the same physical space while immersed in a high-fidelity multi-modal environment. While current multi-modal VR systems have achieved a high degree of realism, they focus either on immersing a single user or a very small group, or on presenting material to a larger audience in a cinema-type environment. In both cases, the systems provide homogeneous visual and acoustic fields. For group communication tasks, inhomogeneous fields that provide a personalized visual and acoustic perspective for each user could give better access to relevant information from the VR system’s display and, at the same time, increase the experiential degree of presence and the perceived realism of interactive tasks.

The project addresses the technical hurdles that must be surmounted to establish a large-scale (18 m × 12 m × 4.3 m), multi-user, multi-perspective, multi-modal display. For the visual domain, multiple-point-of-convergence rendering techniques will be used to (re-)create scenes on a seven-projector display. Based on user-positioning data from a hybrid tracking system, optimal points of convergence will be determined such that the majority of users have an undistorted view within their direct visual field, especially at close user proximities, while the distortions that unavoidably arise from accommodating other users are restricted to the peripheral visual field or to areas outside of their vision. Perceptual tests will be conducted to find best practices for adapting visual perspectives to changing user positions.
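
As a minimal illustration of how a point of convergence might be chosen from tracking data, the Python sketch below weights each tracked user by how directly a given screen region falls within their visual field and returns a weighted centroid, pushing residual distortion toward users who only see that region peripherally. The function name, the assumed 60-degree direct-field width, and the weighting scheme are illustrative assumptions, not the project's actual algorithm.

```python
import numpy as np

FOV_DIRECT = np.radians(60.0)  # assumed width of the "direct" visual field

def convergence_point(user_pos, user_heading, screen_pos):
    """Weighted choice of a single point of convergence (illustrative sketch).

    Users whose direct visual field covers the screen region receive full
    weight; users who would only see it peripherally are down-weighted, so
    the unavoidable distortion lands in their peripheral field.

    user_pos     : (N, 2) tracked floor positions in metres
    user_heading : (N,)   head-orientation azimuths in radians
    screen_pos   : (2,)   centre of the screen region being rendered
    """
    to_screen = screen_pos - user_pos                      # (N, 2) vectors
    az = np.arctan2(to_screen[:, 1], to_screen[:, 0])      # bearing to screen
    off_axis = np.abs(np.angle(np.exp(1j * (az - user_heading))))  # wrapped
    w = np.clip(1.0 - off_axis / FOV_DIRECT, 0.0, 1.0)     # 1 = fully direct
    if w.sum() == 0.0:                                     # nobody looking
        w = np.ones(len(user_pos))
    return (w[:, None] * user_pos).sum(axis=0) / w.sum()
```

In a multi-projector setting, such a selection would run per display region, with the perceptual tests mentioned above informing how the weights should change as users move.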

For the acoustic domain, a 192-loudspeaker-channel system will be designed for Wave Field Synthesis (WFS), with the support of Higher-Order Ambisonics (HoA) sound projection, to render inhomogeneous acoustic fields. The acoustic spatial quality of the system will be unprecedented for an AV system of this scale, partly because RPI’s Experimental Media and Performing Arts Center (EMPAC) provides exceptional acoustics in studios with the extremely low noise floor and the unique diffuse sound characteristics needed to provide a neutral basis for the creation of virtual acoustic spaces. A haptic display, consisting of sixteen platform elements, will be used to simulate floor vibrations and will also provide infrastructure for other vibrating objects (e.g., handheld devices).
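
To make the WFS rendering concrete, below is a minimal sketch of the classic delay-and-weight form of a point-source driving function: each loudspeaker channel replays the source signal delayed by the acoustic travel time from the virtual source and attenuated with distance. A production 2.5D WFS driving function additionally applies a spectral pre-equalization filter and a window that selects only the active loudspeakers; the function name and the 0.1 m distance floor here are illustrative assumptions.

```python
import numpy as np

C = 343.0  # speed of sound in air [m/s]

def wfs_point_source(speaker_pos, source_pos, fs=48_000):
    """Per-channel delays and gains for a virtual point source (sketch).

    speaker_pos : (L, 2) loudspeaker positions in metres (L = 192 here)
    source_pos  : (2,)   virtual source position behind the array
    fs          : sampling rate in Hz
    returns     : integer sample delays and linear gains, one per channel
    """
    r = np.linalg.norm(speaker_pos - source_pos, axis=1)  # distances [m]
    delays = np.round(r / C * fs).astype(int)             # travel time [samples]
    gains = 1.0 / np.sqrt(np.maximum(r, 0.1))             # amplitude decay
    return delays, gains
```

Each channel then plays the mono source signal shifted by its delay and scaled by its gain; inhomogeneous fields arise because different virtual sources (and HoA-projected content) can be steered to different regions of the listening area.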

An intelligent position-tracking system estimates current user positions and head orientations, as well as positioning data for other objects. The tracking system uses a hybrid visual/acoustic sensor array to emulate the human ability to extract robust information by combining different modalities. A network of 12 cameras and a 16-channel ambisonic microphone, plus additional peripheral microphones, will be used to this end.
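
One concrete way such a hybrid tracker can combine modalities is sketched below: the first-order components of the ambisonic microphone yield an acoustic bearing via the time-averaged active intensity vector, which stays usable when cameras are occluded, and this bearing can be fused with a camera-derived one by a confidence-weighted circular mean. The function names and the fusion rule are illustrative assumptions; the project's actual estimator is not specified in this summary.

```python
import numpy as np

def bformat_azimuth(w, x, y):
    """Acoustic direction of arrival from first-order ambisonic (B-format)
    signals: the time-averaged active intensity vector points toward the
    dominant source, and its azimuth is a robust bearing estimate.

    w, x, y : equal-length sample arrays of the omni and the two horizontal
              figure-of-eight channels of the ambisonic microphone
    """
    ix = np.mean(w * x)        # x component of active intensity (to a constant)
    iy = np.mean(w * y)        # y component
    return np.arctan2(iy, ix)  # source azimuth in radians

def fuse_bearing(cam_az, cam_conf, mic_az, mic_conf):
    """Confidence-weighted circular mean of visual and acoustic bearings,
    a minimal stand-in for the hybrid tracker's fusion stage."""
    z = cam_conf * np.exp(1j * cam_az) + mic_conf * np.exp(1j * mic_az)
    return np.angle(z)
```

The circular mean avoids the wrap-around problems of averaging raw angles, which matters when the visual and acoustic estimates straddle the ±180° boundary.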

The CRAIVE-Lab project is made possible by funding from the National Science Foundation.