Funding and Partnerships
The Institute for IDEAS collaborates with universities and industry partners worldwide. In addition, faculty affiliated with the Institute for IDEAS participate in a variety of ongoing collaborations and partnerships with NASA, Princeton University, UC Davis, LIGO, and others. If you represent an institution, private or public, that is interested in immersive technologies, please contact us at krzysztof@instituteforideas.org.
The Institute for IDEAS is externally funded by federal research agencies and non-profit foundations. To date, the faculty affiliated with the Institute for IDEAS have raised over $10,000,000 in external funding.
George Washington University
Currently, we have two active research projects with George Washington University. The first project, funded by a $1.5M NSF grant, focuses on using volumetric capture and spatial computing to augment the training process for remote medical personnel. The second project develops an immersive educational system for teaching proportional reasoning that uses brainwave measurements to modulate difficulty.
California State Polytechnic University
Our collaboration with Cal Poly involves a number of immersive educational projects. Faculty at the Institute for IDEAS have coauthored several papers with Cal Poly faculty and are currently working on five research projects, of which three are led by Cal Poly and two by the Institute for IDEAS.
Currently Funded Projects
National Institutes of Health:
Dr. Bei Xiao - $420,000 (common mechanisms and individual variability)
Project: Learning diagnostic latent representations for human material perception
Abstract: Visually discriminating and identifying materials (such as deciding whether a cup is made of plastic or glass) is crucial for many everyday tasks such as handling objects, walking, driving, effectively using tools, and selecting food; and yet material perception remains poorly understood. The main challenge is that a given material can take an enormous variety of appearances depending on the 3D shape, lighting, object class, and viewpoint, and humans have to untangle these to achieve perceptual constancy. This is a basic unsolved aspect of biological visual processing. We use translucent materials such as soap, wax, skin, fruit, and meat as a model system. Previous research reveals a host of useful image cues and finds that 3D geometry interacts with the perception of translucent materials in intricate ways. The discovered image cues often do not generalize across geometry, scenes, and tasks, and are not directly applicable to images of real-world objects. In addition, the stimuli used are small in scale and limited in their diversity (e.g., most look like frosted glass, jade, and marble). Our long-term objectives are to measure the richness of human material perception across diverse appearances and 3D geometries, and to solve the puzzle of how humans identify the distal physical causes from image features. To achieve this goal, we will take advantage of recent progress in unsupervised deep neural networks, combined with human psychophysics, to investigate material perception with a large-scale image data set of real-world materials. Our first specific aim is to identify a latent space that predicts the joint effects of 3D shape and physical material properties on human material discrimination, using unsupervised learning trained with rendered images. We will then investigate how the latent representations in neural networks relate to human perception, for example by comparing the models' predictions of how 3D geometry affects material appearance. Further, we aim to identify image features that are diagnostic of material appearance with respect to fine 3D geometry. Our second specific aim is to investigate high-level semantic material perception from photographs of real-world objects and to characterize the effects of high-level recognition on material rating tasks. To discover a compact representation of large-scale real-world images, we train a style-based generative adversarial network (StyleGAN) on a large number of photographs. Our pilot data suggest that StyleGAN can synthesize highly realistic images and that its latent space can disentangle mid-level semantic material attributes (e.g., "see-throughness"). Therefore, the latent space can help us discover diagnostic image features related to high-level material attributes and individual differences. By manipulating the stimuli using the disentangled dimensions in the latent space, we will measure the effects of global properties (e.g., object class) and local features (e.g., texture) on material perception and test the hypothesis that the individual differences we found in a preliminary study arise from individual variability in high-level scene interpretation. Collectively, these findings will allow us to examine the basic assumptions of mid-level vision by uncovering the task-dependent interplay between high-level vision and mid-level representations, and provide further guidance for seeking neural correlates of material perception.
The methods developed in this proposal, such as comparing human and machine learning models and characterizing individual variability, will also inform other research questions in perception and cognition. The AREA proposal provides a unique multidisciplinary training opportunity for undergraduate students in human psychophysics, machine learning, and image processing. The PI and the students will also investigate novel methods of recruiting under-represented human subjects, such as "peer-recruiting".
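To make the latent-direction technique mentioned in the abstract concrete, here is a minimal, hypothetical sketch: find a direction in a StyleGAN-like latent space that separates two attribute classes, then step stimuli along it. The toy generator, attribute labels, and dimensions below are stand-ins, not the project's actual models or data.

```python
# Hypothetical sketch of manipulating a material attribute (e.g.
# "see-throughness") along a disentangled latent direction. In practice the
# generator would be a StyleGAN trained on material photographs.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 512
W = rng.standard_normal((LATENT_DIM, 64 * 64))  # fixed weights of a toy "generator"

def generator(z):
    """Stand-in for a trained StyleGAN generator: latent code -> 64x64 image."""
    return (z @ W).reshape(64, 64)

# Suppose human raters labeled generated images as translucent vs. opaque.
# The difference of the class-mean latent codes gives a linear direction
# that approximately controls the attribute.
z_translucent = rng.standard_normal((200, LATENT_DIM))
z_opaque = rng.standard_normal((200, LATENT_DIM))
direction = z_translucent.mean(axis=0) - z_opaque.mean(axis=0)
direction /= np.linalg.norm(direction)

# Step a stimulus along the direction to create a graded series of images
# for a psychophysics experiment.
z0 = rng.standard_normal(LATENT_DIM)
stimuli = [generator(z0 + alpha * direction) for alpha in (-2, -1, 0, 1, 2)]
print(len(stimuli), stimuli[0].shape)  # 5 graded stimuli, each 64x64
```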
Dr. Krzysztof Pietroszek - $1,000,000 (volumetric capture)
Project: Acquisition of Volumetric Capture System for the Institute for IDEAS
Abstract: A state-of-the-art, high-quality volumetric capture system will support multidisciplinary research projects at the Institute for Immersive Designs, Experiences, Applications, and Stories (the Institute for IDEAS) at American University in Washington, DC. Volumetric capture is a computer vision technology that enables recording the topology of objects and people rather than, as in traditional video recording, the projection of three-dimensional objects onto a two-dimensional surface. Using virtual or augmented reality headsets or CAVE systems, volumetric captures can be viewed and interacted with from any perspective chosen by the user at the time of viewing. Thus, volumetric captures allow for a “holographic” experience that combines the immersiveness of virtual and augmented reality with the realism of traditional video recording. Volumetric capture presents interesting, yet largely unexplored, research potential across many domains, such as healthcare, education, entertainment, visualization, and communication. Yet, due to the high cost of system acquisition, volumetric capture and its applications are currently being researched only at a handful of research centers (e.g., Stanford’s Virtual Human Interaction Lab, the University of California San Diego, and Georgia State University).
The Institute for IDEAS at American University was created specifically to pursue research projects in immersive technologies, including volumetric capture. The Institute seeks to better understand the affordances and applications offered by volumetric capture technology, to help the public benefit from it, and to make it more accessible and affordable. The acquisition of a volumetric capture system will allow the 24 full-time faculty fellows of the Institute, as well as faculty and students across the university and the region, to pursue multiple multiyear research and creative projects involving volumetric capture, including some that are already funded by NSF, NEH, private foundations, or other federal agencies.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
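As an illustration of the core idea behind volumetric capture described above, the following sketch back-projects each pixel of a depth image through a pinhole camera model into a 3D point, recovering topology instead of a flat 2D projection. This is not the funded system's code; a real rig fuses many calibrated RGB-D cameras, and the intrinsics below are assumed values.

```python
# Illustrative sketch: back-projecting a depth map to a 3D point cloud.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an HxW depth map (meters) to an Nx3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # inverse pinhole model: u = fx * x / z + cx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Synthetic example: a flat surface 2 m in front of a 640x480 camera
# with assumed intrinsics typical of a consumer depth sensor.
depth = np.full((480, 640), 2.0)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3): one 3D point per valid depth pixel
```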
Dr. Krzysztof Pietroszek - $602,000 (telehealth communication research)
Project: Augmenting Remote Medical Procedure Training and Assistance with Spatial Computing and Volumetric Capture
Abstract: Healthcare expenditure accounts for 17% of the US Gross Domestic Product and 12% of the workforce, but access to highly skilled health professionals is not evenly distributed across geographic and socioeconomic strata. Videoconferencing-based telehealth systems partly address this inequality by enabling experts to assist and train remote or rural medical personnel. However, videoconferencing alone cannot adequately convey three-dimensional information that is essential in performing many medical procedures. For example, it is very difficult for an expert to precisely guide an operator's hand remotely in performing a medical procedure using only videoconferencing. This project will transform the way medical personnel communicate and collaborate across distances by allowing for the real-time exchange of three-dimensional information that is missing from current videoconferencing telehealth. The project will lead to more equitable access to healthcare; improved success for medical procedures that require the assistance of a remote expert; more cost-effective distribution of healthcare skills and training; and higher quality expert medical advice from a distance. It will also engage students in interdisciplinary research using emerging technology.
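The following hedged sketch shows the kind of data path such a system needs and plain videoconferencing lacks: shipping per-frame 3D geometry to a remote expert in real time. The wire format here is hypothetical, a length-prefixed buffer of float32 XYZ triples; a production system would add compression, timestamps, and a real-time transport.

```python
# Hypothetical sketch of streaming point-cloud frames between two peers.
import socket
import struct
import numpy as np

def send_point_cloud_frame(sock: socket.socket, points: np.ndarray) -> None:
    """Send one Nx3 float32 point-cloud frame, length-prefixed."""
    payload = points.astype(np.float32).tobytes()
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_point_cloud_frame(sock: socket.socket) -> np.ndarray:
    """Receive one frame written by send_point_cloud_frame."""
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return np.frombuffer(_recv_exact(sock, length), np.float32).reshape(-1, 3)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the socket, or fail loudly."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-frame")
        buf += chunk
    return buf
```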
National Endowment for the Humanities:
Dr. Braxton Boren - $50,000 (reconstructing Bach's acoustics)
Project: Hearing Bach's Music As Bach Heard It
Description: The recreation of the acoustic conditions of the Thomaskirche (St. Thomas Church) in Leipzig, where J.S. Bach worked as cantor, to better understand the relationship between the acoustic clarity of the physical space and Bach’s compositions.
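A standard way to audition a reconstructed acoustic space is auralization: convolving an anechoic ("dry") recording with the room's impulse response. The minimal sketch below assumes an anechoic recording of a Bach piece and a simulated Thomaskirche impulse response are available; the file names are placeholders, and the project's own acoustic-modeling pipeline is not shown.

```python
# Minimal auralization sketch: dry recording * room impulse response.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate, dry = wavfile.read("bach_anechoic.wav")       # placeholder file name
ir_rate, ir = wavfile.read("thomaskirche_ir.wav")   # placeholder simulated IR
assert rate == ir_rate, "recording and impulse response must share a sample rate"

dry = dry[:, 0] if dry.ndim > 1 else dry            # keep the sketch mono
ir = ir[:, 0] if ir.ndim > 1 else ir

# Convolution renders the music as it would sound in the modeled room.
wet = fftconvolve(dry.astype(np.float64), ir.astype(np.float64))
wet /= np.abs(wet).max()                            # normalize to avoid clipping
wavfile.write("bach_in_thomaskirche.wav", rate, wet.astype(np.float32))
```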