Research

At present, our research is mainly concerned with how information from multiple sensory modalities is integrated into a coherent percept of the world. This is an exciting time to study perception, because the field is in the midst of a paradigm shift: for more than a century, perception was viewed as a modular function, with the different sensory modalities operating largely as separate and independent modules. (As a result, multisensory integration was one of the least studied areas of perception research.) Over the last few years, however, accumulating evidence for cross-modal interactions has generated a surge of interest in this field, making it arguably one of the fastest growing areas of perception research and rapidly overturning the long-standing modular view of perceptual processing. Our studies have been among those that started the shift toward an integrated and interactive paradigm of sensory processing.

The general goal of our research is to understand the mechanisms and principles of human perception. Perception in natural environments almost always involves processing information from multiple sensory modalities; therefore, understanding perception requires understanding multisensory integration. Much of our current research investigates learning. Likewise, learning in natural settings almost always occurs in a multisensory environment, so understanding learning requires understanding multisensory learning.

Current Projects

Multisensory memory

We examine how encoding information from multiple sensory modalities affects subsequent recall or recognition of unisensory information.

Team: Arit Glicksohn, Carolyn Murray

Multisensory learning

Learning to detect visual features efficiently (e.g., objects in a crowded scene or a tumor in an X-ray) often requires a long and laborious training period. We explore how incorporating additional sensory modalities into the training protocol affects visual learning.

Team: Arit Glicksohn, Andrew Frane

Perceptual Pleasure

We are studying the interactions between audiovisual perception and enjoyment.

Team: Andrew Frane, Maggie Yeh

Computational Modeling

We develop and test Bayesian models of multisensory perception and learning. We use these models to account for empirical findings, as well as to address basic questions about the nature and characteristics of perceptual processing and learning in health and disease.

We recently released a public beta version of a MATLAB toolbox for our Bayesian causal inference model of multisensory perception. The toolbox can be used to understand the model, as well as to adapt it to account for experimental data from a variety of tasks. Development of this toolbox was sponsored by the NSF.
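For readers unfamiliar with the model, the sketch below illustrates its core computation for spatial localization: given noisy visual and auditory measurements, the observer computes the posterior probability that the two signals share a common cause, and weights the fused and segregated estimates accordingly (after Kording et al., 2007). This is a minimal, self-contained Python sketch, not the toolbox's implementation; all parameter values are illustrative rather than fitted.

import numpy as np

def causal_inference_estimate(x_v, x_a, sigma_v=2.0, sigma_a=8.0,
                              sigma_p=15.0, p_common=0.5):
    """Sketch of Bayesian causal inference for spatial localization,
    with the spatial prior centered at 0. x_v, x_a are noisy visual and
    auditory measurements (deg); all defaults are illustrative."""
    var_v, var_a, var_p = sigma_v ** 2, sigma_a ** 2, sigma_p ** 2

    # Likelihood of the measurement pair under a common cause (C = 1)
    denom = var_v * var_a + var_v * var_p + var_a * var_p
    like_c1 = np.exp(-0.5 * ((x_v - x_a) ** 2 * var_p + x_v ** 2 * var_a
                             + x_a ** 2 * var_v) / denom) / (2 * np.pi * np.sqrt(denom))

    # Likelihood under two independent causes (C = 2)
    like_c2 = (np.exp(-0.5 * x_v ** 2 / (var_v + var_p))
               / np.sqrt(2 * np.pi * (var_v + var_p))
               * np.exp(-0.5 * x_a ** 2 / (var_a + var_p))
               / np.sqrt(2 * np.pi * (var_a + var_p)))

    # Posterior probability that the two signals share one cause
    post_c1 = (like_c1 * p_common
               / (like_c1 * p_common + like_c2 * (1 - p_common)))

    # Optimal auditory location estimates under each causal structure
    s_hat_c1 = (x_v / var_v + x_a / var_a) / (1 / var_v + 1 / var_a + 1 / var_p)
    s_hat_a_c2 = (x_a / var_a) / (1 / var_a + 1 / var_p)

    # Model averaging: combine the estimates by their posterior weights
    s_hat_a = post_c1 * s_hat_c1 + (1 - post_c1) * s_hat_a_c2
    return post_c1, s_hat_a

# Example: signals 10 deg apart. With these illustrative parameters the
# auditory estimate is pulled strongly toward the more reliable visual
# signal, a ventriloquist-like effect.
print(causal_inference_estimate(x_v=5.0, x_a=-5.0))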


Levels of Study

Our research tackles the question of multisensory perception and learning at various levels:

Phenomenology: The aim here is to characterize, at a descriptive level, how the different modalities interact. We investigate these interactions using behavioral experiments.

Brain Mechanisms: The aim here is to determine which brain areas and pathways are involved, what kind of circuitry they form (bottom-up, top-down, etc.), and how each area or mechanism contributes to multisensory perception and learning. We have been using event-related potentials and functional neuroimaging to investigate these questions. We are also collaborating with neurophysiologists making single-unit recordings in awake, behaving monkeys.

Computational Principles: The aim here is to identify the general theoretical rules that govern multisensory perception and learning. Gaining insight into these principles requires a model that can account for the behavioral data, so we have been using statistical modeling for this purpose.
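One well-established example of such a rule (from the broader literature, not a finding specific to our lab) is reliability-weighted integration: when two signals are attributed to the same source, the statistically optimal estimate weights each cue by its relative reliability, i.e., its inverse variance:

\hat{s} = w_V x_V + w_A x_A, \qquad w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_A^2}, \qquad w_A = 1 - w_V

The integrated estimate then has variance \sigma_{VA}^2 = \sigma_V^2 \sigma_A^2 / (\sigma_V^2 + \sigma_A^2), lower than that of either cue alone; Bayesian causal inference generalizes this rule to situations in which a common cause is itself uncertain.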

The Bayesian Causal Inference Toolbox (BCIT), developed in our lab by Dr. Majed Samad with assistant developer Kellienne Sita, is now in beta release and available at https://github.com/multisensoryperceptionlab/BCIT. It is designed for researchers of any background who wish to learn about and/or use the Bayesian causal inference model, and it requires no computational training or skills.

Methods of Study

  • Traditional psychophysics
  • Altered reality system: This is a portable, immersive system that allows the subject to move around, inside or outside the lab, performing daily tasks while the images are altered in real time and projected to the subject’s head-mounted display, in effect altering the subject’s “reality.” This system can be used to investigate how people adapt to changes in the environment (a minimal sketch of such a real-time alteration loop appears after this list).
  • fMRI 
  • tDCS (Transcranial Direct Current Stimulation) 
  • EEG/ERP
  • Computational Modeling
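As a rough illustration of the altered reality method described above, the following sketch shows the skeleton of a real-time alteration loop: camera frames are captured, transformed, and immediately redisplayed. It is a minimal sketch only; the OpenCV library, the mirror-reversal transform, and the desktop window standing in for the head-mounted display are illustrative assumptions, not our actual system.

import cv2

# Capture from the default camera (device index 0 is an assumption).
cap = cv2.VideoCapture(0)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Illustrative alteration: mirror-reverse the visual field.
        altered = cv2.flip(frame, 1)
        # A desktop window stands in here for the head-mounted display.
        cv2.imshow("altered reality (sketch)", altered)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
            break
finally:
    cap.release()
    cv2.destroyAllWindows()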


Applications of Our Research

Our research on learning has important implications for education and rehabilitation. We are currently working to apply our findings and methods to stroke rehabilitation, as well as to training protocols in education.