Research

The general goal of our research is to understand the mechanisms and principles of human perception. Perception in natural environments almost always involves processing information from multiple sensory modalities; therefore, understanding perception requires understanding multisensory integration, and this is the main focus of our research. Much of our current research investigates learning. Again, learning in natural settings almost always occurs in a multisensory environment, so understanding learning requires understanding multisensory learning.

Topics of research

Multisensory Perception

At present, our research is mainly concerned with how information from multiple sensory modalities is integrated into a coherent percept of the world, with a special interest in how visual perception is affected by the other senses. This is an exciting time to study perception, because we are in the midst of a paradigm shift (yes, a paradigm shift!). For more than a century, perception was viewed as a modular function, with the different sensory modalities operating largely as separate and independent modules. (As a result, multisensory integration was one of the least studied areas of perception research.) Over the last few years, however, accumulating evidence for cross-modal interactions has produced a surge of interest in the field, making it arguably one of the fastest growing areas of perception research and rapidly overturning the long-standing modular view of perceptual processing. Our studies have been among those driving this shift towards an integrated and interactive paradigm of sensory processing.

We have also recently discovered that cross-modal interactions play an important role in learning. What is most surprising is that multisensory training improves learning even for single-modality tasks! More specifically, people trained with both auditory and visual stimuli perform better than those trained with visual stimuli alone, even when sound is no longer available. How could this be? We are currently investigating the computational and brain mechanisms of this cross-modal facilitation of learning. Auditory-visual training not only accelerates learning but also increases the magnitude of learning. We think that because learning in natural settings almost always occurs in a multisensory environment, the mechanisms of learning have evolved to operate most effectively under those conditions.

We have also recently found that automatic, implicit learning of regularities in the environment (statistical learning) can occur simultaneously and independently in the visual and auditory domains, as well as across the two modalities. In other words, people can learn three kinds of regularities/associations at the same time and without any interference among them, again consistent with the proposition that learning mechanisms have evolved to operate in multisensory conditions and are most effective (remarkably so) in such conditions.

Levels of Study

Our research tackles the question of multisensory perception and learning at various levels:

Phenomenology: This is to find out how the different modalities interact at a descriptive level. We investigate the phenomenology of these interactions using behavioral experiments.

Brain Mechanisms: This is to find out which brain areas and pathways are involved, in what kind of circuitry (bottom-up, top-down, etc.), and how each area or mechanism contributes to the processes of multisensory perception and learning. We have been using event-related potentials and functional neuroimaging to investigate these questions. We are also collaborating with neurophysiologists who make single-unit recordings in awake, behaving monkeys.

Computational Principles: This is to find out what general theoretical rules govern multisensory perception and learning. Gaining insight into these principles requires a model that can account for the behavioral data, and we have been using statistical modeling for this purpose.
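
As a concrete illustration of the kind of rule we mean (a textbook cue-combination example, not a specific model fitted in our studies): if a visual measurement x_V and an auditory measurement x_A of the same quantity have noise standard deviations sigma_V and sigma_A, the statistically optimal estimate weights each cue by its reliability,

\hat{s} = \frac{\sigma_V^{-2}\, x_V + \sigma_A^{-2}\, x_A}{\sigma_V^{-2} + \sigma_A^{-2}}, \qquad \sigma_{\hat{s}}^{-2} = \sigma_V^{-2} + \sigma_A^{-2},

so the combined estimate is never less reliable than the better single cue. Models of this general form, and richer extensions such as the causal-inference observer sketched later on this page, are the kind we compare against behavioral data.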

Methods of Study

  • Traditional psychophysics
  • Altered reality system: This is a portable, immersive system that allows the subject to move around, inside or outside the lab, performing daily tasks while the images are altered in real time and projected to the subject’s head-mounted display, in effect altering the subject’s “reality.” This system can be used to investigate how people adapt to changes in the environment (see the illustrative sketch after this list).
  • fMRI 
  • tDCS (Transcranial Direct Current Stimulation) 
  • EEG/ERP
  • Computational Modeling
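
The altered reality system mentioned above is built around real-time image manipulation. The following is a minimal sketch of the kind of capture-alter-display loop such a system involves, written with a standard webcam and the OpenCV library purely for illustration; it is not the lab's actual software or hardware setup.

# Minimal illustration of a real-time "altered reality" loop: grab camera
# frames, alter them, and show the altered view to the observer.
# Assumes a webcam at index 0 and the OpenCV package (cv2) installed.
import cv2

cap = cv2.VideoCapture(0)                # camera standing in for a head-mounted camera
try:
    while True:
        ok, frame = cap.read()           # grab the current view of the world
        if not ok:
            break
        altered = cv2.flip(frame, 1)     # example alteration: mirror-reverse left/right
        cv2.imshow("altered view (illustration)", altered)
        if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
            break
finally:
    cap.release()
    cv2.destroyAllWindows()

In the real system the altered images are rendered to a head-mounted display while the subject moves around and performs everyday tasks, which is what makes it possible to study how people adapt to systematically altered input.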

Summary of some of our recent research

Cross-modal interactions

Vision has traditionally been viewed as the dominant sensory modality, operating independently of the other modalities. This view has been changing in light of accumulating evidence that visual percepts are modulated by signals from other modalities. Until recently the shift was slow, because reported cross-modal modulations of the visual percept typically involved only small, incremental changes. The sound-induced flash illusion that we reported (Nature 2000), however, provides grounds for a decisive change in the traditional view by demonstrating a radical alteration of the visual percept, i.e., an alteration of its phenomenological quality. This finding also suggests that such modulations are more commonplace than previously thought.

Brain mechanisms underlying multisensory integration

Our ERP and MEG studies suggest that the neural activity associated with a visual stimulus is modulated by sound at a very short latency and in areas of the brain thought to process only vision. Our functional magnetic resonance imaging (fMRI) studies confirmed that activity in primary visual cortex (V1) is selectively enhanced on trials in which sound alters visual perception (i.e., when the illusion occurs) and not otherwise. This is an astonishingly early level of sensory processing; taken together, these results show that even visual processing, long thought to be quite self-contained, can be modulated by sound at the earliest cortical stage. We are also investigating modulation of cortical areas using tDCS.

Modeling multi-modal perception

Modeling studies are necessary for gaining insight into the principles underlying cross-modal interactions. In multiple tasks, we compared human performance to that of an ideal observer that uses Bayesian statistical inference to infer which stimuli correspond to the same object and which correspond to different objects, and to combine the information from the different modalities into an optimal estimate about the world. We found that human performance is remarkably consistent with that of a Bayesian ideal observer. In other words, it seems that evolution has figured out the best computational strategy for accomplishing these perceptual tasks and found a way of implementing such probabilistic computations in our neural machinery. Astonishing!
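
To make this concrete, here is a minimal sketch of a Bayesian causal-inference ideal observer for an auditory-visual spatial task. It assumes Gaussian measurement noise and a Gaussian prior over locations; the function names and parameter values are illustrative choices for this sketch, not the specific models or fits reported in our papers.

# A minimal Bayesian causal-inference observer for auditory-visual localization.
# Assumes Gaussian likelihoods, a zero-mean Gaussian prior over location,
# and illustrative (not fitted) parameter values.
import numpy as np

def posterior_common_cause(x_v, x_a, sigma_v, sigma_a, sigma_p, p_common):
    """P(one object | visual measurement x_v, auditory measurement x_a)."""
    # Likelihood of the two measurements if they arise from ONE source
    # (the source location is integrated out under the Gaussian prior).
    var1 = (sigma_v**2 * sigma_a**2 + sigma_v**2 * sigma_p**2
            + sigma_a**2 * sigma_p**2)
    like_one = np.exp(-0.5 * ((x_v - x_a)**2 * sigma_p**2
                              + x_v**2 * sigma_a**2
                              + x_a**2 * sigma_v**2) / var1) / (2 * np.pi * np.sqrt(var1))
    # Likelihood if they arise from TWO independent sources.
    var_v = sigma_v**2 + sigma_p**2
    var_a = sigma_a**2 + sigma_p**2
    like_two = (np.exp(-0.5 * x_v**2 / var_v) / np.sqrt(2 * np.pi * var_v)
                * np.exp(-0.5 * x_a**2 / var_a) / np.sqrt(2 * np.pi * var_a))
    # Bayes' rule over the two possible causal structures.
    return like_one * p_common / (like_one * p_common + like_two * (1 - p_common))

def visual_estimate(x_v, x_a, sigma_v=2.0, sigma_a=8.0, sigma_p=15.0, p_common=0.5):
    """Model-averaged estimate of the visual source location."""
    p_one = posterior_common_cause(x_v, x_a, sigma_v, sigma_a, sigma_p, p_common)
    w_v, w_a, w_p = 1 / sigma_v**2, 1 / sigma_a**2, 1 / sigma_p**2
    fused = (w_v * x_v + w_a * x_a) / (w_v + w_a + w_p)  # reliability-weighted fusion
    visual_only = (w_v * x_v) / (w_v + w_p)              # ignore sound if causes differ
    return p_one * fused + (1 - p_one) * visual_only

print(visual_estimate(x_v=5.0, x_a=12.0))

The observer first infers how likely it is that the auditory and visual signals come from the same object, and then weights the fused and unfused estimates by that probability; comparing human judgments against predictions of this kind is how we test whether perception is close to statistically optimal.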

Applications of our research

Our research on learning has important implications for education and rehabilitation. We are currently trying to apply our findings and methods to stroke rehab as well as training protocols in education.