The Auditory Lab

at Carnegie Mellon University

ONGOING RESEARCH

IMPROVING SPATIAL NAVIGATION USING SOUND


We are collaborating with professors in Electrical Engineering at Carnegie Mellon with the aim of improving human spatial navigation using sound. Echoes provide important acoustic information about the environment, information that certain animals (e.g., bats and dolphins) use to great effect for navigation. Because echoes are complex, humans do not normally use echolocation; some blind people, however, do make use of echo information. We are harnessing technology to make echo information accessible to many more blind people by helping them learn to use echoes. Our approach is to offer a free smartphone game that gives players experience navigating a virtual maze using echoes. In the Auditory Lab we are focusing on characterizing human sensitivity to echo information and on how that knowledge can be used to design and improve training programs and devices.
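
The physics a virtual echo can build on is simple: a sound reflecting off a surface at distance d returns after a delay of 2d/c, where c is the speed of sound, and it arrives attenuated. Below is a minimal Python sketch of that relationship; it is not the lab's actual game code, and the inverse-distance spreading loss is a deliberately crude placeholder.

    # Minimal sketch of a single virtual echo: a pulse reflecting off a wall
    # returns after 2*d/c, attenuated with the round-trip distance.
    # The 1/(round trip) spreading model is illustrative, not the game's.

    SPEED_OF_SOUND = 343.0  # m/s, in air at roughly 20 degrees C

    def echo_delay_and_gain(wall_distance_m: float) -> tuple[float, float]:
        """Return (delay in seconds, relative amplitude) for one echo."""
        round_trip = 2.0 * wall_distance_m      # sound travels out and back
        delay = round_trip / SPEED_OF_SOUND     # time until the echo arrives
        gain = 1.0 / max(round_trip, 1.0)       # crude spreading loss
        return delay, gain

    # A listener 3 m from a wall hears the echo about 17.5 ms after the
    # direct sound, at a much lower level.
    delay, gain = echo_delay_and_gain(3.0)
    print(f"delay = {delay * 1000:.1f} ms, relative level = {gain:.2f}")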

Collaborators: Prof. Pulkit Grover and Prof. Bruno Sinopoli, ECE, CMU.

Funding: Google, CMU undergraduate research training award.

If you are interested in participating, click HERE.


AUDIO-MOTOR PRIMING


We are exploring a new form of auditory-motor priming. Motor priming exists if an action is performed more rapidly after the presentation of facilitating cues than after the presentation of interfering cues. We hypothesized that environmental sounds could serve as such cues. To create facilitation, we devised a congruent priming sound similar to the sound that would be made by the gesture about to be performed; to create interference, we devised an incongruent sound that the gesture would not normally make. Using this paradigm, we found evidence of auditory-motor priming between environmental sounds and simple gestures, and the effect generalized across a range of conditions.
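
As a concrete illustration of the paradigm's logic, the Python sketch below simulates the reaction-time comparison: responses are modeled as faster after a congruent cue than after an incongruent one, and priming shows up as the difference in mean reaction time. This is a hypothetical simulation; the gesture names, latencies, and the 30 ms effect size are placeholders, not our findings.

    # Hypothetical simulation of the priming paradigm's trial structure:
    # play a cue sound, time the gesture response, and compare mean
    # reaction times (RTs) for congruent vs. incongruent cues.
    import random
    import statistics

    def run_trial(cue_sound: str, gesture: str) -> float:
        """Simulate one trial's RT in ms (placeholder for real timing)."""
        base_rt = random.gauss(450, 40)   # nominal response latency
        if cue_sound == gesture:          # a congruent cue facilitates...
            return base_rt - 30
        return base_rt + 30               # ...an incongruent cue interferes

    gestures = ["clap", "snap", "knock"]
    congruent = [run_trial(g, g) for g in gestures for _ in range(20)]
    incongruent = [run_trial("clap", g) for g in ["snap", "knock"]
                   for _ in range(30)]

    # Priming is present if congruent trials are reliably faster.
    print(f"congruent mean RT:   {statistics.mean(congruent):.0f} ms")
    print(f"incongruent mean RT: {statistics.mean(incongruent):.0f} ms")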


NEURAL BASIS OF SOUND IDENTIFICATION


We are investigating the cognitive neuroscience of the auditory system's ability to identify the causes of sounds. The experimental question we are addressing is which neural networks are preferentially activated when subjects shift the focus of their attention toward different aspects of the source of sounds.


AUDITORY-VISUAL INTERACTIONS

Current studies are investigating the cognitive parameters that affect the integration of auditory and visual events. For example, sometimes visual and auditory stimuli are simultaneous even though they don't arise from the same event: how do we figure this out? Conversely, sometimes the sights and sounds do belong together even though they are not strictly simultaneous: how do we know to glue them together across time, and what are the limits?
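
One simple way to frame the "glue them together across time" question is a temporal binding window: treat the sight and the sound as one event only if their onsets fall within some tolerance. The Python sketch below illustrates that rule; the 100 ms window is an assumed placeholder for illustration, not a result from these studies.

    # Illustrative "temporal binding window" rule: bind an auditory and a
    # visual onset into one perceived event if they are close enough in time.
    BINDING_WINDOW_S = 0.100  # assumed tolerance; a placeholder value

    def perceived_as_one_event(visual_onset_s: float,
                               audio_onset_s: float) -> bool:
        """Crude rule: bind the two stimuli if their onsets nearly align."""
        return abs(visual_onset_s - audio_onset_s) <= BINDING_WINDOW_S

    print(perceived_as_one_event(1.00, 1.06))  # True: within the window
    print(perceived_as_one_event(1.00, 1.25))  # False: too far apart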

 