The Auditory Lab at Carnegie Mellon University

Copyright 2008, Laurie Heller, Ph.D. Permission to use, copy, modify, and distribute this material for any purpose other than its incorporation into a commercial product or transfer for compensation is hereby granted without fee, provided that the above copyright notice appears in all copies, that Dr. Laurie Heller of Carnegie Mellon University and the support of NSF grant 0446955 are acknowledged in all publications and documents as the source of the material, and that the name of Dr. Laurie Heller not be used in endorsing any product resulting from the use of the material without specific, written prior permission.

Laurie Heller | Department of Psychology | Carnegie Mellon University


The Sound Events Database is a unique collection of recordings of sounds that were made for research purposes. A variety of objects underwent various impacts, scrapes, rolls, and deformations; liquids were dripped, poured, sloshed, and splashed. Every type of sound event includes five exemplars, and each exemplar lasts for several seconds (when possible). All repetitive events were repeated at a steady 2 cycles per second. The recording conditions were similar for all sounds, yielding virtually no differences in background noise or spectral shaping. None of the sounds are clipped, and all were recorded with high-quality equipment with a flat frequency response inside a sound-attenuating chamber treated with acoustic foam wedges (equipment specifications available). Details about all of the recording conditions are documented in the downloads, including videos that show how the actual objects were handled. See the "sound events downloads" tab to download sounds from the NSF-funded Sound Events Database. Recordings of sound events can be downloaded individually or in groups; notes and videos detailing the recording procedure can also be downloaded separately or in a single package with all sounds from the database.

All content available on this site is provided under the terms defined in the LEGAL NOTICE.



We are exploring a new form of auditory-motor priming. Motor priming exists when an action is performed more rapidly after the presentation of facilitating cues than after the presentation of interfering cues. We hypothesized that environmental sounds could serve as such cues. To create facilitation, we devised a congruent priming sound similar to the sound that would be made by the gesture about to be performed; to create interference, we devised an incongruent sound that would not normally be made by that gesture. Using this paradigm, we found evidence of auditory-motor priming between environmental sounds and simple gestures, and the priming effect held over a range of conditions.
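The operational definition above can be sketched as a reaction-time comparison: the priming effect is the mean response time on incongruent-sound trials minus the mean on congruent-sound trials, with positive values indicating facilitation by the congruent sound. The function name and reaction-time values below are purely illustrative, not data or code from the lab.

```python
def priming_effect(congruent_rts, incongruent_rts):
    """Mean RT difference in milliseconds between incongruent and
    congruent trials; positive values indicate facilitation."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(incongruent_rts) - mean(congruent_rts)

# Hypothetical reaction times (ms) for one participant.
congruent = [412, 398, 430, 405]    # gesture preceded by a matching sound
incongruent = [455, 441, 470, 448]  # gesture preceded by a mismatched sound

print(f"Priming effect: {priming_effect(congruent, incongruent):.2f} ms")
```

In practice such an effect would be tested across participants for statistical reliability; this sketch only shows the core contrast.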


We are investigating the cognitive neuroscience of the auditory system's ability to identify the causes of sounds. The experimental question we are addressing is which neural networks are preferentially activated when subjects shift the focus of their attention toward different aspects of the source of sounds.



How is high-level auditory information about our environment organized? There is a strong theoretical basis for connecting auditory perception with events rather than objects. It is a "tree falling in the forest" that is heard, not just the tree. Sound is generated by the physical interactions of objects, surfaces, and substances – in other words, events. The sound waveform contains a great deal of potential information about its source's properties. However, no single acoustic feature specifies a particular object or action. Information about sound sources is complex and time-varying, and it is not known to what degree or in what form it is exploited by human listeners. My research examines the human ability to understand what events are happening in the environment through sound. Perceptual experiments address whether there is an auditory organization of events that can be used to predict psychological phenomena such as prototypes or exaggerations, and whether audition plays a significant role in the perception of multi-modal events. This basic research, funded by the National Science Foundation, will relate psychological performance to acoustic properties and high-level auditory information. The results of this research may have the potential to enhance processing for hearing aids and improve auditory displays, both for virtual reality and for visually impaired computer users. I believe that immersive and interactive human/machine interfaces of the future will need to make advances in auditory interfaces as well as address the interaction between audition and vision.


We tested various hearing aid algorithms to reduce noise and enhance speech intelligibility. This research was funded by the Rhode Island Research Alliance's Science and Technology Advisory Council. We tested combinations of pre-processing strategies to determine which ones provide the most benefit to users. Both normal-hearing and hearing-impaired listeners tried to understand speech under quiet and noisy conditions in the laboratory. The goal was to inform the development of future hearing aids.

PI: Laurie Heller, Ph.D., faculty member in Psychology


Post-doctoral Research Associate:

Guillaume LeMaitre, Ph.D.


Ben Skerritt

Sam Tarakajian

Emily Ammerman (RA)

Kathryn Wiseman

Lauren Wolf

Christine Carmody

Jillian Day

Suzanne Gilman

Karen Sripada

Elena Helman

Adam Ecker

Christine Clancy

Rachel Ostrand

Jason Weber

Molly Ball

Robert Goldman

Matt Simonson (RA)

Esra Aksu (RA)

John Szymanski

Ivayla Ivanova

Soojeong Song

Elizabeth Stancioff

Jayant Bhambhani

Rentaro Matsukata

Nicolas Zuniga

Clara Baron-Hyppolite

Nicole Navolio (RA)

Michael Kashaf

Matt Lehet

Amira Millette




Contact the lab by email at auditory.lab 'at' to ask about opportunities for students to help conduct research, or to participate as a listener in experiments.

CMU's Department of Psychology