
Auditory Perception Lab at Carnegie Mellon University

Last modified 2009-06-26 by admin.

Copyright 2008, Laurie Heller, Ph.D. Permission to use, copy, modify, and distribute this material for any purpose other than its incorporation into a commercial product or transfer for compensation is hereby granted without fee, provided that the above copyright notice appears in all copies, that Dr. Laurie Heller of Carnegie Mellon University and the support of NSF grant 0446955 are acknowledged in all publications and documents as the source of the material, and that the name of Dr. Laurie Heller not be used in endorsing any product resulting from the use of the material without specific, written prior permission.

Laurie Heller | Department of Psychology | Carnegie Mellon University

The Sound Events Database is a unique collection of recordings of sounds that were made for research purposes. A variety of objects underwent various impacts, scrapes, rolls, and deformations; liquids were dripped, poured, sloshed and splashed. Every type of sound event includes five exemplars, and each exemplar lasts for several seconds (when possible). All repetitive events were repeated at a steady 2 cycles per second. The recording conditions were similar for all sounds, yielding virtually no differences in background noise or spectral shaping. None of the sounds are clipped and all were recorded with high-quality equipment with a flat frequency response inside of a sound-attenuating chamber treated with acoustic foam wedges (equipment specifications available). Details about all of the recording conditions are documented in the downloads, including videos that show how the actual objects were handled.
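Since the documentation states that none of the recordings are clipped, a quick way to sanity-check a downloaded file is to compare its peak sample against digital full scale. The sketch below is illustrative only, not part of the database's tooling: the signals are synthetic stand-ins for 16-bit PCM recordings, and the `is_clipped` helper and its threshold are assumptions.

```python
import math

FULL_SCALE = 32767  # maximum magnitude for 16-bit PCM samples

def is_clipped(samples, threshold=0.999):
    """Flag a recording whose peak reaches (near) digital full scale."""
    peak = max(abs(s) for s in samples)
    return peak >= threshold * FULL_SCALE

# A well-recorded signal peaks comfortably below full scale...
clean = [int(0.5 * FULL_SCALE * math.sin(2 * math.pi * 440 * t / 44100))
         for t in range(44100)]
print(is_clipped(clean))    # prints False

# ...while an over-driven one slams into the rails.
clipped = [min(max(s * 3, -FULL_SCALE), FULL_SCALE) for s in clean]
print(is_clipped(clipped))  # prints True
```

In practice one would read the sample values from a downloaded WAV file (e.g. with Python's standard `wave` module) rather than synthesize them as above.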

See the "sound events downloads" tab to download sounds from the NSF-funded Sound Events Database. Recordings of sound events can be downloaded individually or in groups; notes and videos detailing the recording procedure can also be downloaded separately or in a single package with all sounds from the database.

All content available on this site is provided under the terms defined in the LEGAL NOTICE.



How is high-level auditory information about our environment organized? There is a strong theoretical basis for connecting auditory perception with events rather than objects. It is a "tree falling in the forest" that is heard, not just the tree. Sound is generated by the physical interactions of objects, surfaces, and substances – in other words, events. The sound waveform contains a great deal of potential information about its source's properties. However, no single acoustic feature specifies a particular object or action. Information about sound sources is complex and time-varying, and it is not known to what degree or in what form it is exploited by human listeners. My research examines the human ability to understand what events are happening in the environment through sound. Perceptual experiments address whether there is an auditory organization of events that can be used to predict psychological phenomena such as prototypes or exaggerations, and whether audition plays a significant role in the perception of multi-modal events. This basic research, funded by the National Science Foundation, will relate psychological performance to acoustic properties and high-level auditory information. The results of this research may have the potential to enhance processing for hearing aids and improve auditory displays, both for virtual reality and for visually impaired computer users. I believe that immersive and interactive human/machine interfaces of the future will need to make advances in auditory interfaces as well as address the interaction between audition and vision.


We are testing various hearing aid algorithms to reduce noise and enhance speech intelligibility. This research has been funded by the Rhode Island Research Alliance's Science and Technology Advisory Council. We are testing combinations of pre-processing strategies to determine which ones provide the most benefit to users. Both normal-hearing and hearing-impaired listeners are trying to understand speech under quiet and noisy conditions in the laboratory. Ultimately, the results will influence development of future hearing aids.
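Presenting speech under "quiet and noisy conditions" typically means mixing a speech token with noise at a controlled signal-to-noise ratio (SNR). The sketch below illustrates that standard mixing step; it is an assumption-laden illustration, not the lab's actual procedure, and the `mix_at_snr` helper and synthetic signals are hypothetical.

```python
import math

def rms(samples):
    """Root-mean-square level of a sample sequence."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def mix_at_snr(speech, noise, snr_db):
    """Add noise to speech, scaling the noise to hit a target SNR in dB."""
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return [s + gain * n for s, n in zip(speech, noise)]

# Synthetic stand-ins for a speech token and a noise segment (8 kHz rate).
speech = [math.sin(2 * math.pi * 200 * t / 8000) for t in range(8000)]
noise = [math.sin(2 * math.pi * 1333 * t / 8000 + 0.7) for t in range(8000)]

mixture = mix_at_snr(speech, noise, snr_db=5.0)  # speech 5 dB above noise
```

Sweeping `snr_db` from easy (high) to hard (low) values is the usual way to map out intelligibility curves for each processing strategy.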

PI: Laurie Heller, Ph.D., faculty member in Psychology


Contact by email: auditory.lab 'at' to ask about opportunities for students to help conduct research, or to participate as a listener in experiments.

CMU's Department of Psychology


  • Mouth Sounds - The website of Fred Newman (Foley artist from A Prairie Home Companion)


Ben Skerritt
Sam Tarakajian
Emily Ammerman (RA)
Kathryn Wiseman
Lauren Wolf
Christine Carmody
Jillian Day
Suzanne Gilman
Karen Sripada
Elena Helman
Adam Ecker
Christine Clancy
Rachel Ostrand
Jason Weber
Molly Ball
Robert Goldman
Matt Simonson (RA)
Esra Aksu (RA)
John Szymanski
Ivayla Ivanova
Soojeong Song
Elizabeth Stancioff
Sound event research and database development funded by NSF (grant 0446955).

