CMU Language Technologies Institute Colloquium: Talk by Josh McDermott, MIT – March 1, CMU Wean Hall 7500, 2:30

Title: Understanding audition via sound analysis and synthesis
Speaker: Josh McDermott, Department of Brain and Cognitive Sciences, MIT

Date: 1 March 2013
Time: 2:30-3:30 PM
Location: WEH 7500, CMU

Abstract:

Humans infer many important things about the world from the sound
pressure waveforms that enter the ears. In doing so we solve a number
of difficult and intriguing computational problems. We recognize sound
sources despite large variability in the waveforms they produce,
extract behaviorally relevant attributes that are not explicit in the
input to the ear, and do so even when sound sources are embedded in
dense mixtures with other sounds. This talk will describe my recent
work investigating how we accomplish these feats. The work stems from
two premises: first, that understanding perception requires
understanding real-world sensory stimuli and their representation in
the brain, and second, that a theory of the perception of some
property should enable the synthesis of signals that appear to have
that property. Sound synthesis can thus be used to probe phenomena
inaccessible to conventional experimental methods. I will discuss two
related strands of research along these lines. The first strand uses
sound textures (as produced by rain, swarms of insects, or galloping
horses) as a window into the auditory system, synthesizing textures
from statistics of biological sound representations as tests of the
perceptual relevance of different acoustic measurements. The second
strand uses naturalistic synthetic sounds to reveal new aspects of
sound segregation. Together they indicate that simple statistical
properties of auditory representations capture a surprising number of
important perceptual phenomena.
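
To give a flavor of the synthesis approach described above: the general recipe is to measure statistics from a model of the auditory periphery applied to a real texture recording, and then shape noise until its statistics match. The Python sketch below is a heavily simplified illustration of that recipe, not the speaker's actual algorithm; the filterbank, the choice of statistics (just the envelope mean and standard deviation of each subband), and the single-pass imposition step are all placeholder assumptions, whereas the published method matches a much richer set of statistics iteratively.

    # Minimal sketch of statistics-based texture synthesis, loosely in the
    # spirit of the approach described in the abstract (not the actual code).
    # Assumptions: mono numpy-array signals at sample rate sr (> 16 kHz so
    # the top band sits below Nyquist); crude filterbank and statistics.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def subband_filterbank(x, sr, n_bands=8, lo=80.0, hi=8000.0):
        """Split x into log-spaced bandpass subbands (a simple stand-in
        for a cochlear decomposition)."""
        edges = np.geomspace(lo, hi, n_bands + 1)
        return [sosfiltfilt(butter(4, [f1, f2], btype="band", fs=sr,
                                   output="sos"), x)
                for f1, f2 in zip(edges[:-1], edges[1:])]

    def synthesize(target, sr, n_bands=8, seed=0):
        """Shape noise so each subband envelope's mean and std match the
        target texture's; the real method uses many more statistics."""
        noise = np.random.default_rng(seed).standard_normal(len(target))
        out = np.zeros(len(target))
        for tb, nb in zip(subband_filterbank(target, sr, n_bands),
                          subband_filterbank(noise, sr, n_bands)):
            t_env = np.abs(hilbert(tb))          # target subband envelope
            n_env = np.abs(hilbert(nb))          # noise subband envelope
            fine = nb / (n_env + 1e-9)           # noise fine structure
            # Rescale the noise envelope to the target's mean and std,
            # then impose it on the fine structure and recombine bands.
            new_env = t_env.mean() + (n_env - n_env.mean()) * (
                t_env.std() / (n_env.std() + 1e-9))
            out += fine * np.clip(new_env, 0.0, None)
        return out

    # Hypothetical usage: rain_clip is a mono recording sampled at 32 kHz.
    # texture = synthesize(rain_clip, sr=32000)

Listening to outputs of this kind (does matched-statistics noise actually sound like rain?) is the sort of test the abstract refers to when it speaks of probing the perceptual relevance of different acoustic measurements.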

Speaker Bio:

Josh McDermott is a perceptual scientist studying sound, hearing, and
music. His research investigates human and machine audition using
tools from experimental psychology, engineering, and neuroscience. He
is particularly interested in using the gap between human and machine
competence to both better understand biological hearing and design
better algorithms for analyzing sound.

McDermott obtained a BA in Brain and Cognitive Science from Harvard,
an MPhil in Computational Neuroscience from University College London, a PhD in Brain and Cognitive Science from MIT, and postdoctoral training in psychoacoustics at the University of Minnesota and in computational neuroscience at NYU. He is currently an Assistant Professor in the Department of Brain and Cognitive Sciences at MIT.
