Matematiktorvet, Building 321, Room 127
2800 Kgs. Lyngby
When we listen to somebody talking, we rely not only on hearing; we also see speech.
Talking faces carry speech information, which is helpful when the acoustic signal is degraded, e.g. in noisy surroundings or for the hearing-impaired. For instance, vision may give cues as to when there is speech to listen for against a noisy background.
But vision interacts with hearing even in normal, quiet conditions.
In the so-called McGurk effect, visual lip movements change the way we hear an acoustic speech token. This is another phenomenon of audiovisual integration, here one that changes the phonetic percept. But is this due to vision changing the way the brain processes sensory input from our ears, or to an integration of two conflicting pieces of phonetic information?
My project investigates what actually happens when vision changes the way we hear.
To start answering this question, we need to know how early in the underlying brain processes vision starts to merge into hearing. EEG is well suited to measuring this, as it allows accurate tracking of brain processes down to the millisecond.
The aim is to be able to map where hearing and vision meet in the brain and how they interact.
2003: B.A. (philosophy), University of Aarhus
2007: B.Sc. (psychology), University of Copenhagen
2010: M.Sc. (psychology) / Cand.psych., University of Copenhagen
2011 - : Guest PhD student, Center for Applied Hearing Research (CAHR), DTU Elektro