Watch this space and the PLOS ONE website for a forthcoming article by Molly Henry and me:
Dissociable neural response signatures for slow amplitude and frequency modulation in human auditory cortex
Harking back to what we had argued initially in our 2012 Frontiers op-ed piece (together with Björn Herrmann), Molly presents neat evidence for dissociable cortical signatures of slow amplitude versus frequency modulation. These cortical signatures potentially provide an efficient means to dissect simultaneously communicated slow temporal and spectral information in acoustic communication signals.
German public television broadcaster 3sat featured our research on neural oscillations (see our PNAS paper) in its series nano.
Unfortunately, it’s only in German, but have fun watching it:
[Update] If the embedded video is not working for you, watch it on the 3sat website (Flash).
Thalamic and parietal brain morphology predicts auditory category learning
Categorizing sounds is vital for adaptive human behavior. Changing listening situations (external noise, but also peripheral hearing loss in aging) require listeners to flexibly adjust their categorization strategies, e.g., by switching among available acoustic cues. However, listeners differ considerably in these adaptive capabilities. For this reason, we employed voxel-based morphometry (VBM) in our study (Neuropsychologia, in press) to assess the degree to which individual brain morphology is predictive of such adaptive listening behavior.
Oscillatory Phase Dynamics in Neural Entrainment Underpin Illusory Percepts of Time
Natural sounds like speech and music inherently vary in tempo over time. Yet contextual factors, such as variations in the sound’s loudness or pitch, can bias the perception of temporal rate change towards slowing down or speeding up.
A new MEG study by Björn Herrmann, Molly Henry, Maren Grigutsch, and Jonas Obleser asked which neural oscillatory dynamics underpin context-induced illusions of temporal rate change. They found that illusory percepts were linked to changes in the phase patterns of entrained neural oscillations, whereas the exact frequency of the oscillatory response was related to veridical percepts.
The paper is in press at The Journal of Neuroscience and is available online.
When we listen to sounds like speech and music, we have to make sense of different acoustic features that vary simultaneously along multiple time scales. This means that we, as listeners, have to selectively attend to, while at the same time selectively ignoring, separate but intertwined features of a stimulus.
A newly published fMRI study by Molly Henry, Björn Herrmann, and Jonas Obleser found a network of brain regions that responded oppositely to identical stimulus characteristics depending on whether those characteristics were task-relevant or task-irrelevant, even when both stimulus features required attention to time and temporal structure.
You can check out the article here: