Understanding degraded speech leads to perceptual gating of a brainstem reflex in human listeners
Dataset
Posted on 2022-06-10, 02:33. Authored by Heivet Hernandez-Perez, Jason Mikiel-Hunter, David McAlpine, Sumitrajit Dhar, Sriram Boothalingam, Jessica J.M. Monaghan, Catherine M. McMahon.
The ability to navigate “cocktail-party” situations by focussing on sounds of interest over irrelevant, background sounds is often considered in terms of cortical mechanisms. However, subcortical circuits such as the pathway underlying the medial olivocochlear (MOC) reflex modulate the activity of the inner ear itself, supporting the extraction of salient features from the auditory scene prior to any cortical processing. To understand the contribution of auditory subcortical nuclei and the cochlea in complex listening tasks, we made physiological recordings along the auditory pathway while listeners engaged in detecting non(sense)-words in lists of words. Both naturally spoken and intrinsically noisy, vocoded speech—filtering that mimics processing by a cochlear implant—significantly activated the MOC reflex, but this was not the case for speech in background noise, which instead engaged midbrain and cortical resources to a greater degree. A model of the initial stages of auditory processing reproduced the specific effects of each form of speech degradation, providing a rationale for goal-directed gating of the MOC reflex based on enhancing the representation of the energy envelope of the acoustic waveform. Our data reveal the co-existence of two strategies in the auditory system that may facilitate speech understanding in situations where the signal is either intrinsically degraded or masked by extrinsic acoustic energy. Whereas intrinsically degraded streams recruit the MOC reflex to improve the peripheral representation of speech cues, extrinsically masked streams rely more on higher auditory centres to de-noise signals.
Methods
The dataset includes all sound and sentence wav-files used to generate auditory-nerve spikes. The model of the auditory periphery and auditory brainstem (MAP_BS) is the work of the Meddis group; a copy is included in this dataset (MAP_BSpublic_forDryad.zip), and earlier versions of the model (alongside further information) can be found at http://essexpsychology.webmate.me/HearingLab/modelling.html.
Please refer to https://www.researchgate.net/publication/307583615_MAP-BSa_Matlab_Auditory_Processing_software_platform_for_studying_Auditory_BrainStem_activity for further information about MAP_BS and its origins.
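As a rough illustration of how the stimulus wav-files above can be prepared in MATLAB before being passed to the model, a minimal sketch follows. The file name and the 100-kHz model sampling rate are assumptions for illustration only; the actual MAP_BS entry point and calibration conventions should be taken from the instructions bundled with the archives.

% Minimal sketch of loading a stimulus wav-file (illustrative only).
[x, fs] = audioread('example_word.wav');   % placeholder file name, not a guaranteed dataset file
x = x(:, 1);                               % keep a single channel
fsModel = 100e3;                           % assumed model sampling rate
x = resample(x, fsModel, fs);              % resample to the assumed model rate
x = x / max(abs(x));                       % normalise amplitude; scale to the desired level afterwards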
Two analysis files are included to compute shuffled auto-/cross-correlograms of either individual words (the word lists used in the article are in 'Balanced_NS_and_S_lists.mat') or Mava Corpus sentences (https://app.alveo.edu.au/catalog/mava).
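For orientation, below is a minimal MATLAB sketch of a shuffled autocorrelogram computed from a set of simulated auditory-nerve spike trains. It illustrates the general technique only; the variable names, bin settings, and stimulus duration are assumptions, not those used in the bundled analysis files.

% spikeTimes is assumed to be a cell array with one vector of spike times
% (in seconds) per simulated fibre/repetition.
binWidth = 50e-6;                          % delay-bin width (assumed)
maxDelay = 5e-3;                           % analyse delays up to +/- 5 ms (assumed)
edges    = -maxDelay:binWidth:maxDelay;
counts   = zeros(1, numel(edges) - 1);

nTrains = numel(spikeTimes);
for i = 1:nTrains
    for j = 1:nTrains
        if i == j, continue; end                         % shuffle: skip within-train intervals
        delays = spikeTimes{i}(:) - spikeTimes{j}(:)';    % all pairwise spike-time differences
        delays = delays(abs(delays) <= maxDelay);
        counts = counts + histcounts(delays, edges);
    end
end

% Normalise by the expected pair count per bin for independent trains,
% so an uncorrelated (Poisson-like) response tends towards 1.
D = 1.0;                                   % stimulus duration in seconds (assumed)
r = cellfun(@numel, spikeTimes) / D;       % per-train firing rates
sac = counts / ((sum(r)^2 - sum(r.^2)) * binWidth * D);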
Usage Notes
Please read README.docx for an overview of the source data in this dataset. Reading material explaining the workings of the MAP_BS auditory-nerve model is also included (in MAP_BSpublic_forDryad.zip). Please unzip Additional_PlosBio_files_for_MAP_BS.zip for the custom m-files and instructions on how to run the simulations/analysis for individual words.
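As a rough starting point, the MATLAB snippet below unpacks both archives and adds them to the path. The extracted folder names are assumptions (they depend on the archive contents), so README.docx remains the authoritative guide.

% Minimal setup sketch, assuming both zip files sit in the current folder.
unzip('MAP_BSpublic_forDryad.zip');                          % MAP_BS model code and documentation
unzip('Additional_PlosBio_files_for_MAP_BS.zip');            % custom m-files and run instructions

addpath(genpath('MAP_BSpublic_forDryad'));                   % assumed extracted folder name
addpath(genpath('Additional_PlosBio_files_for_MAP_BS'));     % assumed extracted folder name

% Word lists used in the article (see Methods)
lists = load('Balanced_NS_and_S_lists.mat');
disp(fieldnames(lists));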