Cross-modal activity in adults with cochlear implants: A multimodal brain perspective
Auditory speech understanding varies widely among adult cochlear implant (CI) users, particularly when listening in background noise. Well-established peripheral factors account for only a small proportion of the variability in CI outcomes, and there is increasing interest in understanding contributions from cortical factors. Sensory deprivation can lead to changes in cortical organisation, whereby an anatomically distinct area becomes responsive to an alternate ‘intact’ sensory input. This cross-modal activity, or ‘plasticity’, has been described in cases of congenital deafness and has been proposed to limit functional listening ability after sensory restoration. It is not yet clear, however, how this applies to adults with post-lingually acquired hearing loss. Further, conventional neuroimaging methods such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) are limited in their capacity to capture cortical activity in CI users because the implant itself introduces both electrical and magnetic artefacts. The studies presented in this thesis employed an emerging, non-invasive and CI-compatible technique called functional near-infrared spectroscopy (fNIRS).
Studies investigating cross-modal activity in post-lingually deaf CI users have reached no consensus on how this activity relates to auditory speech understanding, particularly on whether it is ‘adaptive’ or ‘maladaptive’. Viewing the brain as a network of multimodal, interregional connections, rather than adopting a ‘sensory-specific’ view of cortical regions, and drawing on empirical evidence from animal models, this thesis (comprising two experimental chapters and one conceptual review) argues that cross-modal manifestations in post-lingually deaf CI users might be better explained by a more holistic consideration of everyday speech processing at the cortical level: one that encompasses a multimodal representation of language and communication rather than solely a series of acoustic events.
Study 1. The first study aimed to assess the effect of differences in the stimuli used across studies, from ‘low-level’ visual gratings to ‘high-level’ connected speech, hypothesising that different stimuli elicit different cortical response patterns (presumably through engagement of different functional networks), and that this could account for the diverse relationships, or lack thereof, reported between cortical activity and measures of speech understanding. Haemodynamic responses to both auditory and visual ‘speech’ and ‘non-speech’ stimuli were measured in auditory and visual regions of interest (ROIs) in the same cohort of post-lingually deaf CI users and a cohort of age-matched, normal-hearing (NH) controls. Results demonstrated stimulus-specific differences in activation patterns to speech versus non-speech stimuli in both CI and NH participants, and evidence of comparable cross-modal responses in auditory areas to visual speech in both groups. Further, a significant positive relationship was observed between these cross-modal responses to visual speech and lip-reading ability in CI users. There were no significant correlations, however, between the degree of cross-modal activity in auditory or visual regions and auditory speech understanding.
Study 2. The second study aimed to investigate this task-evoked cortical activity in terms of functional connectivity between auditory and visual cortical regions, hypothesising that the cross-modal activation to visual speech observed in CI users in the first study could reflect connections between auditory and visual cortical regions operating as part of a multimodal speech-processing network. Functional connectivity, specifically the coherence of haemodynamic responses to speech and non-speech stimuli, was examined between auditory and visual ROIs. Results revealed a trend towards enhanced auditory-visual cortical connectivity in response to speech relative to non-speech stimuli in the CI users; in particular, this enhanced connectivity in response to visual speech correlated significantly with speech-in-noise understanding in the CI users.
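For readers unfamiliar with coherence as a connectivity measure, the following is a minimal, hedged sketch of how magnitude-squared coherence between two haemodynamic time series (e.g., an auditory and a visual ROI) can be estimated in Python. The sampling rate, signal content and frequency band are illustrative assumptions only; this is not the analysis pipeline used in the thesis.

```python
# Illustrative only: generic magnitude-squared coherence between two synthetic
# haemodynamic time series standing in for an auditory and a visual ROI.
import numpy as np
from scipy.signal import coherence

fs = 10.0                      # assumed fNIRS sampling rate (Hz), hypothetical
t = np.arange(0, 300, 1 / fs)  # 5 minutes of data

# Placeholder signals in lieu of preprocessed HbO concentration changes.
auditory_roi = np.sin(2 * np.pi * 0.05 * t) + 0.5 * np.random.randn(t.size)
visual_roi = np.sin(2 * np.pi * 0.05 * t + 0.3) + 0.5 * np.random.randn(t.size)

# Magnitude-squared coherence as a function of frequency (Welch's method).
freqs, Cxy = coherence(auditory_roi, visual_roi, fs=fs, nperseg=256)

# Summarise connectivity in a slow band broadly associated with task-evoked
# haemodynamic responses (the 0.01-0.1 Hz range here is an arbitrary choice).
band = (freqs >= 0.01) & (freqs <= 0.1)
print(f"Mean coherence (0.01-0.1 Hz): {Cxy[band].mean():.2f}")
```

In this framing, higher band-limited coherence between the two ROIs during a given stimulus condition would be read as stronger auditory-visual functional connectivity for that condition.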
Together, these findings support existing evidence that activation of auditory brain regions typically associated with speech processing can also be elicited by purely visual stimuli, even in individuals without sensory deprivation. They therefore have implications for the interpretation of cross-modal activity in auditory areas and its relationship with functional speech outcomes in post-lingually deaf CI users, particularly when language-based stimuli are used. In particular, they highlight that examining task-related activity between brain regions using connectivity measures may be an important complement to purely amplitude-based measures within brain regions when disentangling cross-modal activity, and cortical processing in general, in line with a ‘network-based’ view of brain processing. Finally, to contextualise these results, a conceptual review of the current literature on cross-modal activity in auditory areas in post-lingually deaf CI users was conducted, applying a multimodal perspective of the sensory brain: conceptualising the auditory cortex as one component of a system of complex, distributed functional networks that may be activated in a ‘function- or task-specific’ manner, regardless of the modality of input, e.g., a multimodal network for processing speech.