
Neural encoding of sound

From Wikipedia, the free encyclopedia

The neural encoding of sound is the representation of auditory sensation and perception in the nervous system.[1] As the field of neuroscience continues to develop, the understanding of the auditory system is continually being refined. The encoding of sound includes the transduction of sound waves into electrical impulses (action potentials) along auditory nerve fibers and their further processing in the brain.

Basic physics of sound


Sound waves are what physicists call longitudinal waves, which consist of propagating regions of high pressure (compression) and corresponding regions of low pressure (rarefaction).

Waveform


A waveform is a description of the general shape of a sound wave. Waveforms are sometimes described as sums of sinusoids, obtained via Fourier analysis.
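
As an illustration of the Fourier description, the short sketch below builds a waveform from two sinusoids and recovers their frequencies and amplitudes with a discrete Fourier transform. The sampling rate, component frequencies, and amplitudes are arbitrary illustrative values, not quantities taken from this article.

```python
# Illustrative only: compose a waveform from two sinusoids and recover
# their frequencies and amplitudes via Fourier analysis.
import numpy as np

fs = 8000                        # sampling rate in Hz (arbitrary choice)
t = np.arange(0, 1.0, 1.0 / fs)  # one second of samples

# Waveform as a sum of sinusoids: 440 Hz and 880 Hz components (arbitrary)
wave = 1.0 * np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

spectrum = np.fft.rfft(wave)                  # one-sided spectrum
freqs = np.fft.rfftfreq(len(wave), 1.0 / fs)  # frequency of each bin
amplitudes = 2 * np.abs(spectrum) / len(wave)

# The two dominant components should come back as ~440 Hz and ~880 Hz
for i in np.argsort(amplitudes)[-2:]:
    print(f"{freqs[i]:.0f} Hz, amplitude {amplitudes[i]:.2f}")
```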

Amplitude

Graph of a simple sine wave

Amplitude is the size (magnitude) of the pressure variations in a sound wave, and primarily determines the loudness with which the sound is perceived. In a sinusoidal function such as y(t) = C sin(2πft), where f is the frequency and t is time, C represents the amplitude of the sound wave.

Frequency and wavelength


The frequency of a sound is defined as the number of repetitions of its waveform per second, and is measured in hertz; frequency is inversely proportional to wavelength (in a medium of uniform propagation velocity, such as sound in air). The wavelength of a sound is the distance between any two consecutive matching points on the waveform. The audible frequency range for young humans is about 20 Hz to 20 kHz. Sensitivity to higher frequencies declines with age, with the upper limit falling to about 16 kHz in adults and sometimes as low as 3 kHz in the elderly.[citation needed]
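
As a worked example of the inverse relationship, taking the speed of sound in air to be roughly 343 m/s (a standard value near 20 °C, not stated in this article), the relation λ = v / f gives λ ≈ 343 / 20 ≈ 17 m at 20 Hz and λ ≈ 343 / 20 000 ≈ 1.7 cm at 20 kHz, spanning the audible range.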

Anatomy of the ear

How sounds make their way from the source to the brain
Flowchart of sound passage - outer ear

With the basic physics of sound established, the anatomy and physiology of hearing can be examined in greater detail.

Outer ear


The outer ear consists of the pinna or auricle (the visible parts, including the ear lobe and concha) and the auditory meatus (the passageway for sound). The fundamental function of this part of the ear is to gather sound energy and deliver it to the eardrum. Resonances of the external ear selectively boost sound pressure at frequencies in the range of 2–5 kHz.[2]
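
The 2–5 kHz boost can be rationalized, at least roughly, as a quarter-wavelength resonance of the ear canal. Assuming a canal length of about 2.5 cm and a sound speed of 343 m/s (textbook approximations, not figures from the cited source), the resonant frequency is f ≈ v / (4L) = 343 / (4 × 0.025) ≈ 3.4 kHz, which falls within the quoted range.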

Because of its asymmetrical structure, the pinna provides further cues about the elevation from which a sound originated. The vertical asymmetry of the pinna selectively amplifies higher-frequency sounds arriving from high elevations, thereby providing spatial information by virtue of its mechanical design.[2][3]

Middle ear

Flowchart of sound passage - middle ear

The middle ear plays a crucial role in the auditory process, as it converts pressure variations in air into perturbations in the fluids of the inner ear. In other words, it is the mechanical transfer function that allows efficient transfer of collected sound energy between two different media.[2] The three small bones responsible for this process are the malleus, the incus, and the stapes, collectively known as the ear ossicles.[4][5] Impedance matching is accomplished via lever ratios and the ratio of the areas of the tympanic membrane and the stapes footplate, creating a transformer-like mechanism.[4] Furthermore, the ossicles are arranged so as to resonate at 700–800 Hz while at the same time protecting the inner ear from excessive energy.[5] A certain degree of top-down control is present at the level of the middle ear, primarily through two muscles in this anatomical region: the tensor tympani and the stapedius. These two muscles can restrain the ossicles so as to reduce the amount of energy transmitted into the inner ear in loud surroundings.[3][4]
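
As a rough illustration of this transformer-like mechanism, using commonly cited textbook values that are not given in the cited sources (an effective tympanic-membrane area of about 55 mm², a stapes-footplate area of about 3.2 mm², and an ossicular lever ratio of about 1.3), the pressure gain is approximately (55 / 3.2) × 1.3 ≈ 22, or roughly 27 dB.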

Inner ear

Flowchart of sound passage - inner ear

The cochlea of the inner ear acts as both a frequency analyzer and a nonlinear acoustic amplifier.[2] The cochlea has over 32,000 hair cells. Outer hair cells primarily provide amplification of the traveling waves induced by sound energy, while inner hair cells detect the motion of those waves and excite the (Type I) neurons of the auditory nerve.

The basal end of the cochlea, where sounds enter from the middle ear, encodes the higher end of the audible frequency range, while the apical end encodes the lower end of the range. This tonotopy plays a crucial role in hearing, as it allows for the spectral separation of sounds. A cross-section of the cochlea reveals an anatomical structure with three main chambers (scala vestibuli, scala media, and scala tympani).[5] At the apical end of the cochlea, at an opening known as the helicotrema, the scala vestibuli merges with the scala tympani. The fluid found in these two cochlear chambers is perilymph, while the scala media, or cochlear duct, is filled with endolymph.[3]
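
One widely used empirical description of this place–frequency (tonotopic) map is the Greenwood function. The sketch below uses parameter values commonly quoted for the human cochlea, which are not taken from this article's sources, so it should be read as an approximation rather than a definitive map.

```python
# Greenwood place–frequency function for the human cochlea (approximate).
# f(x) = A * (10**(a*x) - k), where x is the fractional distance along the
# basilar membrane from apex (0) to base (1). Parameter values are the ones
# commonly quoted for humans and are illustrative only.
def greenwood_frequency(x: float, A: float = 165.4, a: float = 2.1, k: float = 0.88) -> float:
    """Characteristic frequency (Hz) at fractional cochlear position x."""
    return A * (10 ** (a * x) - k)

if __name__ == "__main__":
    for x in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"position {x:.2f} -> {greenwood_frequency(x):8.0f} Hz")
    # Low frequencies map to the apex and high frequencies to the base,
    # consistent with the tonotopy described above.
```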

Transduction


Auditory hair cells


The auditory hair cells in the cochlea are at the core of the auditory system's special functionality (similar hair cells are located in the semicircular canals). Their primary function is mechanotransduction, the conversion between mechanical and neural signals. The number of auditory hair cells is surprisingly small compared with that of other sensory cells, such as the rods and cones of the visual system. Thus the loss of a relatively small number (on the order of thousands) of auditory hair cells can be devastating, whereas the loss of a far larger number of retinal cells (on the order of hundreds of thousands) is less damaging from a sensory standpoint.[6]

Cochlear hair cells are organized as inner hair cells and outer hair cells; inner and outer refer to relative position from the axis of the cochlear spiral. The inner hair cells are the primary sensory receptors and a significant amount of the sensory input to the auditory cortex occurs from these hair cells. Outer hair cells on the other hand boost the mechanical signal by using electromechanical feedback.[6]

Mechanotransduction


The apical surface of each cochlear hair cell contains a hair bundle. Each hair bundle contains approximately 300 fine projections known as stereocilia, formed by actin cytoskeletal elements.[7] The stereocilia in a hair bundle are arranged in multiple rows of different heights. In addition to the stereocilia, a true ciliary structure known as the kinocilium exists and is believed to play a role in hair cell degeneration that is caused by exposure to high frequencies.[2][7]

A stereocilium is able to bend at its point of attachment to the apical surface of the hair cell. The actin filaments that form the core of a stereocilium are highly interlinked and cross-linked with fimbrin, and are therefore stiff and inflexible at positions other than the base. When stereocilia in the tallest row are deflected in the positive-stimulus direction, the shorter rows of stereocilia are also deflected.[7] These simultaneous deflections occur because of filaments called tip links that attach the side of each taller stereocilium to the top of the shorter stereocilium in the adjacent row. When the tallest stereocilia are deflected, tension is produced in the tip links, causing the stereocilia in the other rows to deflect as well. At the lower end of each tip link are one or more mechano-electrical transduction (MET) channels, which are opened by tension in the tip links.[8] These MET channels are cation-selective transduction channels that allow potassium and calcium ions to enter the hair cell from the endolymph that bathes its apical end.
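
A minimal way to formalize this gating is a two-state Boltzmann model, in which the open probability of the MET channels rises sigmoidally with hair-bundle deflection. The sketch below uses arbitrary illustrative parameters for the operating point and sensitivity, not measured values from the cited studies.

```python
# Minimal two-state Boltzmann sketch of MET-channel gating: tip-link tension
# rises with bundle deflection x, which raises the channel open probability.
import math

def met_open_probability(x_nm: float, x0_nm: float = 20.0, s_nm: float = 10.0) -> float:
    """Open probability at bundle deflection x_nm (nanometres).
    x0_nm (half-activation point) and s_nm (sensitivity) are illustrative."""
    return 1.0 / (1.0 + math.exp(-(x_nm - x0_nm) / s_nm))

if __name__ == "__main__":
    for x in (-40, 0, 20, 60, 100):
        print(f"deflection {x:4d} nm -> P(open) = {met_open_probability(x):.2f}")
```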

The influx of cations, particularly potassium, through the open MET channels causes the membrane potential of the hair cell to depolarize. This depolarization opens voltage-gated calcium channels to allow the further influx of calcium. This results in an increase in the calcium concentration, which triggers the exocytosis of neurotransmitter vesicles at ribbon synapses at the basolateral surface of the hair cell. The release of neurotransmitter at a ribbon synapse, in turn, generates an action potential in the connected auditory-nerve fiber.[7] Hyperpolarization of the hair cell, which occurs when potassium leaves the cell, is also important, as it stops the influx of calcium and therefore stops the fusion of vesicles at the ribbon synapses. Thus, as elsewhere in the body, the transduction is dependent on the concentration and distribution of ions.[7] The perilymph that is found in the scala tympani has a low potassium concentration, whereas the endolymph found in the scala media has a high potassium concentration and an electrical potential of about 80 millivolts compared to the perilymph.[2] Mechanotransduction by stereocilia is highly sensitive and able to detect perturbations as small as fluid fluctuations of 0.3 nanometers, and can convert this mechanical stimulation into an electrical nerve impulse in about 10 microseconds.[citation needed]
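
The direction of this current follows from the ionic arrangement described above: with the endolymph at about +80 millivolts relative to the perilymph and, assuming a hair-cell resting potential of roughly −45 millivolts (a typical textbook figure not given here), the electrical driving force across the apical membrane is on the order of 80 − (−45) = 125 millivolts, which pushes potassium into the cell even though the intracellular potassium concentration is already high.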

Nerve fibers from the cochlea


There are two types of afferent neurons found in the cochlear nerve: Type I and Type II. Each type of neuron exhibits specific cell selectivity within the cochlea.[9] Two diametrically opposed theories, known as the peripheral instruction hypothesis and the cell autonomous instruction hypothesis, have been proposed to explain how each type of neuron becomes selective for a specific hair cell. The peripheral instruction hypothesis states that phenotypic differentiation between the two neuron types does not occur until the undifferentiated neurons attach to hair cells, which then dictate the differentiation pathway. The cell autonomous instruction hypothesis states that differentiation into Type I and Type II neurons occurs after the final phase of mitotic division but before innervation.[9] Both types of neuron participate in the encoding of sound for transmission to the brain.

Type I neurons


Type I neurons innervate inner hair cells. There is significantly greater convergence of this type of neuron towards the basal end than towards the apical end.[9] A radial fiber bundle acts as an intermediary between Type I neurons and inner hair cells. The ratio of innervation between Type I neurons and inner hair cells is 1:1, which results in high signal-transmission fidelity and resolution.[9]

Type II neurons


Type II neurons, on the other hand, innervate outer hair cells, and there is significantly greater convergence of this type of neuron towards the apical end than towards the basal end. An innervation ratio of 1:30–60 is seen between Type II neurons and outer hair cells, which makes these neurons well suited for electromechanical feedback.[9] Type II neurons can be physiologically manipulated to innervate inner hair cells provided that outer hair cells have been destroyed, either through mechanical damage or through chemical damage induced by drugs such as gentamicin.[9]

Brainstem and midbrain

Levels of transmission of neural auditory signals

The auditory nervous system includes many stages of information processing between the ear and cortex.

Auditory cortex


Primary auditory neurons carry action potentials from the cochlea into the transmission pathway shown in the adjacent image. Multiple relay stations act as integration and processing centers. The signals reach the first level of cortical processing at the primary auditory cortex (A1), in the superior temporal gyrus of the temporal lobe.[6] Most areas up to and including A1 are tonotopically mapped (that is, frequencies are kept in an ordered arrangement). However, A1 also participates in coding more complex and abstract aspects of auditory stimuli, such as the presence of a distinct sound or its echoes, without representing frequency content especially well.[10] Like lower regions, this region of the brain has combination-sensitive neurons that have nonlinear responses to stimuli.[6]

Recent studies in bats and other mammals have revealed that the ability to process and interpret frequency modulation primarily occurs in the superior and middle temporal gyri of the temporal lobe.[6] Brain function is lateralized in the cortex, with speech processed predominantly in the left hemisphere of the auditory cortex and environmental sounds in the right; music, with its influence on emotions, is also processed in the right hemisphere. While the reason for this localization is not well understood, lateralization here does not imply exclusivity: both hemispheres participate in the processing, but one tends to play a more significant role than the other.[6]

Recent ideas

  • A shift in encoding mechanisms has been observed as one ascends the auditory pathway: encoding changes from synchronized (phase-locked) responses in the cochlear nucleus to rate-based encoding in the inferior colliculus (see the sketch after this list).[11]
  • Despite advances in gene therapy that allow the expression of genes affecting audition, such as ATOH1, to be altered, and despite the use of viral vectors toward that end, the micro-mechanical and neural complexity surrounding the inner-ear hair cells means that artificial regeneration of hair cells in vitro remains a distant reality.[12]
  • Recent studies suggest that the auditory cortex may not be as involved in top-down processing as was previously thought. In studies conducted in primates on tasks requiring the discrimination of acoustic flutter, Lemus and colleagues found that the auditory cortex played only a sensory role and was not involved in the cognitive aspects of the task at hand.[13]
  • Because tonotopic maps are present in the auditory cortex from an early age, it had been assumed that cortical reorganization played little part in establishing them; however, these maps are subject to plasticity.[14] The cortex appears to perform processing more complex than spectral or even spectro-temporal analysis.[10]
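
To make the contrast between synchrony-based and rate-based codes concrete, the sketch below compares two simulated spike trains, one phase-locked to a tone and one Poisson, by their vector strength (a standard measure of phase locking) and by their mean firing rate. All parameters are illustrative and are not data from the cited studies.

```python
# Illustrative contrast between a synchrony (phase-locked) code and a rate code.
# Vector strength near 1 indicates tight phase locking; near 0 indicates none.
import numpy as np

rng = np.random.default_rng(0)
f = 100.0        # stimulus frequency in Hz (arbitrary)
duration = 1.0   # seconds
rate = 100.0     # mean firing rate in spikes/s (arbitrary)

# Phase-locked train: one spike per stimulus cycle at a fixed phase, with jitter.
cycle_starts = np.arange(0.0, duration, 1.0 / f)
locked = cycle_starts + 0.002 + rng.normal(0.0, 0.0002, cycle_starts.size)

# Rate-coded train: Poisson spikes at the same mean rate, with no phase preference.
poisson = np.sort(rng.uniform(0.0, duration, rng.poisson(rate * duration)))

def vector_strength(spike_times: np.ndarray, freq: float) -> float:
    """Mean resultant length of spike phases relative to a sinusoid at freq."""
    phases = 2.0 * np.pi * freq * spike_times
    return float(np.abs(np.mean(np.exp(1j * phases))))

for name, spikes in (("phase-locked", locked), ("Poisson (rate)", poisson)):
    print(f"{name:15s}: rate = {spikes.size / duration:5.1f} sp/s, "
          f"vector strength = {vector_strength(spikes, f):.2f}")
```

Both trains have similar mean rates, but only the phase-locked train carries timing information; the shift described above corresponds to later stages relying on the rate rather than on spike timing.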

References

  1. ^ Leonard, Matthew K.; Gwilliams, Laura; Sellers, Kristin K.; Chung, Jason E.; Xu, Duo; Mischler, Gavin; Mesgarani, Nima; Welkenhuysen, Marleen; Dutta, Barundeb; Chang, Edward F. (2024-02-15). "Large-scale single-neuron speech sound encoding across the depth of human cortex". Nature. 626 (7999): 593–602. doi:10.1038/s41586-023-06839-2. ISSN 0028-0836. PMC 10866713.
  2. ^ a b c d e f Hudspeth, AJ. (Oct 1989). "How the ear's works work". Nature. 341 (6241): 397–404. Bibcode:1989Natur.341..397H. doi:10.1038/341397a0. PMID 2677742. S2CID 33117543.
  3. ^ a b c Hudspeth, AJ. (2001). "How the ear's works work: mechanoelectrical transduction and amplification by hair cells of the internal ear". Harvey Lect. 97: 41–54. PMID 14562516.
  4. ^ a b c Hudde, H.; Weistenhofer, C. (2006). "Key features of the human middle ear". ORL J Otorhinolaryngol Relat Spec. 68 (6): 324–328. doi:10.1159/000095274. PMID 17065824. S2CID 42550955.
  5. ^ a b c Hudspeth, AJ.; Konishi, M. (Oct 2000). "Auditory neuroscience: development, transduction, and integration". Proceedings of the National Academy of Sciences of the United States of America. 97 (22): 11690–1. doi:10.1073/pnas.97.22.11690. PMC 34336. PMID 11050196.
  6. ^ a b c d e f Kaas, JH.; Hackett, TA.; Tramo, MJ. (Apr 1999). "Auditory processing in primate cerebral cortex" (PDF). Current Opinion in Neurobiology. 9 (2): 164–170. doi:10.1016/S0959-4388(99)80022-1. PMID 10322185. S2CID 22984374.
  7. ^ a b c d e Fettiplace, R.; Hackney, CM. (Jan 2006). "The sensory and motor roles of auditory hair cells". Nat Rev Neurosci. 7 (1): 19–29. doi:10.1038/nrn1828. PMID 16371947. S2CID 10155096.
  8. ^ Beurg, M.; Fettiplace, R.; Nam, JH.; Ricci, AJ. (May 2009). "Localization of inner hair cell mechanotransducer channels using high-speed calcium imaging". Nature Neuroscience. 12 (5): 553–558. doi:10.1038/nn.2295. PMC 2712647. PMID 19330002.
  9. ^ a b c d e f Rubel, EW.; Fritzsch, B. (2002). "Auditory system development: primary auditory neurons and their targets". Annual Review of Neuroscience. 25: 51–101. doi:10.1146/annurev.neuro.25.112701.142849. PMID 12052904.
  10. ^ a b Chechik, Gal; Nelken, Israel (2012). "Auditory abstraction from spectro-temporal features to coding auditory entities". Proceedings of the National Academy of Sciences of the United States of America. 109 (44): 18968–73. Bibcode:2012PNAS..10918968C. doi:10.1073/pnas.1111242109. PMC 3503225. PMID 23112145.
  11. ^ Frisina, RD. (Aug 2001). "Subcortical neural coding mechanisms for auditory temporal processing". Hearing Research. 158 (1–2): 1–27. doi:10.1016/S0378-5955(01)00296-9. PMID 11506933. S2CID 36727875.
  12. ^ Brigande, JV.; Heller, S. (Jun 2009). "Quo vadis, hair cell regeneration?". Nature Neuroscience. 12 (6): 679–685. doi:10.1038/nn.2311. PMC 2875075. PMID 19471265.
  13. ^ Lemus, L.; Hernández, A.; Romo, R. (Jun 2009). "Neural codes for perceptual discrimination of acoustic flutter in the primate auditory cortex". Proceedings of the National Academy of Sciences of the United States of America. 106 (23): 9471–9476. Bibcode:2009PNAS..106.9471L. doi:10.1073/pnas.0904066106. PMC 2684844. PMID 19458263.
  14. ^ Kandler, K.; Clause, A.; Noh, J. (Jun 2009). "Tonotopic reorganization of developing auditory brainstem circuits". Nature Neuroscience. 12 (6): 711–7. doi:10.1038/nn.2332. PMC 2780022. PMID 19471270.