When sound waves enter the right ear, which hemisphere receives the primary information?

The Sense of Hearing

John E. Hall PhD, in Guyton and Hall Textbook of Medical Physiology, 2021

Sound Frequency Perception in the Primary Auditory Cortex

At least six tonotopic maps have been described in the primary auditory cortex and auditory association areas. In each of these maps, high-frequency sounds excite neurons at one end of the map, whereas low-frequency sounds excite neurons at the opposite end. In most maps, the low-frequency sounds are located anteriorly, as shown in Figure 53-10, and the high-frequency sounds are located posteriorly. This setup is not true for all the maps.

Why does the auditory cortex have so many different tonotopic maps? The answer, presumably, is that each of the separate areas dissects out some specific feature of the sounds. For example, one of the large maps in the primary auditory cortex almost certainly discriminates the sound frequencies and gives the person the psychic sensation of sound pitches. Another map is probably used to detect the direction from which the sound comes. Other auditory cortex areas detect special qualities, such as the sudden onset of sounds, or perhaps special modulations, such as noise versus pure frequency sounds.

The frequency range to which each individual neuron in the auditory cortex responds is much narrower than that in the cochlear and brain stem relay nuclei. Referring to Figure 53-5B, note that the basilar membrane near the base of the cochlea is stimulated by sounds of all frequencies and, in the cochlear nuclei, this same breadth of sound representation is found. Yet, by the time the excitation has reached the cerebral cortex, most sound-responsive neurons respond only to a narrow range of frequencies rather than to a broad range. Therefore, somewhere along the pathway, processing mechanisms “sharpen” the frequency response. This sharpening effect is believed to be caused mainly by lateral inhibition, discussed in Chapter 47 in relation to mechanisms for transmitting information in nerves. That is, stimulation of the cochlea at one frequency inhibits sound frequencies on both sides of this primary frequency; this inhibition is caused by collateral fibers angling off the primary signal pathway and exerting inhibitory influences on adjacent pathways. This same effect is important in sharpening patterns of somesthetic images, visual images, and other types of sensations.
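
As a rough numerical illustration of this sharpening principle (a toy sketch only, not a model of the actual auditory pathway; all values below are invented), a broad excitation profile across a bank of tonotopic channels becomes narrower after filtering with a center-surround kernel in which each channel inhibits its neighbors:

    import numpy as np

    # Toy illustration of sharpening by lateral inhibition (all values invented).
    channels = np.arange(200)                                  # tonotopic axis, low -> high frequency
    broad = np.exp(-(channels - 100) ** 2 / (2 * 10.0 ** 2))   # broad response, as in the cochlear nuclei

    # Center-surround kernel: narrow excitation minus wider inhibition from collaterals
    offsets = np.arange(-15, 16)
    center = np.exp(-offsets ** 2 / (2 * 1.5 ** 2))
    surround = np.exp(-offsets ** 2 / (2 * 5.0 ** 2))
    kernel = center / center.sum() - 0.8 * surround / surround.sum()

    sharpened = np.clip(np.convolve(broad, kernel, mode="same"), 0, None)  # firing rates cannot be negative

    def half_width(profile):
        # width (in channels) of the region responding at >= half the peak rate
        idx = np.where(profile >= profile.max() / 2)[0]
        return idx[-1] - idx[0]

    print("half-width before inhibition:", half_width(broad))      # about 22 channels
    print("half-width after inhibition:", half_width(sharpened))   # narrower, about 18 channels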

Many of the neurons in the auditory cortex, especially in the auditory association cortex, do not respond only to specific sound frequencies in the ear. It is believed that these neurons “associate” different sound frequencies with one another or associate sound information with information from other sensory areas of the cortex. Indeed, the parietal portion of the auditory association cortex partly overlaps somatosensory area II, which could provide an opportunity for the association of auditory information with somatosensory information.

Auditory Perception

D.H. Ashmead, in Encyclopedia of Infant and Early Childhood Development, 2008

This article summarizes the development of auditory perception in infancy and early childhood, with a distinction between basic processes that are set substantially by properties of the ear and auditory nerve, and higher-order processes that reflect integrative listening. The basic processes include the ability to resolve changes in frequency, time, and intensity of acoustic signals. For the most part these abilities are quite good by late in the first year after birth, although there is gradual improvement during early childhood. Higher-order processes include the experiences of loudness, pitch, sound source segregation, and spatial localization. Many of these aspects of the listening experience have proved amenable to being measured in infants and young children.


URL: https://www.sciencedirect.com/science/article/pii/B9780123708779000153

Cortical Neuroplasticity in Hearing Loss

Paul W. Flint MD, FACS, in Cummings Otolaryngology: Head and Neck Surgery, 2021

Clinical Significance of Cross-Modal Neuroplasticity in Pediatric Hearing Loss

Not all children who receive audiological intervention exhibit good speech perception outcomes following CI. In hearing-impaired children, for example, it has been estimated that less than 50% of the variability in speech and language outcomes can be accounted for by demographic factors alone (e.g., age of implantation),113 emphasizing the large degree of heterogeneity and individual differences likely influencing performance outcomes after audiological intervention. Why do some children perform well with their devices, while other children continue to struggle to make age-appropriate gains in auditory performance? Individual differences in neuroplasticity, including cross-modal reorganization, could contribute to individual differences in performance outcomes. Fig. 132.7B highlights the neurophysiological profile of six individual children in this way.37 Fig. 132.7B depicts EEG source localization using current density source reconstructions (CDRs) for the P2 CVEP component in a normal hearing child (age 10 years) (see Fig. 132.7C1), a pediatric CI user (age 8 years) with excellent auditory speech perception outcomes (96% on a clinical test of auditory speech perception in quiet, the Lexical Neighborhood Test [LNT]) (see Fig. 132.7C2), and a pediatric CI user (age 6 years) with average performance outcomes (67% on a clinical test of auditory speech perception in quiet, the Multisyllabic Lexical Neighborhood Test [MLNT]) (see Fig. 132.7C3), in response to a visual motion stimulus, with responses recorded using 128-channel high-density EEG. While the normal hearing child and the CI recipient with excellent speech perception exhibit the expected activation of visual cortical regions typically associated with visual motion processing (e.g., occipital gyrus, fusiform gyrus, lingual gyrus) (see Fig. 132.7C1 and 132.7C2), the CI recipient with average performance exhibits additional recruitment of auditory cortical regions (e.g., middle and superior temporal gyrus) (see Fig. 132.7C3), indicative of cross-modal neuroplasticity by vision. Fig. 132.7B also depicts EEG source localization using CDRs for the N70 CSSEP in three separate children in response to a 250 Hz vibrotactile stimulus applied to the right index finger, recorded using high-density 128-channel EEG, as reported in Sharma et al. (2015)37: a normal hearing child (age 7 years) (see Fig. 132.7C4), a pediatric CI user (age 13 years) exhibiting excellent auditory speech perception (94% on a clinical test of auditory speech perception in quiet, the Consonant Nucleus Consonant [CNC] test) (see Fig. 132.7C5), and a pediatric CI user (age 15 years) exhibiting average performance on an auditory unimodal test of speech perception (76% on the CNC test) (see Fig. 132.7C6). While the normal hearing child and the pediatric CI recipient with excellent speech perception show the expected activation in cortical regions associated with somatosensory processing (e.g., post-central gyrus) (see Figs. 132.7C4 and 132.7C5), the pediatric CI recipient with average auditory speech perception exhibits additional recruitment of temporal processing regions (e.g., superior temporal gyrus, transverse temporal gyrus) (see Fig. 132.7C6), indicative of cross-modal neuroplasticity by the somatosensory modality. While these data stem from single subjects and should be interpreted cautiously, the results provide some preliminary evidence that underlying neurophysiological changes in cross-modal plasticity may relate to functional outcomes following audiological intervention.
With a better understanding of cross-modal plasticity in the context of auditory deprivation, and of the potential for audiological intervention to reverse these changes in neuroplasticity, brain-based markers may help clinicians individualize intervention, rehabilitation, and training programs for patients with hearing loss who receive hearing aids or CIs.

Perception

I.E. Nagel, ... U. Lindenberger, in Encyclopedia of Gerontology (Second Edition), 2007

Hearing

Auditory perception also declines with advancing age. Hearing losses become noticeable at around age 30 for men and age 50 for women, possibly as a result of differential exposure to environmental noise, such as noise associated with the operation of heavy equipment. Losses are most pronounced for high-frequency tones and accelerate with age. Similar to visual perception, age-related changes in the auditory system occur at all levels of processing. In the cochlea, loss of basilar membrane hair cells and reduced neural transmission are observed. Furthermore, the cochlear wall becomes thinner. The number of neurons in the auditory nerve decreases, along with structural, functional, and chemical alterations of early auditory processing pathways. In part as a consequence of these sensory changes, the representation of sounds in the auditory cortex differs markedly by age.

Many aspects of hearing are affected by age. Hearing thresholds show a marked increase with age. Reduced hearing sensitivity at high frequencies is a good predictor of general hearing loss. Hearing loss affects psychoacoustic dimensions such as frequency discrimination (e.g., deciding whether two tones have the same pitch or not), intensity discrimination (e.g., discriminating loudness), and temporal processing. One way to measure temporal aspects of auditory processing is to assess the ability to detect gaps in a stream of auditory stimuli. Similar to critical flicker fusion in the visual modality, older adults are unable to detect short gaps between auditory stimuli that young adults notice easily, especially when the stimuli are complex. Another aspect of temporal auditory processing is duration discrimination, the ability to notice differences between the lengths of two tones. Again, this ability is compromised in old age, especially with complex auditory stimuli. Finally, temporal processing also includes the ability to encode and represent the order of a tone sequence. Older adults show more difficulty than young adults in making such order discriminations. Another important example is spatial hearing, that is, sound localization based on binaural cues (see Hearing).
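
The gap-detection stimuli mentioned above are straightforward to construct in principle; the sketch below (Python with NumPy; the durations and ramps are illustrative choices, not values from the studies discussed, and real experiments control many more details) embeds a brief silent gap in a broadband noise burst:

    import numpy as np

    fs = 44100                                  # sampling rate in Hz
    noise = np.random.randn(int(fs * 0.5))      # 500 ms of broadband noise
    gap_len = int(fs * 0.005)                   # 5 ms silent gap (illustrative value)
    start = len(noise) // 2 - gap_len // 2
    noise[start:start + gap_len] = 0.0          # insert the silent gap

    # Brief raised-cosine ramps smooth the gap edges so the gap is not marked
    # by click-like transients.
    ramp_len = int(fs * 0.001)
    ramp = 0.5 * (1 + np.cos(np.linspace(0, np.pi, ramp_len)))  # 1 -> 0
    noise[start - ramp_len:start] *= ramp
    noise[start + gap_len:start + gap_len + ramp_len] *= ramp[::-1]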

In concert with other changes in auditory perception, age-based changes in temporal aspects of auditory processing have profound effects on speech comprehension. The increasing inability of elderly people to understand speech is by far the most important everyday implication of age-related auditory decline. Whereas speech perception is at least mildly impaired in half of the 70- to 80-year-olds, two-thirds of 80- to 90-year-olds are moderately to severely impaired, and two-thirds of individuals older than 90 years of age have moderate to severe problems in understanding speech. Difficulties in speech perception are exaggerated when background noise is high, when speech is speeded up, when many people take part in the conversation, or when the topic of conversation is complex. Again, declines in speech perception are best understood as an interaction of sensory changes (e.g., basilar membrane hair cell loss) and cognitive changes (e.g., decreasing working memory capacity). Speech is important for maintaining social contact with others. Hence, problems in understanding speech caused by declines of the auditory system can have far-reaching effects on participation in social life and psychological well-being (see Social Cognition). In terms of remediation in applied settings, older people's deficits in speech perception can be attenuated by providing contextual cues (e.g., explicitly naming and introducing the topic to be talked about), lowering the speed of speech production, and using well-adjusted hearing aids.


URL: https://www.sciencedirect.com/science/article/pii/B0123708702001487

Audition

David Poeppel, Xiangbin Teng, in The Senses: A Comprehensive Reference (Second Edition), 2020

2.06.2 Introduction

Auditory perception arises from sensing vibrations that exist ubiquitously in the physical world. The auditory system senses acoustic vibrations within a certain frequency range, which varies across species (Fay, 2012). For humans, acoustic vibrations between 20 Hz and 20,000 Hz (as per classical textbooks) and ∼80 Hz and ∼8000 Hz (as per psychophysics) give rise to auditory sensation. The auditory system also extracts ecologically essential information from the amplitude and frequency modulations riding on top of the first-order acoustic vibrations (Nelken et al., 1999; Lewicki, 2002; Singh and Theunissen, 2003). Temporal dependencies between vibrations and modulations, short-term or long-term, lead to sounds of complex and hierarchically structured patterns, such as speech and music (Ding et al., 2015; Ding et al., 2017; Doelling and Poeppel, 2015). Statistics summarizing long-range properties of acoustic vibrations serve as bases of perception of acoustic textures (McDermott and Simoncelli, 2011). The auditory system is equipped with an essential mechanism tailored to code acoustic vibrations and to extract the relevant temporal regularities from the acoustic environment, namely auditory entrainment.
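
The distinction between a fast acoustic vibration and the slower modulations riding on it can be made concrete with a short sketch (Python with NumPy/SciPy; the carrier and modulation frequencies are arbitrary illustrative values): a 4 Hz amplitude modulation is imposed on a 1000 Hz carrier and then recovered as the signal envelope.

    import numpy as np
    from scipy.signal import hilbert

    fs = 16000
    t = np.arange(0, 1.0, 1 / fs)
    carrier = np.sin(2 * np.pi * 1000 * t)            # first-order acoustic vibration
    envelope = 1 + 0.8 * np.sin(2 * np.pi * 4 * t)    # slow amplitude modulation riding on the carrier
    sound = envelope * carrier

    recovered = np.abs(hilbert(sound))                # magnitude of the analytic signal ~ envelope
    print(np.corrcoef(envelope, recovered)[0, 1])     # close to 1 for this simple signal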

In this chapter, we focus on auditory entrainment because it has been increasingly understood – and vigorously debated – as a fundamental mechanism of temporal coding in the human auditory system, especially underpinning high-level auditory processes such as speech and music perception (Doelling and Poeppel, 2015; Giraud and Poeppel, 2012; Luo and Poeppel, 2007). We will first bring to readers' attention some broader properties, so that entrainment can be defined and carefully compared to other similar but distinct phenomena; then we turn to cortical auditory entrainment in human listeners and elaborate on the putative functional roles.

Natural sounds contain critical perceptual information over multiple timescales. Auditory entrainment is therefore required to function in a multi-scale manner, so that auditory information over a range of timescales can be concurrently extracted (Giraud and Poeppel, 2012; Poeppel, 2003; Teng and Poeppel, 2019; Teng et al., 2016; Teng et al., 2017). Our recent work on multi-scale auditory processing has explored cortical auditory entrainment across timescales and has revealed the discrete and concurrent nature of auditory entrainment (Boemio et al., 2005; Luo and Poeppel, 2012; Teng et al., 2016, 2017; Teng and Poeppel, 2019; Wang et al., 2012), which has enriched the understanding of how entrainment facilitates processing speech, language, and music. At the end, we introduce this work and hope to extend the horizon of research on auditory entrainment.

Many earlier reviews have summarized the research on neural and auditory entrainment and provide comprehensive overviews from various perspectives (Alexandrou et al., 2018; Ding and Simon, 2014; Lakatos et al., 2019; Meyer, 2018; Meyer et al., 2019; Obleser and Kayser, 2019; Poeppel, 2014; Wilson and Cook, 2016). We will not recapitulate those results here but encourage readers to read these helpful papers. One issue that has not been investigated as much and has not yet been a main focus in the literature concerns the functional role of auditory entrainment, or neural entrainment in general. In this chapter, we provide a more systemic view (Marr, 1982) on entrainment, with the major goal of rethinking entrainment, understanding the functions reflected in entrainment, and considering the computations involved, so that our perspective can help chart future directions of research on this fundamentally important mechanism of hearing.


URL: https://www.sciencedirect.com/science/article/pii/B978012805408600018X

Movement, perception and cognitive development

Jane E Carreiro DO, in An Osteopathic Approach to Children (Second Edition), 2009

AUDITORY PERCEPTUAL DEVELOPMENT

Auditory perception is dependent on three things: the appropriate transduction of sound waves into electrical signals, filtering out of background noise, and the reconstruction of complex sound patterns into recognizable bytes. Small changes in air pressure move the tympanic membrane and its attached malleus, which shifts the incus and stapes. Movement of the stapes footplate against the oval window of the cochlea affects the fluid within the scala vestibuli and indirectly the scala tympani and scala media (Fig. 10.3). These changes affect the basilar membrane of the cochlea (Hudspeth 2000). Bony or connective tissue disruption within the external auditory canal or middle ear will impede this process and lead to conductive hearing loss. The basilar membrane is a small connective tissue structure, which varies in width and thickness along its 33 mm length. Because of this, various areas will be affected differently, based on the frequency, amplitude and intensity of the fluid wave (Hudspeth 2000). Depending on how the basilar membrane moves, the hair cells will be driven into excitatory, inhibitory or neutral positions. Therefore, through the action of the hair cell, the mechanical stimulus of the wave is transduced into an electrical signal. This signal is sent via the cochlear nerve to the cochlear nucleus and into the central auditory pathways to the cortex. Along this route, the signals are processed and analyzed (Hudspeth 2000). The process by which these electrical signals are translated into the symbolic context of language or vice versa involves many areas of the cortex, and is incompletely understood and beyond the scope of this chapter. However, it is important to recognize that the processing of language involves many different areas of the cortex, including areas concerned with integrating visual or somatosensory information (Dronkers et al 2000). Consequently, abnormalities of language processing, such as dyslexia, may result from disturbances in the integration of visual or somatosensory information, or from distorted input.

At birth, the auditory system is functioning; however, the cerebral cortex has not reached a state of maturity sufficient to handle auditory sensory information for perception. Language is the symbol system for the exchange and storage of information. The development of language is dependent on afferent neural input (hearing, vision), intact CNS function, and neural output to functional vocal structures (Coplan & Gleason 1990). Normal hearing spans frequencies of 250–16,000 Hz (cycles per second) and amplitudes of 0–120 dB HL (decibels hearing level).

A review of the literature shows that between 4% and 20% of school-age children have hearing loss. Hearing loss may be unilateral or bilateral, and conductive or sensorineural. Conductive hearing loss results from dysfunction or interference in the transmission of sound to the cochlea, vestibule and semicircular canals. Air conduction is usually impaired. The most common causes include atresia of the canal, ossicular malformation, tympanic membrane abnormality, blockage of the canal by a foreign body or cerumen impaction, and effusion in the middle ear. Conductive hearing loss affects all frequencies; however, bone conduction is usually preserved. Sensorineural hearing loss occurs when dysfunction or impairment of the cochlear hair cells or auditory nerve affects stimuli received through both air and bone conduction. Lower-frequency hearing may be less affected; however, one must remember that much of the speech signal lies at higher frequencies. Common causes of sensorineural hearing loss include hypoxia, intracranial hemorrhage, meningitis, hyperbilirubinemia, measles, mumps and, rarely, chicken pox.

Masking is the process by which the brain filters out background noise based on phase differences. Sound waves reach the two ears at slightly different times, and this difference is used by the brain to screen out unwanted sound. Binaural hearing is required for masking. Children with unilateral deafness may have difficulty isolating a sound, such as the teacher’s voice, in a noisy environment like the first-grade classroom. This is especially true if the background noise occurs within the same frequencies as the sound to which the child is trying to attend. Partial hearing loss affects high-frequency, low-amplitude consonants such as /s/, /sh/, /f/ and /th/, while lower-frequency sounds such as /r/, /m/ and /v/ are unaffected. Children with a partial hearing loss may not be diagnosed until they enter school and exhibit an apparent learning disability.
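
The interaural timing cue described above can be illustrated with a small sketch (Python with NumPy; the delay and signals are invented, and the brain's actual computation is of course not a literal cross-correlation): the right-ear signal is a delayed copy of the left-ear signal, and the delay is recovered by finding the lag with the highest correlation.

    import numpy as np

    fs = 44100
    delay = 20                                   # samples, ~0.45 ms, within the natural range of interaural delays
    source = np.random.randn(fs)                 # 1 s of broadband noise
    left = source
    right = np.concatenate([np.zeros(delay), source[:-delay]])   # same sound, arriving slightly later

    lags = np.arange(-40, 41)
    corr = [np.dot(left[40:-40], right[40 + lag:len(right) - 40 + lag]) for lag in lags]
    best_lag = lags[int(np.argmax(corr))]
    print("estimated interaural time difference:", 1000 * best_lag / fs, "ms")   # ~0.45 ms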

Otitis media with effusion (OME) usually results in 10–50 dB hearing loss in acute cases; chronic otitis media results in 50–65 dB hearing loss, which includes most speech sounds. This hearing loss is usually temporary. However, during the first year of life, children with 130 days of OME will score one standard deviation lower on language skills than children with less than 30 days of OME.

Language disorders represent a dysfunction of cortical processes specifically involved with receptive and expressive function. A language disorder may be phonetic, such as deviant sound production in which the interpretation of sound is dysfunctional and children speak words as they sound to them. Another language disorder involves syntax, i.e. word order and grammar. The interpretation of word meaning and word relationships represents a disorder of semantics, while disorders of pragmatics affect the social appropriateness of language. Language disorders may involve one or more of these characteristics as an expressive or receptive function. Depending on the character of the disorder, sign language may be beneficial as a treatment and diagnostic modality. Often, language disorders are assumed to result from a problem with hearing. But, as we have seen, multiple sensory systems are involved with cognitive development. Think back to the example of the child who is unable to differentiate between the letters ‘d’, ‘b’, and ‘p’ because of a motor impairment. What will happen when that child is shown the letter ‘d’ and told the sound ‘dah’, then the letter ‘b’ and told the sound ‘bah’, and so on? How will the child discern the relationships between these letters and their sounds when he cannot consistently recognize the symbol for the sound?

Speech patterns are based on fluency, the rate and rhythm of the flow of speech. Very young children begin to mimic the speech patterns of their native language with early babbling. Fluency disorders (dysfluency) occur when there is impaired rate or rhythm of the flow of speech. Physiological dysfluency peaks between 2 and 4 years of age and then resolves. It is usually represented as phrase or whole-word repetition, such as ‘can I–can I’ or ‘can–can’. A more abnormal form of dysfluency may also occur as part-word or initial-sound repetition: ‘Wwwwwwwwwhy?’ or ‘wuh-wuh-wuh why?’ Alfred Tomatis reported that stuttering tends to be related to the length of the longest syllable of the spoken language. That is, the duration of the sound which the child stutters on is the same as the longest syllable. Tomatis suggested that the child is somehow delayed in processing what he is hearing himself speak, and proposed ‘abnormal cerebral representation of language and/or generalized abnormality of interhemispheric communication as the basis for stuttering’ (Tomatis 1991). He reported that by using earphones to change the length of the stuttered sound, the child would revert to a smooth, uninterrupted speech pattern. Osteopaths have anecdotally found an association between mild head trauma and the development of stuttering (chart review and practitioner survey). The question of whether stuttering is a language dysfunction or a vocal dysfunction is an interesting one. Vocal disorders are not disorders of language or perception, but represent a dysfunction of the mechanical component of speech.

Receptive language skills precede expressive skills. Very early in life, children can demonstrate receptive language skills. This may manifest as looking for their bottle when a parent verbally indicates that it is time for feeding, or glancing at the family pet when its name is mentioned. Most children demonstrate the ability to point to an object before 10 months of age, although they often cannot name it until after the first year. Children will respond to the word ‘no’ before they can say it (often this ability is inexplicably lost between the ages of 2 and 18, but that is another story). The babbling speech of infants often contains the inflections found in the language to which they are exposed and probably represents the first attempts at mimicry. Tomatis (1991) reports that the babbling of infants also tends to fall within the frequency range of the home language. Children raised in multilingual homes are frequently slightly delayed in expressive language skills, although receptive skills are appropriate for age. As might be expected, once speech develops, these children seem to have a proficiency at learning new languages. In general, individuals appear to have greater fluency in languages that have frequency ranges which fall within the range of the native tongue.

Much of what is known about language was learned by studying people with language disorders secondary to cortical injury. Our understanding of the processes contributing to the formation, comprehension and expression of language is still vague. Localization of function is the phrase used to describe the principle that particular areas of the brain are involved with specific processes. For example, seeing a word, hearing a word, thinking of a word and speaking a word all involve different areas of the brain (Kandel et al 2000). Furthermore, the location of cognitive processes involved in each of these tasks is different from the sensory areas involved with language. For example, understanding the written word c–a–t does not occur in the visual cortex, but the visual cortex is needed to see the word. Language is a symbolic representation of a concept – a cat, a hug, to sleep. These are all concepts, and language is the means by which they are communicated. Whether spoken, written, drawn or signed, the message symbolizes an idea. We can translate our ideas into any of these forms of language and we can interpret each of these forms into an idea. But each of these tasks occurs in a different area of the brain. Areas of association cortex in the frontal, parietal, temporal and occipital lobes of the dominant hemisphere are involved with language function (Dronkers et al 2000). The dominant hemisphere is the left in most people. The right or non-dominant hemisphere is concerned with the inflection, timing and rhythm of expressive language, which can be thought of as the emotional context.


URL: https://www.sciencedirect.com/science/article/pii/B9780443067389000101

Marmosets in Auditory Research

Steven J. Eliades, Joji Tsunada, in The Common Marmoset in Captivity and Biomedical Research, 2019

Behavioral Methods: Psychoacoustics

Understanding auditory perception and cognition requires behavioral testing using psychoacoustic methods to determine marmosets' abilities to detect or discriminate different sounds. Although early sound perception threshold measurements used a conditioned shock avoidance task [3], such aversive conditioning is now considered ethically questionable in nonhuman primates. More recent endeavors have used operant conditioning techniques adapted from other animal species [4,80]. These methods use a conditioned Go/No-Go auditory task wherein the marmoset licks from a spout to receive a reward when it detects a sound. Marmosets rapidly learn this task, in less than 10 sessions for most animals tested, and have a low false-positive lick rate. This technique has also been adapted to test auditory discrimination using a change-detection task, wherein the animal has to lick when hearing a change in stimulus, and not lick when just hearing the reference stimulus repeated [6]. Recent work has also developed an operant conditioning apparatus for freely moving marmosets, using multiple touch sensors or bars rather than a lick response, allowing two (or more) choices rather than a purely Go/No-Go response task [81]. This approach has the potential to allow more complicated discrimination tasks, such as two-alternative, forced-choice methods. On the other hand, the bar press approach may not be as easily paired with neural recordings because increased movement may reduce the stability of single-unit recording methods, particularly for small-bodied animals like marmosets. Further use of implanted electrode techniques, discussed below, may potentially overcome this limitation in the future.
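
Sessions from such Go/No-Go detection tasks are commonly scored with signal detection measures; the sketch below (Python with SciPy; the trial outcomes are invented, and this is not the specific analysis of the cited studies) computes the hit rate, the false-positive lick rate, and the sensitivity index d':

    import numpy as np
    from scipy.stats import norm

    go_licks = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 1])     # 1 = licked on a sound (Go) trial
    nogo_licks = np.array([0, 0, 1, 0, 0, 0, 0, 0, 0, 0])   # 1 = licked on a catch (No-Go) trial

    def clipped_rate(responses):
        # keep rates away from 0 and 1 so the z-transform stays finite
        n = len(responses)
        return float(np.clip(responses.mean(), 0.5 / n, 1 - 0.5 / n))

    hit_rate = clipped_rate(go_licks)
    false_alarm_rate = clipped_rate(nogo_licks)
    d_prime = norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)
    print(f"hits = {hit_rate:.2f}, false alarms = {false_alarm_rate:.2f}, d' = {d_prime:.2f}")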


URL: https://www.sciencedirect.com/science/article/pii/B978012811829000025X

Brain Lateralization across the Life Span

Merrill Hiscock, in Handbook of Neurolinguistics, 1998

24-1.3.3 Perception

Studies of auditory perception provide some of the most convincing evidence of early functional asymmetries. This evidence has been summarized by Best (1988). An initial dichotic listening study with infants between the ages of 22 and 140 days (Entus, 1977) yielded a right-ear advantage (REA) for detection of transitions between consonants (e.g., /ma/ to /da/), and a left-ear advantage (LEA) for transitions in musical timbre (e.g., cello to bassoon). Detection of a transition at either ear was indicated by an event-related dishabituation of the infant’s nonnutritive sucking. Best and her colleagues (see Best, 1988), using cardiac deceleration to indicate that a stimulus transition had been detected, confirmed both the REA for speech syllables and the LEA for musical stimuli in infants 3 months of age and older. Although an LEA for musical stimuli has been found in 2-month-old infants, a corresponding REA for speech perception has not been reported in infants below the age of 3 months.
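
When ear-specific accuracy scores are available (as in adult dichotic listening studies, rather than the habituation measures used with infants), ear advantages are often summarized with a simple laterality index; a minimal sketch with invented scores:

    # Laterality index: positive values indicate a right-ear advantage (REA),
    # negative values a left-ear advantage (LEA). Scores below are invented.
    def laterality_index(right_correct, left_correct):
        return (right_correct - left_correct) / (right_correct + left_correct)

    print(laterality_index(34, 22))   # ~0.21, a right-ear advantage
    print(laterality_index(18, 27))   # -0.20, a left-ear advantage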

A study based on a different behavioral method suggests that a speech-related brain asymmetry is present even in short-gestation infants who are not yet as mature as the typical newborn. Using limb movements as a measure of immaturity, Segalowitz and Chapman (1980) found that repeated exposure to speech, but not to music, caused a disproportionate reduction of right-arm tremor in infants with an average gestational age of 36 weeks, thus implying that speech affected the left side of the brain more than the right side.

Other investigators have found that neonates turn more often to the right than to the left when exposed to speech sounds (Hammer, 1977; Young & Gagnon, 1990). MacKain, Studdert-Kennedy, Spieker, and Stern (1983) reported that 6-month-old infants detect the synchronization of visual (articulatory) and aural components of adults’ speech, but only when the adult is positioned to the infant’s right. These findings suggest that orientation is biased to the right side of space in the presence of linguistic stimuli, presumably because the left side of the infant’s brain is more responsive than the right side to speech-specific activation. This phasic asymmetry appears to modulate the tonic left-hemisphere prepotency that biases orientation to the right.


URL: https://www.sciencedirect.com/science/article/pii/B9780126660555500289

The Human Auditory System

Ruth Litovsky, in Handbook of Clinical Neurology, 2015

Loudness

Another aspect of auditory perception that relates to sound level or intensity is that of perceived loudness of a stimulus. Whereas discrimination is objective, in that we can measure whether a listener correctly reported that a stimulus did or did not change, loudness perception is subjective. Any subjective perception is difficult to measure in listeners, including not only infants and children, but adults as well. Loudness is an attribute of a sound that places perception on a scale ranging from inaudible/quiet to loud/uncomfortable, in response to change in sound pressure level (intensity). Because there is no correct answer, there is some challenge in knowing when and how to reinforce a child and how to train the child to respond. Nonetheless, it appears that, while some children have difficulty learning the task, others can perform similarly to adults (Serpanos and Gravel, 2000). Because loudness growth is abnormal in people with hearing loss, such that loudness grows rapidly over a small range of intensities, emphasis on understanding the importance of loudness perception maturation may come from the audiologic literature on hearing-impaired children. Evaluations of hearing-aid fittings and perceived loudness of speech signals after amplification become clinically crucial, so that the speech signal is heard, understood, and comfortably presented (e.g., Scollie et al., 2010; Ching and Dillon, 2013). Future work on basic psychophysics may be important in order to capture the perception of young infants and children with typical hearing, and benchmark their abilities, so that expectations are appropriate for children who are fitted with hearing aids.
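
For listeners with typical hearing, a standard textbook approximation relates loudness in sones to loudness level in phons, with loudness roughly doubling for every 10-phon increase. The sketch below only illustrates that loudness is a perceptual scale distinct from the physical level; this relation does not hold for ears with abnormal loudness growth (recruitment):

    def sones(phons):
        # standard sone scale (approximation, valid above about 40 phons)
        return 2 ** ((phons - 40) / 10.0)

    for level in (40, 50, 60, 70):
        print(level, "phons ->", sones(level), "sones")   # 1, 2, 4, 8 sones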


URL: https://www.sciencedirect.com/science/article/pii/B9780444626301000032

Handbook of Mammalian Vocalization

Günter Ehret, Simone Kurt, in Handbook of Behavioral Neuroscience, 2010

II.A Audiogram

The basis of auditory perception is sensation, i.e., sounds must be audible to the individual. By definition, sounds are audible if their frequency components are within the animal's audiogram, which is the curve illustrating the minimal sound pressure levels of just audible tones, as a function of the tone frequency (Fig. 1). A compilation of audiograms of many species can be found in Fay's psychophysics databook (Fay, 1988). Audiograms describe the frequency range of hearing of a species, together with frequency ranges of increased or reduced sensitivity. The species-specific shape of the audiogram is generated by the filter and amplifier characteristics of the outer, middle and inner (cochlea) ear (e.g., Ehret, 1989), i.e., the basis of hearing reflects very peripheral properties of processing in the auditory pathways.


Fig. 1. Relationship between audiograms of humans and house mice and the frequency ranges of their vocalizations. The audiograms represent the auditory threshold curves, i.e., the minimum sound pressure levels in dB (y-axis) as a function of the frequency of a just audible tone (x-axis). Two frequency scales are shown. One applies to human hearing (in kHz); the other is expressed in octaves and calibrated to the frequency with the lowest hearing threshold in the audiogram. The octave scale is common to mammals. The audiogram of the mouse shows the frequency of best hearing (fopt) at 15 kHz. The main frequency ranges of the vowels of human speech, of the cries of human babies, and of the calls of adult mice all lie in a frequency range from around fopt to about four octaves below it (the normal frequency range of vocalizations), which is also found in many other mammals. Mice have special adaptations to hear and vocalize in the high ultrasonic range up to three octaves above fopt. Other rodents and many bat species have similar specializations. The audiograms are from Ehret (1974).
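
The octave axis in Fig. 1 is simply a logarithmic rescaling of frequency relative to each species' best frequency; a minimal sketch of the conversion, using the mouse fopt of 15 kHz given in the caption (the example frequencies are arbitrary):

    import math

    def octaves_re_fopt(freq_hz, fopt_hz):
        # position on the octave scale: log2 of frequency relative to fopt
        return math.log2(freq_hz / fopt_hz)

    print(octaves_re_fopt(937.5, 15000))     # -4.0: four octaves below the mouse fopt
    print(octaves_re_fopt(120000, 15000))    # 3.0: high ultrasonic range, three octaves above fopt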

Further, audiograms set the limits for sound communication in the frequency-intensity space, i.e., frequencies of communication sounds must be within the frequency range of the audiogram, and at or above the minimal audible sound pressure levels represented by the audiogram. For many mammals the frequency range of the audiogram can be divided into three parts with regard to the frequency ranges of communication sounds (Fig. 1): (1) a central “normal” part where the main energy of most communication sounds is located, as illustrated for humans and mice; (2) a specialized part above the central part serving communication in the high ultrasonic ranges (e.g., rodents, bats; see Brudzynski and Fletcher, Chapter 3.3 in this volume); and (3) a specialized part below the central part for communicating over long distances with low-frequency sounds (e.g., elephants; see Garstang, Chapter 3.2 in this volume). Knowing the sound pressure levels of frequencies in communication sounds and their specific attenuation by the medium in which the sound spreads out, one can calculate the communication space of a sender (e.g., Haack et al., 1983). Since high frequencies, especially in the high ultrasonic range, are heavily damped in air, communication with ultrasounds is restricted to short distances around the sender.
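
In this spirit, whether a given frequency component of a communication sound is audible can be checked against an audiogram by interpolation; the sketch below uses an invented audiogram (not measured values for any species) purely to illustrate the frequency-intensity criterion:

    import numpy as np

    # Invented audiogram: threshold (dB SPL) of a just audible tone at each frequency
    audiogram_freqs_hz = np.array([1000, 2000, 4000, 8000, 16000, 32000, 64000])
    audiogram_thresholds_db = np.array([45, 30, 15, 5, 0, 10, 40])

    def is_audible(freq_hz, level_db_spl):
        if not audiogram_freqs_hz[0] <= freq_hz <= audiogram_freqs_hz[-1]:
            return False                       # outside the audible frequency range
        threshold = np.interp(np.log2(freq_hz),            # interpolate on a log-frequency axis
                              np.log2(audiogram_freqs_hz),
                              audiogram_thresholds_db)
        return level_db_spl >= threshold

    print(is_audible(20000, 20))   # True: 20 dB SPL is above the interpolated threshold here
    print(is_audible(1200, 20))    # False: the threshold near 1.2 kHz is far higher than 20 dB SPL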

Two other conditions have to be considered if one would like to take the audiogram of a species as a frame for estimating the audibility of communication sounds. First, audiograms usually describe the sensation of sounds of rather long duration (100 ms and longer, depending on the frequency spectrum). The perceptual thresholds of shorter sounds increase by about 10 dB for a decrease in sound duration to about one tenth, for example from 100 ms to 10 ms (Fay, 1988; Ehret, 1989); a small numerical sketch of this rule follows below. Thus, for optimal detection, communication sounds should either be long, or consist of a train of short duration pulses with a high repetition rate so that their energy is integrated over time to reach a low detection threshold.

Second, the shape of the audiogram changes with the age of the animal. Young animals start hearing in a restricted frequency range (often close to the best frequency range of hearing in adults) at rather high detection thresholds. With increasing age during development, the audible frequency range increases towards lower and higher frequencies, and the thresholds decrease towards those of adults (Ehret, 1983, 1988). Old animals may suffer from hearing loss, especially at high frequencies (Ehret, 1974). Thus, newborns of cats, dogs, many rodent and bat species, and marsupials start hearing only several days after birth and may reach adult auditory sensitivity within about 3-12 weeks; in humans not before the age of two years (Ehret, 1983, 1988). This means that very young mammals and also very old ones may be active in producing sounds, but may not be able to perceive sounds as young adults do. This requires special strategies for acoustic communication with young infants. If infants are the receivers of communication sounds, adult bats (Gould, 1971; Brown, 1976), cats (Haskins, 1977), wolves (Shalter et al., 1977) and humans (Stern et al., 1982) use sounds of a simple frequency and time structure, with frequency components in the optimal frequency range of the young and with rhythmic repetitions of elements.
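
The duration rule mentioned under the first condition can be written as a simple approximation: for a sound shorter than the reference duration, the detection threshold rises by roughly 10 log10(Tref/T) dB. A minimal sketch with invented numbers:

    import math

    def short_duration_threshold(threshold_ref_db, t_ref_ms, t_ms):
        # about 10 dB higher threshold per tenfold reduction in duration (approximation)
        return threshold_ref_db + 10 * math.log10(t_ref_ms / t_ms)

    print(short_duration_threshold(5, 100, 10))   # 15.0 dB: 10 dB higher for a 10x shorter sound
    print(short_duration_threshold(5, 100, 25))   # ~11.0 dB for a 4x shorter sound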


URL: https://www.sciencedirect.com/science/article/pii/S1569733910700217

What is the name of the band of fibers connecting the left and right hemispheres?

The corpus callosum is a large bundle of more than 200 million myelinated nerve fibers that connect the two brain hemispheres, permitting communication between the right and left sides of the brain.

When an object is placed unseen in the right hand of a split-brain patient, can the patient name it?

Yes. A split-brain patient can name an unseen object placed in the right hand, because that information reaches the left, language-dominant hemisphere, but cannot name an object placed in the left hand; in that case the patient cannot say what the object is. What does this suggest about the language abilities of the two hemispheres?

Which band of fibers connecting the brain's left and right hemispheres thickens in adolescence, improving adolescents' ability to process information?

The corpus callosum, which connects the brain's left and right hemispheres, thickens in adolescence, and this improves adolescents' ability to process information.

What does split-brain research show?

Sperry received the 1981 Nobel Prize in Physiology or Medicine for his split-brain research. Sperry discovered that the left hemisphere of the brain was responsible for language understanding and articulation, while the right hemisphere could recognize a word, but could not articulate it.