

Information about Hearing, Communication, and Understanding



Figure 1. Three panels: a waterfall, a boy reciting, a saxophonist.


Figure 1. Sounds may be classified as environmental, voiced, or musical.


Sound offers us a powerful means of communication. Our sense of hearing enables us to experience the world around us through sound. Because our sense of hearing allows us to gather, process, and interpret sounds continuously and without conscious effort, we may take this special sense of communication for granted. But, did you know that
  • Human communication is multisensory, involving visual, tactile, and sound cues?
  • The range of human hearing, from just audible to painful, is over 100-trillion-fold?
  • Tiny specialized cells in the inner ear, known as hair cells, are responsible for converting the vibrational waves of sound into electrical signals that can be interpreted by the brain?
  • Tinnitus, commonly known as “ringing in the ears,” is actually a problem that originates in the brain?
  • A recent study showed that men who hunt face a 7 percent increase in the risk of high-pitched hearing loss for every five years that they hunt? Nearly all (95 percent) of these same hunters report that they do not use hearing protection while hunting.11
Contemporary hearing research is guided by lessons learned from sensory research, namely that specialized nerve cells respond to different forms of energy—mechanical, chemical, or electromagnetic—and convert this energy into electrochemical impulses that can be processed by the brain. The brain then works as the central processor of sensory impulses. It perceives and interprets them using a “computational” approach that involves several regions of the brain interacting all at once. This notion is different from the long-held view that the brain processes information one step at a time in a single brain region. Over the past decade, scientists have begun to understand the intricate mechanisms that enable the ear to convert the mechanical vibrations of sound to electrical energy, thereby allowing the brain to process and interpret these signals.
Scientific understanding of the role of genes in hearing is also increasing at an impressive rate. The first gene associated with hearing was isolated in 1993. By the end of 2000, more than 60 genes related to hearing had been identified.15 In addition, scientists have pinpointed over 100 chromosomal regions believed to harbor genes affecting the hearing pathway. Many of these genes were first isolated in the mouse, and the corresponding human genes were then identified. Completion of the Mouse and Human Genome Projects is helping scientists isolate these genes.
The rapid growth in our understanding is of more than academic interest. In a practical sense, sharing this information with young people can enable them to adopt a lifestyle that promotes the long-term health of their sense of hearing. With this in mind, this supplement will address several key issues, including
  • What is the nature of sound?
  • What mechanism allows us to process sounds with great precision—from the softest whisper to the roar of a jet engine, from a high-pitched whistle to a low rumble?
  • What are the roles of hearing, processing, and speaking in human communication?
  • What happens when the hearing mechanism is altered or damaged? How does sound processing change?
  • What can be done to prevent or accommodate damage to our sense of hearing?


Misconceptions Related to Sensory Perception and Hearing


In presenting the material contained within this supplement, you may have to deal with students’ incomplete understanding about hearing. Some likely student misconceptions about hearing follow:

Misconception 1: Our senses provide a complete and accurate picture of the world.

Younger students are often unaware of the limitations of their senses. They may believe that what they perceive is all that there is. Most students would be quite surprised to learn that their ears produce measurable sounds of their own that are normally inaudible to the brain. Also, they might not be aware that some animals use sound frequencies that are out of our hearing range. For example, whales communicate using low-frequency sounds that are inaudible to humans and can carry across vast expanses of ocean. This module will make students aware that our senses react to only a limited range of the energy inputs available. Much sensory information exists beyond our ability to experience it. Our level of awareness is influenced by our individual abilities, our genes, our environment, and our previous experiences, as well as the interactions among them. Learning about the limitations of our senses can help students interpret their environment more accurately.
Your ears produce sounds of their own that are normally inaudible to the brain.

Misconception 2: Our senses function independently of one another.

Students may believe that because each sense is specialized for a particular type of sensation, senses function by themselves and do not interact with one another or with the rest of the body. Research, however, reveals many interactions between the senses.7 During this module, students will learn about the sensory integration that takes place in the brain.

Misconception 3: As we age, our brain networks become fixed and cannot be changed.

Scientific research has shown that the brain never stops changing and adjusting to its environment.1 This ability is important for acquiring new knowledge and for compensating for deficiencies that result from age or injury. The ability of the brain to “reprogram” itself is called plasticity. Special brain exercises, or training techniques, exploit brain plasticity to help people cope with specific language and reading problems.


Figure 2


Figure 2. Regardless of the senses used, understanding occurs in the brain.

Misconception 4: Our senses do not really require any preventive maintenance.

Students may believe that because our senses function without any conscious input, always being “on,” their function and health are not influenced by what we do. The module will make students aware that the overall health of their senses, like all other bodily systems, is affected by the lifelong demands placed on them. Students will learn about biological mechanisms in which potentially harmful input can lead to both short-term and long-term hearing impairments, and they will learn about simple, effective ways to minimize harmful stimuli.


3 Major Concepts Related to Hearing and Communication

Research into hearing and communication is providing a scientific foundation for understanding the anatomy, physiology, and genetics of the hearing pathway, as well as the social and cultural aspects of human communication. The following discussion is designed to introduce you to some major concepts about hearing and communication.


3.1 Communication is multisensory

Communication with others makes use of sound and vision.
Although some people might define communication as an interaction between two or more living creatures, it involves much more than this. For example, we are constantly receiving information from, and changing our relationship with, our environment. This communication is received through our senses of smell, taste, touch, vision, and hearing. Communication with others makes use of vision (making eye contact or assessing body language) and sound (using speech or other sounds, such as laughing and crying). When a group of people shares a need or desire to communicate, language is born. The most common human language is the language of words. Words may be communicated in various ways. Although they are usually spoken, they also may be written, fingerspelled, or expressed through sign language.


Figure 3. Three panels: a girl writing, a couple phoning, a man signing.


Figure 3. Words may be communicated by writing, speaking, and signing.


3.2 Language acquisition: imprinting and critical periods

Our brains have specific regions devoted to speech, hearing, and language functions.
Since the time of Plato, there has been debate over the nature of language. Some believe that language is inborn and purposeful, while others believe it to be artificial and arbitrary. Some consider language to be an evolutionary product, while others do not. It appears that words are not “built into” the brain, because language is a relatively recent evolutionary development and also because languages differ substantially from one another. Language and communication are made possible by specialized structures. We have evolved a sophisticated apparatus for both speech and hearing. Our brains have specific regions devoted to speech, hearing, and language functions. Still, the mechanisms by which children acquire language are only partially understood.
There are two concepts important to the acquisition of language. One is imprinting, which refers to the ability of some animals to learn rapidly at a very early age and during a well-defined period in their development. Imprinting generally refers to the ability of offspring to acquire the behaviors characteristic of their parents. This process, once it occurs, is not reversible. A famous example of imprinting was described by Nobel laureate Konrad Lorenz in the 1930s.5 Lorenz observed that newly hatched goslings would follow him, rather than the mother goose, if they saw him first. The period of imprintability may be very short, just hours for some species.
Figure 4. Two goslings follow Konrad Lorenz as he walks past a house
Figure 4. Konrad Lorenz with young goslings that imprinted on him.
A second concept, related to imprinting, is the critical period. A nonhuman example of a critical period is the limited time frame within which a male bird must acquire his song.8 For instance, a male white-crowned sparrow usually begins singing his full song between 100 and 200 days of age. Proper song acquisition is needed for mating and for marking territory. However, to learn his song, the young bird must be exposed to an adult bird’s song consistently and frequently between one week and two months after hatching (the white-crowned sparrow’s critical period for song acquisition). If the male sparrow hears the song only before or after the critical period, he will not be able to learn it correctly.
These examples demonstrate the brain’s flexibility—its ability to be changed or to adapt to its environment. They demonstrate that an animal may alter its behavior or acquire a behavior that helps improve its chances for survival. Do animals have anything to teach us about our own acquisition of language? The answer seems to be yes. Consider the following: Scientists have reported that seal pups learn to recognize their mothers’ voices within a few days of being born.2 This is important because the mother seals must leave their pups after roughly a week to go hunting. Upon returning, mother seals vocalize and wait for their pups to respond. By playing recordings of various females, the investigators determined that for the first few hours after birth, seal pups will respond to the voice of any adult female. However, after two to five days, the pups learn to respond only to their mother’s voice.
Very soon after birth, human infants learn to distinguish speech sounds from other types of sound. Within the next month or two, the infant learns to distinguish between different speech sounds.4, 14 An 18-month-old toddler can recognize and use the sounds (called phonemes) of his or her language and can construct two-word phrases. A 3½-year-old child can construct nearly all of the possible sentence types. From this point on, vocabulary and language continue to expand and be refined.12
Communication is truly a multisensory experience. For most individuals, the pathway from creating sound (speaking) to receiving, processing, and interpreting sound (hearing) is critical.
The parameters of language development and developmental phases are under rigorous study. For ethical reasons, investigators cannot explore such questions through human experimentation that would intentionally deprive infants of language. Occasionally, however, unusual circumstances provide us with a glimpse of how humans acquire or do not acquire spoken language. There are examples of individuals who had normal hearing but were not exposed to spoken language from birth. These individuals never developed normal language or speech. Although such examples have been put forth as support for the critical-period hypothesis of language development, there are confounding issues. The possibility exists that these individuals had some type of brain abnormality that was responsible for their failure to acquire spoken language. Nonetheless, sound’s importance as a stimulus for the hearing apparatus, and the brain’s role in processing it for understanding, is unquestioned.
Communication is truly a multisensory experience. For most individuals, the pathway from creating sound (speaking) to receiving, processing, and interpreting sound (hearing) is critical. This module focuses on the key issues of how sound is processed so that communication is achieved.


3.3 Sound has a physical basis

Sound represents vibrational energy. It is created when a medium such as air, wood, metal, or a person’s vocal cords vibrates. Sound energy is transferred from one molecule to the next through the vibrating medium. To understand sound, consider the analogy of a stone dropped into a body of water. This action produces ripples that spread out in all directions from the point where the stone contacted the water. The ripples become weaker (decrease in intensity) as they get farther from the origin. So it is with sound. The vibration proceeds through a medium in waves. However, unlike ripples on water, sound waves move away from their point of origin in three dimensions, not just two.
Sound waves possess specific characteristics. Frequency represents the number of complete wave cycles per unit of time, usually one second (see Figure 5). Frequency is expressed in hertz (Hz), which means cycles per second. Low-frequency sounds are those that vibrate only a few times per second, while high-frequency sounds vibrate many more times per second. The term used to distinguish your perception of higher-frequency sounds from lower-frequency sounds is pitch.
Figure 5
Figure 5. Representation of frequency. The arrow indicates one cycle of the sound wave.
The speed of sound is constant for all frequencies, although it does vary with the medium through which it travels. In air, sound travels at roughly 340 meters per second. Sound travels fastest through metals because their molecules are packed very closely together; similarly, sound travels about four times faster in water than in air. Sound also travels slightly faster in humid air than in dry air. In addition, humid air absorbs more of the high frequencies than the low ones, so the same sound may be perceived differently in humid and dry conditions. Finally, temperature affects the speed of sound in any medium. For instance, the speed of sound in air increases by about 0.6 meters per second for each degree Celsius increase in temperature.
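This temperature relationship lends itself to a quick calculation. Below is a minimal sketch, assuming the standard textbook reference value of about 331 m/s at 0 °C (not stated in the text, but consistent with its figure of roughly 340 m/s in ordinary air):

```python
def speed_of_sound_air(temp_c):
    """Approximate speed of sound in air (m/s) at temperature temp_c (deg C).

    Uses the linear rule from the text (about 0.6 m/s per degree Celsius),
    starting from ~331 m/s at 0 deg C (an assumed textbook reference value).
    """
    return 331.0 + 0.6 * temp_c

print(speed_of_sound_air(15))  # ~340 m/s, matching the figure quoted in the text
print(speed_of_sound_air(35))  # ~352 m/s on a hot day
```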
The human ear responds to frequencies in the range of 20 Hz to 20,000 Hz (20 kHz),18 although most speech frequencies lie between 100 and 4,000 Hz. Frequencies above 20,000 Hz are referred to as ultrasonic. Though ultrasonic frequencies are outside the range of human perception, many animals can hear these sounds. For instance, dogs can hear sounds at frequencies as high as 50,000 Hz, and bats can hear sounds as high as 100,000 Hz. Other sounds, such as some produced by earthquakes and volcanoes, have frequencies of less than 20 Hz. These sounds, referred to as infrasonic or subsonic, are also outside the range of human hearing.


Figure 6


Figure 6. The sound spectrum.
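As a quick illustration of the spectrum in Figure 6, the sketch below classifies a frequency against the thresholds given above; the boundary values come from the text, while the function itself is our invention:

```python
def classify_frequency(hz):
    """Classify a frequency against the human hearing range (20 Hz - 20,000 Hz)."""
    if hz < 20:
        return "infrasonic (below human hearing; e.g., some earthquake rumbles)"
    elif hz <= 20_000:
        return "audible to humans"
    else:
        return "ultrasonic (dogs hear up to ~50,000 Hz, bats up to ~100,000 Hz)"

for f in [5, 440, 15_000, 30_000, 90_000]:
    print(f, "Hz:", classify_frequency(f))
```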
We all know that sounds can be louder or softer, but what does this mean? Sound is energy, and this energy, when traveling through air, displaces, or vibrates, air molecules. For example, the softest sound humans can hear is a sound that displaces particles of air by one-billionth of a centimeter.13 The extent to which air particles move from their original resting point determines the amplitude of the sound wave (see Figure 7). The greater the amplitude of the sound wave, the greater the intensity, or pressure, of the sound. Intensity refers to the overall amplitude of a sound. This distinction in terms is necessary, since nearly all sounds to which we are exposed are complex sounds made up of a combination of sound waves. Loudness is our perception of the intensity, frequency, and duration of a sound.


Figure 7


Figure 7. Representation of amplitudes of a wave. The dashed line has a lower amplitude than the solid line.
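The distinction between amplitude and intensity can be made quantitative. In standard acoustics (an assumption here, since the text does not state it), intensity grows with the square of amplitude, so doubling a wave’s amplitude quadruples its intensity:

```python
def relative_intensity(amplitude_ratio):
    """Intensity ratio for a given amplitude ratio (intensity ~ amplitude squared)."""
    return amplitude_ratio ** 2

print(relative_intensity(2.0))   # 4.0 -- double the amplitude, four times the intensity
print(relative_intensity(10.0))  # 100.0
```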
Every 10-dB increase represents a 10-fold increase in sound intensity and a perceived doubling of loudness.
Sound intensity is measured in relation to an accepted reference point. One such reference is the threshold at which a sound can be heard. How the intensity of any given sound compares with this standard reference level is expressed in units known as decibels (dB). The decibel is one-tenth of a bel, a unit named after the inventor Alexander Graham Bell. The decibel scale is not linear but logarithmic: it expresses the ratio of a sound’s intensity to the reference intensity. To understand why ratios are necessary, consider the tremendous range of sound intensities we are capable of hearing. Scientists estimate that the human ear is sensitive to a range of about 100,000,000,000,000 (10¹⁴) units of intensity. Also consider that a shout is about 1,000,000 (10⁶) times more powerful than a whisper. Because dealing with such large numbers is cumbersome, the decibel scale is used to simplify comparisons (see Table 1). Every 10-dB increase represents a 10-fold increase in sound intensity and a perceived doubling of loudness. Therefore, a sound at 60 dB is 100 times as intense as a sound at 40 dB but is perceived as only four times as loud. In this way, the predominant range of human hearing is represented on a scale from 0 to 140 dB. The average intensities of some everyday sounds are presented in Table 2.
Table 1. The Decibel System
Intensity Ratio | Intensity Difference (dB)
1:1 | 0
2:1 | 3
4:1 | 6
8:1 | 9
10:1 | 10
16:1 | 12
20:1 | 13
100:1 | 20
400:1 | 26
800:1 | 29
1,000:1 | 30
2,000:1 | 33
8,000:1 | 39
10,000:1 | 40
100,000:1 | 50
1,000,000:1 | 60
10,000,000:1 | 70
100,000,000:1 | 80
1,000,000,000:1 | 90
10,000,000,000:1 | 100
100,000,000,000:1 | 110
1,000,000,000,000:1 | 120
10,000,000,000,000:1 | 130
100,000,000,000,000:1 | 140
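Every entry in Table 1 follows from the definition of the decibel, dB = 10 · log₁₀(intensity ratio). A minimal sketch of the conversions discussed above (the function names are ours):

```python
import math

def intensity_ratio_to_db(ratio):
    """Convert an intensity ratio to a decibel difference: dB = 10 * log10(ratio)."""
    return 10 * math.log10(ratio)

def db_difference_to_ratio(db):
    """Invert: how many times more intense is a sound that is db decibels stronger?"""
    return 10 ** (db / 10)

print(intensity_ratio_to_db(1_000_000))  # 60.0 dB -- a shout vs. a whisper
print(db_difference_to_ratio(60 - 40))   # 100.0 -- 60 dB is 100x as intense as 40 dB
# Perceived loudness roughly doubles per 10 dB, so a 20-dB step reads as ~4x as loud:
print(2 ** ((60 - 40) / 10))             # 4.0
```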
Table 2. Average Intensities of Everyday Sounds
Sound | dB Level
hearing threshold | 0
breathing | 10
rustling leaves | 20
whispering | 25
library | 30
refrigerator | 45
average home | 50
normal conversation | 60
clothes dryer | 60
washing machine | 65
car | 70
vacuum cleaner | 70
busy traffic | 75
noisy restaurant | 80
outboard motor | 80
inside car in city traffic | 85
electric shaver | 85
screaming child | 90
passing motorcycle | 90
convertible ride on freeway | 95
table saw | 95
hand drill | 100
tractor | 100
diesel truck | 100
circular saw | 100
jackhammer | 100
gas engine mower | 105
helicopter | 105
chain saw | 110
amplified rock concert | 90–130
shout into ear at 20 cm | 120
car horn | 120
siren | 120
threshold of pain | 120–140
gunshot | 140
jet engine | 140
12-gauge shotgun | 165
rocket launching | 180
loudest audible tone | 194
Even common noises, such as highly amplified music and gas-engine mowers or leaf blowers, can damage human hearing with prolonged exposure.
Individuals are often unaware of the damage loud noise does to their hearing. Even common noises, such as highly amplified music and gas-engine mowers or leaf blowers, can damage human hearing with prolonged exposure. Sporting events can also expose individuals to hazardous decibel levels as defined by the Occupational Safety and Health Administration (OSHA). Under OSHA guidelines, the limit of continuous noise exposure for an eight-hour day in an industrial setting is 90 dB. OSHA also prohibits workplace impact noise (short bursts of sound) greater than 140 dB. By becoming more aware of the decibel levels of common environmental noises, we can better limit our exposure to hazardous noise or take measures to protect our ears.
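OSHA’s 90-dB, eight-hour limit is one point on a sliding scale: under the general-industry rule, permissible exposure time halves for every 5-dB increase in level (a 5-dB “exchange rate”). This formula comes from the OSHA standard itself, not from the text above, so treat the sketch as illustrative:

```python
def osha_permissible_hours(level_db):
    """Permissible daily exposure (hours) at a continuous noise level, per
    OSHA's general-industry rule: 8 h at 90 dB, halving every 5 dB
    (taken from the OSHA standard; an assumption beyond the text above)."""
    return 8 / (2 ** ((level_db - 90) / 5))

for level in [90, 95, 100, 105, 110]:
    print(f"{level} dB: {osha_permissible_hours(level):.2f} h/day")
# 90 dB allows 8 h/day; 110 dB allows only 0.5 h/day.
```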


3.4 Perception of sound has a biological basis

When sound, as vibrational energy, arrives at the ear, it is processed in a complex but distinct series of steps. These steps reflect the anatomical division of the ear into the outer ear, middle ear, and inner ear (see Figure 8).


Figure 8


Figure 8. Anatomy of the human ear.
The pathway from the outer ear to the inner ear is remarkable in its ability to precisely process sounds from the very softest to the very loudest and to distinguish very small changes in the frequency of sound (pitch). Humans can discern a difference in frequency of just 0.1 percent. This means that humans can tell the difference between sounds at frequencies of 1,000 Hz and 1,001 Hz.
The outer ear. The outer ear is composed of two parts. The pinna is the outside portion of the ear and is composed of skin and cartilage. The second part is called the ear canal (also called the external auditory canal). The pinna, with its twists and folds, serves to enhance high-frequency sounds and to focus sound waves into the middle and inner portions of the ear. The pinna also helps us determine the direction from which a sound originates. However, the greatest asset in judging the location of a sound is having two ears. Because one ear is closer to the source of a sound than the other, the brain detects slight differences in the times and intensities of the arriving signals. This allows the brain to approximate the sound’s location. Interestingly, the position and orientation of the pinna, at the side of the head, help reduce sounds that originate behind us. This helps us hear sounds that originate in the direction we are looking and reduces distracting background noises.
Some students (and adults) may believe that the size of the ear is an indication of the organism’s hearing ability—that is, the larger the ear, the better the ability to hear. This misperception doesn’t take into account the internal structures of the ear that process sound vibrations. A large pinna may serve a function that is unrelated to hearing. For example, the external ear of the African elephant is filled with small blood vessels that help the animal dissipate excess heat. The external ear may be specialized in other ways, as well. Cat owners, for example, have undoubtedly observed the rather dramatic movement of their pet’s pinnae as the animal attempts to locate the source of a sound.
The ear canal acts as an amplifier for sound frequencies between 3,000 and 4,000 Hz.
The ear canal is about 2.5 cm (1 inch) long and leads to the tympanic membrane (eardrum) of the middle ear. The outer two-thirds of the canal contains glands that secrete a wax-like substance. This earwax, along with the hairs that are present, keeps dust, insects, and other foreign material from going deeper into the ear. It also helps maintain a constant humidity and temperature for the middle ear. Individuals should not attempt to remove earwax themselves; in most cases, the secretion works its way out of the canal naturally, and if removal is needed, it should be done by a medical professional to avoid damage. Hearing researchers strongly endorse the adage: Put nothing smaller than your elbow into your ear. In addition to its protective function, the ear canal acts as an amplifier for sound frequencies between 3,000 and 4,000 Hz.
The elegance of the middle ear system lies in its ability to greatly amplify sound vibrations before they enter the inner ear.
The middle ear. The tympanic membrane (eardrum) separates the outer ear from the middle ear. It is a continuously growing structure, which means that damage to the membrane can generally be repaired. The membrane is circular in shape. The elastic properties of the tympanic membrane allow it to vibrate in response to sound waves. Vibrations from the tympanic membrane tend to focus near the center of the structure. From there, the vibrations are transferred to the malleus, the first of the three bones of the middle ear. The three bones of the middle ear are collectively called the ossicles. The second bone of the middle ear is the incus, which is connected to the malleus and vibrates in concert with it. A third bone, the stapes, is connected to the incus, and also vibrates. The stapes sits in an opening in the bony wall, called the oval window, that separates the middle ear from the inner ear. The elegance of the middle ear system lies in its ability to greatly amplify sound vibrations before they enter the inner ear. Amplification occurs in part because the tympanic membrane is 15–30 times larger than the oval window. This size difference allows the force from the initial movement of the tympanic membrane to be concentrated as this energy transfers to the inner ear. The ossicles are the smallest bones in the body; all three are smaller than an orange seed. The malleus reaches an average length of about 8 mm, the incus 9 mm, and the stapes, only 3 mm. These bones also are referred to informally as the hammer, anvil, and stirrup, respectively.
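The amplification from this size difference can be estimated directly. Pressure is force per unit area, so concentrating roughly the same force from the large tympanic membrane onto the much smaller oval window multiplies the pressure by about the area ratio; expressing the gain in decibels uses the pressure form of the decibel formula, dB = 20 · log₁₀(ratio), an assumption not spelled out in the text:

```python
import math

def pressure_gain_db(area_ratio):
    """Decibel gain from concentrating the same force onto a smaller area.
    Pressure scales with the area ratio; for pressure, dB = 20 * log10(ratio)."""
    return 20 * math.log10(area_ratio)

# Using the 15x-30x tympanic-membrane/oval-window area ratio from the text:
print(f"{pressure_gain_db(15):.0f} dB")  # ~24 dB
print(f"{pressure_gain_db(30):.0f} dB")  # ~30 dB
```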
The middle ear is an air-filled space. It is connected to the back of the throat by a small tube called the eustachian tube, which allows the air in the middle ear space to be refreshed periodically. The eustachian tube can become blocked by infection, and fluid may fill the middle ear space. Changes in air pressure can also affect the tympanic membrane, resulting in the ear-popping phenomenon experienced by people who fly in airplanes or drive over mountain roads. The membrane may bend in response to altered air pressure and then “pop” back to its original position when the eustachian tube opens and internal and external air pressures are equalized.
The process of converting the vibrational energy of sound into nerve impulses is called transduction.
The inner ear. Two interconnected parts that form a system of small cavities and passageways make up the inner ear. One part is the vestibular system, which is responsible for helping maintain balance. The second part is the cochlea, a coiled cavity about 35 mm long. The human cochlea makes about two turns. It is shaped like a spiral seashell or snail shell and is the hearing portion of the inner ear. It is responsible for converting the vibrational energy produced by the middle ear into nerve impulses (electrical energy) that will travel to the brain. The process of converting energy from one form into another is called transduction. Because the brain is incapable of interpreting the information in the vibrational energy of a sound source, transduction is a critical process, providing information to the brain in a form that it can process.
The cochlea is divided into an upper chamber, called the scala vestibuli or vestibular canal, and a lower chamber, called the scala tympani or tympanic canal. These are seen most easily if the cochlea is represented as uncoiled, as in Figure 9.


Figure 9


Figure 9. An uncoiled cochlea, to the right of the oval and round windows.
Both the upper and lower chambers are filled with a fluid, called perilymph, which is nearly identical to spinal fluid. The stapes vibrates against the oval window, creating fluid vibrations that are transmitted as pressure waves all the way through the cochlea. As represented by the arrows in Figure 10, these waves move from the upper chamber to the lower chamber, to the round window. The round window allows the release of the hydraulic pressure caused by vibration of the stapes in the oval window. Additionally, the diameter of the chambers decreases from base (closest to the windows) to apex.


Figure 10


Figure 10. Diagrammatic representation of the movement of vibrational energy through the cochlea.
The upper and lower chambers are separated from one another by the cochlear duct. The cochlear duct is separated from the lower chamber by the basilar membrane and is filled with endolymph, a fluid similar to that found within cells. Sitting on the basilar membrane is the highly sensitive organ of hearing called the organ of Corti, named after Alfonso Corti, the Italian anatomist who discovered it in the mid-1800s. The relationships between the basilar membrane and the organ of Corti are depicted in Figure 11.


Figure 11


Figure 11. Details inside a coil of the cochlea showing the organ of Corti.
Hair cells ultimately translate, or transduce, mechanical phenomena occurring in the outer, middle, and inner ear into electrical impulses.
Hair cells of the organ of Corti are the specialized receptor cells of hearing. Under a microscope, these cells appear as elongated ovals with hairlike extensions, the stereocilia, waving at one end. Like microphones, hair cells ultimately translate, or transduce, mechanical vibrations occurring in the outer, middle, and inner ear into electrical impulses. These nerve impulses are then relayed to the brain via the auditory nerve. There are actually two types of hair cells. The inner hair cells are arranged in a single row along the full length of the organ of Corti. There are about 3,500 of them in total. The outer hair cells run the full length of the organ of Corti but are arranged in three parallel rows. There are nearly four times more outer hair cells than inner hair cells (about 12,000 per ear). The inner hair cells contact nearly all of the nerve fibers of the auditory nerve that transmits information to the brain. The outer hair cells primarily contact nerve fibers that carry information from the brain. Hair cells are quite sensitive to stimulation by slight sounds and also are extremely rapid in their responses and communication with auditory neurons. Hair cells, for example, respond 1,000 times faster to stimulation than do visual receptor cells. The key to their sensitivity lies in part with their structure. The membrane-bound hairlike structures that give hair cells their name, stereocilia, extend from the cell tops and are embedded in an overhanging sheet of cells called the tectorial membrane (see Figure 12). Each hair cell may have about 100 stereocilia. In a resting state, the stereocilia lean on one another and have the overall appearance of a conical bundle.


Figure 12


Figure 12. Details of the organ of Corti showing hair cells and the relationship of the stereocilia (hair bundles) to the adjacent membrane.
To understand how hair cells function to transduce the mechanical vibrations of sound, consider Figures 10 and 12.
The bodies of hair cells sit on top of the basilar membrane (see Figure 12). The stereocilia of hair cells connect the body of the cell with the tectorial membrane. Pressure waves in the cochlea (see Figure 10) move the basilar membrane and cause the stereocilia to move. This movement initiates biochemical events in the cells that result in the generation of electrical signals.
Sound is mapped to different parts of the cochlea according to frequency. Figure 13 shows where tones of different frequencies cause vibrations of maximum amplitude along the length of the cochlea. The base, close to the stapes, is stiff and narrow and responds more to high-frequency (high-pitched) sounds. The apex, far from the stapes, is broad and responds more to low-frequency sounds.


Figure 13


Figure 13. Specific frequencies cause vibrations of maximum amplitude at different points along the cochlea. The numbers in the diagram represent frequency in hertz.
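The base-to-apex map in Figure 13 has a well-known empirical description, the Greenwood function, which is standard in auditory science but not mentioned in the text; the constants below are commonly cited human values, so treat the sketch as an approximation:

```python
def greenwood_frequency(x):
    """Greenwood's empirical frequency-position map for the human cochlea
    (assumed constants: A = 165.4, a = 2.1, k = 0.88). x is the fractional
    distance along the basilar membrane from the apex (x = 0) to the base (x = 1)."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"x = {x:.2f}: ~{greenwood_frequency(x):,.0f} Hz")
# Runs from ~20 Hz at the apex to ~20,000 Hz at the base,
# matching the low-apex/high-base pattern in Figure 13.
```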
Transmission to the brain. Extending from the organ of Corti are 30,000–40,000 nerve fibers that form the auditory nerve. The number of fibers required to carry a sound signal may give the brain a measure of the sound’s intensity. The fibers of the auditory nerve proceed a short distance to the brainstem. From there, fibers extend to the midbrain and then to the auditory cortex, which is located in the temporal lobe of the brain (see Figure 14). Through mechanisms that remain unknown, the brain interprets the electrochemical information it receives, thus allowing us to perceive sounds as having varying loudness and pitch.
Figure 14. Diagram of the human brain with a shaded area in the center labeled auditory cortex.
Figure 14. The location of the auditory cortex in the human brain.
The brain recognizes and interprets sound in our environment through a sequence of events called auditory processing. A disorder, known as auditory processing disorder (APD), came to prominence in the 1970s.3, 9 In APD, something interferes with the brain’s ability to process or interpret information about sound, although hearing itself seems to be normal. Children with APD typically have normal hearing and intelligence. Symptoms of APD include difficulty paying attention to and remembering information presented orally; poor listening skills; difficulty carrying out multistep directions; poor spelling, vocabulary, and reading comprehension skills; low academic performance; behavioral problems; and language difficulties, such as confusing syllable sequences and being slow to develop vocabulary and to understand language. APD is sometimes called “word-deafness” because children with the disorder may not recognize the subtle differences between sounds in words. For example, children with APD may hear the sentence “Tell me how a couch and a chair are alike” as “Tell me how a cow and a hair are alike.”
What causes this apparent deficiency or slowing in the brain’s ability to process auditory information? Researchers do not know. Auditory processing is a learned function, and if something interferes with the brain’s training, the result may be a deficit in the capacity to process sound.
Sound direction is localized by virtue of our having two ears and our ability to use different parts of the auditory system to process distinct aspects of incoming directional information.
Sound direction is localized by virtue of our having two ears and our ability to use different parts of the auditory system to process distinct aspects of incoming directional information. Certain cells in the brainstem compare the intensities of sound coming into each ear and then relay a computed signal to the auditory cortex to estimate the sound’s direction. Another group of brainstem cells contributes to the interpretation of sound direction by specifically comparing the time lag between the sound reaching them from the right ear versus the left ear.
Nerve fibers coming from the brain may carry information back to the ear. This is the brain’s way of filtering out unimportant signals and concentrating on important ones. Other nerve fibers proceed from the brain to the middle ear, where they control muscles that help protect against the effects of dangerously loud sounds.
Not only does the inner ear process the sound vibrations it receives, it also creates its own sound vibrations. When hair cells respond to vibration, their movement in the fluid environment of the cochlear duct produces friction, and this results in a loss of energy. However, a group of hair cells replaces the lost sound energy by creating sound energy of its own. Some of this sound energy leaks back out of the ear and can be detected using a computer-based sound analyzer and a probe inserted into the outer third of the ear canal. This ability of hair cells to respond to sound by producing their own sound is the basis of one type of hearing test performed on infants and young children.


4 Hearing Loss


Hearing loss and deafness can result from sound exposure, heredity, ototoxic drugs, accidents, and disease or infection.
The auditory pathway is capable of providing a lifetime of useful service. It is, however, fragile and subject to damage from a variety of sources. Hearing loss and deafness can result from sound exposure, heredity, ototoxic drugs (chemicals that damage auditory tissues), accidents, and disease or infection. Conductive hearing loss results from damage to the outer or middle ear, and sensorineural hearing loss results from damage to the inner ear.
Damage associated with conductive hearing loss interferes with the efficient transfer of sound to the inner ear. Conductive hearing loss is characterized by a loss in sound intensity. Voices may sound muffled, while at the same time the individual’s own voice may seem quite loud. It can be caused by anything that interferes with the vibration of the eardrum or with the movement of the bones of the middle ear. Even a buildup of earwax can lead to conductive hearing loss.
A number of treatment options exist for conductive hearing loss. The appropriate response depends upon the cause of the problem. For example, an ear doctor can simply remove a buildup of earwax. It should be pointed out, however, that you should never try to remove wax from your own ears. You can too easily push the wax further into the ear canal and even damage your eardrum. A common cause of conductive hearing loss in children is ear infections. Other causes of conductive hearing loss are a punctured eardrum or otosclerosis (a buildup of spongy tissue around the middle ear). These can be treated through surgery.
Sensorineural hearing loss is generally associated with damage to the hair cells in the inner ear. Such damage is the most common cause of hearing loss and can result from several factors working alone or in combination.


4.1 Noise exposure

The effects of noise-induced hearing loss may be temporary or permanent, depending on the intensity and duration of the exposure.
When hair cells are damaged, their ability to participate in sound transduction is compromised. If your hair cells are completely destroyed, you will be unable to hear any sounds, no matter how loud they are. If the hair cells are damaged, you may still hear sounds, but the sounds will be distorted. Recall that different hair cells respond to different pitches. The pattern of hair-cell damage determines which pitches are preferentially lost. Typically, hair cells that respond to higher pitches are lost first. One reason is that the basilar membrane vibrates more vigorously in response to higher pitches. These vibrations can cause the delicate stereocilia of the hair cells to be sheared off (see Figure 15). One consequence of this damage is that it becomes more difficult to understand the higher-pitched voices of women and children. It also becomes more difficult to distinguish a person’s speaking voice from background noise. The effects of noise-induced hearing loss may be temporary or permanent, depending on the intensity and duration of the exposure. Although a person’s hearing may recover from temporary, slight damage to the hair cells, the complete loss of hair cells is irreversible in humans. Reptiles and birds are able to regenerate hair cells, however, so scientists are currently exploring ways to encourage regeneration of hair cells in humans.


Figure 15. Three panels: normal hair bundles, damaged hair bundles, fused hair bundles.


Figure 15. The left panel shows normal stereocilia (or hair bundles) associated with inner hair cells in the cochlea. The middle and right panels show noise-induced damage to hair cells. Note the bent-over stereocilia in the middle panel. The right panel shows missing and fused stereocilia.
The phrase “too loud, too long, too close” summarizes the causes of noise-induced hearing loss.
The phrase “too loud, too long, too close” (see the WISE EARS! Web site, http://www.nidcd.nih.gov/health/wise/index.asp) summarizes the causes of noise-induced hearing loss. The intensity and duration of a sound and its proximity to the listener determine whether damage occurs and whether that damage is temporary or permanent. Hearing loss can result from a single loud noise, such as an explosion, but more commonly results from repeated exposure to less intense sounds at close range.


4.2 Aging

Damage to hair cells is associated with aging, though it is not inevitable. Such damage can result from a combination of factors, such as noise exposure, injury, heredity, illness, and circulation problems. Some of these factors, such as noise exposure, can take many years before their damaging effects are noticeable. Hearing loss often begins when a person is in his or her 20s, though it may not be noticed until the person is in his or her 50s. Not surprisingly, the greater the noise exposure over a lifetime, the greater the hearing loss. Because the hair cells at the base of the cochlea “wear out” before those at the apex, the higher pitches are lost first, followed by the lower ones.


4.3 Ototoxic drugs

Medications and chemicals that are poisonous to auditory structures are called ototoxic. Certain antibiotics can selectively destroy hair cells, enabling scientists to better understand hair-cell function in normal and abnormal hearing. Other types of drugs can be used to selectively destroy other tissues of the auditory pathway. A few common medications can produce the unwanted side effect of tinnitus, or ringing in the ears. One such drug is aspirin. Arthritis sufferers, who may consume large amounts of aspirin, sometimes experience tinnitus and hearing loss as a side effect of their aspirin use. Fortunately, the effect is temporary and the tinnitus tends to disappear when aspirin use is discontinued.


4.4 Disease and infections

Young children who experience ear infections accompanied by hearing loss for prolonged periods may also exhibit delayed speech development.
A variety of diseases and infections can lead to hearing loss. Children are especially prone to the ear infection called otitis media, caused by viruses or bacteria. Children are more susceptible to infection than adults are, partly because the position of their eustachian tube relative to the middle ear gives bacteria from the nasal passages easier access. These infections cause pain and may result in a buildup of fluid, which can lead to hearing loss. Usually, bacterial infections can be controlled by antibiotics. Antibiotics are ineffective against viruses, however, and the over-prescription of antibiotics to treat viral forms of otitis media has led to a rise in antibiotic-resistant bacteria. If allowed to progress untreated, ear infections can lead to a much more serious condition called meningitis. Young children who experience ear infections accompanied by hearing loss for prolonged periods may also exhibit delayed speech development. The reason is that the first three years of life are a critical period for acquiring language, which depends upon a child’s ability to hear spoken words.
Otosclerosis refers to a condition in which the bones of the middle ear are damaged by the buildup of spongy or bone-like tissue. The impaired function of the ossicles (the malleus, incus, and stapes) can reduce the sound reaching the ear by as much as 30 to 60 dB. This condition may be treated by surgically replacing all or part of the ossicular chain with an artificial one.
Ménière’s disease affects the inner ear and vestibular system, the system that helps us maintain our balance. In this disorder, the organ of Corti becomes swollen, leading to a loss of hearing that comes and goes. Other symptoms include tinnitus, episodes of vertigo (dizziness), and imbalance. The disease can exist in mild or severe forms. Unfortunately, the cause of the disease is not well understood and effective treatments are lacking.


4.5 Heredity

The Mouse and Human Genome Projects are setting the stage for identifying the genetic contributions to hearing.
The Mouse and Human Genome Projects are setting the stage for identifying the genetic contributions to hearing. Though deciphering the genetics underlying any developmental pathway is complex, identifying genes involved in the hearing pathway can greatly aid our understanding of the hearing process. Genes associated with a number of hereditary conditions that cause deafness, such as Usher syndrome16 and Waardenburg syndrome,17 have already been isolated. The identification of hearing-related genes has moved at an incredibly fast pace in the past decade. The first genetic mutation affecting hearing was isolated in 1993; by the end of 2000, the number of identified auditory genes was over 60. Scientists have also pinpointed over 100 chromosomal regions believed to harbor genes affecting the hearing pathway.
An important technology for investigating the roles that genes play in hearing is the production of transgenic and “knockout” mice, which result when scientists insert a foreign gene into (transgenic) or delete a targeted gene from (knockout) the mouse genome. The hearing responses of transgenic or knockout mice are compared with their unaltered counterparts. If differences are detected, they are presumed to be caused by the specific gene that was inserted or deleted. Eventually, scientists hope to use their understanding of the genetic basis of hearing to develop treatments for hereditary hearing loss and deafness.


4.6 Cochlear implants

A cochlear implant (see Figure 16) is a hearing device designed to bypass absent or damaged hair cells. The cochlear implant is a small, complex, electronic device that can help provide an interpretable stimulus to a person who is profoundly deaf or severely hard-of-hearing. The implant is surgically placed under the skin behind the ear and consists of four basic parts (sketched schematically in the code after this list):
  • a microphone that picks up sound from the environment;
  • a speech processor that selects and arranges sounds picked up by the microphone;
  • a transmitter and receiver/stimulator that receives signals from the speech processor and converts them into electric impulses; and
  • electrodes that collect the impulses from the stimulator and send them to the auditory nerve.
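To make the four-part chain concrete, here is a deliberately simplified sketch of the signal flow. Every name and the “processing” itself are invented for illustration; real implant speech processors are far more sophisticated:

```python
# A toy model of the cochlear-implant signal chain described above.
# All function names and processing steps are illustrative inventions.

def microphone(environment_sound):
    """1. Pick up sound from the environment."""
    return environment_sound

def speech_processor(samples):
    """2. Select and arrange the sounds (here: keep only the strongest samples)."""
    return [s for s in samples if abs(s) > 0.1]

def transmitter_stimulator(selected):
    """3. Convert the processed signal into electric impulses (toy scaling)."""
    return [round(s * 100) for s in selected]

def electrodes(impulses):
    """4. Deliver the impulses toward the auditory nerve."""
    return f"stimulating auditory nerve with {len(impulses)} impulses"

sound = [0.05, 0.3, -0.6, 0.02, 0.9]
print(electrodes(transmitter_stimulator(speech_processor(microphone(sound)))))
```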


Figure 16


Figure 16. Diagram of a typical cochlear-implant system.
A cochlear implant does not restore or create normal hearing. Instead, under the appropriate conditions, it can give a deaf or severely hard-of-hearing person a useful auditory understanding of the environment, including sirens and alarms. A cochlear implant is very different from a hearing aid. Whereas hearing aids amplify sound and change the acoustical signal to match the degree of hearing loss, cochlear implants compensate for damaged or nonworking parts of the inner ear by bypassing them altogether. When hearing is functioning normally, complex processes in the inner ear convert sound waves in the air into electrical impulses. These impulses are then sent to the brain, where a hearing person recognizes them as sound. A cochlear implant works in a similar manner: it electronically transforms sounds and then sends them to the brain. Hearing through an implant sounds different from normal hearing, but it allows many people with severe hearing problems to participate fully in oral communication.
Outcomes for patients with cochlear implants vary. For many, the implant provides sound cues that help them better understand speech. Many are helped to such an extent that they can carry on a telephone conversation. Originally, only patients with profound hearing loss were deemed suitable for the procedure. One reason for this restrictive policy is that implantation destroys whatever residual hearing remains in the implanted ear. Eventually, however, it was discovered that patients with some residual hearing could benefit more from the procedure than those with profound hearing loss. For appropriate individuals, cochlear implants can be extremely beneficial. Each case must be examined individually to determine whether a cochlear implant is the best available treatment.
The use of cochlear implants can be controversial, especially among some deaf people. Just as spoken language helps define the culture of the hearing world, sign language helps define the culture of the deaf community. The issues surrounding the use of speech or American Sign Language by the deaf community illustrate the profound effects of language, hearing, and communication on one’s sense of self.


5 Prevention of Noise-Induced Hearing Loss


An estimated 10 million Americans have suffered irreversible hearing damage due to noise exposure.
Noise-induced hearing loss (NIHL) is a serious health problem. It occurs on the job as well as in nonoccupational settings. An estimated 10 million Americans have suffered irreversible hearing damage due to noise exposure. Another 30 million Americans are exposed to dangerous levels of noise every day.10 This is especially tragic because NIHL is completely preventable. Although the consequences may vary for people who are exposed to identical levels of noise, some general conclusions can be stated. For example, studies have shown that sound levels of less than 75 dB are unlikely to cause permanent hearing loss, even after prolonged exposure. However, sound levels equal to or greater than 85 dB—about the same level as loud speech—for eight hours per day will produce permanent hearing loss after many years. At this time, it is not possible to predict a given individual’s degree of sensitivity to dangerous noise. Some people may be more sensitive to noise exposures than others.
In the work environment, employers are obligated to protect their workers from hazardous noise. Hearing-conservation programs, when implemented effectively, are associated with increased worker productivity and decreased absenteeism. They also lead to fewer workplace injuries and workers’ compensation claims. Whenever hazardous levels of sound are encountered, either on the job or at home, you can protect yourself by using ear protection such as earplugs or special earmuffs. Do not simply put your fingers in your ears or stuff cotton in them. Additionally, anyone exposed to significant levels of noise for long durations should receive regular hearing tests to detect changes in hearing.


Figure 17. Woman wearing molded earmuffs while working with spools of yarn in a factory


Figure 17. Ear protection, such as earplugs or special earmuffs, helps prevent noise-induced hearing loss.
Tinnitus is the medical term for the perception of sound when no external sound is present. The disorder is characterized by ringing, roaring, or repeated soft clicks in the ears. It is known that the ear continuously sends electrical impulses to the brain, even in the absence of sound. Some scientists speculate that when hair cells are damaged, the impulses are disrupted and the brain responds by generating its own sound signals. Normally, when an ear is stimulated by sound, auditory regions on both the left and right side of the brain become active. People experiencing tinnitus show brain activation in only one side of the brain, however. This difference in neural activity caused by external sounds (bilateral activation) versus tinnitus (unilateral activation) indicates that the disorder is likely to be a result of changes in the brain itself. Tinnitus may be produced by disturbances in auditory processing by the brain.
Over 50 million Americans experience tinnitus at some point in their lives. Some perceive the disorder as an annoying background noise, while others are incapacitated by a loud sound that disturbs them day and night. Although the exact causes of tinnitus are not known, scientists agree that it is associated with damage to the ear. Possible triggers of tinnitus include NIHL, too much alcohol or caffeine, stress, inadequate circulation, allergies, medications, and disease. Of these factors, exposure to loud noise is by far the most probable cause of tinnitus. Perhaps not surprisingly, there is no single effective treatment. Depending on the suspected cause, individuals may be given drugs to increase blood flow, or provided guidance on ways to reduce their stress or change their diets. The best advice for those concerned about NIHL is to limit exposure to hazardous noise (in both proximity and duration), wear ear protection when exposed, and have hearing tests performed regularly.