Information about Hearing, Communication, and Understanding
Figure 1. Sounds may be classified as environmental, voiced, or musical.
Sound offers us a powerful means of communication. Our sense of hearing enables us to experience the world around us through sound. Because our sense of hearing allows us to gather, process, and interpret sounds continuously and without conscious effort, we may take this special sense of communication for granted. But, did you know that
- Human communication is multisensory, involving visual, tactile, and sound cues?
- The range of human hearing, from just audible to painful, is over 100-trillion-fold?
- Tiny specialized cells in the inner ear, known as hair cells, are responsible for converting the vibrational waves of sound into electrical signals that can be interpreted by the brain?
- Tinnitus, commonly known as “ringing in the ears,” is actually a problem that originates in the brain?
- A recent study showed that men who hunt face a 7 percent increase in the risk of high-frequency hearing loss for every five years that they hunt? Nearly all (95 percent) of these same hunters report that they do not use hearing protection while hunting.11
Scientific understanding of the role of genes in hearing is also increasing at an impressive rate. The first gene associated with hearing was isolated in 1993. By the end of 2000, more than 60 genes related to hearing had been identified.15 In addition, scientists have pinpointed over 100 chromosomal regions believed to harbor genes affecting the hearing pathway. Many of these genes were first isolated in the mouse, and the corresponding human genes were then identified. Completion of the Mouse and Human Genome Projects is helping scientists isolate these genes.
The rapid growth in our understanding is of more than academic interest. In a practical sense, sharing this information with young people can enable them to adopt a lifestyle that promotes the long-term health of their sense of hearing. With this in mind, this supplement will address several key issues, including
- What is the nature of sound?
- What mechanism allows us to process sounds with great precision—from the softest whisper to the roar of a jet engine, from a high-pitched whistle to a low rumble?
- What are the roles of hearing, processing, and speaking in human communication?
- What happens when the hearing mechanism is altered or damaged? How does sound processing change?
- What can be done to prevent or accommodate damage to our sense of hearing?
Misconceptions Related to Sensory Perception and Hearing
Misconception 1: Our senses provide a complete and accurate picture of the world.
Younger students are often unaware of the limitations of their senses. They may believe that what they perceive is all that there is. Most students would be quite surprised to learn that their ears produce measurable sounds of their own that are normally inaudible to the brain. Also, they might not be aware that some animals use sound frequencies that are out of our hearing range. For example, whales communicate using low-frequency sounds that are inaudible to humans and can carry across vast expanses of ocean. This module will make students aware that our senses react to only a limited range of the energy inputs available. Much sensory information exists beyond our ability to experience it. Our level of awareness is influenced by our individual abilities, our genes, our environment, and our previous experiences, as well as the interactions among them. Learning about the limitations of our senses can help students interpret their environment more accurately.
Your ears produce sounds of their own that are normally inaudible to the brain.
Misconception 2: Our senses function independently of one another.
Students may believe that because each sense is specialized for a particular type of sensation, senses function by themselves and do not interact with one another or with the rest of the body. Research, however, reveals many interactions between the senses.7 During this module, students will learn about the sensory integration that takes place in the brain.
Misconception 3: As we age, our brain networks become fixed and cannot be changed.
Scientific research has shown that the brain never stops changing and adjusting to its environment.1 This ability is important for acquiring new knowledge and for compensating for deficiencies that result from age or injury. The ability of the brain to “reprogram” itself is called plasticity. Special brain exercises, or training techniques, exploit brain plasticity to help people cope with specific language and reading problems.
Figure 2. Regardless of the senses used, understanding occurs in the brain.
Misconception 4: Our senses do not really require any preventive maintenance.
Students may believe that because our senses function without any conscious input, always being “on,” their function and health are not influenced by what we do. The module will make students aware that the overall health of their senses, like all other bodily systems, is affected by the lifelong demands placed on them. Students will learn about biological mechanisms in which potentially harmful input can lead to both short-term and long-term hearing impairments, and they will learn about simple, effective ways to minimize harmful stimuli.
3 Major Concepts Related to Hearing and Communication
Research into hearing and communication is providing a scientific foundation for understanding the anatomy, physiology, and genetics of the hearing pathway, as well as the social and cultural aspects of human communication. The following discussion is designed to introduce you to some major concepts about hearing and communication.
3.1 Communication is multisensory
Communication with others makes use of sound and vision.
Figure 3. Words may be communicated by writing, speaking, and signing.
3.2 Language acquisition: imprinting and critical periods
Our brains have specific regions devoted to speech, hearing, and language functions.
There are two concepts important to the acquisition of language. One is imprinting, which refers to the ability of some animals to learn rapidly at a very early age and during a well-defined period in their development. Imprinting generally refers to the ability of offspring to acquire the behaviors characteristic of their parents. This process, once it occurs, is not reversible. A famous example of imprinting was described by Nobel laureate Konrad Lorenz in the 1930s.5 Lorenz observed that newly hatched goslings would follow him, rather than the mother goose, if they saw him first. The period of imprintability may be very short, just hours for some species.
Figure 4. Konrad Lorenz with young goslings that imprinted to him.
Examples of imprinting such as Lorenz’s demonstrate the brain’s flexibility—its ability to be changed or to adapt to its environment. They show that an animal may alter its behavior or acquire a behavior that helps improve its chances for survival. Do animals have anything to teach us about our own acquisition of language? The answer seems to be yes. Consider the following: Scientists have reported that seal pups learn to recognize their mothers’ voices within a few days of being born.2 This is important because the mother seals must leave their pups after roughly a week to go hunting. Upon returning, mother seals vocalize and wait for their pups to respond. By playing recordings of various females, the investigators determined that for the first few hours after birth, seal pups will respond to the voice of any adult female. However, after two to five days, the pups learn to respond only to their mother’s voice.
Very soon after birth, human infants learn to distinguish speech sounds from other types of sound. Within the next month or two, the infant learns to distinguish between different speech sounds.4, 14 An 18-month-old toddler can recognize and use the sounds (called phonemes) of his or her language and can construct two-word phrases. A 3½-year-old child can construct nearly all of the possible sentence types. From this point on, vocabulary and language continue to expand and be refined.12
Communication is truly a multisensory experience. For most individuals, the pathway from creating sound (speaking) to receiving, processing, and interpreting sound (hearing) is critical. This module focuses on the key issues of how sound is processed so that communication is achieved.
3.3 Sound has a physical basis
Sound represents vibrational energy. It is created when a medium, such as air, wood, metal, or a person’s vocal cords, vibrates. Sound energy is transferred from one molecule to the next in the vibrating medium. To understand sound, consider the analogy of a stone dropped into a body of water. This action produces ripples that spread out in all directions from the point where the stone contacted the water. The ripples become weaker (decrease in intensity) as they travel farther from their origin. So it is with sound. The vibration proceeds through a medium in waves. However, unlike ripples on water, sound waves move away from their point of origin in three dimensions, not just two.
Sound waves possess specific characteristics. Frequency represents the number of complete wave cycles per unit of time, usually one second (see Figure 5). Frequency is expressed in hertz (Hz), which means cycles per second. Low-frequency sounds vibrate only a few times per second, while high-frequency sounds vibrate many more times per second. The term used to distinguish your perception of higher-frequency sounds from lower-frequency sounds is pitch.
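The ripple analogy can be made quantitative. Because sound spreads out in three dimensions, the intensity from a small source falls off with the square of the distance, so each doubling of distance drops the level by about 6 dB. A minimal Python sketch, assuming idealized free-field spreading with no reflections or air absorption:

```python
import math

def level_drop_db(d1, d2):
    """Change in sound level (dB) when moving from distance d1 to d2
    from a small source, assuming free-field inverse-square spreading."""
    return 10 * math.log10((d1 / d2) ** 2)

# Doubling the distance quarters the intensity: about a 6-dB drop.
print(round(level_drop_db(1.0, 2.0), 1))   # -> -6.0
print(round(level_drop_db(1.0, 10.0), 1))  # -> -20.0
```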
The speed of sound is the same for all frequencies, although it does vary with the medium through which the sound travels. In air, sound travels at roughly 340 meters per second. Sound travels fastest through metals because their molecules are packed very closely together; similarly, sound travels about four times faster in water than in air. Sound also travels faster in humid air than in dry air; in addition, humid air absorbs high frequencies more than low frequencies, leading to differences in the perception of sound heard through the two media. Finally, temperature can affect the speed of sound in any medium. For instance, the speed of sound in air increases by about 0.6 meters per second for each degree Celsius increase in temperature.
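The temperature dependence quoted above is often written as a linear rule of thumb, v ≈ 331.3 + 0.6T meters per second for air at T degrees Celsius. A quick sketch (the coefficients are the standard textbook approximation):

```python
def speed_of_sound_air(temp_c):
    """Approximate speed of sound in air (m/s) at temp_c degrees Celsius,
    using the common linear rule of thumb v = 331.3 + 0.6 * T."""
    return 331.3 + 0.6 * temp_c

# At 15 deg C this lands near the 340 m/s figure quoted above.
print(round(speed_of_sound_air(15)))  # -> 340
```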
The human ear responds to frequencies in the range of 20 Hz to 20,000 Hz (20 kHz),18 although most speech frequencies lie between 100 and 4,000 Hz. Frequencies above 20,000 Hz are referred to as ultrasonic. Though ultrasonic frequencies are outside the range of human perception, many animals can hear these sounds. For instance, dogs can hear sounds at frequencies as high as 50,000 Hz, and bats can hear sounds as high as 100,000 Hz. Other sounds, such as some produced by earthquakes and volcanoes, have frequencies of less than 20 Hz. These sounds, referred to as infrasonic or subsonic, are also outside the range of human hearing.
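Frequency and the speed of sound together determine wavelength (λ = v/f). Using the round 340 m/s figure for air, the audible range spans wavelengths from about 17 meters down to 1.7 centimeters; a short sketch:

```python
def wavelength_m(freq_hz, speed=340.0):
    """Wavelength (m) of a sound of freq_hz in a medium where sound
    travels at `speed` m/s (340 m/s is the round figure for air)."""
    return speed / freq_hz

print(wavelength_m(20))      # lowest audible pitch: 17.0 m
print(wavelength_m(20_000))  # highest audible pitch: 0.017 m (1.7 cm)
```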
Figure 6. The sound spectrum.
Figure 7. Representation of amplitudes of a wave. The dashed line has a lower amplitude than the solid line.
Every 10-dB increase represents a 10-fold increase in sound intensity and roughly a doubling in perceived loudness.
Intensity Ratio | Intensity Difference (dB) |
---|---|
1:1 | 0 |
2:1 | 3 |
4:1 | 6 |
8:1 | 9 |
10:1 | 10 |
16:1 | 12 |
20:1 | 13 |
100:1 | 20 |
400:1 | 26 |
800:1 | 29 |
1,000:1 | 30 |
2,000:1 | 33 |
8,000:1 | 39 |
10,000:1 | 40 |
100,000:1 | 50 |
1,000,000:1 | 60 |
10,000,000:1 | 70 |
100,000,000:1 | 80 |
1,000,000,000:1 | 90 |
10,000,000,000:1 | 100 |
100,000,000,000:1 | 110 |
1,000,000,000,000:1 | 120 |
10,000,000,000,000:1 | 130 |
100,000,000,000,000:1 | 140 |
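The ratios in the table above follow directly from the definition of the decibel for intensity: dB = 10 × log10(intensity ratio). A few rows can be reproduced in Python:

```python
import math

def intensity_ratio_to_db(ratio):
    """Level difference in decibels for a given intensity ratio."""
    return 10 * math.log10(ratio)

for ratio in (2, 10, 100, 10**14):
    print(f"{ratio}:1 -> {intensity_ratio_to_db(ratio):.0f} dB")
# The 100-trillion-fold (10^14) range of human hearing works out to 140 dB.
```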
Sound | dB Level |
---|---|
hearing threshold | 0 |
breathing | 10 |
rustling leaves | 20 |
whispering | 25 |
library | 30 |
refrigerator | 45 |
average home | 50 |
normal conversation | 60 |
clothes dryer | 60 |
washing machine | 65 |
car | 70 |
vacuum cleaner | 70 |
busy traffic | 75 |
noisy restaurant | 80 |
outboard motor | 80 |
inside car in city traffic | 85 |
electric shaver | 85 |
screaming child | 90 |
passing motorcycle | 90 |
convertible ride on freeway | 95 |
table saw | 95 |
hand drill | 100 |
tractor | 100 |
diesel truck | 100 |
circular saw | 100 |
jackhammer | 100 |
gas engine mower | 105 |
helicopter | 105 |
chain saw | 110 |
amplified rock concert | 90–130 |
shout into ear at 20 cm | 120 |
car horn | 120 |
siren | 120 |
threshold of pain | 120–140 |
gunshot | 140 |
jet engine | 140 |
12-gauge shotgun | 165 |
rocket launching | 180 |
loudest audible tone | 194 |
Even common noises, such as highly amplified music and gas-engine mowers or leaf blowers, can damage human hearing with prolonged exposure.
3.4 Perception of sound has a biological basis
When sound, as vibrational energy, arrives at the ear, it is processed in a complex but distinct series of steps. These steps reflect the anatomical division of the ear into the outer ear, middle ear, and inner ear (see Figure 8).
Figure 8. Anatomy of the human ear.
The outer ear. The outer ear is composed of two parts. The pinna is the outside portion of the ear and is composed of skin and cartilage. The second part is called the ear canal (also called the external auditory canal). The pinna, with its twists and folds, serves to enhance high-frequency sounds and to focus sound waves into the middle and inner portions of the ear. The pinna also helps us determine the direction from which a sound originates. However, the greatest asset in judging the location of a sound is having two ears. Because one ear is closer to the source of a sound than the other, the brain detects slight differences in the times and intensities of the arriving signals. This allows the brain to approximate the sound’s location. Interestingly, the position and orientation of the pinna, at the side of the head, help reduce sounds that originate behind us. This helps us hear sounds that originate in the direction we are looking and reduces distracting background noises.
Some students (and adults) may believe that the size of the ear is an indication of the organism’s hearing ability—that is, the larger the ear, the better the ability to hear. This misperception doesn’t take into account the internal structures of the ear that process sound vibrations. A large pinna may serve a function that is unrelated to hearing. For example, the external ear of the African elephant is filled with small blood vessels that help the animal dissipate excess heat. The external ear may be specialized in other ways, as well. Cat owners, for example, have undoubtedly observed the rather dramatic movement of their pet’s pinnae as the animal attempts to locate the source of a sound.
The ear canal acts as an amplifier for sound frequencies between 3,000 and 4,000 Hz.
The elegance of the middle ear system lies in its ability to greatly amplify sound vibrations before they enter the inner ear.
The middle ear is an air-filled space. It is connected to the back of the throat by a small tube called the eustachian tube, which allows the air in the middle ear space to be refreshed periodically. The eustachian tube can become blocked by infection, and fluid may fill the middle ear space. Changes in air pressure can also affect the tympanic membrane, resulting in the ear-popping phenomenon experienced by people who fly in airplanes or drive over mountain roads. The membrane may bend in response to altered air pressure and then “pop” back to its original position when the eustachian tube opens and internal and external air pressures are equalized.
The process of converting the vibrational energy of sound into nerve impulses is called transduction.
The cochlea is divided into an upper chamber, called the scala vestibuli or vestibular canal, and a lower chamber, called the scala tympani or tympanic canal. These are seen most easily if the cochlea is represented as uncoiled, as in Figure 9.
Figure 9. An uncoiled cochlea, to the right of the oval and round windows.
Figure 10. Diagrammatic representation of the movement of vibrational energy through the cochlea.
Figure 11. Details inside a coil of the cochlea showing the organ of Corti.
Hair cells ultimately translate, or transduce, mechanical phenomena occurring in the outer, middle, and inner ear into electrical impulses.
Figure 12. Details of the organ of Corti showing hair cells and the relationship of the stereocilia (hair bundles) to the adjacent membrane.
The bodies of hair cells sit on top of the basilar membrane (see Figure 12). The stereocilia of hair cells connect the body of the cell with the tectorial membrane. Pressure waves in the cochlea (see Figure 10) move the basilar membrane and cause the stereocilia to move. This movement initiates biochemical events in the cells that result in the generation of electrical signals.
Sound is mapped to different parts of the cochlea according to frequency. Figure 13 shows where tones of different frequencies cause vibrations of maximum amplitude along the length of the cochlea. The base, close to the stapes, is stiff and narrow and responds more to high-frequency (high-pitched) sounds. The apex, far from the stapes, is broad and responds more to low-frequency sounds.
Figure 13. Specific frequencies cause vibrations of maximum amplitude at different points along the cochlea. The numbers in the diagram represent frequency in hertz.
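One widely used empirical description of this frequency-to-place map is the Greenwood function. For the human cochlea it is commonly written f = 165.4 × (10^(2.1x) − 0.88), where x is the fractional distance from the apex (0) to the base (1). The sketch below uses the commonly cited human constants, but treat it as illustrative rather than exact:

```python
def greenwood_freq(x):
    """Greenwood map for the human cochlea: characteristic frequency (Hz)
    at fractional position x, from apex (x = 0) to base (x = 1).
    Constants are the commonly cited human fit; illustrative only."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

# The apex responds to the lowest audible frequencies; the stiff base
# (nearest the stapes) responds to the highest.
print(round(greenwood_freq(0.0)))  # -> 20
print(round(greenwood_freq(1.0)))  # roughly 20,000
```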
Figure 14. The location of the auditory cortex in the human brain.
What causes this apparent deficiency or slowing in the brain’s ability to process auditory information? Researchers do not know. Auditory processing is a learned function, and if something interferes with the brain’s training, the result may be a deficit in the capacity to process sound.
Sound direction is localized by virtue of our having two ears and our ability to use different parts of the auditory system to process distinct aspects of incoming directional information.
Nerve fibers coming from the brain may carry information back to the ear. This is the brain’s way of filtering out unimportant signals and concentrating on important ones. Other nerve fibers proceed from the brain to the middle ear, where they control muscles that help protect against the effects of dangerously loud sounds.
Not only does the inner ear process the sound vibrations it receives, it also creates its own sound vibrations. When hair cells respond to vibration, their movement in the fluid environment of the cochlear duct produces friction, and this results in a loss of energy. However, a group of hair cells replaces the lost sound energy by creating their own. Some of this sound energy leaks back out of the ear and can be detected using a computer-based sound analyzer and a probe inserted into the outer third of the ear canal. This ability of hair cells to respond to sound by producing their own sound, known as otoacoustic emissions, is the basis of one type of hearing test performed on infants and young children.
Hearing Loss
Hearing loss and deafness can result from sound exposure, heredity, ototoxic drugs, accidents, and disease or infection.
Damage associated with conductive hearing loss interferes with the efficient transfer of sound to the inner ear. Conductive hearing loss is characterized by a loss in sound intensity. Voices may sound muffled, while at the same time the individual’s own voice may seem quite loud. It can be caused by anything that interferes with the vibration of the eardrum or with the movement of the bones of the middle ear. Even a buildup of earwax can lead to conductive hearing loss.
A number of treatment options exist for conductive hearing loss. The appropriate response depends upon the cause of the problem. For example, an ear doctor can simply remove a buildup of earwax. It should be pointed out, however, that you should never try to remove wax from your own ears. You can too easily push the wax further into the ear canal and even damage your eardrum. A common cause of conductive hearing loss in children is ear infections. Other causes of conductive hearing loss are a punctured eardrum or otosclerosis (a buildup of spongy tissue around the middle ear). These can be treated through surgery.
Sensorineural hearing loss is generally associated with damage to the hair cells in the inner ear. Such damage is the most common cause of hearing loss and can result from several factors working alone or in combination.
4.1 Noise exposure
The effects of noise-induced hearing loss may be temporary or permanent, depending on the intensity and duration of the exposure.
Figure 15. The left panel shows normal stereocilia (or hair bundles) associated with inner hair cells in the cochlea. The middle and right panels show noise-induced damage to hair cells. Note the bent-over stereocilia in the middle panel. The right panel shows missing and fused stereocilia.
The phrase “too loud, too long, too close” summarizes the causes of noise-induced hearing loss.
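The “too loud, too long” tradeoff can be put in numbers using the NIOSH recommended exposure limit: 85 dBA for an 8-hour day, with the allowed time halving for every 3-dB increase. A sketch (assuming the standard 3-dB exchange rate; regulatory limits such as OSHA’s differ):

```python
def niosh_max_hours(level_dba):
    """Recommended maximum daily exposure (hours) under the NIOSH
    criterion: 85 dBA for 8 hours, halved for every 3-dB increase."""
    return 8 / (2 ** ((level_dba - 85) / 3))

for level in (85, 94, 100, 110):
    print(f"{level} dBA -> {niosh_max_hours(level) * 60:.1f} minutes")
# 85 dBA allows a full workday; a 110-dBA chain saw, only about 1.5 minutes.
```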
4.2 Aging
Damage to hair cells is associated with aging, though it is not inevitable. Such damage can result from a combination of factors, such as noise exposure, injury, heredity, illness, and circulation problems. Some of these factors, such as noise exposure, can take many years before their damaging effects are noticeable. Hearing loss often begins when a person is in his or her 20s, though it may not be noticed until the person is in his or her 50s. Not surprisingly, the greater the noise exposure over a lifetime, the greater the hearing loss. Because the hair cells at the base of the cochlea “wear out” before those at the apex, the higher pitches are lost first, followed by the lower ones.
4.3 Ototoxic drugs
Medications and chemicals that are poisonous to auditory structures are called ototoxic. Certain antibiotics can selectively destroy hair cells, enabling scientists to better understand hair-cell function in normal and abnormal hearing. Other types of drugs can be used to selectively destroy other tissues of the auditory pathway. A few common medications can produce the unwanted side effect of tinnitus, or ringing in the ears. One such drug is aspirin. Arthritis sufferers, who may consume large amounts of aspirin, sometimes experience tinnitus and hearing loss as a side effect of their aspirin use. Fortunately, the effect is temporary and the tinnitus tends to disappear when aspirin use is discontinued.
4.4 Disease and infections
Young children who experience ear infections accompanied by hearing loss for prolonged periods may also exhibit delayed speech development.
Otosclerosis refers to a condition in which the bones of the middle ear are damaged by the buildup of spongy or bone-like tissue. The impaired function of the ossicles (the malleus, incus, and stapes) can reduce the sound reaching the ear by as much as 30 to 60 dB. This condition may be treated by surgically replacing all or part of the ossicular chain with an artificial one.
Ménière’s disease affects the inner ear and vestibular system, the system that helps us maintain our balance. In this disorder, the organ of Corti becomes swollen, leading to a loss of hearing that comes and goes. Other symptoms include tinnitus, episodes of vertigo (dizziness), and imbalance. The disease can exist in mild or severe forms. Unfortunately, the cause of the disease is not well understood and effective treatments are lacking.
4.5 Heredity
The Mouse and Human Genome Projects are setting the stage for identifying the genetic contributions to hearing.
An important technology for investigating the roles that genes play in hearing is the production of transgenic and “knockout” mice, which result when scientists insert a foreign gene into (transgenic) or delete a targeted gene from (knockout) the mouse genome. The hearing responses of transgenic or knockout mice are compared with their unaltered counterparts. If differences are detected, they are presumed to be caused by the specific gene that was inserted or deleted. Eventually, scientists hope to use their understanding of the genetic basis of hearing to develop treatments for hereditary hearing loss and deafness.
4.6 Cochlear implants
A cochlear implant (see Figure 16) is a hearing device designed to bypass absent or damaged hair cells. The cochlear implant is a small, complex, electronic device that can help provide an interpretable stimulus to a person who is profoundly deaf or severely hard-of-hearing. The implant is surgically placed under the skin behind the ear, and consists of four basic parts:
- a microphone that picks up sound from the environment;
- a speech processor that selects and arranges sounds picked up by the microphone;
- a transmitter and receiver/stimulator that receives signals from the speech processor and converts them into electric impulses; and
- electrodes that collect the impulses from the stimulator and send them to the brain.
Figure 16. Diagram of a typical cochlear-implant system.
Outcomes for patients with cochlear implants vary. For many, the implant provides sound cues that help them better understand speech. Many are helped to such an extent that they can carry on a telephone conversation. Originally, only patients with profound hearing loss were deemed suitable for the procedure. One reason for this restrictive policy is that when a patient receives a cochlear implant, whatever hearing remains in that ear is destroyed. Eventually, it was discovered that patients with some residual hearing could benefit more from the procedure than those with profound hearing loss. For appropriate individuals, cochlear implants can be extremely beneficial. Each case must be examined individually to determine whether the cochlear implant is the best treatment available.
The use of cochlear implants can be controversial, especially among some deaf people. Just as spoken language helps define the culture of the hearing world, sign language helps define the culture of the deaf community. The issues surrounding the use of speech or American Sign Language by the deaf community illustrate the profound effects of language, hearing, and communication on one’s sense of self.
Prevention of Noise-Induced Hearing Loss
An estimated 10 million Americans have suffered irreversible hearing damage due to noise exposure.
In the work environment, employers are obligated to protect their workers from hazardous noise. Hearing-conservation programs, when implemented effectively, are associated with increased worker productivity and decreased absenteeism. They also lead to fewer workplace injuries and workers’ compensation claims. Whenever hazardous levels of sound are encountered, either on the job or at home, you can protect yourself by using ear protection such as earplugs or special earmuffs. Do not simply put your fingers in your ears or stuff them with cotton. Additionally, anyone exposed to significant levels of noise for long durations should receive regular hearing tests to detect changes in hearing.
Figure 17. Ear protection, such as earplugs or special earmuffs, helps prevent noise-induced hearing loss.
Over 50 million Americans experience tinnitus at some point in their lives. Some perceive the disorder as an annoying background noise, while others are incapacitated by a loud noise that disturbs them day and night. Although the exact causes of tinnitus are not known, scientists agree that it is associated with damage to the ear. Possible triggers include noise-induced hearing loss (NIHL), too much alcohol or caffeine, stress, inadequate circulation, allergies, medications, and disease. Of these factors, exposure to loud noise is by far the most probable cause. Perhaps not surprisingly, there is no single effective treatment. Depending on the suspected cause, individuals may be given drugs to increase blood flow, or offered guidance on reducing their stress or changing their diets. The best advice for those concerned about NIHL is to limit exposure to hazardous noise (both proximity and duration), wear ear protection when exposed, and have hearing tests performed regularly.