Left and right ears not created equal as newborns process sound
Challenging decades of scientific belief that the decoding of sound originates from a preferred side of the brain, UCLA and University of Arizona scientists have demonstrated that right-left differences in auditory processing start at the ear.
Reported in the Sept. 10 edition of Science, the new research could hold profound implications for rehabilitation of persons with hearing loss in one or both ears, and help doctors enhance speech and language development in hearing-impaired newborns. “From birth, the ear is structured to distinguish between various types of sounds and to send them to the optimal side in the brain for processing,” explained Yvonne Sininger, Ph.D., visiting professor of head and neck surgery at the David Geffen School of Medicine at UCLA. “Yet no one has looked closely at the role played by the ear in processing auditory signals.”
Scientists have long understood that the auditory regions of the two halves of the brain sort out sound differently. The left side dominates in deciphering speech and other rapidly changing signals, while the right side leads in processing tones and music. Because of how the brain's neural network is organized, the left half of the brain controls the right side of the body, and the left ear is more directly connected to the right side of the brain.
Prior research had assumed that a mechanism arising from cellular properties unique to each brain hemisphere explained why the two sides of the brain process sound differently. But Sininger's findings suggest that the difference is inherent in the ear itself. "We always assumed that our left and right ears worked exactly the same way," she said. "As a result, we tended to think it didn't matter which ear was impaired in a person. Now we see that it may have profound implications for the individual's speech and language development."
Working with co-author Barbara Cone-Wesson, Ph.D., associate professor of speech and hearing sciences at the University of Arizona, Sininger studied tiny amplifiers in the outer hair cells of the inner ear. "When we hear a sound, tiny cells in our ear expand and contract to amplify the vibrations," explained Sininger. "The inner hair cells convert the vibrations to neural signals and send them to the brain, which decodes the input."
"These amplified vibrations also leak back out to the ear in a phenomenon called otoacoustic emission (OAE)," added Sininger. "We measured the OAE by inserting a microphone in the ear canal."
In a six-year study, the UCLA/UA team evaluated more than 3,000 newborns for hearing ability before they left the hospital. Sininger and Cone-Wesson placed a tiny probe device in each baby's ear to test its hearing. The probe emitted a sound and measured the ear's OAE.
The researchers measured the babies' OAE with two types of sound: first rapid clicks, then sustained tones. They were surprised to find that the left ear provides extra amplification for tones like music, while the right ear provides extra amplification for rapid sounds timed like speech.
"We were intrigued to discover that the clicks triggered more amplification in the baby's right ear, while the tones induced more amplification in the baby's left ear," said Sininger. "This parallels how the brain processes speech and music, except the sides are reversed due to the brain's cross connections."
"Our findings demonstrate that auditory processing starts in the ear before it is ever seen in the brain," said Cone-Wesson. "Even at birth, the ear is structured to distinguish between different types of sound and to send them to the right place in the brain."
Previous research supports the team's new findings. For example, earlier studies show that children with hearing loss in the right ear encounter more trouble learning in school than children with hearing loss in the left ear.
"If a person is completely deaf, our findings may offer guidelines to surgeons for placing a cochlear implant in the individual's left or right ear and influence how cochlear implants or hearing aids are programmed to process sound," explained Cone-Wesson. "Sound-processing programs for hearing devices could be individualized for each ear to provide the best conditions for hearing speech or music."
"Our next step is to explore parallel processing in the brain and ear simultaneously," said Sininger. "Do the ear and brain work together or independently in dealing with stimuli? How does one-sided hearing loss affect this process? And finally, how does hearing loss in the right ear compare to hearing loss in the left?"
Media Contact
More Information:
http://www.ucla.edu