Speech and Hearing Sciences
January 2011
Kaylah Lalonde
During her first year as an undergraduate at Louisiana State University, Lalonde shadowed a speech therapist for a day. As they traveled from home to home to meet with children under three, she learned about the wide range of issues a speech therapist encounters – from autism to hearing loss to cognitive impairment – and found it fascinating.
At first she took classes on the clinical track for speech pathology, but the summer before her junior year, Lalonde was invited to join the McNair Scholars Program, where she was paired with a mentor and, with that mentor's help, developed a research project of her own.
“I had a great mentor there who let me work hands-on. We started from scratch, developed the idea and ran all the way through with it. That’s how I got involved in research and decided it was more interesting and fun than the clinic,” she said. She went on to the doctoral program in Speech and Hearing Sciences at IU, where she received the McNair Graduate Fellowship, awarded through a highly selective, campus-wide competition that supports graduate students for up to five years.
Lalonde is researching developmental speech perception. She studies the processes that allow us to understand speech, starting with the ear and working up to the brain, and how those processes develop.
“There are different levels of speech perception, such as detection, discrimination, recognition, and comprehension,” Lalonde said. Her current research focuses on the discrimination level.
“All of these levels are important, and a breakdown at any level can be problematic for language development. If a child's hearing aid or cochlear implant settings are not sufficient to allow them to discriminate between speech sounds (such as the difference between the words ‘so’ and ‘show’), they may have difficulty forming robust phonological representations,” she said.
She has worked closely with her advisor, Professor Rachael Holt, on a project designed to find better ways to test speech discrimination in toddlers. Before this study began, Holt had studied speech discrimination in adults and in children age four and older, and similar work had been done with infants. The research involves adapting the procedures that have been successful with older children to compensate for the limitations of testing 2- and 3-year-olds. Holt estimates that Lalonde has tested the sound discrimination of more than 40 children between the ages of 2 and 3.
“The age range we work with has been ignored, because toddlers are so hard to test,” she said. “You can hold an infant in your lap and watch their reaction to a stimulus, but toddlers are hard to keep in one place. Getting toddlers to understand what you’re asking of them, and then holding their interest is a challenge. Toddlers aren’t fully intelligible—we can’t always understand what they are trying to say—so you can’t simply ask a toddler what they hear and see,” she said.
First, each toddler receives a quick hearing screening, as is done with newborns, to make sure the child has normal hearing.
“Then to test speech discrimination we have them stand on a mat and listen to some speech sounds and make a judgment as to whether all the sounds are the same or whether there was a change in the sound. So maybe it will say ‘ba ba ba ba’ or maybe ‘ba ba boo boo.’ We of course put it in the context of games,” Lalonde said. “We do whatever works, really. You have to work around what the child does and adapt yourself to their manner.”
To measure whether a child can discriminate between the sounds, the child is first taught how the test works using animal sounds, so that they understand what is being asked of them. The tester then moves on to more abstract nonwords like the sounds that will be used during testing. Lalonde said this is so that she can test perceptual abilities, as opposed to language abilities. Each sound pair is presented 36 times.
“We then use statistics to determine whether the toddler can differentiate between the sounds and how much of their choice is based on some bias, such as preferring a particular picture,” she said. “We hope to develop better methods that can eventually be used in the clinic for young children with hearing loss, to test the benefits of cochlear implants and hearing aids and determine whether the aids need to be adjusted.”
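The article doesn’t name the statistical method, but change/no-change discrimination tasks of this kind are commonly analyzed with signal detection theory, which separates true sensitivity (d′) from response bias (the criterion c). A minimal sketch in Python, assuming per-pair counts of hits, misses, false alarms, and correct rejections; the example counts below are illustrative, not data from the study:

```python
from statistics import NormalDist

def dprime_and_bias(hits, misses, false_alarms, correct_rejections):
    """Estimate sensitivity (d') and response bias (criterion c) from
    change/no-change trial counts, using a log-linear correction so
    that rates of exactly 0 or 1 don't produce infinite z-scores."""
    # Log-linear correction: add 0.5 to each count, 1 to each trial total.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)

    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # bias: > 0 means "conservative"
    return d_prime, criterion

# Hypothetical session: 36 trials per sound pair, half "change", half "no change".
d, c = dprime_and_bias(hits=14, misses=4, false_alarms=6, correct_rejections=12)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")
```

A d′ near zero would mean the toddler cannot tell the sound pair apart, while a large criterion would signal exactly the kind of response bias Lalonde describes, such as always picking a favorite picture.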
In this project, Lalonde and her co-researchers found that children were variable in their performance, as is to be expected, she said. She is now looking into factors that might explain this variability, using measures of behavior, language development, and working memory. “We’re adapting the test a little based on what we saw with the kids,” she said, “. . . to make sure they really do understand the task and to hold their attention longer.”
For her next project, Lalonde will use audiovisual testing to see whether toddlers use visual speech information (such as that gained from lip reading) to complement the auditory signal, particularly when noise disrupts it. Other research with older children suggests that children this age don’t use signals from the face in the same manner as adults and may not use the face to compensate for the information they miss when listening in noisy situations.
“No one has been able to show whether children that young benefit from seeing the face whenever they are hearing speech,” she said. “We know that in a noisy situation, or in adults with hearing impairment, whenever the speech is degraded in some way, people look at the face of the person speaking for extra information to help piece together the signal. We know infants are sensitive to that sort of information, but the ability to use the information develops with age.
“I hope that by using discrimination, a lower level of perception than used in previous research, we can demonstrate that children use visual speech information at a younger age than previous literature suggests.”