New research suggests that babies primarily learn languages through rhythmic rather than phonetic information in their initial months. This finding challenges the conventional understanding of early language acquisition and emphasizes the significance of sing-song speech, like nursery rhymes, for babies. The study was published in Nature Communications.
Traditional theories have posited that phonetic information, the smallest sound elements of speech, forms the foundation of language learning. In language development, acquiring phonetic information means learning to produce and understand these different sounds, recognizing how they form words and convey meaning.
Infants were believed to learn these individual sound elements to construct words. However, recent findings from the University of Cambridge and Trinity College Dublin suggest a different approach to understanding how babies learn languages.
The new study was motivated by the desire to better understand how infants process speech in their first year of life, specifically focusing on the neural encoding of phonetic categories in continuous natural speech. Previous research in this field predominantly used behavioral methods and discrete stimuli, which limited insights into how infants perceive and process continuous speech. These traditional methods were often constrained to simple listening scenarios and few phonetic contrasts, which didn’t fully represent natural speech conditions.
To address these gaps, the researchers used neural tracking measures to assess the neural encoding of the full phonetic feature inventory of continuous speech. This method allowed them to explore how infants’ brains process acoustic and phonetic information in a more naturalistic listening environment.
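To make "neural tracking" concrete, the sketch below shows the general idea in simplified form: fit a time-lagged linear model that predicts the recorded brain signal from features of the speech the listener heard, then use cross-validated prediction accuracy as a measure of how strongly the brain tracks those features. It is a minimal illustration in Python, not the study's actual analysis pipeline; the sampling rate, feature set, and model settings are assumptions, and random arrays stand in for real recordings.

```python
# A minimal, illustrative sketch of a "neural tracking" analysis: a time-lagged
# linear model that predicts EEG from features of continuous speech. This is NOT
# the study's pipeline; sampling rate, array sizes, and the ridge penalty are
# hypothetical, and random arrays stand in for real recordings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

fs = 64                      # assumed EEG sampling rate after downsampling (Hz)
n_samples = fs * 60          # one hypothetical minute of data
n_features = 20              # e.g., acoustic envelope plus phonetic feature channels

rng = np.random.default_rng(0)
speech_features = rng.standard_normal((n_samples, n_features))  # stand-in stimulus features
eeg_channel = rng.standard_normal(n_samples)                    # stand-in EEG channel

# Create time-lagged copies of the stimulus (roughly 0 to 250 ms) so the model
# can capture the delayed brain response. np.roll wraps at the edges, a
# simplification acceptable for a toy example.
lags = range(int(0.25 * fs))
lagged_stimulus = np.hstack([np.roll(speech_features, lag, axis=0) for lag in lags])

# Ridge regression maps lagged stimulus features to the EEG signal; the
# cross-validated prediction score indexes how strongly the brain "tracks" them.
model = Ridge(alpha=1.0)
scores = cross_val_score(model, lagged_stimulus, eeg_channel, cv=5, scoring="r2")
print("mean cross-validated tracking score:", scores.mean())
```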
The study involved a group of 50 infants, monitored at four, seven, and eleven months of age. Each baby was full-term and without any diagnosed developmental disorders. The research team also included 22 adult participants for comparison, though data from five were later excluded.
In a carefully controlled environment, each infant was seated in a highchair inside a sound-proof chamber, about a meter away from their caregiver; adult participants sat in an ordinary chair under the same conditions. Every participant, infant or adult, watched video recordings of eighteen nursery rhymes sung or chanted by a native English speaker. The rhymes were carefully selected to cover a range of phonetic features, and the audio was delivered at a consistent volume.
To capture how the infants’ brains responded to these nursery rhymes, the researchers used electroencephalography (EEG), which records patterns of electrical brain activity. The technique is non-invasive: a soft cap fitted with sensors was placed on each infant’s head to measure their brainwaves.
The researchers then analyzed the brainwave data with a decoding algorithm that recovers phonological information, producing a “readout” of how the infants’ brains were processing the different sounds in the nursery rhymes. This approach is significant because it moves beyond the traditional method of comparing reactions to isolated sounds or syllables, allowing a more comprehensive view of how continuous speech is processed.
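Conceptually, such a “readout” runs the mapping in the other direction: a decoder attempts to reconstruct a continuous phonetic-feature time course from the multichannel EEG, and reconstruction accuracy reflects how well that feature is encoded. The sketch below is a simplified, hypothetical illustration of the idea rather than the study’s algorithm; the channel count, lags, and chosen feature are assumptions.

```python
# A hypothetical sketch of the reverse "readout": reconstructing a continuous
# phonetic-feature time course from multichannel EEG, rather than comparing
# responses to isolated syllables. Not the algorithm used in the study; all
# sizes are illustrative and random arrays stand in for real data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

fs = 64                                   # assumed sampling rate (Hz)
n_samples, n_channels = fs * 60, 32       # one hypothetical minute, 32 EEG channels

rng = np.random.default_rng(1)
eeg = rng.standard_normal((n_samples, n_channels))   # stand-in EEG recording
nasal_feature = rng.standard_normal(n_samples)       # stand-in feature time course (e.g., nasality)

# Time-lag the EEG so the decoder can integrate the delayed, spatially
# distributed brain response when reconstructing the feature.
lags = range(int(0.25 * fs))
lagged_eeg = np.hstack([np.roll(eeg, -lag, axis=0) for lag in lags])

# A cross-validated linear decoder: how accurately the feature can be "read
# out" from brain activity indexes how strongly it is encoded.
decoder = Ridge(alpha=1.0)
scores = cross_val_score(decoder, lagged_eeg, nasal_feature, cv=5, scoring="r2")
print("mean cross-validated readout score:", scores.mean())
```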
Contrary to what was previously thought, the researchers found that infants do not process individual speech sounds reliably until they are about seven months old. Even at eleven months, when many babies start to say their first words, the processing of these sounds is still sparse.
Furthermore, the study found that phonetic encoding in babies emerged gradually over the first year. The “readout” of brain activity showed that infants’ processing of speech sounds began with simpler sounds, such as labials and nasals, and became more adult-like as they grew older.
“Our research shows that the individual sounds of speech are not processed reliably until around seven months, even though most infants can recognize familiar words like ‘bottle’ by this point,” said study co-author Usha Goswami, a professor at the University of Cambridge. “From then individual speech sounds are still added in very slowly – too slowly to form the basis of language.”
The current study is part of the BabyRhythm project, which is led by Goswami.
First author Giovanni Di Liberto, a professor at Trinity College Dublin, added: “This is the first evidence we have of how brain activity relates to phonetic information changes over time in response to continuous speech.”
The researchers propose that rhythmic speech – the pattern of stress and intonation in spoken language – is crucial for language learning in infants. They found that rhythmic speech information was processed by babies as early as two months old, and this processing predicted later language outcomes.
The findings challenge traditional theories of language acquisition that emphasize the rapid learning of phonetic elements. Instead, the study suggests that individual speech sounds are not processed reliably until around seven months and are added to the developing language system only gradually after that.
The study underscores the importance of parents talking and singing to their babies, using rhythmic speech patterns such as those found in nursery rhymes. This could significantly influence language outcomes, as rhythmic information serves as a framework for adding phonetic information.
“We believe that speech rhythm information is the hidden glue underpinning the development of a well-functioning language system,” said Goswami. “Infants can use rhythmic information like a scaffold or skeleton to add phonetic information on to. For example, they might learn that the rhythm pattern of English words is typically strong-weak, as in ‘daddy’ or ‘mummy,’ with the stress on the first syllable. They can use this rhythm pattern to guess where one word ends and another begins when listening to natural speech.”
“Parents should talk and sing to their babies as much as possible or use infant directed speech like nursery rhymes because it will make a difference to language outcome,” she added.
While this study offers valuable insights into infant language development, it’s important to recognize its limitations. The research focused on a specific demographic – full-term infants without developmental disorders, mainly from a monolingual English-speaking environment. Future research could look into how infants from different linguistic and cultural backgrounds, or those with developmental challenges, process speech.
Additionally, the study opens up new avenues for exploring how early speech processing relates to language disorders, such as dyslexia. This could be particularly significant in understanding and potentially intervening in these conditions early in life.
The study, “Emergence of the cortical encoding of phonetic features in the first year of life,” was authored by Giovanni M. Di Liberto, Adam Attaheri, Giorgia Cantisani, Richard B. Reilly, Áine Ní Choisdealbha, Sinead Rocha, Perrine Brusini, and Usha Goswami.