Frequent sexual activity can boost brain power in older adults

More frequent sexual activity has been linked to improved brain function in older adults, according to a study by the universities of Coventry and Oxford.

Researchers found that people who engaged in more regular sexual activity scored higher on tests that measured their verbal fluency and their ability to visually perceive objects and the spaces between them.

The study, published today in The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, involved 73 people aged between 50 and 83.

Participants filled in a questionnaire on how often, on average, they had engaged in sexual activity over the past 12 months — whether that was never, monthly or weekly — as well as answering questions about their general health and lifestyle.

The 28 men and 45 women also took part in a standardized test, which is typically used to measure different patterns of brain function in older adults, focusing on attention, memory, fluency, language and visuospatial ability.

This included verbal fluency tests in which participants had 60 seconds to name as many animals as possible, and then to say as many words beginning with F as they could — tests which reflect higher cognitive abilities.

They also took part in tests to determine their visuospatial ability which included copying a complex design and drawing a clock face from memory.

It was these two sets of tests where participants who engaged in weekly sexual activity scored the most highly, with the verbal fluency tests showing the strongest effect.

The results suggested that frequency of sexual activity was not linked to attention, memory or language. In these tests, the participants performed just as well regardless of whether they reported weekly, monthly or no sexual activity.

This study expanded on previous research from 2016, which found that older adults who were sexually active scored higher on cognitive tests than those who were not sexually active.

But this time the research looked more specifically at the impact of the frequency of sexual activity (i.e. does it make a difference how often you engage in sexual activity) and also used a broader range of tests to investigate different areas of cognitive function.

The academics say further research could look at how biological elements, such as dopamine and oxytocin, could influence the relationship between sexual activity and brain function to give a fuller explanation of their findings.

Lead researcher Dr Hayley Wright, from Coventry University’s Centre for Research in Psychology, Behaviour and Achievement, said:

“We can only speculate whether this is driven by social or physical elements — but an area we would like to research further is the biological mechanisms that may influence this.

“Every time we do another piece of research we are getting a little bit closer to understanding why this association exists at all, what the underlying mechanisms are, and whether there is a ‘cause and effect’ relationship between sexual activity and cognitive function in older people.

“People don’t like to think that older people have sex — but we need to challenge this conception at a societal level and look at what impact sexual activity can have on those aged 50 and over, beyond the known effects on sexual health and general wellbeing.”

Story Source:

Materials provided by Coventry University. Note: Content may be edited for style and length.

 

Long-term memories made with meaningful information

When trying to memorize information, it is better to relate it to something meaningful rather than repeat it again and again to make it stick, according to a recent Baycrest Health Sciences study published in NeuroImage.

“When we are learning new information, our brain has two different ways to remember the material for a short period of time, either by mentally rehearsing the sounds of the words or thinking about the meaning of the words,” says Dr. Jed Meltzer, lead author and neurorehabilitation scientist at Baycrest’s Rotman Research Institute. “Both strategies create good short-term memory, but focusing on the meaning is more effective for retaining the information later on. Here’s a case where working harder does not mean better.”

Past studies have looked at repetition to create short-term memories, but these findings suggest that using the word’s meaning will help “transfer” memories from the short-term to the long-term, says Dr. Meltzer. This finding is consistent with the strategies used by the world’s top memory champions, who create stories rich with meaning to remember random information, such as the order of a deck of cards.

Through this work, researchers were able to pinpoint the different parts of the brain involved in creating the two types of short-term memories.

“This finding shows that there are multiple brain mechanisms supporting short-term memory, whether it’s remembering information based on sound or meaning,” says Dr. Meltzer, who is also a psychology professor at the University of Toronto. “When people have brain damage from stroke or dementia, one of the mechanisms may be disrupted. People could learn to compensate for this by relying on an alternate method to form short-term memories.”

For example, people who have trouble remembering things could carry a pad and rehearse the information until they have a chance to write it down, he adds.

The study recorded the brain waves of 25 healthy adults as they listened to sentences and word lists. Participants were asked to hold the information in their short-term memory over several seconds and then recite it back, while their brain waves were recorded. Participants were then taken to a testing room to see if they could recall the information they had heard. From these recordings, the researchers identified brain activity related to memorizing through sound and through meaning.

As next steps, Dr. Meltzer will use these findings to explore targeted brain stimulation that could boost the short-term memory of stroke patients. Additional funding would support the exploration of which types of memory are best treated by current drugs or brain stimulation and how these can be improved.

Story Source:

Materials provided by Baycrest Centre for Geriatric Care. Note: Content may be edited for style and length.

 

Gender, race and class: Language change in post-apartheid South Africa

A new study of language and social change in post-apartheid South Africa demonstrates that gender is a more powerful determinant than class among black university students. The study “Class, gender, and substrate erasure in sociolinguistic change: A sociophonetic study of schwa in deracializing South African English,” by Rajend Mesthrie (University of Cape Town), will be published in the June 2017 issue of the scholarly journal Language. A pre-print version of the article is available at https://www.linguisticsociety.org/sites/default/files/Mesthrie.pdf

The article explores the extent to which the categories of race, class and gender were implicated in the degree of language change evident as schools that had previously been restricted to whites only were opened up to blacks and other racial minorities. With the end of apartheid education policies in South Africa in 1994, new flexibilities developed that enabled the author to study the relation between social change and language change. Mesthrie observed that a continuum opened up between traditional, second-language varieties of English and the “crossover” varieties previously associated with whites. Focusing mainly on young black university students, the study demonstrates that social class (associated largely with type of schooling) does correlate with different types of English. The key variables that Mesthrie studied included the unstressed vowel, “schwa,” which is realized differently across the traditional English varieties of South Africa, and the related property of vowel length. His study used state-of-the-art acoustic techniques developed largely at the University of Pennsylvania and North Carolina State University.

The most important contribution of the paper is the statistical demonstration that sex differences are at least as salient as, and perhaps more salient than, social class in affecting language change. Young black women are more likely than young black men (of their respective classes) to acquire the crossover language varieties. For men on the whole, an “African solidarity” precludes too great a linguistic crossover, even for those who graduated from the more prestigious schools. In short, the black men did not want to sound “too white.” Mesthrie points to the success of young black women, who appear set to become the future prestige accent models on radio and television, at least for English. The author sees this as a positive “deracializing” of the English language, so that prestige accents of the sort encountered on television are no longer associated with one race group (whites) alone.

Story Source:

Materials provided by Linguistic Society of America. Note: Content may be edited for style and length.

All in the eyes: What the pupils tell us about language

The meaning of a word is enough to trigger a reaction in our pupil: when we read or hear a word with a meaning associated with luminosity (“sun,” “shine,” etc.), our pupils contract as they would if they were actually exposed to greater luminosity. And the opposite occurs with a word associated with darkness (“night,” “gloom,” etc.).

These results, published on 14 June 2017 in Psychological Science by researchers from the Laboratoire de psychologie cognitive (CNRS/AMU), the Laboratoire parole et langage (CNRS/AMU) and the University of Groningen (Netherlands), open up a new avenue for better understanding how our brain processes language.

The researchers demonstrate here that the size of the pupils depends not simply on the luminosity of the objects we observe, but also on the luminosity evoked by words that are read or heard. They suggest that our brain automatically creates mental images of the words, such as a bright ball in the sky for the word “sun,” for example. It is thought that this mental image is the reason why the pupils become smaller, as if we really did have the sun in our eyes.

This new study raises important questions. Are these mental images necessary to understand the meaning of words? Or, on the contrary, are they merely an indirect consequence of language processing in our brain, as though our nervous system were preparing, as a reflex, for the situation evoked by the heard or read word? In order to answer these questions, the researchers plan to extend their experiment by varying language parameters, for example by testing their hypothesis in other languages.

Story Source:

Materials provided by CNRS. Note: Content may be edited for style and length.

 

Human brain tunes into visual rhythms in sign language

The human brain works in rhythms and cycles. These patterns occur at predictable frequencies that depend on what a person is doing and on what part of the brain is active during the behavior.

Similarly, there are rhythms and patterns out in the world, and for the last 20 years, scientists have been perplexed by the brain’s ability to “entrain,” or match up, with these patterns. Language is one of those areas in which scientists observe neural entrainment: When people listen to speech, their brain waves lock on to the volume-based rhythms they hear. Since people can’t pay attention to everything happening in their environment at once, this phase locking is thought to help anticipate when important information is likely to appear.

Many studies have documented this phenomenon in language processing; however, it has been difficult to tell whether neural entrainment is specialized for spoken language. In a new study in the Proceedings of the National Academy of Sciences, University of Chicago scholars designed an experiment using sign language to answer that question.

“To determine if neural entrainment to language is specialized for speech or if it is a general-purpose tool that humans can use for anything that is temporally predictable, we had to go outside of speech and outside of auditory perception,” said Geoffrey Brookshire, the study’s lead author and a PhD student in the Department of Psychology.

Brookshire worked with Daniel Casasanto, assistant professor of psychology and leader of the Experience and Cognition Lab; Susan Goldin-Meadow, the Beardsley Ruml Distinguished Service Professor in the Department of Psychology and an acclaimed scholar of language and gesture; Howard Nusbaum, professor of psychology and an expert in spoken language and language use; and Jenny Lu, a PhD student specializing in sign language, gesture and language development.

“By looking at sign, we’re learning something about how the brain processes language more generally. We’re solving a mystery we couldn’t crack by studying speech alone,” Casasanto said.

In speech, the brain locks on to syllables, words and phrases, and those rhythms occur below 8 Hz, or 8 pulses per second. Vision also has a preferred frequency onto which it latches.

“When we focus on random flashes of light, for example, our brains most enthusiastically lock on to flashes around 10 Hz. By looking at sign language, we can ask whether the important thing for entrainment is which sense you’re using, or the kind of information you’re getting,” Brookshire said.

To determine whether people tune into visual rhythms in the same way they tune into the auditory rhythms of language, the researchers showed videos of stories told in American Sign Language to fluent signers and measured their brain activity as they watched. Once the researchers had these electroencephalogram (EEG) readings, they needed a way to measure visual rhythms in sign language.

While there are well-established methods to measure rhythms in speech, there are no automatic, objective equivalents for the temporal structure of sign language. So the researchers created one.

They developed a new metric, called the instantaneous visual change, which summarizes the degree of change at each time point during signing. They ran the experimental videos, the ones watched by participants, through their new algorithm to identify peaks and valleys in visual change between frames. The largest peaks were associated with large, quick movements.
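
As a rough illustration of that idea (a sketch only, not the authors’ actual algorithm), frame-to-frame change in a video can be summarized by summing absolute pixel differences between consecutive frames; peaks in the resulting signal line up with large, quick movements. The frame data below are invented for demonstration.

```python
import numpy as np

def instantaneous_visual_change(frames):
    """Rough proxy for an 'instantaneous visual change' signal:
    sum of absolute pixel-wise differences between consecutive
    grayscale frames. Large, quick movements produce peaks."""
    frames = np.asarray(frames, dtype=float)   # shape: (n_frames, height, width)
    diffs = np.abs(np.diff(frames, axis=0))    # change between consecutive frames
    return diffs.sum(axis=(1, 2))              # one value per frame transition

# Synthetic example: a bright square that jumps to a new position mid-video
video = np.zeros((100, 64, 64))
video[:50, 10:20, 10:20] = 1.0   # square sits still for the first half
video[50:, 40:50, 40:50] = 1.0   # square appears elsewhere for the second half
ivc = instantaneous_visual_change(video)
print(ivc.argmax())              # the largest change occurs at the jump (transition 49)
```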

With this roadmap illustrating the magnitude of visual changes over time in the videos, Brookshire overlaid the participants’ EEGs to see whether people entrain around the normal visual frequency of about 10 Hz, or at the lower frequencies of signs and phrases in sign language — about 2 Hz.
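
One conventional way to ask whether a recorded brain signal tracks a stimulus signal at a particular frequency is spectral coherence. The sketch below only illustrates that general logic, with an assumed sampling rate and simulated signals; it does not reproduce the study’s actual analysis pipeline.

```python
import numpy as np
from scipy.signal import coherence

fs = 256                          # assumed sampling rate in Hz
t = np.arange(0, 60, 1 / fs)      # one minute of simulated data
rng = np.random.default_rng(0)

# Stimulus signal dominated by a ~2 Hz rhythm, and an "EEG" channel that follows it noisily
stimulus = np.sin(2 * np.pi * 2 * t)
eeg = 0.5 * np.sin(2 * np.pi * 2 * t + 0.3) + rng.normal(size=t.size)

f, coh = coherence(stimulus, eeg, fs=fs, nperseg=fs * 4)   # coherence spectrum, 4-second windows

def coherence_at(freq_hz):
    return coh[np.argmin(np.abs(f - freq_hz))]

print(coherence_at(2.0), coherence_at(10.0))   # strong tracking near 2 Hz, near-chance at 10 Hz
```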

Their discovery answers a fundamental question that has been lingering for years in research on speech entrainment: Is it specialized for auditory speech? The study reveals that the brain entrains depending on the information in the signal — not on the differences between seeing and hearing. Participants’ brain waves locked into the specific frequencies of sign language, rather than locking into the higher frequency that vision tends to prefer.

“This is an exciting finding because scientists have been theorizing for years about how adaptable or flexible entrainment may be, but we were never sure if it was specific to auditory processing or if it was more general purpose,” Brookshire said. “This study suggests that humans have the ability to follow perceptual rhythms and make temporal predictions in any of our senses.”

In a broader sense, neuroscientists want to understand how the human brain creates and perceives language, and entrainment has emerged as an important mechanism. In revealing neural entrainment as a generalized strategy for improving sensitivity to informational peaks, this study takes significant steps toward advancing the understanding of human language and perception.

“The piece of the paper that I find particularly exciting is that it compares how signers and non-signers process American Sign Language stimuli,” Goldin-Meadow said. “Although both groups showed the same level of entrainment in early visual regions, they displayed differences in frontal regions — this finding sets the stage for us to identify aspects of neural entrainment that are linked to the physical properties of the visual signal compared to aspects that appear only with linguistic knowledge.”

 

Elementary school: Early English language lessons less effective than expected

Seven years on, children who start learning English in the first grade achieve poorer results in the subject than children whose first English lesson doesn’t come until the third grade. These are the findings of a team headed by Dr Nils Jäkel and Prof Dr Markus Ritter at Ruhr-Universität Bochum. The researchers evaluated data gathered in a large longitudinal study carried out in North Rhine-Westphalia, Germany, between 2010 and 2014. The results have been published in the journal Language Learning.

Highly recommended, yet not scientifically proven

“Starting foreign-language lessons at an early age is often highly commended, even though hardly any research exists to support this assumption,” says Nils Jäkel from the Chair of English Language Teaching in Bochum. Together with colleagues from Bochum and from the Technical University Dortmund, he analysed data from 5,130 students at 31 secondary schools of the Gymnasium type in North Rhine-Westphalia. The researchers compared two student cohorts, one of which started learning English in the first grade, the other in the third grade. They evaluated the children’s reading and listening proficiency in English in the fifth and seventh grades respectively.

In the fifth grade, children who had had their first English lessons very early in elementary school achieved better results in reading and listening proficiency. This had changed by the seventh grade. By then, the latecomers, i.e. children who didn’t start to learn English until the third grade, performed better.

Results from other countries confirmed

“Our study confirmed results from other countries, for example Spain, showing that early English lessons of one or two hours per week in elementary school aren’t very conducive to attaining language competence in the long term,” says Jäkel. In the coming months, he and his colleagues are going to analyse additional data to investigate whether the results also hold for the ninth grade.

A possible interpretation of the results: “Early English-language lessons in elementary school take place at a time when deep immersion would be necessary to achieve sustainable effects,” says Nils Jäkel. “Instead, the children attend English lessons that amount to 90 minutes per week at most.”

Critical transition from elementary school to grammar school

Moreover, the authors point out a rupture that takes place during the transition period from elementary school to grammar school. “Broadly speaking, the predominantly playful, holistically structured elementary-school lessons make way for rather more cognitive, intellectualised grammar-school methodology,” says Jäkel.

In elementary school, English is taught through child-appropriate, casual immersion in and experience of the foreign language through rhymes, songs, movement and stories. Secondary schools focus primarily on prescribed grammar and vocabulary lessons. This would explain why the early advantages in listening proficiency identified in the fifth grade are partially forfeited by the seventh grade, as the authors elaborate; this is possibly due to a lapse in motivation, as students feel the rupture more keenly after experiencing four years of English lessons in elementary school.

It is also possible that the potential of English lessons at an early stage had not been fully exploited, as they had been rather hastily adapted for the first grade. “When English lessons were introduced in elementary school, many teachers had to qualify for lateral entry on short notice,” explains Jäkel.

Consequences and recommendations

With their findings, the researchers do not question early English lessons as such. On the contrary, they consider early English instruction an important factor contributing to the European multilingualism we aspire to, as it paves the way for further language acquisition in secondary school. Early English lessons might also help make children aware of linguistic and cultural diversity. “But it would be wrong to have unreasonably high expectations,” says Jäkel. “A reasonable compromise might be the introduction of English in the third grade, with more lessons per week.” It is just as important to better coordinate the didactic approaches at the elementary and grammar school levels. Here, teachers at these two different types of school could learn from each other.

New hope for patients with primary progressive aphasia

A Baycrest Health Sciences researcher and clinician has developed the first group language intervention for individuals losing the ability to speak due to a rare form of dementia, one that could help patients maintain their communication abilities for longer.

Primary progressive aphasia (PPA) is a rare language disorder that involves incorrect word substitutions, mispronounced words, difficulty understanding simple words, and forgetting the names of familiar objects and people. With PPA, language function declines before memory does, the opposite of what happens in Alzheimer’s disease.

Dr. Regina Jokel, a speech-language pathologist at Baycrest’s Sam and Ida Ross Memory Clinic and a clinician-scientist with the Rotman Research Institute (RRI), has developed the first structured group intervention for PPA patients and their caregivers. This intervention could also help treat patients with other communication problems, such as mild cognitive impairment (a condition that is likely to develop into Alzheimer’s). The results of her pilot program were published in the Journal of Communication Disorders on April 14, 2017.

“This research aims to address the needs of one of the most underserviced populations in language disorders,” says Dr. Jokel. “Individuals with PPA are often referred to either Alzheimer’s programs or aphasia centres. Neither option is appropriate in this case, which often leaves individuals with PPA adrift in our health care system. Our group intervention has the potential to fill the existing void and reduce demands on numerous other health services.”

Language rehabilitation has made headway in managing the disorder, but there are limited PPA treatment options, adds Dr. Jokel.

Dr. Jokel is one of the few researchers in the world studying this disease. She was motivated to pursue her PhD and devise the intervention after encountering her first PPA patient more than 25 years ago.

“When I realized the patient had PPA, I ran to the rehabilitation literature thinking that he needed to start some sort of therapy. I ran a search and came up with nothing. Absolutely nothing,” says Jokel. “That’s when I thought, ‘It’s time to design something.'”

The 10-week intervention included working on language activities, learning communication strategies and receiving counselling and education for both patients and their caregivers. During the pilot program, patients either improved or remained unchanged on communication assessments for adults with communication disorders. Their caregivers also reported being better prepared to manage psychosocial issues and communication challenges and had more knowledge of PPA and the disease’s progression.

“In progressive disorders, any sign of maintaining the current level of function should be interpreted as success,” says Dr. Jokel. “Slowing the progression and maintaining communication abilities should be the most important goals.”

For the study’s next steps, Dr. Jokel has received support from a Brain Canada-Alzheimer’s Association partnership grant to assess the therapy’s impact on the language skills of PPA patients. With support from the Ontario Brain Institute, she is also collaborating with RRI brain rehabilitation scientist, Dr. Jed Meltzer, to explore the effect of brain stimulation on patients also undergoing language therapy.

Story Source:

Materials provided by Baycrest Centre for Geriatric Care. Note: Content may be edited for style and length.

Language shapes how the brain perceives time

Language has such a powerful effect that it can influence the way in which we experience time, according to a new study.

Professor Panos Athanasopoulos, a linguist from Lancaster University and Professor Emanuel Bylund, a linguist from Stellenbosch University and Stockholm University, have discovered that people who speak two languages fluently think about time differently depending on the language context in which they are estimating the duration of events.

The finding, reported in the Journal of Experimental Psychology: General, published by the American Psychological Association, provides the first evidence of this kind of cognitive flexibility in people who speak two languages.

Bilinguals go back and forth between their languages rapidly and, often, unconsciously — a phenomenon called code-switching.

But different languages also embody different worldviews, different ways of organizing the world around us. And time is a case in point. For example, Swedish and English speakers prefer to mark the duration of events by referring to physical distances, e.g. a short break, a long wedding, etc. The passage of time is perceived as distance travelled.

But Greek and Spanish speakers tend to mark time by referring to physical quantities, e.g. a small break, a big wedding. The passage of time is perceived as growing volume.

The study found that bilinguals seemed to flexibly utilize both ways of marking duration, depending on the language context. This alters how they experience the passage of time.

In the study, Professor Bylund and Professor Athanasopoulos asked Spanish-Swedish bilinguals to estimate how much time had passed while watching either a line growing across a screen or a container being filled.

At the same time, participants were prompted with either the word ‘duración’ (the Spanish word for duration) or ‘tid’ (the Swedish word for duration).

The results were clear-cut

When watching containers filling up and prompted by the Spanish prompt word, bilinguals based their time estimates on how full the containers were, perceiving time as volume. They were unaffected by the lines growing on screens.

Conversely, when given the Swedish prompt word, bilinguals suddenly switched their behaviour, with their time estimates becoming influenced by the distance the lines had travelled, but not by how much the containers had filled.

“By learning a new language, you suddenly become attuned to perceptual dimensions that you weren’t aware of before,” says Professor Athanasopoulos. “The fact that bilinguals go between these different ways of estimating time effortlessly and unconsciously fits in with a growing body of evidence demonstrating the ease with which language can creep into our most basic senses, including our emotions, our visual perception, and now it turns out, our sense of time.

“But it also shows that bilinguals are more flexible thinkers, and there is evidence to suggest that mentally going back and forth between different languages on a daily basis confers advantages on the ability to learn and multi-task, and even long term benefits for mental well-being.”

Story Source:

Materials provided by Lancaster University. Note: Content may be edited for style and length.

Repetition a key factor in language learning

Lilli Kimppa from the University of Helsinki studied language acquisition in the brain. Even short repetitive exposure to novel words induced a rapid increase in neural responses, which is thought to reflect memory-trace formation.

Rapid learning of new words is crucial for language acquisition, and frequent exposure to spoken words enables vocabulary development.

In her doctoral dissertation, Lilli Kimppa studied neural response dynamics to new words over brief exposure. She measured the neural activation of Finnish-speaking volunteers with electroencephalography (EEG) during auditory tasks in which existing Finnish words, and non-words with Finnish and non-native phonology, were repeated.

“Unlike existing words, new words showed a neural response enhancement between the early and late stages of exposure in the left frontal and temporal cortices, which was interpreted as the build-up of neural memory circuits. The magnitude of this neural enhancement also correlated with how well the participants remembered the new words afterwards,” Kimppa says.

To examine the effect of attention, the words were presented for about 30 minutes in two conditions: participants were either passively exposed to the spoken words in the background, or they attended to the speech. Similar neural enhancement to novel words was observed in both listening conditions.

Kimppa also investigated how participants’ language background influenced the word memory-trace formation.

She noticed that the response enhancement to new non-native words was larger in participants who had learned more foreign languages with earlier learning onset, implying greater flexibility of the brain to acquire speech with novel phonology.

Conversely, later onset of foreign language learning was associated with stronger neural increase to new words with Finnish phonology.

“Their brain had apparently become more tuned to the native language,” Kimppa states.

In her doctoral dissertation, Kimppa also studied rapid neural word learning among 9- to 12-year-old dyslexic and normally reading children.

“Control children exhibited a response increase to a novel word within the first 6 minutes of passive perceptual exposure. Children with dyslexia, however, did not show such neural enhancement during the entire 11-minute session. This suggests deficient rapid word learning abilities of the brain in dyslexia compared to non-affected peers. Dyslexics possibly need even more repetition or different kinds of learning strategies to show the neural effect,” Kimppa says.

The dissertation is available online at: https://helda.helsinki.fi/handle/10138/178917

Story Source:

Materials provided by University of Helsinki. Note: Content may be edited for style and length.

What’s coming next? Scientists identify how the brain predicts speech

An international collaboration of neuroscientists has shed light on how the brain helps us to predict what is coming next in speech.

In the study, published on April 25 in the open access journal PLOS Biology, scientists from Newcastle University, UK, and a neurosurgery group at the University of Iowa, USA, report that they have discovered mechanisms in the brain’s auditory cortex involved in processing speech and predicting upcoming words, mechanisms that have remained essentially unchanged throughout evolution. Their research reveals how individual neurons coordinate with neural populations to anticipate events, a process that is impaired in many neurological and psychiatric disorders such as dyslexia, schizophrenia and Attention Deficit Hyperactivity Disorder (ADHD).

Using an approach first developed for studying infant language learning, the team of neuroscientists led by Dr Yuki Kikuchi and Prof Chris Petkov of Newcastle University had humans and monkeys listen to sequences of spoken words from a made-up language. Both species were able to learn the predictive relationships between the spoken sounds in the sequences.

Neural responses from the auditory cortex in the two species revealed how populations of neurons responded to the speech sounds and to the learned predictive relationships between the sounds. The neural responses were found to be remarkably similar in both species, suggesting that the way the human auditory cortex responds to speech harnesses evolutionarily conserved mechanisms, rather than those that have uniquely specialized in humans for speech or language.

“Being able to predict events is vital for so much of what we do every day,” Professor Petkov notes. “Now that we know humans and monkeys share the ability to predict speech we can apply this knowledge to take forward research to improve our understanding of the human brain.”

Dr Kikuchi elaborates: "In effect, we have discovered the mechanisms for speech in your brain that work like predictive text on your mobile phone, anticipating what you are going to hear next. This could help us better understand what is happening when the brain fails to make fundamental predictions, such as in people with dementia or after a stroke."

Building on these results, the team are working on projects to harness insights on predictive signals in the brain to develop new models to study how these signals go wrong in patients with stroke or dementia. The long-term goal is to identify strategies that yield more accurate prognoses and treatments for these patients.

Story Source:

Materials provided by PLOS. Note: Content may be edited for style and length.

 

In young bilingual children, two languages develop simultaneously but independently

A new study of Spanish-English bilingual children by researchers at Florida Atlantic University, published in the journal Developmental Science, finds that when children learn two languages from birth, each language proceeds on its own independent course, at a rate that reflects the quality of the children’s exposure to each language.

In addition, the study finds that Spanish skills become vulnerable as children’s English skills develop, whereas English is not vulnerable to being taken over by Spanish. In their longitudinal data, the researchers found evidence that as the children developed stronger skills in English, their rates of Spanish growth declined. Spanish skills did not cause English growth to slow, so it is not a matter of necessary trade-offs between the two languages.

“One well established fact about monolingual development is that the size of children’s vocabularies and the grammatical complexity of their speech are strongly related. It turns out that this is true for each language in bilingual children,” said Erika Hoff, Ph.D., lead author of the study, a psychology professor in FAU’s Charles E. Schmidt College of Science, and director of the Language Development Lab. “But vocabulary and grammar in one language are not related to vocabulary or grammar in the other language.”

For the study, Hoff and her collaborators, David Giguere, a graduate research assistant at FAU, and Jamie M. Quinn, a graduate research assistant at Florida State University, used longitudinal data on children who spoke English and Spanish as first languages and who were exposed to both languages from birth. They wanted to know whether the relationship between grammar and vocabulary was specific to each language or more general across languages. They measured the children’s vocabulary and level of grammatical development at six-month intervals between the ages of 2½ and 4 years.

The researchers explored a number of possibilities during the study. They thought it might be something internal to the child that causes vocabulary and grammar to develop on the same timetable, or that there might be dependencies in the process of language development itself. They also considered that children might need a certain vocabulary to start learning grammar, so that vocabulary provides the foundation for grammar, or conversely that grammar helps children learn vocabulary. One final possibility they explored is that an external factor drives both vocabulary development and grammatical development.

“If it’s something internal that paces language development, then it shouldn’t matter whether it’s English or Spanish; everything should be related to everything,” said Hoff. “On the other hand, if it’s dependencies within a language between vocabulary and grammar, then the relations should be language specific and one should predict the other. That is, a child’s level of grammar should predict his or her future growth in vocabulary, or vice versa.”

It turns out the data were consistent only with the final possibility: the rates of vocabulary and grammar development are a function of something external to the child, something that exerts separate influences on growth in English and Spanish. Hoff and her collaborators suggest that the most cogent explanation lies in the properties of children’s input, that is, their language exposure.

“Children may hear very rich language use in Spanish and less rich use in English, for example, if their parents are more proficient in Spanish than in English,” said Hoff. “If language growth were just a matter of some children being better at language learning than others, then growth in English and growth in Spanish would be more related than they are.”

Detailed results of the study are described in the article, “What Explains the Correlation between Growth in Vocabulary and Grammar? New Evidence from Latent Change Score Analyses of Simultaneous Bilingual Development.”

“There is something about differences among the children and the quality of English they hear that make some children acquire vocabulary and grammar more rapidly in English and other children develop more slowly,” said Hoff. “I think the key takeaway from our study is that it’s not the quantity of what the children are hearing; it’s the quality of their language exposure that matters. They need to experience a rich environment.”


This project is supported by the National Institutes of Health (NIH) through grant number R01 HD068421.


Repeating non-verbs as well as verbs can boost the syntactic priming effect

According to Glasgow and HSE/Northumbria researchers, repetition of non-verbs as well as verbs can boost the effect of syntactic priming, i.e. the likelihood of people reproducing the structure of the utterance they have just heard.

The way the human brain works makes people prone to repeating the syntactic structures they have recently heard or uttered. In psycholinguistics, this phenomenon is called the syntactic priming effect. Until recently, it was believed that repetition of verbs in particular could enhance this effect. University of Glasgow researchers Christoph Scheepers and Claudine Raffray, in collaboration with Andriy Myachykov (representing HSE and Northumbria University), have shown in their experiments that this is not necessarily true, and that repetition of other parts of speech, not only verbs, can influence the magnitude of the syntactic priming effect. Their findings are published in the Journal of Memory and Language in the article “The lexical boost effect is not diagnostic of lexically-specific syntactic representations.”

The priming effect, i.e. people’s ability to unconsciously reproduce prior experience — something that they have seen, heard, etc. — is well documented in psychology. Priming can manifest itself in simple things, such as the unconscious repetition of gestures, intonations or body poses of others, and in more complex behavioural patterns. This happens because perceptions tend to ‘warm up’ the brain, preparing it for similar experiences. For example, someone who has just spent an hour solving mathematical problems can handle another mathematical problem faster than someone who has been cooking or reading War and Peace.

Classical priming studies have often focused on basic elements of perception, such as processing similar visual stimuli. Having seen a round pizza image, a subject will react faster to a coin image, because it has a similar shape. Yet at a deeper level, the same effect manifests itself in the perception and reproduction of content and meaning.

“People tend to repeat their own and others’ behaviour. It is the foundation of priming. This effect, according to the interactive alignment theory, is more than just experimental curiosity or the reflection of very primitive behavioural patterns. In fact, it is an important subconscious mechanism that underlies children’s linguistic and broader cognitive development, allowing us to signal to each other that ‘we are of the same blood’ and helps reduce everyone’s cognitive burden, since people no longer need to control their every word and gesture and invent something new all the time,” the researchers explain. Verbal or linguistic priming, i.e. the tendency to reproduce one’s own or another person’s linguistic patterns at different levels — lexical (words), semantic (meanings) and syntactic (sentence structures) — is the main theme of the study.

The syntactic priming effect was first demonstrated in the 1980s. It was shown, for example, that after reading a sentence with a certain syntactic structure, a person will perceive and process the next sentence with a similar structure much faster and will be more likely to repeat the syntactic frame of the sentence just heard.

Scheepers, Raffray, and Myachykov offer the following example of syntactic priming. “Imagine someone describing an event in which a girl handed a ball to a boy. This event can be described in more than one way. One can say, ‘the girl gave the boy a ball’ or ‘the girl gave a ball to the boy’. Let’s say the person you are talking to uses the first option, ‘the girl gave the boy a ball’. Let’s call this sentence a prime. Let’s assume that now you need to describe an event to the other person, in which an artist shows an easel to a child. Let’s call this sentence a target. It turns out that you are more likely to say, ‘the artist showed the child an easel’ than ‘the artist showed an easel to the child’, repeating the syntactic structure of the prime. While, of course, it does not work every time, the tendency to repeat a syntactic structure from one utterance to the next is real and forms the basis of syntactic priming.”

It was initially assumed that the syntactic priming effect is autonomous and not subject to external influences, such as the repetition of words or their meanings between prime and target. Then, in the late nineties, papers began to appear showing a ‘lexically boosted’ syntactic priming effect. Specifically, it was shown that if prime and target utterances both contain the verb give, the likelihood of re-using the syntactic structure of the prime in the target increases even more than if the prime contains the verb give and the target the verb show. Curiously, the question of whether repeated nouns could produce comparable lexical boosts to structural priming had been largely ignored in past research.

“Indeed, our research reveals that repetition of any content word of a sentence — noun or verb — can boost the syntactic priming effect, and that the more words are repeated, the stronger syntactic priming turns out to be,” say the authors. In the target trials of their experiments, subjects were asked to produce sentences from randomly arranged words on screen; these target trials were preceded by prime trials in which subjects had to read out complete sentences. Across conditions, the authors systematically varied the numbers and types of content words shared between the primes and the targets.

These findings are of academic significance in the context of the theory of syntax and simple sentence theories. “While there is consensus that the verb plays a pivotal role in determining the syntactic structure of a sentence, our research shows that the lexical boost to syntactic priming is not bound to repetition of verbs,” the researchers explain, adding “Contrary to previously held views, the lexical boost effect is not a very good diagnostic of lexicalised syntax.”

Study analyzes what ‘the’ and ‘a’ tell us about language acquisition

If you have the chance, listen to a toddler use the words “a” and “the” before a noun. Can you detect a pattern? Is he or she using those two words correctly?

And one more question: When kids start using language, how much of their know-how is intrinsic, and how much is acquired by listening to others speak?

Now a study co-authored by an MIT professor uses a new approach to shed more light on this matter — a central issue in the area of language acquisition.

The results suggest that experience is an important component of early-childhood language usage although it doesn’t necessarily account for all of a child’s language facility. Moreover, the extent to which a child learns grammar by listening appears to change over time, with a large increase occurring around age 2 and a leveling off taking place in subsequent years.

“In this view, adult-like, rule-based [linguistic] development is the end-product of a construction of knowledge,” says Roger Levy, an MIT professor and co-author of a new paper summarizing the study. Or, as the paper states, the findings are consistent with the idea that children “lack rich grammatical knowledge at the outset of language learning but rapidly begin to generalize on the basis of structural regularities in their input.”

The paper, “The Emergence of an Abstract Grammatical Category in Children’s Early Speech,” appears in the latest issue of Psychological Science. The authors are Levy, a professor in MIT’s Department of Brain and Cognitive Sciences; Stephan Meylan of the University of California at Berkeley; Michael Frank of Stanford University; and Brandon Roy of Stanford and the MIT Media Lab.

Learning curve

Studying how children use terms such as “a dog” or “the dog” correctly can be a productive approach to language acquisition, since children use the articles “a” and “the” relatively early in their lives and tend to use them correctly. Again, though: Is that understanding of grammar innate or acquired?

Some previous studies have examined this specific question by using an “overlap score,” that is, the proportion of nouns that children use with both “a” and “the,” out of all the nouns they use. When children use both terms correctly, it indicates they understand the grammatical difference between indefinite and definite articles, as opposed to cases where they may (incorrectly) think only one or the other is assigned to a particular noun.
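
As a minimal sketch of how such an overlap score could be computed, here is a toy implementation; the function name and example data are illustrative, not taken from the studies themselves.

```python
from collections import defaultdict

def overlap_score(article_noun_pairs):
    """Proportion of nouns a child uses with BOTH 'a/an' and 'the',
    out of all nouns the child uses with any article."""
    articles_by_noun = defaultdict(set)
    for article, noun in article_noun_pairs:
        if article in ("a", "an"):
            articles_by_noun[noun].add("a")
        elif article == "the":
            articles_by_noun[noun].add("the")
    if not articles_by_noun:
        return 0.0
    both = sum(1 for used in articles_by_noun.values() if used == {"a", "the"})
    return both / len(articles_by_noun)

# Toy transcript: "dog" appears with both articles, "ball" and "cat" with only one each
print(overlap_score([("a", "dog"), ("the", "dog"), ("the", "ball"), ("a", "cat")]))  # 1/3
```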

One potential drawback to this approach, however, is that the overlap score might change over time simply because a child might hear more article-noun pairings, without fully recognizing the grammatical distinction between articles.

By contrast, the current study builds a statistical model of language use that incorporates not only child language use but also adult language use recorded around children, from a variety of sources. Some of these are publicly available corpora of recordings of children and caregivers; others are records of individual children; and one source is the “Speechome” experiment conducted by Deb Roy of the MIT Media Lab, which features recordings of over 70 percent of his child’s waking hours.

The Speechome data, as the paper notes, provides some of the strongest evidence yet that “children’s syntactic productivity changes over development” — that younger children learn grammar from hearing it, and do so at different rates during different phases of early childhood.

“I think the method starts to get us traction on the problem,” Levy says. “We saw this as an opportunity both to use more comprehensive data and to develop new analytic techniques.”

A work in progress

Still, as the authors note, a second conclusion of the paper is that more basic data about language development is needed. As the paper notes, much of the available information is not comprehensive enough, and thus “likely not sufficient to yield precise developmental conclusions.”

And as Levy readily acknowledges, developing an airtight hypothesis about grammar acquisition is always likely to be a challenge.

“We’re never going to have an absolute complete record of everything a child has ever heard,” Levy says.

That makes it much harder to interpret the cognitive process leading to either correct or incorrect uses of, say, articles such as “a” and “the.” After all, if a child uses the phrase “a bus” correctly, it still might only be because that child has heard the phrase before and likes the way it sounds, not because he or she grasped the underlying grammar.

“Those things are very hard to tease apart, but that’s what we’re trying to do,” Levy says. “This is only really an initial step.”

Language learning: ‘Say it fast, fluent and flawless’

A new doctoral dissertation by Parvin Gheitasi at Umeå University in Sweden explores the different functions of prefabricated phrases in young learners’ oral language production. These phrases provided learners with an instrument to overcome their lack of knowledge, to improve their fluency, and to enjoy some language play.

“The young learners in this study remind us that it could be frustrating and sometimes demotivating to be in an environment where one cannot communicate easily due to lack of words and/or a low proficiency level in language structure,” says Parvin Gheitasi, doctoral student at the Department of Language Studies, and continues: “Yet, they suggest that one could accept this challenge and try to enjoy both learning something new and becoming part of a community. They show how to be observant and attentive to the input from the more competent language users and pick up some prefabricated phrases to be able to produce an utterance, which could be fluent and accurate.”

Parvin Gheitasi further explains that the young learners in the study remind us of the advantages of using these prefabs to buy planning time, to boost confidence, and to be able to engage in a conversation when one does not feel ready to construct an utterance. To the learners, language was a fun game: playing with sounds helped them remember phrases more easily.

“Moreover, they introduced a fun play with language, characterising it as a game similar to Lego where one can pick a piece from a bunch and replace it with other pieces with a similar shape but probably different colour. By deviating from the established constructions one can practice with language and at the same time enjoy the fun.”

“All in all, these learners indicated that language learning is not always ‘sunny and hot’; it can also be ‘sunny and rainy’!”

The use of these phrases was affected by the learners’ relationships with their peers and by the relationship between the learners and their teacher.

“Language users might adopt a role model and pick up the language from their role model. Sometimes the role model is the teacher and sometimes the role model can be a peer. In sum, language learning has much to do with picking up phrases or sentences from a role model, and it can be explained by the individual’s desire to be understood and to be part of a community.”

It was observed that although all the learners of this study applied prefabricated phrases in their language production, there was great variation among individual learners in their practices and intentions. Some learners used these phrases to be able to extend their utterances and produce more of the language, whereas other learners used them to avoid further language production (e.g. by using the phrase ‘I don’t know’, they could limit their language production).

“Sometimes learners were conservative and stuck to their set phrases, and sometimes they enjoyed playing with the norms and had fun practicing with language. All in all, it seemed that individual learners’ different personalities, needs or limitations explained the application of set phrases in different contexts.”

Story Source:

Materials provided by Umeå University. Note: Content may be edited for style and length.

 

Age at immigration influences occupational skill development

The future occupations of U.S. immigrant children are influenced by how similar their native language is to English, finds a new study by scholars at Duke University and the U.S. Naval Postgraduate School.

“The more difficult it is for the child to learn English, the more likely they will invest in math/logic and physical skills over communications skills,” said co-author Marcos Rangel, an assistant professor of public policy and economics at Duke’s Sanford School of Public Policy. “It is really a story about what skills people who immigrated as children develop given the costs and benefits associated with the learning processes.”

Two factors strongly influence the skills immigrants use as adults, researchers found: immigration before the age of 10, and whether immigrants’ native language is linguistically distant from English.

Immigrants who arrive before the age of 10 pursue occupations very similar to those pursued by native-born Americans. They develop the same range of skills as the native-born, including communication, math/logic, socio-emotional and physical skills.

But for those who are older when they immigrate, the picture is different. After age 10, learning a second language is more difficult, and a child’s particular linguistic background matters more. Some languages, such as Vietnamese, are linguistically very distant from English. Children who speak those languages are more likely to major in science, technology, engineering and math (STEM) fields than children whose native language is linguistically close to English, such as German.

“Late arrivals from English-distant countries develop a comparative advantage in math/logic, socio-emotional and physical skills relative to communication skills, which ultimately generates the occupational segregation we are used to seeing in the labor market,” Rangel said.

The choice of majors made it clear that where these immigrants ended up in the labor market was not just because of different treatment by employers. It was also due to “the way the immigrants themselves look ahead and invest their time in becoming skilled in different tasks,” Rangel said.

The study, published online in the journal Demography on March 20, provides insight into why “some U.S. immigrants find it more attractive to invest in brawn rather than in brain, in mathematics rather than in poetry, in science rather than in history,” write Rangel and co-author Marigee Bacolod, associate professor of economics at the U.S. Naval Postgraduate School’s Graduate School of Business and Public Policy.

“Public policy designed to improve the education of English-learners could potentially have distinct long-lasting effects over the assimilation of immigrant children and over the future distribution of skills within the U.S. workforce,” the authors conclude.

The researchers used data from the 1990 and 2000 U.S. Censuses, the 2009 to 2013 American Community Survey, the Dictionary of Occupational Titles and the Occupational Information Network. They also used a measure developed by the Max Planck Institute for Evolutionary Anthropology to determine the distance of a language from English.
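
The press release does not spell out how that measure works, but comparative work in this tradition (for example, the ASJP project associated with the Max Planck Institute) often scores the distance between languages by the length-normalized edit distance between translation-equivalent word forms. The sketch below illustrates that general idea under this assumption; the word lists are invented for demonstration only.

```python
def levenshtein(a, b):
    """Classic edit distance between two strings (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution (free if characters match)
        prev = cur
    return prev[-1]

def language_distance(word_pairs):
    """Mean length-normalized edit distance over translation-equivalent word pairs."""
    return sum(levenshtein(a, b) / max(len(a), len(b)) for a, b in word_pairs) / len(word_pairs)

# Toy illustration only: German forms tend to sit closer to English than Vietnamese forms do
english_german = [("hand", "hand"), ("water", "wasser"), ("mother", "mutter")]
english_vietnamese = [("hand", "tay"), ("water", "nuoc"), ("mother", "me")]
print(language_distance(english_german), language_distance(english_vietnamese))
```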

Story Source:

Materials provided by Duke University. Note: Content may be edited for style and length.