Frequent sexual activity can boost brain power in older adults

More frequent sexual activity has been linked to improved brain function in older adults, according to a study by the universities of Coventry and Oxford.

Researchers found that people who engaged in more regular sexual activity scored higher on tests that measured their verbal fluency and their ability to visually perceive objects and the spaces between them.

The study, published today in The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, involved 73 people aged between 50 and 83.

Participants filled in a questionnaire on how often, on average, they had engaged in sexual activity over the past 12 months — whether that was never, monthly or weekly — as well as answering questions about their general health and lifestyle.

The 28 men and 45 women also took part in a standardized test, which is typically used to measure different patterns of brain function in older adults, focusing on attention, memory, fluency, language and visuospatial ability.

This included verbal fluency tests in which participants had 60 seconds to name as many animals as possible, and then to say as many words beginning with F as they could — tests which reflect higher cognitive abilities.

They also took part in tests to determine their visuospatial ability which included copying a complex design and drawing a clock face from memory.

It was on these two sets of tests that participants who engaged in weekly sexual activity scored most highly, with the verbal fluency tests showing the strongest effect.

The results suggested that frequency of sexual activity was not linked to attention, memory or language. In these tests, the participants performed just as well regardless of whether they reported weekly, monthly or no sexual activity.

This study expanded on previous research from 2016, which found that older adults who were sexually active scored higher on cognitive tests than those who were not sexually active.

But this time the research looked more specifically at the impact of the frequency of sexual activity (i.e. does it make a difference how often you engage in sexual activity) and also used a broader range of tests to investigate different areas of cognitive function.

The academics say further research could look at how biological elements, such as dopamine and oxytocin, could influence the relationship between sexual activity and brain function to give a fuller explanation of their findings.

Lead researcher Dr Hayley Wright, from Coventry University’s Centre for Research in Psychology, Behaviour and Achievement, said:

“We can only speculate whether this is driven by social or physical elements — but an area we would like to research further is the biological mechanisms that may influence this.

“Every time we do another piece of research we are getting a little bit closer to understanding why this association exists at all, what the underlying mechanisms are, and whether there is a ‘cause and effect’ relationship between sexual activity and cognitive function in older people.

“People don’t like to think that older people have sex — but we need to challenge this conception at a societal level and look at what impact sexual activity can have on those aged 50 and over, beyond the known effects on sexual health and general wellbeing.”

Story Source:

Materials provided by Coventry University. Note: Content may be edited for style and length.


Long-term memories made with meaningful information

When trying to memorize information, it is better to relate it to something meaningful rather than repeat it again and again to make it stick, according to a recent Baycrest Health Sciences study published in NeuroImage.

“When we are learning new information, our brain has two different ways to remember the material for a short period of time, either by mentally rehearsing the sounds of the words or thinking about the meaning of the words,” says Dr. Jed Meltzer, lead author and neurorehabilitation scientist at Baycrest’s Rotman Research Institute. “Both strategies create good short-term memory, but focusing on the meaning is more effective for retaining the information later on. Here’s a case where working harder does not mean better.”

Past studies have looked at repetition to create short-term memories, but these findings suggest that using the word’s meaning will help “transfer” memories from the short-term to the long-term, says Dr. Meltzer. This finding is consistent with the strategies used by the world’s top memory champions, who create stories rich with meaning to remember random information, such as the order of a deck of cards.

Through this work, researchers were able to pinpoint the different parts of the brain involved in creating the two types of short-term memories.

“This finding shows that there are multiple brain mechanisms supporting short-term memory, whether it’s remembering information based on sound or meaning,” says Dr. Meltzer, who is also a psychology professor at the University of Toronto. “When people have brain damage from stroke or dementia, one of the mechanisms may be disrupted. People could learn to compensate for this by relying on an alternate method to form short-term memories.”

For example, people who have trouble remembering things could carry a pad and rehearse the information until they have a chance to write it down, he adds.

The study recorded the brain waves of 25 healthy adults as they listened to sentences and word lists. Participants were asked to hold the information in their short-term memory over several seconds, and then recite it back, while their brain waves were recorded. Participants were then taken to a testing room to see if they could recall the information they had heard. From these recordings, researchers identified brain activity related to memorizing through sound and meaning.

As next steps, Dr. Meltzer will use these findings to explore targeted brain stimulation that could boost the short-term memory of stroke patients. Additional funding would support the exploration of which types of memory are best treated by current drugs or brain stimulation and how these can be improved.

Story Source:

Materials provided by Baycrest Centre for Geriatric Care. Note: Content may be edited for style and length.


Gender, race and class: Language change in post-apartheid South Africa

A new study of language and social change in post-apartheid South Africa demonstrates that gender is a more powerful determinant than class among black university students. The study, “Class, gender, and substrate erasure in sociolinguistic change: A sociophonetic study of schwa in deracializing South African English,” by Rajend Mesthrie (University of Cape Town), will be published in the June 2017 issue of the scholarly journal Language. A pre-print version of the article is available online.

The article explores the extent to which the categories of race, class and gender were implicated in the degree of language change evident as schools that had previously been restricted to whites only were opened up to blacks and other racial minorities. With the end of apartheid education policies in South Africa in 1994, new flexibilities developed that enabled the author to study the relation between social change and language change. Mesthrie observed that a continuum opened up between traditional, second language varieties of English and the “crossover” varieties associated previously with whites. Focusing mainly on young black university students, the study demonstrates that social class (associated largely with type of schooling) does correlate with different types of English. The key variables that Mesthrie studied included the unstressed vowel, “schwa,” which is differentially realized in the traditional English varieties in South Africa, and the related property of the length of vowels. His study used the latest acoustic techniques emanating largely from the University of Pennsylvania and North Carolina State University.

The most important contribution of the paper is the statistical demonstration that gender differences are at least as salient as social class, and perhaps more so, in affecting language change. Young black women are more likely than young black men (of their respective classes) to acquire the crossover language varieties. For men on the whole, an “African solidarity” precludes too great a linguistic crossover, even for those who graduated from the more prestigious schools. In short, the black men did not want to sound “too white.” Mesthrie points to the success of young black women, who appear set to become the future prestige accent models on radio and television, for English at least. The author sees this as a positive “deracializing” of the English language, so that prestige accents of the sort encountered on television are no longer associated with one race group (whites) alone.

Story Source:

Materials provided by Linguistic Society of America. Note: Content may be edited for style and length.

All in the eyes: What the pupils tell us about language

The meaning of a word is enough to trigger a reaction in our pupils: when we read or hear a word with a meaning associated with luminosity (“sun,” “shine,” etc.), our pupils contract as they would if they were actually exposed to greater luminosity. The opposite occurs with a word associated with darkness (“night,” “gloom,” etc.).

These results, published on 14 June 2017 in Psychological Science by researchers from the Laboratoire de psychologie cognitive (CNRS/AMU), the Laboratoire parole et langage (CNRS/AMU) and the University of Groningen (Netherlands), open up a new avenue for better understanding how our brain processes language.

The researchers demonstrate here that the size of the pupils does not depend simply on the luminosity of the objects observed, but also on the luminance evoked by words that are read or heard. They suggest that our brain automatically creates mental images of these words, such as a bright ball in the sky for the word “sun.” It is thought that this mental image is the reason why the pupils become smaller, as if we really did have the sun in our eyes.

This new study raises important questions. Are these mental images necessary to understand the meaning of words? Or, on the contrary, are they merely an indirect consequence of language processing in our brain, as though our nervous system were preparing, as a reflex, for the situation evoked by the heard or read word? In order to respond to these questions, the researchers wish to pursue their experiment by varying the language parameters, by testing their hypothesis in other languages, for example.

Story Source:

Materials provided by CNRS. Note: Content may be edited for style and length.


Human brain tunes into visual rhythms in sign language

The human brain works in rhythms and cycles. These patterns occur at predictable frequencies that depend on what a person is doing and on what part of the brain is active during the behavior.

Similarly, there are rhythms and patterns out in the world, and for the last 20 years, scientists have been intrigued by the brain’s ability to “entrain,” or match up, with these patterns. Language is one of those areas in which scientists observe neural entrainment: when people listen to speech, their brain waves synchronize with the volume-based rhythms they hear. Since people can’t pay attention to everything happening in their environment at once, this phase locking is thought to help them anticipate when important information is likely to appear.

Many studies have documented this phenomenon in language processing; however, it has been difficult to tell whether neural entrainment is specialized for spoken language. In a new study in the Proceedings of the National Academy of Sciences, University of Chicago scholars designed an experiment using sign language to answer that question.

“To determine if neural entrainment to language is specialized for speech or if it is a general-purpose tool that humans can use for anything that is temporally predictable, we had to go outside of speech and outside of auditory perception,” said Geoffrey Brookshire, the study’s lead author and a PhD student in the Department of Psychology.

Brookshire worked with Daniel Casasanto, assistant professor of psychology and leader of the Experience and Cognition Lab; Susan Goldin-Meadow, the Beardsley Ruml Distinguished Service Professor in the Department of Psychology and an acclaimed scholar of language and gesture; Howard Nusbaum, professor of psychology and an expert in spoken language and language use; and Jenny Lu, a PhD student specializing in sign language, gesture and language development.

“By looking at sign, we’re learning something about how the brain processes language more generally. We’re solving a mystery we couldn’t crack by studying speech alone,” Casasanto said.

In speech, the brain locks on to syllables, words and phrases, and those rhythms occur below 8 Hz, or 8 pulses per second. Vision also has a preferred frequency onto which it latches.

“When we focus on random flashes of light, for example, our brains most enthusiastically lock on to flashes around 10 Hz. By looking at sign language, we can ask whether the important thing for entrainment is which sense you’re using, or the kind of information you’re getting,” Brookshire said.

To determine whether people tune into visual rhythms in the same way they tune into the auditory rhythms of language, the researchers showed videos of stories told in American Sign Language to fluent signers and measured their brain activity as they watched. Once the researchers had these electroencephalogram (EEG) readings, they needed a way to measure visual rhythms in sign language.

While there are well-established methods to measure rhythms in speech, there are no automatic, objective equivalents for the temporal structure of sign language. So the researchers created one.

They developed a new metric, called the instantaneous visual change, which summarizes the degree of change at each time period during signing. They ran experiment videos, the ones watched by participants, through their new algorithm to identify peaks and valleys in visual changes between frames. The largest peaks were associated with large, quick movements.
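The press release does not give the formula, but a metric of this kind can be sketched as simple frame differencing: the per-frame change is the mean absolute difference between consecutive grayscale frames. The function name and toy data below are illustrative assumptions, not the authors' published algorithm.

```python
import numpy as np

def instantaneous_visual_change(frames):
    """Mean absolute pixel difference between consecutive frames.

    frames: array of shape (n_frames, height, width) holding grayscale
    intensities. Returns n_frames - 1 values; large values mark large,
    quick movements. (Illustrative sketch, not the published metric.)
    """
    frames = np.asarray(frames, dtype=float)
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

# Toy "video": four 8x8 frames; the third frame jumps to full brightness,
# mimicking a large, quick movement.
video = np.zeros((4, 8, 8))
video[2] = 1.0
print(instantaneous_visual_change(video))  # → [0. 1. 1.]
```

On real footage one would first convert each frame to grayscale and smooth the resulting signal before picking out peaks and valleys.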

With this roadmap illustrating the magnitude of visual changes over time in the videos, Brookshire overlaid the participants’ EEGs to see whether people entrain around the normal visual frequency of about 10 Hz, or at the lower frequencies of signs and phrases in sign language — about 2 Hz.
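The article does not describe the exact analysis, but one standard way to quantify entrainment at a candidate frequency is magnitude-squared coherence between the stimulus signal and the EEG. The sampling rate and simulated signals below are assumptions for the sketch: an EEG that partially tracks a 2 Hz rhythm shows high coherence at 2 Hz and near-chance coherence at 10 Hz.

```python
import numpy as np
from scipy.signal import coherence

fs = 256                      # assumed EEG sampling rate in Hz
t = np.arange(0, 60, 1 / fs)  # one minute of signal
rng = np.random.default_rng(0)

# Simulated stimulus with a 2 Hz rhythm (roughly the rate of signs/phrases)
stimulus = np.sin(2 * np.pi * 2 * t)
# Simulated EEG: partly entrained to the stimulus, plus noise
eeg = 0.5 * stimulus + rng.standard_normal(t.size)

# Magnitude-squared coherence estimated in 4-second windows
f, Cxy = coherence(stimulus, eeg, fs=fs, nperseg=4 * fs)
print("coherence at  2 Hz:", Cxy[np.argmin(np.abs(f - 2))])   # high
print("coherence at 10 Hz:", Cxy[np.argmin(np.abs(f - 10))])  # near chance
```

Comparing coherence at the linguistic rate against the ~10 Hz rate preferred by vision is the kind of contrast the study draws.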

Their discovery answers a fundamental question that has been lingering for years in research on speech entrainment: Is it specialized for auditory speech? The study reveals that the brain entrains depending on the information in the signal — not on the differences between seeing and hearing. Participants’ brain waves locked into the specific frequencies of sign language, rather than locking into the higher frequency that vision tends to prefer.

“This is an exciting finding because scientists have been theorizing for years about how adaptable or flexible entrainment may be, but we were never sure if it was specific to auditory processing or if it was more general purpose,” Brookshire said. “This study suggests that humans have the ability to follow perceptual rhythms and make temporal predictions in any of our senses.”

In a broader sense, neuroscientists want to understand how the human brain creates and perceives language, and entrainment has emerged as an important mechanism. In revealing neural entrainment as a generalized strategy for improving sensitivity to informational peaks, this study takes significant steps toward advancing the understanding of human language and perception.

“The piece of the paper that I find particularly exciting is that it compares how signers and non-signers process American Sign Language stimuli,” Goldin-Meadow said. “Although both groups showed the same level of entrainment in early visual regions, they displayed differences in frontal regions — this finding sets the stage for us to identify aspects of neural entrainment that are linked to the physical properties of the visual signal compared to aspects that appear only with linguistic knowledge.”


Elementary school: Early English language lessons less effective than expected

By the seventh grade, children who start learning English in the first grade achieve poorer results in this subject than children whose first English lesson isn’t until the third grade. This is according to findings by the team headed by Dr Nils Jäkel and Prof Dr Markus Ritter at Ruhr-Universität Bochum. The researchers evaluated data gathered in a large longitudinal study in North Rhine-Westphalia, Germany, carried out between 2010 and 2014. The results have been published in the journal Language Learning.

Highly recommended, yet not scientifically proven

“Starting foreign-language lessons at an early age is often very much commended, even though hardly any research exists that would support this myth,” says Nils Jäkel from the Chair of English Language Teaching in Bochum. Together with colleagues from Bochum and TU Dortmund University, he analysed data from 5,130 students at 31 secondary schools of the Gymnasium type in North Rhine-Westphalia. The researchers compared two student cohorts, one of which started learning English in the first grade, the other in the third grade. They evaluated the children’s reading and listening proficiency in English in the fifth and seventh grade respectively.

In the fifth grade, children who had their first English lessons very early in elementary school achieved better results in reading and listening proficiency. This had changed by the seventh grade: by then, the latecomers, i.e. children who didn’t start to learn English until the third grade, performed better.

Results from other countries confirmed

“Our study confirmed results from other countries, for example Spain, which show that early English lessons of one or two hours per week in elementary school aren’t very conducive to attaining language competence in the long term,” says Jäkel. In the coming months, he and his colleagues are going to analyse additional data to investigate whether the results can be confirmed for the ninth grade.

A possible interpretation of the results: “Early English-language lessons in elementary school take place at a time when deep immersion would be necessary to achieve sustainable effects,” explains Nils Jäkel. “Instead, the children attend English lessons that amount to 90 minutes per week at most.”

Critical transition from elementary school to grammar school

Moreover, the authors point out a rupture that takes place during the transition period from elementary school to grammar school. “Broadly speaking, the predominantly playful, holistically structured elementary-school lessons make way for rather more cognitive, intellectualised grammar-school methodology,” says Jäkel.

In elementary school, English is taught through child-appropriate, casual immersion in and experience of the foreign language through rhymes, songs, movement and stories. Secondary schools focus primarily on prescribed grammar and vocabulary lessons. This would explain why the early advantages in listening proficiency identified in the fifth grade are partially forfeited by the seventh grade, as the authors elaborate; this is possibly due to a lapse in motivation, as students feel the rupture more keenly after experiencing four years of English lessons in elementary school.

It is also possible that the potential of English lessons at an early stage had not been fully exploited, as they had been rather hastily adapted for the first grade. “When English lessons were introduced in elementary school, many teachers had to qualify for lateral entry on short notice,” explains Jäkel.

Consequences and recommendations

With their findings, the researchers do not question early English lessons as such. On the contrary, they see them as an important factor contributing to the European multilingualism we aspire to, as they pave the way for further language acquisition in secondary schools. Early English lessons might help make children aware of linguistic and cultural diversity. “But it would be wrong to have unreasonably high expectations,” says Jäkel. “A reasonable compromise might be the introduction of English in the third grade, with more lessons per week.” It is just as important to better coordinate the didactical approaches at the elementary and grammar school levels. Here, teachers at these two different types of school could learn from each other.

New hope for patients with primary progressive aphasia

A Baycrest Health Sciences researcher and clinician has developed the first group language intervention that helps individuals losing the ability to speak due to a rare form of dementia, and could help patients maintain their communication abilities for longer.

Primary Progressive Aphasia (PPA) is a rare language disorder that involves incorrect word substitutions, mispronounced words, difficulty understanding simple words, and forgetting the names of familiar objects and people. With PPA, language function declines before memory does, the opposite of Alzheimer’s disease.

Dr. Regina Jokel, a speech-language pathologist at Baycrest’s Sam and Ida Ross Memory Clinic and a clinician-scientist with the Rotman Research Institute (RRI), has developed the first structured group intervention for PPA patients and their caregivers. This intervention could also help treat patients with other communication problems, such as mild cognitive impairment (a condition that is likely to develop into Alzheimer’s). The results of her pilot program were published in the Journal of Communication Disorders on April 14, 2017.

“This research aims to address the needs of one of the most underserviced populations in language disorders,” says Dr. Jokel. “Individuals with PPA are often referred to either Alzheimer’s programs or aphasia centres. Neither option is appropriate in this case, which often leaves individuals with PPA adrift in our health care system. Our group intervention has the potential to fill the existing void and reduce demands on numerous other health services.”

Language rehabilitation has made headway in managing the disorder, but there are limited PPA treatment options, adds Dr. Jokel.

Dr. Jokel is one of the few researchers in the world studying this disorder. She was motivated to pursue her PhD and devise the intervention after encountering her first PPA patient more than 25 years ago.

“When I realized the patient had PPA, I ran to the rehabilitation literature thinking that he needed to start some sort of therapy. I ran a search and came up with nothing. Absolutely nothing,” says Jokel. “That’s when I thought, ‘It’s time to design something.’”

The 10-week intervention included working on language activities, learning communication strategies and receiving counselling and education for both patients and their caregivers. During the pilot program, patients either improved or remained unchanged on communication assessments for adults with communication disorders. Their caregivers also reported being better prepared to manage psychosocial issues and communication challenges and had more knowledge of PPA and the disease’s progression.

“In progressive disorders, any sign of maintaining the current level of function should be interpreted as success,” says Dr. Jokel. “Slowing the progression and maintaining communication abilities should be the most important goals.”

For the study’s next steps, Dr. Jokel has received support from a Brain Canada-Alzheimer’s Association partnership grant to assess the therapy’s impact on the language skills of PPA patients. With support from the Ontario Brain Institute, she is also collaborating with RRI brain rehabilitation scientist, Dr. Jed Meltzer, to explore the effect of brain stimulation on patients also undergoing language therapy.

Story Source:

Materials provided by Baycrest Centre for Geriatric Care. Note: Content may be edited for style and length.

Language shapes how the brain perceives time

Language has such a powerful effect, it can influence the way in which we experience time, according to a new study.

Professor Panos Athanasopoulos, a linguist from Lancaster University and Professor Emanuel Bylund, a linguist from Stellenbosch University and Stockholm University, have discovered that people who speak two languages fluently think about time differently depending on the language context in which they are estimating the duration of events.

The study, reported in the Journal of Experimental Psychology: General, published by the American Psychological Association, provides the first evidence of cognitive flexibility in people who speak two languages.

Bilinguals go back and forth between their languages rapidly and, often, unconsciously — a phenomenon called code-switching.

But different languages also embody different worldviews, different ways of organizing the world around us. And time is a case in point. For example, Swedish and English speakers prefer to mark the duration of events by referring to physical distances, e.g. a short break, a long wedding, etc. The passage of time is perceived as distance travelled.

But Greek and Spanish speakers tend to mark time by referring to physical quantities, e.g. a small break, a big wedding. The passage of time is perceived as growing volume.

The study found that bilinguals seemed to flexibly utilize both ways of marking duration, depending on the language context. This alters how they experience the passage of time.

In the study, Professor Bylund and Professor Athanasopoulos asked Spanish-Swedish bilinguals to estimate how much time had passed while watching either a line growing across a screen or a container being filled.

At the same time, participants were prompted with either the word ‘duración’ (the Spanish word for duration) or ‘tid’ (the Swedish word for duration).

The results were clear-cut

When watching containers filling up and prompted by the Spanish word, bilinguals based their time estimates on how full the containers were, perceiving time as volume. They were unaffected by the lines growing on screens.

Conversely, when given the Swedish prompt word, bilinguals suddenly switched their behaviour, with their time estimates becoming influenced by the distance the lines had travelled, but not by how much the containers had filled.

“By learning a new language, you suddenly become attuned to perceptual dimensions that you weren’t aware of before,” says Professor Athanasopoulos. “The fact that bilinguals go between these different ways of estimating time effortlessly and unconsciously fits in with a growing body of evidence demonstrating the ease with which language can creep into our most basic senses, including our emotions, our visual perception, and now it turns out, our sense of time.

“But it also shows that bilinguals are more flexible thinkers, and there is evidence to suggest that mentally going back and forth between different languages on a daily basis confers advantages on the ability to learn and multi-task, and even long term benefits for mental well-being.”

Story Source:

Materials provided by Lancaster University. Note: Content may be edited for style and length.

In young bilingual children, two languages develop simultaneously but independently

A new study of Spanish-English bilingual children by researchers at Florida Atlantic University, published in the journal Developmental Science, finds that when children learn two languages from birth, each language proceeds on its own independent course, at a rate that reflects the quality of the children’s exposure to each language.

In addition, the study finds that Spanish skills become vulnerable as children’s English skills develop, but English is not vulnerable to being taken over by Spanish. In their longitudinal data, the researchers found evidence that as the children developed stronger skills in English, their rates of Spanish growth declined. Spanish skills did not cause English growth to slow, so it’s not a matter of necessary trade-offs between two languages.

“One well established fact about monolingual development is that the size of children’s vocabularies and the grammatical complexity of their speech are strongly related. It turns out that this is true for each language in bilingual children,” said Erika Hoff, Ph.D., lead author of the study, a psychology professor in FAU’s Charles E. Schmidt College of Science, and director of the Language Development Lab. “But vocabulary and grammar in one language are not related to vocabulary or grammar in the other language.”

For the study, Hoff and her collaborators, David Giguere, a graduate research assistant at FAU, and Jamie M. Quinn, a graduate research assistant at Florida State University, used longitudinal data on children who spoke English and Spanish as first languages and who were exposed to both languages from birth. They wanted to know whether the relationship between grammar and vocabulary was specific to a language or more language-general. They measured the vocabulary and level of grammatical development in these children at six-month intervals between the ages of 2.5 and 4 years.

The researchers explored a number of possibilities during the study. They thought it might be something internal to the child that causes vocabulary and grammar to develop on the same timetable or that there might be dependencies in the process of language development itself. They also considered that children might need certain vocabulary to start learning grammar and that vocabulary provides the foundation for grammar or that grammar helps children learn vocabulary. One final possibility they explored is that it may be an external factor that drives both vocabulary development and grammatical development.

“If it’s something internal that paces language development, then it shouldn’t matter whether it’s English or Spanish; everything should be related to everything,” said Hoff. “On the other hand, if it’s dependencies within a language between vocabulary and grammar, then the relations should be language specific and one should predict the other. That is, a child’s level of grammar should predict his or her future growth in vocabulary, or vice versa.”

As it turns out, the data were consistent only with the final possibility: that the rates of vocabulary and grammar development are a function of something external to the child that exerts separate influences on growth in English and Spanish. Hoff and her collaborators suggest that the most cogent explanation lies in the properties of children’s input, that is, their language exposure.

“Children may hear very rich language use in Spanish and less rich use in English, for example, if their parents are more proficient in Spanish than in English,” said Hoff. “If language growth were just a matter of some children being better at language learning than others, then growth in English and growth in Spanish would be more related than they are.”

Detailed results of the study are described in the article, “What Explains the Correlation between Growth in Vocabulary and Grammar? New Evidence from Latent Change Score Analyses of Simultaneous Bilingual Development.”

“There is something about differences among the children and the quality of English they hear that make some children acquire vocabulary and grammar more rapidly in English and other children develop more slowly,” said Hoff. “I think the key takeaway from our study is that it’s not the quantity of what the children are hearing; it’s the quality of their language exposure that matters. They need to experience a rich environment.”


This project is supported by the National Institutes of Health (NIH) through grant number R01 HD068421.

About Florida Atlantic University:

Florida Atlantic University, established in 1961, officially opened its doors in 1964 as the fifth public university in Florida. Today, the University, with an annual economic impact of $6.3 billion, serves more than 30,000 undergraduate and graduate students at sites throughout its six-county service region in southeast Florida. FAU’s world-class teaching and research faculty serves students through 10 colleges: the Dorothy F. Schmidt College of Arts and Letters, the College of Business, the College for Design and Social Inquiry, the College of Education, the College of Engineering and Computer Science, the Graduate College, the Harriet L. Wilkes Honors College, the Charles E. Schmidt College of Medicine, the Christine E. Lynn College of Nursing and the Charles E. Schmidt College of Science. FAU is ranked as a High Research Activity institution by the Carnegie Foundation for the Advancement of Teaching. The University is placing special focus on the rapid development of critical areas that form the basis of its strategic plan: healthy aging, biotech, coastal and marine issues, neuroscience, regenerative medicine, informatics, lifespan and the environment. These areas provide opportunities for faculty and students to build upon FAU’s existing strengths in research and scholarship.

Repeating non-verbs as well as verbs can boost the syntactic priming effect

According to Glasgow and HSE/Northumbria researchers, repetition of non-verbs as well as verbs can boost the effect of syntactic priming, i.e. the likelihood of people reproducing the structure of the utterance they have just heard.

The way the human brain works makes people prone to repeating the syntactic structures they have recently heard or uttered. In psycholinguistics, this phenomenon is called the syntactic priming effect. Until recently, it was believed that repetition of verbs in particular could enhance this effect. University of Glasgow researchers Christoph Scheepers and Claudine Raffray, in collaboration with Andriy Myachykov (representing HSE and Northumbria University), have shown in their experiments that this is not necessarily true, and that repetition of other parts of speech, not only verbs, can influence the magnitude of the syntactic priming effect. Their findings are published in the Journal of Memory and Language in the article “The lexical boost effect is not diagnostic of lexically-specific syntactic representations.”

The priming effect, i.e. people’s ability to unconsciously reproduce prior experience — something that they have seen, heard, etc. — is well documented in psychology. Priming can manifest itself in simple things, such as the unconscious repetition of gestures, intonations or body poses of others, and in more complex behavioural patterns. This happens because perceptions tend to ‘warm up’ the brain, preparing it for similar experiences. For example, someone who has just spent an hour solving mathematical problems can handle another mathematical problem faster than someone who has been cooking or reading War and Peace.

Classical priming studies have often focused on basic elements of perception, such as processing similar visual stimuli. Having seen a round pizza image, a subject will react faster to a coin image, because it has a similar shape. Yet at a deeper level, the same effect manifests itself in the perception and reproduction of content and meaning.

“People tend to repeat their own and others’ behaviour. It is the foundation of priming. This effect, according to the interactive alignment theory, is more than just an experimental curiosity or the reflection of very primitive behavioural patterns. In fact, it is an important subconscious mechanism that underlies children’s linguistic and broader cognitive development. It allows us to signal to each other that ‘we are of the same blood’ and helps reduce everyone’s cognitive burden, since people no longer need to control their every word and gesture and invent something new all the time,” the researchers explain. Verbal or linguistic priming, i.e. the tendency to reproduce one’s own or another person’s linguistic patterns at different levels — lexical (words), semantic (meanings) and syntactic (sentence structures) — is the main theme of the study.

The syntactic priming effect was first demonstrated in the 1980s. It was shown, for example, that after reading a sentence with a certain syntactic structure, a person will perceive and process the next sentence with a similar structure much faster and will be more likely to repeat the syntactic frame of the sentence just heard.

Scheepers, Raffray, and Myachykov offer the following example of syntactic priming. “Imagine someone describing an event in which a girl handed a ball to a boy. This event can be described in more than one way. One can say, ‘the girl gave the boy a ball’ or ‘the girl gave a ball to the boy’. Let’s say the person you are talking to uses the first option, ‘the girl gave the boy a ball’. Let’s call this sentence a prime. Let’s assume that now you need to describe an event to the other person, in which an artist shows an easel to a child. Let’s call this sentence a target. It turns out that you are more likely to say, ‘the artist showed the child an easel’ than ‘the artist showed an easel to the child’, repeating the syntactic structure of the prime. While, of course, it does not work every time, the tendency to repeat a syntactic structure from one utterance to the next is real and forms the basis of syntactic priming.”

It was initially assumed that the syntactic priming effect is autonomous and not subject to external influences, such as the repetition of words or their meanings between prime and target. Then, in the late nineties, papers began to appear showing a ‘lexically boosted’ syntactic priming effect. Specifically, it was shown that if prime and target utterances both contain the verb give, the likelihood of re-using the syntactic structure of the prime in the target increases even more than if the prime contains the verb give and the target the verb show. Curiously, the question of whether repeated nouns could produce comparable lexical boosts to structural priming had been largely ignored in past research.

“Indeed, our research reveals that repetition of any content word of a sentence — noun or verb — can boost the syntactic priming effect, and that the more words are repeated, the stronger syntactic priming turns out to be,” say the authors. In the target trials of their experiments, subjects were asked to produce sentences from randomly arranged words on screen; these target trials were preceded by prime trials in which subjects had to read out complete sentences. Across conditions, the authors systematically varied the numbers and types of content words shared between the primes and the targets.

These findings are of academic significance for theories of syntax and sentence production. “While there is consensus that the verb plays a pivotal role in determining the syntactic structure of a sentence, our research shows that the lexical boost to syntactic priming is not bound to repetition of verbs,” the researchers explain, adding, “Contrary to previously held views, the lexical boost effect is not a very good diagnostic of lexicalised syntax.”

Biased bots: Human prejudices sneak into artificial intelligence systems

In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, such as a preference for flowers over insects, to objectionable views of race and gender.

Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in doing online text searches, image categorization and automated translations.

“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”

The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.

As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.

Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.

The Princeton team devised an experiment with a program that essentially functioned as a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is of the sort that a startup machine learning company might use at the heart of its product. The GloVe algorithm represents the co-occurrence statistics of words in, say, a 10-word window of text: words that often appear near one another have a stronger association than words that seldom do.

The Stanford researchers turned GloVe loose on a huge trawl of contents from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
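The target-versus-attribute comparison described above can be sketched as a simple similarity test over word vectors. The tiny 3-dimensional vectors below are invented purely for illustration; real GloVe embeddings have hundreds of dimensions and are learned from corpus co-occurrence counts, and the published study used a more elaborate statistic with significance testing:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(word_vec, pleasant, unpleasant):
    """Mean similarity to the pleasant set minus the unpleasant set."""
    pos = sum(cosine(word_vec, p) for p in pleasant) / len(pleasant)
    neg = sum(cosine(word_vec, n) for n in unpleasant) / len(unpleasant)
    return pos - neg

# Toy 3-dimensional vectors, invented for illustration only.
vectors = {
    "rose":   [0.9, 0.1, 0.0],
    "moth":   [0.1, 0.9, 0.0],
    "love":   [0.8, 0.2, 0.1],
    "caress": [0.7, 0.1, 0.2],
    "filth":  [0.2, 0.8, 0.1],
    "ugly":   [0.1, 0.7, 0.3],
}
pleasant = [vectors["love"], vectors["caress"]]
unpleasant = [vectors["filth"], vectors["ugly"]]

print(association(vectors["rose"], pleasant, unpleasant) > 0)  # flower word leans pleasant
print(association(vectors["moth"], pleasant, unpleasant) < 0)  # insect word leans unpleasant
```

A positive score means a word sits closer, in the embedding space, to the pleasant attribute words than to the unpleasant ones, which is exactly the kind of differential association the researchers probed.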

In the results, innocent, inoffensive biases, like a preference for flowers over bugs, showed up, but so did examples along lines of gender and race. As it turned out, the Princeton machine learning experiment managed to replicate the broad substantiations of bias found over the years in select Implicit Association Test studies that relied on live, human subjects.

For instance, the machine learning program associated female names more with familial attribute words, like “parents” and “wedding,” than male names. In turn, male names had stronger associations with career attributes, like “professional” and “salary.” Of course, results such as these are often just objective reflections of the true, unequal distributions of occupation types with respect to gender — like how 77 percent of computer programmers are male, according to the U.S. Bureau of Labor Statistics.

Yet this correctly learned bias about occupations can end up having pernicious, sexist effects. An example arises when foreign languages are naively processed by machine learning programs, leading to gender-stereotyped sentences. The Turkish language uses a gender-neutral third-person pronoun, “o.” Plugged into the well-known online translation service Google Translate, however, the Turkish sentences “o bir doktor” and “o bir hemşire,” with this gender-neutral pronoun, are translated into English as “he is a doctor” and “she is a nurse.”

“This paper reiterates the important point that machine learning methods are not ‘objective’ or ‘unbiased’ just because they rely on mathematics and algorithms,” said Hanna Wallach, a senior researcher at Microsoft Research New York City, who was not involved in the study. “Rather, as long as they are trained using data from society and as long as society exhibits biases, these methods will likely reproduce these biases.”

Another objectionable example harkens back to a well-known 2004 paper by Marianne Bertrand of the University of Chicago Booth School of Business and Sendhil Mullainathan of Harvard University. The economists sent out close to 5,000 identical resumes to 1,300 job advertisements, changing only the applicants’ names to be either traditionally European American or African American. The former group was 50 percent more likely to be offered an interview than the latter. In an apparent corroboration of this bias, the new Princeton study demonstrated that a set of African American names had more unpleasantness associations than a European American set.

Computer programmers might hope to prevent cultural stereotype perpetuation through the development of explicit, mathematics-based instructions for the machine learning programs underlying AI systems. Not unlike how parents and mentors try to instill concepts of fairness and equality in children and students, coders could endeavor to make machines reflect the better angels of human nature.

“The biases that we studied in the paper are easy to overlook when designers are creating systems,” said Narayanan. “The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”


Study analyzes what ‘the’ and ‘a’ tell us about language acquisition

If you have the chance, listen to a toddler use the words “a” and “the” before a noun. Can you detect a pattern? Is he or she using those two words correctly?

And one more question: When kids start using language, how much of their know-how is intrinsic, and how much is acquired by listening to others speak?

Now a study co-authored by an MIT professor uses a new approach to shed more light on this matter — a central issue in the area of language acquisition.

The results suggest that experience is an important component of early-childhood language usage, although it doesn’t necessarily account for all of a child’s language facility. Moreover, the extent to which a child learns grammar by listening appears to change over time, with a large increase occurring around age 2 and a leveling off in subsequent years.

“In this view, adult-like, rule-based [linguistic] development is the end-product of a construction of knowledge,” says Roger Levy, an MIT professor and co-author of a new paper summarizing the study. Or, as the paper states, the findings are consistent with the idea that children “lack rich grammatical knowledge at the outset of language learning but rapidly begin to generalize on the basis of structural regularities in their input.”

The paper, “The Emergence of an Abstract Grammatical Category in Children’s Early Speech,” appears in the latest issue of Psychological Science. The authors are Levy, a professor in MIT’s Department of Brain and Cognitive Sciences; Stephan Meylan of the University of California at Berkeley; Michael Frank of Stanford University; and Brandon Roy of Stanford and the MIT Media Lab.

Learning curve

Studying how children use terms such as “a dog” or “the dog” correctly can be a productive approach to language acquisition, since children use the articles “a” and “the” relatively early in their lives and tend to use them correctly. Again, though: Is that understanding of grammar innate or acquired?

Some previous studies have examined this specific question by using an “overlap score,” that is, the proportion of nouns that children use with both “a” and “the,” out of all the nouns they use. When children use both terms correctly, it indicates they understand the grammatical difference between indefinite and definite articles, as opposed to cases where they may (incorrectly) think only one or the other is assigned to a particular noun.
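A minimal sketch of such an overlap score, assuming the child’s speech is already tokenized and counting only adjacent “a/the + noun” bigrams (a simplification of the coding schemes real studies use):

```python
def overlap_score(utterances):
    """Fraction of article-paired nouns the child has used with BOTH 'a' and 'the'.

    `utterances` is a list of token lists. Only two-word 'a/the + noun'
    bigrams are counted here, a deliberate simplification.
    """
    with_a, with_the = set(), set()
    for tokens in utterances:
        for det, noun in zip(tokens, tokens[1:]):
            if det == "a":
                with_a.add(noun)
            elif det == "the":
                with_the.add(noun)
    all_nouns = with_a | with_the
    if not all_nouns:
        return 0.0
    return len(with_a & with_the) / len(all_nouns)

speech = [
    ["i", "see", "a", "dog"],
    ["the", "dog", "runs"],
    ["a", "ball"],
]
# "dog" occurs with both articles; "ball" only with "a" -> score 0.5
print(overlap_score(speech))
```

A score near 1 suggests the child treats articles as freely combinable with nouns; a score near 0 is consistent with nouns being tied to a single memorized article.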

One potential drawback to this approach, however, is that the overlap score might change over time simply because a child might hear more article-noun pairings, without fully recognizing the grammatical distinction between articles.

By contrast, the current study builds a statistical model of language use that incorporates not only child language use but also adult language use recorded around children, from a variety of sources. Some of these are publicly available corpora of recordings of children and caregivers; others are records of individual children; and one source is the “Speechome” experiment conducted by Deb Roy of the MIT Media Lab, which features recordings of over 70 percent of his child’s waking hours.

The Speechome data, as the paper notes, provides some of the strongest evidence yet that “children’s syntactic productivity changes over development” — that younger children learn grammar from hearing it, and do so at different rates during different phases of early childhood.

“I think the method starts to get us traction on the problem,” Levy says. “We saw this as an opportunity both to use more comprehensive data and to develop new analytic techniques.”

A work in progress

Still, as the authors note, a second conclusion of the paper is that more basic data about language development is needed. Much of the available information is not comprehensive enough, and thus “likely not sufficient to yield precise developmental conclusions.”

And as Levy readily acknowledges, developing an airtight hypothesis about grammar acquisition is always likely to be a challenge.

“We’re never going to have an absolute complete record of everything a child has ever heard,” Levy says.

That makes it much harder to interpret the cognitive process leading to either correct or incorrect uses of, say, articles such as “a” and “the.” After all, if a child uses the phrase “a bus” correctly, it still might only be because that child has heard the phrase before and likes the way it sounds, not because he or she grasped the underlying grammar.

“Those things are very hard to tease apart, but that’s what we’re trying to do,” Levy says. “This is only really an initial step.”

Age at immigration influences occupational skill development

The future occupations of U.S. immigrant children are influenced by how similar their native language is to English, finds a new study by scholars at Duke University and the U.S. Naval Postgraduate School.

“The more difficult it is for the child to learn English, the more likely they will invest in math/logic and physical skills over communications skills,” said co-author Marcos Rangel, an assistant professor of public policy and economics at Duke’s Sanford School of Public Policy. “It is really a story about what skills people who immigrated as children develop given the costs and benefits associated with the learning processes.”

Two factors strongly influence the skills immigrants use as adults, researchers found: immigration before the age of 10, and whether immigrants’ native language is linguistically distant from English.

Immigrants who arrive before the age of 10 pursue occupations very similar to those pursued by native-born Americans. They develop the same range of skills as their native-born peers, including communication, math/logic, socio-emotional and physical skills.

But for those who are older when they immigrate, the picture is different. After age 10, learning a second language is more difficult, and a child’s particular linguistic background matters more. Some languages, such as Vietnamese, are linguistically very distant from English. Children who speak those languages are more likely to major in science, technology, engineering and math (STEM) fields than those whose native language is linguistically close to English, such as German.

“Late arrivals from English-distant countries develop a comparative advantage in math/logic, socio-emotional and physical skills relative to communication skills, which ultimately generates the occupational segregation we are used to seeing in the labor market,” Rangel said.

The choice of majors made it clear that where these immigrants ended up in the labor market was not just because of different treatment by employers. It was also due to “the way the immigrants themselves look ahead and invest their time in becoming skilled in different tasks,” Rangel said.

The study, published online in the journal Demography on March 20, provides insight into why “some U.S. immigrants find it more attractive to invest in brawn rather than in brain, in mathematics rather than in poetry, in science rather than in history,” write Rangel and co-author Marigee Bacolod, associate professor of economics at the U.S. Naval Postgraduate School’s Graduate School of Business and Public Policy.

“Public policy designed to improve the education of English-learners could potentially have distinct long-lasting effects over the assimilation of immigrant children and over the future distribution of skills within the U.S. workforce,” the authors conclude.

The researchers used data from the 1990 and 2000 U.S. Censuses, the 2009 to 2013 American Community Survey, the Dictionary of Occupational Titles and the Occupational Information Network. They also used a measure developed by the Max Planck Institute of Evolutionary Anthropology to determine the distance of a language from English.

Story Source:

Materials provided by Duke University. Note: Content may be edited for style and length.

Happy spouse, happy house

Achieving marital quality could seem daunting, even impossible to any couple, let alone a couple in which one of the partners is dealing with a serious illness. But a new study by Megan Robbins, psychology professor at the University of California, Riverside, may hold the answer.

Achieving marital quality could be as simple as using the right words and finding balance, the study asserts. The results show that use of the pronouns “I,” “me,” and “my” by the spouse, and “you” and “your” by the patient, reflects positive marriage quality.

In the paper, “Everyday Emotion Word and Personal Pronoun Use Reflects Dyadic Adjustment Among Couples Coping with Breast Cancer,” published in the journal Personal Relationships, Robbins and graduate students Alex Karan and Robert Wright analyzed 52 couples coping with breast cancer. The couples went home with an “Electronically Activated Recorder,” or “EAR,” that recorded 50 seconds of sound every nine minutes. Except for sleeping hours, they wore the EAR for a weekend (Friday through Sunday). The researchers analyzed conversations that did not concentrate on cancer, called “normal conversations,” which made up 95 percent of the couples’ daily conversations.

The authors focused on participants’ use of first-person singular (e.g., “I,” “me”), and second-person (e.g., “you,” “your”) pronouns. Their analysis also focused on each participant’s positive emotion words (e.g. care, love), anxiety words (e.g. worry, stress), anger words (e.g. hate, resent), sadness words (e.g. cry, woe), and a category of negative emotion words that did not contain the words above.
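In the spirit of this kind of category counting, here is a minimal sketch. The tiny word lists below are illustrative stand-ins, not the validated lexicon the study would have used:

```python
# Hypothetical mini-lexicon; real analyses use validated word lists
# with many entries per category.
CATEGORIES = {
    "first_person": {"i", "me", "my"},
    "second_person": {"you", "your"},
    "positive_emotion": {"care", "love"},
    "anxiety": {"worry", "stress"},
}

def category_counts(transcript):
    """Count how many words in a transcript fall into each category."""
    words = transcript.lower().split()
    return {
        name: sum(w.strip(".,!?") in vocab for w in words)
        for name, vocab in CATEGORIES.items()
    }

snippet = "I love how you handled my stress today."
print(category_counts(snippet))
```

Category counts like these, tallied per speaker across a weekend of recordings, are the raw material that gets correlated with the couples’ marital-quality scores.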

“It may seem like an insignificant thing, but our research shows words can reflect important differences among romantic relationships,” Robbins said. “Spouses’ use of first-person singular pronouns, and patients’ use of second-person pronouns, was positively related to better marital quality for both partners as the focus wasn’t always on the patient. So, it reflects balance and interdependency between partners.

“Personal pronoun use can tell us who the individual is focusing on, and how he or she construes themselves within the relationship,” Robbins said. “It seems like a small word, but it says a lot about the relationship during a trying time. We found that focus on the spouse, rather than on the patient, lent to better marital quality for both partners. It was an indicator for us that the couple thought of themselves as a team, or a unit — not exclusively focusing on the patient.”

The researchers also found that positive emotion words were positively associated with marital quality, while negative emotion word use was associated with poorer marital quality.

Story Source:

Materials provided by University of California – Riverside. Original written by Mojgan Sherkat. Note: Content may be edited for style and length.


The way the brain processes speech could serve as a predictor of early dementia

Early dementia is typically associated with memory and thinking problems, but older adults should also be vigilant about hearing and communication problems, suggest recent findings from a joint Baycrest-University of Memphis study.

Among older adults who scored below the normal benchmark on a dementia screening test but had no noticeable communication problems, scientists have discovered a potential predictor of early dementia: abnormal function in the brain regions that process speech (the brainstem and auditory cortex).

These brain regions are thought to be more resilient to Alzheimer’s. This discovery, however, demonstrates that changes occur early in the brain’s conversion of speech sounds into understandable words. The finding could be the first sign of communication-related decline in brain function, presenting itself before individuals become aware of any problems.

Their research technique of measuring electrical brain activity using an electroencephalogram (EEG) in these brain regions also predicted mild cognitive impairment (MCI), a condition that is likely to develop into Alzheimer’s, with 80 per cent accuracy. This test could be developed into a cost-effective and objective diagnostic assessment for older adults.

The study, published online in the Journal of Neuroscience ahead of print publication, looked at older adults with similar hearing acuity and no known history of neurological or psychiatric illness.

The brain activity within the brainstem of these older adults demonstrated abnormally large speech sound processing within seven to 10 milliseconds of the signal hitting the ear, which could be a sign of greater communication problems in the future.

“This opens a new door in identifying biological markers for dementia since we might consider using the brain’s processing of speech sounds as a new way to detect the disease earlier,” says Dr. Claude Alain, the study’s senior author and senior scientist at Baycrest’s Rotman Research Institute (RRI) and professor at the University of Toronto’s psychology department.

“Losing the ability to communicate is devastating and this finding could lead to the development of targeted treatments or interventions to maintain this capability and slow progression of the disease.”

The study involved 23 older adults between the ages of 52 and 86. Participants were separated into two groups based on their results on a dementia screening test, the Montreal Cognitive Assessment (MoCA). Researchers measured brain activity in the brainstem while participants were watching a video. They measured brain activity in the auditory cortex while participants were identifying vowel sounds. Statistical methods were used to combine both sets of brain activity to predict MCI.

“When we hear a sound, the normal aging brain keeps the sound in check during processing, but those with MCI have lost this inhibition, and it was as if the flood gates were open, since their neural responses to the same sounds were over-exaggerated,” says Dr. Gavin Bidelman, first author on the study, a former RRI post-doctoral fellow and assistant professor at the University of Memphis. “This functional biomarker could help identify people who should be monitored more closely for their risk of developing dementia.”

The next steps involve studying whether those individuals who already have dementia or convert early from MCI to dementia also demonstrate these same changes in brain activity when they hear speech.

Research for this study was conducted with support from the Grammy Foundation, the Canadian Institutes of Health Research, the FedEx Institute of Technology and the Center for Technologies and Research in Alzheimer’s Care, which supported the staff and equipment needed to conduct the study.

With additional funds, researchers could explore developing a portable, reliable and easy-to-use alternate diagnostic test for MCI that incorporates the body’s different senses.

“MCI is known to cause changes in different senses, such as vision or touch,” says Dr. Alain. “If we could incorporate these changes into a wireless EEG test, we could combine all this information and develop a better biomarker. One day, doctors could administer a short, 10-minute assessment and instantly provide results.”

“This could offer a new diagnostic assessment that tests a person’s cognitive abilities, such as their ability to communicate, and objectively measure physiological changes in the brain that reflect early signs of dementia,” says Dr. Bidelman.