Why do we choose to get vaccinations?

Since vaccines protect not only those who receive them, but also the people they might otherwise have infected, there are many plausible motives for choosing to get vaccinated. Apart from the most obvious one, wanting to protect oneself or one's children from illness, research shows that many people are also motivated by concern for others.

But if you care about others, who exactly do you care about? In his doctoral thesis in political science, Rafael Ahlskog has studied the distinction between narrow and wide caring for others, or altruism. Narrow altruism includes those nearest to us, family and friends, while wide altruism can include strangers you have never met, people living far away, or people very different from yourself: in short, a broader form of social caring. The results from a number of survey experiments show that both types of altruism can affect our willingness to get vaccinated, but in different people.

“Before you have a family and children, a broader form of caring seems to affect decisions to vaccinate, but this caring gives way to the narrower form when family and children become part of the picture,” says Rafael Ahlskog.

This knowledge could play an important role in the design of future vaccination campaigns. It also highlights a deeper evolutionary logic that sometimes governs modern humans: as social beings, we can afford, in the right circumstances, to take a broader societal context into account, but when we get the chance to invest in the evolutionary 'core values' of survival and procreation, that larger context is easily forgotten.


Story Source:

Materials provided by Uppsala University. Note: Content may be edited for style and length.

Man with quadriplegia employs injury bridging technologies to move again — just by thinking

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies.

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.

Holding a makeshift handle pierced through a dry sponge, Kochevar scratched the side of his nose with the sponge. He scooped forkfuls of mashed potatoes from a bowl — perhaps his top goal — and savored each mouthful.

“For somebody who’s been injured eight years and couldn’t move, being able to move just that little bit is awesome to me,” said Kochevar, 56, of Cleveland. “It’s better than I thought it would be.”

A video of Kochevar can be found at: https://youtu.be/OHsFkqSM7-A

Kochevar is the focal point of research led by Case Western Reserve University, the Cleveland Functional Electrical Stimulation (FES) Center at the Louis Stokes Cleveland VA Medical Center and University Hospitals Cleveland Medical Center (UH). A study of the work will be published in The Lancet on March 28 at 6:30 p.m. U.S. Eastern time.

“He’s really breaking ground for the spinal cord injury community,” said Bob Kirsch, chair of Case Western Reserve’s Department of Biomedical Engineering, executive director of the FES Center and principal investigator (PI) and senior author of the research. “This is a major step toward restoring some independence.”

When asked, people with quadriplegia say their first priority is to scratch an itch, feed themselves or perform other simple functions with their arm and hand, instead of relying on caregivers.

“By taking the brain signals generated when Bill attempts to move, and using them to control the stimulation of his arm and hand, he was able to perform personal functions that were important to him,” said Bolu Ajiboye, assistant professor of biomedical engineering and lead study author.

Technology and training

The research with Kochevar is part of the ongoing BrainGate2* pilot clinical trial being conducted by a consortium of academic and VA institutions assessing the safety and feasibility of the implanted brain-computer interface (BCI) system in people with paralysis. Other investigational BrainGate research has shown that people with paralysis can control a cursor on a computer screen or a robotic arm.

“Every day, most of us take for granted that when we will to move, we can move any part of our body with precision and control in multiple directions; those with traumatic spinal cord injury or any other form of paralysis cannot,” said Benjamin Walter, associate professor of Neurology at Case Western Reserve School of Medicine, Clinical PI of the Cleveland BrainGate2 trial and medical director of the Deep Brain Stimulation Program at UH Cleveland Medical Center.

“The ultimate hope of any of these individuals is to restore this function,” Walter said. “By restoring the communication of the will to move from the brain directly to the body this work will hopefully begin to restore the hope of millions of paralyzed individuals that someday they will be able to move freely again.”

Jonathan Miller, assistant professor of neurosurgery at Case Western Reserve School of Medicine and director of the Functional and Restorative Neurosurgery Center at UH, led a team of surgeons who implanted two 96-channel electrode arrays — each about the size of a baby aspirin — in Kochevar’s motor cortex, on the surface of the brain.

The arrays record brain signals created when Kochevar imagines movement of his own arm and hand. The brain-computer interface extracts information from the brain signals about what movements he intends to make, then passes the information to command the electrical stimulation system.

To prepare him to use his arm again, Kochevar first learned how to use his brain signals to move a virtual-reality arm on a computer screen.

“He was able to do it within a few minutes,” Kirsch said. “The code was still in his brain.”

As Kochevar’s ability to move the virtual arm improved through four months of training, the researchers believed he would be capable of controlling his own arm and hand.

Miller then led a team that implanted the FES system's 36 electrodes, which animate muscles in the upper and lower arm.

The BCI decodes the recorded brain signals into the intended movement command, which is then converted by the FES system into patterns of electrical pulses.

The pulses sent through the FES electrodes trigger the muscles controlling Kochevar’s hand, wrist, arm, elbow and shoulder. To overcome gravity that would otherwise prevent him from raising his arm and reaching, Kochevar uses a mobile arm support, which is also under his brain’s control.
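The decode-and-stimulate loop described above can be sketched in a few lines. This is a purely illustrative toy, not the study's actual software: the linear decoder, the fixed stimulation map, and all numbers are assumptions; only the channel counts (two 96-channel recording arrays, 36 FES electrodes) come from the article.

```python
import numpy as np

# Hypothetical sketch of a BCI decode-and-stimulate loop.
N_CHANNELS = 192          # two 96-channel recording arrays
N_MUSCLE_ELECTRODES = 36  # FES electrodes in the upper and lower arm

rng = np.random.default_rng(0)

# A linear decoder mapping per-channel firing rates to an intended
# 3-D movement command -- one common BCI decoding approach (assumed here).
decoder_weights = rng.normal(size=(3, N_CHANNELS))

# A fixed map from the movement command to per-electrode stimulation
# intensities; real systems use calibrated, muscle-specific patterns.
stim_map = rng.uniform(0, 1, size=(N_MUSCLE_ELECTRODES, 3))

def decode_intent(firing_rates: np.ndarray) -> np.ndarray:
    """Turn a vector of per-channel firing rates into a movement command."""
    return decoder_weights @ firing_rates

def command_to_pulses(intent: np.ndarray, max_amp_ma: float = 20.0) -> np.ndarray:
    """Convert the decoded command into per-electrode pulse amplitudes (mA),
    clipped to a safe stimulation range."""
    return np.clip(stim_map @ intent, 0.0, max_amp_ma)

firing_rates = rng.poisson(10, size=N_CHANNELS).astype(float)
pulses = command_to_pulses(decode_intent(firing_rates))
print(pulses.shape)  # one amplitude per FES electrode
```

In the real system this loop runs continuously, so the decoded command and the resulting stimulation pattern are updated many times per second as the user's intent changes.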

New Capabilities

Eight years of muscle atrophy required rehabilitation. The researchers exercised Kochevar’s arm and hand with cyclical electrical stimulation patterns. Over 45 weeks, his strength, range of motion and endurance improved. As he practiced movements, the researchers adjusted stimulation patterns to further his abilities.

Kochevar can make each joint in his right arm move individually. Or, by simply thinking about a task such as feeding himself or getting a drink, he can activate the muscles in a coordinated fashion.

When asked to describe how he commanded the arm movements, Kochevar told investigators, “I’m making it move without having to really concentrate hard at it…I just think ‘out’…and it goes.”

Kochevar is fitted with temporarily implanted FES technology that has a track record of reliable use in people. Together, the BCI and FES systems represent an early feasibility demonstration that gives the research team insight into the potential future benefit of the combined system.

Advances needed to make the combined technology usable outside of a lab are not far from reality, the researchers say. Work is underway to make the brain implant wireless, and the investigators are improving decoding and stimulation patterns needed to make movements more precise. Fully implantable FES systems have already been developed and are also being tested in separate clinical research.

Kochevar welcomes new technology — even if it requires more surgery — that will enable him to move better. “This won’t replace caregivers,” he said. “But, in the long term, people will be able, in a limited way, to do more for themselves.”

The investigational BrainGate technology was initially developed in the Brown University laboratory of John Donoghue, now the founding director of the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland. The implanted recording electrodes are known as the Utah array, originally designed by Richard Normann, Emeritus Distinguished Professor of Bioengineering at the University of Utah.

The report in today’s Lancet is the result of a long-running collaboration between Kirsch, Ajiboye and the multi-institutional BrainGate consortium. Leigh Hochberg, MD, PhD, a neurologist and neuroengineer at Massachusetts General Hospital, Brown University and the VA RR&D Center for Neurorestoration and Neurotechnology in Providence, Rhode Island, directs the pilot clinical trial of the BrainGate system and is a study co-author.

“It’s been so inspiring to watch Mr. Kochevar move his own arm and hand just by thinking about it,” Hochberg said. “As an extraordinary participant in this research, he’s teaching us how to design a new generation of neurotechnologies that we all hope will one day restore mobility and independence for people with paralysis.”

Other researchers involved with the study include: Francis R. Willett, Daniel Young, William Memberg, Brian Murphy, PhD, and P. Hunter Peckham, PhD, from Case Western Reserve; Jennifer Sweet, MD, from UH; Harry Hoyen, MD,and Michael Keith, MD, from MetroHealth Medical Center and CWRU School of Medicine; and John Simeral, PhD from Brown University and Providence VA Medical Center.

*CAUTION: Investigational Device. Limited by Federal Law to Investigational Use.

Brain stimulation improves schizophrenia-like cognitive problems

“A beautiful, lobular structure,” is how Krystal Parker describes the cerebellum — a brain region located at the base of the skull just above the spinal column. The cerebellum is most commonly associated with movement control, but work from Parker’s lab and others is gradually revealing a much more complex role in cognition that positions the cerebellum as a potential target for treating diseases that affect thinking, attention, and planning, such as schizophrenia.

A new study from Parker’s lab and the lab of Nandakumar Narayanan at the University of Iowa Carver College of Medicine finds that stimulating the cerebellum in rats with schizophrenia-like thinking problems normalizes brain activity in the frontal cortex and corrects the rats’ ability to estimate the passage of time — a cognitive deficit that is characteristic in people with schizophrenia.

“Cerebellar interactions with the frontal cortex in cognitive processes have never been shown before in animal models,” says Parker, UI assistant professor of psychiatry and the first faculty hire of the new Iowa Neuroscience Institute. “In addition to showing that the signal travels from the cerebellum to the frontal cortex, the study also showed that normal timing behavior was rescued when the signal was restored.”

The UI study, which was published March 28 online in the journal Molecular Psychiatry, adds to the accumulating evidence, including recent human studies from Harvard University, that suggests cerebellar stimulation might help improve cognitive problems in patients with schizophrenia.

Schizophrenia is a serious and debilitating psychiatric illness that disrupts a person’s ability to think and to understand the world around them. About 1 percent of the population is affected by schizophrenia. There is no cure and few therapies reliably improve the condition’s cognitive problems.

Knowing that the frontal cortex is essential for cognitive function, the researchers recorded brain activity from the frontal cortex of nine patients with schizophrenia and nine healthy controls while the participants performed a timing task in which they had to estimate the passage of 12 seconds.

“We think timing is a window into cognitive function,” Parker explains. “It allows us to probe executive processes like working memory, attention, planning — all those things are abnormal in schizophrenia.”

Compared to healthy individuals, patients with schizophrenia performed poorly on the timing task. They also lacked a low frequency burst of brain activity (the delta brain wave) that occurs right at the start of the trial in healthy subjects.

To probe the brain circuitry involved in this signal, and to assess the role of the cerebellum, the team turned to an animal model. In schizophrenia, dopamine signaling in the frontal cortex is abnormal. By blocking dopamine signaling in the frontal cortex of rats, the team was able to reproduce the schizophrenia-like timing problems in the animals.

Recordings of neural activity in the frontal cortex of the rats showed that, like humans with schizophrenia, these rats also lacked the low frequency burst of brain activity (delta wave) during the timing task. The study also showed that, in control rats, the same delta wave activity occurred in the rat’s cerebellum during the timing task and, interestingly, the cerebellar activity preceded the activity in the frontal cortex.

“We think that delta wave burst of activity acts like a ‘go’ signal that triggers individual neurons to start ramping their activity to encode the passage of time,” Parker explains. “That happens in both the frontal cortex and the cerebellum, but there is synchrony between the two and the cerebellum is actually leading the frontal cortex and providing the signal to the frontal cortex.”
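The lead-lag relationship described above — the cerebellar delta wave preceding the frontal one — can be quantified by finding the time shift that maximizes the correlation between the two signals. The sketch below is illustrative only, using synthetic 2 Hz signals with an assumed 50 ms lead, not the study's recordings or analysis code.

```python
import numpy as np

# Illustrative lead-lag estimation on synthetic delta-band signals.
fs = 1000                       # sampling rate, Hz (assumed)
t = np.arange(0, 5, 1 / fs)     # 5 s of data
rng = np.random.default_rng(1)

# A noisy 2 Hz "cerebellar" oscillation...
cerebellum = np.sin(2 * np.pi * 2 * t) + 0.1 * rng.normal(size=t.size)
# ...and a "frontal" copy that lags it by 50 ms (assumption for the demo).
lead_ms = 50
shift = int(lead_ms * fs / 1000)
frontal = np.roll(cerebellum, shift) + 0.1 * rng.normal(size=t.size)

def best_lag(leader, follower, max_lag):
    """Lag (in samples) at which `leader` best predicts `follower`."""
    lags = list(range(-max_lag, max_lag + 1))
    corrs = [np.corrcoef(np.roll(leader, lag), follower)[0, 1] for lag in lags]
    return lags[int(np.argmax(corrs))]

print(best_lag(cerebellum, frontal, max_lag=100))  # ≈ 50 samples, i.e. 50 ms
```

A positive best-fit lag of the cerebellar signal relative to the frontal one is the kind of evidence consistent with the cerebellum leading the frontal cortex.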

Finally, the researchers used optogenetics to stimulate the rats' cerebellar region at the precise delta wave frequency of 2 Hertz. This stimulation restored normal delta wave activity in the rats' frontal cortex and normalized the rats' performance on the timing test.

The findings explain how cerebellar stimulation might have a therapeutic benefit in schizophrenia. Parker adds that the research may also inspire novel cerebellar targeted pharmacological treatments for schizophrenia.

Non-invasive brain stimulation is currently approved as a treatment for depression. However, cerebellar stimulation is still an experimental approach and is not FDA approved as a therapy. Early experimental studies from Harvard in patients with schizophrenia suggest that cerebellar stimulation is safe and appears to improve some of the patients’ cognitive abnormalities. Parker is currently pursuing human testing of non-invasive cerebellar stimulation at the UI in collaboration with Aaron Boes, director of the Iowa Brain Stimulation Program. They are working to understand how cerebellar stimulation influences cognitive function and frontal cortex activity in patients with schizophrenia.

Although the current study focused on schizophrenia, similar cognitive problems along with cerebellar abnormalities also are a feature of autism, Parkinson’s disease, addiction, OCD, bipolar disorder, and depression. If cerebellar stimulation proves helpful for schizophrenia, it might also be beneficial for some patients with these other conditions.

MicroRNA treatment restores nerve insulation, limb function in mice with MS

Scientists partially re-insulated ravaged nerves in mouse models of multiple sclerosis (MS) and restored limb mobility by treating the animals with a small non-coding RNA called a microRNA.

In a study published online March 27 in Developmental Cell, researchers at Cincinnati Children’s Hospital Medical Center report that treatment with a microRNA called miR-219 restarted production of a substance called myelin in nerves of the central nervous system. Myelin forms a protective sheath around nerves, allowing them to efficiently transmit electrical impulses that stimulate movement.

Study authors administered miR-219 into the spinal columns and cerebrospinal fluid of mice whose nerve coatings had been damaged either by a chemical called lysolecithin or by induced autoimmune encephalomyelitis, which is used to model MS. Treatment with miR-219 reinvigorated the function of damaged myelin-producing cells called oligodendrocytes, allowing the myelin to reform and reinsulate the nerves.

“We show that miR-219 targets multiple processes that inhibit myelin formation after nerve injury by the disease process, and that treatment with this microRNA partially restores myelination and limb function,” said Q. Richard Lu, PhD, lead investigator and scientific director of the Brain Tumor Center at Cincinnati Children’s. “It is conceivable that augmenting miR-219 treatment with other blockers of myelin regrowth may provide a multipoint treatment strategy for people with demyelinating diseases like MS.”

The authors stress that because their study was conducted in laboratory mouse models of disease, their data cannot at this stage be applied to clinical treatment in humans.

Lu’s laboratory studies how certain glial cell subtypes of the central and peripheral nervous system form, how they participate in regeneration, and how they can transform into cancerous cells.

Molecular Silencer

MicroRNAs are short segments of RNA encoded on the chromosomes of cells. They regulate gene expression in cells by acting as molecular silencers, essentially blocking gene expression in certain situations.

A number of earlier research papers have pointed to the absence of miR-219 in damaged nerves and tissues in certain neurodegenerative diseases such as multiple sclerosis.

Lu and his colleagues tested the presence and effects of miR-219 in genetically engineered mouse models of MS in which nerve-coating damage was induced by lysolecithin or by autoimmune encephalomyelitis. They also deleted miR-219 in mice to test the impact this had on myelin-forming oligodendrocyte cells.

The absence of miR-219 allowed a surge of activity by several inhibitors of nerve re-myelination — including a protein called Lingo1. Further testing revealed that miR-219 is an essential part of a network that targets and blocks molecules that inhibit the ability of oligodendrocytes to form myelin.

This prompted the researchers to test treatment with miR-219 in their animal models. For this they used a miR-219 mimic — essentially a synthesized version of the microRNA. After administering the mimic to their mouse models, the researchers noted improved limb function and regeneration of the myelin coating on nerves.

Next steps

Lu and his colleagues are now trying to develop additional mimics of miR-219 and therapeutically effective formulations of the microRNA to ease its delivery — particularly into brain tissue. The researchers also continue to test the potential effectiveness of miR-219 treatment in different models of neurodegenerative disease.

Story Source:

Materials provided by Cincinnati Children’s Hospital Medical Center. Note: Content may be edited for style and length.


People who watch entertainment TV are more likely to vote for populist politicians

People exposed to entertainment television are more likely to vote for populist politicians, according to a new study co-authored by an economist at Queen Mary University of London.

The researchers investigated the political impact of entertainment television in Italy over the last 30 years during the phased introduction of Silvio Berlusconi’s commercial TV network Mediaset.

They compared the voting behaviours of people who lived in regions where Mediaset was broadcast versus those where Berlusconi’s network was unavailable. The researchers found that people who had access to Mediaset prior to 1985 — when the network only featured light entertainment — voted on average 1 percentage point more for Berlusconi’s Forza Italia party, compared to municipalities that were exposed later as the network rolled out.

The researchers found that the effect persisted for almost two decades and five elections. It is especially pronounced among older people and young people, although it affected these groups in very different ways.

Author Dr Andrea Tesei from QMUL’s School of Economics and Finance said: “Our results suggest that individuals exposed to entertainment TV as children are less cognitively sophisticated and less socio-politically engaged as adults, and ultimately more vulnerable to Berlusconi’s populist rhetoric. Older people, on the other hand, appear to have been hooked by the light entertainment Mediaset provided and were later exposed to biased news content on the same channels.”

Less educated people (high-school dropouts in this case) who were exposed to entertainment TV voted three percentage points more for Forza Italia than their non-exposed counterparts (i.e. high-school dropouts in municipalities where Mediaset wasn’t available). People exposed to entertainment TV as children voted almost eight percentage points more for Berlusconi, compared to same-age individuals who were exposed later.

The researchers found that people who are exposed to entertainment TV as children are cognitively disadvantaged in later life. Those who were exposed as children score five per cent worse than their non-exposed peers in cognitive tests as adults; they are also 13 per cent less likely to report an interest in politics, and 10 per cent less likely to be involved in a voluntary group.

The researchers found an even stronger effect among people who were already older (55+) when first exposed to entertainment TV. This group voted on average 10 percentage points more for Forza Italia than non-exposed voters of the same age. The study also found that older people exposed to entertainment TV during the 1980s are 16 per cent more likely to report watching news on Mediaset channels; Mediaset news, introduced in 1992, was traditionally slanted in favor of Berlusconi.

The researchers also found that exposure to entertainment TV does not just increase support for Berlusconi but also for other parties with similar populist features. Indeed, early access to Mediaset appears also to be associated with higher support for the Five-Star-Movement — led by former comedian Beppe Grillo — which first fielded candidates in 2013. These results suggest that Mediaset influenced political attitudes and voting behavior beyond its effect on Berlusconi’s party. In particular, the results suggest a relationship between exposure to light-fare TV and preferences for populist parties and leaders.

The researchers used a combination of research methods, including engineer-developed software to simulate TV signal propagation, econometric analysis based on municipal-level election data, and geo-referenced survey data.

Their results were significant and held up when controlling for geographical and socio-economic characteristics at the municipal level.

Dr Tesei said: “Our results suggest that entertainment content can influence political attitudes, creating a fertile ground for the spread of populist messages. It’s the first major study to investigate the political effect of exposure among voters to a diet of ‘light’ entertainment. The results are timely as the United States adjusts to the Presidency of Donald Trump.”

The co-authors were Ruben Durante (Universitat Pompeu Fabra and Sciences Po) and Paolo Pinotti from Bocconi University. The research is published in a School of Economics and Finance (Queen Mary University of London) working paper.

Control factors:

The researchers first controlled for local measures of education and economic activity. Second, they showed there are no pre-existing trends in voting for any political party before the introduction of Mediaset at the municipal level. Third, their methodology relies only on the signal intensity that is due to exogenous geographical characteristics (mountains, valleys).

The presence or absence of mountains interfering with the line of transmission between the municipality and the antenna is a matter of luck and is unrelated to any characteristic of the municipality. This effectively corresponds to exposing individuals (i.e. municipalities) to a random treatment (i.e. exposure to entertainment TV) and identifies the causal effect of exposure to entertainment TV on voting.
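The identification logic described above can be illustrated with a toy simulation: when exposure is assigned by terrain, independently of municipal characteristics, a simple regression of vote share on exposure recovers the causal effect. Everything below is invented for illustration (the sample size, noise level, and 1-point effect taken from the article's headline finding); it is not the paper's data or model.

```python
import numpy as np

# Toy simulation of terrain-driven, as-good-as-random TV exposure.
rng = np.random.default_rng(2)
n = 5000  # hypothetical municipalities

# Terrain blocks the signal in some municipalities, purely by chance.
mountain_blocked = rng.random(n) < 0.4
exposed = (~mountain_blocked).astype(float)

# Simulated vote share: baseline + 1-point exposure effect + noise.
vote_share = 30 + 1.0 * exposed + rng.normal(0, 5, n)

# OLS of vote share on exposure recovers the treatment effect because
# exposure is, by construction, independent of municipal traits.
X = np.column_stack([np.ones(n), exposed])
beta, *_ = np.linalg.lstsq(X, vote_share, rcond=None)
print(round(float(beta[1]), 1))  # close to the true effect of 1.0
```

In the actual study the "coin flip" is replaced by engineer-developed signal-propagation software that predicts, from terrain alone, which municipalities could receive Mediaset early.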

Find the paper online at: http://media.wix.com/ugd/a37348_243705eb069246ddadf39e773005a39a.pdf


Scientists get closer look at living nerve synapses

The brain hosts an extraordinarily complex network of interconnected nerve cells that are constantly exchanging electrical and chemical signals at speeds difficult to comprehend. Now, scientists at Washington University School of Medicine in St. Louis report they have been able to achieve — with a custom-built microscope — the closest view yet of living nerve synapses.

Understanding the detailed workings of a synapse — the junction between neurons that governs how these cells communicate with each other — is vital for modeling brain networks and understanding how diseases as diverse as depression, Alzheimer’s or schizophrenia may affect brain function, according to the researchers.

The study is published March 23 in the journal Neuron.

Studying active rat neurons, even those growing in a dish, is a challenge because they are so small. Further, they move, making it difficult to keep them in focus at high magnifications under a light microscope.

“Synapses are little nanoscale machines that transmit information,” said senior author Vitaly A. Klyachko, PhD, an associate professor of cell biology and physiology at the School of Medicine. “They’re very difficult to study because their scale is below what conventional light microscopes can resolve. So what is happening in the active zone of a synapse looks like a blur.

“To remedy this, our custom-built microscope has a very sensitive camera and is extremely stable at body temperatures, but most of the novelty comes from the analysis of the images,” he added. “Our approach gives us the ability to resolve events in the synapse with high precision.”

Until now, close-up views of the active zone have been provided by electron microscopes. While they offer resolutions of mere tens of nanometers — about 1,000 times thinner than a human hair — electron microscopes can’t view living cells. To withstand bombardment by electrons, samples must be fixed in an epoxy resin or flash frozen, cut into extremely thin slices and coated in a layer of metal atoms.

“Most of what we know about the active zone is from indirect studies, including beautiful electron microscopy images,” said Klyachko, also an associate professor of biomedical engineering at the School of Engineering & Applied Science. “But these are static pictures. We wanted to develop a way to see the synapse function.”

A synapse consists of a tiny gap between two nerves, with one nerve serving as the transmitter and the other as the receiver. When sending signals, the transmitting side of the synapse releases little packages of neurotransmitters, which traverse the gap and bind to receptors on the receiving side, completing the information relay. On the transmitting side of the synapse the neurotransmitters at the active zone are packaged into synaptic vesicles.

“One of the most fundamental questions is: Are there many places at the active zone where a vesicle can release its neurotransmitters into the gap, or is there only one?” Klyachko said. “A lot of indirect measurements suggested there might be only one, or maybe two to three, at most.”

In other words, if the active zone could be compared to a shower head, the question would be whether it functions more as a single jet or as a rain shower.

Klyachko and first author Dario Maschi, PhD, a postdoctoral researcher, showed that the active zone is more of a rain shower. But it’s not a random shower; there are about 10 locations dotted across the active zone that are reused too often to be left to chance. They also found there is a limit to how quickly these sites can be reused — about 100 milliseconds must pass before an individual site can be used again. And at higher rates of vesicle release, the site usage tends to move from the center to the periphery of the active zone.
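The claim that certain release sites "are reused too often to be left to chance" can be tested with a simple Monte Carlo comparison: measure how concentrated the observed release events are across sites, then ask how often purely uniform random placement would produce concentration that extreme. The sketch below uses invented event counts and site preferences; it illustrates the statistical idea, not the paper's actual analysis.

```python
import numpy as np

# Monte Carlo test for non-random reuse of ~10 release sites (toy data).
rng = np.random.default_rng(3)
n_sites, n_events = 10, 200

# Simulated "observed" usage: a few sites preferred, mimicking reuse.
site_probs = np.array([3, 3, 2, 2, 1, 1, 1, 1, 1, 1]) / 16
observed = rng.choice(n_sites, size=n_events, p=site_probs)
obs_max = np.bincount(observed, minlength=n_sites).max()

# Null model: every release event lands on a uniformly random site.
null_max = np.array([
    np.bincount(rng.integers(0, n_sites, n_events), minlength=n_sites).max()
    for _ in range(2000)
])

# Fraction of null runs at least as concentrated as the observation.
p_value = float(np.mean(null_max >= obs_max))
print(p_value < 0.05)  # preferred-site reuse is unlikely under the null
```

A small p-value here means the busiest site is used far more than uniform chance allows, which is the statistical signature of dedicated, reused release sites.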

“Neurons often fire at 50 to 100 times per second, so it makes sense to have multiple sites,” Klyachko said. “If one site has just been used, the active zone can still be transmitting signals through its other sites.

“We’re studying the most basic machinery of the brain,” he added. “Our data suggest these machines are extremely fine-tuned — even subtle modulations may lead to disease. But before we can study disease, we need to understand how healthy synapses work.”


Untreated sleep apnea in children can harm brain cells tied to cognition and mood

A study comparing children between 7 and 11 years of age who have moderate or severe obstructive sleep apnea with children of the same age who slept normally found significant reductions of gray matter — brain cells involved in movement, memory, emotions, speech, perception, decision-making and self-control — in several regions of the brains of children with sleep apnea.

The finding points to a strong connection between this common sleep disturbance, which affects up to five percent of all children, and the loss of neurons or delayed neuronal growth in the developing brain. This extensive reduction of gray matter in children with a treatable disorder provides one more reason for parents of children with symptoms of sleep apnea to consider early detection and therapy.

“The images of gray matter changes are striking,” said one of the study’s senior authors, Leila Kheirandish-Gozal, MD, director of pediatric clinical sleep research at the University of Chicago. “We do not yet have a precise guide to correlate loss of gray matter with specific cognitive deficits, but there is clear evidence of widespread neuronal damage or loss compared to the general population.”

For this study, published March 17, 2017, in the journal Scientific Reports, the researchers recruited 16 children with obstructive sleep apnea (OSA). The children’s sleep patterns were evaluated overnight in the University of Chicago’s pediatric sleep laboratory. Each child also went through neuro-cognitive testing and had his or her brain scanned with non-invasive magnetic resonance imaging (MRI). Colleagues from the University of California at Los Angeles performed the image analysis.

The researchers compared those scans, plus neuro-cognitive test results, with MRI images from nine healthy children of the same age, gender, ethnicity and weight, who did not have apnea. They also compared the 16 children with OSA to 191 MRI scans of children who were part of an existing pediatric-MRI database assembled by the National Institutes of Health.

They found reductions in the volume of gray matter in multiple regions of the brains of children with OSA. These included the frontal cortices (which handle movement, problem solving, memory, language, judgement and impulse control), the prefrontal cortices (complex behaviors, planning, personality), parietal cortices (integrating sensory input), temporal lobe (hearing and selective listening) and the brainstem (controlling cardiovascular and respiratory functions).

Although these gray matter reductions were rather extensive, the direct consequences can be difficult to measure.

“MRI scans give us a bird’s eye view of the apnea-related difference in volume of various parts of the brain, but they don’t tell us, at the cellular level, what happened to the affected neurons or when,” said co-author David Gozal, MD, professor of pediatrics, University of Chicago. “The scans don’t have the resolution to determine whether brain cells have shrunk or been lost completely,” he added. “We can’t tell exactly when the damage occurred. But previous studies from our group showed that we can connect the severity of the disease with the extent of the cognitive deficits, when such deficits are detectable.”

In addition, “we are planning future collaborative studies between the University of Chicago and UCLA that will use state-of-the-art imaging approaches to answer the many questions raised by the current study,” said Paul Macey, PhD, who, along with colleague Rajesh Kumar, PhD, led the image analyses at UCLA.

Without extensive tests of cognitive function prior to the onset of sleep apnea, “we can’t measure the effect of the loss of neurons,” Gozal said.

“If you’re born with a high IQ — say 180 — and you lose 8 to 10 points, which is about the extent of IQ loss that sleep apnea will induce on average, that may never become apparent. But if your IQ as a child was average, somewhere around 90 to 100, and you had sleep apnea that went untreated and lost 8-10 points, that could potentially place you one standard deviation below normal,” Gozal said. “No one wants that.”

Or, it may just be too soon to measure. The children in this study were between 7 and 11 years old. The connections between greater gray matter volume and intelligence have been documented only in children with an average age of 15.4 years.
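Gozal's arithmetic can be checked against the conventional IQ scale (mean 100, standard deviation 15). A small sketch, using the midpoint of the 8-10-point loss he quotes:

```python
# Toy illustration of the IQ-shift argument (not from the study itself).
# Assumes the conventional IQ scale: mean 100, standard deviation 15.
MEAN, SD = 100, 15

def z_score(iq):
    """Position of an IQ score in standard deviations from the mean."""
    return (iq - MEAN) / SD

for start in (180, 100, 92):
    lost = start - 9            # midpoint of the quoted 8-10 point loss
    print(f"IQ {start} -> {lost}: z = {z_score(lost):+.2f}")
```

For a child starting near the top of the scale the shift is invisible in everyday terms, while a child starting around 92 ends up more than one standard deviation below the mean, which is the contrast Gozal draws.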

“The exact nature of the gray matter reductions and their potential reversibility remain virtually unexplored,” the authors conclude, but “altered regional gray matter is likely impacting brain functions, and hence cognitive developmental potential may be at risk.” This, they suggest, should prompt “intensive future research efforts in this direction.”


The genes, neural circuits behind autism’s impaired sociability

Researchers at Beth Israel Deaconess Medical Center (BIDMC) have gained new insight into the genetic and neuronal circuit mechanisms that may contribute to impaired sociability in some forms of Autism Spectrum Disorder. Led by Matthew P. Anderson, MD, PhD, Director of Neuropathology at BIDMC, the scientists determined how a gene linked to one common form of autism works in a specific population of brain cells to impair sociability. The research, published in the journal Nature, reveals the neurobiological control of sociability and could represent important first steps toward interventions for patients with autism.

Anderson and colleagues focused on the gene UBE3A, multiple copies of which cause a form of autism in humans (called isodicentric chromosome 15q). Conversely, the lack of this same gene in humans leads to a developmental disorder called Angelman’s syndrome, characterized by increased sociability. In previous work, Anderson’s team demonstrated that mice engineered with extra copies of the UBE3A gene show impaired sociability, as well as heightened repetitive self-grooming and reduced vocalizations with other mice.

“In this study, we wanted to determine where in the brain this social behavior deficit arises and where and how increases of the UBE3A gene repress it,” said Anderson, who is also an Associate Professor in the Program in Neuroscience at Harvard Medical School and Director of Autism BrainNET Boston Node. “We had tools in hand that we built ourselves. We not only introduced the gene into specific brain regions of the mouse, but we could also direct it to specific cell types to test which ones played a role in regulating sociability.”

When Anderson and colleagues compared the brains of the mice engineered to model autism to those of normal — or wild type (WT) — mice, they observed that the increased UBE3A gene copies interacted with nearly 600 other genes.

After analyzing and comparing protein interactions between the UBE3A-regulated genes and genes altered in human autism, the researchers noticed that increased doses of UBE3A repressed Cerebellin genes.

Cerebellin is a family of genes that physically interact with other autism genes to form glutamatergic synapses, the junctions where neurons communicate with each other via the neurotransmitter glutamate. The researchers chose to focus on one of them, Cerebellin 1 (CBLN1), as the potential mediator of UBE3A’s effects. When they deleted CBLN1 in glutamate neurons, they recreated the same impaired sociability produced by increased UBE3A.

“Selecting Cerebellin 1 out of hundreds of other potential targets was something of a leap of faith,” Anderson said. “When we deleted the gene and were able to reconstitute the social deficits, that was the moment we realized we’d hit the right target. Cerebellin 1 was the gene repressed by UBE3A that seemed to mediate its effects.”

In another series of experiments, Anderson and colleagues demonstrated an even more definitive link between UBE3A and CBLN1. Seizures are a common symptom among people with autism, including this genetic form. Seizures themselves, when sufficiently severe, also impair sociability. Anderson’s team suspected this seizure-induced impairment of sociability was the result of repressing the Cerebellin genes. Indeed, the researchers found that deleting UBE3A, upstream of the Cerebellin genes, prevented the seizure-induced social impairments and blocked seizures’ ability to repress CBLN1.

“If you take away UBE3A, seizures can’t repress sociability or Cerebellin,” said Anderson. “The flip side is, if you have just a little extra UBE3A — as a subset of people with autism do — and you combine that with less severe seizures — you can get a full-blown loss of social interactions.”

The researchers next conducted a variety of brain mapping experiments to locate where in the brain these crucial seizure-gene interactions take place.

“We mapped this seat of sociability to a surprising location,” Anderson explained. “Most scientists would have thought they take place in the cortex — the area of the brain where sensory processing and motor commands take place — but, in fact, these interactions take place in the brain stem, in the reward system.”

Then the researchers used their engineered mouse model to confirm the precise location, the ventral tegmental area (VTA), part of the midbrain that plays a role in the reward system and addiction. Anderson and colleagues used chemogenetics — an approach that makes use of modified receptors introduced into neurons that respond to drugs, but not to naturally occurring neurotransmitters — to switch this specific group of neurons on or off. Turning these neurons on could magnify sociability and rescue seizure- and UBE3A-induced sociability deficits.

“We were able to abolish sociability by inhibiting these neurons and we could magnify and prolong sociability by turning them on,” said Anderson. “So we have a toggle switch for sociability. It has a therapeutic flavor; someday, we might be able to translate this into a treatment that will help patients.”


Brain is ten times more active than previously measured

A new UCLA study could change scientists’ understanding of how the brain works — and could lead to new approaches for treating neurological disorders and for developing computers that “think” more like humans.

The research focused on the structure and function of dendrites, which are components of neurons, the nerve cells in the brain. Neurons are large, tree-like structures made up of a body, the soma, with numerous branches called dendrites extending outward. Somas generate brief electrical pulses called “spikes” in order to connect and communicate with each other. Scientists had generally believed that dendrites do little more than passively relay the currents they receive at synapses to the soma, which generates the spikes that underlie how memories are formed and stored, but this assumption had never been directly tested.

But the UCLA team discovered that dendrites are not just passive conduits. Their research showed that dendrites are electrically active in animals that are moving around freely, generating nearly 10 times more spikes than somas. The finding challenges the long-held belief that spikes in the soma are the primary way in which perception, learning and memory formation occur.

“Dendrites make up more than 90 percent of neural tissue,” said UCLA neurophysicist Mayank Mehta, the study’s senior author. “Knowing they are much more active than the soma fundamentally changes the nature of our understanding of how the brain computes information. It may pave the way for understanding and treating neurological disorders, and for developing brain-like computers.”

The research is reported in the March 9 issue of the journal Science.

Scientists have generally believed that dendrites meekly sent currents they received from the cell’s synapse (the junction between two neurons) to the soma, which in turn generated an electrical impulse. Those short electrical bursts, known as somatic spikes, were thought to be at the heart of neural computation and learning. But the new study demonstrated that dendrites generate their own spikes 10 times more often than the somas.

The researchers also found that dendrites generate large voltage fluctuations in addition to the spikes. The spikes are binary, all-or-nothing events, and the somas generated only these, much as digital computers do. The dendrites, however, also produced large, slowly varying voltages that were even bigger than the spikes, which suggests that the dendrites carry out analog computation as well.

“We found that dendrites are hybrids that do both analog and digital computations, which are therefore fundamentally different from purely digital computers, but somewhat similar to quantum computers that are analog,” said Mehta, a UCLA professor of physics and astronomy, of neurology and of neurobiology. “A fundamental belief in neuroscience has been that neurons are digital devices. They either generate a spike or not. These results show that the dendrites do not behave purely like a digital device. Dendrites do generate digital, all-or-none spikes, but they also show large analog fluctuations that are not all or none. This is a major departure from what neuroscientists have believed for about 60 years.”
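The digital/analog distinction can be pictured with a toy decomposition of a voltage trace: threshold crossings stand in for all-or-none spike events, and a moving average stands in for the slow analog component. This is an illustration of the concept, not the study's analysis pipeline, and the trace is synthetic:

```python
# Toy decomposition of a voltage trace into digital spike events and a
# slow analog component. Illustrative only; not the study's actual method.
import math

def decompose(trace, spike_threshold=1.0, window=5):
    """Return (spike_indices, slow_component) for a sampled voltage trace."""
    spikes = [i for i, v in enumerate(trace) if v >= spike_threshold]
    half = window // 2
    slow = [
        sum(trace[max(0, i - half): i + half + 1]) /
        len(trace[max(0, i - half): i + half + 1])
        for i in range(len(trace))
    ]
    return spikes, slow

# A synthetic trace: slow sinusoidal drift with two brief spikes on top.
trace = [0.3 * math.sin(i / 10) for i in range(100)]
trace[20] += 2.0
trace[60] += 2.0
spikes, slow = decompose(trace)
print(spikes)  # → [20, 60]
```

The thresholded events are all-or-none (present or absent), while the smoothed component varies continuously, which is the sense in which a signal can be "hybrid."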

Because the dendrites are nearly 100 times larger in volume than the neuronal centers, Mehta said, the large number of dendritic spikes taking place could mean that the brain has more than 100 times the computational capacity previously attributed to it.

Recent studies in brain slices showed that dendrites can generate spikes. But it was not clear whether this could happen during natural behavior, or how often. Measuring dendrites’ electrical activity during natural behavior has long been a challenge because they’re so delicate: in studies with laboratory rats, scientists found that placing electrodes in the dendrites themselves while the animals were moving actually killed those cells. But the UCLA team developed a new technique that involves placing the electrodes near, rather than in, the dendrites.

Using that approach, the scientists measured dendrites’ activity for up to four days in rats that were allowed to move freely within a large maze. Taking measurements from the posterior parietal cortex, the part of the brain that plays a key role in movement planning, the researchers found far more activity in the dendrites than in the somas — approximately five times as many spikes while the rats were sleeping, and up to 10 times as many when they were exploring.

“Many prior models assume that learning occurs when the cell bodies of two neurons are active at the same time,” said Jason Moore, a UCLA postdoctoral researcher and the study’s first author. “Our findings indicate that learning may take place when the input neuron is active at the same time that a dendrite is active — and it could be that different parts of dendrites will be active at different times, which would suggest a lot more flexibility in how learning can occur within a single neuron.”

Looking at the soma to understand how the brain works has provided a framework for numerous medical and scientific questions — from diagnosing and treating diseases to how to build computers. But, Mehta said, that framework was based on the understanding that the cell body makes the decisions, and that the process is digital.

“What we found indicates that such decisions are made in the dendrites far more often than in the cell body, and that such computations are not just digital, but also analog,” Mehta said. “Due to technological difficulties, research in brain function has largely focused on the cell body. But we have discovered the secret lives of neurons, especially in the extensive neuronal branches. Our results substantially change our understanding of how neurons compute.”

How big brains evolved could be revealed by new mathematical model

A new mathematical model could help clarify what drove the evolution of large brains in humans and other animals, according to a study published in PLOS Computational Biology.

Animals with high cognitive ability also have relatively large brains, but the factors that drive evolution of big brains remain unclear. Scientists have hypothesized that ecological, social, and cultural factors could each play a role. Most of these hypotheses have not been formalized mathematically, but doing so would allow scientists to refine their tests.

To address the issue, Mauricio González-Forero of the University of Lausanne, Switzerland, and colleagues developed a mathematical model that predicts how large the human brain should be at different stages of an individual’s life, depending on different possible evolutionary scenarios.

The model relies on the assumption that the brain expends some energy on cognitive skills that allow an individual to extract energy from the environment (food), which in turn helps the brain grow. Given natural selection, it predicts how much energy is used to support brain growth at different ages under different biological settings.
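The feedback loop the model formalizes can be caricatured in a few lines: skill grows with brain size, skill extracts energy from the environment, and some of that energy is reinvested in brain growth. This is an illustrative toy with invented parameters and functional forms, not the authors' published model:

```python
# Toy version of the energy-reinvestment idea: cognitive skill (a function
# of brain size) extracts energy, and a fraction of that energy funds
# further brain growth. All parameters are invented for illustration.

def grow_brain(years=20, allocation=0.4, efficiency=1.0):
    brain = 0.35          # rough newborn brain mass in kg
    history = [brain]
    for _ in range(years):
        skill = efficiency * brain ** 0.5     # diminishing returns on size
        energy = skill                        # energy extracted per year
        brain += allocation * energy * 0.1    # share reinvested in growth
        history.append(brain)
    return history

sizes = grow_brain()
print(f"brain mass after {len(sizes) - 1} years in this toy run: {sizes[-1]:.2f} kg")
```

The published model additionally imposes natural selection and realistic energetics; the point of the sketch is only the growth-through-reinvestment structure that lets different ecological settings yield different adult brain sizes.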

The researchers used the model to test a simple scenario in which social interactions and cultural factors are excluded, revealing the influence of ecological factors alone. In the scenario, a human must hunt or gather food alone, but may receive some help from its mother while it is still young.

Under those specifications, the hypothetical human brain grew as big as ancient humans’ brains are thought to have grown, and the slow growth rate matched that of modern human brains. This runs counter to prevailing thought, which holds that social and cultural influences are required to achieve these sizes and growth rates.

“Our findings raise new questions about the effects and interactions of ecological, social, and cultural factors, and our framework offers a new tool to address them,” González-Forero says. “More broadly, our framework allows for new experiments in silico to study brain evolution, life history, and brain senescence.”

His team is now using the new model to investigate how social factors may influence evolution of large brain size. In the future, they hope to test cultural factors as well.

Story Source:

Materials provided by PLOS. Note: Content may be edited for style and length.


Studying altruism through virtual reality

A computer-based environment developed to shed light on the origins of altruism: this is the innovative approach used by a research group at SISSA in Trieste, in collaboration with the University of Udine. The new study — recently published in the journal Neuropsychologia — immersed participants in a virtual environment reproducing a building on fire, which they had to evacuate in a hurry, deciding whether to save their own lives or interrupt their escape to help rescue an injured person. The results showed that altruistic individuals reported greater concern for others’ wellbeing and had a larger right anterior insula (a brain area involved in processing social emotions) compared to non-altruists. These results shed light on the role of compassion in motivating helping behaviour and on its brain correlates.

“Our prosocial and altruistic impulses play a very important role in sustaining complex societal structure,” explained Giorgia Silani, the principal investigator of this research conducted at SISSA and now a researcher at the University of Vienna. “However, studying altruism and its neural basis in a lab-based environment poses unique ethical challenges. Indeed, it is difficult, if not impossible, to reproduce harmful situations realistically and then study participants’ helping behaviour, especially if such situations pose a physical threat to the life of the participant….”

In order to overcome these difficulties, the researchers created a contextually rich virtual-reality environment in which participants were completely immersed. Indrajeet Patil, a former PhD student at SISSA and lead author of the paper, currently a postdoctoral researcher at Harvard University, explained the novelty of the task: “During the experiment, participants were placed in a building from which they had to escape because of a sudden fire. The computer-based environment was characterised by intense audio-visual cues that contributed to increasing the experimental realism of the situation on the one hand, and the feeling of anxiety and danger on the other. In addition, a bar indicated how much ‘life energy’ was left in one’s avatar.” Toward the very end of the escape, when there was very little energy left, participants had to make a very difficult decision: whether to rescue an injured person trapped under a heavy cabinet, risking their own lives, or run for the exit, ignoring the individual’s cries for help.

As participants completed the task in the virtual environment, described by all as “very realistic,” they underwent MRI scanning to acquire information about brain structure. By examining the participants’ brain anatomy, the scientists were able to relate their behaviour to the structure of specific areas of the nervous system.

“The results highlighted that the majority of people made the altruistic choice: 65% stopped to rescue the injured person despite the threat to the (virtual) self. Additionally, questionnaire data revealed that individuals who helped the trapped human scored higher on empathic concern. Thus, an individual’s willingness to help others in need at a cost to the self seems to be driven by other-oriented caring motivation,” said Indrajeet Patil, underscoring the study’s main takeaway. As for the brain-structure data, the researchers found that the altruistic individuals had a larger anterior insula than those who chose to escape without helping. “The insula is an area strictly connected to the processing of our social emotions,” Giorgia Silani highlighted. “In this research, we were thus able to assay neurostructural correlates of helping behaviour. This work helps to generate various interesting hypotheses that can be investigated in future work.”

Story Source:

Materials provided by SISSA. Note: Content may be edited for style and length.

Progress towards a circuit diagram of the brain

Precise knowledge of the connections in the brain — the links between all the nerve cells — is a prerequisite for better understanding this most complex of organs. Researchers from Heidelberg University have now developed a new algorithm — a computational procedure — that can extract this connectivity pattern with far greater precision than previously possible from microscopic images of the brain. Prof. Dr Fred Hamprecht, head of the “Image Analysis and Learning” working group at the Interdisciplinary Center for Scientific Computing, expects such automated image data analysis to bring about great strides in the neurosciences. It will likely lead to a circuit diagram of the brain.

Understanding how the brain works is one of science’s greatest mysteries. “Except for a simple roundworm, there is still no circuit diagram of a complete animal brain, let alone a human brain,” states Prof. Hamprecht. In recent years, imaging techniques have been developed that can finally produce three-dimensional images of the entire brain at a sufficiently high resolution. These images are so big, however, that manual analysis would take centuries. What is needed, therefore, is an automated analysis process with the lowest possible error rate.

The new algorithm leverages non-local image information, allowing the researchers to study non-adjacent regions of the image and deduce whether they belong to the same nerve cell. Dr Björn Andres of the Max Planck Institute for Informatics in Saarbrücken has demonstrated how short and long-range interactions can be considered jointly. The aim is to find an optimal solution that does justice to both types of image information in the best way possible. “This approach affords far lower error rates than all known methods,” states Prof. Hamprecht.

Research groups around the world have joined in competitions to measure the accuracy of their automated analysis pipelines. The aim is to partition a three-dimensional image into the nerve cells it contains. A labour-intensive manual process is used beforehand to determine the correct partitioning, which is kept secret. All submissions are then compared against this reference, and the approach with the lowest error rate wins. In the latest such contest, the CREMI Challenge on Circuit Reconstruction from Electron Microscopy Images, the researchers at the Interdisciplinary Center for Scientific Computing succeeded in producing the most accurate analysis by a large margin.
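The scoring step can be illustrated with a simple pair-counting measure: two partitionings agree on a pair of pixels when both assign the pair to the same segment, or both assign it to different segments. A minimal sketch of this idea (the Rand index; the actual challenge uses more refined metrics, and the labelings here are invented):

```python
# Pair-counting agreement between two partitionings of the same pixels.
# Labels are arbitrary: only "same segment or not" matters per pair.
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of pixel pairs on which two partitionings agree."""
    assert len(labels_a) == len(labels_b)
    agree = total = 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += same_a == same_b
        total += 1
    return agree / total

# Ground truth splits six pixels into two cells; the automatic result
# misassigns one pixel. (Toy labelings, not real challenge data.)
truth = [1, 1, 1, 2, 2, 2]
auto  = [1, 1, 2, 2, 2, 2]
print(f"{rand_index(truth, auto):.2f}")  # → 0.67
```

A score of 1.0 means the submission matches the secret ground truth exactly (up to renaming of segments); every misassigned pixel pulls the score down through the pairs it participates in.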

To explain the challenge of using this analysis method to produce a circuit diagram of a brain, Prof. Hamprecht resorts to the fly as an example. The fly is capable of astonishing feats: It finds food, shelter and mates in a complex and often hostile environment. “Although its brain is smaller than the head of a pin, the diagram of its neuronal connections remains elusive.” The Heidelberg team is using their new algorithm to map the brain circuit of the fly first before moving on to higher animals, according to mathematician Dr Anna Kreshuk.

In the past fifteen years, the “Image Analysis and Learning” working group has been developing machine learning algorithms for computer vision, with applications mostly in the life sciences but also industry. The latest research results, achieved in a close international cooperation, were published in the journal Nature Methods.

Story Source:

Materials provided by Universität Heidelberg. Note: Content may be edited for style and length.

Functional brain training alleviates chemotherapy-induced peripheral nerve damage in cancer survivors

A type of functional brain training known as neurofeedback shows promise in reducing symptoms of chemotherapy-induced nerve damage, or neuropathy, in cancer survivors, according to a study by researchers at The University of Texas MD Anderson Cancer Center. The pilot study, published in the journal Cancer, is the largest, to date, to determine the benefits of neurofeedback in cancer survivors.

Chronic chemotherapy-induced peripheral neuropathy (CIPN) is caused by damage to the nerves that control sensation and movement in arms and legs. CIPN is estimated to affect between 71 and 96 percent of patients one month after chemotherapy treatment, with symptoms including pain, burning, tingling and loss of feeling, explained Sarah Prinsloo, Ph.D., assistant professor of Palliative, Rehabilitation, and Integrative Medicine.

“There is currently only one approved medication to treat CIPN and it has associated muscle aches and nausea,” said Prinsloo, lead investigator of the study. “Neurofeedback has no known negative side effects, can be used in combination with other treatments and is reasonably cost effective.”

In previous research, Prinsloo identified the location of brain activity that contributes to the physical and emotional aspects of chronic pain. By targeting brain areas that are active during pain episodes, neurofeedback teaches participants to understand pain signals differently.

The researchers developed training protocols which allow patients to retrain their own brain activity through electroencephalogram (EEG) neurofeedback. The EEG interface tracks and records brain wave patterns by attaching small metal discs with thin wires to the scalp. Brain wave signals are sent to a computer and displayed for participants, who receive visual and auditory rewards when making targeted adjustments to brain wave patterns.
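The reward loop of such an interface can be sketched as a threshold on power in a target frequency band of the recorded signal. The band, sampling rate, and threshold below are invented for illustration and are not the study's protocol:

```python
# Toy EEG-neurofeedback loop: reward the participant whenever power in a
# target frequency band exceeds a threshold. All parameters are invented.
import cmath, math

def band_power(samples, fs, low, high):
    """Power in the [low, high] Hz band via a naive DFT (fine for short windows)."""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        if low <= k * fs / n <= high:
            coef = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                       for i, s in enumerate(samples))
            power += abs(coef) ** 2 / n
    return power

def reward(samples, fs=256, low=8.0, high=12.0, threshold=10.0):
    """True when band power hits the target: cue the visual/auditory reward."""
    return band_power(samples, fs, low, high) >= threshold

# A synthetic one-second window dominated by a 10 Hz rhythm earns the reward.
window = [math.sin(2 * math.pi * 10 * i / 256) for i in range(256)]
print(reward(window))  # → True
```

A real system would stream short windows continuously and adapt the threshold per participant; the sketch only shows the measure-compare-reward structure that the training game is built on.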

The randomized, controlled study enrolled 71 MD Anderson patients of all cancer types; all were at least three months post-chemotherapy treatment and reported more than a three on the National Cancer Institute’s neuropathy rating scale. The Brief Pain Inventory (BPI) assessment was used to measure the severity of pain and impact on daily functioning. The BPI worst-pain item was the primary outcome.

Patients in the neurofeedback group attended 20 sessions in which they played a computer game that trained them to modify brain wave activity in the targeted area. Over time, participants learned to manipulate brain activity without an immediate reward from the game. The control group was offered the neurofeedback intervention at the conclusion of the study.

After completing treatment, participants repeated EEG measurements and pain assessments to determine changes in pain perception, cancer related symptoms, quality of life and brain wave activity in targeted areas.

At the beginning of the study, groups reported no significant differences in neuropathy symptoms. At the completion of the study, patients in the neurofeedback group reported significantly reduced BPI scores for worst pain, activity interference, numbness, tingling, and unpleasantness, compared to the control group.

Patients with CIPN also exhibited specific and predictable EEG signatures in the targeted brain regions that changed with neurofeedback.

“We observed clinically and statistically significant reductions in peripheral neuropathy following neurofeedback techniques,” said Prinsloo. “This research suggests that neurofeedback may be a valuable approach to reduce neuropathy symptoms and their impact on daily activities.”

One limitation of the study was the lack of a placebo group. Researchers studied areas of the brain that are active during placebo pain relief and determined that, although the placebo effect could be a factor, it was not the only factor leading to symptom improvement, said Prinsloo. Additionally, most study participants were female and breast cancer survivors. Future research will need to include a broader participant base to determine if the findings apply across the general population.

Currently approved drugs for CIPN have known side effects. The lack of adverse effects from neurofeedback is particularly important for cancer patients with existing comorbidities.

Precise technique tracks dopamine in the brain

MIT researchers have devised a way to measure dopamine in the brain much more precisely than previously possible, which should allow scientists to gain insight into dopamine’s roles in learning, memory, and emotion.

Dopamine is one of the many neurotransmitters that neurons in the brain use to communicate with each other. Previous systems for measuring these neurotransmitters have been limited in how long they provide accurate readings and how much of the brain they can cover. The new MIT device, an array of tiny carbon electrodes, overcomes both of those obstacles.

“Nobody has really measured neurotransmitter behavior at this spatial scale and timescale. Having a tool like this will allow us to explore potentially any neurotransmitter-related disease,” says Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering, a member of MIT’s Koch Institute for Integrative Cancer Research, and the senior author of the study.

Furthermore, because the array is so tiny, it has the potential to eventually be adapted for use in humans, to monitor whether therapies aimed at boosting dopamine levels are succeeding. Many human brain disorders, most notably Parkinson’s disease, are linked to dysregulation of dopamine.

“Right now deep brain stimulation is being used to treat Parkinson’s disease, and we assume that that stimulation is somehow resupplying the brain with dopamine, but no one’s really measured that,” says Helen Schwerdt, a Koch Institute postdoc and the lead author of the paper, which appears in the journal Lab on a Chip.

Studying the striatum

For this project, Cima’s lab teamed up with David H. Koch Institute Professor Robert Langer, who has a long history of drug delivery research, and Institute Professor Ann Graybiel, who has been studying dopamine’s role in the brain for decades with a particular focus on a brain region called the striatum. Dopamine-producing cells within the striatum are critical for habit formation and reward-reinforced learning.

Until now, neuroscientists have used carbon electrodes with a shaft diameter of about 100 microns to measure dopamine in the brain. However, these can only be used reliably for about a day because they produce scar tissue that interferes with the electrodes’ ability to interact with dopamine, and other types of interfering films can also form on the electrode surface over time. Furthermore, there is only about a 50 percent chance that a single electrode will end up in a spot where there is any measurable dopamine, Schwerdt says.

The MIT team designed electrodes that are only 10 microns in diameter and combined them into arrays of eight electrodes. These delicate electrodes are then wrapped in a rigid polymer called PEG, which protects them and keeps them from deflecting as they enter the brain tissue. However, the PEG dissolves during insertion, so it does not enter the brain.

These tiny electrodes measure dopamine in the same way that the larger versions do. The researchers apply an oscillating voltage through the electrodes, and when the voltage is at a certain point, any dopamine in the vicinity undergoes an electrochemical reaction that produces a measurable electric current. Using this technique, dopamine’s presence can be monitored at millisecond timescales.
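The detection principle can be mimicked with a toy model: sweep the voltage, and read concentration off the extra current that appears near a characteristic oxidation potential. Every number below is invented for illustration; this is not the MIT group's calibration:

```python
# Toy model of dopamine detection by voltage sweep: measured current is a
# baseline plus a peak near the oxidation potential whose height scales
# with concentration. All parameters are invented for illustration.
import math

OXIDATION_V = 0.6           # assumed characteristic potential, volts
SENSITIVITY = 10.0          # assumed nA per micromolar

def current(v, concentration_uM):
    baseline = 2.0 * v      # simple linear background current
    peak = (SENSITIVITY * concentration_uM
            * math.exp(-((v - OXIDATION_V) / 0.05) ** 2))
    return baseline + peak

def estimate_concentration(sweep_v, measured_nA):
    """Peak current above baseline at the oxidation potential -> concentration."""
    idx = min(range(len(sweep_v)), key=lambda i: abs(sweep_v[i] - OXIDATION_V))
    peak = measured_nA[idx] - 2.0 * sweep_v[idx]
    return peak / SENSITIVITY

sweep = [i / 100 for i in range(101)]                     # 0 to 1 V
trace = [current(v, concentration_uM=0.5) for v in sweep]
print(f"{estimate_concentration(sweep, trace):.2f} uM")   # → 0.50 uM
```

Because each sweep is fast, repeating it continuously gives the millisecond-scale monitoring the article describes.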

Using these arrays, the researchers demonstrated that they could monitor dopamine levels in many parts of the striatum at once.

“What motivated us to pursue this high-density array was the fact that now we have a better chance to measure dopamine in the striatum, because now we have eight or 16 probes in the striatum, rather than just one,” Schwerdt says.
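Given the roughly 50 percent chance quoted above that any single electrode lands where dopamine is measurable, simple probability shows why an array helps so much (assuming, for illustration, that placements are independent):

```python
# Chance that at least one probe in an array lands in a spot with
# measurable dopamine, given ~50% odds per probe (figure quoted above).
# Assumes independent placements, which is a simplification.
def p_at_least_one(p_single=0.5, n_probes=1):
    return 1 - (1 - p_single) ** n_probes

for n in (1, 8, 16):
    print(f"{n:2d} probes: {p_at_least_one(n_probes=n):.4f}")
```

With eight probes the chance of at least one usable recording site already exceeds 99 percent, which matches Schwerdt's point about the motivation for the high-density array.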

The researchers found that dopamine levels vary greatly across the striatum. This was not surprising, because they did not expect the entire region to be continuously bathed in dopamine, but this variation has been difficult to demonstrate because previous methods measured only one area at a time.

How learning happens

The researchers are now conducting tests to see how long these electrodes can continue giving a measurable signal, and so far the device has kept working for up to two months. With this kind of long-term sensing, scientists should be able to track dopamine changes over long periods of time, as habits are formed or new skills are learned.

“We and other people have struggled with getting good long-term readings,” says Graybiel, who is a member of MIT’s McGovern Institute for Brain Research. “We need to be able to find out what happens to dopamine in mouse models of brain disorders, for example, or what happens to dopamine when animals learn something.”

She also hopes to learn more about the roles of structures in the striatum known as striosomes. These clusters of cells, discovered by Graybiel many years ago, are distributed throughout the striatum. Recent work from her lab suggests that striosomes are involved in making decisions that induce anxiety.

This study is part of a larger collaboration between Cima’s and Graybiel’s labs that also includes efforts to develop injectable drug-delivery devices to treat brain disorders.

“What links all these studies together is we’re trying to find a way to chemically interface with the brain,” Schwerdt says. “If we can communicate chemically with the brain, it makes our treatment or our measurement a lot more focused and selective, and we can better understand what’s going on.”

Sex differences in brain activity alter pain therapies

A female brain’s resident immune cells are more active in regions involved in pain processing than a male’s, according to a recent study by Georgia State University researchers.

The study, published in the Journal of Neuroscience, found that when microglia, the brain’s resident immune cells, were blocked, female response to opioid pain medication improved and matched the levels of pain relief normally seen in males.

Women suffer from a higher incidence of chronic and inflammatory pain conditions such as fibromyalgia and osteoarthritis. While morphine continues to be one of the primary drugs used for the treatment of severe or chronic pain, it is often less effective in females.

“Indeed, both clinical and preclinical studies report that females require almost twice as much morphine as males to produce comparable pain relief,” said Hillary Doyle, graduate student in the Murphy Laboratory in the Neuroscience Institute of Georgia State. “Our research team examined a potential explanation for this phenomenon, the sex differences in brain microglia.”

In healthy individuals, microglia survey the brain, looking for signs of infection or pathogens. Even in the absence of pain, microglia treat morphine as one such pathogen: the drug activates the brain’s innate immune cells and causes the release of inflammatory chemicals such as cytokines.

To test how this sex difference affects morphine analgesia, Doyle gave male and female rats a drug that inhibits microglia activation.

“The results of the study have important implications for the treatment of pain and suggest that microglia may be an important drug target to improve opioid pain relief in women,” said Dr. Anne Murphy, co-author on the study and associate professor in the Neuroscience Institute at Georgia State.

The research team’s finding that microglia are more active in brain regions involved in pain processing may contribute to why the incidence rates for various chronic pain syndromes are significantly higher in females than males.

Story Source:

Materials provided by Georgia State University. Note: Content may be edited for style and length.