New technique isolates neuronal activity during memory consolidation

A team led by researchers from the Cajal Institute (Madrid), part of the Spanish National Research Council (CSIC), has discovered some of the basic processes underlying memory consolidation, in collaboration with colleagues at the National Hospital for Paraplegics in Toledo (Spain) and the University of Szeged (Hungary). The work, published in Neuron, identifies some of the electrical events responsible for specific neuronal activity in the hippocampus, a region of the brain with fundamental roles in episodic memory.

In the study, highlighted on the front cover of the journal, the researchers used machine learning to study brain electrical activity during memory reactivation. “Using artificial neural networks, we have been able to identify electrical footprints associated with events with similar informational content, presumably encoding the same memory trace. Using sophisticated experimental techniques, we have succeeded in isolating the activity of individual neurons during these ‘memories,’” explains Liset Menéndez de la Prida, the Cajal Institute researcher who led the work.

As the researchers observed in their study, the activity of hippocampal cells is precisely modulated during memory trace reactivation. “We have seen that most hippocampal cells acutely respond to ‘excitation’ and ‘inhibition’ as a kind of cellular yin-yang, in such a way that the participation of individual neurons in memory traces is extremely selective,” explains Manuel Valero, the first author of the paper.

“Only those hippocampal neurons carrying information about a memory to be reactivated would receive more ‘excitation’ than ‘inhibition’ to be biased for a particular memory trace. This mechanism endows the hippocampus with the ability to reactivate individual memories without merging information.”

In addition, the researchers show that an imbalance between ‘excitation’ and ‘inhibition’ (characteristic of some brain diseases such as epilepsy) could be catastrophic for memories. “In epilepsy, we see a link between this mechanism and memory deficits. Our data suggest that alterations of the excitation-inhibition balance contribute not only to epileptic activity, but also to the collapse of individual memory traces during consolidation, like an indissoluble mixture,” explains Menéndez de la Prida.

The hippocampus, vital to generating memory

As the researchers point out, the function of the hippocampus in memory was unveiled by the famous patient HM. “After he underwent bilateral surgical resection of the hippocampi to treat his epilepsy, he was unable to form new episodic memories.”

Menéndez de la Prida explains that with the advancement of neuroscience, it has become increasingly clear that the hippocampus may play a dual role in memory formation. “First, it represents information concerning the time and place where you are at this moment, through sequences of neuronal activity that signal your location in the room and some other temporal contingencies.”

Valero adds, “Once this information is collected, it must be transformed into a long-term memory. This is carried out by the hippocampus through a process called consolidation. During consolidation, neuronal sequences already activated during experience are replayed several times at high speed. It is a process which expends a great deal of energy to leave an electrical footprint.” That footprint now seems to be more easily detected in the apparently noisy brain activity.

Story Source:

Materials provided by Spanish National Research Council (CSIC). Note: Content may be edited for style and length.

 

Perceptions about body image linked to increased alcohol, tobacco use for teens

How teenagers perceive their appearance, including their body image, can have significant impacts on health and wellness. Prior body image research has shown that people with negative body image are more likely to develop eating disorders and to suffer from depression and low self-esteem. Now, Virginia Ramseyer Winter, a body image expert and an assistant professor in the University of Missouri’s School of Social Work, has found that negative body image is also associated with increased tobacco and alcohol use, with implications for both young men and women. Notably, she also found relationships between substance use and perceived attractiveness, with girls who believe they are very good looking being more likely to drink.

“We know alcohol and tobacco can have detrimental health effects, especially for teenagers,” Ramseyer Winter said. “I wanted to see if the perception of being overweight and negative body image leads to engaging in unhealthy or risky substance use behaviors. Understanding the relationship means that interventions and policies aimed at improving body image among teenage populations might improve overall health.”

Ramseyer Winter and her co-authors, Andrea Kennedy and Elizabeth O’Neill, used data from a national survey of American teenagers to determine the associations between perceived size and weight, perceived attractiveness, and levels of alcohol and tobacco use. The researchers found that perceived size and attractiveness were significantly related to substance use. Adolescent girls who perceived their body size to be too fat were more likely to use alcohol and tobacco. Boys who thought they were too skinny were more likely to smoke, and boys who considered themselves fat were more likely to binge drink.

“While poor body image disproportionately affects females, our findings indicate that body image also impacts young males,” Ramseyer Winter said. “For example, it’s possible that boys who identified their bodies as too thin use tobacco to maintain body size, putting their health at risk.”

In addition to body size, the researchers looked at the connection between perceived attractiveness and substance use. Girls who thought they were not at all good looking were more likely to smoke. Girls who thought they were very good looking were more likely to binge drink. Ramseyer Winter suggests this is because attractiveness may be associated with popularity, which is related to increased alcohol use.

To improve body image awareness, Ramseyer Winter suggested that parents, schools and health providers need to be aware of body shaming language and correct such behavior to help children identify with positive body image messages. Body shaming language can affect teenagers who have both positive and negative perceptions of themselves.

“Adolescent tobacco and alcohol use: the influence of body image” was recently published in the Journal of Child and Adolescent Substance Abuse. Kennedy is a doctoral candidate at the University of Southern California and O’Neill is a doctoral candidate at the University of Kansas. The MU School of Social Work is in the College of Human Environmental Sciences.

Story Source:

Materials provided by University of Missouri-Columbia. Note: Content may be edited for style and length.


Predicting cognitive deficits in people with Parkinson’s disease

Parkinson’s disease is commonly thought of as a movement disorder, but after years of living with the disease, approximately 25 percent of patients also experience deficits in cognition that impair function. A newly developed research tool may help predict a patient’s risk for developing dementia and could enable clinical trials aimed at finding treatments to prevent the cognitive effects of the disease. The research was published in Lancet Neurology and was partially funded by the National Institute of Neurological Disorders and Stroke (NINDS), part of the National Institutes of Health.

“This study includes both genetic and clinical assessments from multiple groups of patients, and it represents a significant step forward in our ability to effectively model one of the most troublesome non-motor aspects of Parkinson’s disease,” said Margaret Sutherland, Ph.D., program director at the NINDS.

For the study, a team of researchers led by Clemens Scherzer, M.D., combined data from 3,200 people with Parkinson’s disease, representing more than 25,000 individual clinical assessments and evaluated seven known clinical and genetic risk factors associated with developing dementia. From this information, they built a computer-based risk calculator that may predict the chance that an individual with Parkinson’s will develop cognitive deficits. Dr. Scherzer is head of the Neurogenomics Lab and Parkinson Personalized Medicine Program at Harvard Medical School and a member of the Ann Romney Center for Neurologic Diseases at Brigham and Women’s Hospital, Boston.
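The article does not describe the calculator's mathematical form, but risk scores built from several weighted clinical and genetic factors are commonly implemented as logistic models. The sketch below is purely illustrative: the factor names, weights, and intercept are invented for this example and are not from the published tool.

```python
import math

# Hypothetical illustration only: these factor names and weights are
# invented for this sketch and are NOT from the published calculator.
WEIGHTS = {
    "age_over_70": 0.8,
    "low_education_years": 0.6,   # education appeared protective in the study
    "gba_variant": 0.9,           # a known genetic risk factor in PD
    "baseline_cognition_low": 1.1,
}
INTERCEPT = -3.0

def dementia_risk(factors):
    """Return a probability-like risk score from binary risk factors."""
    z = INTERCEPT + sum(WEIGHTS[name] for name, present in factors.items() if present)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link maps the score to (0, 1)

low = dementia_risk({"age_over_70": False, "low_education_years": False,
                     "gba_variant": False, "baseline_cognition_low": False})
high = dementia_risk({"age_over_70": True, "low_education_years": True,
                      "gba_variant": True, "baseline_cognition_low": True})
assert low < high  # more risk factors present, higher predicted risk
```

The point of such a model is exactly what Dr. Scherzer describes: it turns a handful of assessments into a single number that can stratify patients into high- and low-risk groups for trial enrollment.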

Currently available Parkinson’s medications are only effective in improving motor deficits caused by the disease. However, the loss of cognitive abilities severely affects the individual’s quality of life and independence. One barrier to developing treatments for the cognitive effects of Parkinson’s disease is the considerable variability among patients. As a result, researchers must enroll several hundred patients when designing clinical trials to test treatments.

“By allowing clinical researchers to identify and select only patients at high-risk for developing dementia, this tool could help in the design of ‘smarter’ trials that require a manageable number of participating patients,” said Dr. Scherzer.

Dr. Scherzer and team also noted that a patient’s education appeared to have a powerful impact on the risk of memory loss. The more years of formal education patients in the study had, the greater was their protection against cognitive decline.

“This fits with the theory that education might provide your brain with a ‘cognitive reserve,’ which is the capacity to potentially compensate for some disease-related effects,” said Dr. Scherzer. “I hope researchers will take a closer look at this. It would be amazing, if this simple observation could be turned into a useful therapeutic intervention.”

Moving forward, Dr. Scherzer and his colleagues from the International Genetics of Parkinson’s Disease Progression (IGPP) Consortium plan to further improve the cognitive risk score calculator. The team is scanning the genome of patients to hunt for new progression genes. Ultimately, it is their hope that the tool can be used in the clinic in addition to helping with clinical trial design. However, considerable research remains to be done before that will be possible.

One complication for the use of this calculator in the clinic is the lack of available treatments for Parkinson’s-related cognitive deficits. Doctors face ethical issues concerning whether patients should be informed of their risk when there is little available to help them. It is hoped that by improving clinical trial design, the risk calculator can first aid in the discovery of new treatments and determine which patients would benefit most from the new treatments.

“Prediction is the first step,” said Dr. Scherzer. “Prevention is the ultimate goal, preventing a dismal prognosis from ever happening.”

 

Serotonin improves sociability in mouse model of autism

Scientists at the RIKEN Brain Science Institute (BSI) in Japan have linked early serotonin deficiency to several symptoms that occur in autism spectrum disorder (ASD). Published in Science Advances, the study examined serotonin levels, brain circuitry, and behavior in a mouse model of ASD. Experiments showed that increasing serotonergic activity in the brain during early development led to more balanced brain activity and improved the abnormal sociability of these mice.

As group leader Toru Takumi explains, “Although abnormalities in the serotonin system have been thought to be part of the ASD pathophysiology, the functional impact of serotonin deficiency in ASD was totally unknown. Our work shows that early serotonergic intervention rescues regional excitatory/inhibitory abnormalities in the brain as well as behavioral abnormalities.”

Although the causes and symptoms of ASD are varied, many people with ASD carry a high burden of genomic mutations. Previously, Takumi’s group generated a mouse model of ASD by duplicating in mice one of the most frequent copy number variations found in people with ASD. These mice show many behavioral symptoms of ASD, including poor social interaction and low behavioral flexibility. The model mice also have reduced levels of serotonin in the brain during development, another symptom that has been found in patients with ASD.

In the newly published work, the researchers focused on this finding and examined how it affected the behavior of neurons in the brain as well as the behavior of the mice themselves.

After determining that the part of the brain containing the largest number of serotonergic neurons was less active in the ASD model mice than in wild-type mice, the group examined a sensory region of the brain that receives input from these serotonergic neurons.

Patients with ASD often exhibit abnormal responses in sensory regions of the brain, and the RIKEN scientists found similar abnormalities in the brain region of the model mice that detects whisker movement. Although specific whisker movements are normally tightly mapped across this brain region, calcium imaging showed that a given whisker movement activated a much larger region of sensory cortex in the ASD model mice. This means that the responses of neighboring regions were more overlapped, which reduces the ability to distinguish sensations.

The overlap in sensory maps indicated that normally inactive neurons were somehow active. This pointed to reduced inhibitory activity, and the group confirmed this by showing that the ASD model mice had fewer inhibitory synapses and a lower frequency of naturally occurring inhibitory inputs in the sensory region.

These findings indicated an abnormality in cortical excitatory/inhibitory balance. First author Nobuhiro Nakai notes, “Because the sensory region was receiving abnormally low serotonin input, we reasoned that giving infant mice serotonin therapy might reduce the imbalance and also rescue some of the behavioral abnormalities.”

To test this hypothesis, the team administered a selective serotonin reuptake inhibitor, commonly referred to as an SSRI, to infant mice during the first three weeks after birth. This window corresponds to the period in which reduced serotonin was observed in the model mice. The researchers found that sensory neurons in the model mice treated with the SSRI showed more normal inhibitory responses, which improved the excitatory/inhibitory balance.

They also found that this intervention improved the social behavior of the model mice in adulthood. Social behavior was measured with a test in which mice are exposed to a cage holding an unknown mouse or an empty cage. Normal mice spend more time near the cage with the unknown mouse, while the ASD model mice prefer the empty cage. After the SSRI treatment, the ASD model mice spent more time around the cage with the unknown mouse, indicating more normal social behavior. Another improvement was seen in the communication behavior of the ASD mouse pups. While these pups displayed anxiety by producing more vocalizations than normal, this behavior was rescued by the SSRI treatment. These findings suggest that serotonin may have therapeutic potential for discrete ASD symptoms.
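The cage-preference test described above is often summarized as a simple preference index; the index form below is a common convention in the field, not something stated in the article, and the session times are hypothetical.

```python
def sociability_index(time_mouse_s, time_empty_s):
    """Preference for the cage containing an unfamiliar mouse.

    Returns a value in [-1, 1]: positive means social preference,
    negative means avoidance (empty-cage preference).
    """
    total = time_mouse_s + time_empty_s
    if total == 0:
        raise ValueError("no time recorded near either cage")
    return (time_mouse_s - time_empty_s) / total

# Hypothetical session times (seconds), for illustration only:
assert sociability_index(120, 60) > 0   # wild-type-like social preference
assert sociability_index(50, 100) < 0   # ASD-model-like avoidance
```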

Looking toward the future, Takumi is optimistic, yet cautious. “Our genetic model for ASD is one of many, and because the number of genetic mutations associated with ASD is so high, we need to investigate differences and common mechanisms among multiple genetic ASD models. Additionally, before we can administer SSRIs to patients with ASD, we must study the effects of SSRIs in more detail, especially because adverse effects have been reported in some animal studies.”

Story Source:

Materials provided by RIKEN. Note: Content may be edited for style and length.

Sleep-wake rhythms vary widely with age as well as amongst individuals of a given age

The sleep rhythms that reflect circadian systems peak later in teenagers than in adults, and vary by as much as 10 hours among individuals of any age, according to a study published June 21, 2017 in the open-access journal PLOS ONE by Dorothee Fischer of the Harvard T.H. Chan School of Public Health, USA, and colleagues.

People’s circadian systems synchronize with light and darkness in the environment, giving rise to chronotypes: individual rhythms in physiology, cognition and behavior. For example, people with early chronotypes have earlier sleep times, while those with late chronotypes have later sleep times and can sleep into the day. Currently, 30% of the U.S. workforce has unusual work schedules, such as alternating or extended shifts, and on-call duty. These unusual schedules are linked with health and safety risks. Chronotype-tailored schedules might help minimize those risks.

To investigate chronotype variation in the U.S., Fischer and colleagues analyzed self-reported data from 53,689 respondents to the American Time Use Survey from 2003 to 2014. The researchers used the mid-point of sleep on weekends as a proxy for chronotype.
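The weekend mid-sleep proxy is straightforward to compute, though it needs wrap-around arithmetic because sleep episodes typically cross midnight. A minimal sketch (the function and its inputs are assumptions for illustration, not from the paper):

```python
def mid_sleep(onset_h, wake_h):
    """Midpoint of a sleep episode in hours-of-day (0-24).

    onset_h and wake_h are clock hours, e.g. 23.5 for 11:30 p.m.
    Handles episodes that cross midnight.
    """
    duration = (wake_h - onset_h) % 24      # sleep length in hours
    return (onset_h + duration / 2) % 24    # wrap back onto the 24-hour clock

# Sleeping 23:00 -> 07:00 gives a mid-sleep of 03:00 (an average chronotype)
assert mid_sleep(23, 7) == 3.0
# Sleeping 01:00 -> 10:00 gives 05:30, a late chronotype typical of teenagers
assert mid_sleep(1, 10) == 5.5
```

Values like the study's averages (4:30 a.m. at age 17-18 versus 3:00 a.m. at age 60) are mid-sleep times in exactly this sense.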

The researchers found that sleep chronotypes vary widely, both across age groups over the lifespan and among individuals of the same age. The differences are greatest during adolescence and early adulthood. Chronotypes become later during adolescence, peaking in lateness at about age 19. The average chronotype, or mid-point of sleep, at age 17-18 was 4:30 a.m., compared to 3:00 a.m. at age 60. Most U.S. public schools start at 8:30 a.m. or earlier, suggesting that high school students go to school during their biological night. This work supports delaying school start times to benefit the sleep and circadian alignment of high school students.

In addition, the researchers found that chronotypes vary up to 10 hours from individual to individual regardless of age. This may provide opportunities for tailoring work schedules to chronotypes, which is important because syncing workers with their optimal work times could help minimize health and safety risks.

“The timing for optimal sleep can differ by as much as ten hours among individuals, meaning that opposite chronotypes could share a bed without knowing that they do. What chronotype you are is influenced by age and gender: on average, older people are earlier chronotypes than younger people, and women are earlier chronotypes than men during the first half of their lives.”

Story Source:

Materials provided by PLOS. Note: Content may be edited for style and length.

Prior knowledge may influence how adults view van Goghs

Adults rely more on top-down processing than children when observing paintings by van Gogh, according to a study published June 21, 2017 in the open-access journal PLOS ONE by Francesco Walker from the Vrije Universiteit Amsterdam, The Netherlands, and colleagues.

Analyzing eye movements can indicate how individuals direct their attention to build an overall impression of a painting. Previous studies have shown that children tend to be guided by visual stimuli — bottom-up processes — whereas adults are more influenced by their prior knowledge or beliefs — top-down factors — to guide perception.

Whilst previous research in this area has been conducted in artificial settings, the authors of the present study tracked the eye movements of 12 adults and 12 children when viewing five paintings in a museum setting at the Van Gogh Museum, Amsterdam. The paintings were chosen to be new to the participants, whose gaze patterns were recorded both before and after hearing descriptions of the paintings. The researchers found that adults made an average of 63 fixations on the surface of the paintings during the 30 second viewing period, while children made an average of 53 fixations.

When viewing the paintings freely, the children focused first on the stand-out, ‘salient’ features of the paintings, indicating bottom-up processing. However, after hearing the painting descriptions, they paid attention to less noticeable features first, indicating that their new knowledge was influencing their attention in top-down processing. Adults appeared to focus initially on non-salient features both before and after hearing a description, suggesting that top-down processing was dominating their viewing processes throughout.

This research suggests that it is possible to investigate eye movements in museums, and analyses using larger samples could continue to investigate how children and adults perceive art in this natural setting.

Story Source:

Materials provided by PLOS. Note: Content may be edited for style and length.

Three ways neuroscience can advance the concussion debate

While concussion awareness has improved over the past decade, understanding the nuances of these sports injuries, their severity, symptoms, and treatment, is still a work in progress. In the June 21 issue of Neuron, UCLA neurologists and neurotraumatologists review the science of concussions and outline several areas where neuroscience and clinical research can help create consensus in the field: definitions of what acute and chronic concussions are, diagnostics, and management and treatment.

“For patients, you have to be able to provide the best care even if you don’t have the exact research study to prove what you’re doing, and you also have to address the information that the patients and their families are getting through the media,” says Christopher C. Giza (@griz1), Director of the UCLA Steve Tisch BrainSPORT program and Professor of Pediatrics, Neurology, and Neurosurgery at the University of California, Los Angeles. “That’s a discussion that’s hard to have because people naturally look for very short answers and sound bites, and it’s far more complex than that.”

1. Let’s Agree on the Definition of a “Concussion,” both Acute and Chronic

The Centers for Disease Control and Prevention reported about 2.8 million traumatic-brain-injury-related emergency department visits, hospitalizations, and deaths in the United States in 2013. However, researchers disagree about whether all concussions and traumatic brain injuries are equal. A concussion may be characterized by wooziness, disorientation, incoordination, headache, and other “typical” symptoms after a hit to the head and may occur even with only rapid back-and-forth motion of the head and neck. Some have postulated that subconcussive injuries with repetitive head impacts in the absence of symptoms may result in cumulative problems.

Giza says that although a concussion and a more severe traumatic brain injury may sound similar, and although they may share some symptoms, the overlap between the two is not clear. Additionally, the determination of whether someone has a concussion or a mild traumatic brain injury or something else is largely subjective and often relies heavily on symptom reporting from the patient.

“One of the things that will help us on the acute diagnosis of concussion would be if we moved away from the current understanding of concussion as a black-or-white, yes-or-no answer,” Giza says. “There are scenarios when we can be more certain, clinically, that we’re making the correct diagnosis. If there’s a clear impact event, there’s a typical constellation of symptoms that occurs in temporal relationship to the impact, and that symptom pattern has a time course consistent with what we see in concussion in terms of peaking early followed by gradual improvement, then we can diagnose confidently.”

Giza notes that not every symptom that occurs after a hit to the head is related to a concussion, which is why formal diagnosis requires an experienced clinician. Similarly, not all chronic symptoms are referable to a distant concussion or head impact. Understanding the physiological mechanisms underlying concussions and concussive symptoms (both acute and chronic) can lead to better diagnostic tests and potentially point the way to individualized treatment plans.

2. Realize that Diagnosis Is Critical to Treatment

Some concussion patients experience atypical symptoms, or usual symptoms that worsen later on instead of improving. One potential pitfall of concussion diagnosis is that some symptoms may appear to be concussion related but could actually be a symptom of something else, such as migraine, dehydration, hyperthermia, neck strain, or a more severe brain injury.

“We need to prioritize what we think sounds like a definite concussion vs. probable vs. possible, and even recognize that there are syndromes with neurological symptoms that occur after impact that are something more than a concussion,” Giza says. “There are rare patients who have cerebral edema — sometimes, we call it second impact syndrome, which is another ambiguous term — but that’s not a concussion. Patients who very rarely get a subdural hematoma as a consequence of a sports injury sometimes are portrayed as having had a concussion, but a subdural hematoma or an epidural hematoma is something much more than what we would diagnose clinically as a concussion.”

There are also computerized tests, and soon, hopefully blood tests, brain imaging, and electrical tests that can help diagnose concussion or follow recovery, but because concussions are “the most complex injury to the most complex organ” in the human body, there is not necessarily a magic bullet, catch-all, perfect method for diagnosing concussions.

3. Focus on Animal Research to Discover Better Treatment Plans

“In the clinical concussion world, many of the research protocols are observational, but I think laboratory neuroscience can inform in terms of how important is the time between injuries and how much cognitive or physical activity should there be during the recovery period,” Giza says. Focusing on animal models is one way neuroscience can help accelerate concussion and traumatic brain injury research, particularly in the investigation of how consequences of repetitive injury differ when they occur very close in time versus when they are spaced out, and in determining when the brain is physiologically ready to return to activity.

“Animal models are also well suited for looking at long-term processes set into play by the acute injury,” Giza says. “So animals can be subjected to repetitive injuries when they’re relatively young — at least in rodent models, within a year or two, those animals become ‘old’ animals, and we can look to see along that time course whether mechanisms of neurodegeneration have been activated, and whether that leads to deficits over time. Those studies can be done in the time course of months to years rather than decades, as would be necessary for clinical studies. If we do things right in the coming years, we can really change the game in our understanding about concussion and brain injuries.”

Could flu during pregnancy raise risk for autism?

Researchers at the Center for Infection and Immunity (CII) at Columbia University’s Mailman School of Public Health found no evidence that laboratory diagnosis alone of maternal influenza during pregnancy is associated with risk of autism spectrum disorder (ASD) in the offspring. They did, however, find a trend toward risk in mothers with both a laboratory diagnosis of influenza and self-reported symptoms of severe illness. This trend did not achieve statistical significance.

The study is the first to assess the risk for ASD based on laboratory-verified maternal influenza infection, not just survey data or medical records. Results appear in the journal mSphere.

The researchers analyzed questionnaires and blood samples from 338 mothers of children with ASD and 348 matched controls, as part of the Autism Birth Cohort Study, a prospective birth cohort in Norway. Blood samples were collected from mothers at mid-pregnancy and after delivery. Mothers also reported on their cold and flu symptoms during pregnancy.

Positive blood tests for past influenza A or influenza B infection were not associated with increased ASD risk. However, when the researchers combined reports of influenza-like illness with the blood test results, they found a substantial, albeit statistically non-significant, increase in ASD risk. While random error could be responsible for the finding, the authors caution against dismissing it outright because of the magnitude of the association: children born to mothers with laboratory-verified flu and matching symptoms had nearly double the odds of later being diagnosed with ASD compared to children of mothers without flu and symptoms.
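“Nearly double the odds” refers to an odds ratio from the case-control comparison. With hypothetical cell counts (invented for illustration; the study's actual exposure counts are not given in this article), the calculation looks like this:

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio for a 2x2 case-control table:
    odds of exposure among cases divided by odds of exposure among controls."""
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# Hypothetical: 20 of 338 case mothers vs 11 of 348 control mothers
# had lab-verified flu plus self-reported symptoms.
or_flu = odds_ratio(20, 318, 11, 337)
assert 1.8 < or_flu < 2.0   # roughly double the odds, as in the reported trend
```

An odds ratio near 2 can still fail to reach statistical significance when the exposed groups are small, which is exactly the situation the authors describe.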

“Symptoms are important because they may indicate the extent to which the mother’s immune system is fighting the flu,” says first author Milada Mahic, a post-doctoral research scientist at Center for Infection and Immunity and the Norwegian Institute of Public Health. “If infection is contributing to increased risk, it likely comes from inflammation related to maternal immune system response rather than the flu infection itself. Further research is warranted.”

The flu-ASD finding aligns with past research suggesting that admission to hospital for maternal viral infection in the first trimester and maternal bacterial infection in the second trimester is associated with increased risk of ASD.

In other recent studies from the Autism Birth Cohort Study, researchers reported that women actively infected with genital herpes during early pregnancy had twice the odds of giving birth to a child later diagnosed with ASD. Another new study reports maternal fever during pregnancy may raise risk for the child developing ASD.

“The fetal brain undergoes rapid changes that make it vulnerable to a robust maternal immune response,” says senior author W. Ian Lipkin, director of CII and John Snow Professor of Epidemiology at the Mailman School. “That said, mothers should not conclude that having an infection during pregnancy means that their child will develop autism. It may simply be one among many risk factors.”

Story Source:

Materials provided by Columbia University’s Mailman School of Public Health. Note: Content may be edited for style and length.

Neurons that regenerate, neurons that die

Cells genetically marked with GFP are viewed on a flat-mounted retina. The axons, or fibers, lead to the optic nerve head (the round structure in the top right corner) and then exit the eyeball into the optic nerve. The alpha RGCs are killed by Sox11 despite its pro-regenerative effect on some other, still undefined, type(s) of RGCs.

Credit: Image courtesy of Fengfeng Bei, Brigham and Women’s Hospital

The optic nerve is vital for vision — damage to this critical structure can lead to severe and irreversible loss of vision. Fengfeng Bei, PhD, a principal investigator in the Department of Neurosurgery at Brigham and Women’s Hospital, and his colleagues want to understand why the optic nerve — as well as other parts of the central nervous system including the brain and spinal cord — cannot be repaired by the body. In particular, Bei’s lab focuses on axons, the long processes of neurons that serve as signaling wires. In a new study published in Neuron, Bei, Michael Norsworthy in Zhigang He’s lab at Boston Children’s Hospital and colleagues report on a transcription factor that they have found that can help certain neurons regenerate, while simultaneously killing others. Unraveling exactly which signals can help or hinder axon regeneration may eventually lead to new and precise treatment strategies for restoring vision or repairing injury.

“Our long term goal is to repair brain, spinal cord or eye injury by regenerating functional connections,” said Bei. “The goal will be to regenerate as many subtypes of neurons as possible. Our results here suggest that different subtypes of neurons may respond differently to the same factors. This may mean that when we reach the point of developing new therapies, we may need to consider combination therapies for optimal recovery.”

Previous studies using the optic nerve as a model for injury have found that manipulating transcription factors — the master control switches of genes — might represent a promising avenue for stimulating axon regeneration. In the current study, researchers focused on transcription factors likely to influence the early development of retinal ganglion cells (RGCs). There are at least 30 types of RGCs in the human eye, which control different aspects of vision, and the researchers were interested in the effects of transcription factors on various types of RGCs. Using a mouse model of optic nerve injury, the research team found that increasing the production of a transcription factor known as Sox11 appeared to help axons grow past the site of injury. However, the team observed that the very same transcription factor also efficiently killed a type of RGC known as alpha-RGCs, which would otherwise preferentially survive the injury.

Bei notes that the heterogeneity of the nervous system — the inclusion of different cells with different properties and functions — will be an important consideration as researchers work to reprogram and, ultimately, restore the optic nerve, brain or spinal cord after injury.


Story Source:

Materials provided by Brigham and Women’s Hospital. Note: Content may be edited for style and length.


Journal Reference:

  1. Norsworthy M, et al. Sox11 Expression Promotes Regeneration of Some Retinal Ganglion Cell Types but Kills Others. Neuron, June 2017. DOI: 10.1016/j.neuron.2017.05.035

Cite This Page:

Brigham and Women’s Hospital. (2017, June 21). Neurons that regenerate, neurons that die: Untangling the complex puzzle of optic nerve regeneration. ScienceDaily. Retrieved June 21, 2017 from www.sciencedaily.com/releases/2017/06/170621132913.htm

Forgetting can make you smarter

For most people, having a good memory means being able to remember more information clearly for long periods of time. For neuroscientists, too, the inability to remember was long believed to represent a failure of the brain’s mechanisms for storing and retrieving information.

But according to a new review paper from Paul Frankland, a senior fellow in CIFAR’s Child & Brain Development program, and Blake Richards, an associate fellow in the Learning in Machines & Brains program, our brains are actively working to forget. In fact, the two University of Toronto researchers propose that the goal of memory is not to transmit the most accurate information over time, but to guide and optimize intelligent decision making by only holding on to valuable information.

“It’s important that the brain forgets irrelevant details and instead focuses on the stuff that’s going to help make decisions in the real world,” says Richards.

The review paper, published this week in the journal Neuron, looks at the literature on remembering, known as persistence, and the newer body of research on forgetting, or transience. The recent increase in research into the brain mechanisms that promote forgetting is revealing that forgetting is just as important a component of our memory system as remembering.

“We find plenty of evidence from recent research that there are mechanisms that promote memory loss, and that these are distinct from those involved in storing information,” says Frankland.

One of these mechanisms is the weakening or elimination of synaptic connections between neurons in which memories are encoded. Another mechanism, supported by evidence from Frankland’s own lab, is the generation of new neurons from stem cells. As new neurons integrate into the hippocampus, the new connections remodel hippocampal circuits and overwrite memories stored in those circuits, making them harder to access. This may explain why children, whose hippocampi are producing more new neurons, forget so much information.

It may seem counterintuitive that the brain would expend so much energy creating new neurons to the detriment of memory. Richards, whose research applies artificial intelligence (AI) theories to understanding the brain, looked to principles of learning from AI for answers. Using these principles, Frankland and Richards frame an argument that the interaction between remembering and forgetting in the human brain allows us to make more intelligent memory-based decisions.

Forgetting does this in two ways. First, it allows us to adapt to new situations by letting go of outdated and potentially misleading information that can no longer help us navigate changing environments.

“If you’re trying to navigate the world and your brain is constantly bringing up multiple conflicting memories, that makes it harder for you to make an informed decision,” says Richards.

The second way forgetting facilitates decision making is by allowing us to generalize past events to new ones. In artificial intelligence this principle is called regularization and it works by creating simple computer models that prioritize core information but eliminate specific details, allowing for wider application.

Memories in the brain work in a similar way. When we only remember the gist of an encounter as opposed to every detail, this controlled forgetting of insignificant details creates simple memories which are more effective at predicting new experiences.
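The regularization analogy can be made concrete with a toy sketch. The example below is purely illustrative and is not from the paper: a 1-nearest-neighbour classifier that remembers every training example, including one noisy, mislabeled point, is compared with a nearest-centroid classifier that keeps only the "gist" (one mean per class). On a new point near the noisy example, the detail-memorizing model is misled while the gist generalizes correctly.

```python
# Purely illustrative; not from the paper. "Regularization by
# forgetting": 1-nearest-neighbour memorizes every training example,
# including one noisy, mislabeled point, while nearest-centroid keeps
# only the "gist" (one mean per class) and generalizes better.

def dist2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

train = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
         ((4, 4), "A"),                     # noisy, mislabeled detail
         ((5, 5), "B"), ((5, 6), "B"), ((6, 5), "B")]

def nn_predict(x):
    # Remember everything: label of the single closest example.
    return min(train, key=lambda t: dist2(x, t[0]))[1]

def centroid_predict(x):
    # Remember only the gist: one mean point per class.
    cents = {}
    for lab in ("A", "B"):
        pts = [p for p, l in train if l == lab]
        cents[lab] = (sum(p[0] for p in pts) / len(pts),
                      sum(p[1] for p in pts) / len(pts))
    return min(cents, key=lambda lab: dist2(x, cents[lab]))

test_point = (4.2, 4.2)              # clearly class-B territory
print(nn_predict(test_point))        # "A": misled by the memorized outlier
print(centroid_predict(test_point))  # "B": the gist generalizes
```

Dropping the individual examples in favor of class means plays the same role as the controlled forgetting described above: the simplified memory predicts new cases better than the exhaustive one.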

Ultimately, these mechanisms are cued by the environment we are in. A constantly changing environment may require that we remember less. For example, a cashier who meets many new people every day will only remember the names of her customers for a short period of time, whereas a designer who meets with her clients regularly will retain that information longer.

“One of the things that distinguishes an environment where you’re going to want to remember stuff versus an environment where you want to forget stuff is this question of how consistent the environment is and how likely things are to come back into your life,” says Richards.

Similarly, research shows that episodic memories of things that happen to us are forgotten more quickly than general knowledge that we access on a daily basis, supporting the old adage that if you don’t use it, you lose it. But in the context of making better memory-based decisions, you may be better off for it.


Parkinson’s is partly an autoimmune disease, study finds

Researchers have found the first direct evidence that autoimmunity — in which the immune system attacks the body’s own tissues — plays a role in Parkinson’s disease, the neurodegenerative movement disorder. The findings raise the possibility that the death of neurons in Parkinson’s could be prevented by therapies that dampen the immune response.

The study, led by scientists at Columbia University Medical Center (CUMC) and the La Jolla Institute for Allergy and Immunology, was published today in Nature.

“The idea that a malfunctioning immune system contributes to Parkinson’s dates back almost 100 years,” said study co-leader David Sulzer, PhD, professor of neurobiology (in psychiatry, neurology and pharmacology) at CUMC. “But until now, no one has been able to connect the dots. Our findings show that two fragments of alpha-synuclein, a protein that accumulates in the brain cells of people with Parkinson’s, can activate the T cells involved in autoimmune attacks.

“It remains to be seen whether the immune response to alpha-synuclein is an initial cause of Parkinson’s, or if it contributes to neuronal death and worsening symptoms after the onset of the disease,” said study co-leader Alessandro Sette, Dr. Biol. Sci., professor in the Center for Infectious Disease at La Jolla Institute for Allergy and Immunology in La Jolla, Calif. “These findings, however, could provide a much-needed diagnostic test for Parkinson’s disease, and could help us to identify individuals at risk or in the early stages of the disease.”

Scientists once thought that neurons were protected from autoimmune attacks. However, in a 2014 study, Dr. Sulzer’s lab demonstrated that dopamine neurons (those affected by Parkinson’s disease) are vulnerable because they have proteins on the cell surface that help the immune system recognize foreign substances. As a result, they concluded, T cells had the potential to mistake neurons damaged by Parkinson’s disease for foreign invaders.

The new study found that T cells can be tricked into thinking dopamine neurons are foreign by the buildup of damaged alpha-synuclein proteins, a key feature of Parkinson’s disease. “In most cases of Parkinson’s, dopamine neurons become filled with structures called Lewy bodies, which are primarily composed of a misfolded form of alpha-synuclein,” said Dr. Sulzer.

In the study, the researchers exposed blood samples from 67 Parkinson’s disease patients and 36 age-matched healthy controls to fragments of alpha-synuclein and other proteins found in neurons. They analyzed the samples to determine which, if any, of the protein fragments triggered an immune response. Little immune cell activity was seen in blood samples from the controls. In contrast, T cells in patients’ blood samples, which had apparently been primed to recognize alpha-synuclein from past exposure, showed a strong response to the protein fragments. In particular, the immune response was associated with a common form of a gene found in the immune system, which may explain why many people with Parkinson’s disease carry this gene variant.

Dr. Sulzer hypothesizes that autoimmunity in Parkinson’s disease arises when neurons are no longer able to get rid of abnormal alpha-synuclein. “Young, healthy cells break down and recycle old or damaged proteins,” he said. “But that recycling process declines with age and with certain diseases, including Parkinson’s. If abnormal alpha-synuclein begins to accumulate, and the immune system hasn’t seen it before, the protein could be mistaken as a pathogen that needs to be attacked.”

The Sulzer and Sette labs are now analyzing these responses in additional patients, and are working to identify the molecular steps that lead to the autoimmune response in animal and cellular models.

“Our findings raise the possibility that an immunotherapy approach could be used to increase the immune system’s tolerance for alpha-synuclein, which could help to ameliorate or prevent worsening symptoms in Parkinson’s disease patients,” said Dr. Sette.

Story Source:

Materials provided by Columbia University Medical Center. Note: Content may be edited for style and length.

New statistical method finds shared ancestral gene variants involved in autism's cause

The way you measure things has a lot to do with the value of the results you get. If you want to know how much a blueberry weighs, don’t use a bathroom scale; it isn’t sensitive enough to register a meaningful result.

While much more is at stake, the same principle applies when scientists try to measure genetic factors that cause disease. In a paper appearing in the Proceedings of the National Academy of Sciences, geneticist Michael Wigler of Cold Spring Harbor Laboratory (CSHL), Kenny Ye (Albert Einstein) and colleagues use a new mathematical method to assess the role of genetic variants in determining a trait — in this case, autism. (Autism is to be understood as interchangeable with autism spectrum disorder, or ASD, in this story.)

The new approach finds what Wigler believes is the first rigorous statistical evidence that ancient variations in the human genome contribute to autism — each, most likely, having a very small effect. (Devastating variants tend to be recent and are regularly weeded out of the genome; those who carry them are less likely to have offspring, meaning the damaging variant is less likely to be transmitted.)

Past studies have sought to identify causal autism variants by comparing the genomes of affected people and unaffected people who are not related to them. Professor Wigler is skeptical of the significance of the results obtained with such “case/control” studies. He argues that ethnic and other biases cannot be completely teased out, and that they produce results whose statistical significance cannot be properly assessed.

The method Wigler and colleagues used in the new study was family-based. The team analyzed data on common variants from two cohorts. One cohort consisted of “discordant siblings,” one of whom has autism and the other does not. These discordant pairs, gathered in the Simons Simplex Collection (SSC), were compared with the genomes of individuals with autism collected by the Autism Genetic Resource Exchange (AGRE). In all, more than 16,000 genomes from people in nearly 4,000 families were used in the analysis.

By comparing the discordant siblings in the SSC with unrelated people with autism in the AGRE collection, the team was able to find a clear signal of ancient variants contributing to autism, shared among those with the disorder in both collections — who, by definition, are not related.

Those in the AGRE sample — all “affected” — were genetically more like the affected children in the Simons Collection than their unaffected siblings.
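A minimal conceptual sketch of the comparison being described, using made-up toy genotypes (the paper's actual statistic and data are not reproduced here): score how much an unrelated affected individual shares common variants with an affected child versus with that child's unaffected sibling.

```python
# Conceptual toy only; not the paper's statistic or data. Question:
# do unrelated affected individuals share more common variants with
# each other than an affected child shares with his or her own
# unaffected sibling? Genotypes are 0/1 vectors over eight variant
# sites (sites 0-2 play the role of ancient risk variants).

def sharing(u, v):
    # Fraction of carried sites shared by two genotype vectors
    # (a crude stand-in for a kinship / identity-by-state score).
    both = sum(1 for a, b in zip(u, v) if a == 1 and b == 1)
    either = sum(1 for a, b in zip(u, v) if a == 1 or b == 1)
    return both / either if either else 0.0

ssc_affected  = [1, 1, 1, 0, 1, 0, 0, 0]   # affected child (SSC)
ssc_sibling   = [0, 0, 1, 0, 1, 0, 0, 0]   # unaffected sibling
agre_affected = [1, 1, 0, 1, 0, 0, 1, 0]   # unrelated affected (AGRE)

cross_affected = sharing(agre_affected, ssc_affected)
cross_sibling  = sharing(agre_affected, ssc_sibling)
print(f"AGRE vs affected sharing: {cross_affected:.2f}")
print(f"AGRE vs sibling sharing:  {cross_sibling:.2f}")
# A consistent excess of the first score over the second, aggregated
# across thousands of families, is the signal of shared ancestral
# risk variants.
```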

For Wigler, there is more at stake in the result. “There is more power in family studies than we actually know how to tap into at this point,” he says. “There is more information in a family structure than in the isolated person who’s got a disorder. Certainly this is true when dealing with de novo or germline mutation, but true even when examining transmission, as we did in the current study.”

Story Source:

Materials provided by Cold Spring Harbor Laboratory. Note: Content may be edited for style and length.

Researchers uncover genetic gains and losses in Tourette syndrome

Researchers have identified structural changes in two genes that increase the risk of developing Tourette syndrome, a neurological disorder characterized by involuntary motor and vocal tics. The study, published in the journal Neuron, was supported by the National Institute of Neurological Disorders and Stroke (NINDS), part of the National Institutes of Health.

“Our study is the tip of the iceberg in understanding the complex biological mechanisms underlying this disorder. With recent advancements in genetic research, we are at the cusp of identifying many genes involved in Tourette syndrome,” said Jeremiah Scharf, M.D., Ph.D., assistant professor of neurology and psychiatry at Harvard Medical School and Massachusetts General Hospital, Boston, and co-corresponding author of the study.

The research was part of an international collaboration co-led by Dr. Scharf; Giovanni Coppola, M.D., professor of psychiatry and neurology at the University of California, Los Angeles; Carol Mathews, M.D., professor of psychiatry at the University of Florida in Gainesville; and Peristera Paschou, Ph.D., associate professor in the department of biological sciences at Purdue University, West Lafayette, Indiana.

The scientific team conducted genetic analyses on 2,434 individuals with Tourette syndrome and compared them to 4,093 controls, focusing on copy number variants, changes in the genetic code resulting in deletions or duplications in sections of genes. Their results determined that deletions in the NRXN1 gene or duplications in the CNTN6 gene were each associated with an increased risk of Tourette syndrome. In the study, approximately 1 in 100 people with Tourette syndrome carried one of those genetic variants.
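The kind of case/control association such a comparison implies can be sketched with a one-sided Fisher's exact test in pure Python. The carrier counts below are hypothetical, chosen only to echo the article's rough "1 in 100" figure for cases; the paper's exact per-gene numbers and statistical method are not given here.

```python
from fractions import Fraction
from math import comb

def fisher_exact_greater(a, b, c, d):
    # One-sided Fisher's exact test on the 2x2 table
    # [[a, b], [c, d]] = [[case carriers, case non-carriers],
    #                     [control carriers, control non-carriers]]:
    # probability of observing >= a case carriers under the
    # hypergeometric null of no association. Exact big-integer
    # arithmetic via Fraction avoids float overflow.
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, row1)
    p = Fraction(0)
    for x in range(a, min(row1, col1) + 1):
        p += Fraction(comb(col1, x) * comb(n - col1, row1 - x), denom)
    return float(p)

# Hypothetical carrier counts, for illustration only.
case_carriers, cases    = 25, 2434
ctrl_carriers, controls = 8, 4093

p = fisher_exact_greater(case_carriers, cases - case_carriers,
                         ctrl_carriers, controls - ctrl_carriers)
odds_ratio = (case_carriers / (cases - case_carriers)) / \
             (ctrl_carriers / (controls - ctrl_carriers))
print(f"odds ratio = {odds_ratio:.1f}, one-sided p = {p:.1e}")
```

With these illustrative counts, carriers are several times more common among cases than controls, and the exact test rejects the null of no association.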

NRXN1 and CNTN6 are important during brain development and produce molecules that help brain cells form connections with one another. In addition, the two genes are turned on in areas that are part of the cortico-striatal-thalamo-cortical circuit, a loop of brain cells connecting the cortex to specific regions involved in processing emotions and movement. Studies suggest that errors in the circuit may play a role in Tourette syndrome.

Copy number variants in NRXN1 have been implicated in other neurological disorders including epilepsy and autism, but this is the first time that scientists have linked copy number variants in CNTN6 to a specific disease.

“Tourette syndrome has a very strong genetic component but identifying the causal genes has been challenging,” said Jill Morris, Ph.D., program director at NINDS. “As we find genes involved in Tourette syndrome and understand more about its biology, we move closer to our ultimate goal of developing treatments to help children affected by the disease.”

Although involuntary tics are very common in children, they persist and worsen over time in people with Tourette syndrome. Tics associated with Tourette syndrome appear in children, peak during the early teenage years and often disappear in adulthood. Many people with Tourette syndrome experience other brain disorders including attention deficit disorder and obsessive-compulsive disorder.

Drs. Scharf, Coppola, Mathews and Paschou are planning to take a closer look at the mutations using animal and cellular models. More research is needed to learn about ways in which the genes contribute to development of Tourette syndrome and whether they may be potential therapeutic targets.

The brain mechanism behind multitasking

Although “multitasking” is a popular buzzword, research shows that only 2% of the population actually multitasks efficiently. Most of us just shift back and forth between different tasks, a process that requires our brains to refocus time and time again — and reduces overall productivity by a whopping 40%.

New Tel Aviv University research identifies a brain mechanism that enables more efficient multitasking. The key to this is “reactivating the learned memory,” a process that allows a person to more efficiently learn or engage in two tasks in close conjunction.

“The mechanism may have far-reaching implications for the improvement of learning and memory functions in daily life,” said Dr. Nitzan Censor of TAU’s School of Psychological Sciences and Sagol School of Neuroscience. “It also has clinical implications. It may support rehabilitation efforts following brain traumas that impact the motor and memory functions of patients, for example.”

The research, conducted by TAU student Jasmine Herszage, was published in Current Biology.

Training the brain

“When we learn a new task, we have great difficulty performing it and learning something else at the same time. For example, performing a motor task A (such as performing a task with one hand) can reduce performance in a second task B (such as performing a task with the other hand) conducted in close conjunction to it. This is due to interference between the two tasks, which compete for the same brain resources,” said Dr. Censor. “Our research demonstrates that the brief reactivation of a single learned memory, in appropriate conditions, enables the long-term prevention of, or immunity to, future interference in the performance of another task performed in close conjunction.”

The researchers first taught student volunteers to perform a sequence of motor finger movements with one hand, by learning to tap onto a keypad a specific string of digits appearing on a computer screen as quickly and accurately as possible. After acquiring this learned motor memory, the memory was reactivated on a different day, during which the participants were required to briefly engage with the task — this time with an addition of brief exposure to the same motor task performed with their other hand. By utilizing the memory reactivation paradigm, the subjects were able to perform the two tasks without interference.

By uniquely pairing the brief reactivation of the original memory with the exposure to a new memory, long-term immunity to future interference was created, demonstrating a prevention of interference even a month after the exposures.

“The second task is a model of a competing memory, as the same sequence is performed using the novel, untrained hand,” said Dr. Censor. “Existing research from studies on rodents showed that reactivation of a fear memory opened up a window of several hours during which the brain was susceptible to modification.

“In other words, when a learned memory is reactivated by a brief cue or reminder, a unique time-window opens up. This presents an opportunity to interact with the memory and update it — degrade, stabilize or strengthen its underlying brain neural representations,” Dr. Censor said. “We utilized this knowledge to discover a mechanism that enabled long-term stabilization, and prevention of task interference, in humans.”

The researchers are eager to understand more about this intriguing brain mechanism. “Is it the result of hardwired circuitry in the brain, which allows different learning episodes to be integrated? And how is this circuitry represented in the brain? By functional connections between distinct brain regions? It is also essential to test whether the identified mechanism is relevant for other types of tasks and memories, not only motor tasks,” Dr. Censor concluded.

Story Source:

Materials provided by American Friends of Tel Aviv University. Note: Content may be edited for style and length.


New technique makes brain scans better

People who suffer a stroke often undergo a brain scan at the hospital, allowing doctors to determine the location and extent of the damage. Researchers who study the effects of strokes would love to be able to analyze these images, but the resolution is often too low for many analyses.

To help scientists take advantage of this untapped wealth of data from hospital scans, a team of MIT researchers, working with doctors at Massachusetts General Hospital and many other institutions, has devised a way to boost the quality of these scans so they can be used for large-scale studies of how strokes affect different people and how they respond to treatment.

“These images are quite unique because they are acquired in routine clinical practice when a patient comes in with a stroke,” says Polina Golland, an MIT professor of electrical engineering and computer science. “You couldn’t stage a study like that.”

Using these scans, researchers could study how genetic factors influence stroke survival or how people respond to different treatments. They could also use this approach to study other disorders such as Alzheimer’s disease.

Golland is the senior author of the paper, which will be presented at the Information Processing in Medical Imaging conference during the week of June 25. The paper’s lead author is Adrian Dalca, a postdoc in MIT’s Computer Science and Artificial Intelligence Laboratory. Other authors are Katie Bouman, an MIT graduate student; William Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering at MIT; Natalia Rost, director of the acute stroke service at MGH; and Mert Sabuncu, an assistant professor of electrical and computer engineering at Cornell University.

Filling in data

Scanning the brain with magnetic resonance imaging (MRI) produces many 2-D “slices” that can be combined to form a 3-D representation of the brain.

For clinical scans of patients who have had a stroke, images are taken rapidly due to limited scanning time. As a result, the scans are very sparse, meaning that the image slices are taken about 5-7 millimeters apart. (The in-slice resolution is 1 millimeter.)

For scientific studies, researchers usually obtain much higher-resolution images, with slices only 1 millimeter apart, which requires keeping subjects in the scanner for a much longer period of time. Scientists have developed specialized computer algorithms to analyze these images, but these algorithms don’t work well on the much more plentiful but lower-quality patient scans taken in hospitals.

The MIT researchers, along with their collaborators at MGH and other hospitals, were interested in taking advantage of the vast numbers of patient scans, which would allow them to learn much more than can be gleaned from smaller studies that produce higher-quality scans.

“These research studies are very small because you need volunteers, but hospitals have hundreds of thousands of images. Our motivation was to take advantage of this huge set of data,” Dalca says.

The new approach involves essentially filling in the data that is missing from each patient scan. This is done by taking information from the entire set of scans and using it to recreate the anatomical features missing from any individual scan.

“The key idea is to generate an image that is anatomically plausible, that to an algorithm looks like one of those research scans, and that is completely consistent with the clinical images that were acquired,” Golland says. “Once you have that, you can apply every state-of-the-art algorithm that was developed for the beautiful research images, run the same analysis, and get the results as if these were the research images.”

Once these research-quality images are generated, researchers can then run a set of algorithms designed to help with analyzing anatomical features. These include the alignment of slices and a process called skull-stripping that eliminates everything but the brain from the images.

Throughout this process, the algorithm keeps track of which pixels came from the original scans and which were filled in afterward, so that analyses done later, such as measuring the extent of brain damage, can be performed only on information from the original scans.

“In a sense, this is a scaffold that allows us to bring the image into the collection as if it were a high-resolution image, and then make measurements only on the pixels where we have the information,” Golland says.
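The two ideas just described can be sketched in simplified form: synthesizing the missing slices and keeping a mask of which data are original. This toy version fills the gaps by plain linear interpolation between acquired slices, whereas the actual method synthesizes anatomically plausible slices learned from the whole scan collection.

```python
# Simplified sketch, not the authors' algorithm: fill the gaps
# between sparse MRI slices by linear interpolation, and keep the
# original-vs-synthesized mask that later measurements rely on.

def fill_in(slices, factor):
    """Insert `factor - 1` interpolated slices between each pair of
    acquired slices. Each slice is a flat list of pixel values.
    Returns (dense_stack, is_original_mask)."""
    dense, mask = [], []
    for i in range(len(slices) - 1):
        a, b = slices[i], slices[i + 1]
        dense.append(a); mask.append(True)            # acquired
        for k in range(1, factor):
            t = k / factor
            dense.append([(1 - t) * x + t * y for x, y in zip(a, b)])
            mask.append(False)                        # synthesized
    dense.append(slices[-1]); mask.append(True)
    return dense, mask

# Two acquired slices ~6 mm apart, upsampled to ~1 mm spacing:
sparse = [[0.0, 10.0], [6.0, 4.0]]
dense, mask = fill_in(sparse, factor=6)
print(len(dense), mask.count(True))   # 7 slices, only 2 are original
```

Analyses such as measuring the extent of damage would then be restricted to the slices where the mask is True, exactly as the scaffold metaphor describes.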

Higher quality

Now that the MIT team has developed this technique for enhancing low-quality images, they plan to apply it to a large set of stroke images obtained by the MGH-led consortium, which includes about 4,000 scans from 12 hospitals.

“Understanding spatial patterns of the damage that is done to the white matter promises to help us understand in more detail how the disease interacts with cognitive abilities of the person, with their ability to recover from stroke, and so on,” Golland says.

The researchers also hope to apply this technique to scans of patients with other brain disorders.

“It opens up lots of interesting directions,” Golland says. “Images acquired in routine medical practice can give anatomical insight, because we lift them up to a quality that the algorithms can analyze.”

Find the report online at: http://www.mit.edu/~adalca/files/papers/ipmi2017_patchSynthesis.pdf