Many stroke patients do not receive life-saving therapy

Although tPA treatment for stroke is increasing over time, minorities, women and residents of 11 southeastern states that make up the “Stroke Belt” are left behind when it comes to receiving tPA, according to research presented at the American Stroke Association’s International Stroke Conference 2017.

Tissue plasminogen activator, or tPA, is the only treatment approved by the Food and Drug Administration for ischemic stroke, the most common kind of stroke. If administered within 4.5 hours of the first signs of stroke, tPA can dissolve the blood clot and restore blood flow to the affected part of the brain.

“Hospitals, governments and other organizations are undertaking efforts to increase the number of patients who receive tPA,” said Tracy Madsen, M.D., Sc.M., lead researcher and Assistant Professor of Emergency Medicine at Brown University in Rhode Island. “We wanted to see if these quality improvement efforts were making a difference.”

The study reviewed records from the National Inpatient Sample of 563,087 patients (median age 74) who had an ischemic stroke between 2005 and 2011. Overall, 3.8 percent of patients received tPA, with the number growing each year.

Researchers found:

  • Blacks were 38 percent less likely than whites to receive tPA.
  • Hispanics were 25 percent less likely than whites to receive tPA.
  • Women were 6 percent less likely than men to receive tPA.
  • Those with private insurance were 29 percent more likely to receive tPA compared to those with Medicare.
  • Residents of the “Stroke Belt” were 31 percent less likely than those living elsewhere to receive tPA.
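Figures like those in the list above typically come from a multivariable logistic regression, with each "X percent less likely" corresponding to an adjusted odds ratio. The release does not describe the study's actual model or covariates, so the sketch below, with an invented data file and column names, only illustrates the general approach.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level extract of the National Inpatient Sample;
# the file name and column names are invented for illustration.
df = pd.read_csv("nis_stroke_2005_2011.csv")

# Logistic regression of tPA receipt on patient characteristics.
model = smf.logit(
    "received_tpa ~ C(race) + C(sex) + C(insurance) + stroke_belt + age + C(year)",
    data=df,
).fit()

# Exponentiated coefficients are odds ratios: an odds ratio of 0.62 for a
# group reads as roughly "38 percent less likely" in the bullets above.
print(np.exp(model.params).round(2))
```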

Researchers also found that patients discharged from a designated stroke center or a hospital participating in the American Heart Association’s Get With The Guidelines® — Stroke program were more likely to receive tPA. Likewise, patients discharged from large, urban, or teaching hospitals were more likely to receive tPA than patients discharged from small, rural, or non-teaching hospitals.

Madsen said that the growing number of hospitals participating in the Get With The Guidelines® — Stroke program and legislation requiring emergency services to take stroke patients to regional stroke centers are likely to increase the number of patients receiving tPA.

“Some previous studies have found that up to three-fourths of patients arrived after the time window for tPA had closed,” Madsen said. “Many patients across all groups do not arrive at the hospital in time, but this is particularly true for underrepresented minorities.”

An important limitation of the study is that researchers could not determine from the data why patients did not receive tPA. The study also could not adjust for patient-level factors such as time to arrival and other tPA exclusion criteria, stroke severity, patient education, and socioeconomic status. “More research needs to be done to help figure out why many patients do not receive tPA,” Madsen said.

“There is also a lot of work to do in the realm of stroke education so that patients recognize stroke symptoms and call EMS immediately,” Madsen said.

According to the American Heart Association’s Heart Disease and Stroke Statistical Update, 795,000 Americans have a stroke every year, causing almost 129,000 deaths. Residents of the Stroke Belt — Alabama, Arkansas, Georgia, Indiana, Kentucky, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee and Virginia — suffer even higher rates of stroke and stroke death.

The National Inpatient Sample is the largest publicly available database of inpatient health care in the U.S.

Story Source:

Materials provided by American Heart Association. Note: Content may be edited for style and length.

 

Tired teens 4.5 times more likely to commit crimes as adults

Teenagers who self-report feeling drowsy mid-afternoon also tend to exhibit more anti-social behavior such as lying, cheating, stealing and fighting. Now, research from the University of Pennsylvania and the University of York, in the United Kingdom, shows that those same teens are 4.5 times more likely to commit violent crimes a decade and a half later.

“It’s the first study to our knowledge to show that daytime sleepiness during the teenage years is associated with criminal offending 14 years later,” said Adrian Raine, the Richard Perry University Professor with appointments in the departments of Criminology and Psychology in the School of Arts & Sciences and the Department of Psychiatry in Penn’s Perelman School of Medicine.

He and Peter Venables, an emeritus psychology professor at the University of York, published their findings in the Journal of Child Psychology and Psychiatry.

Raine had collected the data for this work 39 years earlier, as part of his Ph.D. research (studying under Venables), but had never analyzed it. Recently, he began noticing cross-sectional studies, those that analyze multiple behaviors at a single point in time, connecting sleep and behavioral problems in children. He dug out his old dissertation work to look for a link between those teenage measures and illegal behavior in adulthood.

“A lot of the prior research focused on sleep problems, but in our study we measured, very simply, how drowsy the child is during the day,” Raine said.

To get at this information, he tested 101 15-year-old boys from three secondary schools in the north of England. At the start and end of each lab session, which always ran from 1 to 3 p.m., he asked participants to rate their degree of sleepiness on a 7-point scale, with 1 being “unusually alert” and 7 being “sleepy.” He also measured brain-wave activity and sweat-rate responses to a tone played over headphones, which together indicate the level of attention a person pays to the stimulus. This represents brain-attentional function, Raine said.

Next he collected data about anti-social behavior, both self-reported from the study participants, as well as from two or three teachers who had worked with each teen for at least four years.

“Both are helpful. There are kids who don’t really want to talk about their anti-social behavior, and that’s where the teacher reports really come in handy,” Raine said. “Actually, the teacher and child reports correlated quite well in this study, which is not usual. Often, what the teacher says, what the parent says, what the child says — it’s usually three different stories.”

Finally, Raine conducted a computerized search at the Central Criminal Records Office in London to suss out which of the original 101 had a criminal record at age 29. Excluding minor violations, focusing instead on violent crimes and property offenses and only those crimes for which participants were convicted, the researchers learned that 17 percent of participants had committed a crime by that point in adulthood.

With these data in hand, Raine also incorporated the study participants’ socioeconomic status. He found a connection.

“Is it the case that low social class and early social adversity results in daytime drowsiness, which results in inattention or brain dysfunction, which results 14 years later in crime? The answer’s yes,” he said. “Think of a flow diagram from A to B to C to D. Think of a chain. There is a significant link.”

Put another way, he added: “Daytime drowsiness is associated with poor attention. Take poor attention as a proxy for poor brain function. If you’ve got poor brain functioning, you’re more likely to be criminal.”
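The article does not specify the statistical model behind that chain, but one conventional way to test a serial pathway like adversity to drowsiness to inattention to crime is a sequence of regressions, one per link. A minimal sketch, with an invented data file and variable names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-participant data; variable names are invented.
df = pd.read_csv("raine_cohort.csv")

# A -> B: does early social adversity predict daytime drowsiness?
a = smf.ols("drowsiness ~ adversity", data=df).fit()
# B -> C: does drowsiness predict inattention, over and above adversity?
b = smf.ols("inattention ~ drowsiness + adversity", data=df).fit()
# C -> D: does inattention predict adult crime, over and above the rest?
c = smf.logit("crime_by_29 ~ inattention + drowsiness + adversity", data=df).fit()

# Each link in the chain is supported if its coefficient is reliably nonzero.
for link, fit, term in [("A->B", a, "adversity"),
                        ("B->C", b, "drowsiness"),
                        ("C->D", c, "inattention")]:
    print(link, round(fit.params[term], 3), round(fit.pvalues[term], 4))
```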

The researchers stress that drowsiness in and of itself doesn’t always predispose a teenage boy to becoming anti-social. And many children with sleep problems do not become lawbreakers. But the researchers did find that those with sleepiness and a greater frequency of anti-social behavior during teenage years had higher odds of a life of crime later.

Knowing this could potentially help with a simple treatment plan for children with behavioral issues: Recommend they get more sleep at night.

“That could make a difference not just for anti-social behavior at school with these teenage kids but more importantly, with later serious criminal behavior,” Raine said. “More sleep won’t solve crime, but it might make a bit of a dent.”

OCD-like behavior linked to genetic mutation

A new Northwestern Medicine study found evidence suggesting how neural dysfunction in a certain region of the brain can lead to obsessive and repetitive behaviors much like obsessive-compulsive disorder (OCD).

Both in humans and in mice, there is a circuit in the brain called the corticostriatal connection that regulates habitual and repetitive actions. The study found certain synaptic receptors are important for the development of this brain circuit. If these receptors are eliminated in mice, they exhibit obsessive behavior, such as over-grooming.

This is the first strong evidence supporting a biological basis for how the genes that code for these receptors might affect obsessive or compulsive behaviors in humans. By demonstrating that these receptors have this role in development, the study gives researchers down the line a target for developing treatments for obsessive-compulsive behavior.

“Variations in these receptor genes are associated with human neurodevelopmental disorders, such as autism and neuropsychiatric disorders such as OCD,” said lead author Anis Contractor, associate professor of physiology at Northwestern University Feinberg School of Medicine. “People with OCD are known to have abnormalities in function of corticostriatal circuits.”

The study was published February 21 in the journal Cell Reports. The findings shed light on the importance of these receptors in the formation of the corticostriatal circuits, Contractor said.

“A number of studies have found mutations in the kainate receptor genes that are associated with OCD or other neuropsychiatric and neurodevelopmental disorders in humans,” said Contractor, who also is an associate professor of neurobiology at the Weinberg College of Arts and Sciences at Northwestern. “I believe our study, which found that a mouse with targeted mutations in these genes exhibited OCD-like behaviors, helps support the current genetic studies on neuropsychiatric and neurodevelopmental disorders in humans.”

The traits of OCD the mice in the study exhibited included over-grooming, continuously digging in their bedding and consistently failing a simple alternating-choice test in a maze.

Story Source:

Materials provided by Northwestern University. Original written by Kristin Samuelson. Note: Content may be edited for style and length.

 

Heart risks in middle age boost dementia risk later in life

People who have heart disease risks in middle age — such as diabetes, high blood pressure or smoking — are at higher risk for dementia later in life, according to research presented at the American Stroke Association’s International Stroke Conference 2017.

“The health of your vascular system in midlife is really important to the health of your brain when you are older,” said Rebecca F. Gottesman, M.D., Ph.D., lead researcher and associate professor of neurology and epidemiology at the Johns Hopkins University in Baltimore.

In an ongoing study that began in 1987 and enrolled 15,744 people in four U.S. communities, the risk of dementia increased as people got older. That was no surprise, but heart disease risks detected at the start of the study, when participants were between 45 and 64 years of age, also had a significant impact on later dementia, researchers noted. Dementia developed in 1,516 people during the study, and the researchers found that the risk of dementia later in life was:

  • 41 percent higher in midlife smokers than in non-smokers or former smokers;
  • 39 percent higher in people with high blood pressure (≥140/90 mmHg) in middle age, and 31 percent higher in those with pre-hypertension (between 120/80 mmHg and 139/89 mmHg) compared to those with normal blood pressure; and
  • 77 percent higher in people with diabetes in middle age than in non-diabetics.

“Diabetes raises the risk almost as much as the most important known genetic risk factor for Alzheimer’s disease,” Gottesman said.

Overall, the risk of dementia was 11 percent lower in women. The risk was highest in individuals who were black, had less than a high school education, were older, carried the gene known to increase Alzheimer’s risk, had high blood pressure or diabetes, or were current smokers at the time of initial evaluation.

Smoking and carrying the gene known to increase the chance of Alzheimer’s were stronger risk factors in whites than in blacks, the researchers noted.

“If you knew you carried the gene increasing Alzheimer’s risk, you would know you were predisposed to dementia, but people don’t necessarily think of heart disease risks in the same way. If you want to protect your brain as you get older, stop smoking, watch your weight, and go to the doctor so diabetes and high blood pressure can be detected and treated,” said Gottesman.

Because the Atherosclerosis Risk in Communities study is observational, the current analysis could not test whether treating heart risk factors results in a lessened dementia risk later in life.

“The benefit is that this is a long-term study and we know a lot about these people. Data like these may supplement data from clinical trials that look at the impact of treatment for heart disease risks,” Gottesman said.

Story Source:

Materials provided by American Heart Association. Note: Content may be edited for style and length.

Exercise can significantly improve brain function after stroke

Structured exercise training can significantly improve brain function in stroke survivors, according to research presented at the American Stroke Association’s International Stroke Conference 2017.

Stroke is the fifth leading cause of death in the United States, and the leading cause of long-term disability. Studies estimate that up to 85 percent of people who suffer a stroke will have cognitive impairments, including deficits in executive function, attention and working memory. Because there are no drugs to improve cognitive function, physical activity — such as physical therapy, aerobic and strength training — has become a low-cost intervention to treat cognitive deficits in stroke survivors.

In a meta-analysis of 13 intervention trials that included 735 participants, researchers analyzed the effects of various types of physical activity on cognitive function among stroke survivors. They found that structured physical activity training significantly improved cognitive deficits regardless of the length of the rehabilitation program (i.e., training longer than 3 months as well as from 1 to 3 months led to improvements in cognitive performance).

The researchers also found that cognitive abilities can be enhanced even when physical activity is introduced in the chronic stroke phase (beyond 3 months after a stroke).

“Physical activity is extremely helpful for stroke survivors for a number of reasons, and our findings suggest that this may also be a good strategy to promote cognitive recovery after stroke,” said lead author Lauren E. Oberlin, a graduate student at the University of Pittsburgh. “We found that a program as short as twelve weeks is effective at improving cognition, and even patients with chronic stroke can experience improvement in their cognition with an exercise intervention.”

The researchers analyzed general cognitive improvement, as well as improvement specific to areas of higher order cognition: executive function, attention and working memory. Exercise led to selective improvements on measures of attention and processing speed.

The researchers also examined if cognitive improvements depended on the type of physical activity patients engaged in. Previous studies on healthy aging and dementia populations have found that aerobic exercise by itself is enough to improve cognition, but the effects are increased when combined with an activity such as strength training. Consistent with this work, the authors found that combined strength and aerobic training programs yielded the largest cognitive gains.

“Integrating aerobic training into rehabilitation is very important, and for patients with mobility limitations, exercise can be modified so they can still experience increases in their fitness levels,” Oberlin said. “This has substantial effects on quality of life and functional improvement, and I think it’s really important to integrate this into rehabilitative care and primary practice.”

Story Source:

Materials provided by American Heart Association. Note: Content may be edited for style and length.

Tiny fibers open new windows into the brain

For the first time ever, a single flexible fiber no bigger than a human hair has successfully delivered a combination of optical, electrical, and chemical signals back and forth into the brain, putting into practice an idea first proposed two years ago. With some tweaking to further improve its biocompatibility, the new approach could provide a dramatically improved way to learn about the functions and interconnections of different brain regions.

The new fibers were developed through a collaboration among materials scientists, chemists, biologists, and other specialists. The results are reported in the journal Nature Neuroscience, in a paper by Seongjun Park, an MIT graduate student; Polina Anikeeva, the Class of 1942 Career Development Professor in the Department of Materials Science and Engineering; Yoel Fink, a professor in the departments of Materials Science and Engineering, and Electrical Engineering and Computer Science; Gloria Choi, the Samuel A. Goldblith Career Development Professor in the Department of Brain and Cognitive Sciences; and 10 others at MIT and elsewhere.

The fibers are designed to mimic the softness and flexibility of brain tissue. This could make it possible to leave implants in place and have them retain their functions over much longer periods than is currently possible with typical stiff, metallic fibers, thus enabling much more extensive data collection. For example, in tests with lab mice, the researchers were able to inject viral vectors that carried genes called opsins, which sensitize neurons to light, through one of two fluid channels in the fiber. They waited for the opsins to take effect, then sent a pulse of light through the optical waveguide in the center, and recorded the resulting neuronal activity, using six electrodes to pinpoint specific reactions. All of this was done through a single flexible fiber just 200 micrometers across — comparable to the width of a human hair.

Previous research efforts in neuroscience have generally relied on separate devices: needles to inject viral vectors for optogenetics, optical fibers for light delivery, and arrays of electrodes for recording, adding a great deal of complication and the need for tricky alignments among the different devices. Getting that alignment right in practice was “somewhat probabilistic,” Anikeeva says. “We said, wouldn’t it be nice if we had a device that could just do it all.”

After years of effort, that’s what the team has now successfully demonstrated. “It can deliver the virus [containing the opsins] straight to the cell, and then stimulate the response and record the activity — and [the fiber] is sufficiently small and biocompatible so it can be kept in for a long time,” Anikeeva says.

Since each fiber is so small, “potentially, we could use many of them to observe different regions of activity,” she says. In their initial tests, the researchers placed probes in two different brain regions at once, varying which regions they used from one experiment to the next, and measuring how long it took for responses to travel between them.

The key ingredient that made this multifunctional fiber possible was the development of conductive “wires” that maintained the needed flexibility while also carrying electrical signals well. After much work, the team was able to engineer a composite of conductive polyethylene doped with graphite flakes. The polyethylene was initially formed into layers, sprinkled with graphite flakes, then compressed; then another pair of layers was added and compressed, and then another, and so on. A member of the team, Benjamin Grena, a recent graduate in materials science and engineering, referred to it as making “mille feuille” (literally “a thousand leaves,” the French name for a Napoleon pastry). That method increased the conductivity of the polymer by a factor of four or five, Park says. “That allowed us to reduce the size of the electrodes by the same amount.”

One immediate question that could be addressed through such fibers is that of exactly how long it takes for the neurons to become light-sensitized after injection of the genetic material. Such determinations could only be made by crude approximations before, but now could be pinpointed more clearly, the team says. The specific sensitizing agent used in their initial tests turned out to produce effects after about 11 days.

The team aims to reduce the width of the fibers further, to make their properties even closer to those of the neural tissue. “The next engineering challenge is to use material that is even softer, to really match” the adjacent tissue, Park says. Already, though, dozens of research teams around the world have been requesting samples of the new fibers to test in their own research.

 

Brain-machine interfaces: Bidirectional communication at last

Since the early seventies, scientists have been developing brain-machine interfaces, the main application being neural prostheses for paralyzed patients or amputees. A prosthetic limb directly controlled by brain activity can partially recover the lost motor function. This is achieved by decoding neuronal activity recorded with electrodes and translating it into robotic movements. Such systems, however, have limited precision due to the absence of sensory feedback from the artificial limb. Neuroscientists at the University of Geneva (UNIGE), Switzerland, asked whether it was possible to transmit this missing sensation back to the brain by stimulating neural activity in the cortex. They discovered that not only was it possible to create an artificial sensation of neuroprosthetic movements, but that the underlying learning process occurs very rapidly. These findings, published in the scientific journal Neuron, were obtained using modern imaging and optical stimulation tools, offering an innovative alternative to the classical electrode approach.

Motor function is at the heart of all behavior and allows us to interact with the world. Replacing a lost limb with a robotic prosthesis is therefore the subject of much research, yet successful outcomes are rare. Why is that? Until now, brain-machine interfaces have relied largely on visual perception: the robotic arm is controlled by looking at it. The flow of information between the brain and the machine thus remains unidirectional. However, movement perception is based not only on vision but mostly on proprioception, the sensation of where the limb is located in space. “We have therefore asked whether it was possible to establish a bidirectional communication in a brain-machine interface: to simultaneously read out neural activity, translate it into prosthetic movement and reinject sensory feedback of this movement back in the brain,” explains Daniel Huber, professor in the Department of Basic Neurosciences of the Faculty of Medicine at UNIGE.

Providing artificial sensations of prosthetic movements

In contrast to invasive approaches using electrodes, Daniel Huber’s team specializes in optical techniques for imaging and stimulating brain activity. Using a method called two-photon microscopy, they routinely measure the activity of hundreds of neurons with single-cell resolution. “We wanted to test whether mice could learn to control a neural prosthesis by relying uniquely on an artificial sensory feedback signal,” explains Mario Prsa, researcher at UNIGE and the first author of the study. “We imaged neural activity in the motor cortex. When the mouse activated a specific neuron, the one chosen for neuroprosthetic control, we simultaneously applied stimulation proportional to this activity to the sensory cortex using blue light.” Indeed, neurons of the sensory cortex were rendered photosensitive to this light, allowing them to be activated by a series of optical flashes and thus integrate the artificial sensory feedback signal. The mouse was rewarded upon every above-threshold activation, and 20 minutes later, once the association had been learned, the rodent was able to generate the correct neuronal activity more frequently.
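In outline, the experiment is a closed loop: read out one neuron, convert its activity into proportional optical feedback, and reward above-threshold activations. The sketch below captures that loop; every function name and constant is a hypothetical stand-in for the actual imaging, light-delivery and reward hardware.

```python
# Toy sketch of the closed loop described above. The callables passed in
# (read_neuron, flash_light, give_reward) are hypothetical interfaces to the
# two-photon readout, optogenetic stimulation and reward-delivery systems.
import time

THRESHOLD = 2.0   # assumed activity level that earns a reward (arbitrary units)
GAIN = 10.0       # assumed scaling from neural activity to number of light flashes

def run_session(read_neuron, flash_light, give_reward, duration_s=20 * 60):
    start = time.time()
    while time.time() - start < duration_s:
        activity = read_neuron()            # activity of the chosen motor-cortex cell
        flash_light(int(GAIN * activity))   # sensory-cortex feedback, proportional
        if activity >= THRESHOLD:
            give_reward()                   # reinforce above-threshold activation
```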

This means that the artificial sensation was not only perceived, but that it was successfully integrated as feedback about the prosthetic movement. In this manner, the brain-machine interface functions bidirectionally. The Geneva researchers think that this fabricated sensation is so rapidly assimilated because it most likely taps into very basic brain functions. Feeling the position of our limbs occurs automatically, without much thought, and probably reflects fundamental neural circuit mechanisms. In the future, this type of bidirectional interface might allow robotic arms to be moved more precisely, touched objects to be felt, or the force necessary to grasp them to be perceived.

At present, the neuroscientists at UNIGE are examining how to produce a more efficient sensory feedback. They are currently capable of doing it for a single movement, but is it also possible to provide multiple feedback channels in parallel? This research sets the groundwork for developing a new generation of more precise, bidirectional neural prostheses.

Towards better understanding the neural mechanisms of neuroprosthetic control

Modern imaging tools also allowed hundreds of neurons in the surrounding area to be observed as the mouse learned the neuroprosthetic task. “We know that millions of neural connections exist. However, we discovered that the animal activated only the one neuron chosen for controlling the prosthetic action, and did not recruit any of the neighbouring neurons,” adds Daniel Huber. “This is a very interesting finding since it reveals that the brain can home in on and specifically control the activity of just one single neuron.” Researchers can potentially exploit this knowledge to not only develop more stable and precise decoding techniques, but also gain a better understanding of the most basic neural circuit functions. It remains to be discovered what mechanisms are involved in routing signals to the uniquely activated neuron.

Story Source:

Materials provided by Université de Genève. Note: Content may be edited for style and length.

 

Scientists survey the state of sleep science

Sleep remains an enduring biological mystery with major clinical relevance, according to a review by clinician-researcher Thomas Scammell, MD, of Beth Israel Deaconess Medical Center (BIDMC) and colleagues. In recent decades, new technologies have allowed neuroscientists to identify multiple brain circuits that govern the sleep/wake cycle, as well as the factors that can influence it, such as caffeine and light. But the brain’s complexity is still a stumbling block in understanding this ubiquitous and necessary animal behavior, the researchers wrote. Their review appeared today in the journal Neuron.

“In the last ten years, neuroscientists have had access to new tools with which we can test the roles of very specific neurons in the brain,” said lead author Scammell, a professor in the department of neurology at BIDMC. “When we know the specific relevant players in the brain, it allows us to develop therapies to help people get to sleep or help sleepy people be more alert during the day.”

Specifically, two technologies developed since 2000 allow neurologists to switch specific neurons on or off. In a process called chemogenetics, researchers use drugs that have an effect only in a genetically-defined group of cells to determine the neurons’ role. Optogenetics uses laser light to turn on or turn off targeted brain cells. These techniques have revealed which neuronal circuits promote wakefulness and sleep throughout the brain, especially in the brain stem and the hypothalamus.

“We can now interrogate neurons in a more precise way,” said Scammell. “The techniques are very similar, but optogenetics works over a short time scale, on the order of seconds. With chemogenetics, we can watch over several hours what happens when we turn certain neurons on or off.”

Sleep researchers have also made important discoveries about the fundamental chemistry of sleepiness in recent years. In a major breakthrough in the late 1990s, scientists discovered a previously unknown chemical, a neurotransmitter called orexin, required for maintaining long periods of wakefulness. The loss of orexin production causes the common sleep disorder narcolepsy, which is characterized by chronic sleepiness and irregular REM sleep. Today, pharmaceutical companies make drugs that intentionally block the orexin system to treat insomnia. Researchers are also trying to develop drugs that mimic orexin to wake people up.

“A drug that acts like orexin could be as great for patients with narcolepsy as insulin is for people with diabetes,” said Scammell.

Neuroscience research has also revealed the brain circuitry governing circadian rhythms, the biological clock that synchronizes sleepiness and wakefulness with night and day. Located deep in the hypothalamus, the suprachiasmatic nucleus (SCN) regulates circadian rhythms and is capable of maintaining them for some time even in total darkness. However, the SCN is no match for the social norms surrounding people’s sleep habits.

“People increasingly use their electronic devices in bed, which tricks the brain into thinking it’s being exposed to daylight,” said Scammell. “The internal clock gets reset, making it much harder to wake up in the morning.”

Phones and tablets are just one of the reasons about a third of all American adults are sleep deprived, getting much less than the recommended seven to eight hours of sleep per night. That raises more questions about why some people need more or less than that, and why some people can tolerate a sleep deficit so much better than others. The links among lack of sleep or poor sleep and metabolic disease, cancer risk and mood disorders also require further study.

With each of the brain’s billions of neurons networked to one another, scientists will need a deeper knowledge of the brain’s inner workings to understand how the circuits that regulate sleep interact. “There’s tremendous dialog back and forth among these circuits,” said Scammell, who notes today’s technology allows scientists to monitor dozens of neurons at a time within one region of the brain.

“Our ability to record activity in just a handful of neurons simultaneously is still not anything close to understanding the whole brain, but at least it’s a step in the right direction.”

Itch neurons play a role in managing pain

There are neurons in your skin that are wired for one purpose and one purpose only: to sense itchy things. These neurons are separate from the ones that detect pain, and yet, chemical-induced itch is often accompanied by mild pain such as burning and stinging sensations. But when it comes to sending signals toward your brain through your spinal cord, itch and mild pain can go through the same set of spinal cord neurons, researchers report February 22 in Neuron. This finding explains why pain often accompanies intense chemical-induced itch.

“To our surprise, we found the spinal cord neurons receiving the peripheral pain and itch inputs are not separate. They can receive signals from itch fibers and also pain fibers,” says study coauthor Xinzhong Dong, a neuroscientist at Johns Hopkins University. These neurons, called the GRP neurons, are a way station for pain and itch signals on their way to the brain.

However, GRP neurons are not passive conduits, the researchers found. “When we eliminate this population of neurons in mice, the itch response is reduced. They scratch less,” says the study’s first co-author Shuohao Sun, a graduate student at Hopkins. “But at the same time, the pain response is actually increased.”

Mice without GRP neurons spent more time rubbing and licking to alleviate their pain, induced, for example, by exposing their tails to hot water. Further experiments that tracked electrical signaling through the neurons corroborated the result. Even though the GRP neurons seemed to be forwarding mild pain signals to the next neural relay station, they also seemed to mitigate intense pain signals.

“It might sound counterintuitive, but we suggest that this small group of cells actually functions like a braking system for pain,” says Sun. “This brake is not always triggered by the painful stimuli; it’s only triggered by the strong pain stimuli. When the brake is on, the signal doesn’t go through. But when you have a weak pain signal, it doesn’t trigger the brake and the signal can go through.” The researchers have named this hypothesis “the leaky gate” model.

When the mice’s GRP neurons have been destroyed, the brake lines have essentially been cut, resulting in an uncontrolled cascade of pain. The braking system may be a way for animals to detect mild pains — like the kinds associated with itchy substances — without becoming overwhelmed by the pain, the researchers say. Built-in pain management would likely be a helpful adaptation for escaping from predators while injured.
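Although the paper's quantitative details are not given here, the leaky gate idea is easy to state numerically: weak signals pass through unchanged, strong signals engage the brake and are damped, and ablating GRP neurons removes the brake. A toy sketch, with an invented threshold and attenuation factor:

```python
# Toy numerical reading of the "leaky gate" model; the threshold and
# attenuation factor are invented for illustration only.
BRAKE_THRESHOLD = 5.0   # hypothetical intensity at which the brake engages
BRAKE_FACTOR = 0.2      # hypothetical fraction transmitted once braked

def transmitted(intensity: float, grp_intact: bool = True) -> float:
    """Signal forwarded toward the brain for a given input intensity."""
    if grp_intact and intensity > BRAKE_THRESHOLD:
        return intensity * BRAKE_FACTOR   # strong pain: brake on, signal damped
    return intensity                      # weak pain or itch: leaks through

# With GRP neurons ablated (grp_intact=False) the brake line is cut and a
# strong stimulus passes at full strength, matching the increased pain response.
print(transmitted(8.0), transmitted(8.0, grp_intact=False))  # 1.6 vs 8.0
```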

At the same time, GRP neurons are not the only group of spinal cord neurons that receive and forward pain signals toward the brain, and the brain itself plays a central role in translating signals from peripheral neurons into experienced sensation. Questions remain about what happens to the signals from GRP neurons after they’re transported up the spinal cord.

Chronic pain and itch affect about 1 in 10 Americans, the authors say. A better understanding of pain and itch signals’ journey to the brain may lead to new treatment options, eventually. “The next step is moving even further into the central nervous system and seeing how the signal from the secondary neuron is getting to the next relay station,” says Dong. “We go one step at a time.”

The work was supported by grants from the US National Institutes of Health. Xinzhong Dong is a Howard Hughes Medical Institute investigator.

Story Source:

Materials provided by Cell Press. Note: Content may be edited for style and length.

Making it harder to ‘outsmart’ concussion tests

An equation that combines multiple subtest scores into one could make fooling a concussion protocol nothing more than a fool’s errand, says a recent study from the University of Nebraska-Lincoln.

The study details a promising approach for pinpointing more athletes who play “impaired” on the Immediate Post-Concussion Assessment and Cognitive Testing, or ImPACT, a computerized tool consisting of eight subtests that gauge neurocognitive performance.

Administering ImPACT in the preseason helps establish a cognitive baseline that can be compared against the results of a post-concussion test, informing decisions about whether and when an athlete returns to action.

Concussions result from the brain slamming against the skull, usually causing short-term issues that some research suggests may evolve into long-term problems such as memory loss and depression when the brain is subjected to repeated trauma. To mitigate the risk of reinjury, athletes diagnosed with concussions take the ImPACT or a similar test to help determine when they have fully recovered.

But some athletes have undertaken the practice of sandbagging: giving lackadaisical effort on the baseline test to record a lower score in the hope of playing sooner after a concussion. Sandbagging can ruin the validity of the test and, because a recovering brain is more susceptible to further trauma, ultimately increase the likelihood of another concussion.”At this point, people (administering) ImPACT may not have very much training in neuropsychological testing or standardized test administration or data interpretation,” said lead author Kathryn Higgins, a postdoctoral researcher with the Center for Brain, Biology and Behavior at Nebraska. “If the baseline is the standard for when an athlete is recovered, there are all sorts of issues with returning someone to play based on poor baseline data.”

So Higgins conducted an experiment to determine whether a statistical approach could identify more of the athletes who sandbagged on the baseline test. The experiment asked 54 athletes from rural Midwestern high schools to take the test twice, once while giving their best effort and once while subtly sandbagging. After analyzing the results, Higgins identified four subtests that created the largest disparity in scores. She then developed an equation that yielded a composite score from those subtests.

Establishing a threshold for the composite score allowed her to correctly identify 100 percent of sandbagging cases while identifying the best-effort cases more than 90 percent of the time. Prior research suggests that ImPACT’s existing system of validity checks, which flags suspicious scores on five individual subtests, detects just 65 to 70 percent of sandbaggers.
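Higgins' actual equation is not reproduced in the article, but the general recipe, fitting weights that combine a few subtest scores into one composite and then thresholding it, can be sketched as follows; the data file, column names and cutoff are invented for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical data: one row per baseline test, with a known label for
# whether the athlete was sandbagging. Column names are invented.
df = pd.read_csv("impact_baselines.csv")
SUBTESTS = ["word_memory", "design_memory", "symbol_match", "three_letters"]

# Fit weights that combine the four subtests into one composite score.
clf = LogisticRegression().fit(df[SUBTESTS], df["sandbagged"])
composite = clf.decision_function(df[SUBTESTS])  # one score per test

flagged = composite > 0.0  # hypothetical cutoff; tuned to catch all sandbaggers
truth = df["sandbagged"].values == 1
sensitivity = flagged[truth].mean()       # target: 1.00 (all sandbaggers caught)
specificity = (~flagged[~truth]).mean()   # target: > 0.90 (best effort passes)
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```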

“Obviously, my flags are going to be better (in this case) because I built them and tested them on the same sample,” said Higgins, who conducted the study as part of her dissertation. “But I thought it was worth pointing out that this equation has strong potential as another way to detect poor effort on baseline testing.”

Higgins said she hopes further research will independently evaluate her approach and others that might improve the assessment of high school athletes, who suffer an estimated 300,000 sports-related concussions per year in the United States alone.

“There’s so much room for work to be done,” Higgins said. “We’ve come so far in the last 10 years — we know so much more than we did — but there are still a lot of things that we don’t know.”

Story Source:

Materials provided by University of Nebraska-Lincoln. Note: Content may be edited for style and length.

Brain scans could predict teens’ problem drug use before it starts

There’s an idea out there of what a drug-addled teen is supposed to look like: impulsive, unconscientious, smart, perhaps — but not the most engaged. While personality traits like that could signal danger, not every adolescent who fits that description becomes a problem drug user. So how do you tell who’s who?

There’s no perfect answer, but researchers report February 21 in Nature Communications that they’ve found a way to improve our predictions — using brain scans that can tell, in a manner of speaking, who’s bored by the promise of easy money, even when the kids themselves might not realize it.

That conclusion grew out of a collaboration between Brian Knutson, a professor of psychology at Stanford, and Christian Büchel, a professor of medicine at Universitätsklinikum Hamburg Eppendorf. With support from the Stanford Neurosciences Institute’s NeuroChoice program, which Knutson co-directs, the pair started sorting through an intriguing dataset covering, among other things, 144 European adolescents who scored high on a test of what’s called novelty seeking — roughly, the sorts of personality traits that might indicate a kid is at risk for drug or alcohol abuse.

Novelty seeking in a brain scanner

Novelty seeking isn’t inherently bad, Knutson said. On a good day, the urge to take a risk on something new can drive innovation. On a bad day, however, it can lead people to drive recklessly, jump off cliffs and ingest whatever someone hands out at a party. And psychologists know that kids who score high on tests of novelty seeking are on average a bit more likely to abuse drugs. The question was, could there be a better test, one both more precise and more individualized, that could tell whether novelty seeking might turn into something more destructive?

Knutson and Büchel thought so, and they suspected that a brain-scanning test called the Monetary Incentive Delay Task, or MID, could be the answer. Knutson had developed the task early in his career as a way of targeting a part of the brain now known to play a role in mentally processing rewards like money or the high of a drug.

The task works like this. People lie down in an MRI brain scanner to play a simple video game for points, which they can eventually convert to money. More important than the details of the game, however, is this: At the start of each round, each player gets a cue about how many points he stands to win during the round. It’s at that point that players start to anticipate future rewards. For most people, that anticipation alone is enough to kick the brain’s reward centers into gear.

A puzzle and the data to solve it

This plays out differently — and a little puzzlingly — in adolescents who use drugs. Kids’ brains in general respond less when anticipating rewards, compared with adults’ brains. But that effect is even more pronounced when those kids use drugs, which suggests one of two things: Either drugs suppress brain activity, or the suppressed brain activity somehow leads youths to take drugs.

If it’s the latter, then Knutson’s task could predict future drug use. But no one was sure, mainly because no one had measured brain activity in non-drug-using adolescents and compared it to eventual drug use.

No one, that is, except Büchel. As part of the IMAGEN consortium, he and colleagues in Europe had already collected data on around 1,000 14-year-olds as they went through Knutson’s MID task. They had also followed up with each of them two years later to find out if they’d become problem drug users — for example, if they smoked or drank on a daily basis or ever used harder drugs like heroin. Then, Knutson and Büchel focused their attention on 144 adolescents who hadn’t developed drug problems by age 14 but had scored in the top 25 percent on a test of novelty seeking.

Lower anticipation

Analyzing that data, Knutson and Büchel found they could correctly predict whether youngsters would go on to abuse drugs about two-thirds of the time based on how their brains responded to anticipating rewards. This is a substantial improvement over behavioral and personality measures, which correctly distinguished future drug abusers from other novelty-seeking 14-year-olds about 55 percent of the time, only a little better than chance.

“This is just a first step toward something more useful,” Knutson said. “Ultimately the goal — and maybe this is pie in the sky — is to do clinical diagnosis on individual patients” in the hope that doctors could stop drug abuse before it starts, he said.

Knutson said the study first needs to be replicated, and he hopes to follow the kids to see how they do further down the line. Eventually, he said, he may be able not just to predict drug abuse, but also better understand it. “My hope is the signal isn’t just predictive, but also informative with respect to interventions.”

Depression puts psoriasis patients at significantly greater risk of psoriatic arthritis

Psoriasis is a lifelong disease that is associated with significant cosmetic and physical disability and puts patients at increased risk for many major medical disorders. A multidisciplinary team of researchers at the University of Calgary, Canada, have found that psoriasis patients who developed depression were at a 37% greater risk of subsequently developing psoriatic arthritis, compared with psoriasis patients who did not develop depression. Their findings are published in the Journal of Investigative Dermatology.

Psoriasis is a long-lasting inflammatory skin disease characterized by red, itchy, and scaly patches of skin. Approximately 8.5% of psoriasis patients have psoriatic arthritis, which is characterized by psoriasis plus inflammation of and around the joints.

“For many years, the rheumatology and dermatology communities have been trying to understand which patients with psoriasis go on to develop psoriatic arthritis and how we might detect it earlier in the disease course,” explained senior investigator Cheryl Barnabe, MD, MSc, of the McCaig Institute for Bone and Joint Health and the O’Brien Institute for Public Health, Cumming School of Medicine, at the University of Calgary, Alberta, Canada.

Depression is common among patients with psoriasis. Based on recent laboratory work demonstrating that major depressive disorder is associated with increased systemic inflammation, the team of researchers hypothesized that psoriasis patients who develop depression are at increased risk of subsequently developing psoriatic arthritis.

Investigators used The Health Improvement Network (THIN), a primary care medical records database in the United Kingdom, to identify over 70,000 patients with a new diagnosis of psoriasis. Through follow-up records, they identified individuals who subsequently developed depression and those who developed psoriatic arthritis. Patients were followed for up to 25 years or until they developed psoriatic arthritis.

Statistical analysis showed that patients with psoriasis who developed major depressive disorder were at 37% greater risk of subsequently developing psoriatic arthritis compared with patients who did not develop depression, even after accounting for numerous other factors such as age and use of alcohol.

The study highlights the need for physicians who manage patients with psoriasis to identify and address depression. This could include rapid, effective treatment of psoriasis and psychosocial management of the cosmetic burden of the disease. The study also raises questions about the biological mechanisms by which depression increases the risk of developing psoriatic arthritis. These mechanisms may include altered systemic inflammation as a consequence of depression, or even the role of lifestyle behaviors such as physical activity or nutrition, which typically worsen with depression and which may place an individual at risk for psoriatic arthritis.

“There is a tendency to think of depression as a purely ‘psychological’ or ‘emotional’ issue, but it also has physical effects and changes in inflammatory and immune markers have been reported in depressed people,” commented Scott Patten, MD, PhD, the O’Brien Institute for Public Health, Hotchkiss Brain Institute and Mathison Centre for Mental Health Research and Education, Cumming School of Medicine. “Depression may be a risk factor for a variety of chronic conditions and this research is an example of how big data approaches can identify these associations.”

Laurie Parsons, MD, of the Cumming School of Medicine, added: “It is evident to physicians who treat patients with psoriasis, that there is a significant psychological and social burden associated with this disease, which is reflected in an increase in the rates of depression. This study brings us a little closer to understanding the role of chronic inflammation as a systemic player in both the physical and psychological manifestations of psoriasis and underscores the need for closer attention to symptoms of depression in this group of patients.”

“This study raises important questions on the role of systemic inflammation, which is also elevated in depression, in driving a disease phenotype, which needs to be confirmed in clinical cohorts,” concluded Dr Barnabe.

Story Source:

Materials provided by Elsevier Health Sciences. Note: Content may be edited for style and length.

tDCS combined with computer games at home reduces cognitive symptoms of multiple sclerosis

Patients with multiple sclerosis had better problem solving ability and response time after training with a technology called transcranial direct current stimulation (tDCS), according to a new study published February 22, 2017 in Neuromodulation: Technology at the Neural Interface.

During tDCS a low amplitude direct current is applied through electrodes placed on the scalp using a headset. The stimulation can change cortical excitability in the brain by making it easier for neurons to fire, which can help improve connections and speed up the learning that takes place during rehabilitation.

Led by researchers at NYU Langone’s Multiple Sclerosis Comprehensive Care Center, the new study reports that participants with MS who used tDCS while playing cognitive training computer games designed to improve information processing abilities showed significantly greater gains in cognitive measures than those who played the computer games alone. Importantly, the participants completed the cognitive training and tDCS at home.

By enabling patients to be treated without repeat visits to the clinic, which can be a major challenge for people with MS as their disease progresses, the approach may improve quality of life for this patient population, according to the study’s authors.

“Our research adds evidence that tDCS, while done remotely under a supervised treatment protocol, may provide an exciting new treatment option for patients with multiple sclerosis who cannot get relief for some of their cognitive symptoms,” says lead researcher Leigh E. Charvet, PhD, associate professor of neurology and director of research at NYU Langone’s Multiple Sclerosis Comprehensive Care Center. “Many MS medications are aimed at preventing disease flares but those drugs do not help with daily symptom management, especially cognitive problems. We hope tDCS will fill this crucial gap and help improve quality of life for people with MS.”

MS is the most common progressive neurological disorder in working-age adults, nearly 70 percent of whom will experience cognitive impairment, with symptoms including slower information processing and difficulties with memory and problem solving. Other common symptoms of the disease include fatigue as well as mood, sensory and motor problems.

In this study, the brain’s dorsolateral pre-frontal cortex, an area linked to fatigue, depression and cognitive function, was targeted for tDCS.

Twenty-five participants were provided with a tDCS system with a headset that they learned to apply with guided help from the research team. In each session, a study technician contacted the participant through online video conferencing and gave him or her a code to enter into a keypad; the code started the tDCS session and served to control dosing. Then, during the stimulation, the participant played a research version of computerized cognitive training games that challenged information processing, attention and working memory.

Members of the tDCS group participated in 10 sessions, and the researchers compared their results to 20 participants with MS who only played cognitive training games in their 10 sessions.

Researchers found that participants in the group treated with tDCS showed significantly greater improvements on sensitive, computer-based measures of complex attention and in their response times across trials compared to the group that did cognitive training games alone. Improvements increased with the number of sessions, which suggests the tDCS may have a cumulative benefit, but more research is needed to determine how long these effects last once the sessions end.

However, the group that participated in tDCS plus cognitive training did not show a statistically significant difference from the group that only played cognitive training games on less sensitive standard neuropsychological measures, such as the Brief International Cognitive Assessment in MS (BICAMS) tests, or on computer-based measures of basic attention. Those findings suggest the cognitive changes brought on by tDCS may require more treatment sessions before they produce noticeable improvements in daily functioning, according to Dr. Charvet.

The researchers are recruiting for additional clinical trials involving 20 tDCS sessions and a randomized sham-controlled protocol to look for further evidence of benefits of tDCS. New research has also begun at NYU Langone to test tDCS for other neurological conditions, including Parkinson’s disease.

However, Dr. Charvet warns that some tDCS products on the market are sold straight to consumers without any clinical research behind them or any information or guidance on dosing frequency, so it’s important for anyone considering these technologies outside of a controlled research environment to consult with a physician.

The device was designed in conjunction with inventor Marom Bikson, PhD, a professor of biomedical engineering at The City College of New York, and Abhishek Datta, PhD, the chief technology officer of Soterix Medical which holds a patent on the tDCS device. Dr. Charvet provided Bikson’s team with feedback from participants enrolled in the trial to help better design the device. The study was funded by the National Multiple Sclerosis Society and The Lourie Foundation, Inc.

MRI-guided laser surgery proving effective for some epilepsy patients

Melanie Vandyke wasn’t exactly eager to have brain surgery.

“I was very nervous, afraid it might make things worse,” Vandyke said of the relatively new procedure that was being recommended to her by epilepsy specialists at Wake Forest Baptist Medical Center.

Even though the operation had the potential to relieve the seizures she had been experiencing for nearly 15 years by eradicating a lesion on her right medial temporal lobe, Vandyke said she was “ready to not have it.”

But discussions with her neurologist, Dr. Cormac O’Donovan, and other members of the team at Wake Forest Baptist’s Comprehensive Epilepsy Center eventually convinced Vandyke that “they were really dedicated to helping me, not setting me back,” so she agreed to the procedure.

That was almost four years ago. She has been seizure-free ever since.

The operation Vandyke underwent is called MRI-guided laser ablation surgery. It is a minimally invasive procedure that is proving to be a very effective treatment for people with medial temporal lobe epilepsy (MTLE), a common form of drug-resistant epilepsy.

“It’s a game-changer,” said Dr. Gautam “Vinnie” Popli, chief of the section on epilepsy at Wake Forest Baptist. “This type of surgery allows us to precisely target areas of seizure without added risk, and there’s a very short recovery time.”

Approximately 3 million people in the United States have epilepsy, a neurological disease in which abnormal electrical discharges in the brain produce sudden episodes of altered or diminished consciousness, involuntary movements or convulsions. Collectively known as seizures, these episodes can severely limit an individual’s range of activities and lead to a number of serious physical and cognitive problems.

In roughly 60 percent of all cases, epileptic seizures can be controlled by medication. For other patients, especially those identified through brain imaging and other tests as having MTLE, surgery is generally the sole treatment option.

Until rather recently that meant a craniotomy — a conventional, day-long operation involving removing part of the patient’s skull, cutting through healthy brain matter and physically taking out the problem tissue, followed by a weeklong hospital stay and a prolonged recovery period.

The MRI-guided laser ablation method is far less invasive and time-consuming. A thin laser-tipped applicator inserted through a tiny hole in the skull delivers heat to the target area in the brain and destroys the unwanted tissue with the neurosurgeon viewing and being guided by real-time MRI images throughout the operation.

The entire process can be completed in about four hours, the incision in the scalp can be closed with just one stitch and most patients can go home the next day and resume their normal activities without restrictions.

“The patient is not excessively inconvenienced or placed at extraordinary risk to get relief from their seizures,” Popli said. “There’s much less collateral damage and fewer adverse effects than conventional surgery, and better outcomes.”

Neurosurgeons performed the first MRI-guided laser ablation surgery for epilepsy at Wake Forest Baptist in 2012, using a technology called Visualase. Since then they have employed the system more than 45 times on patients ranging in age from under 18 months to over 60 years. The success rate of these operations, in terms of either eliminating seizures or reducing their frequency or severity, depending on the individual patient’s condition, has been above 75 percent.

Popli called that a marked improvement over the 60 percent success rate of conventional epilepsy surgery, which, he noted, has not been performed at the medical center since it adopted the Visualase technology.

Melanie Vandyke was among the first dozen people to undergo laser epilepsy surgery at Wake Forest Baptist, and although she was discharged from the hospital the following day it took much longer than that for her to regard the operation as a success.

“It was probably five or six months after the surgery before I felt that it really had helped me,” said the 40-year-old resident of Buchanan County in southwestern Virginia. “Now I know that it was definitely worth it.”

These days Vandyke is working full-time, driving, traveling, socializing and doing just about everything else the epileptic seizures had kept her from doing for most of her adult life.

“When I was having the seizures I was a different person than I was before. I was isolated, and felt as if I was a burden on everyone, especially my parents,” Vandyke said. “But now I’ve regained my independence. I’m back to being the old me, and thankful for that.”

Successful insomnia treatment may require nothing more than a placebo

A new study published in Brain indicates that successful treatment for insomnia may not actually require complicated neurofeedback (direct training of brain functions). Rather, patients who simply believe they are getting neurofeedback training appear to get the same benefits.

Insomnia affects between 10 and 35 percent of the population worldwide, yet despite the burden insomnia places on society, only a few studies have addressed the condition non-pharmacologically. The researchers recruited thirty patients with primary insomnia, who underwent both neurofeedback treatment and placebo-feedback treatment over several weeks.

The researchers sought to test whether earlier findings on the positive effects of neurofeedback on sleep quality and memory could be replicated in a double-blind, placebo-controlled study. Patients spent nine nights in the researchers’ laboratory and completed twelve sessions of neurofeedback training and twelve sessions of placebo-feedback (sham) training.

Because the study focused on the effects of neurofeedback on EEG, sleep and quality of life, patients were assessed in the laboratory before and after both the real and the placebo neurofeedback training. Between the first and second of these assessments, and again between the third and fourth, patients completed the twelve sessions of neurofeedback treatment or the twelve sessions of placebo-feedback treatment, a sham condition with real EEG feedback but on varying frequency bands. The order of the trainings, real or placebo, was counterbalanced across subjects, and each block of twelve sessions was completed within four weeks. Participants’ sleep-wake cycles were tracked throughout the protocol with eight sleep-laboratory nights as well as sleep diaries and actigraphy.

Researchers found both neurofeedback and placebo-feedback to be equally effective as reflected in subjective measures of sleep complaints, suggesting that the observed improvements were due to unspecific factors such as experiencing trust and receiving care and empathy from experimenters. In addition, these improvements were not reflected in objective EEG-derived measures of sleep quality.

Researchers conclude that for the treatment of primary insomnia, neurofeedback has no specific efficacy beyond unspecific placebo effects; they found no advantage of neurofeedback over placebo-feedback.

The results show that patients benefitted from either treatment on some subjective measures of sleep and life quality. Objectively, however, this improvement was not verified in any EEG-derived measures of sleep or oscillatory brain activity.

“Given our results,” said lead author Manuel Schabus, “one has to question how many of the published neurofeedback effects are due to simple expectations on the side of the participants or, in other words, unspecific placebo effects.”

The researchers found that the improvement of symptoms was not specific to neurofeedback training but rather seems to have been brought about by unspecific factors such as affection and care. Altogether, it therefore has to be questioned whether sensorimotor-rhythm neurofeedback can be promoted as an alternative to established therapeutic approaches. The findings may also stimulate a discussion about the usefulness of neurofeedback on a more general level. Especially in patient populations where complaints are often associated with learning difficulties, positive neurofeedback effects beyond the subjective level may be hard to achieve.

Story Source:

Materials provided by Oxford University Press USA. Note: Content may be edited for style and length.