Cognition-related neural pattern used to activate machines

Brain-machine interfaces offer people with physical disabilities a way to interact with their physical and social environment. In this work, researchers identified a functional brain pattern in the prefrontal cortex, associated with cognitive processes, and used it to activate the screen of a touch device (an iPad touchscreen).

The use of cortical neural activity in operant conditioning tasks goes back decades. In this case, however, the researchers used a device they have patented, which allows any instrument in the environment to be activated by specific, freely selected electrical brain signals. Here, the selected brain signals triggered the presentation of visual stimuli on the iPad's touchscreen; to complete the task properly and obtain a reward, the animals then had to touch the stimuli shown on the screen.
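To make the closed loop concrete, a minimal sketch of the kind of trigger described above might look like the following (illustrative Python only, not the patented device's software; the frequency band, threshold and function names are assumptions):

```python
import numpy as np

FS = 1000           # sampling rate of the recorded prefrontal signal (Hz), assumed
BAND = (4.0, 12.0)  # frequency band of the selected oscillatory pattern (Hz), assumed
THRESHOLD = 2.5     # z-scored band-power threshold for "pattern present", assumed

def band_power(window, fs=FS, band=BAND):
    """Power of the recorded window inside the selected band (simple periodogram)."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()

def closed_loop_step(window, baseline_mean, baseline_std, present_stimulus):
    """Present the touchscreen stimulus whenever the selected pattern exceeds threshold."""
    z = (band_power(window) - baseline_mean) / baseline_std
    if z > THRESHOLD:
        present_stimulus()   # e.g. draw the visual target on the iPad screen
        return True
    return False
```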

One of the most interesting results is that the rats learned to increase the frequency of the selected neural pattern over successive experimental sessions in order to obtain the reward. The authors also show that the selected pattern is linked to cognitive processes rather than to motor or behavioral activity, which represents an important advance for the design of brain-machine interfaces. Another notable result is that the selected brain pattern did not change its functional properties after being used to drive the associative learning. The prefrontal cortex, a brain area closely tied to mental processes and states, can therefore produce an oscillatory pattern that rats learn to generate in order to control their environment.

The conclusions of this work can be used to advance research on brain-machine interaction.

Story Source:

Materials provided by Universitat Autonoma de Barcelona. Note: Content may be edited for style and length.

Man with quadriplegia employs injury bridging technologies to move again — just by thinking

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies.

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.

Holding a makeshift handle pierced through a dry sponge, Kochevar scratched the side of his nose with the sponge. He scooped forkfuls of mashed potatoes from a bowl — perhaps his top goal — and savored each mouthful.

“For somebody who’s been injured eight years and couldn’t move, being able to move just that little bit is awesome to me,” said Kochevar, 56, of Cleveland. “It’s better than I thought it would be.”

A video of Kochevar can be found at: https://youtu.be/OHsFkqSM7-A

Kochevar is the focal point of research led by Case Western Reserve University, the Cleveland Functional Electrical Stimulation (FES) Center at the Louis Stokes Cleveland VA Medical Center and University Hospitals Cleveland Medical Center (UH). A study of the work will be published in The Lancet March 28 at 6:30 p.m. U.S. Eastern time.

“He’s really breaking ground for the spinal cord injury community,” said Bob Kirsch, chair of Case Western Reserve’s Department of Biomedical Engineering, executive director of the FES Center and principal investigator (PI) and senior author of the research. “This is a major step toward restoring some independence.”

When asked, people with quadriplegia say their first priority is to scratch an itch, feed themselves or perform other simple functions with their arm and hand, instead of relying on caregivers.

“By taking the brain signals generated when Bill attempts to move, and using them to control the stimulation of his arm and hand, he was able to perform personal functions that were important to him,” said Bolu Ajiboye, assistant professor of biomedical engineering and lead study author.

Technology and training

The research with Kochevar is part of the ongoing BrainGate2* pilot clinical trial being conducted by a consortium of academic and VA institutions assessing the safety and feasibility of the implanted brain-computer interface (BCI) system in people with paralysis. Other investigational BrainGate research has shown that people with paralysis can control a cursor on a computer screen or a robotic arm.

“Every day, most of us take for granted that when we will to move, we can move any part of our body with precision and control in multiple directions and those with traumatic spinal cord injury or any other form of paralysis cannot,” said Benjamin Walter, associate professor of Neurology at Case Western Reserve School of Medicine, Clinical PI of the Cleveland BrainGate2 trial and medical director of the Deep Brain Stimulation Program at UH Cleveland Medical Center.

“The ultimate hope of any of these individuals is to restore this function,” Walter said. “By restoring the communication of the will to move from the brain directly to the body this work will hopefully begin to restore the hope of millions of paralyzed individuals that someday they will be able to move freely again.”

Jonathan Miller, assistant professor of neurosurgery at Case Western Reserve School of Medicine and director of the Functional and Restorative Neurosurgery Center at UH, led a team of surgeons who implanted two 96-channel electrode arrays — each about the size of a baby aspirin — in Kochevar’s motor cortex, on the surface of the brain.

The arrays record brain signals created when Kochevar imagines movement of his own arm and hand. The brain-computer interface extracts information from the brain signals about what movements he intends to make, then passes the information to command the electrical stimulation system.

To prepare him to use his arm again, Kochevar first learned how to use his brain signals to move a virtual-reality arm on a computer screen.

“He was able to do it within a few minutes,” Kirsch said. “The code was still in his brain.”

As Kochevar’s ability to move the virtual arm improved through four months of training, the researchers believed he would be capable of controlling his own arm and hand.

Miller then led a team that implanted the FES system's 36 electrodes that animate muscles in the upper and lower arm.

The BCI decodes the recorded brain signals into the intended movement command, which is then converted by the FES system into patterns of electrical pulses.

The pulses sent through the FES electrodes trigger the muscles controlling Kochevar’s hand, wrist, arm, elbow and shoulder. To overcome gravity that would otherwise prevent him from raising his arm and reaching, Kochevar uses a mobile arm support, which is also under his brain’s control.
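In outline, this recording-decoding-stimulation chain works like a loop that turns binned firing rates into an intended movement and then into per-electrode stimulation commands. The sketch below is a highly simplified Python illustration, not the BrainGate2 or FES Center software; the linear decoder, the 36-channel muscle map and all numerical values are assumptions.

```python
import numpy as np

N_UNITS = 96 * 2      # recording channels across the two implanted arrays
N_STIM = 36           # FES electrodes in the upper and lower arm
rng = np.random.default_rng(0)

# Assumed, pre-calibrated mappings (in the real system these are fit during training).
decode_weights = rng.normal(size=(3, N_UNITS)) * 0.01   # firing rates -> 3-D intended velocity
muscle_map = rng.uniform(size=(N_STIM, 3))              # intended velocity -> per-electrode drive

def decode_intent(firing_rates):
    """Linear decode of intended hand velocity from one bin of firing rates."""
    return decode_weights @ firing_rates

def stimulation_pattern(intent, max_pulse_us=200.0):
    """Convert intended velocity into per-electrode pulse widths (microseconds)."""
    drive = np.clip(muscle_map @ intent, 0.0, 1.0)
    return drive * max_pulse_us

# One cycle of the loop: record -> decode -> stimulate.
firing_rates = rng.poisson(lam=10.0, size=N_UNITS)   # stand-in for one 100-ms bin of spike counts
pulse_widths = stimulation_pattern(decode_intent(firing_rates))
print(pulse_widths.round(1))
```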

New capabilities

Eight years of muscle atrophy required rehabilitation. The researchers exercised Kochevar’s arm and hand with cyclical electrical stimulation patterns. Over 45 weeks, his strength, range of motion and endurance improved. As he practiced movements, the researchers adjusted stimulation patterns to further his abilities.

Kochevar can make each joint in his right arm move individually. Or, just by thinking about a task such as feeding himself or getting a drink, the muscles are activated in a coordinated fashion.

When asked to describe how he commanded the arm movements, Kochevar told investigators, “I’m making it move without having to really concentrate hard at it…I just think ‘out’…and it goes.”

Kochevar is fitted with temporarily implanted FES technology that has a track record of reliable use in people. Together, the BCI and FES system represent an early feasibility demonstration that gives the research team insight into the potential future benefit of the combined system.

Advances needed to make the combined technology usable outside of a lab are not far from reality, the researchers say. Work is underway to make the brain implant wireless, and the investigators are improving decoding and stimulation patterns needed to make movements more precise. Fully implantable FES systems have already been developed and are also being tested in separate clinical research.

Kochevar welcomes new technology — even if it requires more surgery — that will enable him to move better. “This won’t replace caregivers,” he said. “But, in the long term, people will be able, in a limited way, to do more for themselves.”

The investigational BrainGate technology was initially developed in the Brown University laboratory of John Donoghue, now the founding director of the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland. The implanted recording electrodes are known as the Utah array, originally designed by Richard Normann, Emeritus Distinguished Professor of Bioengineering at the University of Utah.

The report in today’s Lancet is the result of a long-running collaboration between Kirsch, Ajiboye and the multi-institutional BrainGate consortium. Leigh Hochberg, MD, PhD, a neurologist and neuroengineer at Massachusetts General Hospital, Brown University and the VA RR&D Center for Neurorestoration and Neurotechnology in Providence, Rhode Island, directs the pilot clinical trial of the BrainGate system and is a study co-author.

“It’s been so inspiring to watch Mr. Kochevar move his own arm and hand just by thinking about it,” Hochberg said. “As an extraordinary participant in this research, he’s teaching us how to design a new generation of neurotechnologies that we all hope will one day restore mobility and independence for people with paralysis.”

Other researchers involved with the study include: Francis R. Willett, Daniel Young, William Memberg, Brian Murphy, PhD, and P. Hunter Peckham, PhD, from Case Western Reserve; Jennifer Sweet, MD, from UH; Harry Hoyen, MD, and Michael Keith, MD, from MetroHealth Medical Center and CWRU School of Medicine; and John Simeral, PhD, from Brown University and Providence VA Medical Center.

*CAUTION: Investigational Device. Limited by Federal Law to Investigational Use.

Graphene-based neural probes record brain activity in high resolution

Measuring brain activity with precision is essential to developing further understanding of diseases such as epilepsy and disorders that affect brain function and motor control. Neural probes with high spatial resolution are needed for both recording and stimulating specific functional areas of the brain. Now, researchers from the Graphene Flagship have developed a new device for recording brain activity in high resolution while maintaining an excellent signal-to-noise ratio (SNR). Based on graphene field-effect transistors, the flexible devices open up new possibilities for the development of functional implants and interfaces.

The research, published in 2D Materials, was a collaborative effort involving Flagship partners Technical University of Munich (TU Munich; Germany), Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS; Spain), Spanish National Research Council (CSIC; Spain), The Biomedical Research Networking Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN; Spain) and the Catalan Institute of Nanoscience and Nanotechnology (ICN2; Spain).

The devices were used to record the large signals generated by pre-epileptic activity in rats, as well as the smaller levels of brain activity during sleep and in response to visual light stimulation. These types of activity produce much smaller electrical signals, at the level of typical brain activity. Neural activity is detected through the highly localised electric fields generated when neurons fire, so densely packed arrays of ultra-small measuring devices are important for accurate brain readings.
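Because the probes are transistors rather than passive electrodes, the locally generated field acts like a small gate voltage on the graphene channel, and the recorded current change is converted back to a potential using the device's transconductance. A back-of-the-envelope illustration in Python (the numbers are assumptions, not the Flagship devices' measured values):

```python
# Convert a measured drain-source current fluctuation back to an equivalent
# local potential at the graphene channel: delta_V ~= delta_I / g_m.
g_m = 2e-3        # transconductance, A/V (assumed)
delta_i = 1e-7    # recorded current fluctuation, A (assumed)
delta_v = delta_i / g_m
print(f"Equivalent local potential: {delta_v * 1e6:.0f} uV")   # -> 50 uV, the scale of local field potentials
```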

The neural probes are placed directly on the surface of the brain, so safety is of paramount importance for the development of graphene-based neural implant devices. Importantly, the researchers determined that the graphene-based probes are non-toxic, and did not induce any significant inflammation.

Devices implanted in the brain, whether as neural prostheses for therapeutic brain stimulation or as interfaces for sensory and motor devices such as artificial limbs, are an important goal for improving quality of life for patients. This work represents a first step towards the use of graphene in research as well as in clinical neural devices, showing that graphene-based technologies can deliver the high resolution and high SNR needed for these applications.

First author Benno Blaschke (TU Munich) said, "Graphene is one of the few materials that allows recording in a transistor configuration and simultaneously complies with all other requirements for neural probes, such as flexibility, biocompatibility and chemical stability. Although graphene is ideally suited for flexible electronics, it was a great challenge to transfer our fabrication process from rigid substrates to flexible ones. The next step is to optimize the wafer-scale fabrication process and improve device flexibility and stability."

Jose Antonio Garrido (ICN2) led the research. He said, "Mechanical compliance is an important requirement for safe neural probes and interfaces. Currently, the focus is on ultra-soft materials that can adapt conformally to the brain surface. Graphene neural interfaces have already shown great potential, but we have to improve the yield and homogeneity of the device production in order to advance towards a real technology. Once we have demonstrated the proof of concept in animal studies, the next goal will be to work towards the first human clinical trial with graphene devices during intraoperative mapping of the brain. This means addressing all regulatory issues associated with medical devices, such as safety, biocompatibility, etc."

Story Source:

Materials provided by Graphene Flagship. Note: Content may be edited for style and length.

Improving memory with magnets

The ability to remember sounds, and manipulate them in our minds, is incredibly important to our daily lives — without it we would not be able to understand a sentence, or do simple arithmetic. New research is shedding light on how sound memory works in the brain, and is even demonstrating a means to improve it.

Scientists previously knew that a neural network of the brain called the dorsal stream was responsible for aspects of auditory memory. Inside the dorsal stream are rhythmic electrical pulses called theta waves, yet the role of these waves in auditory memory was until recently a complete mystery.

To learn precisely the relationship between theta waves and auditory memory, and to see how memory could be boosted, researchers at the Montreal Neurological Institute of McGill University gave seventeen individuals auditory memory tasks that required them to recognize a pattern of tones when it was reversed. Listeners performed this task while being recorded with a combination of magnetoencephalography (MEG) and electroencephalography (EEG). The MEG/EEG revealed the amplitude and frequency signatures of theta waves in the dorsal stream while the subjects worked on the memory tasks. It also revealed where the theta waves were coming from in the brain.

Using that data, researchers then applied transcranial magnetic stimulation (TMS) at the same theta frequency to the subjects while they performed the same tasks, to enhance the theta waves and measure the effect on the subjects’ memory performance.

They found that when they applied TMS, subjects performed better at auditory memory tasks. This was only the case when the TMS matched the rhythm of natural theta waves in the brain. When the TMS was arrhythmic, there was no effect on performance, suggesting it is the manipulation of theta waves, not simply the application of TMS, which alters performance.
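The contrast between the two conditions can be sketched as two pulse schedules with the same number of pulses and matched overall duration, one locked to the measured theta frequency and one with jittered timing (illustrative Python; the 5 Hz frequency, pulse count and jitter range are assumptions, not the study's protocol):

```python
import numpy as np

rng = np.random.default_rng(1)

def rhythmic_pulses(theta_hz=5.0, n_pulses=30):
    """Pulse times locked to the measured theta frequency."""
    return np.arange(n_pulses) / theta_hz

def arrhythmic_pulses(theta_hz=5.0, n_pulses=30):
    """Same number of pulses over the same span, with randomized intervals."""
    intervals = rng.uniform(0.5 / theta_hz, 1.5 / theta_hz, size=n_pulses)
    times = np.cumsum(intervals)
    return times * ((n_pulses - 1) / theta_hz) / times[-1]   # rescale to match total span

print(rhythmic_pulses()[:5])    # evenly spaced every 200 ms
print(arrhythmic_pulses()[:5])  # irregular spacing, same mean rate
```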

“For a long time the role of theta waves has been unclear,” says Sylvain Baillet, one of the study’s co-senior authors. “We now know much more about the nature of the mechanisms involved and their causal role in brain functions. For this study, we have built on our strengths at The Neuro, using MEG, EEG and TMS as complementary techniques.”

The most exciting aspect of the study is that the results are very specific and have a broad range of applications, according to Philippe Albouy, the study’s first author.

“Now we know human behavior can be specifically boosted using stimulation that matched ongoing, self-generated brain oscillations,” he says. “Even more exciting is that while this study investigated auditory memory, the same approach can be used for multiple cognitive processes such as vision, perception, and learning.”

The successful demonstration that TMS can be used to improve brain performance also has clinical implications. One day this stimulation could compensate for the loss of memory caused by neurodegenerative diseases such as Alzheimer’s.

“The results are very promising, and offer a pathway for future treatments,” says Robert Zatorre, one of the study’s co-senior authors. “We plan to do more research to see if we can make the performance boost last longer, and if it works for other kinds of stimuli and tasks. This will help researchers develop clinical applications.”

This study was published in the journal Neuron on March 23, and was a result of collaboration between the Neuroimaging/Neuroinformatics and Cognition research groups of the MNI.

Story Source:

Materials provided by McGill University. Note: Content may be edited for style and length.

Robot uses social feedback to fetch objects intelligently

If someone asks you to hand them a wrench from a table full of different sized wrenches, you’d probably pause and ask, “which one?” Robotics researchers from Brown University have now developed an algorithm that lets robots do the same thing — ask for clarification when they’re not sure what a person wants.

The research, which will be presented this spring at the International Conference on Robotics and Automation in Singapore, comes from Brown’s Humans to Robots Lab led by computer science professor Stefanie Tellex. Her work focuses on human-robot collaboration — making robots that can be good helpers to people at home and in the workplace.

“Fetching objects is an important task that we want collaborative robots to be able to do,” Tellex said. “But it’s easy for the robot to make errors, either by misunderstanding what we want, or by being in situations where commands are ambiguous. So what we wanted to do here was come up with a way for the robot to ask a question when it’s not sure.”

Tellex’s lab had previously developed an algorithm that enables robots to receive speech commands as well as information from human gestures. It’s a form of interaction that people use all the time. When we ask someone for an object, we’ll often point to it at the same time. Tellex and her team showed that when robots could combine the speech commands with gestures, they got better at correctly interpreting user commands.

Still, the system isn’t perfect. It runs into problems when there are lots of very similar objects in close proximity to each other. Take the workshop table, for example. Simply asking for “a wrench” isn’t specific enough, and it might not be clear which one a person is pointing to if a number of wrenches are clustered close together.

“What we want in these situations is for the robot to be able to signal that it’s confused and ask a question rather than just fetching the wrong object,” Tellex said.

The new algorithm does that. It enables the robot to quantify how certain it is that it knows what a user wants. When its certainty is high, the robot will simply hand over the object as requested. When it’s not so certain, the robot makes its best guess about what the person wants, then asks for confirmation by hovering its gripper over the object and asking, “this one?”

One of the important features of the system is that the robot doesn’t ask questions with every interaction. It asks intelligently.

“When the robot is certain, we don’t want it to ask a question because it just takes up time,” said Eric Rosen, an undergraduate working in Tellex’s lab and co-lead author of the research paper with graduate student David Whitney. “But when it is ambiguous, we want it to ask questions because mistakes can be more costly in terms of time.”

And even though the system asks only a very simple question, “it’s able to make important inferences based on the answer,” Whitney said. For example, say a user asks for a wrench and there are two wrenches on a table. If the user tells the robot that its first guess was wrong, the algorithm deduces that the other wrench must be the one that the user wants. It will then hand that one over without asking another question. Those kinds of inferences, known as implicatures, make the algorithm more efficient.
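The decision rule and the implicature inference can be sketched as a simple belief update over the candidate objects (an illustrative Python sketch, not the lab's actual algorithm; the confidence threshold and likelihood values are assumptions):

```python
import numpy as np

ASK_THRESHOLD = 0.8   # assumed confidence needed to hand over without asking

def fuse_evidence(speech_likelihood, gesture_likelihood):
    """Combine speech and pointing evidence into a belief over candidate objects."""
    belief = np.asarray(speech_likelihood) * np.asarray(gesture_likelihood)
    return belief / belief.sum()

def act(belief):
    best = int(np.argmax(belief))
    if belief[best] >= ASK_THRESHOLD:
        return ("hand_over", best)
    return ("ask", best)          # hover gripper over `best` and ask "this one?"

def update_after_no(belief, rejected):
    """Implicature: a 'no' rules out the rejected object and boosts the rest."""
    belief = belief.copy()
    belief[rejected] = 0.0
    return belief / belief.sum()

# Two similar wrenches close together: speech is ambiguous, gesture only slightly helps.
belief = fuse_evidence([0.5, 0.5, 0.0], [0.55, 0.45, 0.0])
action, obj = act(belief)          # -> ("ask", 0)
if action == "ask":
    belief = update_after_no(belief, obj)
    print(act(belief))             # -> ("hand_over", 1): the other wrench, no second question
```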

To test their system, the researchers asked untrained participants to come into the lab and interact with Baxter, a popular industrial and research robot. Participants asked Baxter for objects under different conditions. The team could set the robot to never ask questions, ask a question every time, or to ask questions only when uncertain. The trials showed that asking questions intelligently using the new algorithm was significantly better in terms of accuracy and speed compared to the other two conditions.

The system worked so well, in fact, that participants thought the robot had capabilities it actually didn’t have. For the purposes of the study, the researchers used a very simple language model — one that only understood the names of objects. However, participants told the researchers they thought the robot could understand prepositional phrases like, “on the left” or “closest to me,” which it could not. They also thought the robot might be tracking their eye-gaze, which it wasn’t. All the system was doing was making smart inferences after asking a very simple question.

In future work, Tellex and her team would like to combine the algorithm with more robust speech recognition systems, which might further increase the system’s accuracy and speed.

Ultimately, Tellex says, she hopes systems like this will help robots become useful collaborators both at home and at work.

New software allows for ‘decoding digital brain data’

Early this year, about 30 neuroscientists and computer programmers got together to improve their ability to read the human mind.

The hackathon was one of several that researchers from Princeton University and Intel, the largest maker of computer processors, organized to build software that can tell what a person is thinking in real time, while the person is thinking it.

The collaboration between researchers at Princeton and Intel has enabled rapid progress on the ability to decode digital brain data, scanned using functional magnetic resonance imaging (fMRI), to reveal how neural activity gives rise to learning, memory and other cognitive functions.

A review of computational advances toward decoding brain scans appears in the journal Nature Neuroscience, authored by researchers at the Princeton Neuroscience Institute and Princeton’s departments of computer science and electrical engineering, together with colleagues at Intel Labs, a research arm of Intel.

“The capacity to monitor the brain in real time has tremendous potential for improving the diagnosis and treatment of brain disorders as well as for basic research on how the mind works,” said Jonathan Cohen, the Robert Bendheim and Lynn Bendheim Thoman Professor in Neuroscience, co-director of the Princeton Neuroscience Institute, and one of the founding members of the collaboration with Intel.

Since the collaboration’s inception two years ago, the researchers have whittled the time it takes to extract thoughts from brain scans from days down to less than a second, said Cohen, who is also a professor of psychology.

One type of experiment that is benefiting from real-time decoding of thoughts occurred during the hackathon. The study, designed by J. Benjamin Hutchinson, a former postdoctoral researcher in the Princeton Neuroscience Institute who is now an assistant professor at Northeastern University, aimed to explore activity in the brain when a person is paying attention to the environment, versus when his or her attention wanders to other thoughts or memories.

In the experiment, Hutchinson asked a research volunteer — a graduate student lying in the fMRI scanner — to look at a detail-filled picture of people in a crowded café. From his computer in the console room, Hutchinson could tell in real time whether the graduate student was paying attention to the picture or whether her mind was drifting to internal thoughts. Hutchinson could then give the graduate student feedback on how well she was paying attention by making the picture clearer and stronger in color when her mind was focused on the picture, and fading the picture when her attention drifted.

The ongoing collaboration has benefited neuroscientists who want to learn more about the brain and computer scientists who want to design more efficient computer algorithms and processing methods to rapidly sort through large data sets, according to Theodore Willke, a senior principal engineer at Intel Labs in Hillsboro, Oregon, and head of Intel’s Mind’s Eye Lab. Willke directs Intel’s part of the collaborative team.

“Intel was interested in working on emerging applications for high-performance computing, and the collaboration with Princeton provided us with new challenges,” Willke said. “We also hope to export what we learn from studies of human intelligence and cognition to machine learning and artificial intelligence, with the goal of advancing other important objectives, such as safer autonomous driving, quicker drug discovery and earlier detection of cancer.”

Since the invention of fMRI two decades ago, researchers have been improving the ability to sift through the enormous amounts of data in each scan. An fMRI scanner captures signals from changes in blood flow that happen in the brain from moment to moment as we are thinking. But reading from these measurements the actual thoughts a person is having is a challenge, and doing it in real time is even more challenging.

A number of techniques for processing these data have been developed at Princeton and other institutions. For example, work by Peter Ramadge, the Gordon Y.S. Wu Professor of Engineering and professor of electrical engineering at Princeton, has enabled researchers to identify brain activity patterns that correlate to thoughts by combining data from brain scans from multiple people. Designing computerized instructions, or algorithms, to carry out these analyses continues to be a major area of research.

Powerful high-performance computers help cut down the time that it takes to do these analyses by breaking the task up into chunks that can be processed in parallel. The combination of better algorithms and parallel computing is what enabled the collaboration to achieve real-time brain scan processing, according to Kai Li, Princeton’s Paul M. Wythes ’55 P86 and Marcia R. Wythes P86 Professor in Computer Science and one of the founders of the collaboration.

Since the beginning of the collaboration in 2015, Intel has contributed to Princeton more than $1.5 million in computer hardware and support for Princeton graduate students and postdoctoral researchers. Intel also employs 10 computer scientists who work on this project with Princeton, and these experts work closely with Princeton faculty, students and postdocs to improve the software.

These algorithms locate thoughts within the data by using machine learning, the same technique that facial recognition software uses to help find friends in social media platforms such as Facebook. Machine learning involves exposing computers to enough examples so that the computers can classify new objects that they’ve never seen before.
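Applied to brain scans, the same idea amounts to flattening each scan into a vector of voxel values and training a classifier to map new patterns to the condition that produced them. The sketch below is a generic scikit-learn illustration on synthetic data, not the BrainIAK toolbox or the collaboration's pipeline; the array shapes, labels and signal strength are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Stand-in data: 200 scans, each a 20x20x20 block of voxels, from two conditions
# (e.g. attending to the picture vs. mind-wandering).
n_scans, shape = 200, (20, 20, 20)
scans = rng.normal(size=(n_scans, *shape))
labels = rng.integers(0, 2, size=n_scans)
scans[labels == 1, :5, :5, :5] += 0.3        # weak condition-dependent signal in one corner

X = scans.reshape(n_scans, -1)               # flatten each scan into a voxel vector
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, labels, cv=5).mean())   # typically well above chance on this synthetic signal
```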

One of the results of the collaboration has been the creation of a software toolbox, called the Brain Imaging Analysis Kit (BrainIAK), that is openly available via the Internet to any researchers looking to process fMRI data. The team is now working on building a real-time analysis service. “The idea is that even researchers who don’t have access to high-performance computers, or who don’t know how to write software to run their analyses on these computers, would be able to use these tools to decode brain scans in real time,” said Li.

What these scientists learn about the brain may eventually help individuals combat difficulties with paying attention, or other conditions that benefit from immediate feedback.

For example, real-time feedback may help patients train their brains to weaken intrusive memories. While such “brain-training” approaches need additional validation to make sure that the brain is learning new patterns and not just becoming good at doing the training exercise, these feedback approaches offer the potential for new therapies, Cohen said. Real-time analysis of the brain could also help clinicians make diagnoses, he said.

The ability to decode the brain in real time also has applications in basic brain research, said Kenneth Norman, professor of psychology and the Princeton Neuroscience Institute. “As cognitive neuroscientists, we’re interested in learning how the brain gives rise to thinking,” said Norman. “Being able to do this in real time vastly increases the range of science that we can do,” he said.

Another way the technology can be used is in studies of how we learn. For example, when a person listens to a math lecture, certain neural patterns are activated. Researchers could look at the neural patterns of people who understand the math lecture and see how they differ from neural patterns of someone who isn’t following along as well, according to Norman.

The ongoing collaboration is now focused on improving the technology to obtain a clearer window into what people are thinking about, for example, decoding in real time the specific identity of a face that a person is mentally visualizing.

One of the challenges the computer scientists had to overcome was how to apply machine learning to the type of data generated by brain scans. A face-recognition algorithm can scan hundreds of thousands of photographs to learn how to classify new faces, but the logistics of scanning peoples’ brains are such that researchers usually only have access to a few hundred scans per person.

Although the number of scans is few, each scan contains a rich trove of data. The software divides the brain images into little cubes, each about one millimeter wide. These cubes, called voxels, are analogous to the pixels in a two-dimensional picture. The brain activity in each cube is constantly changing.

To make matters more complex, it is the connections between brain regions that give rise to our thoughts. A typical scan can contain 100,000 voxels, and if each voxel can talk to all the other voxels, the number of possible conversations is immense. And these conversations are changing second by second. The collaboration of Intel and Princeton computer scientists overcame this computational challenge. The effort included Li as well as Barbara Engelhardt, assistant professor of computer science, and Yida Wang, who earned his doctorate in computer science from Princeton in 2016 and now works at Intel Labs.
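To put a rough number on "immense": taking the article's round figure of 100,000 voxels, the count of distinct voxel pairs alone is about five billion, before accounting for the fact that each pairing changes from second to second.

```python
from math import comb
n_voxels = 100_000
print(comb(n_voxels, 2))   # 4,999,950,000 possible voxel-to-voxel 'conversations'
```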

Prior to the recent progress, it would take researchers months to analyze a data set, said Nicholas Turk-Browne, professor of psychology at Princeton. With the availability of real-time fMRI, a researcher can change the experiment while it is ongoing. “If my hypothesis concerns a certain region of the brain and I detect in real time that my experiment is not engaging that brain region, then we can change what we ask the research volunteer to do to better engage that region, potentially saving precious time and accelerating scientific discovery,” Turk-Browne said.

One eventual goal is to be able to create pictures from people’s thoughts, said Turk-Browne. “If you are in the scanner and you are retrieving a special memory, such as from childhood, we would hope to generate a photograph of that experience on the screen. That is still far off, but we are making good progress.”

New, ultra-flexible probes form reliable, scar-free integration with the brain

Engineering researchers at The University of Texas at Austin have designed ultra-flexible, nanoelectronic thread (NET) brain probes that can achieve more reliable long-term neural recording than existing probes and don’t elicit scar formation when implanted. The researchers described their findings in a research article published on Feb. 15 in Science Advances.

A team led by Chong Xie, an assistant professor in the Department of Biomedical Engineering in the Cockrell School of Engineering, and Lan Luan, a research scientist in the Cockrell School and the College of Natural Sciences, has developed new probes whose mechanical compliance approaches that of brain tissue and that are more than 1,000 times more flexible than other neural probes. This ultra-flexibility leads to an improved ability to reliably record and track the electrical activity of individual neurons for long periods of time. There is growing interest in long-term tracking of individual neurons for neural interface applications, such as extracting neural-control signals for amputees to control high-performance prostheses. It also opens up new possibilities to follow the progression of neurovascular and neurodegenerative diseases such as stroke, Parkinson’s and Alzheimer’s diseases.

One of the problems with conventional probes is their size and mechanical stiffness; their larger dimensions and stiffer structures often damage the surrounding tissue. Additionally, while it is possible for conventional electrodes to record brain activity for months, they often provide unreliable and degrading recordings. It is also challenging for conventional electrodes to track individual neurons electrophysiologically for more than a few days.

In contrast, the UT Austin team’s electrodes are flexible enough that they comply with the microscale movements of tissue and still stay in place. The probe’s size also drastically reduces the tissue displacement, so the brain interface is more stable, and the readings are more reliable for longer periods of time. To the researchers’ knowledge, the UT Austin probe — which is as small as 10 microns at a thickness below 1 micron, and has a cross-section that is only a fraction of that of a neuron or blood capillary — is the smallest among all neural probes.

“What we did in our research is prove that we can suppress tissue reaction while maintaining a stable recording,” Xie said. “In our case, because the electrodes are very, very flexible, we don’t see any sign of brain damage — neurons stayed alive even in contact with the NET probes, glial cells remained inactive and the vasculature didn’t become leaky.”

In experiments in mouse models, the researchers found that the probe’s flexibility and size prevented the agitation of glial cells, which is the normal biological reaction to a foreign body and leads to scarring and neuronal loss.

“The most surprising part of our work is that the living brain tissue, the biological system, really doesn’t mind having an artificial device around for months,” Luan said.

Story Source:

Materials provided by University of Texas at Austin. Note: Content may be edited for style and length.