In a study of more than 2.3 million patients in the United States with attention-deficit/hyperactivity disorder (ADHD), rates of motor vehicle crashes (MVCs) were lower during periods when patients had received their medication, according to a new article published in JAMA Psychiatry.
About 1.25 million people worldwide die annually because of MVCs. ADHD is a prevalent neurodevelopmental disorder with symptoms that include poor sustained attention, impaired impulse control, and hyperactivity. ADHD affects 5 to 7 percent of children and adolescents, and for many people it persists into adulthood. Prior studies have suggested that people with ADHD are more likely to experience MVCs. Pharmacotherapy is a first-line treatment for the condition, and rates of ADHD medication prescribing have increased over the last decade in the United States and in other countries.
Zheng Chang, Ph.D., M.Sc., of the Karolinska Institutet, Stockholm, Sweden, and coauthors identified more than 2.3 million U.S. patients with ADHD between 2005 and 2014 from commercial health insurance claims and identified emergency department visits for MVCs. Analyses compared the risk of MVCs during months when patients received their medication with the risk of MVCs during months when they did not.
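The within-person design compares each patient's own medicated and unmedicated months. As a rough illustration of the idea only (the study used far more sophisticated within-individual statistical models; the patient data below is invented), a crude crash-rate ratio could be computed like this:

```python
# Illustrative sketch, not the study's actual code: compare crash rates
# in medicated vs. unmedicated patient-months. Data here is hypothetical.

def rate_ratio(patient_months):
    """patient_months: list of (medicated: bool, crash: bool) per month."""
    med_crashes = sum(1 for m, c in patient_months if m and c)
    med_months = sum(1 for m, c in patient_months if m)
    unmed_crashes = sum(1 for m, c in patient_months if not m and c)
    unmed_months = sum(1 for m, c in patient_months if not m)
    med_rate = med_crashes / med_months
    unmed_rate = unmed_crashes / unmed_months
    return med_rate / unmed_rate

# Toy record: 12 months for one patient, one crash in an unmedicated month.
months = [(True, False)] * 8 + [(False, False)] * 3 + [(False, True)]
print(rate_ratio(months))  # 0.0: no crashes occurred in medicated months
```

A ratio below 1.0 would indicate fewer crashes during medicated months; the real analysis additionally adjusts for time-varying factors within each patient.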
Among the more than 2.3 million patients with ADHD (average age 32.5), 83.9 percent (more than 1.9 million) received at least one prescription for ADHD medication during the follow-up. There were 11,224 patients (0.5 percent) who had at least one emergency department visit for an MVC.
Patients with ADHD had a higher risk of an MVC than a control group of people who didn’t have ADHD or ADHD medication use. The use of medication in patients with ADHD was associated with reduced risk for MVC in both male and female patients, according to the results.
“These findings call attention to a prevalent and preventable cause of mortality and morbidity among patients with ADHD. If replicated, our results should be considered along with other potential benefits and harms associated with ADHD medication use,” the article concludes.
Limitations of the study include that it cannot prove causality because it is an observational study, and that medication use was measured by monthly filled prescriptions, which may not reflect actual intake. In addition, because the study used emergency department visits due to MVCs as its main outcome, MVCs that did not require medical services (for example, less severe crashes or some fatal ones) were not included.
An international collaborative study led by researchers at Sanford Burnham Prebys Medical Discovery Institute (SBP), with major participation from Yokohama School of Medicine, Harvard Medical School, and UC San Diego, has identified the molecular mechanism behind lithium’s effectiveness in treating bipolar disorder patients.
The study, published in Proceedings of the National Academy of Sciences (PNAS), utilized human induced pluripotent stem cells (hiPS cells) to map lithium’s response pathway, enabling the larger pathogenesis of bipolar disorder to be identified. These results are the first to explain the molecular basis of the disease, and may support the development of a diagnostic test for the disorder as well as predict the likelihood of patient response to lithium treatment. It may also provide the basis to discover new drugs that are safer and more effective than lithium.
Bipolar disorder is a mental health condition causing extreme mood swings that include emotional highs (mania or hypomania) and lows (depression), and it affects approximately 5.7 million adults in the U.S. Lithium is typically the first treatment tried once bipolar symptoms appear, but it has significant limitations. Only approximately one-third of patients respond to lithium treatment, and whether it works for a given patient can only be determined through a trial-and-error process that takes months — and sometimes years — of prescribing the drug and monitoring for response. Side effects of lithium treatment can be significant, including nausea, muscle tremors, emotional numbing, irregular heartbeat, weight gain, and birth defects, and many patients choose to stop taking the medicine as a result.
“Lithium has been used to treat bipolar disorder for generations, but up until now our lack of knowledge about why the therapy does or does not work for a particular patient led to unnecessary dosing and delayed finding an effective treatment. Further, its side effects are intolerable for many patients, limiting its use and creating an urgent need for more targeted drugs with minimal risks,” said Evan Snyder, M.D., Ph.D., professor and director of the Center for Stem Cells and Regenerative Medicine at SBP, and senior author of the study. “Importantly, our findings open a clear path to finding safe and effective new drugs. Equally as important, it helped give us insight into what type of mechanisms cause psychiatric problems such as these.”
“We realized that studying the lithium response could be used as a ‘molecular can-opener’ to unravel the molecular pathway of this complex disorder, which turns out not to be caused by a defect in a gene, but rather by the posttranslational regulation (phosphorylation) of the product of a gene — in this case, CRMP2, an intracellular protein that regulates neural networks,” added Snyder.
In hiPS cells created from lithium-responsive and non-responsive patients, researchers observed a physiological difference in the regulation of CRMP2 that left the protein in a much less active state in responsive patients. However, when lithium was administered to these cells, their regulatory mechanisms were corrected, restoring normal CRMP2 activity and addressing the underlying cause of the disorder. The study thus demonstrated that bipolar disorder can be rooted in physiological — not necessarily genetic — mechanisms. The insights derived from the hiPS cells were validated in brain specimens from patients with bipolar disorder (on and off lithium), in animal models, and in the actions of living neurons.
“This ‘molecular can-opener’ approach — using a drug known to have a useful action without exactly knowing why — allowed us to examine and understand an underlying pathogenesis of bipolar disorder,” said Snyder. “The approach may be extended to additional complex disorders and diseases for which we don’t understand the underlying biology but do have drugs that may have some beneficial actions, such as depression, anxiety, schizophrenia and others in need of more effective therapies. One cannot improve a therapy until one knows what molecularly really needs to be fixed.”
This study was performed in collaboration with the Veterans Administration Medical Center in La Jolla, University of California San Diego, Yokohama City University, Massachusetts General Hospital, Harvard Medical School, Mailman Research Center at McLean Hospital, University of Connecticut School of Medicine, University of Pittsburgh Medical Center, National Institute of Mental Health, Vala Sciences, Inc., Broad Institute of MIT and Harvard University, Dalhousie University, Beth-Israel Deaconess Medical Center, Örebro University, Janssen Research & Development Labs, Waseda University, and RIKEN.
In the largest MRI study on patients with bipolar disorder to date, a global consortium published new research showing that people with the condition have differences in the brain regions that control inhibition and emotion.
The new study, published in Molecular Psychiatry on May 2, found brain abnormalities in people with bipolar disorder. By revealing clear and consistent alterations in key brain regions, the findings shed light on the underlying mechanisms of bipolar disorder.
“We created the first global map of bipolar disorder and how it affects the brain, resolving years of uncertainty on how people’s brains differ when they have this severe illness,” said Ole A. Andreassen, senior author of the study and a professor at the Norwegian Centre for Mental Disorders Research (NORMENT) at the University of Oslo.
Bipolar disorder affects 1 to 3 percent of the adult population worldwide. It is a debilitating psychiatric disorder with serious implications for those affected and their families. However, scientists have struggled to pinpoint neurobiological mechanisms of the disorder, partly due to the lack of sufficient brain scans.
The study was part of an international consortium called ENIGMA (Enhancing Neuro Imaging Genetics Through Meta Analysis), which spans 76 centers, includes 28 different research groups across the world, and is led by the USC Stevens Neuroimaging and Informatics Institute at the Keck School of Medicine of USC. The researchers analyzed MRI scans from 6,503 individuals, including 2,447 adults with bipolar disorder and 4,056 healthy controls. They also examined the effects of commonly used prescription medications, age of illness onset, history of psychosis, mood state, age and sex on cortical regions.
The study showed thinning of cortical gray matter in the brains of patients with bipolar disorder when compared with healthy controls. The greatest deficits were found in parts of the brain that control inhibition and emotion — the frontal and temporal regions.
Bipolar disorder patients with a history of psychosis showed greater deficits in the brain’s gray matter. The findings also showed different brain signatures in patients who took lithium, antipsychotics and anti-epileptic treatments. Lithium treatment was associated with less thinning of gray matter, which suggests a protective effect of this medication on the brain.
“These are important clues as to where to look in the brain for therapeutic effects of these drugs,” said Derrek Hibar, first author of the paper and a professor at the USC Mark and Mary Stevens Neuroimaging and Informatics Institute when the study was conducted. He was a former visiting researcher at the University of Oslo and is now a senior scientist at Janssen Research and Development, LLC.
“Mapping the brain regions affected is also important for early detection and prevention,” said Paul Thompson, director of the ENIGMA consortium and an associate director of the USC Mark and Mary Stevens Neuroimaging and Informatics Institute.
Future research will test how well different medications and treatments can shift or modify these brain measures as well as improve symptoms and clinical outcomes for patients.
“This new map of the bipolar brain gives us a roadmap of where to look for treatment effects. By bringing together psychiatrists worldwide, we now have a new source of power to discover treatments that improve patients’ lives,” said Thompson.
From the clown fish to leopards, skin colour patterns in animals arise from microscopic interactions among coloured cells that obey equations discovered by the mathematician Alan Turing. Today, researchers at the University of Geneva (UNIGE), Switzerland, and SIB Swiss Institute of Bioinformatics report in the journal Nature that a southwestern European lizard slowly acquires its intricate adult skin colour by changing the colour of individual skin scales using an esoteric computational system invented in 1948 by another mathematician: John von Neumann. The Swiss team shows that the 3D geometry of the lizard’s skin scales causes the Turing mechanism to transform into the von Neumann computing system, allowing biology-driven research to link, for the first time, the work of these two mathematical giants.
A multidisciplinary team of biologists, physicists and computer scientists led by Michel Milinkovitch, professor at the Department of Genetics and Evolution of the UNIGE Faculty of Science, Switzerland, and group leader at the SIB Swiss Institute of Bioinformatics, realised that the brown juvenile ocellated lizard (Timon lepidus) gradually transforms its skin colour as it ages, reaching an intricate adult labyrinthine pattern in which each scale is either green or black. This observation is at odds with the mechanism, discovered in 1952 by the mathematician Alan Turing, that involves microscopic interactions among coloured cells. To understand why the pattern forms at the level of scales, rather than at the level of biological cells, two PhD students, Liana Manukyan and Sophie Montandon, followed individual lizards over four years of their development, from hatchlings crawling out of the egg to fully mature animals. At multiple time points, they reconstructed the geometry and colour of the network of scales using a very high-resolution robotic system developed previously in the Milinkovitch laboratory.
Flipping from green to black
The researchers were then surprised to see the brown juvenile scales change to green or black, then continue flipping colour (between green and black) during the life of the animal. This very strange observation prompted Milinkovitch to suggest that the skin scale network forms a so-called ‘cellular automaton’. This esoteric computing system was invented in 1948 by the mathematician John von Neumann. Cellular automata are lattices of elements in which each element changes its state (here, its colour, green or black) depending on the states of neighbouring elements. The elements are called cells but are not meant to represent biological cells; in the case of the lizards, they correspond to individual skin scales. These abstract automata have been used extensively to model natural phenomena, but the UNIGE team discovered what seems to be the first case of a genuine 2D automaton appearing in a living organism. Analyses of the four years of colour change allowed the Swiss researchers to confirm Milinkovitch’s hypothesis: the scales were indeed flipping colour depending on the colours of their neighbouring scales. Computer simulations implementing the discovered mathematical rule generated colour patterns that could not be distinguished from those of real lizards.
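The general idea of a cellular automaton is easy to sketch in code. The update rule below (flip colour when most neighbours share your colour, which pushes the lattice toward an alternating, labyrinthine mix) is a simplified stand-in for the probabilistic rule the study actually inferred from the lizards:

```python
# Minimal 2D cellular automaton over a von Neumann neighbourhood.
# 0 and 1 stand for the two scale colours (say, green and black).
# The flip rule is illustrative only, not the lizard's real rule.

def step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            # von Neumann neighbourhood: up, down, left, right
            nbrs = [grid[nr][nc]
                    for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                    if 0 <= nr < rows and 0 <= nc < cols]
            same = sum(1 for n in nbrs if n == grid[r][c])
            # a scale surrounded mostly by its own colour flips
            if same > len(nbrs) / 2:
                new[r][c] = 1 - grid[r][c]
    return new

grid = [[0, 0, 0], [0, 0, 0], [0, 0, 1]]
print(step(grid))  # [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```

Note that all cells update simultaneously from the previous state, which is what makes this a synchronous automaton rather than a sequential sweep.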
How could the interactions among pigment cells, described by Turing’s equations, generate a von Neumann automaton exactly superposed on the skin scales? The skin of a lizard is not flat: it is very thin between scales and much thicker at the centre of each scale. Given that Turing’s mechanism involves movements of cells, or the diffusion of signals produced by cells, Milinkovitch realised that this variation in skin thickness could affect Turing’s mechanism. The researchers then performed computer simulations that included skin thickness and saw cellular automaton behaviour emerge, demonstrating that a cellular automaton as a computational system is not just an abstract concept developed by John von Neumann, but also corresponds to a natural process generated by biological evolution.
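A one-dimensional toy makes the intuition concrete: where the "skin" is thin, diffusive coupling is weak, so the regions on either side behave more like discrete, independently coloured automaton cells. The thickness profile and parameters below are invented for illustration; the paper's model is a full reaction-diffusion system.

```python
# Toy 1D diffusion with spatially varying coupling standing in for skin
# thickness (illustrative assumption, not the paper's actual equations).
# Endpoints are held fixed, interior values relax by diffusion.

def diffuse(u, thickness, dt=0.1, steps=100):
    for _ in range(steps):
        new = u[:]
        for i in range(1, len(u) - 1):
            # flux into cell i, scaled by local thickness:
            # thin skin => weak coupling to the neighbours
            left = thickness[i] * (u[i - 1] - u[i])
            right = thickness[i] * (u[i + 1] - u[i])
            new[i] = u[i] + dt * (left + right)
        u = new
    return u

u0 = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
thick = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]   # uniformly thick skin
thin = [1.0, 1.0, 1.0, 0.01, 1.0, 1.0]   # thin junction at index 3
mixed = diffuse(u0[:], thick)
separated = diffuse(u0[:], thin)
# The jump across the thin junction stays much sharper, so each side
# keeps its own "colour" instead of blending smoothly.
print(abs(separated[2] - separated[3]) > abs(mixed[2] - mixed[3]))  # True
```

In the uniform case the profile relaxes toward a smooth gradient; with the thin junction, a sharp discontinuity persists at the scale boundary, which is the qualitative effect the study attributes to scale geometry.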
The need for a formal mathematical analysis
However, the automaton behaviour was imperfect, as the mathematics behind Turing’s mechanism and von Neumann’s automaton are very different. Milinkovitch called in the mathematician Stanislav Smirnov, professor at the UNIGE, who was awarded the Fields Medal in 2010. Before long, Smirnov derived a so-called discretisation of Turing’s equations that constitutes a formal link to von Neumann’s automaton. Anamarija Fofonjka, a third PhD student in Milinkovitch’s team, implemented Smirnov’s new equations in computer simulations, obtaining a system that had become indistinguishable from a von Neumann automaton. The highly multidisciplinary team of researchers had closed the loop in this amazing journey, from biology to physics to mathematics … and back to biology.
Materials provided by Université de Genève. Note: Content may be edited for style and length.
The words we use and our writing styles can reveal information about our preferences, thoughts, emotions and behaviours. Using this information, a new study from the University of Eastern Finland has developed machine learning models that can detect antisocial behaviours, such as hate speech and indications of violence, from texts.
Historically, most attempts to address antisocial behaviour have come from educational, social and psychological perspectives. This new study has, however, demonstrated the potential of natural language processing techniques for developing state-of-the-art solutions to combat antisocial behaviour in written communication.
The study created solutions that can be integrated in web forums or social media websites to automatically or semi-automatically detect potential incidents of antisocial behaviour with high accuracies, allowing for fast and reliable warnings and interventions to be made before the possible acts of violence are committed.
One of the great challenges in detecting antisocial behaviour is first defining what precisely counts as antisocial behaviour and then determining how to detect such phenomena. Thus, using an exploratory and interdisciplinary approach, the study applied natural language processing techniques to identify, extract and utilise the linguistic features, including emotional features, pertaining to antisocial behaviour.
The study investigated emotions and their role or presence in antisocial behaviour. Literature in the fields of psychology and cognitive science shows that emotions have a direct or indirect role in instigating antisocial behaviour. Thus, for the analysis of emotions in written language, the study created a novel resource for analysing emotions. This resource further contributes to subfields of natural language processing, such as emotion and sentiment analysis. The study also created a novel corpus of antisocial behaviour texts, allowing for a deeper insight into and understanding of how antisocial behaviour is expressed in written language.
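The kind of classifier such linguistic features could feed can be sketched with a toy naive Bayes model over a bag of words. Everything below (the example sentences, the labels, the model choice) is an invented minimal illustration; the study's actual models, emotion resources, and corpus are far richer and are not reproduced here.

```python
# Toy bag-of-words naive Bayes for flagging antisocial text.
# Training sentences and labels are invented for illustration.
from collections import Counter
import math

train = [
    ("i will hurt you all", 1),
    ("you people deserve violence", 1),
    ("have a wonderful day friends", 0),
    ("thanks for the kind help", 0),
]

# per-class word counts, for add-one (Laplace) smoothed likelihoods
counts = {0: Counter(), 1: Counter()}
totals = {0: 0, 1: 0}
for text, label in train:
    for w in text.split():
        counts[label][w] += 1
        totals[label] += 1
vocab = {w for c in counts.values() for w in c}

def score(text, label):
    # log-probability of the text under the class's word model
    s = math.log(0.5)  # equal class priors on this balanced toy set
    for w in text.split():
        s += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
    return s

def predict(text):
    return 1 if score(text, 1) > score(text, 0) else 0

print(predict("i will hurt them"))    # 1 -> flagged as antisocial
print(predict("wonderful kind day"))  # 0 -> benign
```

A production system would replace the bag of words with the richer emotional and linguistic features the study describes, but the decision structure (score each class, pick the likelier one) is the same.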
The study shows that natural language processing techniques can help detect antisocial behaviour, which is a step towards its prevention in society. With continued research on the relationships between natural language and societal concerns and with a multidisciplinary effort in building automated means to assess the probability of harmful behaviour, much progress can be made.
Being able to interact with mobile phones and other smart devices using gestures with our hands and fingers in three dimensions would make the digital world more like the real one. A new project at Linnaeus University will help develop this next-generation interface.
Width, height, depth — the world is three-dimensional. So why are almost all interactions with mobile devices based on two dimensions only?
Sure, touchscreens have made it a lot easier to interact, but as new devices such as smart watches and virtual reality glasses turn up, we will not be content with two dimensions. We want to be able to interact in 3D with the help of the hands in front of and around our digital devices. This is where the new project Real-Time 3D Gesture Analysis for Natural Interaction with Smart Devices comes in, a project that will bring the next big development in interface technology.
“The goal of our project is that you will get the same experience of, for example, grabbing and twisting an object in the digital world as in the real world,” says Shahrouz Yousefi, senior lecturer in media technology at Linnaeus University and project manager.
The applications are many and diverse — virtual and augmented reality (VR and AR), medical settings, robotics, e-learning, 3D games and much more. To be able to analyse in real-time the movements of the hands and individual fingers, however, requires both high capacity and high intelligence of the system that is to handle this. It involves large amounts of data and advanced analyses, especially when the system needs to track the movements of multiple hands simultaneously.
“A key issue is how we can develop and use new techniques for the analysis of so-called big data to analyse gestures and movements, and how we can integrate them with existing solutions for computer vision and pattern recognition to tackle high-degree-of-freedom 3D gesture analysis,” says Yousefi.
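One common pattern-recognition strategy for this kind of problem is to match a live hand-pose feature vector against a large database of labelled poses. The sketch below is purely illustrative: the 4-dimensional feature vectors, gesture labels, and database are invented, and the project's actual pipeline is not described in this article.

```python
# Illustrative nearest-neighbour lookup over a database of hand-pose
# feature vectors (hypothetical features, e.g. normalised joint angles).

def nearest_gesture(query, database):
    """Return the label of the database pose closest to the query."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(database, key=lambda item: dist2(item[1], query))[0]

database = [
    ("open_hand", [1.0, 1.0, 1.0, 1.0]),
    ("fist",      [0.0, 0.0, 0.0, 0.0]),
    ("pinch",     [1.0, 0.1, 0.0, 0.0]),
]

print(nearest_gesture([0.9, 0.2, 0.1, 0.0], database))  # pinch
```

The "big data" challenge the article mentions is that a realistic database holds millions of poses per hand and must be searched many times per second, which is what pushes such systems toward specialised indexing and hardware.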
The project is funded by the HÖG 16 programme of KK-stiftelsen (the Knowledge Foundation) with SEK 4 393 590 and will last for three years. Three companies — Screen Interaction, MindArk PE and Globalmouth — participate as active partners. They will contribute unique knowledge and expertise in the field, as well as equipment, so that research, development and implementation lead to demonstrable practical applications as quickly as possible.
From bustling cities to tiny farming communities, the bright lights of the local stadium are common beacons to the Friday night ritual of high school football.
But across the sprawling stretches of rural America, these stadiums are commonly far from doctors who could quickly diagnose and treat head injuries that have brought so much scrutiny to the sport.
A first-of-its-kind study from the Peter O’Donnell Jr. Brain Institute and Mayo Clinic shows the technology exists to ease this dilemma: By using a remote-controlled robot, a neurologist sitting hundreds of miles from the field can evaluate athletes for concussion with the same accuracy as on-site physicians.
The study provides preliminary data to support a nascent movement to utilize teleconcussion equipment at all school sporting events where neurologists or other concussion experts aren’t immediately accessible.
“I see teleconcussion being applicable anywhere in the world,” said Dr. Bert Vargas, the study’s lead author, who directs the sports neuroscience and concussion program at the O’Donnell Brain Institute at UT Southwestern Medical Center. “Right now there’s a significant disparity in access to concussion expertise.”
Concussion awareness has moved to the mainstream of national dialogue in recent years, fueled by revelations that former NFL players suffered permanent damage to their brains due to repeated head impacts.
Having personnel on hand to quickly identify and remove concussed players from games is an important part of protecting against such long-term injuries, Dr. Vargas said. But across the country — and most notably in rural regions — more than half of public high schools don’t have athletic trainers available to spot such incidents, increasing the chances that a concussion could go unnoticed and perhaps be exacerbated by additional injuries.
“Worst-case scenario, you have nobody at the games who can identify or address potential concussion cases,” said Dr. Vargas, an Associate Professor of Neurology and Neurotherapeutics. “You’re putting the athlete in a position to have a more severe injury with prolonged symptoms and longer recovery time.”
While previous teleconcussion research has focused on diagnosing severe traumatic brain injury (TBI) in the military, Dr. Vargas’ research is the first to measure how accurately telemedicine using standard sideline concussion evaluation tools can help diagnose concussions — a mild form of TBI — at sporting events.
The study, published in Neurology, used a mobile robot stationed for two seasons on the sideline and in the athletic training room at Northern Arizona University’s football games. A neurologist could view the game through the robot’s camera and evaluate players who may have been concussed.
Research shows mobile robots controlled by doctors can diagnose sports concussions with the same accuracy as on-site physicians. Researchers say the technology could be a game-changer in rural America, where few doctors or athletic trainers are available to diagnose head injuries during high school athletics. Some key facts and figures:
• 63 percent of public high schools do not employ a full-time athletic trainer who could spot potential in-play concussions.
• Children under age 15 account for the most traumatic brain injury visits to the emergency room.
• Up to 3.8 million recreational and athletic concussions occur in the U.S. each year.
• The O’Donnell Brain Institute at UT Southwestern Medical Center is leading a statewide concussion registry in Texas. ConTex is the nation’s largest state effort to track concussions among youth athletes.
(Sources: UT Southwestern, CDC, National Federation of State High School Associations, National Athletic Trainers’ Association)
Using diagnostic tools that measure cognition, balance, and other factors, the remote neurologist made assessments in 11 cases brought to the robot for review. These assessments were later compared with separate face-to-face diagnoses made by sideline medical personnel consisting of Northern Arizona team physicians and athletic trainers. The results matched each time.
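The comparison reported above is simply the proportion of paired assessments that agree. The paired diagnoses below are invented (the study's 11 case records are not public) and serve only to show the calculation; perfect agreement corresponds to 1.0.

```python
# Toy agreement check between remote (robot) and on-site assessments.
# Labels are invented; in the study, all 11 pairs matched.

remote = ["concussion", "no", "concussion", "no", "no"]
onsite = ["concussion", "no", "concussion", "no", "no"]

agreement = sum(r == o for r, o in zip(remote, onsite)) / len(remote)
print(agreement)  # 1.0 -> every remote call matched the sideline call
```

With only 11 cases, a fuller analysis would also report a chance-corrected statistic such as Cohen's kappa alongside raw agreement.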
The study demonstrates that teleconcussion technology can work, but it doesn’t lessen the need to have trained personnel to help on the sidelines, said Cherisse Kutyreff, Director of Sports Medicine at Northern Arizona, who helped make the on-field assessments during the study.
“If you don’t start with the basics of having an athletic trainer in the schools first, you’re already spinning your wheels,” she said. “Don’t ask the questions if you aren’t prepared to have somebody there to do something with the answers (from the neurologist).”
The findings add scientific backing for the few entities that have already used teleconcussion robots at sporting events. Dr. Vargas sees only a limited future for the technology in college athletics and even less in the professional ranks, where qualified doctors and athletic trainers are already accessible.
His major goal is getting teleconcussion into high school athletics, including soccer, basketball, baseball, and cheerleading. He envisions a scenario where multiple districts could have one concussion specialist on standby for all their games. This person would be accessible when needed through a robot or less expensive interface.
The strategy could be especially beneficial in states such as Texas, which requires concussed high school players to get a physician’s approval before returning to action. In rural corners of the state, finding a doctor to do so often requires a lengthy trek.
“This is a way of bringing physicians into these outlying areas,” Dr. Vargas said. “One person could cover numerous schools. If you’re on-call virtually, you could be anywhere and available as soon as a consult is needed.”
Dr. Vargas’ research was funded by the Mayo Clinic Center for Innovation, where he studied brain injury before moving to UT Southwestern. Dr. George Hershey, Northern Arizona’s team physician, and Dr. Amaal J. Starling of Mayo Clinic collaborated on the study.
“Removal from play decisions are of utmost importance in the setting of an acute concussion,” said Dr. Starling, a neurologist and concussion expert. “This teleconcussion study demonstrates that a remote concussion neurologist accessible through telemedicine technology can guide sideline personnel to make those decisions in a meaningful and timely manner.”
Researchers from the CNRS, Thales, and the Universities of Bordeaux, Paris-Sud, and Evry have created an artificial synapse capable of learning autonomously. They were also able to model the device, which is essential for developing more complex circuits. The research was published in Nature Communications on 3 April 2017.
One of the goals of biomimetics is to take inspiration from the functioning of the brain in order to design increasingly intelligent machines. This principle is already at work in information technology, in the form of the algorithms used for completing certain tasks, such as image recognition; this, for instance, is what Facebook uses to identify photos. However, the procedure consumes a lot of energy. Vincent Garcia (Unité mixte de physique CNRS/Thales) and his colleagues have just taken a step forward in this area by creating, directly on a chip, an artificial synapse that is capable of learning. They have also developed a physical model that explains this learning capacity. This discovery opens the way to creating networks of synapses and hence intelligent systems requiring less time and energy.
Our brain’s learning process is linked to our synapses, which serve as connections between our neurons. The more the synapse is stimulated, the more the connection is reinforced and learning improved. Researchers took inspiration from this mechanism to design an artificial synapse, called a memristor. This electronic nanocomponent consists of a thin ferroelectric layer sandwiched between two electrodes, and whose resistance can be tuned using voltage pulses similar to those in neurons. If the resistance is low the synaptic connection will be strong, and if the resistance is high the connection will be weak. This capacity to adapt its resistance enables the synapse to learn.
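The behaviour described above (low resistance meaning a strong connection, with voltage pulses tuning it) can be caricatured in a few lines. The scaling factors and values below are invented for illustration and do not reflect the ferroelectric device's real physics.

```python
# Toy memristive-synapse model: a resistance nudged by voltage pulses,
# read out as a conductance-like synaptic weight. Parameters are arbitrary.

class Memristor:
    def __init__(self, resistance=1000.0):
        self.resistance = resistance  # high resistance = weak synapse

    def pulse(self, voltage):
        # positive pulses lower resistance (potentiation: stronger link);
        # negative pulses raise it (depression: weaker link)
        self.resistance *= 0.9 if voltage > 0 else 1.1
        return self.resistance

    def weight(self):
        return 1.0 / self.resistance  # conductance as synaptic weight

syn = Memristor()
before = syn.weight()
for _ in range(5):        # repeated stimulation, as in Hebbian learning
    syn.pulse(+1.0)
after = syn.weight()
print(after > before)  # True: stimulation strengthened the synapse
```

This captures the article's qualitative point: the more the device is stimulated, the stronger the connection becomes, which is what lets a network of such components learn.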
Although research focusing on these artificial synapses is central to the concerns of many laboratories, the functioning of these devices remained largely unknown. The researchers have succeeded, for the first time, in developing a physical model able to predict how they function. This understanding of the process will make it possible to create more complex systems, such as a series of artificial neurons interconnected by these memristors.
As part of the ULPEC H2020 European project, this discovery will be used for real-time shape recognition using an innovative camera: the pixels remain inactive, except when they see a change in the angle of vision. The data processing procedure will require less energy, and will take less time to detect the selected objects. The research involved teams from the CNRS/Thales physics joint research unit, the Laboratoire de l’intégration du matériau au système (CNRS/Université de Bordeaux/Bordeaux INP), the University of Arkansas (US), the Centre de nanosciences et nanotechnologies (CNRS/Université Paris-Sud), the Université d’Evry, and Thales.
Evolutionary robotics is an exciting new area of research that draws on Darwinian evolutionary principles to automatically develop autonomous robots. In a new research article published in Frontiers in Robotics and AI, researchers add more complexity to the field by demonstrating for the first time that, just like biological evolution, embodied robot evolution is affected by epigenetic factors.
In evolutionary robotics, an artificial “gene pool” is created, which produces genomes, each of which encodes the control system of a robot. Each robot is then allowed to act and perform tasks according to its “genetically” specified controller, and its fitness is ranked according to how well it performs a certain task. The robots are then allowed to reproduce by swapping genetic material with each other, comparable to biological sexual reproduction. However, the genomes of living organisms are also affected by development — events during their lifetime that lead to epigenetic changes. In biology, this interplay between evolution and development is known as evo-devo, which has emphasized the importance of non-genetic factors in an organism’s phenotype. “For roboticists, the evo-devo challenge is to create physically embodied systems that incorporate the three scales of time and the processes inherent in each: behavior, development, and evolution. Because of the complexity of building and evolving physical robots, this is a daunting challenge in the quest for the ‘evolution of things’,” say authors and co-project leads Mr Jake Brawer and Mr Aaron Hill. “As an initial step toward this goal, in this paper we create a physically embodied system that allows us to examine systematically how developmental and evolutionary processes interact.”
In their study, the Vassar College research team wanted to create a system that could be used to study how genetic (evolutionary) and epigenetic (developmental) factors interact in robot evolution, and how epigenetic factors affect the evolvability of robots. While previous studies have focused on the effects of evolution in physically embodied robots, this is the first time that researchers have also taken into account the epigenetic aspect in this type of experiment. “An explicit evo-devo approach has proven invaluable in the evolution of artificial neural networks. Development serves as a new type of evolutionary driver — alongside the genetic factors of mutation, recombination, and selection — facilitating evolvability in embodied agents,” explain Brawer and Hill. “We note that what is missing from evolutionary robotics is not development per se but rather physically embodied development. We take the first simple steps toward combining the two by examining the interactions of epigenetic and genetic factors in the evolution of physically embodied and simulated robots.”
In this experiment, the fitness of individual robots was measured by how well they performed two tasks: light gathering (phototaxis) and obstacle avoidance. A randomized mating algorithm then determined which parental “genomes” should be combined to produce the next generation of robots. Here, the genes consisted of binary code that allowed for different possible wirings of the robot hardware. The phenotype — the physical expression of the genome — of the robots was modified in each generation by altering their wiring in accordance with the new genetic setup. This was repeated until 10 generations of robots had been created and ranked by fitness. To complement the experiment on physical robots, the team also created and evolved simulated robots, and compared the evolutionary outcomes of the physical and simulated robot populations.
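The evolutionary loop described above (binary genomes, fitness ranking, and randomized mating repeated for ten generations) can be sketched in a few lines. This is an illustrative toy, not the authors' code: the bit-counting fitness function is a stand-in for the phototaxis and obstacle-avoidance scores measured on real robots, and the constants are invented.

```python
import random

random.seed(0)

GENOME_LEN = 8    # bits, each encoding one hypothetical wiring choice
POP_SIZE = 6
GENERATIONS = 10

def fitness(genome):
    # Placeholder fitness: count of 1-bits. In the study this would be
    # a measured score for light gathering and obstacle avoidance.
    return sum(genome)

def mate(a, b):
    # Single-point crossover, loosely analogous to the genome swapping
    # ("sexual reproduction") described in the article.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Randomized mating: parents are drawn uniformly, so low-fitness
    # individuals can still reproduce, as in the study's mating algorithm.
    population = [mate(random.choice(population), random.choice(population))
                  for _ in range(POP_SIZE)]

ranked = sorted(population, key=fitness, reverse=True)
```

Note that because selection here is uniform rather than fitness-biased, average fitness need not rise over generations, which mirrors why the study's robot populations could lose mobility over time.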
The experiments were run until the robots lost all mobility, since the mating algorithm allowed low-fitness individuals to remain in the gene pool and reproduce. The results show that robot populations with an epigenetic factor evolved differently than populations in which development was not taken into account. Although the robots did not evolve greater light-gathering skill, the team is enthusiastic about the results, since the aim of this preliminary study was above all to demonstrate the importance of including epigenetic factors in robot evolution, and to develop a conceptual and physical methodology that makes this possible. “It is important to note that our goal was not to show adaptive evolution per se but rather to test the hypothesis that epigenetic factors can alter the evolutionary dynamics of a population of physically embodied robots. The results do indeed demonstrate the broad importance of including EOs [epigenetic operators] in investigations of evolvability,” say Brawer and Hill. “To our knowledge, our work represents the first physically embodied epigenetic factors to be used in the evolution of physically embodied robots.”
Materials provided by Frontiers.
Artificially intelligent algorithms can learn to identify amazingly subtle information, enabling them to distinguish between people in photos or to screen medical images as well as a doctor. But in most cases their ability to perform such feats relies on training that involves thousands to trillions of data points. This means artificial intelligence doesn’t work all that well in situations where there is very little data, such as drug development.
Vijay Pande, professor of chemistry at Stanford University, and his students thought that a fairly new kind of deep learning called one-shot learning, which requires only a small number of data points, might be a solution to that low-data problem.
“We’re trying to use machine learning, especially deep learning, for the early stage of drug design,” said Pande. “The issue is, once you have thousands of examples in drug design, you probably already have a successful drug.”
The group admitted the idea of applying one-shot learning to drug design problems was far-fetched; the data was likely too limited. However, they’d had success in the past with machine learning methods requiring only hundreds of data points, and they had data available to test the one-shot approach. It seemed worth a try.
Much to their surprise, their results, published April 3 in ACS Central Science, show that one-shot learning methods have potential as a helpful tool for drug development and other areas of chemistry research.
Moving from images to molecules
Other researchers have successfully applied one-shot learning to image recognition and genomics, but applying it to problems relevant to drug development is a bit different. Whereas pixels and bases are fairly natural types of data to feed into an algorithm, properties of small molecules aren’t.
To make molecular information more digestible, the researchers first represented each molecule in terms of the connections between atoms (what a mathematician would call a graph). This step highlighted intrinsic properties of the chemical in a form that an algorithm could process.
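As a concrete illustration, the atoms-and-bonds view of a molecule can be encoded as a small graph by hand. The formaldehyde example and bond-order labels below are our own invention for clarity, not the featurization pipeline actually used in the study:

```python
# Formaldehyde (CH2O) as a toy molecular graph: atoms are nodes,
# bonds are edges. Indices refer to positions in the `atoms` list.
atoms = ["C", "O", "H", "H"]               # node labels
bonds = [(0, 1, 2), (0, 2, 1), (0, 3, 1)]  # (atom_i, atom_j, bond_order)

def to_adjacency(n_atoms, bonds):
    """Build a symmetric bond-order adjacency matrix from the edge list."""
    adj = [[0] * n_atoms for _ in range(n_atoms)]
    for i, j, order in bonds:
        adj[i][j] = adj[j][i] = order
    return adj

adj = to_adjacency(len(atoms), bonds)
# Each row summarizes one atom's local connectivity, a numeric form an
# algorithm (e.g., a graph convolution) can process.
print(adj[0])   # carbon: double bond to O, single bonds to two H
```

Running this prints `[0, 2, 1, 1]` for the carbon atom's row, showing how a chemical structure becomes plain numbers without losing its connectivity.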
With these graph representations, the group trained an algorithm on two different datasets: one with information about the toxicity of different chemicals and another that detailed side effects of approved medicines. From the first dataset, they trained the algorithm on six chemicals and had it make predictions about the toxicity of the other three. Using the second dataset, they trained it to associate drugs with side effects in 21 tasks, testing it on six more.
In both cases, the algorithm was better able to predict toxicity or side effects than would have been possible by chance.
“We worked on some prototype algorithms and found that, given a few data points, they were able to make predictions that were pretty accurate,” said Bharath Ramsundar, who is a graduate student in the Pande lab and co-lead author of the study.
However, Ramsundar cautioned that this isn’t a “magical” technique. It builds on several recent advances in a particular style of one-shot learning, and it works by relying on the closeness of different molecules, as indirectly indicated by their formula. For example, when the researchers trained their algorithm on the toxicity data and tested it on the side-effect data, the algorithm completely collapsed.
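A bare-bones way to see how “closeness” between molecules can drive predictions from only a couple of labeled examples is nearest-neighbor matching on fingerprint bit vectors. This is a crude stand-in for the learned matching-network-style models the paper builds on; the fingerprints and labels below are invented:

```python
def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprint vectors."""
    on_a = {i for i, bit in enumerate(a) if bit}
    on_b = {i for i, bit in enumerate(b) if bit}
    union = len(on_a | on_b)
    return len(on_a & on_b) / union if union else 0.0

def predict(query, support):
    """Label a query by its most similar labeled support example --
    a hand-rolled substitute for learned one-shot attention."""
    return max(support, key=lambda s: tanimoto(query, s[0]))[1]

# Toy "support set" of labeled molecules; 1-bits mark hypothetical
# substructures. Real one-shot models learn the comparison itself.
support = [
    ([1, 1, 0, 0, 1, 0], "toxic"),
    ([0, 0, 1, 1, 0, 1], "non-toxic"),
]
print(predict([1, 0, 0, 0, 1, 0], support))   # prints "toxic"
```

The failure mode Ramsundar describes falls out naturally here: if the query shares no substructure bits with anything in the support set, similarity carries no signal and the prediction is meaningless.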
An experimentalist’s help
People concerned about AI taking jobs from humans have nothing to fear from this work. The researchers envision this as groundwork for a potential tool for chemists who are early in their research and trying to choose which molecule to pursue from a set of promising candidates.
“Right now, people make this kind of choice by hunch,” Ramsundar said. “This might be a nice complement to that: an experimentalist’s helper.”
Beyond giving insight into drug design, this tool would be broadly applicable to molecular chemistry. Already, the Pande lab is testing these methods on different chemical compositions for solar cells. They have also made all of the code they used for the experiment open source, available as part of the DeepChem library.
“This paper is the first time that one-shot has been applied to this space and it’s exciting to see the field of machine learning move so quickly,” Pande said. “This is not the end of this journey — it’s the beginning.”