Unusual soybean coloration sheds light on gene silencing

Today’s soybeans are typically golden yellow, with a tiny blackish mark where they attach to the pod. In a field of millions of beans, nearly all of them will have this look. Occasionally, however, a bean will turn up half-black, with a saddle pattern similar to a black-eyed pea.

“The yellow color is derived from a natural process known as gene silencing, in which genes interact to turn off certain traits,” explains Lila Vodkin, professor emerita in the Department of Crop Sciences at the University of Illinois. “Scientists make use of this process frequently to design everything from improved crops to medicines, but examples of naturally occurring gene silencing — also known as RNA interference, or RNAi — are limited. A better understanding of this process can explain how you can manipulate genes in anything from soybeans to humans.”

The RNAi pathway was discovered about 20 years ago as a naturally occurring process in a tiny roundworm. The discovery and follow-up work showing its biomedical potential won scientists the Nobel Prize in 2006. In plants, RNAi was discovered in petunias. When breeders tried introducing a gene to produce brighter pinks and purples, they wound up with white flowers instead. The gene for flower color had been silenced.

“Before they were domesticated, soybeans were black or brown due to the different anthocyanin pigments in the seed coat,” says Sarah Jones, a research specialist working with Vodkin on the study. “Breeders got rid of the dark pigments because they can discolor the oil or soybean meal during extraction processes.”

Vodkin clarifies, “The yellow color was a naturally occurring RNAi mutation that happened spontaneously, probably at the beginning of agriculture, like 10,000 years ago. People saw the yellow beans as different. They picked them up, saved them, and cultivated them. In the germplasm collections of the wild soybean, Glycine soja, you don’t find the yellow color, only darkly pigmented seeds.”

Previous work from the team showed that yellow soybeans result from a naturally occurring gene silencing process involving two genes. Essentially, one of the genes blocks production of the darker pigment’s precursors. But the researchers weren’t sure why black pigments sometimes reappear, as in saddle-patterned beans. Now they know.

Vodkin and her team searched for beans with unusual pigmentation in the USDA soybean germplasm collection, housed at U of I. The collection contains thousands of specimens, representing much of the genetic diversity in domesticated soybean and its wild relatives.

“We requested beans with this black saddle pattern,” Jones recalls. “We wanted to know if they all get this pattern from the same gene.” Some of the samples had been collected as far back as 1945.

The team used modern genomic sequencing techniques, quickly sifting through some 56,000 protein-coding genes to identify the one responsible for the pattern. The lead author, Young Cho, made the discovery as a graduate student when he noticed a defect in the Argonaute5 gene. The team looked at additional beans with the saddle and found that the Argonaute5 gene was defective in a slightly different way in each of them.

“That’s how you prove you found the right gene,” Vodkin says, “because of these independent mutations happening at different spots right in that same gene.”

When the Argonaute5 gene is defective, the silencing process — which normally blocks the dark pigment and results in yellow beans — can no longer be carried out. The gene defect explains why the dark pigments show up in the saddle beans.

Before the team’s discovery, there were very few examples of how gene interactions work to achieve silencing in naturally occurring systems. Today, bioengineers use genetic engineering technologies to silence genes to produce a desired outcome, whether it’s flower color, disease resistance, improved photosynthesis, or any number of novel applications.

“The yellow color in soybeans could have been engineered, if it hadn’t occurred naturally,” Vodkin says, “but it would have cost millions of dollars and every yellow soybean would be a genetically modified organism. Nature engineered it first.” She says this study also emphasizes the value of the soybean germplasm collection, which preserves diversity for research and breeding purposes.

DNA delivery technology joins battle against drug-resistant bacteria

Antimicrobial resistance is one of the biggest threats to global health, affecting anyone, at any age, in any country, according to the World Health Organization. Currently, 700,000 deaths each year are attributed to antimicrobial resistance, a figure that could increase to 10 million a year by 2050 without further intervention.

New breakthrough technology from Tel Aviv University facilitates DNA delivery into drug-resistant bacterial pathogens, enabling their manipulation. The research expands the host range of bacteriophages, which are the primary tool for introducing DNA into pathogenic bacteria to neutralize their lethal activity. A single type of bacteriophage can be adapted to a wide range of bacteria, an innovation that will likely accelerate the development of potential drugs based on this principle.

Prof. Udi Qimron of the Department of Clinical Microbiology and Immunology at TAU’s Sackler Faculty of Medicine led the research team, which also included Dr. Ido Yosef, Dr. Moran Goren, Rea Globus and Shahar Molshanski, all of Prof. Qimron’s lab. The study was recently published in Molecular Cell and featured on its cover.

For the research, the team genetically engineered bacteriophages to contain the desired DNA rather than their own genome. They also designed combinations of nanoparticles from different bacteriophages, resulting in hybrids that are able to recognize new bacteria, including pathogenic bacteria. The researchers further used directed evolution to select hybrid particles able to transfer DNA with optimal efficiency.

“DNA manipulation of pathogens includes sensitization to antibiotics, killing of pathogens, disabling pathogens’ virulence factors and more,” Prof. Qimron said. “We’ve developed a technology that significantly expands DNA delivery into bacterial pathogens. This may indeed be a milestone, because it opens up many opportunities for DNA manipulations of bacteria that were impossible to accomplish before.

“This could pave the way to changing the human microbiome — the combined genetic material of the microorganisms in humans — by replacing virulent bacteria with avirulent bacteria and replacing antibiotic-resistant bacteria with antibiotic-sensitive bacteria, as well as changing environmental pathogens,” Prof. Qimron continued.

“We have applied for a patent on this technology and are developing products that would use this technology to deliver DNA into bacterial pathogens, rendering them avirulent and sensitive to antibiotics,” Prof. Qimron said.

Story Source:

Materials provided by American Friends of Tel Aviv University. Note: Content may be edited for style and length.

E. coli bacteria's defense secret revealed

By tagging a cell’s proteins with fluorescent beacons, Cornell researchers have found out how E. coli bacteria defend themselves against antibiotics and other poisons. Probably not good news for the bacteria.

When undesirable molecules show up, the bacterial cell opens a tunnel through its cell wall and “effluxes,” or pumps out, the intruders.

“Dynamic assembly of these tunnels has long been hypothesized,” said Peng Chen, professor of chemistry and chemical biology. “Now we see them.”

The findings could lead to ways to combat antibiotic-resistant bacteria with a “cocktail” of drugs, he suggests: “One is to inhibit the assembly of the tunnel, the next is to kill the bacteria.”

To study bacteria’s defensive process, Chen and colleagues at Cornell selected a strain of E. coli known to pump out copper atoms that would otherwise poison the bacteria. The researchers genetically engineered it, adding to the DNA that codes for a defensive protein an additional DNA sequence that codes for a fluorescent molecule.

Under a powerful microscope, they exposed a bacterial cell to an environment containing copper atoms and periodically zapped the cell with an infrared laser to induce fluorescence. Following the blinking lights, they had a “movie” showing where the tagged protein traveled in the cell. They further genetically engineered the various proteins to turn their metal-binding capability on and off, and observed the effects.

Their research was reported in the Early Online edition of the Proceedings of the National Academy of Sciences the week of June 12. The Cornell researchers also collaborated with scientists at the University of Houston, the University of Arizona and the University of California, Los Angeles.

The key protein, known as CusB, resides in the periplasm, the space between the inner and outer membranes that make up the bacteria’s cell wall. When CusB binds to an intruder — in this experiment, a copper atom — that has passed through the porous outer membrane, it changes its shape so that it will attach itself between two related proteins in the inner and outer membranes to form a complex known as CusCBA that acts as a tunnel through the cell wall. The inner protein has a mechanism to grab the intruder and push it through.

The tunnel locks the inner and outer membranes together, making the periplasm less flexible and interfering with its normal functions. The ability to assemble the tunnel only when needed, rather than having it permanently in place, gives the cell an advantage, the researchers point out.

This mechanism for defending against toxic metals may also explain how bacteria develop resistance to antibiotics, by mutating their defensive proteins to recognize the drugs. Similar mechanisms may be found in other species of bacteria, the researchers suggested.

Story Source:

Materials provided by Cornell University. Original written by Bill Steele. Note: Content may be edited for style and length.

Highly safe biocontainment strategy hopes to encourage greater use of GMOs

Use of genetically modified organisms (GMOs) — microorganisms not found in the natural world but developed in labs for their beneficial characteristics — is a contentious issue.

While GMOs could greatly improve society in numerous ways — e.g. attacking diseased cells, digesting pollution, or increasing food production — their use is heavily restricted by decades-old legislation, for fear of what might happen should they escape into the environment.

For researchers aware of their potential, it is important to develop safety strategies that convince legislators GMOs are safe for release.

For this reason, Hiroshima University’s Professor Ryuichi Hirota and Professor Akio Kuroda have developed an extra-safe phosphite-based biocontainment strategy.

Biocontainment strategies, methods used to prevent GMOs from escaping or proliferating beyond their intended use, typically take one of two forms.

One is the “suicide switch,” where released GMOs die off on their own after a given time. The other is the “nutrient requirement,” where GMOs are designed to expire when a nutrient source is removed.

The control method for the new genetically modified E. coli strain of bacteria employs the latter, and its simple practicality could prove a real game changer.

It relies on the fact that all living things require phosphorus for a vast array of life-determining processes, including energy storage, DNA production, and cell signal transduction. The overwhelming majority of bacteria source phosphorus from the naturally occurring nutrient phosphate.

However, bacteria are renowned for their ability to obtain nutrients from seemingly implausible sources, and the researchers at HU found one type, Ralstonia sp. strain 4506, capable of utilizing non-naturally occurring phosphite instead — opening up exciting possibilities.

Because phosphite, a waste by-product of the metal-plating industry, does not occur in the natural world, scientists can easily control its availability and thereby limit potential GMO proliferation.

Strain 4506’s phosphite-digesting enzyme was thus isolated and introduced into E. coli, a bacterium which, due to its versatility, is considered the poster boy of the GMO world. Genetic editing also created a phosphite-specific “transporter” to allow uptake of this nutrient.

But while this modified E. coli, now with phosphite-munching capabilities, was quite the novelty in the HU lab, there was still a major hurdle to overcome — its innate phosphate transport mechanisms were still intact, so it could survive equally well on non-naturally occurring phosphite or naturally occurring phosphate. It could easily escape and thrive.

E. coli has seven phosphate transporters — pumps that move phosphate from outside the cell membrane to the inside — so Professors Hirota and Kuroda set about shutting them down using genetic editing.

When the resulting GMO was tested the results were outstanding. It proliferated in a phosphite medium, and didn’t grow at all when exposed only to phosphate.

Further, when thriving populations were later deprived of phosphite, their numbers tumbled to zero over a two-week period — thus fulfilling the criteria for “nutrient requirement” biocontainment.

However, what the scientists discovered next astounded them. Even when this new GMO was successfully and continuously cultured on phosphite, its population nevertheless still began plummeting after two weeks.

Baffled, the HU researchers are still investigating, but there is a possibility that this strategy possesses “suicide switch” characteristics on top of the “nutrient requirement.”

Whatever the reason, an extremely safe and practical biocontainment strategy has been born. Requiring just nine simple gene edits in naturally occurring organisms, and based on phosphite, a readily available industrial waste product, it is extremely cost- and time-effective. Additionally, its simplicity means it can be adapted to other microorganisms, making it highly versatile.

These traits contrast with previous biocontainment strategies involving synthetic organisms and energy sources, which require hundreds of gene edits, large amounts of money and time, and are so specialized as to be impractical.

It is hoped this new strategy will grab the attention of relevant government agencies and convince them to bring 1980s laws in line with 21st-century advancements. We can then get GMOs safely out of the lab for the betterment of society!

Bio-based p-xylene oxidation into terephthalic acid by engineered E. coli

KAIST researchers have established an efficient biocatalytic system to produce terephthalic acid (TPA) from p-xylene (pX). It will allow this industrially important bulk chemical to be made available in a more environmentally-friendly manner.

The research team developed metabolically engineered Escherichia coli (E. coli) to biologically transform pX into TPA, a chemical necessary in the manufacturing of polyethylene terephthalate (PET). This biocatalysis system represents a greener and more efficient alternative to the traditional chemical methods for TPA production. This research, headed by Distinguished Professor Sang Yup Lee, was published in Nature Communications on May 31.

The research team utilized a metabolic engineering and synthetic biology approach to develop a recombinant microorganism that can oxidize pX into TPA using microbial fermentation. TPA is a globally important chemical commodity for manufacturing PET. It can be applied to manufacture plastic bottles, clothing fibers, films, and many other products. Currently, TPA is produced from pX oxidation through an industrially well-known chemical process (with a typical TPA yield of over 95 mol%), which, however, has such drawbacks as intensive energy requirements at high temperature and pressure, the use of heavy-metal catalysts, and unavoidable formation of the byproduct 4-carboxybenzaldehyde.

The research team designed and constructed a synthetic metabolic pathway by incorporating the upper xylene degradation pathway of Pseudomonas putida F1 and the lower p-toluene sulfonate pathway of Comamonas testosteroni T-2, which successfully produced TPA from pX in small-scale cultures, with the formation of p-toluate (pTA) as the major byproduct. The team further optimized the pathway gene expression levels by using a synthetic biology toolkit, which gave the final engineered E. coli strain showing increased TPA production and the complete elimination of the byproduct.

Using this best-performing strain, the team designed an elegant two-phase (aqueous/organic) fermentation system for TPA production on a larger scale, where pX was supplied in the organic phase. Through a number of optimization steps, the team ultimately achieved production of 13.3 g TPA from 8.8 g pX, which represented an extraordinary yield of 97 mol%.
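
As a quick sanity check on those figures, the reported molar yield can be reproduced from standard molar masses of p-xylene (about 106.2 g/mol) and terephthalic acid (about 166.1 g/mol). The short calculation below is purely illustrative and is not part of the published work.

```python
# Verify that 13.3 g of TPA from 8.8 g of pX corresponds to roughly 97 mol%.
MW_PX, MW_TPA = 106.17, 166.13   # g/mol for p-xylene (C8H10) and terephthalic acid (C8H6O4)
mol_px = 8.8 / MW_PX             # ~0.083 mol p-xylene supplied
mol_tpa = 13.3 / MW_TPA          # ~0.080 mol TPA recovered
print(f"molar yield ~ {100 * mol_tpa / mol_px:.0f} mol%")   # prints: molar yield ~ 97 mol%
```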

The team has developed a microbial biotechnology application which is reportedly the first successful example of the bio-based production of TPA from pX by the microbial fermentation of engineered E. coli. This bio-based TPA technology presents several advantages, such as ambient reaction temperature and pressure, no use of heavy metals or other toxic chemicals, the elimination of byproduct formation, and full environmental compatibility.

Professor Lee said, “We presented promising biotechnology for producing large amounts of the commodity chemical TPA, which is used for PET manufacturing, through metabolically engineered gut bacterium. Our research is meaningful in that it demonstrates the feasibility of the biotechnological production of bulk chemicals, and if reproducible when up-scaled, it will represent a breakthrough in hydrocarbon bioconversions.”

Remembrance of things past: bacterial memory of gut inflammation

The microbiome, or the collections of microorganisms present in the body, is known to affect human health and disease and researchers are thinking about new ways to use them as next-generation diagnostics and therapeutics. Today bacteria from the normal microbiome are already being used in their modified or attenuated form in probiotics and cancer therapy. Scientists exploit the microorganisms’ natural ability to sense and respond to environmental- and disease-related stimuli and the ease of engineering new functions into them. This is particularly beneficial in chronic inflammatory diseases like inflammatory bowel disease (IBD) that remain difficult to monitor non-invasively. However, there are several challenges associated with developing living diagnostics and therapeutics including generating robust sensors that do not crash and are capable of long-term monitoring of biomolecules.

In order to use bacteria of the microbiome as biomarker sensors, their genome needs to be modified with synthetic genetic circuits, or a set of genes that work together to achieve a sensory or response function. Some of these genetic alterations may weaken or break normal signaling circuits and be toxic to these bacteria. Even in cases where the probiotic microbes tolerate the changes, the engineered cells can have growth delays and be outcompeted by other components of the microbiome. As a result, probiotic bacteria and engineered therapeutic microbes are rapidly cleared from the body, which makes them inadequate for long period monitoring and modulation of the organism’s tissue environment.

A team at the Wyss Institute for Biologically Inspired Engineering led by Pamela Silver, Ph.D., designed a powerful bacterial sensor with a stable gene circuit in a colonizing bacterial strain that can record gut inflammation for six months in mice. This study offers a solution to previous challenges associated with living diagnostics and may bring them closer to use in human patients. The findings are reported in Nature Biotechnology.

Silver, who is a Core Faculty member at the Wyss Institute and also the Elliot T. and Onie H. Adams Professor of Biochemistry and Systems Biology at Harvard Medical School, thought of the gut as a first application for this system due to its inaccessibility by non-invasive means and its susceptibility to inflammation in patients suffering from chronic diseases like IBD. “We think about the gut as a black box where it is hard to see, but we can use bacteria to illuminate these dark places. There is great interest from patients and doctors that push us to build sensors for biomarkers of gut conditions like IBD and colon cancer,” said Silver, “We believe that our work opens up enormous possibilities that can exploit the flexibility and modularity of our diagnostic tool and expand the use of engineered organisms to a wide variety of applications.”

Key to the team’s work is the introduction of a memory module to the circuit that is able to detect a molecule of interest and respond to this exposure long after the stimulus is gone. As bacteria can be rapidly cleared from the intestinal tract, the team used a strain of bacteria that is part of the microbiome of mice, and engineered it to contain the sensory and memory elements capable of detecting tetrathionate. Tetrathionate is a transient metabolic molecule produced in the inflamed mouse intestine as a result of either infection with pathogenic bacteria like Salmonella typhimurium and Yersinia enterocolitica or genetic defects affecting inflammation.

The synthetic genetic circuit designed by the Wyss team contains a “trigger element” that is adopted from the natural system specifically recognizing the biomarker (in this case tetrathionate) in cells, or that can be developed using synthetic approaches when no prior sensor exists. The second element in the circuit is the “memory element” that resembles a toggle switch and has been adapted from a virus that attacks bacteria. It consists of two genes (A and B for simplicity) that regulate each other depending on whether the stimulus is present. In the tetrathionate sensor, the product of gene A blocks expression of gene B when tetrathionate is absent. When tetrathionate is produced during inflammation and is sensed by the trigger element, levels of A decrease and the gene B is induced and begins to shut off expression of gene A. The expression of the B gene is also coupled to a reporter gene which turns bacteria from colorless to blue only when they have switched the memory element on. The switch can be maintained in the on state long after the first tetrathionate exposure.
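
To make the logic of that memory element concrete, the sketch below simulates a generic mutual-repression toggle switch in which a transient pulse of the trigger signal flips the circuit from the A-high state to the B-high state, where it stays after the signal disappears. The equations and parameter values are illustrative textbook assumptions, not the Wyss team’s actual circuit model.

```python
# Minimal Euler-integrated model of a two-gene toggle switch with a trigger pulse.
# Genes A and B repress each other; the pulse mimics transient tetrathionate sensing.
def simulate_toggle(t_end=200.0, dt=0.01, pulse=(50.0, 60.0)):
    alpha, n, decay = 10.0, 2.0, 1.0   # illustrative production rate, Hill coefficient, decay rate
    a, b = 5.0, 0.1                    # start in the A-high ("memory off") state
    for step in range(int(t_end / dt)):
        t = step * dt
        trigger = 0.1 if pulse[0] <= t <= pulse[1] else 1.0   # the pulse suppresses A's production
        da = trigger * alpha / (1.0 + b**n) - decay * a       # A is repressed by B
        db = alpha / (1.0 + a**n) - decay * b                 # B is repressed by A
        a, b = a + da * dt, b + db * dt
    return a, b

a_final, b_final = simulate_toggle()
print(f"long after the pulse: A = {a_final:.2f}, B = {b_final:.2f} (B stays high, memory ON)")
```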

After verifying the functionality of the sensor in a liquid culture of bacteria, David Riglar, Ph.D., the study’s first author, was able to show that it detected tetrathionate in a mouse model of gut inflammation caused by infection with S. typhimurium up to six months after administration of the sensor-containing probiotic bacteria. Through simple analysis of fecal matter, the synthetic circuit’s memory state was confirmed to be on and its DNA unchanged and stable. “Our approach is to use the bacteria’s sensing ability to monitor the environment in unhealthy tissue or organs. By adding gene circuits that retain memory, we envision giving humans probiotics that record disease progression by a simple and non-invasive fecal test,” said Riglar.

Silver’s team plans to extend this work to sensing inflammation in the human gut and also to develop new sensors detecting signs of a variety of other conditions.

“Pam’s work demonstrates the power of synthetic biology for advancing medicine as it provides a way to rationally and rapidly design sophisticated sensors for virtually any molecule. If successful in humans, their technology would offer a much less expensive and more specific way to monitor gut function at home than sophisticated imaging instruments used today,” said Donald Ingber, M.D., Ph.D., Founding Director of the Wyss Institute, the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, as well as Professor of Bioengineering at the Harvard John A. Paulson School of Engineering and Applied Sciences.

Scientists borrow from electronics to build circuits in living cells

Living cells must constantly process information to keep track of the changing world around them and arrive at an appropriate response.

Through billions of years of trial and error, evolution has arrived at a mode of information processing at the cellular level. In the microchips that run our computers, information processing capabilities reduce data to unambiguous zeros and ones. In cells, it’s not that simple. DNA, proteins, lipids and sugars are arranged in complex and compartmentalized structures.

But scientists — who want to harness the potential of cells as living computers that can respond to disease, efficiently produce biofuels or develop plant-based chemicals — don’t want to wait for evolution to craft their desired cellular system.

In a new paper published May 25 in Nature Communications, a team of UW synthetic biology researchers has demonstrated a new method for digital information processing in living cells, analogous to the logic gates used in electric circuits. They built a set of synthetic genes that function in cells like NOR gates, commonly used in electronics, which each take two inputs and only pass on a positive signal if both inputs are negative. NOR gates are functionally complete, meaning one can assemble them in different arrangements to make any kind of information processing circuit.
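
As a quick software illustration (ordinary Python, entirely separate from the cellular implementation) of what functional completeness means, NOT, OR and AND can all be wired out of NOR gates alone, so in principle any logic circuit can be composed of them:

```python
def NOR(a: bool, b: bool) -> bool:
    return not (a or b)

def NOT(a: bool) -> bool:           # NOT from a single NOR
    return NOR(a, a)

def OR(a: bool, b: bool) -> bool:   # OR = NOT(NOR)
    return NOR(NOR(a, b), NOR(a, b))

def AND(a: bool, b: bool) -> bool:  # AND from three NORs (De Morgan's law)
    return NOR(NOR(a, a), NOR(b, b))

for a in (False, True):
    for b in (False, True):
        print(a, b, "-> NOR:", NOR(a, b), " OR:", OR(a, b), " AND:", AND(a, b))
```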

The UW engineers did all this using DNA instead of silicon and solder, and inside yeast cells instead of at an electronics workbench. The circuits the researchers built are the largest ever published to date in eukaryotic cells, which, like human cells, contain a nucleus and other structures that enable complex behaviors.

“While implementing simple programs in cells will never rival the speed or accuracy of computation in silicon, genetic programs can interact with the cell’s environment directly,” said senior author and UW electrical engineering professor Eric Klavins. “For example, reprogrammed cells in a patient could make targeted, therapeutic decisions in the most relevant tissues, obviating the need for complex diagnostics and broad spectrum approaches to treatment.”

Each cellular NOR gate consists of a gene with three programmable stretches of DNA — two to act as inputs, and one to be the output. The authors then took advantage of a relatively new technology known as CRISPR-Cas9 to target those specific DNA sequences inside a cell. The Cas9 protein acts like a molecular gatekeeper in the circuit, sitting on the DNA and determining if a particular gate will be active or not.

If a gate is active, it expresses a signal that directs the Cas9 to deactivate another gate within the circuit. In this way, the researchers can “wire” together the gates to create logical programs in the cell.

What sets the study apart from previous work, researchers said, is the scale and complexity of the circuits successfully assembled — which included up to seven NOR gates assembled in series or parallel.

At this size, circuits can begin to execute really useful behaviors by taking in information from different environmental sensors and performing calculations to decide on the correct response. Imagined applications include engineered immune cells that can sense and respond to cancer markers or cellular biosensors that can easily diagnose infectious disease in patient tissue.

These large DNA circuits inside cells are a major step toward an ability to program living cells, the researchers said. They provide a framework where logical programs can be easily implemented to control cellular function and state.

Story Source:

Materials provided by University of Washington. Note: Content may be edited for style and length.

In a neuro-techno future, human rights laws will need to be revisited

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the on-going neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Story Source:

Materials provided by BioMed Central. Note: Content may be edited for style and length.


Brain stimulation during training boosts performance

Your Saturday Salsa club or Introductory Italian class might be even better for you than you thought.

According to Sandia National Laboratories cognitive scientist Mike Trumbo, learning a language or an instrument or going dancing is the best way to keep your brain keen despite the ravages of time. Not only do you enhance your cognition but you also learn a skill and have fun.

Several commercial enterprises have claimed you can get cognitive benefits from brain training games intended to enhance working memory. Working memory is the amount of information you can hold and manipulate in your mind at one time, said cognitive scientist Laura Matzen. However, a burgeoning body of research shows working memory training games don’t provide the benefits claimed. A study by Trumbo, Matzen and six colleagues published in Memory and Cognition shows evidence that working memory training actually impairs other kinds of memory.

On the other hand, studies have shown that learning another language can help school-age children do better in math and can delay the onset of dementia in older adults. Going dancing regularly has also been shown to be the best protection against dementia among 16 different leisure activities studied, such as doing crossword puzzles and bicycling. Playing board games and practicing a musical instrument are the next best activities for keeping the mind sharp. Dancing is probably so effective because it combines cognitive exertion, physical exercise and social interaction, said Trumbo.

New research from Sandia published in Neuropsychologia shows that working memory training combined with a kind of noninvasive brain stimulation can lead to cognitive improvement under certain conditions. Improving working memory or cognitive strategies could be very valuable for training people faster and more efficiently.

“The idea for why brain stimulation might work when training falls short is because you’re directly influencing brain plasticity in the regions that are relevant to working memory task performance. If you’re improving connectivity in a brain region involved in working memory, then you should get transfer to other tasks to the extent that they rely on that same brain region,” said Trumbo. “Whereas when you’re having people do tasks in the absence of brain stimulation, it’s not clear if you’re getting this general improvement in working memory brain areas. You might be getting very selective, task-specific improvements.”

Matzen cautioned that research using transcranial direct current stimulation (tDCS) to improve cognitive performance is relatively new, and the field has produced mixed results. More research is needed to understand how best to use this technology.

Neurons that fire together wire together

The researchers divided more than 70 volunteers into six groups and tested different combinations of working memory training and transcranial direct current stimulation. Then they assessed the volunteers’ performance on working memory tests and a test of problem-solving ability.

Using electrodes placed on the scalp and powered by a 9-volt battery, a tDCS unit delivers weak constant current through the skull to the brain tissue below. According to Trumbo, most people feel some mild tingling, itching or heat under the electrode for the first few minutes. There are well-established safety guidelines for tDCS research, ensuring that the procedure is safe and comfortable for participants and this research was approved by Sandia’s Human Studies Board and the University of New Mexico’s Institutional Review Board. There are commercial tDCS devices already on the market.

Researchers think tDCS makes neurons a little bit more likely to fire, which can help speed up the formation of neuronal connections and thus learning, said Matzen. Though the exact mechanisms aren’t well understood, its potential is. According to studies, tDCS can help volunteers remember people’s names, is better than caffeine at keeping Air Force personnel awake and may even help fight depression.

Brain stimulation and brain training: better together?

In the Sandia-led study, the volunteers played verbal or spatial memory training games for 30 minutes while receiving stimulation to the left or right forehead. That part of the brain is called the dorsolateral prefrontal cortex and is involved in working memory and reasoning. Since the right hemisphere is involved in spatial tasks and the left hemisphere is involved in verbal tasks, the researchers thought volunteers who received stimulation on the right side while training on spatial tasks would improve on spatial tests and those who received stimulation on the left side while training on verbal tasks would improve on verbal tests.

The verbal task involved remembering if a letter had appeared three letters back in a string of letters, for instance A-C-B-A-D. The spatial task was similar but involved remembering the sequence that blocks appear in a grid.
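
For readers unfamiliar with this kind of “n-back” task, the snippet below expresses the underlying rule as code, using the article’s example sequence; it is a generic illustration, not the software used in the study.

```python
def n_back_targets(sequence, n=3):
    """Positions where the current item matches the item shown n steps earlier."""
    return [i for i in range(n, len(sequence)) if sequence[i] == sequence[i - n]]

letters = list("ACBAD")
print(n_back_targets(letters))   # [3]: the second 'A' matches the 'A' three letters back
```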

As expected, the spatial/right group got better at the spatial test but not verbal or reasoning tests. The spatial/left group performed about the same as the volunteers that received mock stimulation. The verbal/left group got better at the verbal test but not spatial or reasoning tests.

However, the results from the verbal/right group were surprising, said Trumbo. This group got better at the trained task — remembering strings of letters — as well as the closely related task — remembering the sequence of boxes in a grid. They also improved on a reasoning test. The sample size was small, with only 12 volunteers, but the improvements were statistically significant, said Matzen.

One explanation Trumbo offered is that the right dorsolateral prefrontal cortex is particularly involved in strategy use during tasks. By stimulating the right side during the verbal task, the volunteers might get better at using a strategy. The tDCS improves the connections of these neurons, which leads to enhanced ability to use this strategy, even on other tasks.

He added, “We did not explicitly collect data related to strategy use, so it is kind of an open question. I’d really like to do some follow-up work.”

If tDCS can reliably enhance working memory or cognitive strategies, it could be very useful for training people faster and more efficiently. Matzen said, “This could benefit many mission areas at Sandia where people must learn complex tools and systems. Reducing training time and improving cognitive performance would have substantial benefits to overall system performance.”

Mini brains from the petri dish

A new method could push research into developmental brain disorders an important step forward. This is shown by a recent study at the University of Bonn in which the researchers investigated the development of a rare congenital brain defect. To do so, they converted skin cells from patients into so-called induced pluripotent stem cells. From these ‘jack-of-all-trades’ cells, they generated brain organoids — small three-dimensional tissues which resemble the structure and organization of the developing human brain. The work has now been published in the journal Cell Reports.

Investigations into human brain development using human cells in the culture dish have so far been very limited: the cells in the dish grow flat, so they do not display any three-dimensional structure. Model organisms, such as mice, are available as an alternative, but the human brain has a much more complex structure. Developmental disorders of the human brain can thus be reproduced only to a limited degree in animal models.

Scientists at the Institute of Reconstructive Neurobiology at the University of Bonn applied a recent development in stem cell research to tackle this limitation: they grew three-dimensional organoids in the cell culture dish, the structure of which is incredibly similar to that of the human brain. These “mini brains” offer insight into the processes with which individual nerve cells organize themselves into our highly complex tissues. “The method thus opens up completely new opportunities for investigating disorders in the architecture of the developing human brain,” explains Dr. Julia Ladewig, who leads a working group on brain development.

Rare brain deformity investigated

In their work, the scientists investigated the Miller-Dieker syndrome. This hereditary disorder is attributed to a chromosome defect. As a consequence, patients present malformations of important parts of their brain. “In patients, the surface of the brain is hardly grooved but instead more or less smooth,” explains Vira Iefremova, PhD student and lead author of the study. What causes these changes has so far only been known in part.

The researchers produced induced pluripotent stem cells from skin cells from Miller-Dieker patients, from which they then grew brain organoids. In organoids, the brain cells organize themselves — very similar to the process in the brain of an embryo: the stem cells divide; a proportion of the daughter cells develops into nerve cells; these move to wherever they are needed. These processes resemble a complicated orchestral piece in which the genetic material waves the baton.

In Miller-Dieker patients, this process is fundamentally disrupted. “We were able to show that the stem cells divide differently in these patients,” explains associate professor Dr. Philipp Koch, who led the study together with Dr. Julia Ladewig. “In healthy people, the stem cells initially extensively multiply and form organized, densely packed layers. Only a small proportion of them becomes differentiated and develops into nerve cells.”

Certain proteins are responsible for the dense and even packing of the stem cells. The formation of these molecules is disrupted in Miller-Dieker patients. The stem cells are thus not so tightly packed and, at the same time, do not have such a regular arrangement. This poor organization leads, among other things, to the stem cells becoming differentiated at an earlier stage. “The change in the three-dimensional tissue structure thus causes altered division behavior,” says Ladewig. “This connection cannot be identified in animals or in two-dimensional cell culture models.”

The scientist emphasizes that no new treatment options are in sight as a result of this. “We are undertaking fundamental research here. Nevertheless, our results show that organoids have what it takes to herald a new era in brain research. And if we better understand the development of our brain, new treatment options for disorders of the brain can presumably arise from this over the long term.”

Story Source:

Materials provided by Universität Bonn. Note: Content may be edited for style and length.

Controlling soft robots using magnetic fields

A team of engineering researchers has made a fundamental advance in controlling so-called soft robots, using magnetic fields to remotely manipulate microparticle chains embedded in soft robotic devices. The researchers have already created several devices that make use of the new technique.

“By putting these self-assembling chains into soft robots, we are able to have them perform more complex functions while still retaining relatively simple designs,” says Joe Tracy, an associate professor of materials science and engineering at North Carolina State University and corresponding author of a paper on the work. “Possible applications for these devices range from remotely triggered pumps for drug delivery to the development of remotely deployable structures.”

The new technique builds on previous work in the field of self-assembling, magnetically actuated composites by Tracy and Orlin Velev, the INVISTA Professor of Chemical and Biomolecular Engineering at NC State.

For this study, the researchers introduced iron microparticles into a liquid polymer mixture and then applied a magnetic field to induce the microparticles to form parallel chains. The mixture was then dried, leaving behind an elastic polymer thin film embedded with the aligned chains of magnetic particles.

“The chains allow us to manipulate the polymer remotely as a soft robot by controlling a magnetic field that affects the chains of magnetic particles,” Tracy says.

Specifically, the direction of the magnetic field and its strength can be varied. The chains of iron microparticles respond by aligning themselves and the surrounding polymer in the same direction as the applied magnetic field.

Using this technique, the researchers have created three kinds of soft robots. One device is a cantilever that can lift up to 50 times its own weight. The second device is an accordion-like structure that expands and contracts, mimicking the behavior of muscle. The third device is a tube that is designed to function as a peristaltic pump — a compressed section travels down the length of the tube, much like someone squeezing out the last bit of toothpaste by running their finger along the tube.

“We’re now working to improve both the control and the power of these devices, to advance the potential of soft robotics,” Tracy says.

The researchers have also developed a metric for assessing the performance of magnetic lifters, such as the cantilever device.

“We do this by measuring the amount of weight being lifted and taking into account both the mass of particles in the lifter and the strength of the magnetic field being applied,” says Ben Evans, co-author of the paper and an associate professor of physics at Elon University. “We think this is a useful tool for researchers in this area who want to find an empirical way to compare the performance of different devices.”
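
The exact formula is not reproduced in this article, but the description suggests a figure of merit along the lines of the hedged sketch below, in which the lifted weight is normalized by the particle mass and the applied field strength. The function name and the example numbers are purely illustrative assumptions, not values from the paper.

```python
def lifting_figure_of_merit(lifted_mass_g, particle_mass_g, field_mT):
    # Illustrative normalization only; the published metric may differ in form and units.
    return lifted_mass_g / (particle_mass_g * field_mT)

# Hypothetical example values, not measurements from the study:
print(lifting_figure_of_merit(lifted_mass_g=5.0, particle_mass_g=0.1, field_mT=50.0))  # -> 1.0
```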

Story Source:

Materials provided by North Carolina State University. Note: Content may be edited for style and length.

Psychologists enlist machine learning to help diagnose depression

Depression affects more than 15 million American adults, or about 6.7 percent of the U.S. population, each year. It is the leading cause of disability for those between the ages of 15 and 44.

Is it possible to detect who might be vulnerable to the illness before its onset using brain imaging?

David Schnyer, a cognitive neuroscientist and professor of psychology at The University of Texas at Austin, believes it may be. But identifying its tell-tale signs is no simpler matter. He is using the Stampede supercomputer at the Texas Advanced Computing Center (TACC) to train a machine learning algorithm that can identify commonalities among hundreds of patients using Magnetic Resonance Imaging (MRI) brain scans, genomics data and other relevant factors, to provide accurate predictions of risk for those with depression and anxiety.

Researchers have long studied mental disorders by examining the relationship between brain function and structure in neuroimaging data.

“One difficulty with that work is that it’s primarily descriptive. The brain networks may appear to differ between two groups, but it doesn’t tell us about what patterns actually predict which group you will fall into,” Schnyer says. “We’re looking for diagnostic measures that are predictive for outcomes like vulnerability to depression or dementia.”

In 2017, Schnyer, working with Peter Clasen (University of Washington School of Medicine), Christopher Gonzalez (University of California, San Diego) and Christopher Beevers (UT Austin), completed their analysis of a proof-of-concept study that used a machine learning approach to classify individuals with major depressive disorder with roughly 75 percent accuracy.

Machine learning is a subfield of computer science that involves the construction of algorithms that can “learn” by building a model from sample data inputs, and then make independent predictions on new data.

The type of machine learning that Schnyer and his team tested is called Support Vector Machine Learning. The researchers provided a set of training examples, each marked as belonging to either healthy individuals or those who have been diagnosed with depression. Schnyer and his team labelled features in their data that were meaningful, and these examples were used to train the system. A computer then scanned the data, found subtle connections between disparate parts, and built a model that assigns new examples to one category or the other.
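
As a rough illustration of this workflow (not the authors’ code), the sketch below trains a linear support vector machine on placeholder data standing in for per-voxel features, with feature selection and cross-validation handled inside a scikit-learn pipeline so that held-out subjects never leak into training.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_voxels = 100, 5000              # stand-ins for ~50 subjects per group and a reduced voxel set
X = rng.normal(size=(n_subjects, n_voxels))   # placeholder per-voxel features (e.g., fractional anisotropy)
y = np.repeat([0, 1], n_subjects // 2)        # 0 = healthy control, 1 = depressed

clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=500),   # keep only the most informative voxels
                    SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)            # accuracy on held-out subjects
print(f"cross-validated accuracy: {scores.mean():.2f}")
```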

In the study, Schnyer analyzed brain data from 52 treatment-seeking participants with depression and 45 healthy control participants. To compare the groups, they matched a subset of depressed participants with healthy individuals based on age and gender, bringing the sample size to 50.

Participants received diffusion tensor imaging (DTI) MRI scans, which tag water molecules to determine the extent to which those molecules are microscopically diffused in the brain over time. By measuring this diffusion in multiple spatial directions, vectors are generated for each voxel (three-dimensional cubes that represent either structure or neural activity throughout the brain) to quantify the dominant fiber orientation. These measurements are then translated into metrics that indicate the integrity of white matter pathways within the cerebral cortex.

One common parameter used to characterize DTI is fractional anisotropy: the extent to which diffusion is highly directional (high fractional anisotropy) or unrestricted (low fractional anisotropy).
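
For reference, fractional anisotropy has a standard definition in terms of the three eigenvalues of the diffusion tensor; the formula below is the conventional one from the DTI literature rather than anything specific to this study. Values near 1 indicate strongly directional diffusion along a fiber bundle, while values near 0 indicate diffusion that is roughly equal in all directions.

$$ \mathrm{FA} = \sqrt{\tfrac{3}{2}}\,\sqrt{\frac{(\lambda_1-\bar{\lambda})^2 + (\lambda_2-\bar{\lambda})^2 + (\lambda_3-\bar{\lambda})^2}{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}, \qquad \bar{\lambda} = \frac{\lambda_1+\lambda_2+\lambda_3}{3} $$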

They compared these fractional anisotropy measurements between the two groups and found statistically significant differences. They then reduced the number of voxels involved to a subset that was most relevant for classification and carried out the classification and prediction using the machine learning approach.

“We feed in whole brain data or a subset and predict disease classifications or any potential behavioral measure such as measures of negative information bias,” he says.

The study revealed that DTI-derived fractional anisotropy maps can accurately classify depressed or vulnerable individuals versus healthy controls. It also showed that predictive information is distributed across brain networks rather than being highly localized.

“Not only are we learning that we can classify depressed versus non-depressed people using DTI data, we are also learning something about how depression is represented within the brain,” said Beevers, a professor of psychology and director of the Institute for Mental Health Research at UT Austin. “Rather than trying to find the area that is disrupted in depression, we are learning that alterations across a number of networks contribute to the classification of depression.”

The scale and complexity of the problem necessitates a machine learning approach. Each brain is represented by roughly 175,000 voxels, and detecting complex relationships among such a large number of components by looking at the scans is practically impossible. For that reason, the team uses machine learning to automate the discovery process.

“This is the wave of the future,” Schnyer says. “We’re seeing increasing numbers of articles and presentations at conferences on the application of machine learning to solve difficult problems in neuroscience.”

The results are promising, but not yet clear-cut enough to be used as a clinical metric. However, Schnyer believes that by adding more data — related not only to MRI scans but also from genomics and other classifiers — the system can do much better.

“One of the benefits of machine learning, compared to more traditional approaches, is that machine learning should increase the likelihood that what we observe in our study will apply to new and independent datasets. That is, it should generalize to new data,” Beevers said. “This is a critical question that we are really excited to test in future studies.”

Beevers and Schnyer will expand their study to include data from several hundred volunteers from the Austin community who have been diagnosed with depression, anxiety or a related condition. Stampede 2 — TACC’s newest supercomputer which will come online later in 2017 and will be twice as powerful as the current system — will provide the increased computer processing power required to incorporate more data and achieve greater accuracy.

“This approach, and also the movement towards open science and large databases like the human connectome project, mean that facilities like TACC are absolutely essential,” Schnyer says. “You just can’t do this work on desktops. It’s going to become more and more important to have an established relationship with an advanced computing center.”


Can robots write meaningful news?

Robots and computers are replacing people everywhere: doctors, pilots, even journalists. Is this leading to a dystopian society, or could it be something positive?

With this in mind, researchers from the Media Management and Transformation Centre (MMTC) at Jönköping International Business School, Jönköping University have launched the project DPer News (Digital Personalization of the News), with support from the Knowledge Foundation.

Digitalization is the integration of digital technologies into everyday life, but it is also the process of moving to a digital business. The media industry, and news in particular, is the starting point for the DPer News project, but virtually all industries are facing digitalization.

“The angle of digitalization is very much in demand today, and companies are eager to get help to transform,” says project director Mart Ots. “The general question is how can algorithms replace humans in repetitive professions? Journalism may not seem like a repetitive job, but when it comes to writing about finance and sports, it very well can be.”

In some areas, robots can be used to assist journalists by finding and analyzing data, but the journalist still writes the story. In other cases, robots could do the actual writing. The DPer News project wants to find creative methods for robotization that can help the news industry create more interesting news.

“DPer News is about how we can make news stories that are not just cheap and convenient, but more meaningful and personal. It worries me that just because we can get robots to mine and condense data, that’s all we’ll do,” says Professor Daved Barry. “Robots can target you and quickly give you the content you want, like the latest sports scores. But what about giving you content that would surprise you, that would help you think in out-of-the-box ways?”

Journalism, at its heart, is a very human enterprise. Can it be done by robots? Representing different views on this, the project involves experts on data mining, innovation and creativity, the news company Hallpressen, and computer consultants Infomaker and PDB. It connects with two other research projects at MMTC: DATAMINE, which has received regional funding, and the Digital Business Innovation Studio, which has received a grant from Vinnova.

The project team consists of Daved Barry, Karl Hammar, Anette Johansson, Ulf Johansson, Tuwe Löfström, Henry Lopez Vega, Mart Ots, Andrea Resmini, Ulf Seigerroth, and Håkan Sundell.

Story Source:

Materials provided by Jönköping University. Note: Content may be edited for style and length.


Playing to beat the blues: Video games viable treatment for depression

Video games and “brain training” applications are increasingly touted as an effective treatment for depression. A new UC Davis study carries this a step further, finding that when video game users were sent message reminders, they played the game more often and in some cases increased the time spent playing.

“Through the use of carefully designed persuasive message prompts … mental health video games can be perceived and used as a more viable and less attrition-ridden treatment option,” according to the study.

The paper, authored by Subuhi Khan and Jorge Pena, professors in the Department of Communication at UC Davis, is forthcoming in Computers in Human Behavior.

The messages, and subsequent games assigned, targeted depression that could be perceived as either internal — caused by a chemical imbalance or hereditary factor — or depression that could come from outside factors, such as a job or relationship situation. The messaging had slight differences in approach but ended on basic motivational notes meant to inspire the participant to play the game. Each message ended with: “Just like a regular workout, much of the benefit of these tasks comes from using them without taking breaks and putting in your best effort.”

Using six three-minute games, the study found in most cases that playing the specifically designed game helped subjects feel they had some control over their depression. Each game was an adaptation of neurophysiological training tasks that have been shown to improve cognitive control among people experiencing depression.

Portraying depression as something caused internally because of biological factors and providing a video game-based app for brain training made participants feel that they could do something to control their depression. This supports other research that shows that brain-training games have the potential to induce cognitive changes, the authors said. Those users also gave high ratings for the usability of the app.

On the other hand, portraying depression as a condition caused by external factors led users to spend more time playing the game — again, perhaps giving them a feeling of control over their situation. But researchers said this result was likely due to immediate engagement and was unlikely to have long-term benefits.

The study did not examine whether playing the games actually reduced depression, although that will be looked at in future studies, the authors said.

The study looked at results from 160 student volunteers who said they suffered from mild depression. They received class credit for participating. Three-fourths were women, and more than half of the subjects were of Asian heritage, followed by white, Latino, and other ethnicities. The average age was 21.

Story Source:

Materials provided by University of California – Davis. Note: Content may be edited for style and length.


Piece of mind: Engineers can take pictures of the brain with surgical needle and laser light

With just an inexpensive micro-thin surgical needle and laser light, University of Utah engineers have discovered a minimally invasive, inexpensive way to take high-resolution pictures of an animal brain, a process that also could lead to a much less invasive method for humans.

A team led by University of Utah electrical and computer engineering associate professor Rajesh Menon has now proven the process works on mice for the benefit of medical researchers studying neurological disorders such as depression, obsessive-compulsive disorder and aggression. Menon and his team have been working with the U. of U.’s renowned Nobel-winning researcher, Distinguished Professor of Biology and Human Genetics Mario Capecchi, and Jason Shepherd, assistant professor of neurobiology and anatomy.

The group has documented its process in a paper titled, “Deep-brain imaging via epifluorescence Computational Cannula Microscopy,” in the latest issue of Scientific Reports. The paper’s lead author is doctoral student Ganghun Kim.

The process, called “computational cannula microscopy,” involves taking a needle about a quarter-millimeter in diameter and inserting it into the brain. Laser light shines through the needle and into the brain, illuminating certain cells “like a flashlight,” Menon says. In the case of mice, researchers genetically modify the animals so that only the cells they want to see glow under this laser light.

The light from the glowing cells then is captured by the needle and recorded by a standard camera. The captured light is run through a sophisticated algorithm developed by Menon and his team, which assembles the scattered light waves into a 2D or, potentially, even a 3D picture.
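
The article does not spell out the algorithm, but one way to picture this class of reconstruction is as a linear inverse problem: if the way the needle scrambles light can be calibrated as a matrix, the underlying image can be estimated from the camera measurement by regularized least squares. The toy example below illustrates only that general idea and is not the group’s actual method; all sizes and noise levels are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_scene, n_sensor = 256, 1024                        # unknown image pixels, camera measurements
A = rng.normal(size=(n_sensor, n_scene))             # calibrated scrambling operator (assumed known)
x_true = np.zeros(n_scene)
x_true[40:60] = 1.0                                  # a toy fluorescent feature in the scene
y = A @ x_true + 0.05 * rng.normal(size=n_sensor)    # noisy intensity pattern recorded by the camera

lam = 1.0                                            # Tikhonov regularization weight
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ y)
print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```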

Typically, researchers must surgically take a sample of the animal’s brain to examine the cells under a microscope, or they use an endoscope that can be anywhere from 10 to 100 times thicker than a needle.

“That’s very damaging,” Menon says of previous methods of examining the brain. “What we have done is to take a surgical needle that’s really tiny and easily put it into the brain as deep as we want and see very clear high-resolution images. This technique is particularly useful for looking deep inside the brain where other techniques fail.”

Now that the process has been proven to work in animals, Menon believes it can potentially be developed for human patients, creating a simpler, less expensive and invasive method than endoscopes.

“Although it’s much more complex from a regulatory standpoint, it can be done in humans, and not just in the brain, but for other organs as well,” he says. “But our motivation for this project right now is to look inside the brain of the mouse and further develop the technique to understand fundamental neuroscience in the mouse brain.”

Story Source:

Materials provided by University of Utah. Note: Content may be edited for style and length.