
New Research Suggests Artificial Brains Could Benefit From Sleep
New research from Los Alamos National Laboratory suggests that artificial brains, much like living brains, benefit from periods of rest.
The research will be presented at the Women in Computer Vision Workshop in Seattle on June 14. 
Yijing Watkins is a Los Alamos National Laboratory computer scientist. 
“We study spiking neural networks, which are systems that learn much as living brains do,” said Watkins. “We were fascinated by the prospect of training a neuromorphic processor in a manner analogous to how humans and other biological systems learn from their environment during childhood development.”
Solving Instability in Network Simulations
Watkins and the team found that continuous periods of unsupervised learning led to instability in the network simulations. However, once the team exposed the networks to states analogous to the waves living brains experience during sleep, stability was restored.
“It was as though we were giving the neural networks the equivalent of a good night’s rest,” said Watkins.
The team made the discovery while developing neural networks based on how humans and other biological systems learn to see. They faced challenges in stabilizing simulated neural networks undergoing unsupervised dictionary training, which involves classifying objects without previous examples to use for comparison.
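Unsupervised dictionary training is essentially sparse coding: each input is approximated as a sparse combination of learned dictionary elements, with no labels involved. As a rough illustration only, the sketch below shows conventional (non-spiking) dictionary learning with scikit-learn; the random "patches," dictionary size, and parameters are placeholders, and this does not reproduce the team's spike-based model.

```python
# Minimal sketch of unsupervised dictionary learning (sparse coding) on
# image-patch-like data. Each patch is approximated as a sparse combination
# of learned dictionary atoms, with no class labels (unsupervised).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
patches = rng.standard_normal((1000, 64))        # stand-in for 8x8 image patches
patches -= patches.mean(axis=1, keepdims=True)   # remove per-patch mean

dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                   max_iter=50, random_state=0)
codes = dico.fit_transform(patches)              # sparse coefficients per patch
print(codes.shape, dico.components_.shape)       # (1000, 32), (32, 64)
```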
Garrett Kenyon is a computer scientist at Los Alamos and study coauthor.
“The issue of how to keep learning systems from becoming unstable really only arises when attempting to utilize biologically realistic, spiking neuromorphic processors or when trying to understand biology itself,” said Kenyon. “The vast majority of machine learning, deep learning, and AI researchers never encounter this issue because in the very artificial systems they study they have the luxury of performing global mathematical operations that have the effect of regulating the overall dynamical gain of the system.”
Sleep as a Last Resort Solution
According to the researchers, exposing the networks to an artificial analog of sleep was their last resort for stabilizing them. After experimenting with different types of noise, comparable to the static between stations on a radio, the best results came from waves of Gaussian noise, which spans a wide range of frequencies and amplitudes.
The researchers hypothesized that this noise mimics the input biological neurons receive during slow-wave sleep. The results suggest that slow-wave sleep could play a role in ensuring that cortical neurons remain stable and do not hallucinate.
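The paper's title refers to sinusoidally-modulated noise as the surrogate for slow-wave sleep. The sketch below generates Gaussian noise whose amplitude follows a slow sine wave; the frequency, amplitude, and injection details are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch: Gaussian noise amplitude-modulated by a slow sinusoid, as a rough
# stand-in for slow-wave-sleep-like input. All parameters are assumptions.
import numpy as np

def slow_wave_noise(n_steps, dt=1e-3, f_slow=1.0, sigma=0.5, seed=0):
    """Gaussian noise modulated by a slow (~1 Hz, delta-band) sine wave."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_steps) * dt
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * f_slow * t))  # in [0, 1]
    return envelope * rng.normal(0.0, sigma, size=n_steps)

noise = slow_wave_noise(5000)
# During a "sleep" phase, a signal like this would be fed to each neuron's
# input in place of (or on top of) real sensory input.
```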
The team will now work on implementing the algorithm on Intel’s Loihi neuromorphic chip, in the hope that sleep will help it stably process information from a silicon retina camera in real time. If the research confirms that artificial brains benefit from sleep, the same is likely true for androids and other intelligent machines.
Source: Using Sinusoidally-Modulated Noise as a Surrogate for Slow-Wave Sleep to Accomplish Stable Unsupervised Dictionary Learning in a Spike-Based Sparse Coding Model, CVPR Women in Computer Vision Workshop, 2020-06-14 (Seattle, Washington, United States)
 
Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.
AI Model Used To Map Dryness Of Forests, Predict Wildfires
Daniel Nelson
A new deep learning model designed by researchers from Stanford University leverages moisture levels across 12 western states to help predict wildfires and give fire management teams a head start on potentially destructive blazes.
Fire management teams aim to predict where the worst blazes might occur so that preventative measures like prescribed burns can be carried out. Predicting points of origin and spreading patterns for wildfires requires information about fuel amounts and moisture levels in the target region. Collecting and analyzing this data at the speed required to be useful to wildfire management teams is difficult, but deep learning models could help automate these critical processes.
As Futurity recently reported, researchers from Stanford University collected climate data and designed a model intended to render detailed maps of moisture levels across 12 western states, including the Pacific Coast states, Texas, Wyoming, Montana, and the Southwest. According to the researchers, although the model is still being refined, it is already capable of revealing areas at high risk of forest fires where the landscape is unusually dry.
The typical method of collecting data on fuel and moisture levels for a target region is to painstakingly compare dried-out vegetation with moister vegetation. Specifically, researchers collect vegetation samples from trees and weigh them. Afterwards, the samples are dried out and reweighed, and the weights of the dry and wet samples are compared to determine the amount of moisture in the vegetation. This process is long and complex, and it is only viable in certain areas and for some species of vegetation. However, the data collected from decades of this process has been used to create the National Fuel Moisture Database, comprising over 200,000 records. The fuel-moisture content of a region is well known to be linked to wildfire risk, though it is still unknown just how large a role it plays across ecosystems and from one plant to another.
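In code, the weigh/dry/reweigh comparison reduces to the standard fuel-moisture formula, water weight relative to dry weight; the sample values below are made up for illustration.

```python
# Fuel moisture content implied by the weigh/dry/reweigh procedure:
# water weight relative to dry weight, expressed as a percentage.
def fuel_moisture_percent(wet_g: float, dry_g: float) -> float:
    """Moisture content = 100 * (wet - dry) / dry."""
    return 100.0 * (wet_g - dry_g) / dry_g

# Example: a 120 g fresh sample that weighs 80 g after drying
print(fuel_moisture_percent(120.0, 80.0))  # 50.0 (percent)
```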
Krishna Rao, a PhD student in earth systems science at Stanford and lead author of the new study, explained to Futurity that machine learning lets researchers test assumptions about the links between live fuel moisture and weather in different ecosystems. Rao and colleagues trained a recurrent neural network model on data from the National Fuel Moisture Database. The model was then tested by estimating fuel moisture levels from measurements collected by spaceborne sensors. The data included signals from synthetic aperture radar (SAR), microwave radar signals that penetrate to the surface, as well as visible light bouncing off the planet’s surface. The training and validation data consisted of three years of measurements, beginning in 2015, for approximately 240 sites across the western US.
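As an illustration of the kind of model described, here is a minimal recurrent-network sketch that maps a site's satellite time series to a moisture estimate; the feature count, window length, and architecture details are assumptions rather than the study's actual setup.

```python
# Minimal sketch: a recurrent model mapping a site's satellite time series
# (e.g., SAR backscatter plus visible reflectance bands) to a fuel moisture
# estimate. Sizes and features are illustrative placeholders.
import torch
import torch.nn as nn

class MoistureLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # moisture estimate at final step

model = MoistureLSTM()
x = torch.randn(4, 24, 8)                  # 4 sites, 24 time steps, 8 bands
print(model(x).shape)                       # torch.Size([4, 1])
```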
The researchers ran analyses on various types of land cover, including sparse vegetation, grasslands, shrublands, needleleaf evergreen forests, and broadleaf deciduous forests. The model’s predictions were most accurate, most reliably matching the National Fuel Moisture Database measurements, on shrubland regions. This is fortunate, as shrublands comprise approximately 45% of the ecosystems in the US west. Shrublands, particularly chaparral, are often uniquely susceptible to fire, as seen in many of the fires that burned across California in recent years.
The predictions generated by the model have been used to create an interactive map that fire management agencies could one day use to prioritize regions for fire control and discern other relevant patterns. The researchers believe the model could improve further with additional training and refinement.
As Alexandra Konings, assistant professor of earth systems science at Stanford, explained to Futurity:
“Creating these maps was the first step in understanding how this new fuel moisture data might affect fire risk and predictions. Now we’re trying to really pin down the best ways to use it for improved fire prediction.”
Researchers Develop Method for Artificial Neuronal Networks to Communicate with Biological Ones
Alex McFarland
A group of researchers has developed a way for artificial neuronal networks to communicate with biological neuronal networks. The new development is a big step forward for neuroprosthetic devices, which replace damaged neurons with artificial neuronal circuitry. 
The new method relies on converting artificial electrical spiking signals into a visual pattern, which is then used, via optogenetic stimulation, to entrain the biological neurons.
The article, titled “Toward neuroprosthetic real-time communication from in silico to biological neuronal network via patterned optogenetic stimulation,” was published in Scientific Reports.
Neuroprosthetic Technology
An international team led by Ikerbasque researcher Paolo Bonifazi of the Biocruces Health Research Institute in Bilbao, Spain, set out to create neuroprosthetic technology. He was joined by Timothée Levi of the Institute of Industrial Science at The University of Tokyo.
One of the biggest challenges for this technology is that neurons in the brain communicate with extreme precision, whereas the electrical output of an artificial neural network cannot target specific neurons.
To get around this, the team of researchers converted the electrical signals to light. 
According to Levi, “advances in optogenetic technology allowed us to precisely target neurons in a very small area of our biological neuronal network.”
Optogenetics
Optogenetics is a technology that relies on light-sensitive proteins found in algae and other organisms. When these proteins are inserted into neurons, shining light on a neuron can make it active or inactive, depending on the type of protein.
In this project, the researchers used proteins that are activated by blue light. The first step was to convert the electrical output of the spiking neuronal network into a checkered pattern of blue and black squares. This pattern was then projected onto a 0.8 by 0.8 mm square of the biological neural network, which was growing in a dish. Only the neurons hit by the light from the blue squares were activated.
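A minimal sketch of that spikes-to-pattern step might look like the following; the grid size and the neuron-to-square mapping are illustrative assumptions.

```python
# Sketch of the spikes-to-light conversion described above: the spiking
# output of an artificial network is rendered as a grid of "blue" (on) and
# "black" (off) squares for a projector to shine onto the culture.
import numpy as np

def spikes_to_pattern(spikes, grid=(8, 8)):
    """Map a binary spike vector (one entry per artificial neuron) to a
    2D on/off frame: 1 = blue square (stimulate), 0 = black (no light)."""
    spikes = np.asarray(spikes, dtype=np.uint8)
    assert spikes.size == grid[0] * grid[1]
    return spikes.reshape(grid)

frame = spikes_to_pattern(np.random.default_rng(0).integers(0, 2, 64))
print(frame)  # each 1 would be projected as a blue square onto the dish
```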
Cultured neurons spontaneously produce synchronized activity, resulting in a rhythm that depends on how the neurons are connected, the types of neurons present, and their ability to adapt and change.
“The key to our success,” says Levi, “was understanding that the rhythms of the artificial neurons had to match those of the real neurons. Once we were able to do this, the biological network was able to respond to the ‘melodies’ sent by the artificial one. Preliminary results obtained during the European Brainbow project helped us design these biomimetic artificial neurons.”
By tuning the artificial neural network to different rhythms and tracking changes in the global rhythms of the biological network, the researchers eventually found the best match.
“Incorporating optogenetics into the system is an advance towards practicality,” says Levi. “It will allow future biomimetic devices to communicate with specific types of neurons or within specific neuronal circuits.”
Future prosthetic devices developed with the system could replace damaged brain circuits and restore communication between different regions of the brain, potentially leading to a powerful new generation of neuroprostheses.
 
Engineers Develop Energy-Efficient “Early Bird” Method to Train Deep Neural Networks
Alex McFarland
Engineers at Rice University have developed a new method for training deep neural networks (DNNs) with a fraction of the energy normally required. DNNs are the form of artificial intelligence (AI) that plays a key role in the development of technologies such as self-driving cars, intelligent assistants, facial recognition, and other applications.
Early Bird was detailed in an April 29 paper by researchers from Rice and Texas A&M University, presented at the International Conference on Learning Representations (ICLR 2020).
The study’s lead authors were Haoran You and Chaojian Li of Rice’s Efficient and Intelligent Computing (EIC) Lab. They demonstrated that the method could train a DNN to the same level of accuracy as today’s methods while using 10.7 times less energy.
The research was led by EIC Lab director Yingyan Lin, Rice’s Richard Baraniuk, and Texas A&M’s Zhangyang Wang. Other co-authors include Pengfei Xu, Yonggan Fu, Yue Wang, and Xiaohan Chen. 
“A major driving force in recent AI breakthroughs is the introduction of bigger, more expensive DNNs,” Lin said. “But training these DNNs demands considerable energy. For more innovations to be unveiled, it is imperative to find ‘greener’ training methods that both address environmental concerns and reduce financial barriers of AI research.”
Expensive to Train DNNs
It can be very expensive to train the world’s best DNNs, and the price tag continues to rise. In 2019, a study led by the Allen Institute for AI in Seattle found that the computation needed to train a top-flight deep neural network increased 300,000-fold between 2012 and 2018. Another 2019 study, led by researchers at the University of Massachusetts Amherst, found that training a single elite DNN can release roughly as much carbon dioxide as five U.S. automobiles emit over their lifetimes.
To perform their highly specialized tasks, DNNs consist of millions or more artificial neurons. By observing large numbers of examples, they learn to make decisions, sometimes outperforming humans, without needing explicit programming.
Prune and Train
Lin is an assistant professor of electrical and computer engineering in Rice’s Brown School of Engineering. 
“The state-of-art way to perform DNN training is called progressive prune and train,” Lin said. “First, you train a dense, giant network, then remove parts that don’t look important — like pruning a tree. Then you retrain the pruned network to restore performance because performance degrades after pruning. And in practice you need to prune and retrain many times to get good performance.”
This method works because not all of the artificial neurons are needed to complete the specialized task. Training strengthens the important connections between neurons, while the rest can be discarded. Pruning cuts computational costs and reduces model size, making fully trained DNNs more affordable.
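As a rough sketch of the prune-and-retrain loop Lin describes, the code below uses PyTorch's built-in magnitude pruning; the model, training loop, pruning fraction, and number of rounds are placeholders, not the paper's configuration.

```python
# Sketch of progressive prune-and-train: costly dense training, then
# repeated rounds of magnitude pruning plus retraining. Early Bird's point
# is to avoid most of the expensive dense step that precedes this loop.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

def train(model, epochs):          # placeholder for a real training loop
    pass

train(model, epochs=50)            # 1) expensive dense training
for round_ in range(3):            # 2) repeated prune + retrain rounds
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.2)
    train(model, epochs=10)        # retrain to recover lost accuracy
```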
“The first step, training the dense, giant network, is the most expensive,” Lin said. “Our idea in this work is to identify the final, fully functional pruned network, which we call the ‘early-bird ticket,’ in the beginning stage of this costly first step.”
The researchers discovered these early-bird tickets by looking for key network connectivity patterns that emerge early in training, which allowed them to speed up DNN training.
Early Bird in the Beginning Phase of Training
Lin and the other researchers found that early-bird tickets could appear one-tenth or less of the way into training.
“Our method can automatically identify early-bird tickets within the first 10% or less of the training of the dense, giant networks,” Lin said. “This means you can train a DNN to achieve the same or even better accuracy for a given task in about 10% or less of the time needed for traditional training, which can lead to more than one order savings in both computation and energy.”
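One plausible way to operationalize "identifying the early-bird ticket" is to draw a pruning mask at each epoch and stop once consecutive masks stop changing; the mask construction and threshold below are simplified assumptions, not the paper's exact criterion.

```python
# Sketch: detect an early-bird ticket by measuring how much the pruning
# mask changes between epochs; a stable mask suggests the ticket has emerged.
import torch

def pruning_mask(weights: torch.Tensor, keep_ratio: float = 0.5):
    """Binary mask keeping the largest-magnitude fraction of weights."""
    k = int(weights.numel() * keep_ratio)
    threshold = weights.abs().flatten().kthvalue(weights.numel() - k).values
    return (weights.abs() > threshold).float()

def mask_distance(m1, m2):
    """Normalized Hamming distance between two binary masks."""
    return (m1 != m2).float().mean().item()

# During training, one would check each epoch:
#   ticket_found = mask_distance(prev_mask, current_mask) < 0.1
```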
Beyond making training faster and more energy-efficient, the researchers have a strong focus on its environmental impact.
“Our goal is to make AI both more environmentally friendly and more inclusive,” she said. “The sheer size of complex AI problems has kept out smaller players. Green AI can open the door enabling researchers with a laptop or limited computational resources to explore AI innovations.”
The research received support from the National Science Foundation. 
 
