
How Computationally Complex Are Biological Neurons?


Scientists have trained an artificial neural network to mimic biological neurons, and their results offer a new way of thinking about the complexity of brain cells.

Our soft brains look very different from the silicon chips in a computer processor, but scientists have a long history of comparing the two. As Alan Turing observed in 1952, the jelly-like consistency of the brain is beside the point; what matters is its computational power.

Today, the most powerful artificial intelligence systems use a type of machine learning called deep learning. Their algorithms learn tasks by processing huge amounts of data through many hidden layers of interconnected nodes, known as deep neural networks.

As the name implies, deep neural networks were inspired by the real neural networks in the brain, and their nodes are modelled on real neurons, or at least on what neuroscientists knew about neurons in the 1950s, when an influential neuron model called the perceptron was born.
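To make the analogy concrete, the sketch below shows a perceptron-style artificial node in Python; the input values, weights, and bias are invented for illustration and are not taken from the study. The unit multiplies each input by a weight, sums the results, and fires only if the total crosses a threshold.

```python
import numpy as np

def perceptron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of its inputs plus a bias,
    followed by a hard threshold that decides whether it fires (1) or not (0)."""
    activation = np.dot(weights, inputs) + bias
    return 1 if activation > 0 else 0

# Illustrative values only: three input signals with hand-picked weights.
inputs = np.array([0.9, 0.1, 0.4])
weights = np.array([0.5, -0.8, 0.3])
print(perceptron(inputs, weights, bias=-0.2))  # prints 1, since the weighted sum exceeds the threshold
```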

Since then, our understanding of the computational complexity of individual neurons has grown dramatically. Biological neurons have turned out to be much more complex than artificial ones; but how much more?

To find out, David Beniaguev, Idan Segev, and Michael London of the Hebrew University of Jerusalem trained a deep neural network to mimic the computations of a simulated biological neuron. They showed that the network needed five to eight layers of interconnected artificial neurons to represent the complexity of a single biological neuron.

Even the authors themselves did not anticipate such complexity.

“I thought it was simpler and smaller,” says Beniaguev. He had expected that three or four layers would be enough to capture the computations performed inside the cell.

“We may need to reconsider the old tradition of loosely comparing a neuron in the brain to a neuron in machine learning,” says Timothy Lillicrap, who designs decision-making algorithms at Google’s DeepMind.

The most basic analogy between artificial and biological neurons involves how they handle incoming information. Both kinds of neuron receive input signals and, based on that information, decide whether to send a signal of their own on to other neurons. Artificial neurons make this decision with a single simple calculation, but decades of research have shown that the process is far more complex in biological neurons.

Computational neuroscientists use an input-output function to model the relationship between the inputs a neuron receives on its long, tree-like branches, called dendrites, and the neuron’s decision to send out a signal.

This input-output function is what the new study’s authors taught an artificial deep neural network to imitate in order to determine the complexity of a biological neuron.
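As a very rough illustration of what an input-output function means here, the toy model below maps a stream of input pulses to the times at which a cell spikes. It is a basic leaky integrate-and-fire neuron with arbitrary constants, chosen only to make the idea concrete; the biophysical model used in the study is vastly more detailed.

```python
import numpy as np

def leaky_integrate_and_fire(input_current, dt=1.0, tau=20.0, threshold=1.0):
    """Toy input-output function: leakily integrate the input, emit a spike
    whenever the membrane voltage crosses the threshold, then reset to zero."""
    v, spike_times = 0.0, []
    for step, current in enumerate(input_current):
        v += dt * (-v / tau + current)   # leaky integration of the input
        if v >= threshold:               # threshold crossing -> output spike
            spike_times.append(step * dt)
            v = 0.0                      # reset after spiking
    return spike_times

rng = np.random.default_rng(0)
inputs = rng.random(200) * 0.1           # a random stream of small input pulses
print(leaky_integrate_and_fire(inputs))  # times (in ms) at which the toy neuron spiked
```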

The researchers began by building a massive simulation of the input-output function of a type of neuron with distinct dendritic trees at its top and bottom, known as a pyramidal neuron, taken from a rat’s cortex.

They then fed the simulation’s data into a deep neural network with up to 256 artificial neurons in each layer, adding layers until the network could predict the simulated neuron’s output from its inputs with 99 percent accuracy at millisecond resolution.
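Conceptually, this is a supervised-learning problem: the simulated neuron supplies input-output pairs, and a network of a chosen depth is trained to reproduce them. The sketch below is a simplified illustration of that idea only; the layer width, the depth, and the synthetic data are placeholders, and the study itself used temporally convolutional networks trained on simulated streams of dendritic input, which is considerably more involved.

```python
import torch
import torch.nn as nn

def make_mimic_net(n_inputs: int, depth: int, width: int = 256) -> nn.Sequential:
    """Build a fully connected network of the given depth that maps a snapshot
    of presynaptic input activity to the probability that the neuron spikes."""
    layers, in_features = [], n_inputs
    for _ in range(depth):
        layers += [nn.Linear(in_features, width), nn.ReLU()]
        in_features = width
    layers += [nn.Linear(in_features, 1), nn.Sigmoid()]  # spike probability
    return nn.Sequential(*layers)

# Placeholder data standing in for the simulated neuron's input-output pairs.
inputs = torch.rand(1024, 128)                               # 1024 samples, 128 synaptic channels
targets = (inputs.mean(dim=1, keepdim=True) > 0.5).float()   # toy stand-in for "did it spike?"

net = make_mimic_net(n_inputs=128, depth=5)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for epoch in range(200):          # train the mimic; deeper nets can be swapped in via `depth`
    optimizer.zero_grad()
    loss = loss_fn(net(inputs), targets)
    loss.backward()
    optimizer.step()
```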

In the end, the deep neural network could successfully predict the behaviour of the input-output function only with at least five, but no more than eight, layers of artificial neurons, roughly the equivalent of 1,000 artificial neurons for a single biological neuron.

Stimulation of neurons

The computational complexity of a single neuron, such as the pyramidal neuron on the left, arises from its dendritic branches, which are bombarded with incoming signals. These produce local changes in voltage, indicated by the neuron’s changing colours (red means high voltage, blue means low voltage), before the neuron decides whether to send its own signal, called a spike. Here it spikes three times, as shown by the traces of individual branches on the right, where the colours indicate the dendrites’ locations from top (red) to bottom (blue).

According to Andreas Tolias, a computational neuroscientist at Baylor College of Medicine, the new study’s results establish a useful link between biological and artificial neurons. Still, the study’s authors caution that the correspondence is not yet a straightforward one.

“The relationship between how many layers you have in a neural network and the complexity of the network is not obvious,” says London. So we really cannot say, for example, how much complexity is gained by going from four layers to five.

Nor can we say that needing 1,000 artificial neurons means a biological neuron is exactly 1,000 times more complex.

Using more artificial neurons in each layer might eventually allow even a single-layer network to capture the cell, but it would probably take far more data and time for the algorithm to learn. “We tried many architectures with many depths and many variations, and mostly we failed,” says London.

The authors have shared their code to encourage other researchers to find cleverer solutions with fewer layers. Still, given how difficult it was to find a deep network that mimics the neuron with 99 percent accuracy, they are confident that their result provides a meaningful point of comparison for further research.

The results may offer a new way of relating image-classifying neural networks to the brain, says Lillicrap.

Such networks often require more than 50 layers. If each biological neuron is like a five-layer artificial network, then perhaps a 50-layer image-classification network is equivalent to just 10 real neurons in a biological circuit.

The authors also hope that their results will change the way state-of-the-art deep networks in artificial intelligence are designed. “We want to replace the current deep-network technology with something closer to the brain,” says Segev.

They suggest replacing each simple unit in today’s deep networks with a unit that represents a neuron in its own right. In this alternative scenario, AI researchers and engineers could plug in a five-layer deep network as a “mini network” in place of each artificial neuron.
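One way to picture this proposal, purely as an illustration with invented class names, widths, and depths rather than the authors’ actual design, is a layer in which every unit is itself a small deep network mapping the layer’s inputs to a single output:

```python
import torch
import torch.nn as nn

class NeuronAsMiniNet(nn.Module):
    """One 'neuron' realised as a small deep network: it maps the full input
    vector of its layer to a single scalar, instead of computing one weighted
    sum followed by a nonlinearity."""
    def __init__(self, n_inputs: int, hidden: int = 16, depth: int = 5):
        super().__init__()
        layers, in_features = [], n_inputs
        for _ in range(depth - 1):
            layers += [nn.Linear(in_features, hidden), nn.ReLU()]
            in_features = hidden
        layers.append(nn.Linear(in_features, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class MiniNetLayer(nn.Module):
    """A layer whose units are mini networks rather than simple weighted sums."""
    def __init__(self, n_inputs: int, n_units: int):
        super().__init__()
        self.units = nn.ModuleList(NeuronAsMiniNet(n_inputs) for _ in range(n_units))

    def forward(self, x):
        return torch.cat([unit(x) for unit in self.units], dim=-1)

layer = MiniNetLayer(n_inputs=32, n_units=8)
print(layer(torch.rand(4, 32)).shape)  # torch.Size([4, 8])
```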

But some researchers question whether this will benefit AI.

Anthony Zador, a neuroscientist at the Cold Spring Harbor Laboratory in the United States, says this remains an open question and one that fundamentally new research will have to test.

Apart from its implications for artificial intelligence, the new paper also adds to the growing consensus among scientists about the computational power of dendritic trees and, by extension, of individual neurons.

In 2003, three neuroscientists showed that the dendritic trees of a pyramidal neuron perform complex computations by modelling it as a two-layer artificial neural network.

In the new paper, the authors examined which features of the pyramidal neuron gave rise to the much greater complexity of their five-to-eight-layer deep neural networks. They concluded that it came from the dendritic trees and from specific receptors on the dendrites’ surfaces that receive chemical messengers.

These findings were consistent with past work in this area.

Some believe that the new results mean neuroscientists should give the study of single neurons a higher priority. “This article makes thinking about individual dendrites and neurons much more important than before,” says Konrad Kording, a computational neuroscientist at the University of Pennsylvania.

Others, such as Lillicrap and Zador, suggest that focusing on neurons within a circuit is essential for learning how the brain makes use of the computational complexity of single neurons.

However, the language of neural networks may provide new insight into the power of neurons and, ultimately, of the brain. “Thinking in terms of layers and depth and breadth gives us an intuitive sense of computational complexity,” says Grace Lindsay, a computational neuroscientist at University College London.

Lindsay also warns that the new work still compares one model to another.

Unfortunately, it is currently impossible for neuroscientists to record the complete input-output function of a real neuron, so there is likely more going on in biological neurons that we do not yet know about.

In other words, real neurons may be even more complex. “We’re not sure that five to eight is really the final number,” says London.

 
