
How Similar Is The Learning Pattern Of Artificial Intelligence To The Human Brain?

Over the past decade, significant progress has been made in artificial intelligence: systems trained on huge databases of labeled data have become remarkably capable.

Today, artificial neural networks can be trained to correctly tell the difference between a cat and a tiger in an image, and to distinguish a photo of a leopard from similar images.

While this strategy has produced brilliant achievements, it also comes with problems and inefficiencies.

Supervised learning requires data labeled by humans, and neural networks often take shortcuts, learning to associate labels with minimal and sometimes superficial features of an image. For example, a neural network might use grass to identify a photo of a cow, since cows are usually photographed in fields.

Alexei Efros, an artificial intelligence researcher at the University of California, Berkeley, points out that we are building a generation of algorithms that behave like students who skipped class all semester: faced with a barrage of information on exam night, they care only about memorizing it.

In such a case, the student does not truly learn the material but still performs well on the exam.

This gap concerns researchers interested in the commonality between biological and machine intelligence, because supervised learning may be limited compared with what biological brains achieve. Animals and humans do not learn from labeled data sets.

Most of the time, they explore their surroundings on their own, and in doing so they gain a robust and accurate understanding of the world. Some computational neuroscientists have begun to explore neural networks trained with little or no human-labeled data, known as self-supervised learning algorithms.

Neuroscientists point out that these artificial networks have offered interesting clues about how the human brain learns. Self-supervised learning algorithms perform well at modeling human language and at image recognition. Computational models built on them have been developed to closely resemble how the visual and auditory systems of organisms, including the human brain, learn.

Incomplete supervision

Self-supervised learning allows a neural network to figure out for itself what is essential and what is not, a process that may explain how our own brains learn.

The construction of brain models inspired by artificial neural networks began nearly ten years ago, around the time the AlexNet neural network revolutionized image classification. Like other neural networks, AlexNet is made of layers of artificial neurons: computing units that communicate with one another through connections of varying strength. Here, a synaptic weight refers to the strength of the connection between two nodes in the network.

Suppose a neural network classifies an image incorrectly. The learning algorithm then revises the weights of the connections between neurons to reduce the probability of misclassification in the next round of training. The algorithm repeats this process with all the training images until the network's error is acceptably low.
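To make this loop concrete, here is a minimal sketch of supervised training in PyTorch. The framework, model size, and hyperparameters are illustrative assumptions, not details from the article:

```python
import torch
import torch.nn as nn

# Hypothetical setup: a tiny classifier and one batch of labeled images.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

images = torch.randn(64, 3, 32, 32)    # stand-in training batch
labels = torch.randint(0, 10, (64,))   # human-provided labels

for epoch in range(10):                # repeat until the error is acceptably low
    logits = model(images)             # forward pass
    loss = loss_fn(logits, labels)     # how badly the network misclassifies
    optimizer.zero_grad()
    loss.backward()                    # gradient of the error w.r.t. each weight
    optimizer.step()                   # revise connection weights to reduce the error
```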

At the same time, neuroscientists developed the first computational models of the primate visual system using neural networks such as AlexNet. These models showed promise: when monkeys and artificial neural networks were shown the same images, the activity of the real neurons and the artificial neurons showed a striking correspondence.

Despite these brilliant results, researchers soon realized the limitations of supervised learning.

For example, in 2017, Leon Gatys, a computer scientist at the University of Tübingen in Germany, and his colleagues took an image of a Ford Model T and overlaid a leopard-skin texture on it. The neural network correctly classified the original image as a Model T, but it labeled the textured version a leopard.

More precisely, the supervised artificial neural network did not understand the shape of the car or of the leopard; it judged only by texture.

This experiment helps explain why self-supervised learning strategies are replacing the traditional supervised model. In this approach, humans do not label the data; the network must make sense of the data on its own.

Self-supervised algorithms create gaps in the data and ask the neural network to fill them in. For example, in one exercise, the learning algorithm showed the artificial neural network the first few words of a sentence and asked it to predict the next word.

When such a model is trained on a vast collection of text gathered from the Internet, it appears to learn the syntactic rules of the language and shows remarkable linguistic ability, all without external supervision or labels.
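A toy version of this next-word objective might look like the following; the vocabulary, the LSTM architecture, and the hyperparameters are invented for illustration:

```python
import torch
import torch.nn as nn

# Toy next-word prediction: given the first words, predict the next one.
vocab = ["the", "cat", "sat", "on", "mat"]
word_to_id = {w: i for i, w in enumerate(vocab)}

embed = nn.Embedding(len(vocab), 16)
lstm = nn.LSTM(16, 32, batch_first=True)
head = nn.Linear(32, len(vocab))
loss_fn = nn.CrossEntropyLoss()
params = list(embed.parameters()) + list(lstm.parameters()) + list(head.parameters())
optimizer = torch.optim.Adam(params, lr=0.01)

sentence = ["the", "cat", "sat", "on", "mat"]
ids = torch.tensor([[word_to_id[w] for w in sentence]])
inputs, targets = ids[:, :-1], ids[:, 1:]   # each position's label is simply the next word

for step in range(100):
    hidden, _ = lstm(embed(inputs))
    logits = head(hidden)                   # predicted distribution over the next word
    loss = loss_fn(logits.reshape(-1, len(vocab)), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key point is that the label at each position is simply the next word of the text itself, so no human annotation is needed.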

Animals and humans explore their surroundings independently and, by doing so, gain a rich understanding of the world. Our brains, then, do not depend on labels; they explore the world through something like self-supervised learning.

Similar efforts are underway in computer vision. For example, in late 2021, Kaiming He and his colleagues presented a method called the masked auto-encoder, which builds on a technique developed by Efros’s team in 2016.

The masked auto-encoder randomly hides nearly three-quarters of each image. The encoder converts the visible parts of the image into a latent representation: a compact mathematical description containing important information about the object. A decoder then converts that representation back into a complete image.

The self-supervised learning algorithm trains this encoder-decoder combination to turn images with hidden parts into complete versions of the original.

Any differences between the real images and the reconstructed images are fed back into the system to help it learn. The process is repeated over a set of training images until the system's error rate is acceptably low.
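Below is a heavily simplified sketch of that masking idea. He et al.'s actual masked auto-encoder uses vision Transformers; the linear encoder, toy sizes, and single training step here are assumptions for illustration only:

```python
import torch
import torch.nn as nn

# Toy masked auto-encoding: hide ~75% of the patches, encode the rest,
# decode a full image, and learn from the reconstruction error.
patch, n_patches, dim = 8, 16, 64
encoder = nn.Linear(patch * patch * 3, dim)    # patches -> latent representation
decoder = nn.Linear(dim, patch * patch * 3)    # latent representation -> pixels
mask_token = nn.Parameter(torch.zeros(dim))    # placeholder for hidden patches
optimizer = torch.optim.Adam(
    [*encoder.parameters(), *decoder.parameters(), mask_token], lr=1e-3)

patches = torch.randn(1, n_patches, patch * patch * 3)  # stand-in image patches
visible = torch.rand(1, n_patches) > 0.75               # keep only ~25% visible

latent = encoder(patches)
latent = torch.where(visible.unsqueeze(-1), latent, mask_token)
recon = decoder(latent)

loss = ((recon - patches) ** 2).mean()  # difference between real and reconstructed image
loss.backward()
optimizer.step()
```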

For example, when a trained masked auto-encoder was shown an image of a bus it had not seen before, it successfully reconstructed the structure of the bus.

Reconstructing the hidden segments seems to require deeper information than the earlier approaches captured. Such a system may understand shapes, not just textures (the car, the leopard, and so on).

More precisely, self-supervised learning means building up knowledge and understanding of images from the ground up, like a student who studies throughout the semester rather than cramming a vast amount of information the night before the exam.

Self-supervised brains

In systems like these, some neuroscientists see echoes of how we learn. "I think 90 percent of what our brain does is based on self-supervised learning," says Blake Richards, a computational neuroscientist at the Quebec Institute of Artificial Intelligence in Canada.

Biological brains seem to continuously predict, for example, the future location of a moving object or the next word in a sentence, just as a self-supervised learning algorithm tries to predict the hidden part of an image or text. Brains, whether biological or artificial, learn on their own from their mistakes.

For example, consider the visual systems of humans and other animals. These are among the best-studied sensory systems, yet neuroscientists have struggled to explain why they are split into two completely separate pathways.

One of these pathways is the Ventral Visual Stream, responsible for recognizing objects and faces; the other is the Dorsal Visual Stream, which processes movement. Motivated by this question, Richards and his team decided to look for an answer using a self-supervised model.

To do so, the team trained an AI that combined two different neural networks. The first, called ResNet, was designed for processing images; the second, a recurrent network, tracks a sequence of previous inputs in order to predict the next expected input.

To train the hybrid AI, the team started with a sequence of 10 frames from a video file and let ResNet process them one by one.

Next, the recurrent network predicted the latent representation of the hidden 11th frame, which was not simply a copy of the previous ten. The self-supervised learning algorithm then compared the predicted values with the actual ones and, based on the error, instructed the neural networks to update their weights to improve the predictions.

Richards’ team discovered that an AI trained with a single ResNet performed well at object recognition but not at motion classification. However, when they split the ResNet in two and created two pathways (without changing the total number of neurons), the AI developed representations of objects in one pathway and of motion in the other, allowing better classification of both.
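Here is a hedged sketch of that two-pathway design, with invented layer sizes and a plain squared-error loss standing in for whatever objective the team actually used:

```python
import torch
import torch.nn as nn

class TwoPathwayPredictor(nn.Module):
    """Toy version of the split design: two separate convolutional encoders
    (standing in for the two halves of the ResNet) feed one recurrent predictor."""
    def __init__(self, dim=64):
        super().__init__()
        self.ventral = nn.Sequential(
            nn.Conv2d(3, 8, 3, 2, 1), nn.ReLU(), nn.Flatten(), nn.LazyLinear(dim))
        self.dorsal = nn.Sequential(
            nn.Conv2d(3, 8, 3, 2, 1), nn.ReLU(), nn.Flatten(), nn.LazyLinear(dim))
        self.rnn = nn.GRU(2 * dim, 2 * dim, batch_first=True)

    def forward(self, frames):                  # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        flat = frames.reshape(b * t, *frames.shape[2:])
        feats = torch.cat([self.ventral(flat), self.dorsal(flat)], dim=-1)
        feats = feats.reshape(b, t, -1)
        out, _ = self.rnn(feats)
        return out[:, -1], feats                # prediction for the next frame's latent

model = TwoPathwayPredictor()
video = torch.randn(2, 11, 3, 32, 32)           # 10 context frames + 1 hidden target
pred, _ = model(video[:, :10])
with torch.no_grad():
    target = model(video[:, 10:11])[1][:, 0]    # latent representation of the 11th frame
loss = ((pred - target) ** 2).mean()            # prediction error drives learning
```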

For further experiments, the researchers used a series of videos that neuroscientists at the Allen Institute in Seattle had previously shown to mice. Mice, like other mammals, have brain areas specialized for static images and for movement. The Allen Institute researchers recorded the activity of neurons in the animals' visual cortex while they watched the videos.

Here too, Richards’ team noticed similarities between the responses of the artificial intelligence and of the living brains while watching the videos. One of the pathways in the artificial neural network closely resembled the object-recognition areas of the mouse visual cortex.

The results suggest that our visual system contains two highly specialized pathways because this division helps it predict the upcoming sequence of images.

 However, Richards believes that one path alone is not enough.

Models of the human auditory system tell a similar story. In June, a research team led by Jean-Rémi King of Meta trained an AI system called Wav2Vec 2.0, which uses a neural network to convert audio into latent representations. The researchers masked parts of these representations and fed them to another part of the network, called a Transformer.

The team used 600 hours of speech data to train the network. During training, the Transformer learned to predict the masked information. In the process, the AI learned to turn sounds into latent representations, with no labeling required.
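A rough sketch of this masked-prediction setup follows. The real Wav2Vec 2.0 uses a convolutional feature encoder, quantized targets, and a contrastive loss, so everything below (dimensions, mask rate, squared-error loss) is a simplification:

```python
import torch
import torch.nn as nn

# Simplified masked speech modeling: encode audio frames, hide some,
# and train a Transformer to predict the hidden latent representations.
frame_dim, latent_dim, n_frames = 40, 64, 100
encoder = nn.Linear(frame_dim, latent_dim)   # stands in for the conv feature encoder
transformer = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(latent_dim, nhead=4, batch_first=True), num_layers=2)
mask_emb = nn.Parameter(torch.randn(latent_dim))

audio = torch.randn(1, n_frames, frame_dim)  # stand-in spectrogram frames
latents = encoder(audio)
masked = torch.rand(1, n_frames) < 0.3       # hide ~30% of the frames
inputs = torch.where(masked.unsqueeze(-1), mask_emb, latents)

context = transformer(inputs)
# Train only on the hidden positions: predict the true latents there.
loss = ((context[masked] - latents.detach()[masked]) ** 2).mean()
loss.backward()
```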

King says, “The amount of data given to the intelligent algorithm is almost equivalent to the information that a child receives in the first two years of life.”

Once the system was trained, the researchers played audiobook segments in English, French, and Mandarin.

Then they compared the AI's performance with data from 412 people, native speakers of the three languages, who listened to the same audiobook segments while their brain activity was recorded in an fMRI scanner.

Activity in the early layers of the AI network matched activity in the primary auditory cortex, while activity in the network's deepest layers matched activity in higher brain areas, such as the prefrontal cortex.
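The article does not say exactly how this alignment was measured, but a common approach is to fit a linear map from a model layer's activations to each voxel's fMRI response and score the correlation on held-out stimuli. A toy version with synthetic data:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from scipy.stats import pearsonr

# Toy brain-model alignment: predict a voxel's response from a model
# layer's activations, then score the fit on held-out stimuli.
rng = np.random.default_rng(0)
layer_acts = rng.normal(size=(200, 64))   # 200 stimuli x 64 model units (stand-in)
voxel_resp = layer_acts @ rng.normal(size=(64, 1)) + rng.normal(size=(200, 1))

train, test = slice(0, 150), slice(150, 200)
model = RidgeCV(alphas=np.logspace(-2, 2, 9)).fit(layer_acts[train], voxel_resp[train])
pred = model.predict(layer_acts[test])
score, _ = pearsonr(pred.ravel(), voxel_resp[test].ravel())
print(f"layer-to-voxel alignment: r = {score:.2f}")  # higher r = closer match
```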

“This is an amazing achievement,” Richards said. It is still too early to draw firm conclusions, but the results are convincing and suggest that human language learning rests largely on trying to predict what comes next.

Shortcomings of self-supervised learning in explaining brain function

As self-supervised learning methods advance, they are becoming better at making predictions about events they have no prior experience of. Josh McDermott, a neuroscientist at MIT, believes these approaches are progress toward learning representations that can support many cognitive behaviors without labels from an observer, but that they still face serious problems.

The algorithms themselves need more work. For example, Wav2Vec 2.0 can only predict the hidden parts of a sound a few tens of milliseconds ahead, less time than it takes to utter a perceptually distinct noise, let alone a word. “There’s a lot of things the brain does that we haven’t yet gotten to in artificial intelligence,” King says.

Last word

A proper understanding of brain function will require more than self-supervised learning, because the brain is built on many feedback connections, while current models have few such links. The next step is to use self-supervised learning to train feedback-rich networks and observe how they compare with real brain activity.

Another important step is to match the activity of artificial neurons in self-supervised models with the activity of individual biological neurons.

If parallels between the brain and self-supervised models are also observed in other sensory systems, it would be a seal of approval for this account of how our brains work. King says, “If we can find systematic similarities between systems that are entirely different from each other, it would suggest that there may not be many ways to process information intelligently.

At least, it’s a nice theory we like to work on and hope for.”
