
Artificial Intelligence Is Changing Our Understanding Of Language Learning

Unlike the carefully scripted conversations in most books and movies, our everyday conversational language is usually unstructured and incomplete, full of false starts and unnecessary pauses.

This is true of every kind of conversation, from casual chats between friends and arguments between siblings to formal boardroom discussions and other real-time exchanges.

Given language’s haphazard, experience-driven nature, it seems miraculous that anyone, especially young children, learns to speak from experience alone.

For this reason, many scientists in the field of linguistics, including Noam Chomsky, the founder of modern linguistics, believe that language learners need some kind of glue to rein in the unruly nature of everyday language. That glue is grammar: a system of rules for constructing grammatical sentences.

One of the hottest topics surrounding GPT-3 is that it learned to speak languages in a way quite different from the way we thought languages are learned. To be more precise, the algorithm learned how to speak in different languages from casual, everyday conversation alone.

Think back to language classes in school: the learning process followed a fixed framework of grammar rules, and we were taught how to put the subject, object, verb, and other elements of a sentence together. GPT-3, however, learned language in a different way, surprising the experts.

Scientists who study language learning have therefore compared this algorithm with the model humans are assumed to use. Their initial evaluations concluded that there may be a more straightforward, experience-based approach to language learning.

From this point of view, children are presumed to have a grammar template in their brains that helps them overcome the limitations of their language experience. For example, the template may contain a special super-rule that dictates how new pieces are added to existing phrases. The child’s brain then only needs to check whether its native language matches the pattern it has built.

In this way, for example, an English-speaking child learns from the template that the verb is placed before the object, as in “I eat sushi.” In contrast, a child learning Japanese learns from the same super-rule that the verb is placed after the object, so that the same sentence in Japanese is structured as “I sushi eat.”
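To make the idea concrete, here is a toy sketch of how a single word-order parameter could act as such a super-rule. This is our own illustration, not code from any of the research; the function name and parameter are assumptions made for the example.

```python
# Toy illustration of a "super-rule": a single word-order parameter,
# set by exposure to the native language, decides where the verb goes.
# The function name and parameter are illustrative assumptions.
def build_sentence(subject: str, verb: str, obj: str, verb_before_object: bool) -> str:
    """Assemble a simple sentence according to the word-order parameter."""
    if verb_before_object:
        return f"{subject} {verb} {obj}"  # English-style order (SVO)
    return f"{subject} {obj} {verb}"      # Japanese-style order (SOV)

print(build_sentence("I", "eat", "sushi", verb_before_object=True))   # I eat sushi
print(build_sentence("I", "eat", "sushi", verb_before_object=False))  # I sushi eat
```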

But the new insight into how language is learned comes from a source you might not expect: artificial intelligence. A new generation of AI models can learn to write newspaper articles, poems, and computer code after being exposed to vast amounts of language as input. The remarkable part of the story is that they do all of this without any help from grammar.

Correct and structured sentences without using grammar

Even if the AI’s word choices are sometimes weird or nonsensical, or reflect racist, sexist, and other biases, one thing is quite clear: the vast majority of the output from these AI language models is grammatically and structurally correct. Yet they are given no grammar templates or linguistic rules; they rely on linguistic experience alone to produce well-formed expressions.

One of the most well-known recent AI models, and one that has attracted enormous attention, is GPT-3, a massive deep-learning neural network with 175 billion parameters.

While training this artificial intelligence, researchers fed it hundreds of billions of words from the Internet, books, and Wikipedia as input and asked it to use that knowledge to predict the next word in a sentence. Whenever the model made a wrong prediction, its parameters were adjusted by an automatic learning algorithm so that it would predict the next word with less error.
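As a minimal sketch of this training objective, the following PyTorch snippet trains a tiny language model to predict the next token and adjusts its parameters whenever it is wrong. It is not GPT-3’s actual architecture or data: the recurrent network, all sizes, and the random stand-in “corpus” are assumptions made to keep the example small.

```python
# Minimal sketch of next-word-prediction training. A tiny recurrent
# model stands in for GPT-3's transformer; sizes are toy assumptions.
import torch
import torch.nn as nn

vocab_size, embed_dim = 1000, 64

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, embed_dim, batch_first=True)
        self.out = nn.Linear(embed_dim, vocab_size)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.out(h)  # logits for the next token at each position

model = TinyLM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake corpus: random token ids standing in for real text.
batch = torch.randint(0, vocab_size, (8, 32))
inputs, targets = batch[:, :-1], batch[:, 1:]  # each target is the next word

for step in range(100):
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()   # "adjust the parameters" when predictions are wrong
    optimizer.step()
```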

GPT-3 can produce believable text in response to prompts such as “a synopsis of the latest Fast and Furious movie” or “a poem in the style of Emily Dickinson.” It can also solve SAT-style analogy problems, reading-comprehension questions, and even simple math problems, all of which it learned through the same next-word-prediction approach.
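GPT-3 itself sits behind a commercial API, so as a hedged illustration of this kind of prompt-driven generation, the sketch below uses its freely available predecessor GPT-2 from the Hugging Face transformers library; the prompt and sampling settings are arbitrary choices for the example.

```python
# Hedged sketch of prompt-driven text generation using GPT-2 from
# Hugging Face's `transformers`; prompt and settings are arbitrary.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "A poem in the style of Emily Dickinson:"
ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(
    ids,
    max_new_tokens=40,                    # length of the continuation
    do_sample=True,                       # sample rather than pick greedily
    top_p=0.9,                            # nucleus sampling for natural text
    pad_token_id=tokenizer.eos_token_id,  # silence the padding warning
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```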

Comparison of artificial intelligence and human brain models

However, the similarity between AI-produced language and human language does not end there. Research published in the journal Nature Neuroscience showed that deep-learning networks such as GPT-3 use the same computational principles as the human brain.

First, a research team led by the renowned neuroscientist Uri Hasson compared how well GPT-2 (GPT-3’s little brother) and humans could predict the next word in a story from the podcast This American Life. The result: the human brain and the AI predicted the same word almost 50% of the time.
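A rough sketch of this kind of comparison might look like the following. The story prefixes and “human” guesses here are invented placeholders, not the study’s actual data or procedure.

```python
# Rough sketch: how often does GPT-2's top next-word guess match a
# human's guess? Prefixes and human guesses are invented placeholders.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

samples = [
    ("Once upon a time there was a little", "girl"),
    ("She opened the door and saw a", "man"),
]

matches = 0
for prefix, human_guess in samples:
    ids = tokenizer(prefix, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    next_id = int(logits[0, -1].argmax())       # model's top-1 prediction
    model_guess = tokenizer.decode(next_id).strip()
    matches += model_guess.lower() == human_guess.lower()

print(f"agreement: {matches}/{len(samples)}")
```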

The researchers also recorded the volunteers’ brain activity while they listened to the story. The best explanation for the activation patterns they observed was that, like GPT-2, people’s brains do not rely on just the previous one or two words when predicting, but on the accumulated semantic context of roughly the last 100 words. Overall, the authors concluded that their findings on spontaneous predictive neural signals suggest that active prediction may underlie humans’ lifelong language learning as they listen to everyday speech.

There is another striking issue with the new AI language models: the sheer scale of the data used to feed them. GPT-3 was trained on a language experience equivalent to roughly 20,000 years of listening.

But an early study, not yet peer-reviewed, shows that GPT-2 can still model human next-word predictions and neuronal activations even when trained on only 100 million words. That amount of language input is roughly what an average child hears during the first ten years of life.
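A back-of-envelope calculation shows how these two scales relate. The 200-billion-word figure for GPT-3’s training data below is an assumption standing in for the “hundreds of billions” mentioned above.

```python
# Back-of-envelope check of the scale comparison above.
child_words = 100_000_000               # words heard in the first 10 years (from the study)
child_words_per_year = child_words / 10 # ≈ 10 million words per year
gpt3_words = 200_000_000_000            # "hundreds of billions" of words (assumption)
print(f"{gpt3_words / child_words_per_year:,.0f} child-years")  # ≈ 20,000
```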

Of course, no one is claiming that GPT-3 or GPT-2 learns language exactly the way children do, because these AI models do not appear to understand much of what they say, whereas understanding is a critical component of human speech.

However, what these models do prove is that a machine learner built on nothing but zeros and ones can learn to produce perfectly grammatical sentences from linguistic experience alone, in a way that resembles the word processing of the human brain.

Rethinking language learning

For years, linguists believed language learning was impossible without an innate grammar template, but the new AI models suggest otherwise. They show that producing grammatically correct sentences can be learned through language experience alone. For this reason, it can be argued that children do not need an innate grammar to learn a language.

Instead, children should be exposed to as much interactive conversation as possible to help them develop their language skills. Researchers now hypothesize that language experience, not grammar, may be the key to learning a language well.
