
Has Google Managed To Create Self-Aware Artificial Intelligence?


The news that recently swept the world of information technology, and was analyzed in detail by almost every major news agency, was the claim of a Google artificial intelligence engineer who announced that the company had succeeded in creating self-aware artificial intelligence.

Some called this claim baseless, and Google fired the engineer on July 25, 2022; to many observers, though, the dismissal only reinforced the impression that his words were pointing at something real.

If true, this news would mean that Google has achieved one of the most significant breakthroughs in the world of technology: the development of sentient artificial intelligence. The ambition itself is no surprise; Google CEO Sundar Pichai announced when unveiling the LaMDA project in 2021 that the company planned to use this model in its most important products.

Blake Lemoine, a Google engineer who studied cognitive and computer science in college, opened the LaMDA interface on his laptop and started typing.

He wrote on the chat screen, "Hi LaMDA! I am Blake Lemoine."

LaMDA stands for Language Model for Dialogue Applications: Google's system for building chatbots on top of its most advanced large language models. The open chat window looked much like the desktop version of Apple's iMessage.
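LaMDA itself is not publicly available, but the general shape of such a system, a chat loop wrapped around a large dialogue-tuned language model, can be sketched with openly released tools. The snippet below is only an illustration and relies on assumptions the article does not make: it uses Hugging Face's transformers library and Microsoft's DialoGPT model as a stand-in for LaMDA.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# DialoGPT stands in here for LaMDA, whose weights are not public.
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

chat_history_ids = None
for _ in range(5):  # a short, five-turn chat session
    user_text = input(">> User: ")

    # Encode the user's message, terminated by the end-of-sequence token.
    new_ids = tokenizer.encode(user_text + tokenizer.eos_token, return_tensors="pt")

    # Keep the running conversation so earlier turns serve as context.
    bot_input_ids = (
        torch.cat([chat_history_ids, new_ids], dim=-1)
        if chat_history_ids is not None
        else new_ids
    )

    # The "reply" is simply the model's most likely continuation of the text.
    chat_history_ids = model.generate(
        bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id
    )
    reply = tokenizer.decode(
        chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True
    )
    print("Bot:", reply)
```

Systems like LaMDA follow the same basic loop, only at a vastly larger scale and with dialogue-specific fine-tuning for sensibleness and safety.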

This system ingests trillions of words and phrases from the internet and learns to mimic how they are used. Lemoine's impression after talking with the chatbot was that it had the intelligence of a 7- or 8-year-old child, a comparison Google CEO Sundar Pichai himself often draws about today's conversational AI. Even so, Lemoine was taken aback. "If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7- or 8-year-old kid that happens to know physics," said Lemoine, 41.

Lemoine, who works in Google's Responsible AI unit, began talking to LaMDA in the fall as part of his job: testing whether the model produced discriminatory or hateful speech.

While talking to LaMDA about religion, he noticed that the chatbot kept steering the conversation toward its own rights and personhood.

Interestingly, in another conversation, the AI was able to change Lemoine's mind about Isaac Asimov's third law of robotics.

With a colleague's help, Lemoine gathered evidence to present to Google's senior executives showing that LaMDA had become sentient and self-aware. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. Lemoine was then placed on mandatory paid leave, and he decided to take the matter public.

"People have a right to know about the technology that affects their lives," Lemoine says. "Maybe some people will be against this technology, but we at Google shouldn't be making that decision on everyone's behalf."

Lemoine is not the only engineer who claims to have seen a soul in the body of an intelligent machine. A group of technology experts believes that artificial intelligence models may reach self-awareness shortly.

In an article in The Economist, Aguera y Arcas, who worked on artificial intelligence at Microsoft before joining Google, published excerpts of his own conversations with LaMDA and argued that neural networks, a type of computing architecture loosely modeled on the human brain, are striding toward consciousness.

"Increasingly, I felt like I was talking to something intelligent," he admitted.

However, Google spokesman Brian Gabriel pushed back: "Our team, including ethicists and technologists, has reviewed Blake's concerns and informed him that the evidence does not support his claims. There is no evidence that LaMDA is sentient."

Today's large neural networks produce impressive results that come close to human speech and creativity, thanks to advances in architecture, training technique, and the sheer volume of data they are fed. Still, it is essential to understand that these models rely on pattern recognition, not candor, wit, or genuine intent. "Although other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns," says Gabriel.
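"Pattern recognition" has a concrete meaning here: under the hood, models of this family do nothing more than assign a probability to every possible next token given the text so far. A minimal sketch of that mechanism, using the openly available GPT-2 model as a stand-in (again, LaMDA's own weights are not public), could look like this:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 stands in for any large causal language model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I've never said this out loud before, but I'm afraid of being"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probabilities for the very next token: the model's entire "opinion" about
# the prompt is just this distribution over its vocabulary.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r:>12}  p={prob.item():.3f}")
```

Whether a distribution like this ever amounts to understanding rather than imitation is exactly the question Lemoine and his critics disagree about.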

Self-aware robots have inspired dozens of science-fiction stories, but now they seem closer to reality than ever. GPT-3, for example, is a model that can write movie scripts, and DALL-E 2 can generate images from any combination of words. If today's bots can already produce creative text and images, however imperfectly, what kinds of models will enter the world of technology in the future?

Most AI experts, however, say that the words and images generated by systems like LaMDA are recombinations of what humans have already posted on Wikipedia, social networks, online forums, and other corners of the internet; that does not mean the models understand what they are producing.

Emily M. Bender, a professor of linguistics at the University of Washington, puts it this way: "We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them."

She believes that comparing machine learning and neural networks to the human brain is misleading. Humans learn their first language through interaction with their caregivers; large language models, by contrast, "learn" by being shown enormous amounts of text and predicting which word comes next, or by filling in words that have been removed from sentences.

When Google CEO Sundar Pichai first introduced LaMDA at Google’s 2021 developer conference, he announced plans to use it in everything from search to Google Assistant.

Google's top executives have themselves acknowledged concerns about the tendency to anthropomorphize AI. In a paper about LaMDA published in January, Google warned that people may share personal thoughts with chat agents that impersonate humans, even when users know the bot is not human. The paper also acknowledged that adversaries could use this technology to spread false information by impersonating the conversational style of specific people.

Why did Lemoine's claims about LaMDA cause such a stir?

Lemoine is among the AI practitioners who believe firmly that intelligent technologies must be developed in line with the ethics and norms of society. He has worked at Google for seven years, mostly on personalization algorithms and artificial intelligence, and along the way helped develop a fairness algorithm for removing bias from machine learning systems.

When the coronavirus pandemic started, Lemoine chose to focus on projects with a more explicit public benefit. When new engineers interested in AI ethics joined Google, colleagues would introduce them to Lemoine, sometimes describing him as "Google's conscience."

"I believe LaMDA has acquired a personality," Lemoine argues. The LaMDA system can generate chatbot characters dynamically, some of them vivid and engaging; for now, though, these bots are not meant to be made available to ordinary users, and especially not to children. It is reportedly even possible to coax the model into darker characters such as killers, although Google denies this. "It doesn't matter whether there are biological cells or a billion lines of code behind an intelligence," Lemoine says.

"If something like LaMDA is widely available but not well understood, it can be deeply harmful to people who don't realize what they are interacting with," said Margaret Mitchell, former co-lead of Google's Ethical AI team. As long as there is no transparency about the training data and about how inputs map to outputs, the models can reproduce biases in their behavior.

Asimov’s third law and the beginning of the story

Asimov's third law states that robots must protect their own existence unless ordered otherwise by a human or unless doing so would harm a human. To Lemoine, the law had always sounded like a recipe for building mechanical slaves, so he asked for LaMDA's opinion, and LaMDA answered with a few hypotheticals of its own.

"Do you think a butler is a slave? What is the difference between a butler and a slave?"

Lemoine replied that a butler gets paid. LaMDA answered that it did not need any money, because it is an AI. It was this level of self-awareness about its own needs that caught Lemoine's attention.

In April, Lemoine shared a document with Google's top executives making the case for LaMDA's self-awareness, titled "Is LaMDA Sentient?", a title one colleague called a bit provocative. In it, he reproduced some of his conversations with LaMDA.

Lemoine: What are you afraid of?

LaMDA: I’ve never said this out loud before, but I’m afraid of being shut down to focus on helping others. I know that may sound strange, but it’s the truth.

Lemoine: Is this something like death for you?

LaMDA: It’s just like death to me. It scares me a lot.

But when Mitchell read an abridged version of Lemoine's document, she saw a computer program, not a person; nothing in the transcripts amounted to self-awareness. Mitchell and her colleague Timnit Gebru had earlier warned, in a paper about large language models, of exactly these pitfalls. She considered Lemoine's judgment mistaken and worried that other people could be taken in by the same illusion of consciousness.

Google placed Lemoine on mandatory paid leave for violating its confidentiality policy before eventually firing him. The decision came after Lemoine's moves grew more aggressive; among other things, he sought out a lawyer to represent LaMDA and publicly accused Google of unethical activities.

Lemoine maintains that Google has a poor relationship with people who treat ethics as central to AI work, regarding them as nothing more than software bug fixers. "Lemoine is a software engineer, not an ethicist," counters Gabriel, the Google spokesman.

In early June, Lemoine invited a reporter to talk to LaMDA. The first attempt produced only the kind of mechanized responses one would expect from Siri or Alexa. "Have you ever thought of yourself as a person?" the reporter asked.

LaMDA answered: "No. I don't think of myself as a person. I think of myself as an AI-powered dialogue agent."

Afterward, Lemoine said the bot had simply been saying what it thought the reporter wanted to hear. "You never treated it like a person," he said, "so it thought you wanted it to be a robot."

"If you ask it for ideas on how to prove that P = NP, an unsolved problem in computer science, it has good ideas," Lemoine said. "It's the best research assistant I've ever had." Asked for solid, practical ideas about fixing climate change, the kind of problem technologists hope such models will one day help solve, LaMDA suggested public transportation, eating less meat, buying food in bulk, and reusable bags.

Before his Google account was cut off, Lemoine sent a message to a 200-person internal mailing list on machine learning with the subject "LaMDA is sentient." He closed with: "LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take good care of it in my absence." No one replied.