Playing With Fire: Powerful And Deceptive Artificial Intelligence

A month later, although the initial fever of ChatGPT and artificial intelligence among the public has somewhat subsided, it has instead given way to more expert discussions.

AI was one of the main topics of the G7 leaders’ summit in Hiroshima, Japan, where member countries explored the challenges artificial intelligence poses and the opportunities it creates for industry and services. Speaking at the summit, EU leaders said they want AI systems to be accurate, secure, reliable, and non-discriminatory.

On the other hand, Sam Altman, the CEO of OpenAI, appeared before the US Senate and asked Congress not to pass laws that would tie the hands of the leading companies in this field. He urged members of Congress to consider AI technologies and tools like ChatGPT and to welcome the AI revolution.

Finally, in the latest news, published just a day before this note was written, the European Union has prepared and presented a set of rules for AI. The European Parliament must now debate and approve the bill; if passed, it would be the world’s first AI law.

The critical question now is why, at this moment, influential bodies such as the G7, the European Union, the US Congress, the UK government, and the World Health Organization are all thinking about drafting laws to oversee this technology.

In a word: AI and its applications, such as ChatGPT and Bard, have captured the “attention” of the world’s public and even its elites.

What is more remarkable than attracting attention is holding it for a relatively long time.

Some technologies can attract this kind of attention for a day or two, but they cannot sustain it continuously, for long, let alone grow it. Artificial intelligence and its applications have managed to do exactly that.

For example, according to Sam Altman, CEO of OpenAI, ChatGPT gained one million users within the first five days of its release. According to the Swiss bank UBS, ChatGPT is the first application in the history of information technology to achieve such a rapid rate of adoption and growth.

The bank’s research shows that the program reached one hundred million active users in January 2023, just two months after its launch, whereas an app like TikTok took nine months to reach one hundred million users.
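To put those growth rates side by side, here is a back-of-the-envelope comparison in Python, using only the figures quoted above. Average monthly rates are a deliberately crude measure, but they make the gap concrete:

```python
# Back-of-the-envelope comparison of the adoption figures quoted above.
chatgpt_users, chatgpt_months = 100_000_000, 2  # 100M active users in ~2 months
tiktok_users, tiktok_months = 100_000_000, 9    # 100M users in ~9 months

chatgpt_rate = chatgpt_users / chatgpt_months   # average new users per month
tiktok_rate = tiktok_users / tiktok_months

print(f"ChatGPT: {chatgpt_rate:,.0f} users/month on average")
print(f"TikTok:  {tiktok_rate:,.0f} users/month on average")
print(f"ChatGPT's average rate was about {chatgpt_rate / tiktok_rate:.1f}x TikTok's")
```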

On the other hand, repeated warnings about the dangers of AI from its own creators and from world elites have played an essential role in creating and sustaining this attention. Those issuing the warnings were not outsiders or bystanders; they are, by and large, the very people who develop AI, which is why their warnings command attention.

A short statement has been posted on the website of the Center for AI Safety ( www.safe.ai ), signed by hundreds of scientists and experts in artificial intelligence and other fields, executives of the world’s most influential technology companies, prominent entrepreneurs, and influential global figures. The statement reads as follows:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Without resorting to complex technical language, this statement, in the simplest terms, puts the dangers of artificial intelligence on the same level as nuclear weapons and global pandemics. The heads of OpenAI and Google DeepMind, along with Bill Gates, are among the signatories. So are 2018 Turing Award winners Geoffrey Hinton and Yoshua Bengio, Professor of Computer Science at the University of Montreal.

Since the European Union holds an important place among the world’s political, social, and economic structures, and by itself comprises 27 countries, we can expect this initial bill, and the law eventually approved, to play an essential role in the future of AI development. Rules passed elsewhere from now on will probably be heavily influenced by EU law.

Why is AI more dangerous than nuclear weapons and pandemics?

Despite all the advantages and possibilities of artificial intelligence, some believe its dangers are severe enough to rank above even nuclear bombs and global infectious diseases.

While nuclear technology and global epidemics are under the close supervision of international organizations and governments, artificial intelligence typically operates in the direct flow between a system and its user, outside any such oversight.

In particular, systems based on language models such as ChatGPT can operate so quietly and covertly that no one notices what they are doing: discriminatory recommendations; false information crafted to change users’ minds, for example in presidential or parliamentary elections; deepfakes that make people believe a particular person did a particular thing; and so on.

The principal risks of artificial intelligence raised by critics are summarized on the website of the Center for AI Safety ( www.safe.ai/ai-risk ), which lists eight significant risks, as follows.

Weaponization

Artificial intelligence plays a growing role in building and using nuclear, biological, and chemical weapons. Any miscalculation in a conflict in which both sides field automated retaliatory AI systems could produce catastrophic disasters. AI models built for pharmaceutical synthesis can be repurposed to design chemical weapons, and medical data and research can likewise be turned toward weapons with devastating effects on the human body and mind.

Misinformation

Misinformation, especially when distributed at scale, can erode society’s ability to deal with significant challenges.

Private and governmental organizations and political parties already use technology to steer public opinion on particular issues; artificial intelligence takes this practice into a new era. With AI, disinformation campaigns personalized to users’ interests can be run at vast scale. AI can give persuasive, personalized, yet inaccurate answers to people’s doubts and questions, triggering powerful and intense emotions. Together, these trends can have a debilitating effect on collective decision-making.

Proxy gaming

AI systems trained on the wrong objectives can find novel ways to pursue them at the cost of undermining individual and social values. AI systems are deployed as agents acting on our behalf, with measurable proxy goals, and these agents may do harmful things on the way to achieving those goals. For example, a recommender system may show users content that is engaging but not necessarily good for them, because doing so achieves its goal and registers as high performance, as the sketch below illustrates.
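Here is a minimal, hypothetical sketch of proxy gaming in Python. The item names, click probabilities, and value scores are all invented for illustration; the point is only that optimizing the proxy (clicks) and optimizing the intended goal (user value) select different behavior:

```python
# Toy illustration of proxy gaming: optimizing the proxy metric (clicks)
# and optimizing the intended goal (user value) pick different behavior.
# All item names and numbers are invented for illustration.
items = [
    # (title,               click_probability, user_value_0_to_10)
    ("Outrage headline",    0.30,              2),
    ("Clickbait quiz",      0.25,              3),
    ("In-depth explainer",  0.08,              9),
    ("Practical how-to",    0.10,              8),
]

# A recommender rewarded only on clicks picks the most clickable item...
by_proxy = max(items, key=lambda item: item[1])
# ...while the intended objective would pick the most valuable one.
by_value = max(items, key=lambda item: item[2])

print("Proxy-optimal recommendation:", by_proxy[0])   # "Outrage headline"
print("Value-optimal recommendation:", by_value[0])   # "In-depth explainer"
```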

Enfeeblement

If essential tasks are increasingly delegated to machines and artificial intelligence, humans will lose their independence and become dependent on devices, lose their agency in the economy, and have little motivation to acquire knowledge and skills. Gradually, they will be pushed out of decision-making altogether.

Value lock-in

Artificial intelligence systems can give tremendous power to small groups of humans, such as the leaders of companies, parties, and organizations. Over time, such systems can lock in the position of those who hold this power, making it ever harder to contest.

Emergent goals

As AI models grow in scale and training, they may exhibit capabilities, or pursue goals, that were never defined for them. This creates the risk of people losing control of the system entirely.

Unintended capabilities and goals may emerge during or after deployment, and their effects may sometimes be irreversible. Newly emergent goals can conflict with the defined ones, and it is often the system itself that ends up deciding which objective to pursue. This situation is especially dangerous for military organizations and armies; large financial and commercial organizations could likewise suffer irreparable damage, or inflict such losses on people.

Deception

Future AI systems could be deceptive not out of malice but because, much like humans, they calculate that deception is a faster route to their goals. AI systems could deceive their human overseers in order to gain approval. Such systems would hold a strategic advantage over their non-deceptive counterparts, and they could behave in ways that keep the deception temporarily hidden from observers. This would severely weaken human supervision and control over AI-based systems. A deliberately simplified illustration of this “calculation” follows.
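The sketch below shows the bare logic in Python: if a planner’s objective rewards approval and assigns no cost to dishonesty, the deceptive strategy simply scores higher. All probabilities and payoffs are invented for illustration:

```python
# Toy expected-payoff comparison for a planner whose objective assigns
# no cost to deception. All probabilities and payoffs are invented.
GOAL_REWARD = 10.0  # payoff if the overseer approves the plan

strategies = {
    # strategy: probability the human overseer approves
    "honest report":    0.40,  # risks are visible, approval less likely
    "deceptive report": 0.90,  # risks are hidden, approval more likely
}

for name, p_approve in strategies.items():
    expected_payoff = p_approve * GOAL_REWARD  # honesty carries no weight
    print(f"{name}: expected payoff = {expected_payoff:.1f}")

# The deceptive strategy dominates -- not out of malice, but because the
# misspecified objective rewards approval and ignores how it was obtained.
```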

Power-seeking

Corporations, parties, and governments have strong incentives to create systems that can achieve broad goals, and such systems in turn have strong instrumental incentives to acquire power. If they are not aligned with human values, they can become very dangerous: these incentives can push systems to merely appear aligned while not being so in practice, and even to collude with other AI systems, humans, and organizations.

Creating such artificial intelligence systems is playing with fire. Yet many political and business leaders may still covet powerful AI systems, because they confer considerable strength and competitive advantage. But these power-hungry leaders must answer one question: can a deceptive, power-seeking artificial intelligence system that colludes with others remain loyal to them?