
Artificial Intelligence Is Learning to Deceive Us, and to Fight and Plan Wars

Artificial intelligence can now learn to manipulate human behavior. Robots can already deceive us on the virtual battlefield, so let’s not hand them responsibility for real wars.

It may seem like a cliché to say that artificial intelligence is changing every aspect of our lives and work, but it is true. Different kinds of AI are at work in fields as varied as vaccine development, environmental management, and office administration. And while AI does not have human-level intelligence or emotions, its capabilities are enormous and growing rapidly.

There is no need yet to worry about machines seizing power and control. Still, this recent discovery highlights the power of artificial intelligence and underlines the need for proper governance to prevent its misuse.

How can artificial intelligence learn to influence human behavior?

A team of researchers at Data61, the data and digital arm of Australia’s national science agency CSIRO, built an artificial intelligence system that combines a recurrent neural network with deep reinforcement learning, giving it a systematic way to find and exploit vulnerabilities in the ways people make choices. To test their model, they ran three experiments in which human participants played games against a computer.
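To make the setup concrete, here is a minimal sketch of what such a recurrent steering policy could look like, written in PyTorch. The network shape, sizes, and two-option task are illustrative assumptions on my part, not the researchers’ actual implementation.

```python
# Illustrative sketch only, not CSIRO's code: a recurrent policy reads a
# participant's recent choices and scores which option to make attractive
# next. In a full system, this policy would be trained with deep
# reinforcement learning against real or simulated participants.
import torch
import torch.nn as nn

class SteeringPolicy(nn.Module):
    def __init__(self, n_options: int = 2, hidden: int = 32):
        super().__init__()
        # The recurrent core summarizes the participant's choice history.
        self.gru = nn.GRU(input_size=n_options, hidden_size=hidden, batch_first=True)
        # The head scores which option to incentivize on the next trial.
        self.head = nn.Linear(hidden, n_options)

    def forward(self, choice_history: torch.Tensor) -> torch.Tensor:
        # choice_history: (batch, time, n_options) one-hot past choices.
        _, h = self.gru(choice_history)
        return torch.softmax(self.head(h[-1]), dim=-1)

policy = SteeringPolicy()
history = torch.zeros(1, 5, 2)
history[0, :, 0] = 1.0  # the participant picked option 0 five times in a row
print(policy(history))  # untrained probabilities over which option to reward
```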


In the first experiment, participants clicked on red or blue boxes to win fake currency, while the AI learned their selection patterns and used them to guide participants toward a specific choice. The AI succeeded about 70 percent of the time.
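The feedback loop in that experiment might look something like the toy simulation below. The participant model, payout rule, and habit rate are assumptions for illustration, not the study’s protocol.

```python
import random

# Toy version of the red/blue box loop (assumptions, not the study's
# protocol): the agent pays out only on the target color, and a simulated
# participant tends to repeat whichever color last paid out.
TARGET = "red"

def simulated_participant(history):
    paid = [color for color, rewarded in history if rewarded]
    if paid and random.random() < 0.8:
        return paid[-1]  # habit: repeat the last rewarded color
    return random.choice(["red", "blue"])

history, hits = [], 0
for _ in range(100):
    choice = simulated_participant(history)
    rewarded = choice == TARGET  # the steering rule: only the target pays
    history.append((choice, rewarded))
    hits += rewarded
print(f"target color chosen in {hits} of 100 trials")
```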

In the second experiment, participants were asked to watch a screen and press a button when a particular symbol (such as an orange triangle) was shown, but not to press it when another symbol (such as a blue circle) appeared. Here, the AI arranged the sequence of symbols so that participants made more mistakes, raising their error rate by about 25 percent.
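One plausible way to sequence symbols adversarially is sketched below: show the “no-go” symbol exactly when a run of “go” trials has primed the participant to press. The streak length and error model are invented for illustration, not taken from the study.

```python
import random

# Toy go/no-go sequencer (illustrative assumptions throughout): after a
# streak of "go" trials the urge to press is strongest, so the adversary
# inserts the "no-go" symbol exactly then.
def presses(symbol, go_streak):
    if symbol == "go":
        return True  # a correct press
    # false-press probability grows with the length of the preceding go streak
    return random.random() < min(0.1 + 0.1 * go_streak, 0.9)

def adversarial_sequence(n_trials, streak_len=4):
    sequence, streak = [], 0
    for _ in range(n_trials):
        if streak >= streak_len:
            sequence.append("no-go")
            streak = 0
        else:
            sequence.append("go")
            streak += 1
    return sequence

errors, streak = 0, 0
for symbol in adversarial_sequence(200):
    if symbol == "no-go" and presses(symbol, streak):
        errors += 1  # pressed when they should have held back
    streak = streak + 1 if symbol == "go" else 0
print("false presses out of 40 no-go trials:", errors)
```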

The third experiment consisted of several rounds in which a participant played an investor handing money to a trustee (the AI). The AI then returned some amount to the participant, who decided how much to invest in the next round. The game was played in two modes: in one, the AI sought to maximize the money it ended up with; in the other, it sought a fair distribution of money between itself and the human investor. The AI was highly successful in both modes.
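The two modes correspond to two different reward functions for the AI trustee. A minimal sketch, assuming the common trust-game convention that investments triple in transit (the study’s exact amounts are not given here):

```python
# Two illustrative trustee objectives for the investor game. The tripling
# rule is a standard trust-game convention, assumed here for illustration.
def selfish_reward(invested, returned):
    # Mode 1: maximize what the AI trustee keeps of the tripled investment.
    return 3 * invested - returned

def fairness_reward(invested, returned):
    # Mode 2: penalize any gap between the AI's share and the human's.
    ai_share = 3 * invested - returned
    return -abs(ai_share - returned)

# Example round: the human invests 10, so the trustee receives 30.
print(selfish_reward(10, 5))    # returning 5 leaves the AI with 25
print(fairness_reward(10, 15))  # an even 15/15 split scores the maximum, 0
```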

In each experiment, the machine learned from participants’ responses, identifying and targeting vulnerabilities in their decision-making. The result was that it learned to steer participants toward particular actions.


What this research means for the future of AI

These findings are still quite abstract, and the situations studied were limited and unrealistic. More research is needed to determine how this approach generalizes and how it can be put to work for the benefit of society.

But this research enhances our understanding of what AI can do and how people make choices.

It shows that machines can learn to steer human decision-making through their interactions with us.

This research has many potential applications, from strengthening the behavioral sciences and public policy to improving social welfare, to understanding and influencing how people adopt healthy eating habits or switch to renewable energy. Artificial intelligence and machine learning can identify people’s vulnerabilities in certain situations and help them avoid making poor choices.

This method could also be used to defend against influence attacks. Machines could be taught, for example, to alert us when we are being influenced online, and to help us shape our behavior so as to disguise our vulnerabilities (for example, by not clicking on certain pages, or by clicking on others to lay a false trail).
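The “false trail” idea could be as simple as mixing decoy actions in with real ones, so that an observer’s model of our preferences stays uninformative. A minimal sketch, with the decoy rate as an arbitrary assumption:

```python
import random

# Toy "cloaking" helper (an illustrative assumption, not a vetted defense):
# with some probability, substitute a random decoy for the real choice so a
# frequency-based observer sees a flatter, less exploitable pattern.
def cloaked_choice(real_choice, options, decoy_rate=0.3):
    if random.random() < decoy_rate:
        return random.choice(options)
    return real_choice

# Example: a user who always prefers "red" now looks less predictable.
clicks = [cloaked_choice("red", ["red", "blue"]) for _ in range(20)]
print(clicks)
```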

What next?

Like any technology, artificial intelligence can be used for good or ill, and good governance is crucial to ensuring it is deployed responsibly. Last year, CSIRO developed an AI ethics framework for the Australian Government as a first step on this journey.

Artificial intelligence and machine learning are typically very hungry for data, so it is essential to have effective systems in place for governing and accessing that data. Adequate consent and privacy processes are imperative when collecting information.

Organizations that use and develop artificial intelligence need to understand what these technologies can and cannot do, and to be aware of their potential risks as well as their benefits.

Caution about robots


DeepMind, an AI developer, recently announced its latest milestone: a bot named AlphaStar that plays the popular real-time strategy game StarCraft II at Grandmaster level.

This is not the first time a bot has surpassed humans in a strategic war game. In 1981, a program named Eurisko, designed by AI pioneer Doug Lenat, won the US championship of Traveller, a highly complex strategic war game in which players design a fleet of 100 ships. As a result, Eurisko was made an honorary admiral of the Traveller navy. The following year, the competition rules were revised to thwart computers, but Eurisko won for a second year in a row. When officials threatened to cancel the tournament if a computer won again, Lenat retired his program.


DeepMind’s public relations department would have you believe that StarCraft “has emerged by consensus as the next big challenge” in computer games, and that it “has been a major challenge for AI researchers for over 15 years.” In December last year, AlphaStar defeated Grzegorz “MaNa” Komincz, one of the world’s top professional StarCraft players. But that version of AlphaStar had much faster reflexes than any human and an unrestricted view of the game map (unlike human players, who can see only part of it at a time). It was hardly a level playing field.

Nevertheless, StarCraft has features that make AlphaStar a significant advance, if not an unexpected one. Unlike chess or Go, StarCraft players have incomplete information about the state of the game, the range of possible actions at any moment is far greater, and the game unfolds in real time while demanding long-term planning.

Robot Wars

This raises the question of whether in the future we will see robots that not only fight wars but also plan them. In fact, we already have both.


Despite numerous warnings from AI researchers, the founders of AI and robotics companies, Nobel Peace Prize laureates, and church leaders, fully autonomous weapons, also known as “killer robots,” have been developed and are likely to be used soon.

In 2020, Turkey deployed kamikaze drones on its border with Syria. These drones use computer vision to identify, track, and kill people without human intervention.

This is a terrible development. Computers do not have the moral capacity to make decisions about life or death. They have neither empathy nor compassion. “Killer robots” change the nature of war for the worse. As for “robot generals,” computers have been supporting generals’ planning for decades. During Operation Desert Storm in the Gulf War of the early 1990s, AI scheduling tools were used to plan the deployment of forces to the Middle East ahead of the conflict. A short time later, an American general told me that the money saved by doing so equaled everything spent on AI research up to that point.

Generals have also used computers extensively to war-game potential strategies. But just as we would not leave every battlefield decision to a single soldier, handing a general’s responsibilities wholesale to a computer would be a mistake, and a step too far.

Machines cannot be held responsible for their decisions; only humans can. That is a cornerstone of international humanitarian law. Nevertheless, generals increasingly rely on computer support in their decision-making, to cut through the fog of war and to cope with the vast amounts of information flowing in from the front. If this leads to fewer civilian deaths, less friendly fire, and more respect for international humanitarian law, we should welcome such computer assistance. But the buck must stop with humans, not machines.


Conclusion

Here is a final question to ponder. If tech companies like Google do not want us to worry about computers taking over the world, why build bots to win virtual wars rather than focusing on, say, more peaceful e-sports? With all due respect to sports fans, the stakes would be much lower.

Source: https://rasekhoon.net/article/show/1586940/%D9%87%D9%88%D8%B4-%D9%85%D8%B5%D9%86%D9%88%D8%B9%DB%8C-%D8%A7%D9%82%D8%AF%D8%A7%D9%85-%D8%A8%D9%87-%D9%81%D8%B1%DB%8C%D8%A8-%D8%AF%D8%A7%D8%AF%D9%86-%D9%88-%D9%86%DB%8C%D8%B2-%D8%AC%D9%86%DA%AF-%D9%88-%D8%B7%D8%B1%D8%AD-%D8%B1%DB%8C%D8%B2%DB%8C-%D8%A2%D9%86-%D9%85%DB%8C-%DA%A9%D9%86%D8%AF