Novelty: An Emerging Idea in Artificial Intelligence
Embrace the unexpected: changing the rules of the game to teach artificial intelligence how to handle novel situations.
Image: Most AIs today stop working when faced with unexpected situations, such as changes to a game’s rules.
My colleagues and I modified a digital version of Monopoly so that, instead of collecting $200 each time a player passes Go, the player is charged a wealth tax. We didn’t do this to gain an advantage or to deceive anyone. The goal was to throw a curveball at the AI agents that play the game.
Our goal is to help AI agents learn to handle unexpected events, something AI has done badly to date. Giving AI this kind of adaptability is important for futuristic systems such as surgical robots, but also for the here-and-now algorithms that decide who should be granted bail, who should be approved for a credit card, and whose résumé gets in front of a hiring manager. Failing to handle the unexpected in any of these situations can have catastrophic consequences.
Perhaps the day is not far off when AI can not only defeat humans at existing games but also quickly adapt to any variant of those games that humans can imagine.
Artificial intelligence agents need the ability to detect, characterize, and adapt to novelty in human-like ways. A situation is novel if it challenges, directly or indirectly, an agent’s model of the external world, which includes other agents, the environment, and their interactions.
While most people do not deal with novelty perfectly, they can learn from their mistakes and adapt. Faced with a wealth tax in Monopoly, a human player might realize that she needs cash on hand for the tax collector as she approaches Go. An AI player strongly biased toward acquiring properties and monopolies might not realize the right balance between cash and noncash assets until it is too late.
Adapting to Novelty in Open Worlds
Reinforcement learning is the field of research primarily responsible for “superhuman” game-playing AI agents and for applications such as self-driving cars. Reinforcement learning uses rewards and punishments to allow AI agents to learn by trial and error. It is part of the larger AI field of machine learning.
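To make the trial-and-error idea concrete, here is a minimal sketch of tabular Q-learning, one of the simplest reinforcement learning algorithms, written in Python. The environment object is an assumption for illustration: a hypothetical interface with reset(), step(action), and a list of actions, not any of the systems described in this article.

import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    # Table of estimated long-term value for each (state, action) pair.
    q = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Trial and error: occasionally explore a random action,
            # otherwise exploit the best-known one.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q[(state, a)])
            # The environment returns a reward (a punishment, if negative).
            next_state, reward, done = env.step(action)
            # Nudge the estimate toward the reward plus discounted future value.
            best_next = max(q[(next_state, a)] for a in env.actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

An agent trained this way becomes very good at the environment it practiced in, which is exactly why, as discussed below, it can stumble when the rules change.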
The “learning” in machine learning means such systems are already capable of dealing with limited kinds of novelty. Machine learning systems tend to perform well on input data that is statistically similar, although not identical, to the data on which they were originally trained. In practice, violating this condition is fine as long as nothing too unexpected happens.
Such systems can run into trouble in an open world. As the name suggests, open worlds cannot be completely and explicitly defined. Unexpected events can and do happen. Most important, the real world is an open world.
However, even “superhuman” AI is not designed to handle highly unexpected situations in an open world. This may be a consequence of how modern reinforcement learning works: it ultimately optimizes an AI for the specific environment in which it was trained. In real life, there is no guarantee that the unexpected will not happen. AI built for real life must be able to adapt to novelty in an open world.
Novelty as a First-Class Citizen
Returning to Monopoly, imagine that certain properties become subject to rent control. A good player, human or AI, would recognize those properties as poor investments compared with properties that can earn higher rents, and would not buy them. However, an AI that has never seen this situation, or anything like it, would likely need to play many games before it could adapt.
Before computer scientists can even begin to theorize about how to build such “novelty-adaptive” agents, they need a rigorous method for evaluating them. Traditionally, most AI systems have been tested by the same people who built them.
Competitions are more impartial, but to date, no competition has evaluated AI systems under conditions so unexpected that not even the systems’ designers could foresee them. Such an evaluation is the gold standard for testing AI on novelty, much as randomized controlled trials are the gold standard for evaluating drugs.
This research is expected to lead to AI agents that can handle novelty in many different environments.
In 2019, the U.S. Defense Advanced Research Projects Agency launched a program called Science of Artificial Intelligence and Learning for Open-world Novelty, abbreviated SAIL-ON. It now funds many groups, including my own at the University of Southern California, to research novelty adaptation in open worlds.
One innovative aspect of the program is that a team can either build an AI agent that handles novelty or design an open-world environment for evaluating such agents, but not both. Teams that build an open-world environment must also theorize about novelty in that environment. They then test their theories, and evaluate the agents built by other groups, by producing novelty generators: programs that inject unexpected elements into the environment.
Under SAIL-ON, my colleagues and I recently developed a simulator called Generating Novelty in Open-world Multi-agent Environments, or GNOME. GNOME is designed to test AI novelty adaptation on strategy board games that capture elements of the real world.
Image: A Monopoly board with tokens representing players, houses, and hotels. The author’s Monopoly-based novelty environment can challenge game-playing AI by introducing a wealth tax, rent control, and other unexpected conditions.
Our first version of GNOME uses the classic board game Monopoly. We recently demonstrated the Monopoly-based GNOME at a premier machine learning conference, where participants could inject novelties and see for themselves how preprogrammed AI agents performed. For example, GNOME can introduce the wealth tax or rent control novelties mentioned earlier and evaluate the AI after the change.
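As a rough illustration of what injecting such a novelty could look like under the hood, the sketch below swaps the pass-Go salary rule of a Monopoly-like simulator for a wealth tax. All class and method names here are hypothetical stand-ins, not GNOME’s actual API.

class PassGoSalary:
    """Standard rule: collect a $200 salary each time you pass Go."""
    def apply(self, player):
        player.cash += 200

class WealthTaxNovelty:
    """Novelty: instead of a salary, pay a tax proportional to total wealth."""
    def __init__(self, rate=0.05):
        self.rate = rate

    def apply(self, player):
        player.cash -= int(self.rate * player.total_assets())

def inject_novelty(game, novelty):
    # Replace the rule that fires on passing Go. Agents get no warning,
    # so any strategy tuned to the old rule must adapt mid-game.
    game.rules["pass_go"] = novelty

The point of this design is that the change is made to the environment, not to the agents, so an agent’s ability to notice and adapt to the new rule can be measured fairly.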
Reinforcement learning uses rewards and punishments to allow AI agents to learn by trial and error.
By comparing how the AI plays before and after the rule change, GNOME can determine just how much the novelty threw it off. If GNOME finds that an AI won 80 percent of its games before the novelty was introduced, but only 25 percent afterward, it would flag that AI as one with a lot of room for improvement.
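The comparison itself is simple arithmetic. Here is a minimal sketch using the win rates from the example above; the function is an illustration, not GNOME’s actual scoring code.

def novelty_impact(pre_win_rate, post_win_rate):
    # Fraction of pre-novelty performance lost after the rule change.
    return (pre_win_rate - post_win_rate) / pre_win_rate

drop = novelty_impact(0.80, 0.25)
print(f"Performance lost to novelty: {drop:.0%}")  # about 69%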
The Future: A Science of Novelty?
GNOME has already been used to evaluate novelty-adaptive AI agents developed by three independent organizations funded under the DARPA program. We have also built GNOMEs based on poker and on “war games” similar to Battleship. In the coming year, we will unveil GNOMEs for other strategy board games such as Risk and Catan. This research is expected to lead to AI agents that can handle novelty in many different environments.
Framing novelty as a central focus of modern AI research and evaluation is, as a byproduct, producing an initial body of work in support of a science of novelty. Not only are researchers like us exploring definitions and theories of novelty, but we are also exploring questions with far-reaching implications. For example, our team is studying when a novelty is expected to be so difficult that an AI should detect it and hand control over to a human operator if such a situation arises in the real world.
A situation is novel if it challenges, directly or indirectly, an agent’s model of the external world, which includes other agents, the environment, and their interactions.
In searching for answers to these and other questions, computer scientists are working to enable AI that can respond properly to the unexpected, including once-in-a-lifetime events like the COVID-19 outbreak. Perhaps the day is not far off when AI can not only defeat humans at existing games but also quickly adapt to any variant of those games that humans can imagine. It may even be able to adapt to situations we cannot conceive of today.