
Human-AI interaction in decision making


Here we look at why artificial intelligence should know when it needs to ask humans for help.

Artificial intelligence systems are a powerful tool for businesses and governments to process data and respond to changing conditions, whether on the stock exchange or on the battlefield. But there are still cases for which artificial intelligence is not ready.

We are computer scientists working to understand and improve the way algorithms interact with society. Artificial intelligence systems work best when the goal is clear and the data is high quality, such as when they are asked to recognize faces correctly after learning from many labeled pictures of other people.

An AI algorithm that cooperates with a human can deliver the efficiency of good AI decisions without being trapped by bad ones.

Sometimes AI systems work so well that users and observers are amazed at how perceptive the technology seems. However, success is sometimes hard to measure or is defined incorrectly, or the training data does not match the task at hand. In these cases, AI algorithms fail in unpredictable and dramatic ways, and it is not always immediately apparent that something went wrong. As a result, it is essential to avoid hype, to be careful about what AI can do, and not to assume that the solution it finds is always the right one.

When algorithms are at work, there must be a safety net to prevent harm to people. Our research has shown that, in some cases, algorithms can detect their own performance problems and ask humans for help. In particular, we show that asking humans for help can reduce algorithmic bias in some settings.

How reliable is the algorithm?

Artificial intelligence systems are already used in criminal sentencing, face-based profiling, résumé screening, health care enrollment, and other difficult tasks in which people's lives and well-being are at stake. Following an executive order issued by former US President Donald Trump, US government agencies have been searching for and deploying suitable AI systems.

Remember that artificial intelligence can reinforce misconceptions about how a task should be done, or magnify existing inequalities. This can happen even if no one explicitly tells the algorithm to treat anyone differently.

For example, many companies have algorithms that try to infer a person's characteristics from their face, such as guessing their gender. Systems developed by American companies classify white men significantly better than they classify women and dark-skinned people, and they do worst of all on dark-skinned women. Systems developed in China, by contrast, tend to do worse on white faces.

Skewed training data can improve or worsen these systems' performance at recognizing certain types of faces.

The difference is not that one group's faces are inherently easier to classify than another's. Rather, each algorithm is usually trained on a (possibly different) collection of data that is not nearly as diverse as the overall human population. If one type of face dominates the dataset, such as white men in the United States or Chinese faces in China, the algorithm will probably classify that group better than others.

No matter how the difference arises, the result is that algorithms can be more accurate for one group than for another. Algorithms can, however, be programmed to detect such shortcomings and follow up by asking a human to help solve the problem.

Continued human oversight of artificial intelligence

In high-stakes situations, the algorithm's confidence in its outcome, that is, its own estimate of the probability that it has produced the correct answer, is just as important as the outcome itself. People who receive an algorithm's output need to know how seriously to take the result, rather than assuming it is correct simply because a computer produced it.
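As a rough sketch of this idea, the snippet below trains a toy classifier and returns its prediction only when the model's own probability estimate clears a threshold; otherwise it flags the case for human review. The 0.9 threshold, the synthetic data, and the function name are illustrative assumptions, not part of the research described in this article.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical threshold: predictions below this confidence go to a human reviewer.
CONFIDENCE_THRESHOLD = 0.9

def predict_or_defer(model, x):
    """Return the model's prediction only when it is confident enough;
    otherwise flag the case for human review."""
    probs = model.predict_proba([x])[0]   # class probabilities for this input
    confidence = probs.max()              # the model's own confidence estimate
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"decision": int(probs.argmax()), "confidence": confidence, "deferred": False}
    return {"decision": None, "confidence": confidence, "deferred": True}

# Toy usage with synthetic data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

print(predict_or_defer(model, X[0]))
```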

Researchers have only recently begun to develop methods for identifying inequalities in algorithms and data, and efforts to fix those inequalities lag even further behind.

Can artificial intelligence make better decisions than humans?

We know that AI can perform computations quickly and refine its own algorithms, and that the resulting algorithms are largely based on outcomes inferred from human decisions. Now let's look at it from another angle: can artificial intelligence produce algorithms that make better decisions than humans?

The answer is an emphatic "yes." Machine learning can process far more data, far faster, than the human brain, which allows artificial intelligence to find patterns in data that are not immediately visible to humans.

So, when the input variables relate to the outcome we care about through data that was not inferred from human decisions, there is a real possibility that artificial intelligence can make better decisions than we do.

Many types of AI algorithms now measure their own internal confidence, that is, a prediction of how well they have analyzed a particular piece of data. In facial analysis, many algorithms have lower confidence on dark-skinned and female faces than on white male faces. It is unclear how much law enforcement agencies, which use these algorithms widely, have taken this issue into account.

The goal is for the AI itself to find the areas where its accuracy is not the same across different groups. On those inputs, the AI can delegate the decision to a human observer. This technique is especially well suited to high-volume tasks such as content moderation.
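One simple way to surface such areas is to audit accuracy separately for each demographic group and flag the groups where the algorithm falls noticeably short, so their cases can be routed to a human. The sketch below illustrates that audit; the group labels, the record format, and the 0.05 gap are hypothetical choices made for illustration only.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute accuracy separately for each demographic group.

    `records` is a list of (group_label, prediction, true_label) tuples;
    the group labels here are hypothetical placeholders.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, truth in records:
        total[group] += 1
        correct[group] += int(pred == truth)
    return {g: correct[g] / total[g] for g in total}

def groups_needing_review(records, max_gap=0.05):
    """Flag groups whose accuracy falls more than `max_gap` below the best group,
    so their cases can be routed to a human observer. The 0.05 gap is an
    illustrative choice, not a recommended value."""
    acc = accuracy_by_group(records)
    best = max(acc.values())
    return [g for g, a in acc.items() if best - a > max_gap]

# Toy example with hypothetical groups "A" and "B".
sample = [("A", 1, 1), ("A", 0, 0), ("A", 1, 1),
          ("B", 1, 0), ("B", 0, 0), ("B", 1, 1)]
print(accuracy_by_group(sample))      # {"A": 1.0, "B": 0.666...}
print(groups_needing_review(sample))  # ["B"]
```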

Human content moderators cannot keep up with the flood of images posted on social media sites. On the other hand, AI content moderators have been known to miss the context behind a post, misclassifying discussions of sexual orientation as explicit content or flagging the Declaration of Independence as hate speech. This can lead to one demographic or political group being improperly censored over another.

Algorithms can be more accurate for one group than for another.

To get the best of both worlds, our research suggests scoring all content automatically, using the same AI methods that are already common today. Our approach then uses newly proposed techniques to automatically locate possible inequalities in the algorithm's accuracy across different protected groups of people, and to hand decisions about certain individuals over to a human. As a result, the algorithm can remain largely unbiased about the people it does decide on, while humans decide about the people for whom an algorithmic decision would inevitably be biased.

This approach does not eliminate bias: it only "concentrates" the potential for prejudice and discrimination into a smaller set of decisions, which humans then make using common sense. That way, the AI can still do the bulk of the decision-making.
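Putting the two previous sketches together, a routing rule along these lines could keep the automatic decision only when the model is confident and the person does not belong to a group the audit flagged as less accurately served. The threshold, the score cutoff, and the group names are again illustrative assumptions, not values from the article.

```python
def route_decision(score, confidence, group, flagged_groups, threshold=0.9):
    """Keep the automatic decision only when the model is confident AND the
    person's group is not one where the algorithm is known to be less accurate.
    Otherwise route the case to a human. All constants here are hypothetical."""
    if group in flagged_groups or confidence < threshold:
        return {"route": "human", "score": score}
    return {"route": "automatic", "decision": score >= 0.5, "score": score}

# Example: suppose the audit above flagged group "B".
print(route_decision(score=0.8, confidence=0.95, group="A", flagged_groups=["B"]))
print(route_decision(score=0.8, confidence=0.95, group="B", flagged_groups=["B"]))
```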

It is essential to avoid hype, to be careful about what AI can do, and not to assume that the solution it finds is always right.

This is a situation in which an AI algorithm working with a human can deliver the efficiency of good AI decisions without being trapped by bad ones. Humans will then have more time to work on the difficult, ambiguous decisions that are crucial to ensuring fairness and justice.

Source: https://rasekhoon.net/article/show/1559779/%D8%AA%D8%B9%D8%A7%D9%88%D9%86-%D8%A7%D9%86%D8%B3%D8%A7%D9%86-%D9%88-%D9%87%D9%88%D8%B4-%D9%85%D8%B5%D9%86%D9%88%D8%B9%DB%8C-%D8%AF%D8%B1-%D8%A7%D8%AA%D8%AE%D8%A7%D8%B0-%D8%AA%D8%B5%D9%85%DB%8C%D9%85%D8%A7%D8%AA

