
Targeted Artificial Intelligence Is More Dangerous Than General Artificial Intelligence

There has been a lot of talk over the past five years about artificial general intelligence (AGI). General artificial intelligence is the highest level of artificial intelligence, one that works much like the human brain.

Unlike narrow artificial intelligence, which is developed to perform specific tasks, general AI can perform a wide variety of functions. It is imagined as a fully conscious algorithm that can do anything requiring complex processing, learn from what works, and apply those lessons in future situations. Such an AI would be so efficient and powerful that experts and scientists have repeatedly expressed concern about developing it. They believe that an artificial intelligence of this kind, like the robots in the famous Terminator movie series, could operate outside of human control.

In his speeches and tweets, Elon Musk, CEO of SpaceX and Tesla, has condemned the development of artificial general intelligence, which he believes could mean the end of humanity. During the SXSW event in 2018, he said: “The development of digital superintelligence is the greatest threat to human societies, because these systems are autonomous, do not need human help to do their work, and are not accountable to anyone for what they do. These superintelligences put additional pressure on society.”

“The day will come when artificial intelligence systems become more dangerous and deadly than nuclear bombs,” he said.

In an interview with a news network in 2014, the late Stephen Hawking expressed his deep concern, saying, “The development of full artificial intelligence could spell the end of the human race.”

Stuart Russell, a computer scientist, expressed deep concern in the short documentary Slaughterbots about weapons that use artificial intelligence to identify and kill humans. He believes short documentaries of this kind should be made about the dangers of using the technology for malicious purposes, both to make people aware and to pressure companies into halting the development of smart autonomous weapons.

However, far fewer people speak up about the dangers that today’s artificial intelligence systems pose to humans. Unfortunately, AI systems have already seriously damaged the fabric of societies, yet their destructive effects are not visible enough for everyone to notice. For example, intelligent algorithms have already been developed that try to identify people’s ethnicity from their faces.

Superintelligent systems acting as the supervisors to whom human labor is entrusted remain a distant prospect. There is comparatively little reason to worry today about super-intelligent systems dominating humans or wrecking the world: they are used mainly in manufacturing plants and factories, where their chief harm is unemployment. The more pressing concern is the hiring process. Today, some companies use artificial intelligence systems instead of hiring managers to evaluate the resumes they receive and invite suitable candidates for interviews. One of the biggest risks in developing intelligent algorithms for the human resources sector is that cultural and social biases are built into the design of these models, as the sketch below illustrates.
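One concrete way to catch such bias is a selection-rate audit. The following is a minimal sketch in Python, assuming a resume-screening model whose decisions we can log; the groups, decisions, and numbers are invented for illustration, and the “four-fifths” threshold is a common rule of thumb from US employment guidelines, not a law of nature.

```python
from collections import Counter

# Hypothetical output of a resume-screening model: (group, decision) pairs.
# All groups and numbers here are invented purely to illustrate the audit.
decisions = [
    ("group_a", "invite"), ("group_a", "invite"), ("group_a", "invite"),
    ("group_a", "reject"),
    ("group_b", "invite"), ("group_b", "reject"), ("group_b", "reject"),
    ("group_b", "reject"),
]

def selection_rates(decisions):
    """Fraction of each group's candidates the model invited to interview."""
    totals, invites = Counter(), Counter()
    for group, decision in decisions:
        totals[group] += 1
        if decision == "invite":
            invites[group] += 1
    return {group: invites[group] / totals[group] for group in totals}

rates = selection_rates(decisions)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# "Four-fifths" rule of thumb: flag the model if any group's selection
# rate falls below 80% of the best group's rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"possible adverse impact against {group}: {rate:.0%} vs {best:.0%}")
```

An audit like this does not explain why the model discriminates; it only detects that it does, which is usually the first step.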

Data scientists and statisticians have a famous saying: “Garbage in, garbage out.” Low-quality input yields incorrect output.

When it comes to machine learning algorithms, this proposition becomes a serious challenge. Deep learning, an essential subfield of machine learning, relies on neural networks whose layers of artificial neurons try to identify patterns. This adaptive pattern recognition is what allows computers to recognize a song after hearing only a few seconds of it, to detect when someone is speaking and turn the speech into text, or to produce deepfake content. In all these cases, data is the first and last word. The toy example below makes the point concrete.

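Here is a deliberately tiny sketch of “garbage in, garbage out”: a trivial majority-vote “model” (standing in for any learning algorithm) trained on partly mislabeled data reproduces the mislabeling exactly. Everything in it is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy data set: each sample is (feature, label). The true rule is
# label == feature, but most "b" samples were collected with wrong
# labels -- this stands in for low-quality, biased input data.
train = [("a", "a")] * 50 + [("b", "b")] * 10 + [("b", "a")] * 40

class MajorityPerFeature:
    """Tiny 'model': predicts the most common training label per feature."""

    def fit(self, data):
        buckets = defaultdict(Counter)
        for feature, label in data:
            buckets[feature][label] += 1
        self.rule = {f: c.most_common(1)[0][0] for f, c in buckets.items()}
        return self

    def predict(self, feature):
        return self.rule[feature]

model = MajorityPerFeature().fit(train)
print(model.predict("a"))  # "a" -- correct
print(model.predict("b"))  # "a" -- wrong: the model faithfully learned the garbage
```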

Whenever there is news that a specialist has used public photos from sites such as Facebook to train a face recognition program, it means those photos were fed as input to a machine learning algorithm. In general, specialists delete the images once they have been supplied to the algorithm and training is complete, because the photos are no longer useful to data scientists. Even so, security experts consider this approach a clear violation of users’ privacy.

Some people think the lovely selfies they post on networks like Instagram are worthless. In fact, the vast majority of selfies provided as input to algorithms carry rich information about individuals’ faces and ethnicities. From such data, an algorithm learns to classify individuals, communities, and ethnicities, and it becomes quite skilled at this over time. A system like that could be deployed at airport entrance gates to identify individuals, even citizens of another country, just by looking at their faces, and to tell the officer which specific, targeted questions to ask. Now imagine what happens if an intelligent algorithm can identify the people of any country just by looking at their faces.

How are law enforcement agencies using intelligent algorithms?

Since the early 1990s, ongoing police reporting of crime statistics in the United States has led law enforcement to adopt predictive models that concentrate police forces in the locations where those reports show the most crime. But if a large part of the police force is stationed in a particular area, is it not likely that more crime will simply be recorded there, while crime in areas where other ethnicities and cultures live, but where police presence is lower, goes uncounted?

The algorithm detects that crimes are more likely to occur in certain areas and suggests deploying more police there, where still more crimes may then be detected. Unfortunately, this feedback loop does not pinpoint where crimes actually occur; it only maps the places where police forces are most likely to observe them. The approach may seem straightforward, but its biases are a real problem. In the United States, for example, police patrol poorer areas and areas with different ethnic makeups more heavily, so the intelligent models predict more crime in exactly those areas. As a result, policing, restrictions, and pressure on the residents increase, and those residents develop a distorted, adversarial view of the police. We reiterate that this is not a fantasy story: such biased algorithms have already been developed and, as mentioned, handed to the police forces of developed countries. The deliberately simplified simulation below reproduces the loop.
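This is a minimal sketch, assuming two districts with identical true crime rates and a naive “predictive” allocator that only sees recorded crime; every number in it is invented.

```python
import random

random.seed(0)

# Two districts with the SAME underlying crime rate; district 0 simply
# starts with more patrols, so more of its crime gets observed and recorded.
TRUE_RATE = 0.3                 # chance a single patrol records a crime
patrols = [8, 2]                # initial allocation of 10 patrol units

for step in range(5):
    observed = [sum(random.random() < TRUE_RATE for _ in range(patrols[d]))
                for d in range(2)]
    # Naive "predictive" reallocation: send patrols where crime was recorded.
    total = sum(observed) or 1
    patrols = [max(1, round(10 * observed[d] / total)) for d in range(2)]
    print(f"step {step}: observed={observed} -> patrols={patrols}")

# Although both districts are identical, the district that is watched more
# keeps "confirming" its higher crime statistics, and the allocation tends
# to drift further toward it.
```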

Problems with Wrong Data

If law enforcement officers use a face recognition system to identify suspects in various crimes, and the algorithm has not been trained well on faces with darker skin tones, many innocent people can end up being arrested. Suspects “identified” by a faulty face recognition algorithm may in fact be not guilty. A per-group error audit, sketched below, is one way to expose such a flaw.
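The following sketch computes the false positive rate separately for each group, assuming a labeled test set of match decisions is available; the group names and counts are invented for illustration, while a real audit would use a large benchmark.

```python
# Hypothetical evaluation records from a face recognition system:
# (skin_tone_group, predicted_match, actually_same_person).
results = (
    [("lighter", True,  True)] * 95 + [("lighter", True,  False)] * 2 +
    [("lighter", False, False)] * 98 + [("lighter", False, True)] * 5 +
    [("darker",  True,  True)] * 80 + [("darker",  True,  False)] * 15 +
    [("darker",  False, False)] * 85 + [("darker",  False, True)] * 20
)

def false_positive_rate(results, group):
    """Among a group's true non-matches, how often the system said 'match'."""
    false_pos = sum(1 for g, pred, same in results
                    if g == group and pred and not same)
    negatives = sum(1 for g, pred, same in results
                    if g == group and not same)
    return false_pos / negatives

for group in ("lighter", "darker"):
    print(group, f"FPR = {false_positive_rate(results, group):.1%}")
# lighter FPR = 2.0%, darker FPR = 15.0%: with these invented numbers the
# second group would see roughly seven times as many innocent people flagged.
```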

The cases above are all part of the effort to replace humans with machine learning algorithms. As mentioned, intelligent algorithms can concentrate police forces in areas with high recorded crime rates. But if police arrest a person for a crime he did not commit because an AI algorithm, looking at security camera footage, decides he is the one who committed it, proving his innocence is hard: images of one person taken from different angles can look very similar to images of someone else. And, as noted, intelligent algorithms repeatedly show the same failure modes: discrimination, differentiation between citizens, recommendations that put pressure on the weak, and biased treatment of minorities. Unfortunately, some believe that artificial intelligence is free of human biases and therefore accept whatever results the algorithms produce, when the reality is otherwise: these algorithms can carry systematic biases that we must guard against.

Is better data the key to solving the problem?

Can the problems described above be solved with better data, or must they be solved some other way? Identifying the biases and trends in the data used to train machine learning models does lead to more efficient models, but limits remain. We will never be able to build fully neutral models, because analyzing the data that exhaustively is enormously complex and perhaps impossible. Instead of trying to build bias-free artificial intelligence systems, which is probably impossible, it may be best to ask ourselves what tasks we actually intend to accomplish. For example, do we really need to develop artificial general intelligence? Even so, inspecting the training data is a concrete first step, as the sketch below shows.
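This is a minimal sketch of such an inspection, assuming training records tagged with a demographic group and a label; the groups and counts are invented for illustration.

```python
from collections import Counter

# Hypothetical training records: (demographic_group, label).
# Invented numbers; the point is the check, not the data.
records = ([("group_a", 1)] * 700 + [("group_a", 0)] * 300 +
           [("group_b", 1)] * 40  + [("group_b", 0)] * 60)

counts = Counter(group for group, _ in records)
print("representation:", dict(counts))  # group_b is badly undersampled

for group in counts:
    positives = sum(1 for g, label in records if g == group and label == 1)
    print(f"{group}: positive-label rate = {positives / counts[group]:.0%}")
# 70% vs 40%: the label distribution itself differs by group, so even a
# perfectly trained model reproduces the skew -- better data helps, but it
# cannot make the model neutral on its own.
```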

Rashida Richardson, a lawyer and researcher on algorithmic bias at Rutgers Law School in New Jersey, believes the solution is straightforward. “Instead of trying to explain away this dark part of machine learning, it’s better to focus on the root problems of the world and use artificial intelligence to correct real problems,” she says. “Then we can look to build reliable smart devices.”

Last Word

Perhaps in the distant future we will need to worry about artificial intelligence like that of science fiction movies. For now, though, it is essential to educate people about the real dangers of today’s artificial intelligence and to enact laws that hold companies developing intelligent algorithms accountable. Those rules must be followed, and biases must be removed from the world of artificial intelligence.
