Why Should We Not Ignore Ethics In The Field Of Artificial Intelligence?

Most professionals who enter the world of artificial intelligence start their careers as computer programmers and stay in that profession for a long time.

However, experts believe today’s intelligent machines are advanced enough to read a person’s emotional state, and even detect lying, by analyzing the human face, which is a significant issue for advertisers and many organizations.

Today, computer scientists are building systems that control what a billion people see every day, cars that decide how to navigate the streets, and weapons that can decide on their own whom to shoot. According to some experts, these developments signal an ethical decline in technology.

Others, however, see this as simply machine intelligence at work. We are using computation to organize and shape more and more of our decisions, and, interestingly, we increasingly ask it questions that have no single correct answer, questions that are open-ended and subjective.

We ask whom the company should hire, which friend request to act on, whose face looks more like a criminal’s, which news story or movie to recommend, and so on.

We have been using computers for a long time, but this time things are different.

It is a historic turning point, because we cannot anchor computation for such subjective decisions the way we can anchor it for flying airplanes or building bridges. Are the planes safe? Will this bridge collapse? There we agreed on clear benchmarks, and we have the laws of nature to guide us.

We are pushing hard to make software more powerful so it can handle more complex tasks, but this also makes the software less transparent and more complex. In the last decade, complex algorithms have advanced by leaps and bounds.

They can recognize human faces and handwriting, detect credit card fraud, block spam, translate between languages, detect tumors in medical images, and beat humans at chess and Go.

Most of these advances come from a method called machine learning. These systems learn by churning through data. Machine learning differs from traditional programming, where you give the computer detailed, exact, painstaking instructions. Instead, you take a system and feed it lots of data, including the unstructured data we generate in our digital lives.

The problem is that these systems do not produce a single, logically certain answer; in most cases there is no simple yes or no. They work primarily with probabilities.
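As a rough illustration of that difference, here is a minimal Python sketch (using scikit-learn, with invented data): the first function is traditional programming, where every rule is written by hand; the second is a model fit to labeled examples, which returns a probability rather than a definitive answer.

```python
# Minimal sketch: hand-written rules vs. a learned, probabilistic model.
# The data is invented purely for illustration; requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rule_based_spam_filter(num_links: int, has_attachment: bool) -> bool:
    """Traditional programming: every decision criterion is spelled out by hand."""
    return num_links > 5 or has_attachment

# Machine learning: the same kind of decision is learned from examples.
# Each row is [num_links, has_attachment]; labels are 1 = spam, 0 = not spam.
X = np.array([[0, 0], [1, 0], [7, 1], [9, 0], [2, 1], [8, 1]])
y = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# The model does not return a hard yes/no; it returns a probability,
# and there is no simple, human-readable rule explaining how it got there.
print(model.predict_proba([[6, 1]]))  # e.g. [[0.2, 0.8]], i.e. roughly 80% "spam"
```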

A director of artificial intelligence at Google has described this as “the unreasonable effectiveness of data.” The less encouraging part is that we don’t really understand what the system has learned.

That is the power of machine learning, and it is also the central problem of intelligent systems. It is less like giving instructions to a computer and more like training a living machine that we don’t fully understand and don’t fully control. The system can learn the wrong thing, and even when it learns the right thing, we still cannot describe precisely how it does so. We don’t know what an intelligent machine is “thinking.”

Imagine a hiring algorithm: a system that selects people to hire using machine learning, trained on data about previous employees and instructed to find and hire candidates likely to be highly productive for the company. Sounds good. HR managers and executives were excited about such systems; they believed this approach would make hiring more objective and less biased, giving people a fairer chance than they would get from prejudiced human managers.
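To make the idea concrete, here is a hedged sketch of how such a system might be built (the column names, data, and model choice are invented for illustration): the model is simply fit to records of past employees and a “high productivity” label, so whatever patterns that history contains, wanted or not, are exactly what it learns to reproduce.

```python
# Hypothetical hiring model trained on historical employee records.
# All column names and values are invented; requires pandas and scikit-learn.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

past_employees = pd.DataFrame({
    "years_experience":  [1, 4, 7, 2, 10, 3],
    "referral":          [0, 1, 1, 0, 1, 0],
    "commute_km":        [30, 5, 12, 45, 8, 25],
    "high_productivity": [0, 1, 1, 0, 1, 0],  # label taken from past reviews
})

X = past_employees.drop(columns="high_productivity")
y = past_employees["high_productivity"]

# The model learns whatever patterns the historical data contains,
# including any bias baked into who was hired and how they were rated.
model = RandomForestClassifier(random_state=0).fit(X, y)

candidate = pd.DataFrame(
    [{"years_experience": 5, "referral": 0, "commute_km": 40}]
)
print(model.predict_proba(candidate))  # estimated probability of "high productivity"
```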

However, these systems are becoming ever more sophisticated. Computing systems can now infer a great deal about you from scattered digital traces, even things you have never disclosed: your social tendencies, your personal characteristics, what you know, and more, and they can do so with considerable predictive accuracy. Crucially, they can draw conclusions and inferences about things you have not even revealed.

For example, some developers of machine learning algorithms are working on computational systems to predict people’s chances of depression using information from social networks. Interestingly, in some cases, they provide exciting results.

Such a system can predict the likelihood of depression months before the onset of any symptoms: the person shows no signs of the illness yet, but the model already flags the risk.

It is an impressive achievement for psychology, but now imagine the same capability in a hiring context.

Suppose an intelligent hiring system rejects a qualified candidate because of a high predicted probability of future depression, even though the person has no symptoms at the time of hiring and may only be somewhat more likely to develop them later. Or suppose it screens out a woman because she might become pregnant in the next year or two, even though she is not pregnant now.

To deal with these failure modes of intelligent systems, some experts have proposed a protective mechanism: equipping machine learning algorithms with something similar to an aircraft’s black box, so that their decisions can be examined after the fact.

Another issue is that intelligent systems are usually trained on data generated by our own actions, on human imprints, so they can only reflect what we have already done. These systems can pick up on our biases, amplify them, and feed them back to us, while we tell ourselves we are just doing objective computation.

One of the troubling cases involves biased predictions in criminal justice. In one American city, for example, a man was sentenced to six years in prison for evading the police.

Many people are unaware of this, but intelligent algorithms are increasingly used in parole and sentencing decisions. How well do they work? ProPublica, a non-profit investigative newsroom, audited one such algorithm using the public records it could obtain. It found that the predictions were biased, that their predictive power was little better than chance, and that the system wrongly labeled black defendants as future offenders at twice the rate of white defendants.
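The core of the kind of audit ProPublica ran can be sketched in a few lines: given an algorithm’s risk flags, the actual outcomes, and a group label, compare false positive rates across groups. The data below is synthetic, purely to show the computation, and assumes pandas is available.

```python
# Hedged sketch of a fairness audit: compare false positive rates by group.
# All data here is synthetic and only illustrates the computation.
import pandas as pd

audit = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "flagged":    [1,   0,   0,   0,   1,   1,   1,   0,   0,   1],  # predicted high risk
    "reoffended": [0,   0,   0,   0,   1,   0,   0,   0,   0,   1],  # actual outcome
})

def false_positive_rate(df: pd.DataFrame) -> float:
    """Share of people who did NOT reoffend but were still flagged as high risk."""
    did_not_reoffend = df[df["reoffended"] == 0]
    return (did_not_reoffend["flagged"] == 1).mean()

# If one group's rate is much higher, non-reoffenders in that group are
# being wrongly labeled as future offenders more often.
for group, rows in audit.groupby("group"):
    print(group, false_positive_rate(rows))  # here: A 0.25, B 0.50
```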

These systems can easily make mistakes of their own. Do you remember Watson, IBM’s intelligent machine that competed on Jeopardy!? Watson performed well in the competition, but in the final round it was asked which U.S. city’s largest airport is named for a World War II hero. The human contestants answered Chicago; Watson answered Toronto, a mistake no human would make.

An intelligent machine can make such mistakes in places where you don’t expect them and aren’t prepared for them, and there the consequences can be serious, especially when something as minor as a quirk in a person’s everyday routine is enough to keep them from being hired.

In May 2010, a feedback glitch in Wall Street’s selling algorithms triggered a “flash crash” that wiped out enormous market value, on the order of a trillion dollars, in 36 minutes. Now imagine such an error in connection with autonomous military weapons.

The fact is that it is human beings who create these biases in the first place. Intelligent algorithms make mistakes just as humans do, and we cannot escape that. We cannot outsource our responsibilities to machines; artificial intelligence does not give us a free get-out-of-ethics card.

Data scientist Fred Benenson calls this “mathwashing.” We can and should use computation to help us make better decisions, but we cannot ignore the ethical questions technology raises. We must accept responsibility for the judgments our algorithms make, and we must admit that mathematical calculation does not bring objectivity to messy, value-laden human affairs.

Artificial intelligence is here with us, and that means we must hold ever more firmly to human values and ethics.