
What Is Artificial Intelligence? Everything We Need To Know About Today’s Mysterious And Fascinating Technology

Artificial Intelligence Is One Of The Oldest Achievements In The World Of Technology And Today Plays An Important Role In The Lives Of Users Around The World.

When we hear or read the term artificial intelligence, various images and sounds form in our minds. Some of us hear the voices of intelligent assistants such as Siri, Cortana, and Alexa; others recall the terrifying, disturbing imagery of science fiction films such as Terminator.

More serious movie lovers will most likely remember the innocent face of David in Steven Spielberg’s acclaimed film A.I. Artificial Intelligence.

Whatever image it conjures, artificial intelligence is already part of the lives of today’s technology users and will remain so for the foreseeable future: a partner that could build us a bright future, or perhaps a ruin like the one in Terminator.

What is artificial intelligence?

In computer science, artificial intelligence (or machine intelligence) is intelligence demonstrated by machines rather than by humans. Reference books in the field describe it as the study of intelligent agents, defined as “any device that perceives its environment and takes actions that maximize its chance of success.” In general, the term artificial intelligence is used for machines or computers that perform well at cognitive activities we associate with the human mind, most importantly “learning” and “problem-solving”.

The activities counted as signs of machine intelligence change over time: as machines become more capable, some tasks stop being regarded as intelligent at all. Tesler’s theorem puts it bluntly: artificial intelligence is whatever has not been done yet. As a result, tasks such as optical character recognition no longer mark a machine as smart. In the modern world, more complex tasks, such as recognizing human speech, competing in strategy games like chess, and autonomously guiding vehicles, define real intelligence in computers.

Human thinking about intelligent machines dates back centuries

As an academic discipline, AI dates back to the mid-twentieth century, although human thinking about intelligent machines goes back centuries. The field has experienced many ups and downs, and in the 21st century it has become one of the most important topics of study and debate. With the dramatic growth in computing power and in the data available to train machines, the present century is the century in which artificial intelligence is flourishing and has become an indispensable part of the technology industry.

History of artificial intelligence

Thinking machines and artificial beings appear in documents from ancient times, first as devices in myths and storytelling. Centuries later, intelligent machines appeared in works of fiction such as Frankenstein and the play R.U.R. The characters in these stories posed the first questions about the ethics of artificial intelligence and, in a way, the first concerns.

The study of reason and logic goes back to the ancient philosophers, but modern AI draws more directly on Alan Turing and his theory of computation. Turing showed that a machine, by shuffling symbols as simple as zero and one, can simulate any conceivable act of mathematical deduction. This insight is known as the Church–Turing thesis.

Developments in fields such as neuroscience, information theory, and cybernetics led researchers to consider the possibility of building an electronic brain. Turing reframed the question of machine intelligence as: “Can a machine behave intelligently?” The first formal work in this area was McCulloch and Pitts’ 1943 paper, which described artificial neurons that are, in Turing’s sense, computationally complete.

The field of artificial intelligence research was born at a 1956 workshop at Dartmouth College. John McCarthy separated the new field from cybernetics and the theories of cyberneticists such as Norbert Wiener, and it was he who coined the term “artificial intelligence”. Pioneers of the field include Allen Newell, Herbert Simon, John McCarthy, Marvin Minsky, and Arthur Samuel, who, with the help of their students, developed programs the press of the day described as astonishing.

Equipped with these first intelligent programs, computers could do remarkable things: learn and play checkers, solve algebra problems, prove logical theorems, and speak English. By the mid-1960s, artificial intelligence research had become a major topic in the technology world, attracting massive investment. The US Department of Defense was a major backer of AI projects, and laboratories were established in other countries as well. Researchers were very optimistic in those years: Herbert Simon predicted that within 20 years machines would be able to do any work a human can do, and Marvin Minsky believed that within a generation the problem of creating artificial intelligence would be substantially solved.

From the mid-1970s onward, there were few successes in the development of advanced artificial intelligence.

By the mid-1970s, researchers’ efforts had not delivered on the field’s promises, and every advance seemed to reveal new challenges. Governments in the United States and Britain gradually cut their investment in AI projects, and an era known as the “AI winter” began, in which finding and raising capital for AI projects became extremely difficult.

Entering the 1980s, the first significant successes appeared despite the weak investment climate. Expert systems were born: programs able to simulate the knowledge and analytical skills of human specialists. By the mid-1980s the AI market had grown to billions of dollars, and Japan’s “Fifth Generation Computer” project seemed to prove the field’s promise. The United States and the United Kingdom were encouraged to invest again, but the collapse of projects such as the Lisp machine darkened the outlook once more, and a second, longer funding drought began.

In the mid-1980s, MOS and VLSI chip technology, in the form of CMOS, made it practical to build artificial neural networks (ANNs), and this hardware once again made machine intelligence a hot topic. In the 1990s and the early 21st century, artificial intelligence was put to work on tasks such as data mining and medical diagnosis, finally demonstrating the field’s practical potential. In the early years of the 21st century, AI was integrated with statistics, economics, and mathematics, and a new era in the development of machine intelligence began. Perhaps the defeat of world chess champion Garry Kasparov by IBM’s Deep Blue computer in 1997 was the spark for the modern AI explosion.

A decade of tangible achievements

The 2010s can be considered the decade in which AI achievements became clearly visible in everyday life. In 2011, IBM’s Watson computer faced Brad Rutter and Ken Jennings, two of the most successful champions of the popular American television quiz show Jeopardy!, and defeated both by a wide margin. In 2012, faster computers, more advanced algorithms, and access to larger data sources enabled new advances in machine comprehension and learning. Deep learning approaches took off around the same time, sharply increasing the appetite for data to feed AI systems.

Applications of artificial intelligence

Notable applications of AI in users’ lives in the 2010s include the Xbox 360’s Kinect sensor, which, after years of research and development, could read the three-dimensional structure of the human body. Voice assistants were added to smartphones over time, bringing the technology further into our daily lives. The next major milestone was the defeat of Go champion Lee Sedol by the AlphaGo AI in 2016. A year later, AlphaGo defeated Ke Jie, then the world’s top-ranked player, in what many regard as one of the most important milestones in the history of AI. The game of Go is far more complex than chess, and the computer’s ability to beat a human champion was a testament to its intelligence.

In a Bloomberg article, Jack Clark called 2015 a landmark year for artificial intelligence: at Google, AI went from scattered use in 2012 to more than 2,700 projects in 2015, an explosion in applications of this old science. The growth of cloud computing infrastructure and of the data available to researchers made neural networks accessible tools and technology development easier. In 2017, one survey found that one in five companies had used artificial intelligence in some part of their operations. Today we have reached a point where, for many of us, life would be hard to imagine without intelligent agents, while concerns about the unchecked development of artificial intelligence grow by the day.

Types of artificial intelligence

At a high level, artificial intelligence is divided into two broad types: narrow AI and general AI. This classification helps in understanding AI’s concepts and achievements and how they are developed. Narrow AI is also known as limited AI or “weak AI”.

Narrow AI is the intelligence we see in computers today: systems that, through automated training or learning, can perform specific tasks without being explicitly programmed for them. This kind of intelligence appears in applications such as the speech and language recognition of virtual assistants like Siri, the vision systems of self-driving cars, and the product recommendation engines of online retailers. Unlike humans, such systems can only learn to perform a limited range of tasks, which is why they are called narrow (or limited) artificial intelligence.

Narrow AI capabilities

Today there are many applications of narrow AI, and their number grows by the day. Interpreting visual data is one important application, seen for example in industrial drones tasked with inspecting oil pipelines. Narrow AI can also organize and plan personal and work calendars, and even cooperate with other AI agents; we see this collaboration in everyday tasks such as booking a hotel or hailing a car.

Artificial intelligence built for specific tasks is called narrow intelligence

Narrow AI is also widely used in medicine; some machines can already help radiologists spot possible tumors. These intelligent agents are likewise at work across social media, making life in these online communities easier and healthier: AI on social networks can detect irrelevant or abusive content, and ordering the content feed is one of its simpler jobs. Combining narrow AI with IoT devices opens up many further applications.

General AI abilities

General artificial intelligence differs greatly from the narrow type. This style of intelligence can exhibit very human-like behavior: it is more flexible and can learn the skills needed for a wide variety of tasks. Everything from cutting hair to organizing a manager’s sprawling files, and even reasoning and drawing conclusions from accumulated information and experience, would be within the reach of general artificial intelligence.

The character of David (a robot) in Steven Spielberg’s film A.I. Artificial Intelligence

The artificial intelligence we see in movies, the kind that fuels fears of machine domination, is general artificial intelligence. HAL 9000 in 2001: A Space Odyssey and Skynet in Terminator are general AIs (AGIs) capable of dominating humans. Of course, no such intelligence exists in the world today, despite researchers’ best efforts to create it, and scientists’ predictions of when general AI will arrive vary widely.

In 2012 and 2013, Vincent C. Müller and Nick Bostrom, specialists in artificial intelligence and philosophy, surveyed four groups of AI experts. The results put the probability of achieving general AI at about 50 percent by the 2040s or 2050s, rising to 90 percent by 2075. The team went further and used the term “superintelligence,” which Bostrom defines as any intelligence that exceeds human cognitive abilities in virtually all domains. He expects superintelligence to arrive roughly 30 years after general AI.

Another group of AI experts and theorists considers estimates of the 2040s and 2050s far from reality. In their view, such milestones remain distant because today’s approach to AI development does not begin from an understanding of the human brain and mind; our limited knowledge of the brain, they argue, makes faster progress toward general AI impossible.

Key factors in artificial intelligence

Artificial intelligence today encompasses several sub-concepts and definitions, and it is worth becoming familiar with some of its key elements:

  • Recursive (rather than strictly linear) processing enables multiple levels of abstraction in an intelligent system.
  • Information is processed in context: at each point it is first interpreted according to the subject and situation, but the contexts that move between and across abstract concepts play an important role in how that information is transformed.
  • Classification is one of the main parts of artificial intelligence processes.
  • The information graph is constantly changing, using filters for change that the system itself constructs from the information it already holds.
  • Intelligence is point-wise, distributed, and probabilistic. In other words, the information in the system is never complete or all-encompassing, and an AI decision can only be made once enough evidence has accumulated to confirm or refute a fact.
  • At every point in the system, information lives inside a model, but the model itself is flexible and can modify itself, unlike conventional models, which are predefined and fixed.
  • The system now has a level of self-awareness.

These key concepts show that by 2020 artificial intelligence had reached a remarkable level of complexity, and that its processing models never stay fixed.

Machine learning, neural networks, and deep learning

The history section above introduced two concepts as the main drivers of this technology’s flourishing. Machine learning is one of the principal tools for developing intelligence in machines and underpins its basic concepts; a machine that can learn has taken the first step toward becoming intelligent, much as a human does.

Machine learning is the main axis of the development of artificial intelligence

In a simple definition, machine learning begins by feeding the machine large amounts of data, which it then uses to learn how to perform specific tasks such as understanding speech or tagging images. Data is a key ingredient of machine learning, which is why technology companies have collected ever more of it in recent years; big data and machine learning are now deeply intertwined. The third concept completing this basic triangle of artificial intelligence is the neural network.

Neural networks are the key to machine learning’s processing. Inspired by the structure of neurons in the human brain, they consist of multiple interconnected layers of simple algorithmic units called neurons, and these layers exchange data with one another. Each neuron learns to perform a small part of the overall task by weighting, or prioritizing, the data passing through it. During learning, the weights applied to the input data are adjusted until the network finally produces the required output; at that point, the network has learned to perform its task properly.
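
To make this concrete, here is a minimal sketch in Python: a tiny two-layer network whose weights (the “priorities” described above) are nudged on every pass until the output matches the target. The XOR task, layer sizes, and learning rate are illustrative assumptions, not something taken from the article.

```python
import numpy as np

# Toy dataset: learn XOR, a task a single neuron cannot solve on its own.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # weights: input layer -> hidden layer
W2 = rng.normal(size=(4, 1))   # weights: hidden layer -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: each layer transforms the data it receives from the previous one.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Backward pass: nudge the weights to reduce the prediction error.
    error = output - y
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hid

print(np.round(output, 2))  # approaches [[0], [1], [1], [0]] as the network learns
```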

Examples of artificial intelligence

Deep learning is a concept born of machine learning. In this style of learning, neural networks grow into very large structures with many layers, each capable of processing enormous amounts of data. Deep learning is what has given today’s computers their most impressive learning abilities, with results visible in computer speech and vision recognition.

Evolutionary computation is a field of artificial intelligence research that grew up alongside neural networks. Researchers have proposed a style of AI based on Darwinian ideas and the concept of genetic mutation, an approach that has even led to artificial intelligence capable of building other artificial intelligence. Using evolutionary algorithms to optimize neural networks is known as neuroevolution and is expected to play a significant role in future generations of intelligent systems. A recent advance in this area came from Uber’s AI laboratory, which used genetic algorithms to train deep neural networks.
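
As a rough, hypothetical illustration of the neuroevolution idea (not Uber’s actual method), the sketch below evolves the weights of a one-layer network with a genetic-style loop: score a population of candidate weights, keep the fittest, and mutate them. The regression task and all parameters are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "network": a single linear layer mapping 3 inputs to 1 output.
def predict(weights, X):
    return X @ weights

# Fitness: how well the weights fit a small regression task (higher is better).
X = rng.normal(size=(32, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w

def fitness(weights):
    return -np.mean((predict(weights, X) - y) ** 2)

# Evolution loop: keep the fittest weight vectors and mutate them.
population = [rng.normal(size=3) for _ in range(20)]
for generation in range(200):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:5]                                # selection
    population = [p + rng.normal(scale=0.1, size=3)     # mutation
                  for p in parents for _ in range(4)]

best = max(population, key=fitness)
print(np.round(best, 2))   # drifts toward [1.5, -2.0, 0.5] over the generations
```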

Expert systems are another concept developed within artificial intelligence. These systems are programmed with rules that allow them to make decisions based on large bodies of data, simulating the judgement of a human expert in a specific domain. An aircraft autopilot is one example of an expert system.
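
A toy sketch of the rule-based idea behind expert systems follows; the facts and rules are invented for illustration, whereas a real system would encode thousands of rules elicited from human experts.

```python
# Each rule maps a set of required facts to a conclusion (an action to recommend).
RULES = [
    ({"altitude_low", "descent_rate_high"}, "pull_up"),
    ({"airspeed_low"}, "increase_throttle"),
    ({"on_course", "airspeed_ok"}, "hold_heading"),
]

def infer(facts: set[str]) -> list[str]:
    """Fire every rule whose conditions are all present in the observed facts."""
    return [action for conditions, action in RULES if conditions <= facts]

# Example: the observed situation is encoded as a set of facts.
print(infer({"altitude_low", "descent_rate_high", "airspeed_ok"}))
# -> ['pull_up']
```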

Dedicated machine learning processors

Among recent achievements in artificial intelligence, advances in machine learning, and especially deep learning, have had the greatest impact on the field’s progress. A large part of this was made possible by the rise of big data, and growing parallel-computing power has accelerated development further: clusters of GPUs are now used to train machine learning systems.

The development of dedicated processors has advanced data processing in machine learning

GPU clusters are powerful systems for training machine learning models and are now available to practitioners as cloud services. Alongside them, the development of dedicated chips for running and training machine learning models has accelerated. One such processor is Google’s Tensor Processing Unit (TPU), built to accelerate models written with the company’s TensorFlow software library.

Google’s dedicated chips are used not only to develop DeepMind and Google Brain models but also for everyday functions such as the company’s translation service and image recognition in photo search. Members of the public can also use cloud services such as the TensorFlow Research Cloud to develop their own machine learning models on Google’s processors.
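
As a rough illustration of what a model written against the TensorFlow library looks like, here is a minimal Keras sketch on synthetic data. The layer sizes and the data are assumptions made for this example; in principle the same model definition can be trained on CPUs, GPUs, or cloud accelerators such as TPUs.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in data: 1,000 samples with 20 features, two classes.
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("int32")

# A small feed-forward model defined with the Keras API.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))   # [loss, accuracy]
```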

Supervised learning

Supervised machine learning is one of the most common methods of training models: the AI system is trained on many labeled examples. The training data may be a collection of images whose contents are marked with labels, or texts whose main topic is indicated by annotations. The model uses these labels to learn how to label new data.

Training a model from labeled examples in this way is what makes the learning “supervised.” Human workers, hired through platforms such as Amazon Mechanical Turk, are often used to label the raw data. Training such models requires large datasets, and sometimes millions of examples must be fed to the algorithm before it learns a particular task.
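
A minimal sketch of this supervised workflow, using the widely available scikit-learn library: the model is shown labeled examples (handwritten digits with their correct labels), fits its parameters, and then labels images it has never seen. The choice of dataset and model here is an illustrative assumption.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Labeled examples: small images of handwritten digits paired with the correct digit.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Supervision": the model learns from the (image, label) pairs.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)

# The trained model now labels new, unseen images.
print(model.score(X_test, y_test))      # accuracy on held-out examples
print(model.predict(X_test[:5]))        # predicted digit labels
```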

Training datasets for machine learning models are becoming larger and more accessible. Google offers a dataset called Open Images containing about 9 million images, and YouTube provides a collection of roughly seven million labeled videos. Others include the pioneering ImageNet database, with some 14 million images sorted into categories; about 50,000 people, most of them hired through Amazon’s platform, spent two years reviewing and categorizing its image labels.

As AI tools advance, access to huge labeled datasets may become less critical than access to massive processing power. In recent years, generative adversarial networks (GANs) have shown that machine learning systems can generate large amounts of their own training data after being given only a small amount of real data. This approach is likely to advance semi-supervised learning, in which systems are trained with far smaller datasets than today’s.
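
To give a feel for the adversarial idea behind GANs, here is a compact sketch in PyTorch: a generator learns to produce samples that a discriminator cannot tell apart from real data. The “real” data is a simple one-dimensional Gaussian chosen purely for illustration; real GANs use far larger networks and work on data such as images.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data: samples from a Gaussian with mean 4 (an illustrative choice).
def real_batch(n=64):
    return torch.randn(n, 1) + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to separate real samples from generated ones.
    real, fake = real_batch(), G(torch.randn(64, 8)).detach()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())   # drifts toward 4.0 as the GAN trains
```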

Unsupervised learning

Unsupervised learning takes place without labeled datasets. The algorithms try to find common patterns in the data, looking for similarities that make it easier to group items together; for example, grouping similar kinds of fruit, or cars with similar engine sizes.

Unsupervised learning is not meant to pick out particular items from a dataset; the algorithms simply look for data with similar characteristics. A practical example is a news feed that groups similar stories together each day.
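
A minimal sketch of this grouping idea using k-means clustering from scikit-learn; the two-feature “car” data is invented for illustration, and note that no labels are ever shown to the algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Unlabeled data: two loose groups of "cars" described by engine size and weight.
small_cars = rng.normal(loc=[1.2, 1000], scale=[0.2, 100], size=(50, 2))
large_cars = rng.normal(loc=[3.0, 1800], scale=[0.3, 150], size=(50, 2))
X = np.vstack([small_cars, large_cars])

# The algorithm looks for similarity on its own; no labels are provided.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters[:5], clusters[-5:])   # the two groups end up in different clusters
```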

Reinforcement learning

Reinforcement, or reward-based, learning is much like training a pet. The system is rewarded for producing the desired output and therefore tries to maximize its reward given the input data. Training is largely a matter of trial and error, until the choices that yield the greatest reward are found among the many options.

Google DeepMind’s Deep Q-Network is a well-known example of reinforcement learning. It has beaten professional human players at a range of video games. The system receives the raw pixels of each game and works out features such as the distance between objects on the screen; then, by observing the score in each game, it builds a model of which actions lead to the highest score.
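
For intuition, here is a heavily simplified, tabular cousin of the Deep Q approach described above: the agent tries actions, receives rewards, and updates a table of action values so that high-scoring choices win out. The tiny corridor environment and all parameters are illustrative assumptions and bear no relation to DeepMind’s actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Environment: a corridor of 5 cells; reaching the right end gives a reward of +1.
N_STATES, ACTIONS = 5, (-1, +1)          # actions: step left or step right
Q = np.zeros((N_STATES, len(ACTIONS)))   # the agent's table of action values

alpha, gamma, epsilon = 0.1, 0.9, 0.2    # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state < N_STATES - 1:
        # Trial and error: usually pick the best-known action, sometimes explore.
        a = rng.integers(2) if rng.random() < epsilon else int(Q[state].argmax())
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0

        # Q-learning update: move the estimate toward reward + discounted future value.
        Q[state, a] += alpha * (reward + gamma * Q[next_state].max() - Q[state, a])
        state = next_state

print(np.round(Q, 2))   # the "move right" column dominates once learning converges
```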

Artificial intelligence in medicine

Medicine is one of the important areas of application for artificial intelligence. Intelligent machines now work alongside doctors and specialists to help diagnose disease from medical images, and they can detect genetic patterns associated with particular diseases. In pharmacology, artificial intelligence can help discover more promising molecules for drug development.

Numerous pilot programs around the world are studying the impact and application of artificial intelligence in hospitals. IBM’s Watson clinical decision support tool, for example, is being trialled in some hospitals. Google’s DeepMind is also working with the UK’s National Health Service, on projects ranging from head and neck cancer treatment to detecting eye abnormalities from scans.

The future of artificial intelligence and its impact on the world

The worldwide push to build robots that can operate autonomously, perceiving and moving through the world around them, reflects the natural overlap between robotics and artificial intelligence. AI is only one of the many technologies used in robots, but its development has pushed robots into new areas such as self-driving cars, delivery robots, and training robots.

The technology world is on the verge of a new leap in artificial intelligence capability. Today’s neural networks can create realistic images and even simulate people’s voices with high quality. Such developments also bring social concerns: a recent, much-publicized example is the deepfake, which has redoubled calls for more oversight and regulation of AI development.

Artificial intelligence can now recognize speech with 95% accuracy

One of the most important advances in machine learning today is accurate speech recognition. Current systems recognize human speech with about 95 percent accuracy, and Microsoft recently announced a speech-to-text AI that matches human accuracy. Researchers are aiming for 99 percent accuracy, and in the not-too-distant future, talking to a machine will be one of the main ways we interact with computers.

In recent years, the accuracy and quality of computer face recognition have improved sharply. Chinese tech giant Baidu claims its system can recognize faces with 99 percent accuracy. Police and other law enforcement agencies in Western countries have launched pilot programs that use AI to identify the faces of criminals, while China has gone several steps further, with a national plan to connect CCTV cameras to massive face-recognition AI and even to equip police with face-recognition glasses.

Will artificial intelligence destroy humanity?

The answer to concerns about the threat of artificial intelligence depends on whom you ask, but as AI-based systems have developed, worry about their risks has grown. Tesla and SpaceX CEO Elon Musk has called artificial intelligence “a fundamental threat to the existence of human civilization.” He co-founded the non-profit research company OpenAI with the aim of steering AI research toward safer, more beneficial outcomes. The late Stephen Hawking also warned of the dangers of AI, believing that once truly advanced artificial intelligence is developed it will quickly overtake humans and, developing on its own terms, pose a serious threat to human society.

Despite the concerns raised by some experts, many researchers dismiss them. In their view, we are still a long way from an intelligence explosion in which AI surpasses human reason. Chris Bishop, director of Microsoft Research in Cambridge, UK, argues that today’s narrow AI is nowhere near general AI, and that worries painting a Terminator-like picture of the future concern scenarios that are, at best, decades away.

Will artificial intelligence make us unemployed?

Among the concerns about artificial intelligence, the fear that smart machines will take our jobs seems the most reasonable and most likely. Although AI will not replace human employment entirely, it does change the nature of work; the only real debate now is over the pace and shape of workplace change driven by automation. AI can, after all, reproduce many human abilities. As AI expert Andrew Ng puts it, people today perform many simple, repetitive activities at work that AI can easily take over.

Are jobs being created at the same rate as jobs are destroyed by automation?

Current news about automation replacing human jobs is nonetheless worrying. Amazon is one of the pioneers of replacing manpower with robots: its recently launched Amazon Go store in Seattle eliminates the need for cashiers, an approach that could threaten at least three million cashier jobs in the United States. The e-commerce giant also uses robots to move goods in its warehouses and has deployed some 100,000 of them so far. Amazon claims it has added human workers as fast as robots, but it and its robotics partners continue to work on automating the manual jobs that remain in its warehouses.

Automation in the transport industry will still take time to mature. Although self-driving cars and trucks are years away from full deployment, it is not unreasonable to worry about driving jobs being replaced by artificial intelligence. Nor does a job have to be replaceable by a physical robot to be at risk: many people do repetitive office work, and as software and automation systems improve, their jobs are also in jeopardy.

Every shift in technology destroys some jobs and creates others. Critics of the idea that AI will dominate the business world argue that new roles always keep opportunities open for the workforce. The open question is whether new jobs are being created at the same rate that old ones disappear.

Not all forecasts about employment in the age of AI are pessimistic. Some believe artificial intelligence will augment our work rather than replace our jobs, for example through smart tools such as virtual reality glasses that increase workers’ productivity.

The Future of Humanity Institute at the University of Oxford surveyed hundreds of machine learning experts about AI’s achievements and future capabilities. The results sketch an interesting timeline for when AI might reach important milestones: writing a passable essay by 2026, driving trucks by 2027, and working in retail, writing a book, and performing surgery by 2031, 2049, and 2053 respectively.

Artificial intelligence is not a new technology; it has been with us for roughly half a century, it is not going away any time soon, and it will keep being developed so that it stays with us. Concerns about its unregulated development must be addressed as well. Still, intelligence has no obvious ceiling, and perhaps none of us can be entirely unconcerned about a future ruled by something like Skynet.