
What Are Distributed And Collective Artificial Intelligence?

Distributed artificial intelligence

Distributed artificial intelligence (DAI), also called decentralized artificial intelligence, is a subset of artificial intelligence research dedicated to developing distributed solutions to problems. 

DAI is closely related to multi-agent systems and has played an important role in advancing artificial intelligence. In this article, we examine distributed artificial intelligence and collective intelligence.

What is distributed artificial intelligence?

Distributed artificial intelligence emerged in 1975 as a subset of artificial intelligence concerned with interactions between intelligent agents. Distributed artificial intelligence systems were conceived as a group of intelligent entities, called agents, that interact through cooperation or competition.

DAI is divided into two categories:

multi-agent systems and distributed problem solving. In multi-agent systems, the main focus is on how agents coordinate their knowledge and activities. In distributed problem solving, the main focus is on how to decompose the problem and combine the partial solutions.

Distributed artificial intelligence (DAI) is a method for solving complex learning, planning, and decision-making problems.

Because it can be parallelized, this technology can exploit large-scale computation and the distribution of computational resources.

These features make it possible to solve problems that require processing very large datasets.

DAI systems consist of independent learning processing nodes (intelligent agents) that are often spread over a very large scale.

DAI nodes can operate independently, and partial solutions are combined through asynchronous communication between nodes. Because they scale well, DAI systems are robust and flexible and can work together as needed.

In addition, DAI systems are built so that they can adapt to changes in the underlying problem or data.

Multi-agent systems and distributed problem solving are the two main approaches of DAI. In multi-agent systems, agents coordinate their knowledge and activities and reason about the coordination process itself.

Agents are physical or virtual entities that can act, perceive their environment, and interact with other agents.

An agent is autonomous and has the skills needed to achieve its goals.

Agents change the current state of their environment through their actions. There are several techniques for creating coordination.
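
As a minimal, hypothetical sketch of this agent model (the Agent class and dictionary environment below are invented for illustration, not taken from any specific DAI framework), an agent can be modeled as an object that perceives a shared environment and acts to change its state:

```python
# A minimal, hypothetical sketch of the perceive-act loop described above.
# The environment here is just a shared dictionary; real DAI systems use far
# richer environments and coordination protocols.

class Agent:
    def __init__(self, name):
        self.name = name

    def perceive(self, environment):
        # Read the part of the environment this agent cares about.
        return environment.get("counter", 0)

    def act(self, environment):
        # Change the current state of the environment through an action.
        observed = self.perceive(environment)
        environment["counter"] = observed + 1
        print(f"{self.name} saw {observed}, incremented counter")


environment = {"counter": 0}
agents = [Agent("agent-1"), Agent("agent-2"), Agent("agent-3")]

# Each agent in turn perceives the environment and acts on it.
for agent in agents:
    agent.act(environment)

print("final environment:", environment)
```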

Unlike centralized AI, which depends on processing nodes that are geographically close together, DAI systems do not require all data to be gathered in one place.

For this reason, DAI systems typically operate on samples or parts of larger datasets. In addition, the initial dataset may be changed or updated while a DAI system is running.

Why use distributed artificial intelligence?

The goal of distributed AI is to solve reasoning, planning, learning, and perception problems in AI, especially when they require large amounts of data, by distributing the problem across independent processing nodes (agents). To achieve this, distributed artificial intelligence requires the following (a minimal sketch follows the list):

  1. A distributed system with robust and flexible computation on unreliable and loosely connected resources
  2. Coordination of the nodes' actions and communication
  3. Samples of large datasets and online machine learning
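
As a rough, hypothetical sketch of this idea (the stand-in problem of summing squares and the function names are invented), a problem can be split across independent worker processes whose partial solutions are then combined:

```python
# Hypothetical sketch: split a large computation across independent
# processing nodes (here, OS processes) and combine the partial results.
from multiprocessing import Pool


def partial_solution(chunk):
    # Each "node" solves its piece of the problem independently;
    # here the stand-in problem is summing squares.
    return sum(x * x for x in chunk)


if __name__ == "__main__":
    data = list(range(1_000_000))
    # Break the dataset into chunks, one per node.
    chunks = [data[i::4] for i in range(4)]

    with Pool(processes=4) as pool:
        partials = pool.map(partial_solution, chunks)

    # Combine the partial solutions into the global answer.
    print("combined result:", sum(partials))
```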

There are many reasons for wanting to distribute intelligence or work with multi-agent systems.

The main areas of DAI research include the following:

Parallel problem solving:

This deals mainly with how to revise classical artificial intelligence concepts so that multiprocessor systems and clusters of computers can be used to speed up computation.

Distributed Problem Solving (DPS):

The concept of an agent, an autonomous entity that can communicate with other agents, was developed as a basis for building DPS systems.

Multi-Agent Based Simulation (MABS):

This branch of DAI provides the foundation for simulations that, as in many social simulation scenarios, must analyze phenomena not only at the macro level but also at the micro level.

Some important applications of distributed artificial intelligence

E-Commerce:

In trading strategies, a DAI system learns financial trading rules from subsamples of very large samples of financial data.

Networks:

In mobile communications, the DAI system controls shared resources in a WLAN.

Routing:

Modeling the flow of vehicles in transportation networks

Scheduling:

Flow-shop scheduling, where the resource management entity ensures local optimization and cooperation for global and local consistency.

Electrical Systems:

The Condition Monitoring Multi-Agent System (COMMAS), applied to transformer condition monitoring, and the IntelliTEAM II automatic restoration system.

What is Collective Intelligence?

Collective intelligence, abbreviated CI, is group intelligence that emerges from the continued collaboration, collective effort, and competition of many individuals and is reflected in collective decision-making.

The term appears in sociology, political science, and contexts such as mass peer review and crowdsourcing, and it may involve consensus, social capital, voting systems, social media, and other means of evaluating mass activity.

Collective IQ, by contrast, is a measure of collective intelligence, although the two terms are often used interchangeably.

Collective intelligence can also be understood as an emergent property of the synergy among data, information, and knowledge; software and hardware; and individuals, both recognized experts and those with new insights.

By continually learning from feedback, this combination produces just-in-time knowledge and therefore performs better than any of the three elements acting alone.

In other words, this phenomenon emerges between humans and the ways information is processed.

Norman Lee Johnson refers to this notion of collective intelligence as symbiotic intelligence. The term is used in sociology, business, computer science, mass communication, and science fiction.

In his definition, Pierre Lévy describes collective intelligence as a form of universally distributed intelligence that is constantly enhanced, coordinated in real time, and results in the effective mobilization of skills.

To this definition Lévy adds what he calls an indispensable characteristic: the basis and goal of collective intelligence is the mutual recognition and enrichment of individuals rather than the cult of fetishized communities.

Collective intelligence strongly contributes to the shift of knowledge and power from the individual to the collective.

According to Eric S. Raymond (1998) and J.C. Herz (2005), such open-source intelligence will eventually produce results superior to the knowledge generated by the proprietary software of private companies (Flew 2008).

Media theorist Henry Jenkins sees collective intelligence as an alternative source of media power, related to cultural convergence. He draws attention to education and the way people learn to participate in knowledge cultures outside formal learning settings.

Henry Jenkins criticizes schools that promote autonomous problem solving and self-contained learning while remaining reluctant to use collective intelligence in education.

Both Pierre Lévy and Henry Jenkins believe that collective intelligence, because of its close connection to knowledge-based culture, is important for democratization and strengthens participation through collective opinion; it therefore plays a great role in understanding diverse societies.

Just as a general factor g is extracted for individual general intelligence, a scientific understanding of collective intelligence extracts a general collective intelligence factor c for groups, indicating a group's ability to perform a wide range of tasks.

The definition, operationalization, and statistical methods for c are derived from those for g. And just as g is closely related to the concept of IQ, measuring collective intelligence in this way can be interpreted as a group IQ, even though the score is not strictly a quotient.
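
As a very rough, hypothetical illustration of this kind of factor extraction (the scores below are invented, and real studies use proper factor analysis rather than this simplified principal-component step), a single c-like factor can be extracted from a matrix of group scores on several tasks:

```python
# Hypothetical sketch: extract a single general factor from group task scores,
# analogous to how g is extracted for individuals. All data is invented.
import numpy as np

# Rows: groups, columns: scores on different tasks (made-up numbers).
scores = np.array([
    [0.8, 0.7, 0.9, 0.6],
    [0.4, 0.5, 0.3, 0.4],
    [0.6, 0.6, 0.7, 0.5],
    [0.9, 0.8, 0.8, 0.9],
    [0.3, 0.2, 0.4, 0.3],
])

# Standardize each task, then take the first principal component as a
# crude stand-in for the general collective intelligence factor c.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
c_factor = z @ vt[0]

print("c-like factor score per group:", np.round(c_factor, 2))
```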

What is federated learning?

Federated learning, also known as collaborative learning, is a machine learning method that trains an algorithm across multiple decentralized edge devices or servers that hold local data samples, without exchanging them.

This method stands in contrast to traditional machine learning techniques, which load all local datasets onto one server, as well as to more classical decentralized approaches, which often assume that local data samples are identically distributed.

Federated learning enables multiple actors to build a common, robust machine learning model without sharing data, thus addressing critical issues such as data privacy, data security, data access rights, and access to heterogeneous data.

Its applications span many industries, including defense, telecommunications, the Internet of Things, and pharmaceuticals.

The goal of federated learning is to train machine learning algorithms, such as deep neural networks, on multiple local datasets held in local nodes without explicitly exchanging the data.

The general principle involves training local models on local data samples and exchanging parameters (e.g., the weights and biases of a deep neural network) between these local nodes at some frequency to produce a global model shared by all nodes.
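
A minimal sketch of this principle, assuming a toy linear model and plain unweighted parameter averaging (production schemes such as federated averaging typically weight updates by local dataset size):

```python
# Hypothetical sketch of the principle above: each node trains a local linear
# model on its own data, and only the parameters (weight, bias) are exchanged
# and averaged into a global model; raw data never leaves the node.
import numpy as np


def local_update(w, b, x, y, lr=0.1, steps=10):
    # A few steps of gradient descent on this node's local data.
    for _ in range(steps):
        err = w * x + b - y
        w -= lr * (err * x).mean()
        b -= lr * err.mean()
    return w, b


rng = np.random.default_rng(0)
# Three nodes, each holding private local samples of y = 2x + 1 plus noise.
local_data = []
for _ in range(3):
    x = rng.uniform(0, 1, 50)
    y = 2 * x + 1 + rng.normal(0, 0.1, x.size)
    local_data.append((x, y))

w_global, b_global = 0.0, 0.0
for round_ in range(50):
    # Each node starts from the current global parameters.
    updates = [local_update(w_global, b_global, x, y) for x, y in local_data]
    # Only parameters are exchanged; averaging them yields the global model,
    # which converges toward w ~ 2, b ~ 1 without sharing any raw data.
    w_global = float(np.mean([w for w, _ in updates]))
    b_global = float(np.mean([b for _, b in updates]))

print(f"global model after 50 rounds: w={w_global:.2f}, b={b_global:.2f}")
```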

The main difference between federated learning and distributed learning lies in the assumptions made about the properties of the local datasets: distributed learning primarily aims to parallelize computing power, whereas federated learning primarily aims to train on heterogeneous datasets.

While the goal of distributed learning is also to train a single model across multiple servers, a common assumption there is that the local datasets are identically distributed (i.i.d.) and roughly the same size.

Neither assumption is made in federated learning. Instead, the datasets are typically heterogeneous, and their sizes may differ by several orders of magnitude.
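
As a small, invented illustration of such heterogeneity, two hypothetical nodes can hold label-skewed local datasets of very different sizes:

```python
# Hypothetical sketch: building the kind of heterogeneous local datasets
# federated learning must cope with -- label-skewed and very unequal in size.
import numpy as np

rng = np.random.default_rng(1)
labels = rng.integers(0, 10, 10_000)  # a made-up 10-class dataset

# Node A: small, holds only classes 0-1; node B: large, holds classes 2-9.
idx_a = np.where(labels < 2)[0][:200]    # 200 samples, two classes
idx_b = np.where(labels >= 2)[0][:6000]  # 6000 samples, eight classes

for name, idx in [("node A", idx_a), ("node B", idx_b)]:
    counts = np.bincount(labels[idx], minlength=10)
    print(name, "size:", idx.size, "class histogram:", counts)
```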

Centralized federated learning

In a centralized federated learning setup, a central server orchestrates the different steps of the algorithm and coordinates all participating nodes throughout the learning process.

The server is responsible for selecting nodes at the beginning of the training process and for aggregating the received model updates. Because all selected nodes must send their updates to a single entity, the server may become a bottleneck for the system.

Decentralized federated learning

In a decentralized federated learning setup, the nodes coordinate among themselves to obtain the global model. This setup prevents single points of failure, because model updates are exchanged only between interconnected nodes, without coordination by a central server.

However, the specific network topology may affect the performance of the learning process; see blockchain-based federated learning and the references therein.
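
A minimal sketch of this idea, assuming a ring topology and simple gossip-style averaging of a single parameter (both invented for illustration):

```python
# Hypothetical sketch of decentralized coordination: nodes on a ring exchange
# model parameters only with their neighbors and average them (gossip-style),
# so no central server is involved.
import numpy as np

n_nodes = 5
rng = np.random.default_rng(2)
params = rng.uniform(0, 10, n_nodes)  # each node's local model parameter

for step in range(30):
    new_params = params.copy()
    for i in range(n_nodes):
        left, right = (i - 1) % n_nodes, (i + 1) % n_nodes
        # Average with the two ring neighbors only.
        new_params[i] = (params[left] + params[i] + params[right]) / 3
    params = new_params

# All nodes converge toward a common value without any coordinator.
print("parameters after gossip:", np.round(params, 3))
```

How quickly the nodes agree depends on the topology, which is exactly the point made above about network structure affecting the learning process.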

The main features of federated learning are the following:

Iterative learning: To ensure good performance of the final, central machine learning model, federated learning relies on an iterative process, broken down into an atomic set of client-server interactions known as federated learning rounds.

Each round of this process consists of transmitting the current global model state to the participating nodes, training local models on those local nodes to produce a set of potential model updates at each node, and then aggregating and processing these local updates into a single global update and applying it to the global model.

Initialization:

Depending on the server's inputs, a machine learning model (e.g., linear regression, a neural network, boosting) is chosen to be trained on local nodes and initialized.

The nodes are then activated and wait for the central server to assign them computation tasks.

Client selection:

A fraction of the local nodes is selected to begin training on local data. The selected nodes receive the current statistical model, while the rest wait for the next federated round.

Configuration:

The central server instructs the selected nodes to train the model on their local data in a predetermined way (for example, for some mini-batch updates of gradient descent).

Reporting:

Each selected node sends its local model to the server for aggregation. The central server aggregates the received models and sends the resulting model update back to the nodes. It also handles failed nodes and lost model updates.

The system then returns to the client selection phase for the next federated round.

Termination:

As soon as a predefined termination criterion is met (for example, a maximum number of iterations is reached or the model accuracy exceeds a threshold), the central server aggregates the final updates and finalizes the global model.
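
Putting these steps together, a hypothetical end-to-end loop (the toy linear model and all names are invented for illustration, not taken from any federated learning library) might look like this:

```python
# Hypothetical sketch tying the steps above together: initialization, client
# selection, local training (configuration), aggregation (reporting), and a
# simple termination criterion.
import random

import numpy as np


def local_train(w, data, lr=0.05, steps=5):
    # Configuration step: a few mini-batch-style gradient updates locally.
    x, y = data
    for _ in range(steps):
        w -= lr * ((w * x - y) * x).mean()
    return w


rng = np.random.default_rng(3)
# Ten clients, each holding private samples of y = 3x plus noise.
clients = []
for _ in range(10):
    x = rng.uniform(0, 1, 100)
    clients.append((x, 3 * x + rng.normal(0, 0.1, x.size)))

w_global = 0.0  # initialization of the global model
for round_ in range(100):
    # Client selection: only a fraction of nodes participates this round.
    selected = random.sample(clients, k=3)
    # Reporting: selected nodes send back their locally trained parameters,
    # which the server aggregates by averaging.
    updates = [local_train(w_global, data) for data in selected]
    w_global = float(np.mean(updates))
    # Termination: stop once the model is close enough to the target.
    if abs(w_global - 3.0) < 0.01:
        print(f"converged at round {round_}: w={w_global:.3f}")
        break
else:
    print(f"max rounds reached: w={w_global:.3f}")
```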
