Neural Networks and Their Applications: A Comprehensive Guide to the Different Types of Neural Networks

The range of artificial neural networks grows day by day, and every once in a while a new type is introduced.

Each of these intelligent networks is designed for specific purposes, so it is important to become acquainted with its particular capabilities and applications.

Accordingly, in this article we examine the architecture and connectivity of the most important artificial neural networks.

Types of artificial neural networks

In general, neural networks process data based on a multi-layered architecture and provide appropriate output. Simply put, in each layer, the data is processed or pre-processed and the result is sent to the next layer.

However, this is not the case with all neural networks; each uses a specific type of connection to process information, depending on the function it is designed to perform.

Perceptron neural network

The perceptron is the simplest and oldest type of neural network (Figure 1). It receives several inputs, combines them, applies an activation function, and finally passes the result to the output layer.

Multilayer perceptron networks consist of an input layer, hidden layers, and an output layer. In these networks, the nodes, called neurons, are the computational units.

In multilayer perceptrons, the outputs of the first layer are used as inputs to the hidden layers, and this process continues until the output of the last hidden layer is used as the input to the output layer.

In these networks, the layers located between the input layer and the output layer are called hidden layers.

Figure 1
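
To make the description above concrete, here is a minimal Python sketch of a single perceptron with a step activation; the weights, bias, and the AND-style truth-table data are invented for the example.

import numpy as np

def perceptron(x, w, b):
    """Combine the inputs with weights, add a bias, and apply a step activation."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Hypothetical weights and bias that realize a logical AND of two binary inputs.
w = np.array([1.0, 1.0])
b = -1.5

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", perceptron(np.array(x), w, b))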

Feedforward neural network

Feedforward Neural Networks are more than 50 years old and, like the perceptron, are among the oldest neural networks (Figure 2). In their architecture, all nodes are fully connected.

In these networks, activation flows from the input layer through a hidden layer to the output, without any backward loops. In most cases, backpropagation is used to train feedforward networks.

Figure 2
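
As a rough sketch of how backpropagation trains such a fully connected network, the snippet below fits a tiny two-layer NumPy network to the XOR truth table; the hidden-layer size, learning rate, and iteration count are arbitrary choices for this illustration, not values from the article.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

# One hidden layer with 4 units (an arbitrary choice for this sketch).
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
lr = 1.0

for step in range(10000):
    # Forward pass: input -> hidden -> output, no backward loops.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error from the output layer to the hidden layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]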

Radial basis neural network

Radial Basis Networks are an improved type of feedforward neural network that use a radial basis function instead of the logistic function as the activation function, for better performance (Figure 3).

What is the difference between feedforward and radial basis networks? The logistic function maps arbitrary values to a range between zero and one to answer a yes-or-no question.

These network models are suitable for classification and decision-making systems, but they are not well suited when continuous values have to be handled.

In contrast, radial basis functions answer the question, "How far are we from the target?" Radial basis networks are therefore suitable for function approximation and machine control (as an alternative to the PID controller).

In general, radial basis networks are feedforward networks improved with a different activation function and different properties.

Figure 3
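
A small sketch of the contrast described above: the logistic function gives a soft yes/no answer, while a Gaussian radial basis function measures how far an input is from a centre. The centre, width, and sample input are made up for the example.

import numpy as np

def logistic(x):
    """Squashes any value into (0, 1): a soft yes/no answer."""
    return 1.0 / (1.0 + np.exp(-x))

def rbf(x, center, width=1.0):
    """Gaussian radial basis function: 1 at the centre, decaying with distance."""
    return np.exp(-np.sum((x - center) ** 2) / (2 * width ** 2))

x = np.array([1.0, 2.0])
center = np.array([0.0, 0.0])   # hypothetical target/prototype
print(logistic(x.sum()))        # "how likely is yes?"        -> about 0.95
print(rbf(x, center))           # "how close to the target?"  -> about 0.08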

Deep feedforward neural network

Deep Feedforward Neural Networks emerged in the early 1990s (Figure 4). They are similar to feedforward networks, except that they contain more than one hidden layer.

How are these networks different from the older feedforward networks? When training such a network with backpropagation, only a small portion of the error reaches the earlier layers.

Because these networks use many layers, training time grows exponentially, which long made deep feedforward networks less practical.

In the early 2000s, efforts were made to address the underlying problems of these networks and to enable more efficient training of Deep Feedforward Networks (DFF).

Figure 4
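
A rough numerical illustration of why so little of the error reaches the early layers: with sigmoid activations, each layer multiplies the backpropagated error by the sigmoid derivative, which never exceeds 0.25, so the signal shrinks geometrically with depth. The ten-layer depth is just an assumed example.

# The derivative of the sigmoid is s(x) * (1 - s(x)), which never exceeds 0.25.
max_sigmoid_grad = 0.25

error_signal = 1.0
for layer in range(10):          # an example 10-layer deep feedforward network
    error_signal *= max_sigmoid_grad
    print(f"after layer {layer + 1}: {error_signal:.2e}")
# After 10 layers the error signal has shrunk by a factor of roughly 10^-6.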

Recurrent neural networks

Recurrent Neural Networks have introduced a new concept called Recurrent Cells into the world of artificial intelligence and neural networks (Figure 5).

The first recurrent neural network, called the Jordan network, was based on an architecture in which each hidden cell received its own output, with a fixed delay, over one or more iterations.

Apart from this, the Jordan network behaved much like a conventional feedforward network. Since then, various changes, such as passing state to the input nodes, variable delays, and the like, have been applied to recurrent networks, but in general their architecture and operation follow the same idea.

These types of neural networks are used when the context is important.

In such cases, decisions made on previous iterations or samples affect the current ones.

The most common example of context is text: a word can be analyzed only in the context of the preceding words or sentence.

Today, such an approach is used by email service providers to distinguish spam from regular emails.

Figure 5
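
As a minimal sketch of a recurrent cell, the step below mixes the current input with the hidden state carried over from the previous step, which is what lets the network take context into account; the layer sizes and random weights are placeholders for the example.

import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 3, 4                 # arbitrary sizes for the sketch
W_x = rng.normal(scale=0.5, size=(hidden_size, input_size))
W_h = rng.normal(scale=0.5, size=(hidden_size, hidden_size))
b = np.zeros(hidden_size)

def rnn_step(x, h_prev):
    """One recurrent step: the new state depends on the input AND the previous state."""
    return np.tanh(W_x @ x + W_h @ h_prev + b)

h = np.zeros(hidden_size)
for x in rng.normal(size=(5, input_size)):     # a toy sequence of 5 inputs
    h = rnn_step(x, h)                         # context accumulates in h
print(h)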

Long short-term memory

Long Short-Term Memory (LSTM) networks introduce a new type of memory cell (Figure 6). This cell can process data even when there are time gaps. A recurrent network can process text by remembering the previous ten words or so.

LSTM networks, however, can process video frames by remembering states that occurred many frames earlier.

These network models are widely used for speech recognition and behavior recognition. Memory cells consist of a couple of elements called gates.

These gates are recurrent and control how information is remembered and forgotten.

For a better understanding, look at Figure 7. The (x) marks in the diagram are gates, each with its own weight and sometimes an activation function.

For each sample, the gates decide whether to pass data forward, erase the memory, and so on. The input gate decides how much information from the previous sample is kept in memory.

The output gate regulates how much data is passed to the next layer, while the forget gate controls the rate at which data is erased from memory.

Figure 6

Figure 7
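
Here is a minimal sketch of one LSTM step, following the gate roles described above (input, forget, and output gates plus the memory cell); the sizes and random weights are placeholders, and real implementations usually fuse these matrices for speed.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4                                  # arbitrary sizes for the sketch
W = {g: rng.normal(scale=0.3, size=(n_hid, n_in + n_hid)) for g in "ifoc"}
b = {g: np.zeros(n_hid) for g in "ifoc"}

def lstm_step(x, h_prev, c_prev):
    z = np.concatenate([x, h_prev])
    i = sigmoid(W["i"] @ z + b["i"])        # input gate: how much new info to store
    f = sigmoid(W["f"] @ z + b["f"])        # forget gate: how much old memory to erase
    o = sigmoid(W["o"] @ z + b["o"])        # output gate: how much memory to expose
    c_tilde = np.tanh(W["c"] @ z + b["c"])  # candidate memory content
    c = f * c_prev + i * c_tilde            # updated cell memory
    h = o * np.tanh(c)                      # output passed to the next layer/step
    return h, c

h = c = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):        # a toy sequence
    h, c = lstm_step(x, h, c)
print(h)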

Gated recurrent unit neural network

The Gated Recurrent Unit (GRU) is a specific type of LSTM network that differs in its gates and time intervals (Figure 8). These networks have a relatively simple structure.

Today, this type of recurrent network is most commonly used in text-to-speech and speech-synthesis engines. Some AI experts consider the GRU only slightly different from the LSTM: the gates of the LSTM are combined into a single update gate, and the reset gate is placed close to the input node.

In addition, GRU networks use fewer resources than LSTMs while performing largely the same function.

Figure 8
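
A minimal sketch of one GRU step, showing the single update gate and the reset gate that sits close to the input, as described above; the sizes and random weights are again placeholders.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4                                   # arbitrary sizes for the sketch
W = {g: rng.normal(scale=0.3, size=(n_hid, n_in + n_hid)) for g in "zrh"}

def gru_step(x, h_prev):
    z = sigmoid(W["z"] @ np.concatenate([x, h_prev]))            # update gate (merged input/forget)
    r = sigmoid(W["r"] @ np.concatenate([x, h_prev]))            # reset gate, applied near the input
    h_tilde = np.tanh(W["h"] @ np.concatenate([x, r * h_prev]))  # candidate state
    return (1 - z) * h_prev + z * h_tilde                        # blend old and new state

h = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):                 # a toy sequence
    h = gru_step(x, h)
print(h)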

Autoencoder neural network

Autoencoder neural networks are used to classify, cluster, and compress features (Figure 9). When a feedforward network is trained for classification, it must be fed X samples from Y categories, and one of the Y output cells is expected to activate.

In common parlance, this approach is called supervised learning.

Due to the structure of these networks, the number of hidden cells is smaller than the number of input cells, and the number of output cells is equal to the number of input cells.

As a result, autoencoders can be trained to bring the output as close as possible to the input, forcing them to find common patterns and generalize the data.

Therefore, autoencoders can be used with an unsupervised learning approach.

Figure 9
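
A minimal sketch of the bottleneck idea: the hidden code is smaller than the input, the output has the same size as the input, and training (not shown here) would minimize the reconstruction error without any labels. The layer sizes and data are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
n_input, n_hidden = 8, 3          # hidden bottleneck smaller than the input (arbitrary sizes)

W_enc = rng.normal(scale=0.3, size=(n_hidden, n_input))
W_dec = rng.normal(scale=0.3, size=(n_input, n_hidden))   # output size equals input size

def encode(x):
    return np.tanh(W_enc @ x)      # compress the input into a few features

def decode(code):
    return W_dec @ code            # try to rebuild the original input

x = rng.normal(size=n_input)
reconstruction = decode(encode(x))

# Training would adjust W_enc and W_dec to minimize this reconstruction error,
# which requires no labels -- hence "unsupervised" learning.
print(np.mean((x - reconstruction) ** 2))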

Variational autoencoder neural network

Compared to the autoencoder, the Variational Autoencoder compresses probabilities rather than features (Figure 10).

Although the difference between the two networks seems slight, each is intended for a specific application. The autoencoder seeks to answer the question of how data can be generalized.

The variational autoencoder, on the other hand, seeks to answer the question of how strong the relationship between two events is (for example, between clouds and rain) and whether the error should be distributed between the two events or whether they are completely independent of each other.

Figure 10
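
As a small sketch of what compressing probabilities means in practice: the encoder outputs a mean and a variance for each latent feature, a sample is drawn with the reparameterization trick, and a KL term keeps the learned distribution close to a standard normal during training. The numbers below are placeholders, not outputs of a trained model.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder outputs for one input: a distribution per latent feature,
# rather than a single compressed feature value as in a plain autoencoder.
mu = np.array([0.5, -1.0])          # means of the latent features
log_var = np.array([-0.2, 0.1])     # log-variances of the latent features

# Reparameterization trick: sample z = mu + sigma * eps with eps ~ N(0, 1).
eps = rng.standard_normal(mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

# KL divergence from N(mu, sigma^2) to N(0, 1); added to the reconstruction loss during training.
kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

print("latent sample:", z)
print("KL term:", kl)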

Denoising autoencoder neural network

Although autoencoders can solve many problems, they sometimes merely fit the input data instead of identifying the best features (Figure 11).

The Denoising Autoencoder adds some noise (clutter) to the input cells to solve this problem. The network then has to work harder to produce the output, which forces it to find more general features that ultimately yield more accurate results.

Figure 11
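
A very small sketch of the trick described above: noise is added to the input, but the reconstruction target remains the clean sample, so the network cannot simply copy its input. The data and noise level are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

clean = rng.normal(size=8)                       # a toy input sample
noisy = clean + rng.normal(scale=0.3, size=8)    # corrupted version fed to the network

# A denoising autoencoder is trained on pairs (noisy input -> clean target):
#   loss = mean((decode(encode(noisy)) - clean) ** 2)
# so it must learn robust common features instead of memorizing the exact input.
print("clean:", np.round(clean, 2))
print("noisy:", np.round(noisy, 2))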

Sparse autoencoder neural network

The Sparse Autoencoder is a more advanced type of autoencoder that can reveal some of the group patterns hidden in the data (Figure 12).

The structure of the sparse autoencoder is very similar to that of the denoising autoencoder, but in sparse autoencoders the number of hidden cells is greater than the number of input and output cells.

Figure 12

Markov Chain

Markov chains are one of the oldest concepts in the world of graphs, in which each edge carries a probability (Figure 13). In the past, Markov chains were used to generate text; for example, after the word "hello", the word "dear" might appear with a probability of 0.0053% and the word "you" with a probability of 0.03551%.

Today, the Markov technique is used in mobile phones for predictive text. Markov chains are not classical neural networks; they can be used for probabilistic classification (Bayesian filters), some types of clustering, and finite state machines.

Figure 13
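
A minimal sketch of next-word prediction with a Markov chain: bigram counts from a tiny invented corpus become edge probabilities, and the next word is sampled from them. The corpus and the resulting probabilities are made up for the example.

import random
from collections import Counter, defaultdict

corpus = "hello dear friend hello you hello dear you".split()   # a toy corpus

# Count bigrams: how often each word follows each other word.
transitions = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word][next_word] += 1

def predict_next(word):
    """Sample the next word with probability proportional to the edge counts."""
    counts = transitions[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(dict(transitions["hello"]))   # e.g. {'dear': 2, 'you': 1}
print(predict_next("hello"))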

Hopfield Neural Network

Hopfield Networks are trained on a limited set of samples and therefore respond to a known sample with a similar, stored sample (Figure 14). Each cell serves as an input cell before training, as a hidden cell during training, and as an output cell during use.

A Hopfield network tries to reconstruct the trained examples. Hopfield networks are used for denoising and restoring inputs.

If such a network is given half of an image or sequence, it can return the complete sample.

Figure 14
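
A minimal sketch of the pattern-completion behavior described above: Hebbian learning stores a few bipolar patterns in the weight matrix, and repeated updates pull a corrupted input back toward the closest stored pattern. The patterns are invented for the example.

import numpy as np

# Two stored patterns, written with +1/-1 values (8 "pixels" each, made up for the sketch).
patterns = np.array([
    [ 1,  1,  1, -1, -1, -1,  1, -1],
    [-1, -1,  1,  1,  1, -1, -1,  1],
])

# Hebbian learning: sum of outer products, with no self-connections.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(state, steps=5):
    """Repeatedly update all cells until the state settles on a stored pattern."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

corrupted = patterns[0].copy()
corrupted[:2] *= -1                    # flip the first two "pixels" (a damaged input)
print(recall(corrupted))               # should match the first stored pattern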

Boltzmann machine

Boltzmann Machines are very similar to Hopfield networks, except that some cells are marked as input cells while the rest remain hidden (Figure 15). The input cells become output cells as soon as the hidden cells update their state.

Figure 15

Restricted Boltzmann machine

The Restricted Boltzmann Machine is architecturally very similar to the Boltzmann machine, but because it is restricted, it can be trained with plain backpropagation, just like a feedforward network, with the only difference being that the data is passed back to the input layer once before the backpropagation pass (Figure 16).

Figure 16

Deep Belief Network

The Deep Belief Network shown in Figure 17 is a stack of Boltzmann machines surrounded by variational autoencoders. They can be chained together and used to generate data according to a learned pattern.

Figure 17

Deep convolutional network

The Deep Convolutional Network has convolution kernels and pooling cells, each designed for a specific purpose (Figure 18). The convolution kernels process the input data, while the pooling cells simplify it, mostly using nonlinear functions such as max, and strip out unnecessary features.

Typically, convolutional neural networks are used for image recognition. They operate on small subsets of an image (on the order of 20 by 20 pixels).

Figure 18

This input window slides across the entire image, pixel by pixel. The data is then passed to the convolution layers, which compress the detected features and eventually form a funnel-like structure.

The first layer detects gradients, the second layer detects lines, and the third layer detects shapes. This process continues until a specific object is recognized.
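
As a minimal sketch of the convolution-then-pooling pipeline described above, the snippet below slides a small kernel across a toy image, applies a ReLU nonlinearity, and then keeps only the maximum of each 2x2 block; the image and the edge-style kernel are made up for the example.

import numpy as np

rng = np.random.default_rng(0)
image = rng.random((6, 6))                    # a toy 6x6 grayscale "image"

# A hypothetical 3x3 kernel that responds to vertical edges.
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)

def convolve2d(img, k):
    """Slide the kernel over the image pixel by pixel (valid padding, stride 1)."""
    kh, kw = k.shape
    out_h, out_w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def max_pool(feature_map, size=2):
    """Keep only the strongest response in each size x size block."""
    h, w = feature_map.shape
    trimmed = feature_map[:h - h % size, :w - w % size]
    return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))

features = np.maximum(convolve2d(image, kernel), 0)   # convolution + ReLU nonlinearity
print(max_pool(features).shape)                       # (2, 2): a compressed feature map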

Deconvolution network

The deconvolution network performs the opposite function of the deep convolutional network (Figure 19). Instead of detecting a cat in an image, the deconvolution network receives a vector such as {dog: 0, lizard: 0, horse: 0, cat: 1} and then draws a picture of a cat based on that information.

Figure 19

Deep convolutional inverse graphics network

The Deep Convolutional Inverse Graphics Network shown in Figure 20 is essentially an autoencoder in which a deep convolutional network and a deconvolution network are used, not as separate networks, but as wrappers around the network's input and output.

This network model is mostly used for image processing, including images it has not been trained on before. Because of their abstraction levels, these networks can remove certain objects from an image, redraw images, and even replace objects within an image.

For example, they can replace a horse with a zebra in a picture. Network models of this kind are used to create deepfakes.

Figure 20

Generative adversarial network

Generative Adversarial Networks are a pair of networks, a generator and a discriminator (Figure 21). The two networks are constantly updated against each other until the generator can produce realistic images.

Figure 21

Liquid state machine

The Liquid State Machine is a sparse, partially connected neural network in which the activation functions are replaced by threshold levels (Figure 22). In these networks, a cell accumulates values from sequential samples and emits an output only when the threshold is reached; it then resets its internal counter to zero.

Liquid state machines are inspired by the human brain and are used in machine vision and speech recognition systems, although these networks are still evolving slowly.

Figure 22
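
A tiny sketch of the threshold behavior described above: a cell accumulates values from a sequence of samples, emits an output only when a threshold is reached, and then resets its internal counter to zero. The input values and the threshold are arbitrary.

# Toy sequence of incoming sample values and an arbitrary firing threshold.
inputs = [0.3, 0.4, 0.5, 0.1, 0.9, 0.2]
threshold = 1.0

accumulator = 0.0
for t, value in enumerate(inputs):
    accumulator += value                      # aggregate values from sequential samples
    if accumulator >= threshold:
        print(f"t={t}: fire! (accumulated {accumulator:.1f})")
        accumulator = 0.0                     # reset the internal counter to zero
    else:
        print(f"t={t}: silent (accumulated {accumulator:.1f})")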

Neural Turing Machine

Neural networks are like a black box: they can be trained, their results obtained, and their performance improved, yet it is not clear exactly how they make their decisions. The Neural Turing Machine was designed to address this problem (Figure 23). It is a feedforward neural network with memory cells attached. Some scientists regard the Neural Turing Machine as an abstraction of the LSTM.

In this type of neural network, memory is referenced by its content, and the network can read or write to memory depending on its current state.

Figure 23
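
A minimal sketch of the content-based memory addressing described above: a key vector is compared against every memory row with cosine similarity, the similarities are turned into addressing weights with a softmax, and the value read back is a weighted blend of the rows. The memory contents, key, and sharpening factor are placeholders.

import numpy as np

rng = np.random.default_rng(0)
memory = rng.normal(size=(5, 4))            # 5 memory rows of width 4 (placeholder contents)
key = memory[2] + 0.1 * rng.normal(size=4)  # the controller looks for content similar to row 2

# Cosine similarity between the key and each memory row.
similarity = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)

# Softmax turns similarities into addressing weights over memory locations.
weights = np.exp(similarity * 5.0)          # 5.0 is an assumed sharpening ("key strength") factor
weights /= weights.sum()

read_vector = weights @ memory              # the value read back is a weighted blend of rows
print(np.round(weights, 2))                 # most of the weight should land on row 2
print(read_vector)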

 
