The Most Important Information Retrieval Algorithms

In the last issue, we got acquainted with some of the most important and powerful algorithms for information retrieval.

In this section, we will get to know several other algorithms.

Breadth-first search algorithm

In graph theory, breadth-first search (BFS) is one of the graph traversal algorithms. As the name suggests, the breadth-first strategy for graph traversal is to search the graph level by level. The algorithm starts from the root (in unrooted graphs or trees, an arbitrary vertex is chosen as the root) and places it at level one. Then, at each step, it visits all the not-yet-seen neighbors of the vertices of the last visited level and places them in the next level.

This process stops when all the neighbors of the vertices of the last level have already been seen. Also, in problems where different states correspond to the vertices of a graph and solving the problem requires finding, among all target vertices with certain characteristics, the one closest to the root, breadth-first search works effectively.

In this way, the algorithm visits all the neighbors of a vertex before moving on to the next vertex, so the graph is traversed level by level. This process continues until the target vertex is found or, possibly, the entire graph is traversed. That said, a naive implementation of this idea is not particularly efficient.

From a practical point of view, a queue is used to implement this algorithm.

This way, the root is placed in the queue at the beginning. Then, each time, the element at the front of the queue is removed, its neighbors are checked, and any neighbor that has not been seen so far is added to the end of the queue. Implementation details follow.

The implementation of this algorithm is similar to that of depth-first search, except that a queue is used instead of a stack. Here, as in depth-first search, we allow a preWORK step for more flexibility; it is performed as each vertex is taken out of the queue. The breadth-first search algorithm is as follows.

The Visited array is used to mark visited vertices. A queue is used to hold adjacent vertices. Every time a vertex is visited, all of its unvisited neighbors are added to the queue. Traversal continues from the vertex that is removed from the queue.
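As a minimal sketch of the procedure just described (assuming the graph is given as a dictionary mapping each vertex to a list of its neighbors, with pre_work standing in for the preWORK hook mentioned above):

```python
from collections import deque

def bfs(graph, root, pre_work=None):
    """Breadth-first search over an adjacency-list graph (dict of lists)."""
    visited = {root}        # plays the role of the Visited array
    queue = deque([root])   # holds vertices awaiting processing
    while queue:
        vertex = queue.popleft()   # take the vertex at the front of the queue
        if pre_work:
            pre_work(vertex)       # preWORK hook, run as each vertex is dequeued
        for neighbor in graph[vertex]:
            if neighbor not in visited:   # enqueue unseen neighbors
                visited.add(neighbor)
                queue.append(neighbor)
```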

Linear-time string matching algorithm

Donald Knuth and Vaughan Pratt developed the linear-time string matching algorithm, and James H. Morris developed it independently; the three published it together. For this reason, it is known as the KMP algorithm. The algorithm searches for a word W within a string S using the observation that when a mismatch occurs, the word itself contains enough information to determine where the next match could begin, thereby bypassing re-examination of previously matched characters.
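A sketch of how such a search might look, assuming a non-empty word W; the failure table is what lets the scan avoid re-examining matched characters:

```python
def kmp_search(text, word):
    """Return indices where word occurs in text, in O(len(text) + len(word))."""
    # Build the failure table: fail[i] is the length of the longest proper
    # prefix of word[:i+1] that is also a suffix of it.
    fail = [0] * len(word)
    k = 0
    for i in range(1, len(word)):
        while k > 0 and word[i] != word[k]:
            k = fail[k - 1]
        if word[i] == word[k]:
            k += 1
        fail[i] = k
    # Scan the text, never moving backwards over matched characters.
    matches, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != word[k]:
            k = fail[k - 1]
        if ch == word[k]:
            k += 1
        if k == len(word):
            matches.append(i - k + 1)   # match found; record its start
            k = fail[k - 1]
    return matches

print(kmp_search("ababa", "aba"))   # [0, 2]
```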

String matching algorithm

String matching algorithms, sometimes called string search algorithms, are an important class of string algorithms that attempt to find the occurrences of one or more strings (patterns) within a larger string (or text). Let Σ be a finite alphabet. Usually, both the pattern and the target text are combinations of elements of Σ. Σ may be a normal human alphabet (for example, the letters A to Z in English); in other applications, a binary alphabet (Σ = {0,1}) or, in bioinformatics, the DNA alphabet may be used.

In particular, how the string is encoded can affect which matching algorithms are practical. If a variable-length encoding is used, finding the Nth character is slow, which can significantly slow down more advanced search algorithms. One solution is to search for a sequence of code units instead, but this method may produce false matches. The problem can be avoided if the encoding is designed specially.

Uninformed search

An uninformed search algorithm is an algorithm that uses no knowledge of the nature of the problem. Hence such algorithms can be designed generically, and the same design can be reused for many situations; this requires abstract design. Their drawback is that the search space is often huge, so the search requires a lot of time (even for small instances). Therefore, informed algorithms are often used instead to increase processing speed.

List search

List search algorithms are perhaps the most basic type of search algorithm. Their purpose is to find an element with a given key in a set (the element may also contain other information related to that key). The simplest of these is the sequential search algorithm, which compares each list element with the desired key.

The execution time of this algorithm is O(n), where n is the number of elements in the list. But another method can be used that does not need to examine the entire list. Binary search is an improvement over linear search.

Its execution time is O(log n). This method is much more effective than sequential search for lists with a lot of data, but the list must be sorted before searching. Interpolation search is suitable for sorted data that is large and uniformly distributed; in that setting it is better than binary search.

Its execution time is O(log log n) on average, but its worst-case execution time is O(n). Grover's algorithm is a quantum algorithm used for unordered lists. A hash table can also be used to search a list; on average it has constant lookup time, but it needs extra space, and its worst-case execution time is O(n).
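A minimal sketch contrasting the two simplest options described above, sequential search and a hash table (Python's dict plays the role of the hash table here):

```python
def sequential_search(items, key):
    """Compare each element with the key: O(n) in the worst case."""
    for index, item in enumerate(items):
        if item == key:
            return index
    return -1   # key is not in the list

# A hash table gives constant expected lookup time at the cost of extra space.
positions = {item: index for index, item in enumerate(["a", "b", "c"])}

print(sequential_search(["a", "b", "c"], "b"))   # 1
print(positions.get("b"))                        # 1
```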

Tree search

Tree search algorithms are the heart of search methods for structured data. The basis of tree search is nodes taken from a data structure. Each element to be added is compared with the data in the tree's nodes and inserted into the tree structure accordingly. Once the data has been arranged in the tree, the tree can be searched in different ways, for example level by level (breadth-first search) or by descending as deep as possible first (depth-first search). Other examples of tree search include iterative deepening depth-first search, depth-limited search, bidirectional search, and uniform-cost search.
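A minimal binary search tree sketch of this idea: each new element is compared with the nodes on its way down and inserted where the comparison chain ends. Node, insert, and search are illustrative names, not a standard library API:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Compare the new key with each node and descend until a free slot is found."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    """Each comparison discards one subtree, so lookup cost tracks tree height."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root   # the matching node, or None if the key is absent
```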

Graph search

Many problems in graph theory can be solved by traversal algorithms, such as Dijkstra's algorithm, Kruskal's algorithm, the nearest neighbor algorithm, and Prim's algorithm. These algorithms can be considered extensions of tree search algorithms.

Informed search

In an informed search, knowledge specific to the problem is used as a guide. A suitable heuristic makes an informed search significantly more efficient than an uninformed one. There are few dedicated algorithms for informed search of a list; one of them is a hash table with a hash function based on the problem at hand. Most informed search algorithms are extensions of tree search. Like uninformed algorithms, they can also be used for graphs.
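One common informed strategy is greedy best-first search, sketched below under the assumption that the caller supplies a heuristic(vertex) function estimating the distance to the goal (the heuristic and the adjacency-list graph here are placeholders, not part of the original text):

```python
import heapq

def greedy_best_first(graph, start, goal, heuristic):
    """Always expand the vertex the heuristic rates closest to the goal."""
    seen = {start}
    frontier = [(heuristic(start), start)]   # priority queue keyed by heuristic
    while frontier:
        _, vertex = heapq.heappop(frontier)
        if vertex == goal:
            return True   # goal reached
        for neighbor in graph[vertex]:
            if neighbor not in seen:
                seen.add(neighbor)
                heapq.heappush(frontier, (heuristic(neighbor), neighbor))
    return False          # goal unreachable from start
```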

Adversarial search

Such problems have unique characteristics. Computer game programs and other forms of artificial intelligence often use search algorithms such as the minimax algorithm, search-tree pruning, and alpha-beta pruning. In a game like chess, there is a tree containing all possible moves by both players and the outcomes of the combinations of these moves; we can search this tree to find the most effective strategy for the game.
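A sketch of minimax with alpha-beta pruning; moves(state) and evaluate(state) are hypothetical stand-ins for a real game engine's move generator and position evaluator:

```python
def alphabeta(state, depth, alpha, beta, maximizing, moves, evaluate):
    """Minimax with alpha-beta pruning over a game tree.

    moves(state) yields successor states; evaluate(state) scores a position.
    Both are assumptions, not part of any particular game library.
    """
    successors = list(moves(state))
    if depth == 0 or not successors:
        return evaluate(state)        # leaf or depth limit: score the position
    if maximizing:
        value = float("-inf")
        for child in successors:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, moves, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:         # prune: the minimizer avoids this branch
                break
        return value
    value = float("inf")
    for child in successors:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, moves, evaluate))
        beta = min(beta, value)
        if alpha >= beta:             # prune: the maximizer avoids this branch
            break
    return value
```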

FSCAN algorithm

FSCAN is a disk scheduling algorithm that determines the movement of the disk arm and head in servicing read and write requests. During a scan, all requests already in the primary queue are serviced, and all new requests are placed in a secondary queue. Servicing of new requests is therefore delayed until all of the old requests have been processed. When the scan finishes, the contents of the secondary queue are moved to the primary queue and the process starts over.

Like N-Step-SCAN, the F-SCAN algorithm prevents "arm stickiness," which can occur in other algorithms such as SSTF, SCAN, and C-LOOK. Arm stickiness occurs when a steady stream of requests for the same track causes the disk arm to stay on that track, starving requests for other tracks. Because F-SCAN separates requests into two queues, new requests encountered mid-scan go into the waiting queue, the arm continues its sweep toward the outer track, and so it never gets stuck.

There is an obvious trade-off: requests in the waiting queue must wait longer to be serviced, but in exchange F-SCAN is fairer to all requests.
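A toy sketch of the two-queue idea (the class and method names are illustrative; real disk schedulers live inside the operating system, not in application code):

```python
from collections import deque

class FSCAN:
    """Two-queue scheduling sketch: requests that arrive during a scan
    wait in a secondary queue until the current scan completes."""

    def __init__(self):
        self.active = deque()    # requests serviced by the current scan
        self.pending = deque()   # requests that arrived mid-scan
        self.scanning = False

    def request(self, track):
        # New requests go to the pending queue while a scan is in progress.
        (self.pending if self.scanning else self.active).append(track)

    def scan(self):
        self.scanning = True
        order = sorted(self.active)   # sweep the arm in one direction
        self.active = self.pending    # swap queues for the next scan
        self.pending = deque()
        self.scanning = False
        return order
```

For example, requests queued before a scan are serviced in sorted track order, while requests arriving mid-scan simply wait for the next sweep.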

Depth-first search algorithm

In graph theory, depth-first search (abbreviated DFS) is a graph traversal algorithm used to traverse or search a tree or a graph. As the name suggests, the depth-first strategy for graph traversal is to go as deep into the graph as possible. The algorithm starts from the root (in unrooted graphs or trees, an arbitrary vertex is chosen as the root). At each step, it checks the neighbors of the current vertex through its outgoing edges, in order. As soon as it encounters a neighbor that has not yet been seen, the algorithm is run recursively with that neighbor as the current vertex.

If all neighbors have already been seen, the algorithm backtracks, and execution continues from the vertex from which we reached the current vertex. In other words, the algorithm goes as deep as possible and turns back when faced with a dead end. This process continues until all vertices reachable from the root have been seen. From a practical point of view, a stack is used to implement the algorithm.

In this way, every time we enter an unseen vertex, we push it onto the stack, and we pop it when backtracking; therefore, throughout the algorithm, the vertex on top of the stack is the one currently being explored.

This simple solution of "save the vertices we've seen so far" doesn't always work. When we search large graphs that cannot be stored due to memory limitations, if the path traversed by the algorithm starting from the root becomes too long, the algorithm runs into trouble, because we may not have enough memory for it. Implementation details follow.
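A minimal iterative sketch using an explicit stack, assuming the same adjacency-list representation as in the breadth-first example:

```python
def dfs(graph, root):
    """Iterative depth-first search; the explicit stack replaces recursion."""
    visited = set()
    stack = [root]                     # vertices awaiting exploration
    order = []
    while stack:
        vertex = stack.pop()           # take the vertex on top of the stack
        if vertex in visited:
            continue
        visited.add(vertex)
        order.append(vertex)
        # Reversing keeps the left-to-right neighbor order of the recursive form.
        for neighbor in reversed(graph[vertex]):
            if neighbor not in visited:
                stack.append(neighbor)
    return order
```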

Binary search algorithm

The binary search algorithm is a technique for finding a value in a sorted set of numbers. The method halves the search range at each step, so it either finds the target or determines that the searched value is not in the list. Binary search is only used on sorted arrays. In this method, the sought value is compared with the middle cell of the array; if it is equal to that cell, the search ends.

If the searched element is greater than the middle cell, the search continues in the upper half of the array; otherwise, the search continues in the lower half (we have assumed the collection is sorted in ascending order). This procedure continues until the desired element is found or all candidate cells have been examined. Finding the index of a particular element in a sorted list is helpful because other related information can be obtained using that index.
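A minimal sketch of the procedure, assuming an array sorted in ascending order:

```python
def binary_search(array, target):
    """Halve the search range each step; array must be sorted ascending."""
    low, high = 0, len(array) - 1
    while low <= high:
        mid = (low + high) // 2
        if array[mid] == target:
            return mid          # found: return the index
        if target > array[mid]:
            low = mid + 1       # search the upper half
        else:
            high = mid - 1      # search the lower half
    return -1                   # target is not in the array

print(binary_search([2, 5, 8, 12, 23], 12))   # 3
```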

String search algorithm

String search (string matching) algorithms are an essential category of algorithms on strings. The main problem is finding the places where one or more searched patterns occur in a large text. For each execution environment and set of problem conditions, those conditions should be examined and the best algorithm chosen for implementation and use.

A specific algorithm should be chosen to get the best performance under each of these conditions. Also, in a variant of this problem, the pattern is given as a regular expression, and the positions of all substrings that match that regular expression should be returned as output.
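For contrast with the linear-time KMP algorithm above, here is a sketch of the naive quadratic approach, together with the regular-expression variant via Python's standard re module:

```python
import re

def naive_search(text, pattern):
    """Check every alignment of pattern against text: O(len(text) * len(pattern))."""
    return [i for i in range(len(text) - len(pattern) + 1)
            if text[i:i + len(pattern)] == pattern]

print(naive_search("abracadabra", "abra"))   # [0, 7]

# The regular-expression variant: report where each match starts.
print([m.start() for m in re.finditer("a.ra", "abracadabra")])   # [0, 7]
```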

Dijkstra's algorithm

In graph theory, Dijkstra's algorithm is one of the graph traversal algorithms; it solves the single-source shortest path problem for weighted graphs with no negative-weight edges and, by building a shortest-path tree, finds the shortest path from the root to every vertex of the graph.

The algorithm can also find the shortest path from the origin to a single destination vertex: during execution, it simply stops as soon as the shortest path from the source to the destination has been found.

Dijkstra's algorithm computes single-source shortest paths and is similar in structure to Prim's algorithm. If the graph has an edge with negative weight, the algorithm does not work correctly, and other algorithms such as Bellman-Ford, whose time complexity is higher, must be used.
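A minimal sketch using a priority queue (heap), assuming the graph maps each vertex to a list of (neighbor, weight) pairs with non-negative weights:

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source over a non-negatively weighted graph."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, vertex = heapq.heappop(heap)
        if d > dist.get(vertex, float("inf")):
            continue                   # stale entry: vertex already settled cheaper
        for neighbor, weight in graph[vertex]:
            new = d + weight
            if new < dist.get(neighbor, float("inf")):
                dist[neighbor] = new   # relax the edge
                heapq.heappush(heap, (new, neighbor))
    return dist

print(dijkstra({"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}, "a"))
# {'a': 0, 'b': 2, 'c': 3}
```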

Flood fill

Flood fill is an algorithm that determines the area connected to a given cell in a multidimensional array. For example, this algorithm is used by the "bucket" fill tool in painting programs to fill connected, same-colored areas with a different color, and in games such as Go or Minesweeper to mark cleared pieces.

When applied to an image to fill a specific, bounded area, the algorithm is known as boundary fill. The flood fill algorithm accepts three parameters as input: the starting cell, the target color, and the replacement color.

The algorithm finds all cells in the array that are connected to the starting cell by a path of the target color, then changes them to the replacement color. There are many ways to structure a flood fill algorithm, but they all use a stack or queue data structure, either explicitly or implicitly.
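A minimal queue-based sketch, assuming the image is given as a list of lists of color values:

```python
from collections import deque

def flood_fill(grid, start, target, replacement):
    """Replace target-colored cells connected to start with replacement."""
    if target == replacement:
        return grid                    # nothing to do, and avoids an endless loop
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if 0 <= r < rows and 0 <= c < cols and grid[r][c] == target:
            grid[r][c] = replacement   # recolor, then spread to the 4 neighbors
            queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return grid

print(flood_fill([[1, 1, 0], [0, 1, 0]], (0, 0), 1, 2))
# [[2, 2, 0], [0, 2, 0]]
```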