
What Is Edge Computing And How Is It Different From Cloud And Fog Computing?

Today, businesses face an ocean of data, typically generated by IoT sensors and devices spread across many locations.

Sensors generate large amounts of data, and companies must process it constantly. This massive volume of information has transformed how data is managed.

In traditional architectures, data must be sent over the Internet to a centralized data center, processed there, and returned to the source. This works well for limited amounts of data, but it becomes impractical at scale: sending large volumes to distant data centers consumes costly bandwidth, and bandwidth limitations, latency issues, and unpredictable network outages can disrupt the ability to send, receive, and process data.

Businesses have turned to “edge computing” architectures to solve this problem. Edge computing is a distributed architecture in which user data is processed at the network’s edge, close to the source. Statistics show that edge computing is changing how information is processed, and it may bring significant changes to the information technology field in the future.

What is edge computing?

Edge computing moves some storage and computing resources out of the centralized data center and closer to the source that generates the data. Instead of transferring raw data to a central data center for processing and analysis, the work is done at the location that produced the data. That source may be an online store, a manufacturing unit, a utility, or a smart city.

In this architecture, computations such as initial data analysis and early detection of potential equipment or software problems are performed locally. The results are then sent to the data center for more detailed examination (Figure 1).

Figure 1

How does edge computing work?

Edge computing refers to deploying computing equipment at the location, or near the resource, that generates data. In traditional enterprise computing, information is stored on equipment such as servers and made available to users through the local network: the data is stored and processed in the organization’s infrastructure, and the results are sent to the user. This architecture underlies the client-server model of most business applications.

However, the number of Internet-connected devices, and the volume of data those machines generate and businesses use, now exceeds the capacity of traditional data center infrastructure.

Gartner predicts that by 2025, 75% of the data companies interact with will be generated outside the organization. A simple example is the data produced on social networks: it is created by the networks’ users, yet it is highly valuable for business and marketing.

Time-sensitive data, on the other hand, is recorded by equipment such as video surveillance cameras. The footage is sent over the Internet to the operator responsible for monitoring the cameras so that the operator can react to anything suspicious. This approach not only consumes a great deal of bandwidth but also requires the operator to respond quickly to questionable events.

Suppose instead that the image data is analyzed locally by intelligent algorithms and only suspicious cases are forwarded to the operator as a short text message. This saves significant bandwidth and reduces the response time to an incident.

In this case, there is no additional pressure on the Internet or wide area networks, and congestion and disruption are avoided.
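The camera scenario above can be sketched in a few lines of Python. This is a minimal, illustrative example: frames are simulated as lists of pixel values and the "motion score" is a made-up metric, not a real camera or computer vision API.

```python
# Edge-side filtering sketch: analyze "frames" locally and forward only
# short text alerts upstream instead of the raw video. Frames and the
# motion metric are simulated; names are illustrative, not a real API.

def motion_score(prev_frame, frame):
    """Crude motion metric: mean absolute pixel difference between frames."""
    return sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)

def edge_filter(frames, threshold=30):
    """Return short text alerts for frames whose motion exceeds the threshold."""
    alerts = []
    prev = frames[0]
    for i, frame in enumerate(frames[1:], start=1):
        score = motion_score(prev, frame)
        if score > threshold:
            # Only this tiny message crosses the WAN, not the raw frame.
            alerts.append(f"frame {i}: motion score {score:.0f} exceeds {threshold}")
        prev = frame
    return alerts

# Simulated 4-pixel frames: mostly static, with one sudden change.
frames = [[10, 10, 10, 10], [12, 11, 10, 9], [200, 180, 190, 210], [12, 11, 10, 9]]
print(edge_filter(frames))
```

Only the two frames around the sudden change trigger alerts; the static footage generates no upstream traffic at all, which is the bandwidth saving the text describes.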

These issues have pushed information technology architects away from centralized data center designs and toward edge designs, in which storage and computing resources are moved from the data center to a location close to the data source. Edge computing rests on a simple idea: if you cannot bring the data closer to the data center, bring the data center closer to the data.

Edge computing places storage and servers where the data resides. The equipment is deployed in small to medium-sized racks that collect data and process it locally. These racks are fitted with security locks and air conditioning so the equipment inside is not exposed to extreme temperatures, humidity, theft, or vandalism.

These small racks typically handle the standard processing and analysis of business-critical data. Once the calculations are done, they send the results to the main data center for final analysis.

These days, companies, especially retailers, are working on an exciting idea that combines business intelligence (BI) with edge computing. For example, images from in-store surveillance cameras can be combined with actual sales data to present consumers with attractive offers on a product.

More precisely, purchase and product-view data is processed in real time, and attractive offers are then presented to the next buyer, an approach Amazon is experimenting with.

Another practical example is edge-based predictive analytics. Here, the performance of critical facilities and equipment, such as water treatment plants and power plants, is monitored and analyzed so that problems can be identified and the equipment repaired or replaced before a failure occurs.
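The predictive-maintenance idea can be sketched with a moving average over sensor readings. Everything here is hypothetical: the vibration values, the window size, and the warning limit are invented for illustration, not taken from any real plant.

```python
# Edge-based predictive maintenance sketch: compare a short moving
# average of a (simulated) vibration sensor against a warning limit so
# equipment can be serviced before it fails. All values are made up.

from collections import deque

def predict_failure(readings, window=3, limit=7.0):
    """Return indices where the moving average crosses the warning limit."""
    recent = deque(maxlen=window)   # keeps only the last `window` readings
    warnings = []
    for i, value in enumerate(readings):
        recent.append(value)
        if len(recent) == window and sum(recent) / window > limit:
            warnings.append(i)
    return warnings

vibration = [5.1, 5.3, 5.0, 6.2, 7.5, 8.1, 8.4]  # slowly rising trend
print(predict_failure(vibration))
```

Run at the edge, this loop flags the rising trend at the last two readings, so a maintenance alert can be sent long before the raw sensor stream would ever need to leave the site.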

Edge, cloud, and fog computing compared

One of the easiest ways to understand the differences between edge, cloud, and fog computing is to examine what they have in common. Edge computing is closely related to cloud and fog computing, and although these concepts overlap, they are not the same and should not be used interchangeably. Comparing them and understanding their differences helps in applying each more accurately.

All three concepts involve distributed computing and emphasize physically deploying computing and storage resources near the sources that generate data. The main difference among the three is where those resources should be located (Figure 2).

Figure 2

Fog computing environments can process large amounts of special-purpose data generated by sensors or IoT devices that may be deployed over vast geographic areas, beyond the inherent capabilities of edge computing. Examples include smart buildings, smart cities, and networks of smart devices.

Consider a smart city that uses data to track, analyze, and optimize its public transit fleet, urban and intercity services, and long-term urban planning. A single edge deployment cannot handle such a workload, so fog computing defines a set of fog nodes within the area to collect, process, and analyze the data.
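The role of a fog node in that smart-city example can be sketched as a simple aggregation step: it summarizes readings from several edge sensors in its district and forwards only the summary upstream. The district and junction names below are invented for illustration.

```python
# Fog-node sketch: aggregate per-sensor traffic counts from edge devices
# in one district into a single district-level summary for the cloud.
# Names and numbers are hypothetical, not from any real deployment.

def fog_summary(district, sensor_readings):
    """Collapse many per-junction counts into one compact summary record."""
    total = sum(sensor_readings.values())
    busiest = max(sensor_readings, key=sensor_readings.get)
    return {"district": district, "total": total, "busiest": busiest}

# Hourly vehicle counts reported by three edge sensors at road junctions.
readings = {"junction-a": 120, "junction-b": 340, "junction-c": 95}
print(fog_summary("north", readings))
```

Instead of every junction streaming its counts to a central data center, the cloud receives one small record per district, which is the layering the fog model adds between the edge and the cloud.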

Why is edge computing critical?

Different computing tasks require different computing architectures; no single architecture suits every task. Edge computing overcomes some limitations of cloud computing by placing computing and storage resources close to the physical location of the data source so that processing can be done locally.

In general, distributed computing models were invented with decentralization as the goal, but decentralization is challenging and requires high-level monitoring and control. Even so, edge computing is an effective answer to emerging problems that computer networks face, one of which is moving enormous amounts of information.

Since sending, receiving, and processing data takes time, and some applications are time-sensitive, processing information locally and sending only the processed results to data centers requires less bandwidth and less time.

Traffic monitoring and control systems must generate, analyze, and exchange data in real time. Producing and processing this massive volume of data requires a fast, responsive network. A concrete example is self-driving cars, which depend on intelligent traffic-control signals.

Edge and fog computing can solve the three big problems of computer networks: bandwidth, latency, and congestion.

By deploying servers and storage equipment where data is generated, edge computing makes devices on small local area networks more efficient: bandwidth is used exclusively and optimally by the data-generating devices, latency is reduced, and congestion drops to its lowest level. Local storage equipment collects the raw data, while local servers perform the necessary analytics at the network’s edge, or at least pre-process the data, so that only the essential data is sent to the cloud or data centers.

What are the benefits of edge computing?

Edge computing addresses critical infrastructure problems such as bandwidth limitations, excessive latency, and network congestion, and it offers several additional potential benefits as well.

Last word

Edge computing has attracted attention with the ever-increasing expansion of the Internet of Things and the data generated by its devices. Because IoT technologies have not yet been fully embraced and are still evolving, their trajectory will shape how edge computing develops in the future.

For example, for some time now companies have offered another solution for local computing: micro modular data centers (MMDCs). An MMDC is a small but complete data center that can be deployed at the data-generating source to perform computing processes without a full edge architecture. This solution is still in its early stages, and it is not yet clear whether it can serve as an alternative to edge computing.
