
What Role Do Content Delivery Network Servers Play In Improving The Speed Of Information Access?


As a website administrator, you may or may not be familiar with the terms origin server and network edge.

It can be hard for people unfamiliar with networking concepts to tell these two kinds of servers apart, but if you own a website, a good understanding of both terms will help you manage it better.

Once you have the proper technical information about the origin and edge servers, you can make more accurate decisions about the type of content to be uploaded to the site.

In this article, we will explain origin and edge servers and how they work in simple language, and then examine the applications of each.

What is the origin server?

The origin server, sometimes called the central server, is a computer system that receives client requests, processes them by running dedicated applications, and returns responses. In other words, it is responsible for processing incoming traffic. These servers store the original version of a website's pages and are responsible for delivering that content to the website's users.

As we mentioned, the origin server hosts a website's files. For this reason, it receives and responds to many requests from users' browsers to view the website's content throughout the day. When a user opens a page of the website, their browser sends a request to the origin server to fetch the content.

The time it takes for the user to receive a response, which depends largely on the geographic distance between the user and the origin server, is called "latency."

The user's browser also communicates with the origin server over the SSL/TLS protocols to ensure that attackers cannot eavesdrop on or tamper with information during a session. The handshakes these protocols require add to the request-response round-trip time (RTT) as well.

RTT is the time between the browser sending a request and receiving the response. In addition to physical distance, the amount of traffic the server is processing, the number of nodes simultaneously sending requests to the origin server, and any intermediate web servers acting as middlemen all increase RTT. Figure 1 shows the first and second requests being sent to the origin server.
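The effect of distance on latency can be made concrete with a small back-of-the-envelope sketch. The numbers and function below are illustrative assumptions, not from the article: signals in optical fiber travel at roughly two-thirds the speed of light, and real RTTs are higher still because of routing hops, queuing, and server processing.

```python
# Sketch: lower-bound network latency from physical distance alone.
# Assumes signal propagation in fiber at ~2/3 the speed of light;
# real RTTs are higher due to hops, queuing, and processing time.

SPEED_OF_LIGHT_KM_S = 300_000        # approx. speed of light in vacuum, km/s
FIBER_FACTOR = 2 / 3                 # propagation speed factor in fiber

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time in milliseconds."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000      # there and back, converted to ms

# A user ~9,000 km from an origin server cannot see an RTT below ~90 ms,
# while an edge server ~300 km away could in principle answer in ~3 ms.
print(round(min_rtt_ms(9000)))
print(round(min_rtt_ms(300)))
```

This is exactly the gap that placing edge servers near users is meant to close: no protocol optimization can beat the speed of light over a long path.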

Another critical point about origin servers is that maintaining, managing, updating, and adding or removing roles on the server are the owner's responsibility. An origin server can handle only a fixed, specific amount of traffic and cannot accept or respond to requests beyond that limit.

As a result, if the server's owner has not taken measures to limit traffic, the server may fail to handle some requests or crash under overload. If the origin server crashes or slows down, access to the website's content may be degraded or interrupted. So whenever a site you visit daily starts loading with a delay, one possible reason is overload on its origin server.
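One common way to impose such a traffic limit is a token-bucket rate limiter, which lets a server absorb short bursts while capping the sustained request rate. The sketch below is a minimal illustration with made-up numbers, not the mechanism of any particular product.

```python
# Sketch: a token-bucket rate limiter, one common way an operator can
# cap the request rate an origin server accepts before it overloads.
# The class name and the rate/capacity values are illustrative only.

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate                 # tokens refilled per second
        self.capacity = capacity         # maximum burst size
        self.tokens = float(capacity)    # start with a full bucket
        self.last = 0.0                  # timestamp of the last refill

    def allow(self, now: float) -> bool:
        # Refill tokens for the time elapsed, then spend one per request.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                     # over the limit: reject, don't crash

bucket = TokenBucket(rate=2, capacity=5)          # 2 req/s, bursts of 5
results = [bucket.allow(now=0.0) for _ in range(7)]
print(results.count(True))                        # 5 pass, 2 are rejected
```

Rejecting excess requests cleanly is the point: a server that sheds load stays up, while one that accepts everything eventually slows down or crashes for all users.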

What is an edge network?

A diagram of the networks deployed in data centers shows many different devices connected according to various architectures and topologies. If a network wants to connect to another network, it needs a gateway or bridge to transfer traffic from one place to the other. In computer networks, the bridge that connects two networks is called an "edge," and the hardware devices that perform this bridging are called edge devices. Edge devices can be routers and switches placed at IXPs (Internet Exchange Points) that allow different networks to connect and exchange traffic.

What is Edge Server?

An edge server is a computer system placed at the network's edge that acts as a connection point between different networks. In a content delivery network, the edge server caches website content so it can deliver information to users with lower latency and shorter page load times.

Before we examine how an edge server works, let us look at how different networks are connected on a local and global scale. In typical home or business networks, devices such as smartphones, computers, and laptops are connected to the network and to each other in a topology called hub-and-spoke. In this architecture, all devices sit on one local network, and each machine is connected to a central router that acts as the edge device for communicating with the other devices and with other local networks.

In this architecture, an edge device must be used to create a connection point between network A and network B. The same model is applied on a global scale to connect large networks.

To be more precise, the infrastructure of the Internet is also based on this model. When a connection from a local network has to cross the Internet to reach another local network, it passes through various hops before finally establishing a connection between the two networks, A and B.

As the number of edge devices between networks increases, sharing network traffic becomes more complex, and the bottleneck problem arises.

To understand this better, imagine that every network is a circle, and the place where the circles touch is the network's edge. For a local network to communicate with another network over the Internet, traffic must pass through several networks and pieces of edge equipment. As the distance between two networks increases, the communication request must cross more networks. Typically, such a request passes through various Internet service providers and their underlying infrastructure on its way to the destination.

In such a situation, if there are no content delivery network servers at the edge, the connection between source and destination takes a slower, more roundabout route. This is the same problem users experienced nearly three decades ago, when they waited a minute or more for a website to open.

For this reason, content delivery network providers place their servers in many locations, and the placement of edge servers between various networks is more critical than that of other servers. These edge servers connect to many different networks and allow traffic to be exchanged between them faster. Without a content delivery network, even short-distance local traffic may have to travel a long way to reach its destination. For example, clients A and B, located across the street from each other, might have to traverse the network of an entire country to communicate. In such a situation, sending requests and receiving responses takes a long time, and bandwidth is wasted.

What is the difference between the origin and an edge server?

The origin server hosts the original version of a website's files in a fixed location. By contrast, edge servers are numerous and spread across different geographic areas. Edge servers are responsible for temporarily storing (caching) content, processing requests quickly, and serving cached content to end users.

More precisely, edge servers reduce the traffic load on origin servers and contact the origin server only when they need to download new, uncached content. Content delivery networks thus give network experts a ready-made way to reduce the load on the origin server. These networks are essential in reducing loading time and speeding access to content, and they manage a significant part of the content delivery process. Of course, this does not mean they take over all the tasks of the origin servers.
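The "contact the origin only for uncached content" behavior is essentially a cache-aside lookup. The sketch below models it with an in-memory dictionary; `origin_fetch` and the counter are stand-ins invented for illustration, not a real CDN API.

```python
# Sketch: an edge server's cache-aside lookup. The edge answers from its
# cache when it can and contacts the origin only on a miss, which is how
# CDNs cut origin traffic. origin_fetch stands in for a real HTTP request.

origin_hits = 0

def origin_fetch(path: str) -> str:
    global origin_hits
    origin_hits += 1                    # count round trips to the origin
    return f"<content of {path}>"

cache: dict[str, str] = {}

def edge_get(path: str) -> str:
    if path not in cache:               # cache miss: go to the origin once
        cache[path] = origin_fetch(path)
    return cache[path]                  # later requests are served locally

for _ in range(100):
    edge_get("/styles.css")
print(origin_hits)                      # the origin saw 1 of 100 requests
```

The ratio of cache hits to origin fetches is exactly the traffic reduction the article describes: here one origin round trip serves a hundred user requests.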

Origin servers hold the server-side master code, critical logic, and the databases used for authentication: information that edge servers and content delivery networks are not supposed to have. For this reason, how effective a content delivery network can be depends on the services it provides and on the technical characteristics of the website.

In general, static resources such as CSS files, images, static JavaScript files, and HTML files can be cached on content delivery network servers; serving static content from edge servers can cut the bandwidth the end user draws from the origin roughly in half. Dynamic content, such as the HTML pages of WordPress and similar systems, can also be stored on some content delivery networks under certain conditions, which significantly accelerates page loading.
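The static/dynamic split above can be sketched as a simple cacheability rule. Note the hedge: real CDNs decide from `Cache-Control` and similar response headers, not file extensions; the extension list here is only a toy illustration of the idea.

```python
# Sketch: a naive rule for which resources an edge server may cache.
# Real CDNs honor Cache-Control/Expires headers rather than extensions;
# this list is an illustration of the static vs. dynamic split only.

STATIC_EXTENSIONS = {".css", ".js", ".png", ".jpg", ".svg", ".html"}

def is_cacheable(path: str) -> bool:
    """Treat known static file types as safe to cache at the edge."""
    return any(path.endswith(ext) for ext in STATIC_EXTENSIONS)

print(is_cacheable("/assets/site.css"))      # static: safe to cache
print(is_cacheable("/api/account/login"))    # dynamic, per-user: do not cache
```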

In this case, end-user traffic is significantly reduced. This is why some Iranian websites announce to their users that their traffic is charged at half price (half-price traffic comes with other special conditions that are beyond the scope of this article).

How do the origin and edge servers communicate with each other?

The origin and edge servers communicate when the origin server delivers content to the edge server. The connection between the two is based on a Push or Pull architecture. In the Push mechanism, content on the content delivery network is updated whenever the origin servers are updated. This architecture is used less often than the Pull method, in which the content delivery network automatically fetches new content from the origin servers.
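The two architectures can be contrasted with a minimal model. Both stores below are plain dictionaries standing in for real servers, an assumption made purely for illustration: in Push the origin writes to the edge when content changes, while in Pull the edge fetches from the origin only on a cache miss.

```python
# Sketch contrasting Push and Pull content delivery, using dictionaries
# as stand-ins for the origin server and an edge server's cache.

origin = {"/index.html": "v1"}
edge_cache: dict[str, str] = {}

def push_update(path: str, content: str) -> None:
    """Push: the origin proactively sends updated content to the edge."""
    origin[path] = content
    edge_cache[path] = content          # edge is refreshed immediately

def pull_get(path: str) -> str:
    """Pull: the edge fetches from the origin only on a cache miss."""
    if path not in edge_cache:
        edge_cache[path] = origin[path]
    return edge_cache[path]

push_update("/index.html", "v2")
print(pull_get("/index.html"))          # already at the edge: no origin fetch
```

The trade-off is visible even in this toy: Push keeps edges fresh at the cost of update traffic on every change, while Pull defers work until a user actually asks for the content.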

The role of edge and origin servers in delivering content to the end user depends on whether the content is static or dynamic. For example, imagine a login system in which the user must complete an authentication process before accessing a service, such as a download link.

Typically, a web page's resources include static files such as HTML pages, CSS files, images, and JavaScript libraries. These files do not change per user and look the same to every visitor, so they can be stored directly on the edge server.

In this case, the content is provided to end users without the need to send a request to the origin server and consume bandwidth.

When the user enters their login information and presses the login button, dynamic content containing that user's unique information is generated on the origin server and must be sent to the edge server. The edge server requests this dynamic content from the origin server before it can deliver it. Of course, the origin server must first validate the user: it compares the submitted login details with the records in its databases, confirms them, and only then sends the account information to the edge server.

For a content delivery network to function smoothly, edge servers must be the designated destination for incoming HTTP requests. To achieve this, the website administrator changes the site's DNS records so that incoming requests are directed to the edge servers. To check whether content was delivered to the user from the edge server, inspect the resource's response headers.
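Checking those response headers can be sketched as below. The header names vary by provider (for example `X-Cache`, `CF-Cache-Status`, or `Age`), and the example response dictionary is made up, so treat this as an illustration of the idea rather than a universal check.

```python
# Sketch: reading a response's headers to tell whether content came from
# an edge cache. Header names differ between CDN providers; the example
# response below is invented for illustration.

def served_from_edge(headers: dict[str, str]) -> bool:
    # Normalize header names, then look for common cache-status markers.
    h = {k.lower(): v for k, v in headers.items()}
    if "hit" in h.get("x-cache", "").lower():
        return True
    if h.get("cf-cache-status", "").upper() == "HIT":
        return True
    return False

example = {"X-Cache": "HIT from edge-fra1", "Age": "120"}
print(served_from_edge(example))        # this response came from the cache
```

In practice you would look at these headers in the browser's developer tools (Network tab) after loading the page through the CDN.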

Can content distribution networks protect the origin server?

Because origin servers host websites' vital information, they are constantly exposed to attacks, and their data must be protected as well as possible. Content delivery networks can protect origin servers from malicious incoming traffic in several ways.

Among the standard mechanisms in this field, the following should be mentioned:

  • Inspecting incoming traffic: content delivery networks inspect requests received over HTTP and HTTPS to identify and filter application-layer attacks and various attack vectors, such as SQL injection or cross-site scripting, before they reach origin servers. This matters because hackers use distributed denial-of-service attacks executed at the application layer to make the origin server unavailable by flooding it with direct traffic.
  • Hiding the IP addresses of origin servers: another useful capability of content delivery networks is concealing the origin servers' IP addresses. The advantage of this method is that it prevents attacks aimed directly at the server's IP. The technique is used against network-layer distributed denial-of-service attacks: hiding the origin server's real IP address and resolving the domain to the content delivery network's IPs plays an influential role in countering this attack model.
  • Managing traffic growth: content delivery networks are essential in managing incoming website traffic. By distributing traffic properly across the network's edge servers, they prevent heavy traffic from reaching origin servers directly and disrupting their performance. When the traffic sent to origin servers grows too large, their performance degrades and they can no longer serve requests.
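The traffic-inspection idea in the first bullet can be sketched as a toy request filter. Real web application firewalls use full parsers and large rule sets; the two regular expressions below are invented, deliberately naive examples of the kind of pattern a CDN might flag, not an actual defense.

```python
# Sketch: a toy request filter in the spirit of the inspection CDNs do at
# the edge. Real WAF rule sets are far more sophisticated; these two
# patterns are illustrative only and would be trivial to evade.

import re

SUSPICIOUS_PATTERNS = [
    re.compile(r"(?i)union\s+select"),   # classic SQL injection probe
    re.compile(r"(?i)<script\b"),        # naive cross-site scripting marker
]

def should_block(query_string: str) -> bool:
    """Flag a request whose query string matches a known-bad pattern."""
    return any(p.search(query_string) for p in SUSPICIOUS_PATTERNS)

print(should_block("id=1 UNION SELECT password FROM users"))  # flagged
print(should_block("id=42&page=2"))                           # allowed
```

Because this filtering happens on the edge servers, malicious requests are dropped before they ever consume the origin server's bandwidth or CPU.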