

The Best Patterns And Worst Mistakes Around Server Virtualization

Virtualization is a relatively new and popular way to manage data centers. There are several benefits to building virtual data centers on top of physical ones.

Multiple virtual servers can run on a single piece of hardware, which significantly reduces hardware costs. If a server fails, it is enough to redeploy the virtual machine; no hardware reconfiguration is required.

As a result, downtime is short. Another advantage of this approach is the ability to run virtual machines with different workloads and migrate them to the cloud.
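For a concrete sense of what that quick redeployment looks like, here is a minimal sketch using the libvirt Python bindings, assuming a KVM host and a previously saved domain definition; the host URI, guest name, and backup path are hypothetical.

```python
# A minimal sketch, assuming a KVM host reachable over SSH and a domain
# definition saved earlier with dom.XMLDesc(). Host URI, VM name, and the
# backup path are hypothetical.
import libvirt

# Connect to the replacement hypervisor.
conn = libvirt.open("qemu+ssh://replacement-host/system")

# Load the guest definition that was exported alongside the backups.
with open("/backups/webserver01.xml") as f:
    domain_xml = f.read()

# Define and start the VM; no physical reconfiguration is involved.
dom = conn.defineXML(domain_xml)
dom.create()
print("Redeployed", dom.name())
conn.close()
```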

Below is a discussion of the views of two experts in this field. The original text has been lightly edited from its dialog format to make the content coherent and easier to read.

Differences from the past

The basics of virtualization have not changed much since VMware introduced its Workstation product and the ESX hypervisor about twenty years ago. Today, we see gains in efficiency and, in particular, growing demand for storage.

Perhaps the most important developments in this field are in network virtualization and the migration of virtual machines. In the early days, virtual machines were housed on a single physical server.

You virtualized a physical server and ran several virtual machines on that server alone, with no way to transfer them to other servers.

As a result, if the physical server crashed, all the virtual machines on it would go down, and you had no way to save your services. With the introduction of products such as vMotion, it became possible for virtual machines to migrate easily between physical servers when needed.
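vMotion itself is VMware-specific, but the same idea exists across hypervisors; here is a sketch of the equivalent live migration on KVM with the libvirt Python bindings, with hypothetical host and guest names.

```python
# vMotion is VMware's feature; this sketch shows the equivalent live
# migration on KVM with the libvirt Python bindings. Host and guest names
# are hypothetical.
import libvirt

src = libvirt.open("qemu+ssh://host-a/system")
dst = libvirt.open("qemu+ssh://host-b/system")

dom = src.lookupByName("webserver01")

# VIR_MIGRATE_LIVE keeps the guest running while its memory is copied;
# VIR_MIGRATE_PERSIST_DEST saves the definition on the target host.
flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PERSIST_DEST
new_dom = dom.migrate(dst, flags, None, None, 0)
print("Now running on", new_dom.connect().getHostname())
```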

Security concerns

The security of virtualization, and the risks of running multiple virtual machines on shared physical infrastructure, are perhaps the most important challenges that were neglected or underestimated in the early days of virtualization.

Although security vulnerabilities exist today in CPU architectures (the Spectre and Meltdown class of flaws, for example), patches to mitigate these hardware defects become available quickly. As a result, attackers cannot easily exploit these vulnerabilities and need high-level access to the systems themselves to attack virtual machines effectively.

In general, isolating resources and virtual machines greatly increases security; the problems start when the isolation is implemented improperly. A well-designed virtual environment with network isolation and, where required, storage isolation is highly resistant to attack.

Peter Rittwage, IntelliSystems Chief Technical Officer, says: “There is always talk of viruses and malware that can attack the hypervisor. Personally, I have never encountered such a thing. I think it would be very challenging to engineer such an attack.”

Virtualization tools

In most cases, the virtualization tool of choice is VMware, Hyper-V, or KVM/Xen. One of the most important parameters in choosing a virtualization tool, in addition to performance, is how much you are willing to spend.

These options span a range of costs: KVM/Xen is the cheapest, followed by Hyper-V, and finally VMware, which is the most expensive option.

VMware has great management tools and is a masterpiece of hardware virtualization, but it is costly to use. If you run a Windows environment and most of your guest machines run Windows Server, then Hyper-V will be a good choice for you.

KVM and Xen are both great open-source hypervisor platforms, but they lack polished management tools.

There are solutions to this gap, such as environments like OpenStack or tools like OnApp; nevertheless, if you do not have enough experience with these tools and with open-source software, such solutions will only make the design more complicated and will not help you much.

Where is virtualization useful?

Although most workloads can be virtualized, if you have software that makes heavy, exclusive use of CPU, RAM, or storage I/O, it may be best to deploy it on a standalone server within a larger virtualized environment.

You can also configure a physical server to run just one virtual machine. That way, your software has guaranteed access to hardware resources while still benefiting from the advantages of a virtualized environment, such as centralized management and workload migration.

However, note that not all software copes easily with virtual processors and virtual NICs, since it was designed to interact with physical hardware. The good news is that as the virtualization market grows, this software is evolving, and we will face fewer such problems in the future.

In general, you do not need virtualization if you want to dedicate all your resources to a specific function (a high-CPU or high-IOPS workload), such as running a heavily loaded SQL Server. Virtualization helps where we want to share hardware among multiple workloads.
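As a sketch of this dedicated-resources pattern on KVM/libvirt, the following pins each vCPU of a heavyweight guest to its own physical core; the guest name is hypothetical, and the approach assumes the guest has no more vCPUs than the host has cores.

```python
# A sketch of pinning each vCPU of a running, resource-hungry guest to its
# own physical core so it never contends with neighbors. Assumes the guest
# has no more vCPUs than the host has cores; the name is hypothetical.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("sqlserver01")   # must be running for pinVcpu()

host_cores = conn.getInfo()[2]           # active physical CPUs on the host

for vcpu in range(dom.maxVcpus()):
    # cpumap is one boolean per host core; pin vCPU i to core i only.
    cpumap = tuple(core == vcpu for core in range(host_cores))
    dom.pinVcpu(vcpu, cpumap)
```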

The biggest challenge of virtualization

The biggest challenge is still distributing and sharing virtualized resources between the infrastructure and the software running on it. In any case, some items in your infrastructure will take precedence over others. Designing a virtualized platform is really about striking a balance among the processing resources you have.

You may still encounter bottlenecks in parts of your infrastructure that hurt its performance, but a careful design anticipates these bottlenecks so that they have less of a negative impact on your software.

Always estimate future workload, because the networks you have set up will not easily adapt to a new virtualized environment. Storage, too, has some important considerations of its own.

Suggested solutions

While there are technical solutions that can help alleviate some of these problems, such as SSD caching inside the storage array or clustered storage, each of these solutions has drawbacks of its own. One of the best ways to reduce the pain is to carefully evaluate your existing physical servers and choose the right virtualization approach for your infrastructure.

Before deciding on hardware or virtualization, you need to know how much bandwidth each server will use to exchange data on the WAN, how much CPU and memory it consumes during normal workloads and peak loads, and how much disk I/O each server generates.

By having such information before implementation, you can make more accurate decisions.
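A rough sketch of gathering that information with Python's psutil library follows; run it on each candidate server during both normal and peak hours, ideally over much longer windows than shown here.

```python
# A rough sketch of that inventory with the psutil library
# (pip install psutil). Sample during both normal and peak hours,
# ideally over much longer intervals than the 5 seconds used here.
import psutil

cpu_percent = psutil.cpu_percent(interval=5)   # average CPU load over 5 s
mem = psutil.virtual_memory()
disk = psutil.disk_io_counters()
net = psutil.net_io_counters()

print(f"CPU:  {cpu_percent}%")
print(f"RAM:  {mem.used / 2**30:.1f} GiB of {mem.total / 2**30:.1f} GiB")
print(f"Disk: {disk.read_count + disk.write_count} I/O operations since boot")
print(f"Net:  {(net.bytes_sent + net.bytes_recv) / 2**20:.0f} MiB since boot")
```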

Common Mistakes in Virtualization

Design errors include “unbalanced resource allocation”: something like pairing 24 processing cores with only 64 GB of memory. In a virtual environment, memory (RAM) is not shared between virtual machines, so memory may run out sooner than processing resources. A mismatch between storage capacity and actual needs is another issue to consider.

Accurate estimation of storage capacity may be even more important than estimating processing capacity, because storage costs rise faster than the cost of processing cores.
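A back-of-the-envelope illustration of the imbalance mentioned above, with illustrative per-VM sizing and an assumed 4:1 vCPU overcommit ratio:

```python
# Back-of-the-envelope check of the 24-core / 64 GB example. Per-VM sizing
# and the 4:1 vCPU overcommit ratio are illustrative assumptions.
host_cores, host_ram_gb = 24, 64
vm_vcpus, vm_ram_gb = 2, 8

max_by_ram = host_ram_gb // vm_ram_gb         # 8 VMs before RAM runs out
max_by_cpu = (host_cores * 4) // vm_vcpus     # 48 VMs at 4:1 overcommit

print(f"RAM allows {max_by_ram} VMs; CPU allows {max_by_cpu} VMs")
# RAM is the binding constraint here: most of the cores would sit idle.
```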

If you have a large number of high-transaction databases and want to virtualize them, you will need a great deal of disk I/O, which means a large disk array, which in turn means high cost.

In addition, you will often see virtual environments in which a separate VLAN, with IP addressing managed on the hypervisor node, is provisioned for each guest virtual machine.

In general, this is not necessary and only adds to the complexity of platform management.

Except in special cases, keep the network as small as possible and manage the separation of virtual machines on the network with access lists or firewall rules. We also see virtual environments with many virtual switches, where that number of switches may not be necessary.

If the environment requires a large number of VLANs, you usually do not need a separate virtual switch for each VLAN. Well-designed VLANs and virtual switches will give you sufficient network isolation in most cases.
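As one way to implement that separation on KVM/libvirt, a guest's network interface can reference libvirt's built-in clean-traffic filter instead of getting its own VLAN; this fragment is only a sketch, and a real isolation policy would layer your own firewall rules on top.

```python
# A sketch of interface-level separation on KVM/libvirt: the guest stays on
# a shared network, and a filter reference replaces the per-guest VLAN.
# 'clean-traffic' is a filter that ships with libvirt and blocks MAC, IP,
# and ARP spoofing; your own firewall rules would go on top of it.
interface_xml = """
<interface type='network'>
  <source network='default'/>
  <filterref filter='clean-traffic'/>
</interface>
"""
# This fragment belongs inside the <devices> section of the guest's
# domain XML, e.g. when calling conn.defineXML().
```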

Improper configuration of virtual processors, memory, and storage space is among the most common mistakes in virtualization.

Look to the future

In the future, we will see network virtualization move into the network hardware itself. As a result, the physical network infrastructure that supports your virtual infrastructure will integrate better with SDN, scripting, and VLANs.

Another trend is the increasing use of containerization within virtual machines. Products such as Docker and Kubernetes provide operating-system-level virtualization of software inside the virtual machine itself.

Proper use of this approach has many advantages, such as faster deployment, a more consistent environment, and the ability to move software workloads between virtual machines almost instantly.
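A minimal sketch of this pattern from inside a guest VM, using the Docker SDK for Python; the image and port mapping are illustrative.

```python
# A minimal sketch of the container-in-VM pattern using the Docker SDK for
# Python (pip install docker), run from inside a guest VM. The image and
# port mapping are illustrative.
import docker

client = docker.from_env()

# The same containerized service can later be started, unchanged, on
# any other VM that runs a Docker engine.
container = client.containers.run(
    "nginx:alpine",
    detach=True,
    ports={"80/tcp": 8080},   # publish the service on the VM's port 8080
)
print("Started container", container.short_id)
```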

Final tips

Perhaps the most important suggestion for planning, implementing, and maintaining server virtualization projects is to plan for growth.

During the design phase, after evaluating the current environment, make sure you have a solid plan for expanding the platform with new hypervisors or additional storage infrastructure, so that these changes have minimal impact on the running virtualized environment.

Virtual environments face high demands for availability, so you should be able to increase your storage capacity or the number of hypervisors without redefining the entire platform architecture.

In addition, you should have a good backup strategy in your pocket. Although virtualizing the infrastructure gives us better resilience to the failure of any single physical component, we should always expect trouble.
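As one building block of such a strategy, here is a sketch that snapshots every running guest via the libvirt Python bindings; a complete strategy would also copy disk images and domain XML off the host.

```python
# A sketch of one building block of a backup routine: snapshotting every
# running guest via libvirt. Default flags create a full system checkpoint,
# which requires qcow2 disks; a real strategy would also copy disk images
# and domain XML off the host.
import libvirt

conn = libvirt.open("qemu:///system")

for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
    snapshot_xml = (
        f"<domainsnapshot><name>nightly-{dom.name()}</name></domainsnapshot>"
    )
    dom.snapshotCreateXML(snapshot_xml, 0)
    print("Snapshot created for", dom.name())
conn.close()
```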
