Server Virtualization: The Best Patterns And Worst Mistakes
Virtualization has become a popular way to manage data centers, and building virtual data centers on top of physical infrastructure has several benefits.
Multiple virtual servers can run on a single piece of hardware, significantly reducing hardware costs. If a server fails, redeploying the virtual machine is enough; no hardware reconfiguration is required.
As a result, service downtime is short. Another advantage of this approach is the ability to run virtual machines with different workloads and migrate them to the cloud.
What follows is a discussion of the views of two experts in this field. The original conversation has been lightly edited to make the content coherent and easier to read.
Differences with the past
The fundamentals of virtualization have not changed much since VMware introduced its Workstation product and the ESX hypervisor about twenty years ago. What has changed is efficiency, which has improved, and demand for storage, which has grown significantly.
Perhaps the most important developments in this field have been in network virtualization and the migration of virtual machines. In the early days, virtual machines were tied to a single physical server.
You virtualized a physical server, and the virtual machines ran only on that server; there was no way to move them to other servers.
As a result, if the physical server crashed, every virtual machine on it went down with it, and you had no way to keep your services running. With the introduction of products such as vMotion, virtual machines could easily migrate between physical servers when needed.
Security concerns
The security of virtualization, and the risk of running multiple virtual machines on shared physical infrastructure, are perhaps the most critical challenges that were neglected or underestimated in the early days of virtualization.
Although vulnerabilities exist in today's CPU architectures (speculative-execution flaws such as Spectre and Meltdown, for example), patches and microcode updates to mitigate these hardware defects appear quickly. As a result, attackers cannot easily exploit them and need high-level access to the systems themselves to attack virtual machines effectively.
In general, isolating resources and virtual machines dramatically improves security; the problems start when that isolation is implemented poorly. A well-designed virtual environment with network and storage isolation (where required) is highly resistant to attack.
Peter Rittwage, Chief Technical Officer at IntelliSystems, says: “There is always talk of viruses and malware that can attack the hypervisor. I have never encountered such a thing. I think it would be challenging to engineer such a thing.”
Virtualization tools
In most cases, the tools used for virtualization are VMware, Hyper-V, or KVM/Xen. Besides performance, one of the most important factors in choosing a virtualization tool is how much you are willing to spend.
The options span a range of costs, from KVM/Xen at the low end, through Hyper-V, to VMware, the most expensive option.
VMware has excellent management tools and is outstanding at hardware virtualization, but it is costly. If you run a Windows environment and most of your guest machines run Windows Server, Hyper-V will be a good choice.
KVM and Xen are excellent open-source hypervisor platforms but lack comparable management tooling.
There are solutions to this, such as environments like OpenStack or tools like OnApp, but if you do not have enough experience with these tools and with open-source software in general, such solutions will only complicate the design and will not help you much.
Where is virtualization helpful?
Although most workloads can be virtualized, if you have software that makes heavy, near-exclusive use of CPU, RAM, or storage I/O, it may be best to deploy it as a standalone server within the larger virtualized environment.
You can also configure a physical server to run just one virtual machine. This ensures your software has unhindered access to hardware resources while still taking advantage of the virtualized environment for management and workload migration.
However, not all software copes well with virtual processors and virtual NICs, since some applications are designed to interact with physical hardware. The good news is that as the virtualization market grows, this software is evolving, and we will face fewer such problems in the future.
In general, virtualization gains you little when all of a machine's resources are devoted to a single high-CPU or high-IOPS function, such as a heavily loaded SQL Server. Virtualization helps most where we want to share hardware between multiple workloads.
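As a rough illustration of that decision, the sketch below flags servers whose measured utilization suggests they should keep dedicated hardware; the server names, figures, and thresholds are hypothetical, not taken from the discussion above.

```python
# Minimal sketch, assuming hypothetical utilization figures from monitoring.
# Servers that saturate CPU or disk I/O are better left on dedicated hardware
# (or given a physical host of their own); lightly used servers are good
# candidates for consolidation onto a shared hypervisor.

servers = {
    # name: (avg_cpu_percent, sustained_iops)
    "web01": (12, 150),
    "web02": (9, 120),
    "sql01": (78, 9000),   # heavily loaded SQL Server
}

CPU_LIMIT = 60     # % average CPU above which we keep the box dedicated
IOPS_LIMIT = 5000  # sustained IOPS above which shared storage would suffer

for name, (cpu, iops) in servers.items():
    dedicated = cpu > CPU_LIMIT or iops > IOPS_LIMIT
    verdict = "dedicate hardware" if dedicated else "consolidate onto a shared host"
    print(f"{name}: cpu={cpu}%, iops={iops} -> {verdict}")
```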
The biggest challenge of virtualization
The biggest challenge is still dividing and sharing resources between the infrastructure and the software running on it. Inevitably, some parts of your infrastructure will take precedence over others, and designing a virtualized platform means striking a balance between your processing resources.
You may still run into bottlenecks in parts of your infrastructure that hurt performance, but a good design anticipates them so that they have as little negative impact on your software as possible.
You should always estimate future workload, because the networks you have already built cannot be quickly adapted to a new virtualized environment. There are also some essential things to keep in mind when it comes to storage.
Suggested solutions
Technical measures such as SSD caching inside the storage array or clustered storage can alleviate some of these problems, but each has drawbacks that must be weighed. One of the best ways to reduce the pain is to carefully evaluate your existing physical servers and choose the right approach to virtualizing your infrastructure.
Before deciding on hardware or a virtualization approach, you need to know how much bandwidth each server uses to exchange data over the WAN, how much CPU and memory it consumes under typical and peak loads, and how much disk I/O it generates.
By having such information before implementation, you can make more accurate decisions.
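As a rough sketch of how those measurements might be rolled up into a host size, the following uses made-up per-server figures and an assumed 20% headroom factor; none of the numbers come from the article.

```python
# Minimal sketch, assuming per-server peak measurements gathered from a
# monitoring window. All figures and the headroom factor are hypothetical.

servers = [
    # (name, peak_cpu_cores, peak_ram_gb, peak_disk_iops, wan_mbps)
    ("web01", 2, 8, 400, 50),
    ("app01", 4, 16, 900, 20),
    ("db01",  8, 32, 6000, 10),
]

HEADROOM = 1.2  # leave ~20% spare capacity for spikes and hypervisor overhead

needed_cores = sum(s[1] for s in servers) * HEADROOM
needed_ram   = sum(s[2] for s in servers) * HEADROOM
needed_iops  = sum(s[3] for s in servers) * HEADROOM
needed_wan   = sum(s[4] for s in servers) * HEADROOM

print(f"Host needs at least {needed_cores:.0f} cores, {needed_ram:.0f} GB RAM, "
      f"{needed_iops:.0f} IOPS, and {needed_wan:.0f} Mb/s of WAN bandwidth")
```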
Common Mistakes in Virtualization
Design errors include “unbalanced resource allocation,” such as pairing 24 processing cores with only 64 GB of memory. In a virtual environment, memory (RAM) is not shared between virtual machines the way CPU time is, so memory may run out sooner than processing resources. Under-sizing storage relative to actual needs is another issue to keep in mind.
Accurately estimating storage capacity may be even more critical than estimating processing capacity, because storage costs climb faster than the cost of processing cores.
If you have many high-transaction databases and want to virtualize them, you will need a lot of disk I/O, which means a large disk array, which in turn means high cost.
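To make the imbalance concrete, here is a small illustrative calculation; the per-VM profile, the overcommit ratio, and the host specs are assumptions for the sake of the example.

```python
# Minimal sketch: check whether a host's core-to-RAM ratio matches the guests
# it is expected to carry. All numbers are hypothetical.

host_cores = 24
host_ram_gb = 64

# A typical small guest in this example: 2 vCPUs and 8 GB of RAM.
vm_vcpus = 2
vm_ram_gb = 8

CPU_OVERCOMMIT = 4.0  # vCPUs are routinely oversubscribed; RAM generally is not

vms_by_cpu = int(host_cores * CPU_OVERCOMMIT / vm_vcpus)
vms_by_ram = int(host_ram_gb / vm_ram_gb)

print(f"CPU allows roughly {vms_by_cpu} such VMs, but RAM allows only {vms_by_ram}.")
if vms_by_ram < vms_by_cpu:
    print("Memory is the bottleneck: the host runs out of RAM long before CPU.")
```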
In addition, virtual environments are often set up with a separate VLAN, with IP addressing managed on the hypervisor node, for each guest virtual machine.
In general, this is unnecessary and only adds to the complexity of managing the platform.
Except in exceptional cases, keep the number of networks as small as possible and manage the separation (isolation) of virtual machines on the network with access lists or firewall rules. We also see a great many virtual switches in virtualized environments, and it is rarely necessary to use that many.
Even if the environment requires many VLANs, you usually do not need a separate virtual switch for each VLAN; well-designed VLANs and virtual switches usually provide sufficient network isolation.
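As a hedged illustration of separation handled with a few VLANs and simple rules instead of one switch per guest, the sketch below uses invented VM names, VLAN IDs, and allowed flows.

```python
# Minimal sketch (hypothetical names and VLAN IDs): one virtual switch,
# a handful of VLANs, and default-deny rules between tiers instead of a
# separate virtual switch for every guest or VLAN.

vlans = {"web": 10, "app": 20, "db": 30}
vms = {"web01": "web", "web02": "web", "app01": "app", "db01": "db"}

# Place each guest on its tier's VLAN on a single shared virtual switch.
for vm, tier in vms.items():
    print(f"{vm}: vSwitch0, VLAN {vlans[tier]}")

# Only these tier-to-tier flows are allowed; everything else is denied.
allowed_flows = {("web", "app"), ("app", "db")}
for src in vlans:
    for dst in vlans:
        if src != dst:
            action = "allow" if (src, dst) in allowed_flows else "deny"
            print(f"{action}: VLAN {vlans[src]} ({src}) -> VLAN {vlans[dst]} ({dst})")
```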
Improper configuration of virtual processors, memory, and storage is another common virtualization mistake.
Look to the future
In the future, we will see more network virtualization handled by the network hardware itself. As a result, the physical network infrastructure that supports your virtual infrastructure will integrate better with SDN, scripting, and VLANs.
Another trend is the increasing use of containerization inside virtual machines. Products such as Docker and Kubernetes provide operating-system-level and application-level virtualization within the virtual machine.
Used properly, this has many advantages, such as faster deployment, a more consistent environment, and the ability to move application workloads between virtual machines almost instantly.
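As a small illustration of containers running inside a guest, the sketch below assumes the Docker SDK for Python and a Docker daemon running inside the virtual machine; the image name and command are arbitrary examples.

```python
# Minimal sketch, assuming the Docker SDK for Python ("pip install docker")
# and a Docker daemon running inside the guest virtual machine.
import docker

client = docker.from_env()

# Run a short-lived container; the image and command are arbitrary examples.
output = client.containers.run(
    "python:3.12-slim",
    ["python", "-c", "print('hello from a container inside a VM')"],
    remove=True,
)
print(output.decode())
```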
Final tips
Perhaps one of the most essential pieces of advice for implementing and maintaining a server virtualization project is to plan for growth.
During the design phase, after evaluating the current environment, make sure you have a solid plan for expanding the platform with new hypervisors or additional storage infrastructure, so that these changes have minimal impact on the existing virtualized environment.
Availability is in high demand in virtual environments, so you should be able to add storage capacity or hypervisors to meet that demand without redefining the entire platform architecture.
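One rough way to plan that growth is a back-of-the-envelope headroom estimate; the capacities, usage, and growth rates below are made-up figures for illustration only.

```python
# Minimal sketch with hypothetical figures: estimate how many months of
# headroom remain before storage or memory forces the platform to grow.
import math

capacity       = {"storage_tb": 100, "ram_gb": 1024}
used           = {"storage_tb": 62,  "ram_gb": 700}
monthly_growth = {"storage_tb": 3.0, "ram_gb": 24.0}  # assumed growth rates

for resource in capacity:
    free = capacity[resource] - used[resource]
    months = math.floor(free / monthly_growth[resource])
    print(f"{resource}: roughly {months} months until more capacity is needed")
```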
In addition, you should keep a good backup strategy in your back pocket. Virtualization gives you better resilience to the failure of a physical infrastructure component, but you should always expect trouble.