Virtualization abstracts the hardware resources of a single physical server (the host) so that it can run dozens of independent virtual machines (VMs). Choosing server hardware for a hypervisor (Proxmox, VMware ESXi, KVM) in 2026 is above all an exercise in consolidation density. Investing in the right architecture allows tight VM “packing” and minimizes energy consumption in the data center.
Key takeaways:
- Unlike database servers, virtualization hosts need very high physical core counts (32 to 128 per socket) rather than high clock speeds.
- In most deployments, the main bottleneck of a virtualization system is random access memory (RAM), so its capacity is calculated with an oversubscription factor of no more than 1.2.
- Software-defined storage (vSAN, Ceph) has largely displaced hardware RAID controllers and requires the hypervisor to have direct access to NVMe drives (HBA mode).
- The minimum network connectivity standard for virtual machine migration (Live Migration) in 2026 is 25GbE with RDMA support.
Basic principles for selecting a virtualization host
Selecting a server for virtualization begins with calculating the required amount of RAM and the total number of virtual processor cores (vCPUs). Type 1 hypervisors run directly on the bare metal, keeping the overhead of managing guest operating systems minimal. The single most important criterion when choosing a hardware platform is its official inclusion in the chosen vendor’s Hardware Compatibility List (HCL).
Any uncertified hardware (especially network cards and HBA controllers) can cause a sudden crash of the entire physical node (ESXi’s infamous Purple Screen of Death). And when a single host fails, the dozens of virtual machines it hosted must be restarted on other cluster nodes.
Choosing a Processor (CPU): Core Count Decides
The main rule for a hypervisor host is to maximize the number of physical cores, since the scheduler spreads the threads of many VMs across them. A base clock of 2.4–2.8 GHz is perfectly acceptable and sufficient for isolated guest operating systems. For dense VM packing, AMD EPYC processors (up to 128/192 cores per socket) or Intel Xeon Scalable processors with high thread density are ideal.
Overcommitting processor cores is a standard virtualization operating mode. Modern hypervisors comfortably handle a 3:1 ratio of virtual cores (vCPUs) to physical cores (pCPUs) for standard office workloads. For heavy databases inside a VM, however, the ratio should be kept strictly at 1:1.
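As a rough illustration, the sketch below estimates how many physical cores a host needs for a given VM mix; the workload classes, ratios, and VM counts are assumptions for the example, not vendor figures.

```python
import math

# Allowed vCPU:pCPU ratios per workload class (assumed for this example).
OVERCOMMIT = {"office": 3.0, "database": 1.0}

def physical_cores_needed(vms):
    """vms: iterable of (workload_class, vcpu_count) pairs."""
    return math.ceil(sum(vcpus / OVERCOMMIT[wclass] for wclass, vcpus in vms))

# 40 office VMs with 4 vCPUs each plus 4 database VMs with 8 vCPUs each:
fleet = [("office", 4)] * 40 + [("database", 8)] * 4
print(physical_cores_needed(fleet))  # 160/3 + 32/1 -> 86 pCPUs
```

Note that the result is guest capacity only; it is good practice to leave a few cores free for the hypervisor and its I/O stack.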
“In 2026, data center economics dictates its own rules: it’s more cost-effective to buy one dual-processor server with 128 physical cores than to maintain four older servers with 32 cores each. This saves rack space, reduces cooling costs, and lowers hypervisor licensing costs.”
Random Access Memory (RAM): The Hypervisor’s Main Resource
RAM exhaustion is the primary cause of host-level swapping (paging VM memory out to disk), which instantly paralyzes every virtual machine on the host. RAM capacity should be calculated as the total memory allocated to all VMs plus roughly 30% overhead for the hypervisor itself and fault tolerance. In 2026, the standard for an enterprise-class virtualization host is 512 GB to 2 TB of DDR5 memory with error-correcting code (ECC).
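The calculation from the previous paragraph is simple enough to write down. A minimal sketch, assuming the 30% overhead figure above (function and variable names are illustrative):

```python
def host_ram_gb(vm_allocations_gb, overhead=0.30):
    """Sum of RAM allocated to all VMs plus hypervisor/fault-tolerance overhead."""
    return sum(vm_allocations_gb) * (1 + overhead)

# Example: ten 32 GB VMs and twenty 16 GB VMs.
print(host_ram_gb([32] * 10 + [16] * 20))  # (320 + 320) * 1.3 = 832.0 GB
```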
Hypervisor memory-saving mechanisms (memory ballooning and page deduplication) should not be treated as a way to economize on physical memory. They are last-resort safety valves: once they kick in, guest performance inevitably drops.
Disk Subsystem: From RAID to SDS
Virtual desktop infrastructure (VDI) and containers generate enormous volumes of random input/output operations (IOPS). Classic hardware RAID controllers with SATA drives create an inevitable bottleneck during boot storms, when many guest operating systems start at once. Modern clusters are therefore built on all-flash arrays with NVMe PCIe 4.0 or 5.0 drives.
By 2026, software-defined storage (SDS) such as VMware vSAN, Ceph, or Microsoft Storage Spaces Direct has become the dominant storage technology. It requires HBA (Host Bus Adapter) controllers in the server, which pass the disks through to the hypervisor directly, with no hardware RAID layer in between.
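One practical consequence for sizing: replication eats raw capacity. A rough sketch, assuming 3-way replication (a common production policy for vSAN and Ceph) and ~30% free-space slack to stay clear of near-full thresholds; both figures are assumptions, so check them against your own storage policy.

```python
def usable_tb(nodes, drives_per_node, drive_tb, replicas=3, slack=0.30):
    """Approximate usable SDS capacity after replication and free-space slack."""
    raw = nodes * drives_per_node * drive_tb
    return raw / replicas * (1 - slack)

# Example: a 4-node cluster, each node with 4x 3.84 TB NVMe drives.
print(round(usable_tb(4, 4, 3.84), 1))  # 61.44 TB raw -> ~14.3 TB usable
```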
Pros & Cons (Software-Defined Storage vs. Classic SAN)
- SDS pros: linear scaling as nodes are added, no separate expensive storage array to purchase, and disk management directly from the hypervisor interface.
- SDS cons: it consumes some host CPU time for checksum calculations and demands extremely fast connectivity between servers (25GbE and up).
- What a classic SAN still offers: it fully offloads disk operations from the server CPU and supports hardware snapshots at the array level.
Network infrastructure
The minimum network connection standard for a modern virtualization host is two 25 Gigabit Ethernet (25GbE) optical ports for redundancy. For hyperconverged infrastructure (HCI), where storage is distributed across the servers themselves, throughput should reach 100GbE.
Traffic isolation is a critical security and stability requirement. Virtual machine migration traffic (vMotion), storage system traffic, and guest VM traffic must be distributed across different physical ports or isolated VLANs. RDMA (RoCE v2) technology is essential to reduce latency when the hypervisor accesses network storage.
Comparison table: network port distribution on the host
| Traffic type | Recommended speed | Protocol / Features |
|---|---|---|
| Management | 1GbE / 10GbE | Complete isolation in a separate VLAN. |
| Guest VM Network | 10GbE / 25GbE | SR-IOV support for direct NIC passthrough to VMs. |
| Storage (Storage / vSAN) | 25GbE / 100GbE | Mandatory RDMA and Jumbo Frames (MTU 9000) support. |
| Migration (vMotion / Live Migration) | 25GbE | Minimal latency for seamless VM migration. |
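The table's minimums can be encoded as a quick sanity check against a planned host layout; the plan structure, port speeds, and VLAN numbers below are invented for illustration.

```python
# Minimum link speed (Gbit/s) and MTU per traffic class, per the table above.
REQUIREMENTS = {
    "management": (1, 1500),
    "guest":      (10, 1500),
    "storage":    (25, 9000),
    "migration":  (25, 1500),
}

def check_plan(plan):
    """plan: traffic class -> (speed_gbps, mtu, vlan_id). Returns problem list."""
    problems = []
    for traffic, (min_speed, min_mtu) in REQUIREMENTS.items():
        speed, mtu, _ = plan[traffic]
        if speed < min_speed:
            problems.append(f"{traffic}: {speed}GbE is below the {min_speed}GbE minimum")
        if mtu < min_mtu:
            problems.append(f"{traffic}: MTU {mtu} is below {min_mtu}")
    # Each traffic class must live in its own VLAN for isolation.
    vlans = [vlan for _, _, vlan in plan.values()]
    if len(set(vlans)) != len(vlans):
        problems.append("two traffic classes share a VLAN")
    return problems or ["plan meets the table's minimums"]

print(check_plan({
    "management": (1, 1500, 10),
    "guest":      (25, 1500, 20),
    "storage":    (25, 9000, 30),
    "migration":  (25, 1500, 40),
}))
```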
Practical sizing and resource calculation
Resource assessment for a virtualization server is always performed according to the “N+1” rule to ensure high availability (HA). This means that if one physical server fails, the remaining nodes in the cluster must have sufficient free resources (CPU and RAM) to run all the failed VMs.
An example of a balanced host configuration: 2x 32-core processors (64 physical cores, 128 threads in total), 1 TB of DDR5 RAM, 2x 25GbE plus 2x 10GbE network ports, and 4x 3.84 TB NVMe drives (for cache and data).
VDI scenarios additionally require server graphics accelerators (e.g., NVIDIA L40S) with vGPU support for hardware acceleration of the Windows desktop.
If you need to run 50 VMs with 16 GB of RAM each, the total allocation is 800 GB. With the 30% overhead for the hypervisor and HA headroom, the host needs about 1,040 GB, i.e., at least 1 TB of RAM (and closer to 1.5 TB per node in a two-host cluster, so that a single surviving host can absorb the full load).
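The same arithmetic written out, extended to show how the per-node requirement falls as the cluster grows; the node counts are illustrative.

```python
# 50 VMs x 16 GB each, 30% overhead, N+1 availability across the cluster.
vms, ram_per_vm_gb, overhead = 50, 16, 0.30
total_gb = vms * ram_per_vm_gb * (1 + overhead)   # 1,040 GB cluster-wide
for nodes in (2, 3, 4):
    # The surviving (nodes - 1) hosts must carry the whole load after one failure.
    print(f"{nodes} nodes: ~{total_gb / (nodes - 1):,.0f} GB RAM per node")
# 2 nodes: ~1,040 GB; 3 nodes: ~520 GB; 4 nodes: ~347 GB
```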
Conclusion
Selecting a server for virtualization requires strategic capacity planning with an eye on infrastructure growth. Trading the highest clock speeds for extreme core counts and maximum RAM provides flexibility in resource allocation. Building the cluster on NVMe drives paired with 25GbE/100GbE network interfaces keeps storage and network latency low enough that virtual machine performance is practically indistinguishable from bare metal.

