Virtualization Showdown: Benchmarking Single-Node Hypervisors



The question has been asked thousands of times: Which hypervisor is the “best”? And yet, the answer always varies. The truth is, there’s no one-size-fits-all solution — different environments have unique requirements and use cases that influence the ideal choice.

Rather than trying to declare a definitive winner, this post takes a data-driven approach. Using the same hardware across multiple single-node hypervisors, I’ve run a series of benchmarks to compare performance objectively. If you’re looking for raw numbers to help inform your decision, you’re in the right place.


For this comparison, we tested the following virtualization platforms:

Each of these hypervisors is tested using the default installation settings, with no additional tuning. The performance results reflect their out-of-the-box capabilities. Optimizing each hypervisor is beyond the scope of this post.


While the underlying hypervisor changes between tests, the guest virtual machine remains consistent: a CentOS 9 VM with 16 vCPUs, 32GB RAM, and a 30GB thin-provisioned XFS disk, installed from the ISO with default settings.
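
For the KVM-based platforms, a guest of this shape can be defined with virt-install. The command below is only a minimal sketch of an equivalent definition, not the exact provisioning steps used in this post; the VM name, disk options, and ISO path are placeholders.

# Hypothetical virt-install invocation approximating the benchmark guest
# (16 vCPUs, 32GB RAM, 30GB thin-provisioned disk); adjust paths for your host.
virt-install \
  --name centos9-bench \
  --vcpus 16 \
  --memory 32768 \
  --disk size=30,format=qcow2,sparse=yes \
  --cdrom /var/lib/libvirt/images/CentOS-Stream-9-20250124.0-x86_64-boot.iso \
  --os-variant centos-stream9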

After installing CentOS 9 with the Minimal Install option from CentOS-Stream-9-20250124.0-x86_64-boot.iso, the following commands were executed to set up the benchmarking environment:

# Enable the CRB and EPEL repositories
sudo dnf config-manager --set-enabled crb
sudo dnf install epel-release epel-next-release -y

# Install build tools and benchmark dependencies
sudo dnf groupinstall "Development Tools" -y
sudo dnf install -y tmux bc sqlite fio php php-cli php-json php-xml php-gd unzip \
php-process libaio-devel qemu-guest-agent libaio.i686 libaio-devel.i686 redis \
bison gawk zlib-devel perl pcre pcre-devel pcre2 pcre2-devel expat

# Install the Phoronix Test Suite from source and reboot
git clone https://github.com/phoronix-test-suite/phoronix-test-suite.git
cd ~/phoronix-test-suite && sudo ./install-sh
sudo init 6

# After the reboot, register the VM with the Phoromatic server
phoronix-test-suite phoromatic.connect benchmark.lab.mydc.dev:8010/9KOKNN

These steps ensure the system is properly configured for benchmarking across all tested hypervisors.
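
If you want to sanity-check a node before handing it over to Phoromatic, the Phoronix Test Suite can also be driven locally. The commands below are a quick example; pts/fio is just one representative profile, not necessarily part of the exact test list used here.

# Confirm the client reports the expected hardware, then run a single profile locally
phoronix-test-suite system-info
phoronix-test-suite benchmark pts/fio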


To ensure standardized benchmarking, we used Phoronix Test Suite 10.8.5 alongside Phoromatic, which managed the execution, storage, and tracking of all benchmarks across systems.
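
Phoromatic ships as part of the Phoronix Test Suite itself; a minimal sketch of the server side is shown below, assuming the default web and communication ports from user-config.xml are left in place.

# Start the Phoromatic server on the coordinating machine; clients then join
# with the "phoromatic.connect <host>:<port>/<account>" command shown above
phoronix-test-suite start-phoromatic-server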

The hardware used for these benchmarks is a Dell PowerEdge R640 server, equipped with:

  • 2× Intel Xeon Gold 6122 (20 cores, 1.80GHz)
  • 256GB RAM DDR4
  • X550/i350 network interfaces (2× 10GbE + 2× 1GbE)
  • 1× Quad M.2 NVMe to PCIe 4.0 x16 Adapter
  • 1× 512GB TEAMGROUP MP33 NVMe SSD
  • 1× 1TB TEAMGROUP MP33 NVMe SSD
  • 1× USB Flash Drive with rEFInd for booting OS from NVMe

This standardized hardware configuration ensures a fair comparison across all tested hypervisors.


The full benchmark results are available in the GitHub repository. To view them, download the MHTML file and open it in your web browser.


Disk Benchmark Results
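
fio is among the dependencies installed above; a representative invocation of the kind of small-block random-read workload these disk charts cover might look like the following. The parameters are illustrative only, not the exact PTS profile settings.

# Illustrative 4K random-read test against the guest's thin-provisioned disk
fio --name=randread-4k --ioengine=libaio --direct=1 --rw=randread \
    --bs=4k --size=4g --numjobs=4 --iodepth=32 \
    --runtime=60 --time_based --group_reporting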


Memory Benchmark Results


Network Benchmark Results

Note: Latency results are reported in microseconds; 107.036 µs = 0.107 ms.


Processor Benchmark Results


System Benchmark Results


Final Thoughts

Based on my interpretation of the benchmark data, OpenShift Virtualization, Hyper-V, and ESXi 8 are the top three performers.

This benchmarking exercise highlights the importance of exploring new and emerging technologies — sometimes the best-performing solutions are the ones flying under the radar.

Looking ahead, benchmarking for three-node hyperconverged clusters is in the planning stages. The next round of testing will include:

Stay tuned for more insights into how these platforms compare in multi-node deployments!

2 thoughts on “Virtualization Showdown: Benchmarking Single-Node Hypervisors”

  1. The performance difference between ESX and Proxmox does not match my experience. I suspect the main problem is that you don’t specify which virtual hardware you used for the network and disk controllers. Proxmox supports several options, and although it can run with VMware virtual hardware, the performance greatly suffers. That’s the only explanation I can see for the degree of difference reported.

    1. I used the default installation settings for each hypervisor, both during the initial setup and when creating virtual machines within each platform.

      As for Proxmox, my guess is that it may not be as finely tuned out of the box as some of the other hypervisors. This might explain its performance in the benchmarks; more tuning may be needed to reach its full potential.

      Meanwhile, I use Proxmox on my main virtualization host and don’t plan on switching anytime soon.
