Building a Hyperconverged Home Lab using Nutanix Community Edition 2.1

Bare Metal Nutanix Community Edition

Introduction

In this guide, I’ll walk you through the process of building your own three-node Nutanix CE 2.1 cluster, sharing the lessons I’ve learned along the way. Whether you’re a seasoned home lab enthusiast looking to explore hyperconverged infrastructure or an IT professional wanting to expand your skillset, this step-by-step approach will help you create a powerful learning environment that bridges the gap between theory and practical application.

This guide follows the steps outlined in the official Nutanix document: Getting Started with Community Edition. While I’ll be expanding on certain aspects based on my personal experience, the core installation and configuration process adheres to Nutanix’s official recommendations.


What IS Nutanix Community Edition?

Nutanix Community Edition (CE) is a free, community-supported version of the Nutanix Cloud Infrastructure (NCI), released as a way for the community to learn and experiment with Nutanix technology.

Unlike many “community editions” of enterprise software that are heavily limited, Nutanix CE provides a remarkably complete experience. With sufficient hardware resources, CE can even serve as a platform for other Nutanix products like Nutanix Unified Storage (Files, Objects, and Volumes), Prism Central, and Nutanix Kubernetes Engine (NKE) in a limited capacity for non-commercial use.

At its core, Nutanix CE delivers four primary capabilities that form the foundation of hyperconverged infrastructure:

  • Virtualization through Nutanix AHV: A native hypervisor based on the KVM platform, offering VM management capabilities without additional licensing costs that would be associated with other hypervisors.
  • Software-defined storage through Nutanix AOS: The distributed storage fabric that pools your storage resources across nodes and provides enterprise features like snapshots, compression, and thin provisioning.
  • Infrastructure management through Prism: A simple yet powerful web-based management interface that provides a single pane of glass for managing your entire infrastructure.
  • Built-in networking and security: Integrated networking capabilities for VM connectivity and basic security features.

What Nutanix Community Edition is NOT!

Nutanix CE is explicitly designed for non-commercial use cases like testing, learning, and development. It lacks the performance optimizations, validated hardware configurations, and enterprise support necessary for production workloads.

  • Installation process: CE requires manual installation rather than using Nutanix Foundation for automated deployment.
  • Node limitations: CE clusters are limited to one, three, or four nodes and can’t be deployed in the public cloud.
  • Support: CE is supported only by the Nutanix user community, not by Nutanix enterprise support.
  • Hardware compatibility: While commercial Nutanix platforms typically run on specific validated hardware configurations, CE offers broader hardware compatibility at the expense of optimized performance.
  • Connectivity requirements: CE requires an internet connection and a Nutanix Community account for validation.

Hardware Recommendations Based on Experience

While Nutanix provides official hardware recommendations in their Recommended Hardware for Community Edition documentation, I wanted to expand upon those recommendations. After using various versions of CE for 3 years, building and rebuilding clusters for tutorials, troubleshooting issues, and reading countless forum posts, I’ve found that most issues users encounter are related to hardware limitations, specifically around disk read/write speeds during installation, maintenance tasks, and upgrades.

CE requires separate disks for the hypervisor boot, the CVM, and data storage; I strongly recommend using at least SSDs for each of these disks to avoid I/O-related issues. Additionally, equip each server with at least 64GB of RAM, or at minimum ensure that one node has 64GB of RAM if you plan on using Prism Central.

Although Nutanix explicitly doesn’t recommend NVMe drives, I’ve successfully implemented them in my Dell R640 servers. The NVMe drives I use are not bootable from the system BIOS; however, you can still install a bootable OS to one of the NVMe drives and chain-load it with a bootloader installed on another device. Options include the internal USB port, an SD card, or even an HDD/SSD dedicated solely to bootloading. In this guide I use rEFInd, though the Clover bootloader or GRUB will also work.

In my testing, 1GbE networking worked reliably without issues, though I’ve since upgraded to 10GbE for improved performance. For processors, I recommend at least 8 cores per node to prevent CPU bottlenecks and support reasonable VM workloads. Regarding storage controllers, I’ve successfully used the PERC H730 in HBA mode, as well as the HBA330 and onboard AHCI SATA controllers, all of which performed without problems in my CE deployments.


Planning Your Deployment

Before beginning the installation process, proper planning is crucial for a smooth deployment. This section will help you organize all the necessary information you’ll need throughout the installation and configuration process.

Below is a diagram of my home lab environment for visual reference:


Network Planning Table

Use the table below to document your network configuration details. Having this information prepared ahead of time will save you troubleshooting headaches later.

Server                   Node1                      Node2                      Node3
Node Hostname            node1.labrepo.com          node2.labrepo.com          node3.labrepo.com
CVM Hostname             cvm1.labrepo.com           cvm2.labrepo.com           cvm3.labrepo.com
Host IP Address (AHV)    192.168.10.20              192.168.10.22              192.168.10.24
CVM IP Address           192.168.10.21              192.168.10.23              192.168.10.25
Subnet Mask              255.255.254.0              255.255.254.0              255.255.254.0
Gateway                  192.168.10.1               192.168.10.1               192.168.10.1
iDRAC Hostname           idrac-node1.labrepo.com    idrac-node2.labrepo.com    idrac-node3.labrepo.com
iDRAC IP Address         192.168.10.121             192.168.10.122             192.168.10.123

Cluster Configuration Details

Document these additional cluster-specific settings:

Setting                     Value
Cluster Name                ntnx
Cluster Hostname            ntnx.labrepo.com
Cluster Virtual IP          192.168.10.27
Data Services IP            192.168.10.26
DNS Servers (Cloudflare)    1.1.1.1, 1.0.0.1
NTP Servers (Cloudflare)    162.159.200.1, 162.159.200.123

DNS Entries

Create these entries for the Prism Element cluster:

FQDN                         IP Address
prism-element.labrepo.com    192.168.10.21
prism-element.labrepo.com    192.168.10.23
prism-element.labrepo.com    192.168.10.25
prism-element.labrepo.com    192.168.10.27
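
Once these records are created, a quick check from your workstation confirms the round-robin entry resolves before you rely on it later. This is a minimal sketch using dig (nslookup works just as well), assuming your workstation already uses the DNS server that hosts the labrepo.com zone:

# Query the round-robin record; it should return 192.168.10.21, .23, .25 and .27
# (the order will vary between queries).
dig +short prism-element.labrepo.com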

Default Credentials Reference

Keep track of the default credentials that you’ll need to change during setup:

Component            Username    Default Password
AHV Host             root        nutanix/4u
Controller VM        nutanix     nutanix/4u
Prism Web Console    admin       Nutanix/4u

Hardware Documentation

Record the specific hardware configurations for reference:

Component                    Node1 / Node2 / Node3 (identical configuration)
Server Model                 PowerEdge R640
CPU Model                    Intel Xeon Gold 6122
CPU Sockets/Cores/Threads    2/40/80
RAM Amount                   256 GB (8x 32GB)
Boot Device                  SD Card (rEFInd Bootloader)
Hypervisor Device            TEAMGROUP 500GB NVMe
CVM Boot Device              TEAMGROUP 1TB NVMe
Data Storage Device(s)       TEAMGROUP 1TB NVMe
NIC Model/Speed              Intel I550 4P – 10GbE
PCIe Slot 1                  Quad M.2 NVMe Adapter x16
PCIe Slot 2                  NVIDIA Tesla P4 8GB

Pre-Installation Steps

This section will guide you through all necessary preparations before beginning the actual installation process.

Creating Nutanix Community Account and Registering for CE

  • Register for Community Edition on the Nutanix website, then check your e-mail.
  • Follow the steps to create a Nutanix account.
  • Create a .NEXT community account and download the ISO.

Downloading CE Installation ISO

Log in to the next.nutanix.com community page:

Installer ISO
https://download.nutanix.com/ce/2024.08.19/phoenix.x86_64-fnd_5.6.1_patch-aos_6.8.1_ga.iso
MD5: 0e7c800b46986c84d4fcdae92e73dc53
  • Save the Installer ISO to a known location.
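
Before writing the ISO to media, it is worth verifying that the download is not corrupted. A minimal sketch for Linux (on macOS use md5 instead of md5sum), assuming the ISO is in your current directory:

# Compare the output against the MD5 value published above.
md5sum phoenix.x86_64-fnd_5.6.1_patch-aos_6.8.1_ga.iso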

Preparing Boot Media

Before installing Nutanix CE, you’ll need to either create bootable installation media or use virtual media functionality. I’ll cover both approaches in detail.

Using Etcher to Create Bootable USB Media

Etcher (now known as balenaEtcher) is a free, open-source utility for creating bootable USB drives from ISO files. It’s an excellent choice because it works consistently across Windows, macOS, and Linux, making this guide more accessible regardless of your operating system.

  • Download and install balenaEtcher from etcher.io
  • Insert a USB drive (8GB+) into your computer
  • In Etcher, click “Flash from file” and select the Nutanix CE ISO
  • Select your USB drive as the target
  • Click “Flash!” and wait for the process to complete
  • The drive is now ready to use for Nutanix CE installation.
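
If you prefer the command line to Etcher, a raw write with dd on Linux achieves the same result. This is only a sketch: /dev/sdX is a placeholder, so confirm the correct device name with lsblk first, because dd overwrites the target without asking.

# WARNING: replace /dev/sdX with your actual USB device (check with lsblk); all data on it will be destroyed.
sudo dd if=phoenix.x86_64-fnd_5.6.1_patch-aos_6.8.1_ga.iso of=/dev/sdX bs=4M status=progress conv=fsync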

Using iDRAC Virtual Media

If your servers have Dell iDRAC Enterprise, you can bypass physical media entirely by mounting the ISO directly. Here are two efficient methods: direct ISO mounting via the web browser, or mounting from a remote file share.

Method 1: Direct ISO Mounting via Web Browser

  1. Access the iDRAC web interface and launch the Virtual Console
  2. From the Virtual Media menu, select “Map CD/DVD”
  3. Browse to your locally stored Nutanix CE ISO file and click “Map Device”

Method 2: Remote File Share Mounting

  1. Upload the Nutanix CE ISO to an SMB or NFS share on your network
  2. In the iDRAC web interface, navigate to “Configuration” → “Virtual Media”
  3. Select “Connect Remote File Share” and enter your share details:
    • Example: nfs.labrepo.com:/volume1/ISOs/Nutanix/phoenix.x86_64-fnd_5.6.1_patch-aos_6.8.1_ga.iso
  4. Click “Connect” to mount the share
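
The same remote file share mount can also be scripted with racadm, which is convenient when preparing all three nodes. A hedged sketch, assuming racadm is installed on your workstation and that the iDRAC credentials and NFS path are replaced with your own:

# Attach the NFS-hosted ISO as virtual media on node1's iDRAC (-c = connect, -l = image location).
racadm -r idrac-node1.labrepo.com -u root -p '<idrac-password>' \
  remoteimage -c -l nfs.labrepo.com:/volume1/ISOs/Nutanix/phoenix.x86_64-fnd_5.6.1_patch-aos_6.8.1_ga.iso

# Check that the image is attached; detach it later with -d.
racadm -r idrac-node1.labrepo.com -u root -p '<idrac-password>' remoteimage -s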

Setting “First Boot Device” via iDRAC Web GUI:

  1. In the iDRAC web interface, navigate to “Configuration” → “System Settings”
  2. Go to “Hardware Settings”
  3. Find and select “First Boot Device”
  4. Choose “Virtual CD/DVD/ISO” from the options
  5. Set “Boot Once” to “Enabled”
  6. Click “Apply”
  7. The server will boot from the virtual media on the next reboot
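
These boot settings can also be applied with racadm rather than the web GUI. A sketch under the assumption that your iDRAC firmware exposes the attributes below (names can differ slightly between iDRAC releases):

# Boot once from the attached virtual CD/DVD, then power-cycle the server to start the installer.
racadm -r idrac-node1.labrepo.com -u root -p '<idrac-password>' set iDRAC.VirtualMedia.BootOnce Enabled
racadm -r idrac-node1.labrepo.com -u root -p '<idrac-password>' set iDRAC.ServerBoot.FirstBootDevice VCD-DVD
racadm -r idrac-node1.labrepo.com -u root -p '<idrac-password>' serveraction powercycle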

BIOS/UEFI configuration

The following settings are based on the Nutanix “DELL PowerEdge – Recommended BIOS Settings” to achieve maximum performance and compatibility.

BIOS settings for Intel platforms

Processor settings
  • Virtualization Technology: Enabled

SATA settings
  • Embedded SATA: Off

Boot settings
  • Boot Mode: UEFI

Integrated devices
  • I/OAT DMA Engine: Enabled
  • SR-IOV Global Enable: Enabled
  • SR-IOV Individual NIC setting: Enabled

System profile settings
  • System Profile: Custom
  • CPU Power Management: Max Performance
  • Memory Frequency: Maximum Performance
  • Turbo Boost: Enabled
  • C1E: Disabled
  • C States: Disabled
  • Memory Patrol Scrub: Standard
  • Uncore Frequency: Maximum
  • Energy Efficient Policy: Performance
  • Monitor/Mwait: Enabled
  • CPU Interconnect Link Power Management: Disabled
  • PCIe ASPM L1 Link Power Management: Disabled
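
On the R640 you can stage most of these values with racadm instead of clicking through System Setup on each node. Treat this as a sketch only: the attribute names below are my assumptions for 14G PowerEdge BIOS and should be confirmed with "racadm get BIOS" before use.

# Queue a few of the recommended BIOS changes; they are applied by a configuration job at the next reboot.
racadm -r idrac-node1.labrepo.com -u root -p '<idrac-password>' set BIOS.ProcSettings.ProcVirtualization Enabled
racadm -r idrac-node1.labrepo.com -u root -p '<idrac-password>' set BIOS.SysProfileSettings.SysProfile Custom
racadm -r idrac-node1.labrepo.com -u root -p '<idrac-password>' set BIOS.SysProfileSettings.ProcC1E Disabled
racadm -r idrac-node1.labrepo.com -u root -p '<idrac-password>' set BIOS.SysProfileSettings.ProcCStates Disabled

# Create the BIOS configuration job and reboot so the changes take effect.
racadm -r idrac-node1.labrepo.com -u root -p '<idrac-password>' jobqueue create BIOS.Setup.1-1 -r pwrcycle -s TIME_NOW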

Installing Nutanix CE on Each Node

We’ll start by installing CE on the first node, which we’ll later use to create the cluster.

  • Boot from installation media (USB or iDRAC virtual media)

The installer may appear to hang at “INFO Getting AOS version from /mnt/iso/images/svm/nutanix_installer_package.tar.p00”; this step can take 10-15+ minutes when using local media, or 20-30+ minutes when using virtual media.

  • When the Nutanix installer appears, use the Tab key to navigate to disk selection
  • Select disks by pressing:
    • ‘h’ for hypervisor boot disk (smallest disk)
    • ‘c’ for Controller VM (CVM) disk
    • ‘d’ for data disk(s)
  • Enter network information for your node
  • Tab to “Next Page” and press Enter
  • Accept the EULA and select “Start”
  • The installation can take 20-45+ minutes.
  • When prompted, remove installation media and reboot
  • Disconnect Remote File Share before restarting or rEFInd will hang.
  • Using rEFInd is only necessary if your BIOS cannot detect your NVMe drives as bootable devices. When properly configured, rEFInd automatically detects any partitions with EFI bootloaders, including those on NVMe drives. Simply set your rEFInd device as the primary boot option in your BIOS, and it will handle locating and booting from the correct NVMe partition.
  • Important: After the reboot, wait 15-20 minutes for the automated configuration of AHV and the CVM to complete. During this initial boot, both the hypervisor and Controller VM undergo several automated configuration steps.
  • To determine when the setup process is finished, log into the AHV node and run the top command (see the sketch below). Look for the QEMU process to drop below 100% CPU utilization and stay there for several consecutive seconds, which indicates the configuration is complete and you can proceed with the next steps.
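
A minimal way to keep an eye on that from the AHV host, assuming the CVM's backing process name contains "qemu" (it does on the CE builds I've used):

# Refresh every 5 seconds; once the qemu-kvm CPU usage stays well below 100%,
# the first-boot configuration has settled and you can continue.
watch -n 5 'top -bn1 | grep -i qemu'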

Log in to node1 as root using the iDRAC Console or SSH:

ssh root@192.168.10.20

Change the root password from “nutanix/4u”:

passwd

Repeat the steps in this section on your remaining nodes before proceeding to the next section.


Creating and Configuring the Cluster

Connect to the first CVM via SSH using the credentials: username nutanix, password nutanix/4u

ssh nutanix@192.168.10.21

To create the Nutanix cluster, run the following command, replacing the placeholder IP addresses with your actual CVM IP addresses. Include all CVM IP addresses in this command, separated by commas.

cluster -s 192.168.10.21,192.168.10.23,192.168.10.25 create

After several minutes, the cluster installation will complete and display a “Success!” message.
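
Before moving on, you can confirm the cluster services are healthy from any CVM:

# All services on every CVM should report a state of "UP".
cluster status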


Post-Installation Configuration

Access the Prism Element web console by navigating to https://<any_cvm_ip>:9440 in your browser (for example, https://192.168.10.21:9440) and accept the certificate security warning when prompted.

Log in using admin as the username and Nutanix/4u as the password.

Create a new password for the cluster admin account when prompted, then log in again using your new credentials.

When prompted, enter your Nutanix Community account username and password to activate the cluster.

You should now see the cluster web dashboard.

Navigate to Home > Settings > Cluster Details, then fill in the values for Cluster Name, FQDN, Virtual IP, and iSCSI Data Services IP. Click Save when finished.
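
If you prefer the CLI, the same values can be set with ncli from any CVM. Treat the parameter names below as assumptions from memory; they can vary between AOS releases, so check the ncli cluster help output first.

# Cluster name, virtual IP, and iSCSI data services IP (values taken from the planning table above).
ncli cluster edit-params new-name=ntnx
ncli cluster set-external-ip-address external-ip-address=192.168.10.27
ncli cluster edit-params external-data-services-ip-address=192.168.10.26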


Updating AOS and AHV

Navigate to Home > LCM, then run the NCC health check before performing any updates.

Select “All Checks” and click Run. You can monitor the health check progress in the Recent Tasks panel.

Select the Inventory tab and click Perform Inventory.

Select “Enable Auto Inventory”, “Enable Auto Update for NCC”, and click Proceed.

Click “Software” and then “View Upgrade Plan” to view all available updates and follow the prompts to update the cluster.


Configure a Virtual Network for VMs

Navigate to Home > Settings > Network Configuration, then click “Create Subnet”.

Type in a “Subnet Name” and “VLAN ID” (0 for the native VLAN), then click Save.
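
The same subnet can be created from any CVM with acli. A sketch, assuming a network named vm-network on the native VLAN (verify the exact options with acli net.create help on your AOS build):

# Create an AHV network called "vm-network" tagged with VLAN 0 (the native VLAN).
acli net.create vm-network vlan=0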


Create a Virtual Machine

First we need to download an ISO or disk image. Navigate to Home > Settings > Image Configuration, then click “Upload Image”.

Enter the URL of the image or Upload a file and click Save.

In this example, AOS pulls the latest CentOS Stream qcow2 from a URL and stores it on the Nutanix cluster for future use.

Once the image has been downloaded it will populate in the Image Configuration window.

Navigate to Home > VM then click “Create VM”.

Fill in the Name, vCPUs, Cores per vCPU, Memory, and Boot Configuration (Legacy BIOS), then click “Add New Disk”.

Under Operation, select “Clone from Image Service”, choose the correct Image, and click Add.

After the Disk is added, click “Add New NIC”.

Select the correct Subnet Name and click “Add”.

For disk images, you can optionally supply a Custom Script containing cloud-init YAML, then click Save.

Nutanix Cloud-Init Examples: https://www.nutanix.dev/lab_content/cloud-init-lab/contents/lab.html

#cloud-config
users:
  - name: nutanix
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    lock-passwd: false
    passwd: $6$4guEcDvX$HBHMFKXp4x/Eutj0OW5JGC6f1toudbYs.q.WkvXGbUxUTzNcHawKRRwrPehIxSXHVc70jFOp3yb8yZgjGUuET.

# note: the encoded password hash above is "nutanix/4u" (without the quotes)

After the VM is created, right-click the VM and select “Power on”.

Right-click and select “Launch Console”.

Log in with the credentials specified in the cloud-init. Username: nutanix and password: nutanix/4u

Optional: Run a benchmark on the newly created VM disk. Benchmarking Disks On Nutanix Community Edition 2.1

sudo dnf install fio -y

sudo fio --name=balanced_test --filename=/home/nutanix/test1 --size=5G --rw=randrw \
    --bs=8k --direct=1 --numjobs=8 --ioengine=libaio --iodepth=16 \
    --group_reporting --rwmixread=70 --runtime=120 --startdelay=60 \
    | grep -E 'read:| write:' && sudo rm -f /home/nutanix/test1

Final Thoughts

Building a Hyperconverged Home Lab using Nutanix Community Edition 2.1 provides an excellent platform for exploring enterprise-grade hyperconverged infrastructure without the associated licensing costs. While there are some challenges to overcome, particularly around hardware compatibility and initial configuration, the end result is a powerful home lab environment that resembles official Nutanix deployments. I hope this guide helps you on your journey, and I look forward to hearing about your own Nutanix CE adventures!
