I have been asked many times in what specific order a Nutanix cluster should be upgraded. This covers all software components that make up a Nutanix cluster, such as Nutanix AOS, AHV, NCC & Foundation, but also VMware ESXi & Hyper-V, and even the BIOS, firmware and other hardware device drivers. Well, I thought it was about time to share my way of working with you in this blog post. I hope it helps you with your own Nutanix cluster upgrade activities.
Data Resiliency Not Possible & Adding New Disks to Nutanix Nodes
One of the great benefits of the Nutanix Hyperconverged Infrastructure (‘HCI’) platform is that you can easily expand your Nutanix cluster with new nodes when you need more CPU, RAM or storage resources. But what if you find yourself quickly running out of storage space on your Nutanix cluster without needing more CPU or RAM? Even worse, what if you’ve let this storage problem linger for too long?
My Nutanix Field Deployment Process
I have been doing field deployments of Nutanix nodes for a couple of years now. My very first deployment was back in 2018; it was successful but did not go entirely smoothly. Practice makes perfect, though, and these deployments have since become a walk in the park. It’s all good fun, and time has really flown by. I thought it was about time to share my way of working with you, which hopefully helps you on your own path as a Nutanix engineer.
Deploying Nutanix CE 5.18 (AHV) nested in a VMware ESXi 7.0U1 Host (Intel NUC 10)
In this post, I will show you how to get CE 5.18 deployed as a nested VM running on a VMware ESXi 7.0U1 host. The ESXi host I used is an Intel NUC 10 (NUC10i7FNH2), configured with 64GB of RAM and a 1TB NVMe M.2 SSD (Samsung 970 EVO).
Nutanix Medusa Error: Cassandra Gossip Fails
I want to share an interesting issue that I came across while expanding an existing Nutanix cluster with a new node. All installation tasks completed successfully, until the Medusa service on the new CVM threw an error stating “Cassandra gossip failed”.