Deploying Nutanix CE 5.18 Nested on VMware Fusion or Workstation
I deployed the newly released Nutanix Community Edition (CE) 5.18 as a Nested VM on VMware Fusion to have a quick first look at this long anticipated CE upgrade.
In this post, I show you how I performed this deployment.
In case you want to learn more about Nutanix CE, have a look at my previously shared post: Now Available: Nutanix Community Edition 5.18.
Note that the steps below are based on VMware Fusion but the same steps apply in case you are using VMware Workstation on Windows.
Downloading CE 5.18 ISO
The first step is to download the Community Edition installer ISO image from the Nutanix Next Community forum post: https://next.nutanix.com/discussion-forum-14/download-community-edition-5-18-38417. This ISO image is itself a key new feature of CE 5.18: it brings an enhanced disk selection wizard, UEFI and Legacy boot support, and ESXi installation (with a user-supplied ESXi ISO hosted on an accessible internal server). You will need a Next Community account to access the forum post, but not to worry: registration is FREE.
After having downloaded the ISO image you can continue with creating the Virtual Machine in VMware Fusion.
Virtual Machine Configuration in VMware Fusion
I have used VMware Fusion 10 Pro for this Nutanix CE 5.18 deployment, which is an older version of the product. VMware Fusion makes it quite easy to quickly perform Virtual Machine deployments – all from within the comfort of your MacOS desktop.
In case you are interested, my hardware is the following: Mac Pro 2009, 6-core Intel Xeon 3.46GHz, 48GB RAM, 500GB M.2 NVMe SSD (PCIe), 500GB SSD & 1TB HDD. Yes, it’s an old one but works just fine. 😉
Also, before continuing, check whether your system meets the requirements for running CE 5.18: https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Community-Edition-Getting-Started:Nutanix-Community-Edition-Getting-Started.
Now, on with the steps for creating the Virtual Machine:
- Create a new Virtual Machine, select the “install from disk or image” Installation Method and click on “Continue”
- Select “Use another disk or image…” and browse to select the earlier downloaded “ce-2020.09.16.iso” ISO Installer image
- Choose “Linux” and “CentOS 7 64-bit” as the “Operating System”
- Click on “Customize Settings”, enter a name for this new CE 5.18 VM and click “Save”
- The CE 5.18 VM is now created but needs to be reconfigured to meet the requirements
- In the VM Settings, select “Processors & Memory” and choose a minimum of 4 processors and 16GB RAM (32GB is recommended)
- Tick the checkbox “Enable hypervisor applications in this virtual machine” ensuring that Intel VT-x is enabled
- Next, create the storage: select the “Hard Disk (SCSI)”, resize this disk to 8GB and set it to use SATA instead of SCSI
- Leave the checkbox “Pre-allocate disk space” unchecked to have this disk Thin Provisioned thus saving valuable disk space on your Mac
- Click on “Add Device” and create two more disks using the following minimum specifications: 200GB SCSI & 500GB SCSI (both also Thin Provisioned)
- Select the “Network Adapter” and configure the adapter to be on “Bridged Networking – Autodetect”
- Select “Advanced” and set the “Firmware type” to “UEFI”
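For reference, the Fusion GUI settings above end up in the VM’s .vmx file. Below is a hypothetical excerpt showing only the entries relevant to this walkthrough; the disk file names are illustrative and your generated .vmx will contain many more lines:

```
numvcpus = "4"
memsize = "16384"                    # 16GB; 32GB recommended
vhv.enable = "TRUE"                  # expose Intel VT-x to the nested AHV host
firmware = "efi"                     # UEFI boot
ethernet0.connectionType = "bridged" # Bridged Networking – Autodetect
sata0:0.fileName = "boot-8GB.vmdk"   # 8GB hypervisor boot disk on SATA
scsi0:0.fileName = "cvm-200GB.vmdk"  # 200GB CVM boot disk
scsi0:1.fileName = "data-500GB.vmdk" # 500GB data disk
```

This can be handy for double-checking that VT-x passthrough (vhv.enable) and UEFI firmware really made it into the configuration before powering on.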
That’s it! Now you have completed the VM reconfiguration allowing you to proceed with the Nutanix CE installation process.
Nutanix CE Installation Process
The installation process starts as soon as you power on your newly configured CE 5.18 VMware Fusion VM. In case your VM does not start up, check if you are using UEFI boot (see above) or your “ce-2020.09.16.iso” file for any corruption; MD5 checksum is available on the Next Community forum post.
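If you suspect ISO corruption, it is worth verifying the download before retrying. A minimal POSIX shell sketch; the helper name verify_md5 is my own, and you paste the checksum from the forum post as the second argument (on MacOS, replace md5sum "$1" with md5 -q "$1"):

```shell
#!/bin/sh
# verify_md5: compare a file's MD5 against an expected checksum value.
# Usage: verify_md5 ce-2020.09.16.iso <md5-from-forum-post>
verify_md5() {
    actual=$(md5sum "$1" | awk '{print $1}')
    if [ "$actual" = "$2" ]; then
        echo "OK: checksum matches"
    else
        echo "FAIL: expected $2, got $actual"
        return 1
    fi
}
```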
- In case this is the first time that you are using VMware Fusion with bridged networking, you will have to accept a security exception on MacOS allowing VMware Fusion to make changes to your machine related to the required bridged networking.
- After the VM has powered on, the installer starts with automatic minimum system requirements checks
- Next up is the main configuration screen with the next key new feature: the ability to utilize VMware ESXi as the Hypervisor!!
- I chose the Nutanix AHV Hypervisor for this quick first look
- Select the “Hypervisor boot” (8GB), “CVM boot” (min. 200GB) and “Data” (min. 500GB) disks by assigning the “H”, “C” and “D” characters
- Enter the IP addresses for the Hypervisor and CVM (remember to keep these in the same subnet, just as with Enterprise Production deployments)
- Choose whether you want to immediately create a single-node cluster and, if so, provide your DNS server IP address (this is needed to connect to next.nutanix.com to log in and license the new cluster after the installation has fully completed)
- If you want to create a 3-node cluster, leave this option unchecked as you will need to deploy additional CE 5.18 VMs and create the cluster manually afterwards
- On the next page, you will have to scroll down the entire EULA and place your “x” to accept
- Choose “Start” to start the actual installation process
- When the installation is completed, enter “Y” to reboot the VM
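For the 3-node case, the manual cluster creation afterwards happens from one of the CVMs. A hedged sketch of what that looks like (SSH to a CVM as the “nutanix” user; the IP addresses below are placeholders for your own three CVM IPs and your DNS server):

```shell
# On one CVM, create the cluster from the three CVM IP addresses:
cluster -s 10.0.0.11,10.0.0.12,10.0.0.13 create

# Afterwards, add a working DNS server so the cluster can reach
# next.nutanix.com for login and licensing:
ncli cluster add-to-name-servers servers=10.0.0.1

# Check that all services come up on every node:
cluster status
```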
This concludes the installation part and after the CE 5.18 VM has been rebooted and all CVM processes have been started, you are good to go with navigating to the Nutanix Prism GUI using your MacOS desktop.
Note that MacOS will ask for your administrator password upon booting up the VM, allowing the bridged networking to be set up properly.
First Look at the Nutanix Prism Dashboard
When you first access the Nutanix Prism web GUI, using the CVM IP address, you are asked to accept the EULA.
Afterwards you are prompted with a login screen for which you need to use the default values “admin” and “Nutanix/4u”.
You are required to change the default password, after which you have to log back into Prism.
In the next screen you need to provide your Nutanix Next Community login credentials, via which Nutanix keeps track of all these Community Edition installs. Note that if your previously provided DNS IP address is not working properly, you will get an error here.
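If you do run into that DNS error, you can SSH into the CVM and add an extra name server from the command line. A sketch with a placeholder IP (ncli is available on the CVM):

```shell
# SSH to the CVM as user "nutanix", then add a reachable DNS server;
# the IP below is a placeholder for your own gateway or DNS host:
ncli cluster add-to-name-servers servers=192.168.1.1

# Verify the configured name servers:
ncli cluster get-name-servers
```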
And now the BIG moment has arrived as you will be presented with the Nutanix Prism Elements dashboard of your newly deployed Community Edition 5.18 AHV (single node) cluster!
As I have created a single node cluster with minimal disk setup, I will not have a “Green” dashboard as can be seen from my Prism screenshot; this cluster does not have any redundancy but that’s not the point for my first quick look.
The new ISO installer works perfectly and feels more mature than the old Community Edition installation process. There is also no longer a need to create a .vmdk file based on .img files downloaded from the Next Community forum. This is definitely a big win!
The only real drawback that I encountered was the slow performance of my new CE 5.18 deployment. For example, it took quite some time before the CVM was booted up with all services up & running. But this was to be expected, as nested virtualization is always slower than running on bare metal.
My thanks to the entire Nutanix CE team for delivering this great new CE 5.18 platform!
In case you have any questions on Nutanix Community Edition, leave a comment below or reach out in the Nutanix Next Community Forum.
— Happy testing in your Home Lab!
13 Replies to “Nutanix Community Edition 5.18 Nested on VMware Fusion or Workstation”
Two bits of feedback. I couldn’t get the install to work with less than 32GB of RAM. I got the following error when I tried with 16GB: “cannot set up guest memory pc.ram cannot allocate memory”. After install, I was unable to access the CVM using a browser. During setup, I used my Pi-Hole’s (DNS ad blocker) IP address. I had to SSH into the CVM and add an additional DNS server using “ncli cluster add-to-name-servers servers=dns_server”, where dns_server was my local internet gateway.
Thank you for your taking the time to provide your feedback on my post. Much appreciated! 🙂
Very interesting that you received that RAM allocation error during installation using less than 32GB RAM: I have installed and am still running multiple single-node nested CE clusters using only 20GB RAM. Were or are you able to downsize the AHV Host and CVM RAM allocations after deployment?
Regarding your Pi-Hole: I have that running in my home as well. My main DNS is my Pi-Hole server, which I have set up in my LAN router. However, I do have a Windows Server VM that is specifically routing my local (home-lab) DNS queries. When installing CE 5.18, I used that Windows Server VM IP address as the DNS server instead of the Pi-Hole.
I can also confirm that I received this same exact error installing with the minimum recommended 16GB of memory.
Why not choose SATA for the 200GB and 500GB disks?
Honestly, I took over the same configuration settings for the Hot and Cold Tier Hard Disks as with the previous version of Community Edition in a nested configuration.
However, it is well worth trying it with SATA instead of SCSI, because in a bare-metal installation most of us use SATA hard disks.
For the fun of it, I will try this out coming weekend and will let you know the results. 😉
I know the installation is possible (I have it running on my laptop with SATA defined disks). What I would like to know is if there is a noticeable performance difference between SATA and SCSI. Is that something you can test?
Have you installed the CE with ESXi instead of AHV?
Thanks for your comment!
No, I have not yet used the new VMware ESXi Hypervisor capability of CE.
But I am planning to check this out coming weekend after which I will share my experience with you and the others.
VMware ESXi deployment done and shared via another post: Nutanix Community Edition 5.18 with VMware ESXi 7.0U1 nested in VMware Fusion or Workstation.
Have you tried to configure a 3-node cluster? I am stuck because the Host could not ping the CVM, any idea?
Thanks for your comment!
I have not yet deployed a 3-node CE cluster, but:
Have you been able to confirm whether your CVM is up and running on your host(s)?
Log in to your host and run the command “virsh list --all” to see whether your CVM is in the “running” state.
If not, you can power it on by issuing the “virsh start [CVM Name]” command.
If the CVM is (already) running then you can check whether your CVM has the correct network settings.
Via the Host SSH session, connect to your CVM using the internal network.
When logged in on the CVM, run the command “ifconfig eth0” to check the CVM network settings.
You can also try a ping command to the Gateway, which you have set during deployment.
Also, are you experiencing this issue with all three of your separate CE Hosts/CVMs?