
NSX-V, vRealize, All-Flash VSAN Homelab

As VMware adds more products to its suite it becomes more difficult to run a suitable homelab to host them. In this post I aim to describe the steps I followed to create my current homelab.

High Level Hosting Requirements

Some core products will be permanently available in the homelab.

vCenter Appliance (Tiny) 2 vCPU, 8GB vRAM, 120GB vHDD
Windows, DNS, Active Directory 2 vCPU, 4GB vRAM, 100GB vHDD
Ansible 1 vCPU, 2GB vRAM, 16GB vHDD
Core Total 5 vCPU, 14GB vRAM, 236GB vHDD

Other products which will be transient in the environment.

NSX Manager 4 vCPU, 16GB vRAM, 60GB vHDD
NSX Controller 4 vCPU, 4GB vRAM, 20GB vHDD
NSX Total 8 vCPU, 20GB vRAM, 80GB vHDD

vRealize Operations (Extra Small) 2 vCPU, 8GB vRAM, 122GB vHDD
vRealize Log Insight (Extra Small) 2 vCPU, 4GB vRAM, 144GB vHDD
Management Total 4 vCPU, 12GB vRAM, 266GB vHDD

vRealize Orchestrator 2 vCPU, 4GB vRAM, 12GB vHDD
vRealize Automation (Small) 4 vCPU, 18GB vRAM, 60GB vHDD
vRealize Automation IaaS 2 vCPU, 8GB vRAM, 30GB vHDD
vRA Total 8 vCPU, 30GB vRAM, 102GB vHDD

Physical Hosts Design

The requirements are to concurrently host VMs with around 54GB of vRAM, 16 vCPU and 600GB of vHDD. As they will have a single user (me) I expect to get away with a high CPU overcommit ratio, however memory will be committed by the applications as they run.

On researching what low power, small footprint, All-Flash VSAN capable hosts are available I came across William Lam's article on a 6th generation NUC homelab. These seemed to meet most of the requirements. As I'm potentially CPU constrained I chose the best CPU available, so got two i5 Intel NUCs. At the time of purchase the 1TB Samsung 850 EVO was only a little more expensive, so I opted for those.

Physical Network Design

The internet connection terminates in my lounge, while the lab will run in a remote location which has no structured cabling. My internet connectivity is provided by UK provider BT, who supply a BT Home Hub 4; while it provides great WiFi, it has only 3x 10/100 Ethernet ports and 1x Gigabit Ethernet port.

The Gigabit Ethernet port will be extended to the remote location over the power lines by use of a TP-LINK AV500 Wi-Fi Powerline Extender. While this provides a 500Mbps link, it only has 2x 10/100 Ethernet ports on the remote end. North-South traffic to the internet will therefore be limited to 100Mbps; as my connection only runs at 20Mbps I don't foresee this as an issue.

Most traffic will be East-West within the lab, so a Cisco SG200-08 switch will be attached to the AV500 to provide Gigabit Ethernet between the hosts and network attached storage.

The network switch provides 8x 1GbE ports, one of which uplinks to the internet and one of which connects the NAS. The other six ports will be split evenly between the two ESXi hosts by way of the two onboard NICs and four StarTech USB3 to GbE adapters.

Storage Design

The two ESXi hosts will run an All-Flash VSAN of 1.8TB, which will have deduplication and compression enabled.

For a few years now I've had a Synology DS213j NAS with 2x 500GB HDD; it has a single 1GbE NIC and can present iSCSI and/or NFS.

Bill Of Materials

2 x Intel NUC 6th Gen NUC6i5SYH (eBuyer unit cost £357.99)
2 x Crucial 32GB Kit (2x16GB) DDR4 (eBuyer unit cost £85.98)
2 x Samsung SM951 NVMe 128GB M.2 (eBuyer unit cost £42.98)
2 x Samsung 850 EVO 1TB SATA3 (eBuyer unit cost £247.98)
2 x Toshiba microSDHC UHS-I 8GB Card (eBuyer unit cost £2.39)
4 x StarTech USB 3.0 to Gigabit Ethernet (eBuyer unit cost £16.98)
1 x Cisco SG200-08 8-port Gigabit (eBuyer unit cost £72.96)
8 x 0.5m Cat6 Cables (eBuyer unit cost £0.82)

Management IP Address Scheme

The switch I purchased supports VLANs but does not include a routing capability; where required, routing will be done with NSX Edge devices. All core virtual machines will sit on the normal home network so they can NAT out to the internet. The subnet in use on my home network is 192.168.1.0/24, the gateway is 192.168.1.254 and 192.168.1.64 - 192.168.1.253 is in use for DHCP. I will use 192.168.1.10 - 192.168.1.50 for static IPv4 lab out of band management addressing.

Core Network Configuration

The TP-Link AV500 extender installation was a simple case of connecting a cable at each end and pressing a button to mirror the WiFi settings, which also improves WiFi connectivity in the remote location.

The Cisco SG200-08 ships with IP address 192.168.1.254/24, which would clash with the BT Home Hub. The first task is to connect the switch directly to a PC with an Ethernet cable and configure the PC's Ethernet port with a temporary IP on the 192.168.1.0/24 network. Using a web browser, connect to 192.168.1.254 and log in with username cisco and password cisco. I then updated the switch to static IP address 192.168.1.10/24 with gateway 192.168.1.254.
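If the PC runs Windows, the temporary address can be set from an elevated command prompt; the adapter name "Ethernet" and the address 192.168.1.100 below are only examples, substitute whatever suits your machine, and revert to DHCP when done.

rem "Ethernet" and 192.168.1.100 are example values, adjust for your PC
netsh interface ip set address "Ethernet" static 192.168.1.100 255.255.255.0
rem once the switch is re-addressed, put the adapter back on DHCP
netsh interface ip set address "Ethernet" dhcp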

Once the switch has the correct IP address we can connect it to the AV500; I used port #8. From a WiFi client we can then test connectivity and do some basic housekeeping, such as changing the clock source to SNTP and enabling the three default SNTP servers.

Connect the NAS to port #7.

Ports #1 - #6 will be used for ESXi host connectivity. VSAN benefits from jumbo frames and NSX VXLAN needs an MTU of at least 1600, so we increase the MTU for these ports to their maximum of 9216.

Save the running configuration to be the startup configuration before exiting or it will be lost when the switch restarts.

ESXi Host Installation

I followed the installation section of William Lam's guide, which worked well. William had also found that StarTech USB3 1GbE NICs can be added to 6th generation NUCs; following his guide and installing the supplied driver, these should get detected.

Once installed, the first task is to configure IPv4 addressing; we will be using 192.168.1.11 and 192.168.1.12. Set the hostnames to esx1.darrylcauldwell.local and esx2.darrylcauldwell.local and, while we have not set up DNS yet, set the primary DNS server to the IP which will be Active Directory DNS, 192.168.1.14. This lab will not use IPv6 so at this stage I disable that too.
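This can all be done from the DCUI, but if you prefer the ESXi Shell the equivalent for esx1 looks roughly like this (adjust the address and FQDN for esx2):

# Management IP, default gateway, hostname and (future) DNS server for esx1
esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.1.11 -N 255.255.255.0
esxcli network ip route ipv4 add -n default -g 192.168.1.254
esxcli system hostname set --fqdn=esx1.darrylcauldwell.local
esxcli network ip dns server add --server=192.168.1.14
# IPv6 is not used in this lab; this change takes effect after a reboot
esxcli network ip set --ipv6-enabled=false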

You should now be able to use a browser to reach the ESXi Embedded Host Client:
https://192.168.1.11/ui/
https://192.168.1.12/ui/

Single Node VSAN

We will create VSAN on a single node in order to deploy DNS and vCenter, using a method based on this William Lam article.

The configuration of VSAN without vCenter is done via the command line on the ESXi host, so the first task is to enable SSH and the ESXi Shell.

In order to run a single VSAN node we need to update the default VSAN storage policy so that VMs can be force provisioned even though the FTT requirement cannot yet be met,

esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
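
The updated defaults can be confirmed before going any further with,

esxcli vsan policy getdefault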

At this stage we can check whether VSAN will identify the M.2 device for the cache tier and the SATA SSD for the capacity tier by running the following and checking the IsCapacityFlash attribute,

vdq -q

We will find that it is not marked correctly. The same output gives the device name, so we can use it to configure this attribute by running a command similar to,

esxcli vsan storage tag add -d t10.ATA_____Samsung_SSD_850_EVO_1TB_________________S2RFNXAH317049Z_____ -t capacityFlash

We can then check the attribute is updated correctly by running,

vdq -q

We can then add both disks to create a disk group, by running a command similar to the below, substituting the disk names with the output of the vdq -q command,

esxcli vsan storage add -s t10.ATA_____SAMSUNG_MZHPV128HDGM2D00000______________S1X3NYAH201722______ -d t10.ATA_____Samsung_SSD_850_EVO_1TB_________________S2RFNXAH317049Z_____

We can then create the VSAN cluster by running,

esxcli vsan cluster new

This should now provision a VSAN datastore on a single node to be used to deploy the first VMs.
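At this point it is worth confirming that the single node cluster has formed and that the vsanDatastore is visible,

esxcli vsan cluster get
esxcli storage filesystem list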

Active Directory and DNS

In order to deploy vCenter 6 we require DNS to be in place; this will be hosted on a Windows Server 2012 R2 virtual machine.

  • Create a folder called ISOs on VSAN datastore
  • Upload Windows Server 2012 R2 ISO files to ISOs folder on VSAN datastore
  • Create new VM named AD with hardware config 2x vCPU, 4GB vRAM, 1x 60GB vHDD, attach Windows ISO as vCD-ROM
  • Apply MSDN License Key and Activate Windows
  • Disable IE Enhanced Security
  • Use Windows Update to apply all current patches
  • As some Windows Updates will update .NET we should force the assemblies to be updated
%windir%\Microsoft.NET\Framework\v4.0.30319\ngen.exe update /force
%windir%\Microsoft.NET\Framework64\v4.0.30319\ngen.exe update /force
  • Update the Windows computer name to 'ad'
  • Configure IPv4 address 192.168.1.14, netmask 255.255.255.0, gateway 192.168.1.254, DNS server 192.168.1.14
  • Add Active Directory Domain Services, DNS, Desktop Experience and .NET Framework 3.5 using Roles and Features (a PowerShell alternative to these last few steps is sketched after this list)
  • Create a DNS Forward lookup zone for darrylcauldwell.local
  • Create a DNS Reverse lookup zone for 192.168.1.0
  • Create A & PTR record in DNS for ‘ad’ with IP ‘192.168.1.14’
  • Add a new Active Directory forest named darrylcauldwell.local
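
If you would rather script the roles, forest and DNS records than click through Server Manager, a rough PowerShell equivalent of the last few steps looks like the below. Run it in an elevated session; note that Install-ADDSForest prompts for a safe mode password and reboots the server, and that the forward lookup zone for darrylcauldwell.local is created automatically as part of the promotion.

# Install the required roles and features (.NET 3.5 may need -Source pointing at the sxs folder on the install media)
Install-WindowsFeature AD-Domain-Services, DNS, Desktop-Experience, NET-Framework-Core -IncludeManagementTools
# Create the new forest; this reboots the server when it completes
Install-ADDSForest -DomainName darrylcauldwell.local -InstallDns
# After the reboot: reverse lookup zone for 192.168.1.0/24, then A and PTR records for 'ad'
Add-DnsServerPrimaryZone -NetworkId "192.168.1.0/24" -ReplicationScope Domain
Add-DnsServerResourceRecordA -ZoneName darrylcauldwell.local -Name ad -IPv4Address 192.168.1.14 -CreatePtr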

As well as hosting AD and DNS, this VM will be used as a jump box, at least initially, so we also perform the following extra steps.

  • Allow Remote Desktop access
  • Install Google Chrome
  • Install Google Chrome PostMAN Rest Client App
  • Install putty and WinSCP
  • Add Port Forward record on the BT HomeHub Router

vCenter

Follow the VMware guide for installing a Tiny VCSA with Embedded PSC to the newly formed VSAN datastore, giving it IP address 192.168.1.13 and hostname vcenter.darrylcauldwell.local.

Create a Datacenter and add the ESXi hosts. Enable vMotion and VSAN traffic on vmk0. Add VSAN Enterprise, vCenter, NSX and ESXi license keys.
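Tagging vmk0 for VSAN and vMotion can also be done per host from the ESXi Shell if you find that easier than the web client, with something like,

# Tag vmk0 for VSAN traffic
esxcli vsan network ip add -i vmk0
# Tag vmk0 for vMotion
vim-cmd hostsvc/vmotion/vnic_set vmk0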

VSAN Cluster

Create a cluster in the Datacenter to hold the two physical hosts and enable VSAN on it. Move the two ESXi hosts into the cluster, then edit the VSAN configuration and enable Deduplication and Compression.

For some reason the storage policy settings from the single node VSAN cluster are not picked up by vCenter, so we need to manually set the default VM Storage Policy again via the GUI with FTT=0 and force provisioning enabled.

vRealize Log Insight

As ESXi is installed to USB rather than a physical disk, the log files are not persistent, and in a home lab I'll be trying things which cause errors, so retaining the log files will be useful. I therefore deploy the Log Insight 3.3 OVF at this point; it comes with the vSphere content pack included, so I just configure it with the correct FQDN, and as I'll shortly be adding NSX I install that content pack now too.

In order to view the network switch logs as part of troubleshooting I also configure the Log Insight IP address as a remote syslog receiver on the switch.
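Log Insight can configure the ESXi hosts for you as part of its vSphere integration, but the hosts can also be pointed at it manually from the ESXi Shell; the 192.168.1.15 below is only an example, substitute whatever address you deployed Log Insight with.

# example Log Insight address, replace with your own
esxcli system syslog config set --loghost='udp://192.168.1.15:514'
esxcli network firewall ruleset set -r syslog -e true
esxcli system syslog reload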

vRealize Operations Manager

While this won't be production, it's useful to see what information is recorded in vROps, so I deployed the 6.2.1 OVF and then configured the Log Insight integration.

NSX for vSphere

Create a new VDS with both hosts added and assign both USB3 NICs on each host as uplinks. Update the MTU size to 9000 and enable LLDP to both listen and advertise.

Deploy NSX Manager and give it an IP address on the out of band management network (192.168.1.17). Register it with vCenter, and update remote syslog to point to Log Insight.

Create an NSX Controller IP Pool of 192.168.1.20 - 192.168.1.25.

Create a VTEP IP Pool of 192.168.1.26 - 192.168.1.35.

Use Host Preparation to install the VIBs on the ESXi hosts in the cluster. Configure VXLAN to use the VTEP IP Pool.
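Once VXLAN is configured it is worth checking that jumbo frames actually pass between the VTEPs. From one host, ping the other host's VTEP address with a large, non-fragmenting packet; the vmk number and target address below are examples and depend on what NSX created from the VTEP pool, so check yours first with esxcli network ip interface ipv4 get. A size of 1572 validates the 1600 byte minimum VXLAN needs; with MTU 9000 end to end a size of 8972 should also succeed.

vmkping ++netstack=vxlan -I vmk3 -d -s 1572 192.168.1.27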

As this is only a lab and we don't need high availability, deploy a single NSX Controller to the cluster.

Add a Segment ID Pool of 5001 - 6000.

Create a Transport Zone for the cluster.
