
vSphere Best Practice

I often get asked what the best general configuration is for private VMware hosting. Typically this comes from an engineer or architect who has been reading the various best practice guides and has become confused, because much best practice advice is conditional. They just want to stand something up quickly, with little detail on what the requirements are, and then evolve the configuration as the solution gets used and requirements emerge. It's certainly not ideal, but it does seem to happen a lot.

As such I have compiled this list of what I think are general-purpose best practices. These are not hard and fast rules, just a starting point; over time you should evaluate each one within your own environment.

vSphere

Monitoring

  • Configure SNMP trap forwarding for vCenter, and for any outbound traps from the server hardware running ESXi (see the sketch below)
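
As a rough illustration, here is a minimal pyVmomi sketch for pushing a trap target to an ESXi host's SNMP agent. The vCenter address, host name, community string and collector address are all placeholders, and the HostSnmpSystem class and method names should be verified against your pyVmomi version before use.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Connect to vCenter (placeholder address and credentials).
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the ESXi host by name (placeholder).
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")

# Enable the SNMP agent and point it at a trap collector (placeholder values).
spec = vim.host.SnmpConfigSpec(
    enabled=True,
    readOnlyCommunities=["public"],
    trapTargets=[vim.host.SnmpDestination(hostName="snmp-collector.example.com",
                                          community="public", port=162)],
)
host.configManager.snmpSystem.ReconfigureSnmpAgent(spec)
```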

High Availability

  • Form clusters and enable HA on clusters
  • Configure to use percentage of resources
  • Enable VM level HA monitoring
  • Use multiple HA isolation addresses pointing at highly available device IPs, such as the HSRP address of a network device, and change the isolation response to Shut Down (see the sketch after this list)
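
A minimal pyVmomi sketch of that HA configuration, assuming a connection obtained as in the SNMP sketch above; the cluster name, failover percentages and isolation address are placeholders to adjust for your environment.

```python
from pyVmomi import vim
# si / content obtained via SmartConnect as in the SNMP sketch above.

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Cluster01")   # placeholder cluster name

das = vim.cluster.DasConfigInfo(
    enabled=True,                      # enable HA on the cluster
    hostMonitoring="enabled",
    vmMonitoring="vmMonitoringOnly",   # VM-level HA monitoring
    admissionControlEnabled=True,
    admissionControlPolicy=vim.cluster.FailoverResourcesAdmissionControlPolicy(
        cpuFailoverResourcesPercent=25, memoryFailoverResourcesPercent=25),
    # Extra isolation address (e.g. an HSRP gateway IP) via HA advanced options.
    option=[vim.option.OptionValue(key="das.isolationaddress0", value="10.0.0.1"),
            vim.option.OptionValue(key="das.usedefaultisolationaddress", value="false")],
    # Change the default isolation response to Shut Down.
    defaultVmSettings=vim.cluster.DasVmSettings(isolationResponse="shutdown"),
)
spec = vim.cluster.ConfigSpecEx(dasConfig=das)
cluster.ReconfigureComputeResource_Task(spec, modify=True)
```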

DRS

  • Form clusters and enable DRS
  • Use larger clusters, and use DRS anti-affinity rules to create smaller groupings within them where required, for example for SQL Server licensing (see the sketch after this list)
  • Set exceptions for application-specific requirements, such as no vMotion for MS Lync
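
A sketch of enabling DRS and adding an anti-affinity rule with pyVmomi, assuming the connection and cluster lookup from the earlier sketches; the rule name and the VM names sql01 and sql02 are hypothetical.

```python
from pyVmomi import vim
# si / content and cluster located as in the earlier HA sketch.

# Enable DRS in fully automated mode.
drs = vim.cluster.DrsConfigInfo(enabled=True, defaultVmBehavior="fullyAutomated")

# Keep two hypothetical SQL VMs on different hosts (e.g. for licensing boundaries).
vms = [vm for vm in cluster.resourcePool.vm if vm.name in ("sql01", "sql02")]
rule = vim.cluster.AntiAffinityRuleSpec(name="separate-sql", enabled=True, vm=vms)
rule_spec = vim.cluster.RuleSpec(operation="add", info=rule)

spec = vim.cluster.ConfigSpecEx(drsConfig=drs, rulesSpec=[rule_spec])
cluster.ReconfigureComputeResource_Task(spec, modify=True)
```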

sDRS

  • Use VASA profile metadata from the storage array rather than creating profiles manually, where possible

VAAI Thin Provisioning & Unmap

  • Encourage guest OS choices that support TRIM/UNMAP, i.e. Windows Server 2012 and newer (but be careful of a 3am unmap storm)
  • Schedule unmapping of free SAN blocks on datastores
  • Plan VM template placement to maximise clone offload; this is storage dependent, but a template in the same aggregate as the target datastore will be offloaded rather than fully copied
  • As a starting point we generally aim for no more than 15 VMs per datastore; this isn't a hard limit, just a balance of risk we have settled on (see the report sketch after this list)
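
A quick way to check that guideline is to count the VMs on each datastore; a pyVmomi sketch, assuming the connection from the earlier sketches.

```python
from pyVmomi import vim
# si / content obtained via SmartConnect as in the earlier sketches.

# Report VM count per datastore against our informal ~15 VM guideline.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    vm_count = len(ds.vm)
    flag = "  <-- review" if vm_count > 15 else ""
    print(f"{ds.name}: {vm_count} VMs{flag}")
```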

CPU

  • Enable Turbo Boost (where available)
  • Enable Hyperthreading (where available)
  • Enable hardware support for virtualization:
      • Intel VT-x or AMD-V
      • Intel VT-d or AMD IOMMU
      • No eXecute (NX) / eXecute Disable (XD)
  • Always right-size your VMs, and be very careful not to oversize them with too many vCPUs (see the sketch after this list):
      • Idle vCPUs waste resources
      • A VM can always be increased in size later if necessary, whereas an oversized VM rarely gets the attention required to shrink it at a later date
  • Where possible use 1 vCPU per virtual socket:
      • Configuring multiple cores per virtual socket can impact performance on NUMA systems
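
A right-sizing example with pyVmomi, assuming the connection from the earlier sketches; the VM name and the 2 vCPU / 4 GB sizing are placeholders.

```python
from pyVmomi import vim
# si / content obtained via SmartConnect as in the earlier sketches.

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app01")   # placeholder VM name

# Right-size: 2 vCPUs presented as 2 sockets x 1 core, modest memory.
spec = vim.vm.ConfigSpec(numCPUs=2, numCoresPerSocket=1, memoryMB=4096)
vm.ReconfigVM_Task(spec)   # VM must be powered off unless CPU/memory hot-add is enabled
```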

NUMA

  • Use a vNUMA-enabled version of vSphere, i.e. 5.0 or newer
  • Use virtual hardware version 8 or newer on VMs whose guest OS is NUMA aware, such as Windows Server 2008 R2 and newer
  • Use a guest OS version that is NUMA aware, such as Windows Server 2008 or newer
  • Use 1 vCPU per virtual socket; configuring multiple cores per virtual socket can increase memory latency on NUMA systems (the audit sketch after this list flags such VMs)
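
A small audit along those lines with pyVmomi, assuming the connection from the earlier sketches: it flags VMs that use multiple cores per virtual socket or a virtual hardware version older than 8.

```python
from pyVmomi import vim
# si / content obtained via SmartConnect as in the earlier sketches.

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    if vm.config is None:          # skip VMs without a readable config
        continue
    hw = vm.config.hardware
    # vm.config.version is e.g. "vmx-08"; two-digit strings compare cleanly.
    if hw.numCoresPerSocket > 1 or vm.config.version < "vmx-08":
        print(f"{vm.name}: hw={vm.config.version}, "
              f"{hw.numCPU} vCPU, {hw.numCoresPerSocket} cores/socket")
```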

Networking

  • Configure the ESXi management portgroup on a Standard Switch and the other portgroups on a vSphere Distributed Switch
  • SplitRx mode is supported only on vmxnet3 network adapters and is disabled by default. Enable it where multiple virtual machines share a single physical NIC and receive a lot of multicast or broadcast packets (see the sketch after this list).
  • Ensure PortFast is enabled on all physical switch ports connected to ESXi hosts
  • Ensure Spanning Tree is disabled on all physical switch ports connected to ESXi hosts
  • Enable jumbo frames on network-based storage and vMotion networks
  • When specifying NICs, look for 10 GbE PCIe cards with the NetQueue feature
  • Use multiple uplinks per vSwitch
  • Separate vSwitches by network function where NIC quantity allows
  • Use 802.1Q trunked connections with the native VLAN set to the ESX management network VLAN
  • If there is a NetFlow collector in the environment, configure vSphere to send flow data to it where required
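
SplitRx is enabled per adapter with the VM setting ethernetX.emuRxMode; a pyVmomi sketch, assuming the connection and VM lookup from the earlier sketches (the adapter index 0 is a placeholder).

```python
from pyVmomi import vim
# si / content obtained via SmartConnect, and vm located, as in the earlier sketches.

# Enable SplitRx mode on the first vmxnet3 adapter of a multicast-heavy VM.
# ethernet0.emuRxMode = "1" turns SplitRx on for that adapter; "0" disables it.
spec = vim.vm.ConfigSpec(extraConfig=[
    vim.option.OptionValue(key="ethernet0.emuRxMode", value="1")
])
vm.ReconfigVM_Task(spec)
```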

Power

  • Set server power mode to “OS Control” in BIOS

Storage

  • Work with your storage vendor to define which multipathing policy to use per array
  • Split each guest volume into its own VMDK so that each can be resized independently and placed on the datastore with the most suitable characteristics (see the sketch after this list)
  • Use array-based snapshots for VM guest backups where possible
  • Use vscsiStats to profile the workload of your VMs, and work with your storage team to build LUNs suited to those workload profiles
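
To see how guest volumes map onto VMDKs and datastores, a simple report sketch with pyVmomi, assuming the connection from the earlier sketches.

```python
from pyVmomi import vim
# si / content obtained via SmartConnect as in the earlier sketches.

# List each VM's virtual disks with size and backing file (which shows the datastore).
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    if vm.config is None:
        continue
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            size_gb = dev.capacityInKB / (1024 * 1024)
            print(f"{vm.name}: {dev.backing.fileName} ({size_gb:.0f} GB)")
```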

Vendor Packs

  • Install and use the storage vendor's vCenter / vCOps plugins
  • Install and use server vendor vCenter plugins
