Advanced vSphere 5.x Storage - Masking, Multipathing & Filtering
21 Jul 2014
The relationship between a vSphere ESXi host and its shared storage is very important for the solution to work effectively. The storage architecture within vSphere is extensible and this extensibility is known as the Pluggable Storage Architecture often abbreviated to PSA.
The PSA is an open, modular framework that coordinates the simultaneous operation of multiple multipathing plugins (MPPs). The PSA is a collection of VMkernel APIs that allow third-party hardware vendors to insert code directly into the ESXi storage I/O path. This allows third-party software developers to design their own load-balancing techniques and failover mechanisms for a particular storage array. The PSA coordinates the operation of the NMP and any additional third-party MPPs.
To view which PSA plugins are loaded, run:
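A sketch of the listing on an ESXi 5.x host; the `--plugin-class` option limits the output to multipathing plugins:

```shell
# List all loaded PSA plugins
esxcli storage core plugin list

# Limit the listing to multipathing (MP) plugins only
esxcli storage core plugin list --plugin-class=MP
```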
By default only the NMP and MASK_PATH plugins are installed. To install a third-party PSA plugin, your vendor will provide a vSphere Installation Bundle (VIB) file. As an example of how to install a VIB, consider the NetApp NFS VAAI plugin.
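A minimal sketch of the install; the datastore path and bundle filename are illustrative, so substitute whatever VIB your vendor supplies:

```shell
# Copy the VIB to the host (e.g. onto a datastore), then install it
esxcli software vib install -v /vmfs/volumes/datastore1/NetAppNasPlugin.vib

# Confirm it is present; a reboot is typically required before it is active
esxcli software vib list
```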
As there are multiple plugins supplied, and others can be plugged in, there is a rule list which defines which device gets managed by which plugin. You can view the existing claim rules:
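The claim rules for the MP plugin class are listed with:

```shell
# Shows rule number, class (file/runtime), plugin and match criteria
esxcli storage core claimrule list
```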
The rules are applied by rule number, lowest to highest, so if you have a complex claim rule set and your devices are being claimed incorrectly, check the order. You will see a catch-all at the bottom, rule 65535, which assigns any device without a specific match to NMP.
One MP rule you will notice is listed twice, with Class shown as both file and runtime. The file class reflects the rules persisted on disk in /etc/vmware/esx.conf, while runtime shows the rules currently loaded into the VMkernel.
Using this we can, if we wish, tell the PSA to detect devices and apply the MASK_PATH plugin. You might want to consider doing this at the ESXi layer if an issue with the storage causes the host to lose contact with it and the LUN (or LUNs) enters an all-paths-down (APD) condition, temporarily masking the path while the storage engineer fixes the fabric.
Obtain the ESXi device ID of the LUN you would like to mask:
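The device listing below is the standard way to find the identifier; note the naa.* ID of the LUN you want to mask:

```shell
# List all storage devices seen by the host
esxcli storage core device list
```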
Once you have the device ID you can then obtain the C (channel), T (target), L (LUN) and vmhba of the device you want to mask:
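A sketch using an illustrative device ID; the Runtime Name field in the output shows the vmhbaX:C:T:L values for each path:

```shell
# Device ID is illustrative -- use the naa.* ID from your own device list
esxcli storage core path list -d naa.60a98000572d54724a34642d71325763
```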
Claim rules can be viewed and changed using the claimrule namespace; use the list output to find a claim rule number not already in use:
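User-defined rules live between the system rules and the 65535 catch-all, so pick an unused number in that range (120 is an illustrative choice):

```shell
# Existing rule numbers appear in the Rule column; choose a free one
esxcli storage core claimrule list
```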
Using all the above information, assign the device to MASK_PATH:
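A sketch of the add; the rule number, adapter and C:T:L values are illustrative, so substitute the values you gathered for your own device:

```shell
# Add a location-based claim rule handing this path to MASK_PATH
esxcli storage core claimrule add -r 120 -t location -A vmhba33 -C 0 -T 1 -L 20 -P MASK_PATH
```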
Once the rule is added you should be able to see it in the claim rule list, but you will notice it is in the file state, not runtime.
You now need to load the file-based rules and then run the new rule set:
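The two-step activation looks like this:

```shell
# Load the rules defined on disk into the VMkernel...
esxcli storage core claimrule load

# ...then apply the loaded rule set to the paths
esxcli storage core claimrule run
```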
Once loaded and in the running configuration, you should see the rule listed with both file and runtime classes.
Even though the new rule is in place for new devices to pick up, the current device is still claimed under its old rule; you can either reboot, or run a reclaim on the LUN so it picks up its new rule:
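A sketch of the reclaim; the device ID is illustrative:

```shell
# Unclaims the device and re-runs the claim rules against it
esxcli storage core claiming reclaim -d naa.60a98000572d54724a34642d71325763
```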
If you're in a lab and want to get your LUN back:
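A sketch of the undo sequence; the rule number, adapter and C:T:L values are illustrative:

```shell
# Remove the masking rule and reload the rule set
esxcli storage core claimrule remove -r 120
esxcli storage core claimrule load
esxcli storage core claimrule run

# Unclaim the masked path so NMP can claim it, then rescan the adapter
esxcli storage core claiming unclaim -t location -A vmhba33 -C 0 -T 1 -L 20
esxcli storage core adapter rescan -A vmhba33
```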
You may want to mask out a whole vendor:
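A sketch using a vendor-type rule; the vendor string and rule number are illustrative (check the Vendor column of the device list for the exact string your array reports):

```shell
# Mask every device presented by this vendor
esxcli storage core claimrule add -r 130 -t vendor -V NETAPP -P MASK_PATH
esxcli storage core claimrule load
esxcli storage core claimrule run
```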
To unmask a whole vendor:
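A sketch of the reverse; the rule number and vendor string are illustrative, and the vendor-type unclaim assumes your esxcli build supports it (otherwise unclaim each path by location, or reboot):

```shell
# Remove the vendor masking rule and reload the rule set
esxcli storage core claimrule remove -r 130
esxcli storage core claimrule load
esxcli storage core claimrule run

# Unclaim the vendor's paths so they can be reclaimed by NMP
esxcli storage core claiming unclaim -t vendor -V NETAPP
```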
Once you have the correct plugins installed, and claim rules in place so the correct devices get claimed by the correct plugin, you might then want to configure the plugin to control how it reacts to path failover and how it chooses the active path.
Within the storage plugin are two submodules
- Storage Array Type Plugin (SATP) – Used for path failover
- Path Selection Plugin (PSP) – Used for selecting the path
An SATP plugin monitors physical path health, reports changes in physical paths to the NMP, and executes array-specific actions for activating and deactivating paths. The Path Selection Plug-in (PSP) selects the path for I/O requests. The flow of how these two components operate is nicely shown in this short video.
The pathing policies which can be used with the VMware NMP can be managed via the esxcli storage nmp namespace.
You can list all available PSPs using:
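```shell
# Lists each Path Selection Plug-in and its description
esxcli storage nmp psp list
```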
- VMW_PSP_MRU – Most Recently Used; after a path failure and recovery, the load stays on the current path rather than failing back
- VMW_PSP_RR – Round Robin; rotates I/O through the available paths
- VMW_PSP_FIXED – Fixed pathing; after a path failure and recovery, the load moves back to the preferred path
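A policy can be changed per device; a sketch, with an illustrative device ID:

```shell
# Show the current SATP and PSP assigned to a device
esxcli storage nmp device list -d naa.60a98000572d54724a34642d71325763

# Switch the device's path selection policy to Round Robin
esxcli storage nmp device set -d naa.60a98000572d54724a34642d71325763 -P VMW_PSP_RR
```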
The SATP-level path selection defaults can be viewed:
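```shell
# Each SATP is listed alongside its default PSP
esxcli storage nmp satp list
```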
The final thing to cover with regard to how storage is seen and managed within ESXi is device filtering. There are four storage filters, all of which are applied by default; these filters define what can be seen within the CLI and GUI.
- VMFS Filter: filters out storage devices or LUNs that are already used by a VMFS datastore
- RDM Filter: filters out LUNs that are already mapped as an RDM
- Same Host and Transports Filter: filters out LUNs that can't be used as a VMFS datastore extent. It:
  - Prevents you from adding LUNs as an extent that are not exposed to all hosts sharing the original VMFS datastore
  - Prevents you from adding LUNs as an extent that use a storage type different from the original VMFS datastore
- Host Rescan Filter: Automatically rescans and updates VMFS datastores after you perform datastore management operations
vSphere Client -> Administration -> vCenter Server -> Settings -> Advanced Settings
To disable a filter, add the corresponding key (if not already there) and set it to false.
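The vCenter advanced setting keys corresponding to the four filters are:

```
config.vpxd.filter.vmfsFilter
config.vpxd.filter.rdmFilter
config.vpxd.filter.SameHostAndTransportsFilter
config.vpxd.filter.hostRescanFilter
```

Setting any of these to false disables that filter for all hosts managed by the vCenter Server, so use with care outside a lab.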