
Troubleshooting ESXi HBA Fibre Connectivity

Checking physical NIC connectivity in vSphere is easy, as the link state is displayed in the vCenter console. Checking the link state of fibre channel HBAs is possible but not as straightforward, so I thought I'd detail the steps I used here.

I had a simple issue: the storage target was not being discovered by my newly built ESXi 5.5 host. Normally a couple of reboots or HBA resets are enough to get the array visible, but this host wasn't having it. The first thing to establish was whether the cabling engineer had connected it correctly; as the server is many hundreds of miles away, physically checking the cabling is non-trivial.

I first explored the esxcli storage namespace. From here I could verify that the adapters are discovered using either

esxcfg-scsidevs -a

or

esxcli storage core adapter list
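
Both report the same set of adapters; the esxcli version also includes a Link State column. As an illustration (the values below are made up and will vary with your hardware and driver), the output looks something like this:

HBA Name  Driver      Link State  UID                                   Description
--------  ----------  ----------  ------------------------------------  --------------------------------
vmhba2    qlnativefc  link-n/a    fc.20000000aa000001:21000000aa000001  (0000:05:00.0) QLogic 8Gb FC HBA

Note that, depending on the driver, the Link State column may only report link-n/a for fibre channel HBAs, which is exactly why the extra steps below are needed.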

Once we are happy the adapters are discovered, we can see whether I/O is passing through the HBAs using

esxcli storage core adapter stats get
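
The stats are listed per adapter. An abridged, illustrative excerpt (fields trimmed and numbers invented for this example) looks like:

vmhba2
   Successful Commands: 0
   Blocks Read: 0
   Blocks Written: 0
   Failed Commands: 0

If the counters for an adapter all stay at zero while you rescan storage, no I/O is making it through that HBA.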

Unfortunately there is no part of the esxcli storage namespace for checking fibre channel link state.

Up to and including vSphere 5.1, the running HBA driver populates /proc/scsi/ with a node which records the fibre channel link state.
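
The exact path depends on the driver in use; for example, with the legacy QLogic qla2xxx driver you would list the nodes and read the one for your adapter instance (the instance number 6 below is illustrative):

ls /proc/scsi/
cat /proc/scsi/qla2xxx/6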

For an active link:

Host adapter:Loop State = <READY>, flags = 0xaa68
Link speed = <8 Gbps>

For an inactive link:

Host adapter:Loop State = <DEAD>, flags = 0x1a268
Link speed = <Unknown>

For vSphere 5.5 and later, you do not see native drivers in the /proc nodes. To view native driver information, run the command:

/usr/lib/vmware/vmkmgmt_keyval/vmkmgmt_keyval -a
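
The full dump is long, so it helps to filter for the link or loop state lines; the grep pattern below is a starting guess, as the exact keyval names vary by driver:

/usr/lib/vmware/vmkmgmt_keyval/vmkmgmt_keyval -a | grep -iE 'link|loop'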

This command gives you some great information from the running native driver, including link state.
In my case the link was down, and it turned out to be a cabling fault.
