We recently purchased a new all-flash storage array here at CoverMyMeds and in the process of setting it up, I have been reminded how something as simple as establishing multipath access to LUNs can be abstracted by software — and subsequently forgotten by humans. In our environment, a Windows server also needs iSCSI access to our VMware datastores to perform backups via VADP. So here’s a quick refresher on how to get your ESXi hosts and your Windows servers to multipath the old-fashioned way in case you forgot how to do it, like I did.
The process on ESXi is relatively straightforward. Once you have created your datastore on top of the new SAN volume, select the datastore from the vSphere Web Client home page Storage view. Then click the Manage tab, Settings, and Connectivity and Multipathing. Select a host in your cluster and the web client will display the path selection policy currently in effect. The default is ‘Fixed (VMware)’, and we want to change that.
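If you prefer the command line, you can check the policy currently in effect from an SSH session on the host. This is a sketch; the device identifier shown is a placeholder for your own LUN's naa ID:

```shell
# List every NMP-claimed device and its current path selection policy (PSP):
esxcli storage nmp device list

# Or inspect a single LUN (substitute your own device identifier):
esxcli storage nmp device list -d naa.60003ff44dc75adc9b7ac8b6a7a56f3c
```

Look for the "Path Selection Policy" line in the output; on a freshly mounted volume you'll typically see the default the author mentions.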
So hit the Edit Multipathing button and select ‘Round Robin (VMware)’, or whatever policy your storage vendor recommends. Note that some vendors (like Dell’s EqualLogic) package their own multipathing driver for use with their SAN, so this article applies only to SANs that use the built-in VMware policies. Hit OK and you’ve set up multipathing… for one host. Be sure to repeat this procedure for each host in your cluster; I can't think of a reason you'd want hosts within a cluster to have different multipath policies, and you almost certainly don't. Even better, your SAN vendor may offer a customized SATP rule to make this a little more automated. For example, we have a SATP rule in place that matches a string in the iSCSI ID of a LUN when it logs into an initiator on the ESXi host; when it matches, the rule applies a multipathing policy and other MPIO tweaks automatically when the volume mounts.
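The per-host change and the SATP-rule approach can both be done with esxcli. A rough sketch, run per host; the naa ID, vendor and model strings, and the rule description are placeholders you'd replace with your array's values:

```shell
# Set the PSP for one device to Round Robin (repeat on each host in the cluster;
# substitute your own LUN identifier):
esxcli storage nmp device set -d naa.60003ff44dc75adc9b7ac8b6a7a56f3c -P VMW_PSP_RR

# Optionally, add a SATP claim rule so new LUNs from this array pick up
# Round Robin automatically when they mount. Vendor/model here are examples:
esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA -V "VendorX" -M "ModelY" \
  -P VMW_PSP_RR -e "Example array: Round Robin by default"
```

The claim rule only applies to devices claimed after it's added, so existing LUNs still need the one-time `device set` above.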
On Windows, it’s a little more convoluted. In the iSCSI Initiator Properties dialog box, when you attempt to connect to the volume, Windows offers you a checkbox to enable multipath. Be sure to check that box, then go into the Advanced properties. From there, select your Microsoft iSCSI Initiator (if you use the default software initiator), then select the first initiator IP address.
We have four initiator IPs on our backup server, so I selected the first appropriate IP on the list here. Select the target portal for your SAN, then click OK twice. Great! Your volume is now connected… on one path. You need to manually add any others, so select the volume from the list (refresh if necessary) and click Properties. Then click Add Session. Does this dialog look familiar? Enter the same details, but change the initiator IP to your second IP address. Then click OK. Repeat for each adapter that you want to use for multipathing, changing only the Initiator IP for each session. Once you’ve finished, you should see a session for each of your iSCSI adapters on the Properties page for your volume. To make sure they’re all active and connected, you can click Disconnect for that volume on the main iSCSI Initiator Properties page. The warning will tell you that you have X active sessions for that volume. If X equals your intended number, congratulations! Just be sure to click NO so you don’t drop the volume.
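All of that clicking can be scripted. Here's a minimal PowerShell sketch of the same steps, assuming the built-in Microsoft iSCSI initiator; the target IQN, portal address, and initiator IPs are all placeholders for your environment:

```shell
# PowerShell (run as Administrator). Establish one iSCSI session per
# initiator NIC to the same target, all flagged for multipath.
$target = "iqn.2001-05.com.example:array1.volume1"   # placeholder IQN
$portal = "10.0.10.50"                               # placeholder target portal
$nics   = "10.0.10.11", "10.0.10.12", "10.0.10.13", "10.0.10.14"

foreach ($ip in $nics) {
    Connect-IscsiTarget -NodeAddress $target `
        -TargetPortalAddress $portal `
        -InitiatorPortalAddress $ip `
        -IsMultipathEnabled $true `
        -IsPersistent $true
}

# Verify: you should see one session per initiator IP.
Get-IscsiSession | Where-Object { $_.TargetNodeAddress -eq $target }
```

`Get-IscsiSession` gives you the same session count as the Disconnect-warning trick above, without the risk of accidentally clicking Yes.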
Now we have set up multiple paths to the volumes, but Windows won’t use them all at once until we enable MPIO for iSCSI. This is not done by default when the MPIO feature is installed, so head to Control Panel > MPIO. Click on the Discover Multi-Paths tab, check the box for “Add support for iSCSI devices”, and click Add. The system will ask for a reboot.
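The same checkbox can be flipped from PowerShell, which is handy if you're building servers from a script. A sketch, assuming the MPIO feature may not be installed yet:

```shell
# PowerShell equivalent of "Add support for iSCSI devices" on the
# Discover Multi-Paths tab. Install the MPIO feature first if needed:
Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO

# Tell the Microsoft DSM to automatically claim iSCSI devices:
Enable-MSDSMAutomaticClaim -BusType iSCSI

# As with the GUI, a reboot is required before the claim takes effect.
```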
At this point, all SAN-mounted volumes should have multiple active sessions, with a default pathing policy of Round Robin. You can alter this to suit your needs on a per-volume basis from the Devices button in the iSCSI Initiator control panel.
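You can also inspect and adjust the policy from the command line. A sketch using the built-in tools; `RR` stands for Round Robin:

```shell
# PowerShell: set the Microsoft DSM's default load-balance policy to Round Robin
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

# mpclaim shows the policy currently in effect for each MPIO disk:
mpclaim -s -d
```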
Your SAN vendor may have tools that automate some or all of this process for either VMware or Windows, and it’s important to always follow their best practices for host connectivity. But the processes outlined above are a generic primer on how to ensure that your storage stays connected redundantly to your hosts.