vSphere Adding iSCSI Storage

KB ID 0001378

Problem

iSCSI storage is nice and cheap, so adding 1Gbps or 10Gbps iSCSI storage to your virtual infrastructure is a common occurrence.

vSphere Adding iSCSI Solution (vSphere 7/8)

Add a Software iSCSI Adaptor: Select the host > Configure > Storage Adapters > Add > Software iSCSI adaptor > OK.

After a few seconds you should see it appear at the bottom of the list.
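
If you prefer the command line, here's a minimal sketch of the same step using esxcli from an SSH session on the host (the adapter name vmhba65 below is just an example, yours will differ):

[box]

# Enable the software iSCSI initiator
esxcli iscsi software set --enabled=true

# Confirm it's enabled, and note the name of the new adapter (e.g. vmhba65)
esxcli iscsi software get
esxcli iscsi adapter list

[/box]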

Create a vSwitch and VMKernel: If you already have this configured you can skip this section, but basically you need a vSwitch with a VMKernel interface (that has an IP address that can ‘see’ your iSCSI device), and then you need to connect a physical NIC from that vSwitch to the iSCSI network (or VLAN).

With the host still selected > Configure > Virtual Switches > Add Networking.

VMKernel Network Adapter > Next.

New Standard Switch > Set the MTU to 9000 to enable jumbo frames > Next.

Note: Make sure the physical switches you are connecting to also support Jumbo Frames. Each vendor will be slightly different to configure.

THIS IS CONFUSING: Select the NIC you want to add to the vSwitch, and then ‘Move Down‘ so that it is listed under Active Adapters > Next.

Give the switch a sensible name (like iSCSI) > Next.

Define the IP address of the VMKernel (this needs to be able to see the iSCSI Target IP addresses) > Next.

Note: Don’t worry about the default gateway; it will display the default gateway of the management network, and that’s fine (unless you need to route to get to the iSCSI devices).

Review the settings > Finish.

You should now have a new vSwitch for iSCSI.
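
For reference, the same vSwitch/VMKernel setup can be sketched out with esxcli; the switch name, port group, physical NIC (vmnic2) and IP addressing below are just examples, swap in your own:

[box]

# Create the vSwitch, set jumbo frames, and attach the physical NIC on the iSCSI network
esxcli network vswitch standard add --vswitch-name=vSwitch-iSCSI
esxcli network vswitch standard set --vswitch-name=vSwitch-iSCSI --mtu=9000
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-iSCSI --uplink-name=vmnic2

# Add a port group and a VMKernel port with an IP that can 'see' the iSCSI target
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-iSCSI --portgroup-name=iSCSI
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static

[/box]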

vSphere Adding iSCSI Storage: Create Port Binding

Back on the Storage Adapters tab > Select the iSCSI adapter > Network Port Binding > Add.

Select the VMKernel port you’ve just created > OK.
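
From the CLI the same binding looks something like this (the adapter vmhba65 and VMKernel port vmk1 are examples; check yours with esxcli iscsi adapter list and esxcli network ip interface list):

[box]

# Bind the iSCSI VMKernel port to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba65 --nic=vmk1

# Verify the binding
esxcli iscsi networkportal list --adapter=vmhba65

[/box]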

vSphere Adding iSCSI Storage: Add iSCSI Target

Dynamic Discovery > Add.

Add in the iSCSI Target IP for your storage device/provider > OK.

At this point it’s a good idea to do a full storage rescan.
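
Again, the same can be done from the ESXi shell if that's easier; the adapter name and target IP below are examples:

[box]

# Add a dynamic discovery (send targets) address, then rescan
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=192.168.50.20
esxcli storage core adapter rescan --all

[/box]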

No Storage Has Appeared? Remember, at this point your iSCSI storage device probably needs to ‘allow’ this ESX server access to the storage, before the storage will either appear (if it’s already been formatted as VMFS and is in use by other hosts), or before this host (if it’s the first to connect) can format the datastore as VMFS.

How this is done varies from vendor to vendor.

If you need to add the storage manually > Host > Storage > New Datastore.
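
Before creating the datastore manually, it can be worth confirming the LUN is actually visible to the host from the CLI, something like:

[box]

# List the SCSI devices the host can see (your new iSCSI LUN should be listed)
esxcli storage core device list

# List any VMFS datastores/extents already mounted
esxcli storage vmfs extent list

[/box]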

vSphere Adding iSCSI Solution (vSphere 5/6)

Add a Software iSCSI Adaptor: Select the host > Configure > Storage Adaptor > Add > Software iSCSI adaptor.

After a few seconds you should see it appear at the bottom of the list.

Create a vSwitch and VMKernel: If you already have this configured you can skip this section, but basically you need a vSwitch with a VMKernel interface (that has an IP address that can ‘see’ your iSCSI device), and then you need to connect a physical NIC from that vSwitch to the iSCSI network (or VLAN).

Note: You can add a port group to an existing switch (or use a distributed switch!) Here I’m using a standard vSwitch and keeping my storage on its own vSwitch.

With the host still selected > Configure > Virtual Switches > Add.

VMKernel Network Adaptor > Next > New Standard Switch > Next > Add in the Physical NIC that’s connected to your iSCSI network > Next.

Give the VMKernel port a name (i.e. Storage-iSCSI) > Next > Put in the IP details* > Next > Finish.

*Note: You may need to add a gateway if your iSCSI device is on another network.

Jumbo Frames Warning: Edit the properties of the switch and set its MTU to 9000 to allow for jumbo frames.

Make sure the physical switches you are connecting to also support Jumbo Frames. Each vendor will be slightly different; in my case the switches are Cisco Catalyst 3750-Xs, so I just need to enable jumbo frames globally on the switch (which requires a reload/reboot!)

Allow Jumbo Frames Cisco Catalyst 3750-X

Execute the following commands;

[box]

Petes-Switch#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Petes-Switch(config)#system mtu jumbo 9198
Changes to the system jumbo MTU will not take effect until the next reload is done

Then Reboot/Reload the Switch and Check

Petes-Switch#show system mtu

System MTU size is 1500 bytes
System Jumbo MTU size is 9198 bytes
System Alternate MTU size is 1500 bytes
Routing MTU size is 1500 bytes

[/box]
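
Once the physical switches are done, a quick way to prove jumbo frames work end to end is a vmkping from the host with the ‘don’t fragment’ bit set; 8972 bytes allows for the 28 bytes of IP/ICMP headers (the vmk1 interface and target IP below are just examples):

[box]

vmkping -I vmk1 -d -s 8972 192.168.50.20

[/box]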

vSphere Configure iSCSI: Back on your vCenter, we need to ‘bind’ the VMKernel port we created above to our Software iSCSI adaptor. With the host selected > Configure > Storage Adaptors > Select the iSCSI Adaptor > Network Port Binding > Add.

Select the VMKernel Port > OK.

Note: If you can’t see/select anything, make sure each iSCSI port group is set to use ONLY ONE physical NIC (i.e. move the others into ‘unused’). That’s on the port group properties, NOT the failover priority of the vSwitch.

Add an iSCSI Target to vSphere: With the iSCSI Adaptor still selected > Targets > Add.

Give it the IP address of your iSCSI device.

At this point, I would suggest you perform a ‘Storage Rescan’.


Ensure ALL HOSTS have had the same procedure carried out on them. Then (assuming you have configured your iSCSI device, presented the storage, and allowed access to it from your ESX hosts) right click the Cluster > Storage > New Datastore > follow the instructions.

IBM Storwize V3700 iSCSI

This article is really just for configuring the VMware side, but as a placeholder (and to jog my memory if I ever put in another one), the process is:

1. Set the iSCSI IP addresses. Note: these are under Settings > Network > Ethernet Ports (not iSCSI, confusingly).

2. Create the Hosts (Note: you can copy the IQN in from vCenter).

3. Create MDisks (RAID groups) from the available disks. Note: Global Spares are allocated here.

4. Create a Pool; I don’t really see the point of these, but you need one to create a volume.

5. Create the Volumes, which you will present to the Hosts, then create the host mappings.

Related Articles, References, Credits, or External Links

vSphere ESX – Configure Buffalo Terastation 5000 as an iSCSI Target

VMware ‘Disable DelayedAck’ Does Not Work?

KB ID 0001525

Problem

I’ve got a client that’s been having some performance issues with their VMs. Their storage vendor (EMC) found this in the logs;

[box]

B       02/28/19 09:50:53.953 scsitarg          117000e [INFO] System: iSCSI Logout Initiator Data: IP=192.168.200.161 Name=...-ec-21 Target Data: Port=2 Flags=0x00002002 Info=0x01200801
B       02/28/19 09:50:53.969 scsitarg          117000e [INFO] System: iSCSI Logout Initiator Data: IP=192.168.201.161 Name=...-ec-21 Target Data: Port=3 Flags=0x00002002 Info=0x01200801
B       02/28/19 09:51:16.413 Health              608fe [WARN] User: Host ESXi-01.petenetlive.com does not have any initiators logged into the storage system.
A       02/28/19 10:04:25.968 scsitarg          117000d [INFO] System: iSCSI Login Initiator Data: IP=192.168.200.161 Name=...-ec-21 Target Data: Port=2 Flags=0x00002002 Info=0x00000000 [Target]
B       02/28/19 10:04:26.034 scsitarg          117000d [INFO] System: iSCSI Login Initiator Data: IP=192.168.200.161 Name=...-ec-21 Target Data: Port=2 Flags=0x00002002 Info=0x00000000
A       02/28/19 10:04:31.996 scsitarg          117000d [INFO] System: iSCSI Login Initiator Data: IP=192.168.201.161 Name=...-ec-21 Target Data: Port=3 Flags=0x00002002 Info=0x00000000 [Target]
B       02/28/19 10:04:32.055 scsitarg          117000d [INFO] System: iSCSI Login Initiator Data: IP=192.168.201.161 Name=...-ec-21 Target Data: Port=3 Flags=0x00002002 Info=0x00000000
B       02/28/19 10:04:57.438 Health              608fc [INFO] User: Host ESXi-01.petenetlive.com is operating normally.
Host Host ESXi-01.petenetlive.com is accessing lun Datastore_3 as HLU 3, After the initiators for this host start logging in/logging,  unit attention update events will be logged as the paths to the luns have changed this is expected
2019/02/28-09:50:41.607527 ~~~~     7F3C92369703      std:TCD:   Unit Attention update from 0000001A to 0001030D for LUN 0x3.
2019/02/28-10:02:55.860669 ~~~~     7FE476E61702      std:TCD:   Unit Attention update from 00010149 to 00010157 for LUN 0x3.

[/box]

They said we should disable DelayedAck, and kindly gave me the VMware KB that outlined the procedure.

Solution

The procedure outlined (for VMware 6.x) is to put the host in maintenance mode, then edit the properties of the iSCSI controller(s), untick the DelayedAck options, reboot the host, and everything will be peachy. However, even though (post reboot) everything looks good in the vSphere Web console, if you look on the host you may find something like this;

[box]

vmkiscsid --dump-db | grep Delayed

[/box]

DelayedAck = ‘1’ means ENABLED, DelayedAck = ‘0’ means DISABLED

So half my iSCSI entries in the iSCSI database still have DelayedAck ENABLED?

Some Internet searching told me this was quite common, and that the best way to ‘fix‘ it was to disable the iSCSI initiator, remove the iSCSI database, reboot, and then set up iSCSI again;

[box]

cd /etc/vmware/vmkiscsid
esxcfg-swiscsi -d
rm -f vmkiscsid.db
reboot

[/box]

Which is fine IF YOU ARE USING A SOFTWARE iSCSI INITIATOR. I, however, was not; I had 2x dedicated hardware iSCSI HBAs on each host!

After many hours of messing about and trial and error, it became clear I had to do things in a certain order, or DelayedAck would simply be enabled again whether I liked it or not. 🙁

Disable DelayedAck With Hardware iSCSI NICs / HBAs

MAKE SURE THE HOST IS IN MAINTENANCE MODE FIRST

Then take a note of your iSCSI setup (Port Groups, VMKernel Ports, and Physical NICs); you are going to delete the iSCSI database in a minute, and you will need to ‘rebind’ the VMKernel Ports and add the iSCSI targets back in again.

Manually remove your iSCSI target(s) from ALL the iSCSI NICs/HBAs.
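
If you'd rather do that from the CLI than the web client, a rough sketch follows; the adapter names and target IPs are just examples, repeat for each HBA and each target you have:

[box]

# Record the current config before you delete the database
esxcli iscsi adapter list
esxcli iscsi networkportal list
esxcli iscsi adapter discovery sendtarget list --adapter=vmhba33

# Remove the dynamic discovery target(s) from each hardware iSCSI adapter
esxcli iscsi adapter discovery sendtarget remove --adapter=vmhba33 --address=192.168.200.10
esxcli iscsi adapter discovery sendtarget remove --adapter=vmhba34 --address=192.168.201.10

[/box]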

If you re-run the command vmkiscsid --dump-db | grep Delayed, you will see there are still some entries in the database with DelayedAck enabled! So, as in the software iSCSI example above, we are going to remove the iSCSI database; only here we don’t need to disable the software iSCSI initiator (because we are not using one!) Finally, reboot the host.

[box]

cd /etc/vmware/vmkiscsid
rm -f vmkiscsid.db
reboot

[/box]

When the host is back online ADD in the Network Port Binding for the appropriate VMkernel adaptor.

DON’T RESCAN THE CONTROLLER WHEN PROMPTED TO DO SO!

On the Advanced Settings of EACH hardware iSCSI NIC/HBA > Edit > UNTICK ‘DelayedAck’.

Double check they are both still unticked (I’ve seen them re-tick themselves for no discernible reason!) Then rescan the controller(s).

Target > Add.

Re-add the iSCSI target (that you took note of above).

Select the Target > Advanced > Untick the DelayedAck option (Note: This time it’s not inherited). Repeat for any additional iSCSI targets.

When they are all added, rescan the storage controllers again.

Finally, recheck that all the database entries are set to DISABLED.

[box]

vmkiscsid --dump-db | grep Delayed

[/box]
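
Note: You can also inspect the adapter-level iSCSI options (including DelayedAck) with esxcli; treat this as a pointer rather than gospel, as the exact parameter names (and whether they are settable this way) can vary between ESXi builds and HBA drivers, and the adapter name below is just an example:

[box]

esxcli iscsi adapter param get --adapter=vmhba33

[/box]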

Related Articles, References, Credits, or External Links

Thanks to Russell and Iain for their patience while I worked all that out!