HPE MSA Cannot See LUN?

KB ID 0001862

Problem

I finally got round to replacing the SAN on my test network. I set up the new one via direct cable connection (10Gbps iSCSI DAC), created vDisks and volumes, presented those volumes, and set up the iSCSI bindings in vSphere: all vanilla stuff.

The ESX hosts could not see the storage LUNs; they could see the SAN, but ‘add datastore‘ showed me no available storage.

Solution: Cannot See LUN

Two days! That’s what this cost me. I’ve spent over 20 years deploying storage, mostly HPE, but also an assortment of Dell, IBM, NetApp, and a score of cheap alternatives. I manually changed the IQNs in VMware, I proved connectivity from the VMKernels to the storage array with vmkping, and I updated the controller and card firmware. Nothing.

I got a trusted colleague on the gear remotely to check I’d not done anything stupid; he made some suggestions, but still no progress. I opened a question on Experts Exchange; lots of good advice, but nothing worked.

Then, after trawling through old HPE and VMware forum posts, I found a link to a video of an Indian chap deploying some iSCSI volumes to a Windows server. Even though I don’t speak Hindi, I thought “What the hell, I’ll watch it, and make sure (once again) I had not done anything stupid.”

Then, while mapping the new volume, he did something so simple and so mind-bogglingly easy to miss that everyone I’d spoken to had missed it also. When mapping a volume you create a LUN (in this example LUN 10), set the rights to ‘read-write’, and apply.

See those green ticks over the iSCSI ports? They DO NOT mean ‘present the storage through those ports’. They simply mean there’s a working cable in those ports.

You must manually go to each port and make sure the PORT IS TICKED, so it looks like this.
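For reference, you can also do the mapping from the MSA CLI, where the ports you present through are explicit rather than hidden behind ticks. This is a hedged sketch based on the MSA ‘map volume’ command; the volume name, LUN, port list, and initiator IQN are all examples, and ‘show maps’ confirms what is actually being presented;

[box]

map volume access read-write lun 10 ports a1,a2,b1,b2 initiator iqn.1998-01.com.vmware:esx01 Vol1

show maps

[/box]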

Whoever designed that GUI needs a massive punch in the face.

Related Articles, References, Credits, or External Links

NA

vSphere Adding iSCSI Storage

KB ID 0001378

Problem

iSCSI storage is nice and cheap, so adding iSCSI 10/1Gbps storage to your virtual infrastructure is a common occurrence.

vSphere Adding iSCSI Solution (vSphere 7/8)

Add a Software iSCSI Adaptor: Select the host > Configure > Storage Adapters > Add > Software iSCSI adaptor > OK.

After a few seconds you should see it appear at the bottom of the list.
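Note: If you’d rather script this step, the software initiator can also be enabled from an SSH session; a quick sketch (the vmhba number it creates varies per host);

[box]

esxcli iscsi software set --enabled=true
esxcli iscsi software get

[/box]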

Create a vSwitch and VMKernel: If you already have this configured you can skip this section, but basically you need a vSwitch with a VMKernel interface (that has an IP address on it that can ‘see’ your iSCSI device), and then you need to connect a physical NIC from that vSwitch to the iSCSI network (or VLAN).

With the host still selected > Configure > Virtual Switches > Add Networking.


VMKernel Network Adapter > Next.

New Standard Switch > Set the MTU to 9000 to enable jumbo frames > Next.

Note: Make sure the physical switches you are connecting to also support Jumbo Frames. Each vendor will be slightly different to configure.

THIS IS CONFUSING: Select the NIC you want to add to the vSwitch, and then ‘Move Down‘ so that it is listed in Active Adapters > Next.

Give the switch a sensible name (like iSCSI) > Next.

Define the IP address of the VMKernel (this needs to be able to see the iSCSI Target IP addresses) > Next.

Note: Don’t worry about the default gateway; it will display the default gateway of the management network, and that’s fine unless you need to route to the iSCSI devices.

Review the settings > Finish.

You should now have a new vSwitch for iSCSI.
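For reference, here’s roughly the same vSwitch and VMKernel build from the command line. This is a hedged sketch; the vSwitch name, vmnic, vmk number, and IP address are examples, so substitute your own;

[box]

# Create the vSwitch with jumbo frames, and add an uplink
esxcli network vswitch standard add --vswitch-name=vSwitch_iSCSI
esxcli network vswitch standard set --vswitch-name=vSwitch_iSCSI --mtu=9000
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch_iSCSI

# Create the port group, and a VMKernel interface on it
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI --vswitch-name=vSwitch_iSCSI
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.50.10 --netmask=255.255.255.0 --type=static

[/box]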

vSphere Adding iSCSI Storage: Create Port Binding

Back on the Storage Adapters tab > Select the iSCSI adapter > Network Port Binding > Add.

Select the one you’ve just created > OK.

vSphere Adding iSCSI Storage: Add iSCSI Target

Dynamic Discovery > Add.

Add in the iSCSI Target IP for your storage device/provider > OK.

At this point it’s a good idea to do a full storage rescan.
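If you prefer the CLI, the port binding, target, and rescan steps look roughly like this; vmhba65 and vmk2 are examples (check yours with ‘esxcli iscsi adapter list’);

[box]

# Bind the VMKernel port to the software iSCSI adaptor
esxcli iscsi networkportal add --adapter=vmhba65 --nic=vmk2

# Add the dynamic discovery (Send Targets) address, then rescan
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=192.168.50.200:3260
esxcli storage core adapter rescan --all

[/box]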

No Storage Has Appeared? Remember, at this point your iSCSI storage device probably needs to ‘allow’ this ESX server access before the storage will either appear (if it’s already been formatted as VMFS and is in use by other hosts), or, if this is the first host to connect, before it can format the datastore as VMFS.

How this is done varies from vendor to vendor.

If you need to add the storage manually > Host > Storage > New Datastore.

vSphere Adding iSCSI Solution (vSphere 5/6)

Add a Software iSCSI Adaptor: Select the host > Configure > Storage Adaptor > Add > Software iSCSI adaptor.

After a few seconds you should see it appear at the bottom of the list.

Create a vSwitch and VMKernel: If you already have this configured you can skip this section, but basically you need a vSwitch with a VMKernel interface (that has an IP address on it that can ‘see’ your iSCSI device), and then you need to connect a physical NIC from that vSwitch to the iSCSI network (or VLAN).

Note: You can add a port group to an existing switch (or use a distributed switch!). Here I’m using a standard vSwitch and keeping my storage on its own vSwitch.

With the host still selected > Configure > Virtual Switches > Add.

VMKernel Network Adaptor > Next > New Standard Switch > Next > Add in the Physical NIC that’s connected to your iSCSI network > Next.

Give the VMKernel port a name (i.e. Storage-iSCSI) > Next > Put in the IP details* > Next > Finish.

*Note: You may need to add a gateway if your iSCSI device is on another network.

Jumbo Frames Warning: Edit the properties of the switch and set its MTU to 9000 to allow for jumbo frames.
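From the CLI that’s roughly the following; vSwitch1 and vmk1 are examples, and note that BOTH the vSwitch and the VMKernel port need the 9000 MTU;

[box]

esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

[/box]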

Make sure the physical switches you are connecting to also support Jumbo Frames. Each vendor will be slightly different; in my case the switches are Cisco Catalyst 3750-X’s, so I just need to enable jumbo frames universally on the switch (which requires a reload/reboot!)

Allow Jumbo Frames Cisco Catalyst 3750-X

Execute the following commands;

[box]

Petes-Switch#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Petes-Switch(config)#system mtu jumbo 9198
Changes to the system jumbo MTU will not take effect until the next reload is done

[/box]

Then Reboot/Reload the Switch and Check;

[box]

Petes-Switch#show system mtu

System MTU size is 1500 bytes
System Jumbo MTU size is 9198 bytes
System Alternate MTU size is 1500 bytes
Routing MTU size is 1500 bytes

[/box]

vSphere Configure iSCSI: Back on your vCenter, we need to ‘Bind’ the VMKernel port we created above, to our Software iSCSI adaptor. With the host selected > Configure > Storage Adaptors > Select the iSCSI Adaptor > Network Port Binding > Add.

Select the VMKernel Port  > OK.

Note: If you can’t see/select anything, make sure each iSCSI port group is set to use ONLY ONE physical NIC, (i.e. move the others into ‘unused’). That’s on the port group properties NOT the failover priority of the vSwitch.

Add an iSCSI Target to vSphere: With the iSCSI Adaptor still selected > Targets > Add.

Give it the IP address of your iSCSI device.

At this point, I would suggest you perform a ‘Storage Rescan’.
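You can do the same from the host’s command line if you prefer; a quick sketch;

[box]

esxcli storage core adapter rescan --all
vmkfstools -V

[/box]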


Ensure ALL HOSTS have had the same procedure carried out on them. Then, assuming you have configured your iSCSI device, presented the storage, and allowed access to it from your ESX hosts: right click the Cluster > Storage > New Datastore > Follow the instructions.

IBM Storwize V3700 iSCSI

This article is just for configuring the VMware side, but just as a placeholder (and to jog my memory if ever I put in another one), the process is:

1. Set the iSCSI IP addresses. Note: these are under Settings > Network > Ethernet Ports (not iSCSI, confusingly).

2. Create the Hosts (Note: you can copy the iqn in from vCenter).

 

3. Create MDiscs (RAID groups) from the available disks, Note: Global Spares are allocated here.

4. Create a Pool, I don’t really see the point of these, but you need one to create a volume.

5. Create the Volumes, which you will present to the Hosts, then create host mappings.

 

Related Articles, References, Credits, or External Links

vSphere ESX – Configure Buffalo Terastation 5000 as an iSCSI Target

VMware: Change IOPS Limit From 1000 to 1

KB ID 0001532

Problem

I got asked to do this by a client this week; HPE had requested that this be set for connections to their StoreVirtual VSA, which had been having some problems.

Solution

I followed the instructions and was at first confused, because I could not see the settings that needed changing. That’s because this only applies if you have MULTIPATHING enabled and set to ‘Round Robin’. So if your storage does NOT look like below (all paths Active I/O), then this procedure is not applicable.

So, assuming you are using round robin multipathing (and, <ahem!> the storage vendor hasn’t just pulled a solution from a list of things that might work rather than actually diagnosing the problem), you can see the current setting with the following command;

[box]

esxcli storage nmp device list

[/box]

Take note of the iSCSI storage names; below you can see they all start with naa.6000. You can also see the IOPS value is set to 1000.

To change the value, use the following command (change the naa.6000 prefix to match your device names);

[box]

for i in `esxcfg-scsidevs -c |awk '{print $1}' | grep naa.6000`; do esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=$i; done

[/box]

Then recheck; the new value should be ‘1’.
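Tip: rather than scrolling the full device list, you can pull out just the relevant lines; each device’s ‘Path Selection Policy Device Config’ should now show iops=1;

[box]

esxcli storage nmp device list | grep -i iops

[/box]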

Related Articles, References, Credits, or External Links

Disable ATS Heartbeat

VMware ‘Disable DelayedAck’ Does Not Work?

 

VMware ‘Disable DelayedAck’ Does Not Work?

KB ID 0001525

Problem

I’ve got a client that’s been having some performance issues with their VMs. Their storage vendor (EMC) said that, as a result of finding this in the logs;

[box]

B       02/28/19 09:50:53.953 scsitarg          117000e [INFO] System: iSCSI Logout Initiator Data: IP=192.168.200.161 Name=...-ec-21 Target Data: Port=2 Flags=0x00002002 Info=0x01200801
B       02/28/19 09:50:53.969 scsitarg          117000e [INFO] System: iSCSI Logout Initiator Data: IP=192.168.201.161 Name=...-ec-21 Target Data: Port=3 Flags=0x00002002 Info=0x01200801
B       02/28/19 09:51:16.413 Health              608fe [WARN] User: Host ESXi-01.petenetlive.com does not have any initiators logged into the storage system.
A       02/28/19 10:04:25.968 scsitarg          117000d [INFO] System: iSCSI Login Initiator Data: IP=192.168.200.161 Name=...-ec-21 Target Data: Port=2 Flags=0x00002002 Info=0x00000000 [Target]
B       02/28/19 10:04:26.034 scsitarg          117000d [INFO] System: iSCSI Login Initiator Data: IP=192.168.200.161 Name=...-ec-21 Target Data: Port=2 Flags=0x00002002 Info=0x00000000
A       02/28/19 10:04:31.996 scsitarg          117000d [INFO] System: iSCSI Login Initiator Data: IP=192.168.201.161 Name=...-ec-21 Target Data: Port=3 Flags=0x00002002 Info=0x00000000 [Target]
B       02/28/19 10:04:32.055 scsitarg          117000d [INFO] System: iSCSI Login Initiator Data: IP=192.168.201.161 Name=...-ec-21 Target Data: Port=3 Flags=0x00002002 Info=0x00000000
B       02/28/19 10:04:57.438 Health              608fc [INFO] User: Host ESXi-01.petenetlive.com is operating normally.
Host Host ESXi-01.petenetlive.com is accessing lun Datastore_3 as HLU 3, After the initiators for this host start logging in/logging,  unit attention update events will be logged as the paths to the luns have changed this is expected
2019/02/28-09:50:41.607527 ~~~~     7F3C92369703      std:TCD:   Unit Attention update from 0000001A to 0001030D for LUN 0x3.
2019/02/28-10:02:55.860669 ~~~~     7FE476E61702      std:TCD:   Unit Attention update from 00010149 to 00010157 for LUN 0x3.

[/box]

we should disable DelayedAck, and they kindly gave me the VMware KB that outlined the procedure.

Solution

The procedure outlined (for VMware 6.x) is to put the host in maintenance mode, edit the properties of the iSCSI controller(s), untick the DelayedAck options, reboot the host, and everything will be peachy. However, even though (post reboot) everything looks good in the vSphere Web console, if you look on the host you may find something like this;

[box]

vmkiscsid --dump-db | grep Delayed

[/box]

DelayedAck = ‘1’ means ENABLED, DelayedAck = ‘0’ means DISABLED

So half my iSCSI entries in the iSCSI database still have DelayedAck ENABLED?

Some Internet searching told me this was quite common, and that the best way to ‘fix‘ it was to disable the iSCSI initiator, remove the iSCSI database, reboot, and then set up iSCSI again;

[box]

cd /etc/vmware/vmkiscsid
esxcfg-swiscsi -d
rm -f vmkiscsid.db
reboot

[/box]

Which is fine IF YOU ARE USING A SOFTWARE iSCSI INITIATOR. I, however, was not; I had 2x dedicated hardware iSCSI HBAs on each host!

After many hours of messing about and trial and error, it became clear, I had to do things in a certain order, or DelayedAck would simply just be enabled whether I liked it or not. 🙁

Disable DelayedAck With Hardware iSCSI NICs / HBAs

MAKE SURE THE HOST IS IN MAINTENANCE MODE FIRST

Then take a note of your iSCSI setup: Port Groups, VMKernel Ports, and Physical NICs. You are going to delete the iSCSI database in a minute, and you will need to ‘rebind’ the VMKernel Ports and add the iSCSI targets back in again.
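If you want a quick record of all that before you blow the database away, the following (run on the host) captures the adaptors, port bindings, and targets;

[box]

esxcli iscsi adapter list
esxcli iscsi networkportal list
esxcli iscsi adapter discovery sendtarget list

[/box]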

Manually remove your iSCSI target(s) from ALL the iSCSI NICs/HBAs.

Below, if you re-run the command vmkiscsid --dump-db | grep Delayed, you will see there are still some entries in the database with DelayedAck enabled! So, unlike above (see the example for software iSCSI), we are going to remove the iSCSI database; only here we don’t need to disable the software iSCSI initiator (because we are not using one!) Finally, reboot the host.

[box]

cd /etc/vmware/vmkiscsid
rm -f vmkiscsid.db
reboot

[/box]

When the host is back online ADD in the Network Port Binding for the appropriate VMkernel adaptor.

Like so;

DON’T RESCAN THE CONTROLLER AS PROMPTED TO DO SO!

On the Advanced Settings of EACH hardware iSCSI NIC/HBA > Edit > UNTICK ‘DelayedAck’.

Double check they are both still unticked (I’ve seen them re-tick themselves for no discernible reason!) Then rescan the controller(s).

Target > Add.

Re-add the iSCSI target back in, (that you took note of above).

Select the Target > Advanced > Untick the DelayedAck option (Note: This time it’s not inherited). Repeat for any additional iSCSI targets.

When they are all added, rescan the storage controllers again.

Finally recheck all the database entries are set to DISABLED.

[box]

vmkiscsid --dump-db | grep Delayed

[/box]
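Note: You can also query (and, on some adaptors, set) DelayedAck per adapter with esxcli, which makes a handy cross-check against the database dump. A hedged sketch; vmhba64 is an example, and the parameter may not be settable on every hardware initiator;

[box]

esxcli iscsi adapter param get --adapter=vmhba64 | grep -i delayedack
esxcli iscsi adapter param set --adapter=vmhba64 --key=DelayedAck --value=false

[/box]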

Related Articles, References, Credits, or External Links

Thanks to Russell and Iain for their patience while I worked all that out!

Windows Server: Connecting to iSCSI Storage Using MPIO

KB ID 0001392

Problem

In my scenario my Windows Server is a VMware virtual machine. To enable MPIO (Multipath I/O) I’m going to need two network cards, connected to the two iSCSI networks. 

Above I’ve shown both iSCSI networks in different colours (192.168.51.0/24 and 192.168.50.0/24); in production I would also have these in their own VLANs, (or even separate physical networks).

This article is not about setting up your iSCSI Target/Storage, I’m assuming you have this up and running with the correct IP addresses connected to the correct networks ready to go.

Note: I’m also NOT using iSCSI authentication, and I’m also assuming you have allowed either the two IP addresses of the Windows server, (or more likely its iSCSI iqn address), access to the storage.

Solution

Firstly, MPIO is NOT enabled or installed by default; you need to add it. Open Server Manager > Manage > Add Roles and Features > Follow the wizard all the way to ‘Features’ > Enable Multipath I/O > Complete the wizard.

Back in Server Manager > Tools > MPIO > Discover Multi-Paths > Add support for iSCSI devices > Yes > Let the server reboot.

After the reboot go back into the MPIO properties, and make sure iSCSI is now listed, (MSFT2005iSCSIBusType_0x9). You can close the MPIO properties now.
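If you prefer PowerShell, the same install-and-claim steps can be scripted; this is a hedged sketch using the MPIO module cmdlets (a reboot is still required after claiming iSCSI devices);

[box]

Install-WindowsFeature -Name Multipath-IO

# Claim iSCSI-attached devices for MPIO (the 'Add support for iSCSI devices' step)
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Verify MSFT2005iSCSIBusType_0x9 is now listed
Get-MSDSMSupportedHW

[/box]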

Now back in Server Manager > Tools > iSCSI Initiator.

First task is to add the TWO iSCSI Target IPs (192.168.50.200 and 192.168.51.200) > Discovery > Discover Portal > Put in the first iSCSI Target IP > Advanced > Local Adapter = Microsoft iSCSI Initiator > Initiator IP = the server’s NIC that’s on the same iSCSI network as this target (i.e. 192.168.50.6 or 192.168.51.6) > OK > OK > Apply > OK.

NOW REPEAT THE PROCEDURE FOR THE SECOND iSCSI TARGET

Assuming your iSCSI and networking are set up correctly, you should start to see the storage appearing on the ‘Targets’ tab. Select the first piece of storage you want to attach > Connect > Tick ‘Enable Multi-path’ > Advanced > Local Adapter = Microsoft iSCSI Initiator > Initiator IP = (either 192.168.50.6 or 192.168.51.6) > Target Portal IP = (the iSCSI Target IP that corresponds to the IP you have just set, either 192.168.50.200 or 192.168.51.200) > OK > OK > Apply > OK.


The status should change to connected.

NOW REPEAT THE PROCEDURE A ‘SECOND TIME’ FOR THE SAME PIECE OF STORAGE, BUT CONNECT TO IT FROM THE OTHER iSCSI IP ADDRESS, TO THE OTHER iSCSI TARGET IP. THAT IS, YOU CONNECT TO EACH ONE ‘TWICE’ (ONCE OVER EACH iSCSI NETWORK).

If you now look in the properties of the storage, you will see it has two identifiers and two IPv4 Portal groups.
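The portal and connection steps can also be done with the iSCSI PowerShell cmdlets. A hedged sketch using the example IPs from above (your target’s NodeAddress/IQN will differ, and if you have multiple targets you’ll need to pick the right one);

[box]

# Discover via both portals
New-IscsiTargetPortal -TargetPortalAddress 192.168.50.200 -InitiatorPortalAddress 192.168.50.6
New-IscsiTargetPortal -TargetPortalAddress 192.168.51.200 -InitiatorPortalAddress 192.168.51.6

# Connect to the same target once over EACH iSCSI network
$t = Get-IscsiTarget
Connect-IscsiTarget -NodeAddress $t.NodeAddress -IsMultipathEnabled $true -IsPersistent $true -InitiatorPortalAddress 192.168.50.6 -TargetPortalAddress 192.168.50.200
Connect-IscsiTarget -NodeAddress $t.NodeAddress -IsMultipathEnabled $true -IsPersistent $true -InitiatorPortalAddress 192.168.51.6 -TargetPortalAddress 192.168.51.200

[/box]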

At this point you need to go into ‘Disk Management’ (Server Manager > Tools > Computer Management > Disk Management). You will see the storage presented but ‘Offline’; you will need to bring the drive online, create a partition on it (if it does not already have one), and you can also assign a new drive letter. Note: Look in the properties here, and you can prove MPIO is working and change the MPIO policy (if you require).
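You can do the disk part from PowerShell too; a rough sketch (disk number 1 is an example, check Get-Disk first);

[box]

# Find the new iSCSI disk, and bring it online
Get-Disk | Where-Object { $_.BusType -eq 'iSCSI' }
Set-Disk -Number 1 -IsOffline $false
Set-Disk -Number 1 -IsReadOnly $false

# Prove MPIO is in play; the disk should show multiple paths
mpclaim -s -d

[/box]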

Related Articles, References, Credits, or External Links

NA

Using Openfiler and vSphere ESX / ESXi 5

KB ID 0000380

Problem

Openfiler is a free NAS / SAN prebuilt Linux distribution that can provide iSCSI storage to your VMware environment; it’s ideal for small setups. (This video was made with all the devices running in VMware Workstation 7, on my laptop. That’s two ESXi servers, a vCenter server, and the Openfiler iSCSI target server.)

Solution

Related Articles, References, Credits, or External Links

Openfiler

Thanks to VMware for the free copy of VMware Workstation.

 

Creating a ‘Seeded’ Veeam Replication Job

KB ID 0000912

Problem

If you have a slow connection and you are trying to replicate servers from one site to another, you may struggle to do the initial replication. I’ve had an ongoing problem with a client who was trying to do this: we set it up, and the link was too slow. The client upgraded his internet connections at both sites; still the replication window would have been longer than 24 hours. In the end we chose to ‘seed’ the replication. Using this process we take a backup of the servers at the source location, then take the backup to the target location. Finally we set up the replication task and tell it to use the backup as a ‘seed’. Using this method is preferable because only the changes then get replicated over the slow link.

In the following scenario I’m using Veeam 6.5, but the process is the same for Veeam 7. As a backup target I’m going to host a backup repository on a Buffalo NAS box (via iSCSI) that I can transport to the other site easily. I’ve also got a Veeam server at both locations; if you do not, you may need to set up a temporary server at the source location to do the initial backup.

Because I’ve got a Veeam server at both locations I can utilise them BOTH as backup proxies. If you are only going to have a Veeam box at the target location, then I strongly suggest you set up a backup proxy on another server at the source site.

Solution

Veeam Backup and Recovery Download

Create a Backup of the Source Machine with Veeam

At this point I’ve added the iSCSI box as a backup repository. (If you are unsure how to do this, I do the same thing again to present the iSCSI box at the target site, below.)

1. I’m not going to run through how to set up a simple backup job; Veeam is refreshingly easy to use.

2. Now that I have the backup on my iSCSI device, I can turn it off and move the files to the target location.

Present the Backed Up files to the Veeam Server at the Target Location

3. Here I’m pointing my Veeam Server directly at the iSCSI server.

4. Now I can bring the new ‘drive’ online and make sure it gets a drive letter in Windows.

Veeam: How Do I Add a Backup Repository?

5. Launch Veeam > Backup & Replication > Backup Repositories > Add Backup Repository.

6. Give it a sensible name > Next.

7. Next.

8. This Server > Populate > Select the iSCSI drive letter.

9. Browse to the folder that contains your backup data > Next.

10. I’ve already configured vPower NFS so I’ll just click Next.

11. Tick ‘Import existing backups automatically’, and ‘Import guest file system index’ > Next.

12. Finish.

How Do I Setup a Veeam ‘Seeded’ Replication Job?

13. Launch Veeam > Backup & Replication > Replication Job > Give the job a name > Tick ‘Low connection bandwidth (enable replica seeding)’. At this point I also want to tick the next two options, so that if I need to fail over the virtual machines they will connect to the correct VMware port group on the target host, and the IP addresses of the failed-over machines will be changed to match the subnet of the target network > Next.

14. Add > Browse to the VM(s) you want to replicate and select them > Next.

15. Choose the host that you want to replicate the virtual machine to > Set the resource pool if you use them > Select the datastore where you will be hosting the replica files > Next.

16. Add > Locate the ‘Port Groups’ on the source and the target virtual networks. (Note: Here the port groups have the same name, they are NOT the same port group) > Next.

17. Add > Add in the IP address details from the source network and the network you will want to bring up the replicas on in the event of a failover > OK > Next.

18. Add in the source and destination proxies (make sure you have one at both ends!) > Select a local repository (this is just for the metadata, not the actual replica) > Here I’m going to store seven restore points (handy, because you can restore single files from a replica if you need to). DON’T click Next.

19. Advanced > Traffic Tab > Set ‘Optimize for’ to ‘WAN target’ > OK > Next.

20. Enable seeding and select your new repository > If you have run the job successfully before you may have an existing replica mapping you can use; I do not > Next.

21. Enable application-aware image processing (in case you ever want to restore a single file, mail attachment, or SQL table, for example) > Enter an administrative account and password > Next.

22. Set the schedule for the job > Create.

23. Finish. (If you want to start the job immediately, tick the box; it will run now, and then run again as scheduled.)

24. Now when the job runs it scans the ‘seed’ first, creates the replica, and finally replicates the difference.

25. You will notice that whenever the replication tasks run in future, they only replicate the differences. For example, here on a subsequent run it only took twenty-six and a half minutes to do the job.

 

Related Articles, References, Credits, or External Links

NA