What are IOPS?

KB ID 0001833

My IOPS History

I was on a call this morning where IOPS (Input/Output Operations Per Second) were being discussed. I have a love/hate relationship with IOPS, insofar as they are ONLY any use when you are comparing apples with apples, and more importantly (which is the bit we don’t talk about) when we have defined what an apple is. Because one man’s Golden Delicious is another man’s Bramley cooking apple, (that was deep eh?).

A few years ago, when I was back on the tools, I was installing a storage system for a client (it was a virtual storage array) and we had benchmarked it with some software at 95 thousand IOPS. The vendor that supplied the storage pulled the support for it, so we were left red-faced trying to source an alternative. Everything we installed came out with a figure of less than 95 thousand IOPS – as far as the customer was concerned, we had promised him one thing and delivered another.

So What Are IOPS?

Let’s say you want to buy a car. These days, with environmental concerns and the cost of fuel, one of the things you might want to compare is the ‘Miles per Gallon‘ fuel consumption. Let’s say one of your choices has a figure of 96 MPG (154.5 km per gallon). Well that’s dandy, but I guarantee that figure was tested in an environment that gave the manufacturer the best possible outcome, so unless you are going to drive at 56 miles an hour constantly, with the highest rated fuel, on a rolling road, and never stop or brake, then ACTUAL RESULTS MAY VARY. And who is to say car vendor A used the same tests as car vendor B? There are also THREE DIFFERENT SIZES of gallon (imperial, US liquid, and US dry), and countries that don’t use gallons will convert from litres, which doesn’t come out cleanly to less than a lot of decimal places.

IOPS suffer from similar problems, e.g. Storage Vendor ‘A’ will say, “we deliver 1.2 million IOPS”, and Vendor ‘B’ will say “we deliver 1.8 million IOPS” – so Vendor B is the better option, right? Well NO, and that’s why you need to know how the figures are derived.

The figure you get relies heavily on the following factors.

  • Block size / sector size of the storage.
  • Resiliency/RAID level of the storage.
  • Actual physical storage media (e.g. Spinning disk/nearline/midline/SSD).
  • Actual physical connection fabric (e.g. SAS/Fibre Channel/iSCSI).
  • Size of data written.
  • Size of data read.
  • Sequential or random read/writes, or a blend of the two.
  • Concurrent Workload (Testing an array with no load, is like driving an F1 race car on a closed motorway).
  • Storage QoS (if you’re in a ‘shared’ storage environment, your IOPS may be ‘capped’).
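To see why just the first factor, block size, can swing the headline number, here’s a rough sketch. The 500 MB/s figure is an assumed array limit, purely for illustration:

```shell
# Hypothetical array that can sustain ~500 MB/s of real throughput.
# The 'IOPS' figure it benchmarks at depends entirely on the block size used:
THROUGHPUT_KB=$((500 * 1024))      # 500 MB/s expressed in KB/s
iops_4k=$((THROUGHPUT_KB / 4))     # tested with 4KB blocks
iops_64k=$((THROUGHPUT_KB / 64))   # tested with 64KB blocks
echo "4KB test:  ${iops_4k} IOPS"
echo "64KB test: ${iops_64k} IOPS"
```

Same tin, same workload ceiling: 128,000 IOPS at 4KB, 8,000 IOPS at 64KB. Neither number is a lie, but quoting either without the block size is meaningless.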

What are IOPS: Throughput and Latency

THROUGHPUT is normally used in conjunction with IOPS. Throughput is a figure measured in bps (bits per second) or Bps (bytes per second). So if we know this figure AND we have an IOPS figure (that we know how it was derived), then we can make a comparison? Well no, there’s a third thing we didn’t take into consideration – LATENCY. This is the amount of time it takes to get an operation to and from the storage array.

Why is that important? Let’s say we have an ‘All SSD’ array with blistering throughput and IOPS figures, but your 10+ year old Solaris 7 servers cannot match that through their 5+ year old HBAs, then your ‘experience’ is going to be bad. OK, that’s a severe example, but put it in a real world scenario: I work for a service provider, we provide storage. If we say we will warrant X thousand IOPS, and a customer that just consumes storage from us connects their Solaris 7 servers to that storage and says, “we are only getting half that performance”, whose responsibility is it to investigate why?

This is why, if you look at the large hyperscalers, when they give you performance info they will give you IOPS (without telling you what those IOPS are!) and they will give you throughput (that they will cap, usually at xMbps). Because latency is not really their problem – search their documentation and they deliberately only use the word latency to say things like ‘Ultra low latency SSD’ or that ‘SSD provides lower latency than HDD’.

So Why the Ben Affleck Meme?

Because of the three things you need to take into consideration when looking at storage performance, IOPS on its own is a figure that, without any definition, means nothing. (And remember this is storage performance, not application performance, because a poorly coded DB application from 1987 can be on the best hardware in the world and still be awful – and your DB consultant will blame the storage or the network, because he can earn several hundred pounds a day while you bust a gut proving otherwise.)

I do like an analogy, (as you’ve seen). What are IOPS? IOPS are the digital equivalent of giving 50 teenage boys some ribbon and a sharpie, telling them all to make a tape measure and find out who is the best endowed, then deciding (without seeing the tape measures) based on who came up with the biggest number.

Related Articles, References, Credits, or External Links

NA

Veeam: Backup to Public Cloud?

KB ID 0001691

Problem

I’ve always been a fan of Veeam, I’ve championed it for years; as a consultant and engineer I want solutions that are easy to deploy, administer, and upgrade, and that cause no problems. Like all things that are easy to use and gain a lot of popularity, Veeam is starting to get DESTROYED BY DEVELOPMENT. What do I mean? Well, things that were simple and easy to find now require you to look at knowledge base articles and pull a ‘frowny face’. Also, the quality of support has gone dramatically downhill. We stand at the point where another firm can come in and do what Veeam did, (march in and steal all the backup & replication revenue worldwide, with a product that simply works and is easy to use).

I digress (sorry). So you want to back up to public cloud, yes?

Solution

Veeam Backup and Recovery Download

Veeam Backup For Azure Download

Veeam Backup for AWS Download

Well then, you log into Veeam, look at your backup infrastructure, and simply add an External Repository and back up to that? NO! That would be common sense, (and the way Veeam used to do things). External Repositories are not for that, and Veeam points this out when you try and add one;

So how do you backup to public cloud? (I know other vendors are available, but we are talking primarily about Azure and AWS). Well to do that you need to be more familiar with Scale Out Backup Repositories (SOBR).

With an SOBR you can add ‘cloud storage’, i.e. Azure Cold Blob storage or AWS S3, as ‘Capacity Tier‘ storage. How is the Capacity Tier used? Well, there are two options: ‘Backup to Capacity Tier after x Days’ or ‘Backup to Capacity Tier as soon as backups are created‘. Like so;

  1. Send your backup to a Scale Out Backup Repository.
  2. The backup gets placed into the Performance Tier.
  3. Option 1: Copy to Cloud after x Days, or Option 2: Copy to cloud immediately.

Note: This is configured on the SOBR configuration NOT on individual backup jobs/sets.

Adding Azure Cold Blob Storage

Well, before you can add cloud storage to a SOBR you need to add it to Veeam. How’s that done? Firstly, you need to create an Azure Storage account.

Then generate an ‘Access Key‘.

Then create a ‘Container‘ in your storage account.
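If you prefer the command line, the three Azure steps above can be sketched with the az CLI. The account, resource group, container, and region names below are made-up examples, substitute your own (and the resource group must already exist):

```shell
# Create the storage account (names/region are examples).
az storage account create --name veeamcapacity01 --resource-group veeam-rg \
  --location uksouth --sku Standard_LRS --kind StorageV2 --access-tier Cool

# Retrieve an access key - this is what you paste into Veeam.
az storage account keys list --account-name veeamcapacity01 \
  --resource-group veeam-rg --query "[0].value" --output tsv

# Create the container Veeam will write into.
az storage container create --name veeam-sobr --account-name veeamcapacity01
```

This is a provisioning fragment rather than something you can dry-run; it assumes you are already logged in with az login.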

Then within Veeam > Options > Manage cloud credentials > Add > Add Azure Storage Account > Enter the Storage account and Access Key > OK.

Adding ‘Cloud Storage’ as ‘Capacity Tier’ to a Scale Out Backup Repository

Either create a new Scale Out Backup Repository, (Backup Infrastructure > Scale Out Backup Repository,) or edit an existing one. When you get to Capacity Tier > Tick the ‘Extend..’ option > Add > Microsoft Azure Blob Storage.

Azure Blob Storage > Give the storage a name > Next.

Select the storage account you created above > Select your Gateway Server (usually the Veeam B&R server but it does not have to be) > Next > Browse.

Select or create a new folder > Limit the amount of space to use (if required) > Next > Finish.

What about AWS? Well, Microsoft kindly give me a certain amount of ‘free‘ Azure credits every month, so it’s easy to showcase their product, (I use this for learning and PNL tutorials), so Microsoft pretty much get the benefit. I know AWS have a free tier and a trial tier, but honestly, after spending 2 hours trying to find out what you actually get, and whether I was going to get stung on my credit card bill if I did ‘xyz‘, I lost all interest!

AWS, be like Veeam used to be, make it easy! AWS is like flying with Ryanair,

Oh, so you want a seat? That will be an extra £x a month, and every trip to the toilet will be an extra £x a month. Will you be wanting nuts? Because we charge by the nut, and no one knows how many nuts are in each bag, so it will be different every time. And speaking of time, if you want to look at the clock, that will be £x a month also!

People will email me and complain that Azure is the same, and to a certain extent I will agree, but nothing will change until public cloud providers start charging fixed prices for things, so IT departments can work out what the OpEx is going to be, e.g. like private cloud providers do! Of course, working for a private cloud provider, maybe I’m a little biased?

Related Articles, References, Credits, or External Links

NA

Adding Windows Server NFS Shares to VMware ESX

KB ID 0000319

Problem

You have a Windows 2019/2016, 2012, or 2008 server with plenty of storage space, and you would like to present that to an ESX/ESXi server as a datastore. You can configure a folder (or drive) as an NFS share and present it to VMware vSphere, so that it can be used as a datastore.

Note: For Server 2008 and vSphere 4/5, scroll down.

Create NFS Shares on Windows Server 2019, 2016, and 2012

Essentially you need to add the ‘Server for NFS’ role, (Below “File and Storage Services“).

Create a folder to share, on its properties > NFS Sharing > Manage NFS Sharing.

Tick to share > Permissions.

You can add each host individually here, but I’m just changing the default rule to allow Read/Write to ALL MACHINES > Tick ‘Allow root access’ > OK.
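If you’d rather script the Windows-side steps above, the same can be done from an elevated PowerShell session. The share name and path here are examples, not requirements:

```powershell
# Install the NFS server role (sits under File and Storage Services).
Install-WindowsFeature FS-NFS-Service -IncludeManagementTools

# Create the folder and share it over NFS, read/write with root access,
# equivalent to changing the default rule in the GUI above.
New-Item -Path "E:\VMwareNFS" -ItemType Directory
New-NfsShare -Name "VMwareNFS" -Path "E:\VMwareNFS" `
  -Permission readwrite -AllowRootAccess $true
```

Allowing read/write plus root access to all machines is fine for a lab; in production you’d scope the permissions to the ESXi hosts’ VMkernel IPs instead.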

VMware vSphere 6: Connecting to Windows NFS Shares

Make sure you have a VMkernel port on the same network as your NFS share.

DataStore View > Right click the ‘Cluster‘ > Storage > New Datastore > NFS > Next > NFS 3 > Next.

Give the datastore a name > Select the share name (prefix it with a forward slash, and remember they are case sensitive!) > Enter the IP or FQDN of the NFS server > Next > Next > Finish.
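The same mount can also be done from the ESXi shell with esxcli. The host IP, share, and datastore name below are examples:

```shell
# Mount the Windows NFS export as a datastore (names/IP are examples).
# Note the leading forward slash on the share, and that it is case sensitive.
esxcli storage nfs add --host=192.168.1.50 --share=/VMwareNFS \
  --volume-name=NFS-Datastore01

# Confirm it mounted.
esxcli storage nfs list
```

This needs running against a live ESXi host, so treat it as a sketch of the command shape rather than something to copy blindly.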

Create NFS Shares on Windows Server 2008

Gotchas

1. The system will not work if you do not have a VMkernel port; if you already have iSCSI or vMotion working, then this will already be in place.

If not, you will see an error like this:

Call “HostDatastoreSystem.CreateNasDatastore” for object “ha-datastoresystem” on ESX “{name or IP of ESX server}” failed.

2. Make sure TCP port 2049 is open between the NFS share and the ESX box. On ESX 3.x servers you may need to run “esxcfg-firewall -e nfsClient”.

Other Points

1. You CAN boot a Windows VM from any NFS store (just because Windows cannot boot from NFS does not mean a VM can’t).

2. NFS Datastores are limited to 16TB.

3. vSphere supports up to 64 NFS Datastores (ESX supports up to 32).

4. Thin provisioned disks will “re-expand” when moved/cloned to another NFS Datastore (THOUGH NOT in a vSphere environment).

5. On Server 2008 R2 NFS can only support 16 TCP connections, to raise the limit see here.

Related Articles, References, Credits, or External Links

NA