AnyConnect: Unauthorized Connection Mechanism

KB ID 0001699

Problem

I was assisting a colleague setting up AnyConnect for a client this afternoon, when all of a sudden I was met with this;

VPN

Logon denied, unauthorised connection mechanism, contact your administrator

Solution

This was a confusing one, so I replicated the problem on my own test firewall. All I had done was change the AAA method from LOCAL to LDAP, and it took me a while to figure out what was going on.

This happens because the GROUP POLICY your AnyConnect connection PROFILE (tunnel group) is using does not have SSL enabled as a tunnelling protocol. (This makes no sense, as it was working with LOCAL authentication, but this is how I fixed it).

You will either be using a specific group policy, or the DfltGrpPolicy.
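
If you're not sure which one, the quickest check is at the CLI; look for the 'default-group-policy' line under your AnyConnect tunnel-group, (if there isn't one, you're using the DfltGrpPolicy);

[box]

Petes-ASA# show run tunnel-group
Petes-ASA# show run group-policy

[/box]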

[box]

IF USING THE DEFAULT GROUP POLICY
Petes-ASA(config)# group-policy DfltGrpPolicy attributes
Petes-ASA(config-group-policy)# vpn-tunnel-protocol ssl-client ssl-clientless

IF USING A SPECIFIC GROUP POLICY (Remember to include any protocols that already exist! e.g. l2tp-ipsec)

Petes-ASA(config)# group-policy PNL-GP-ANYCONNECT-ACCESS attributes
Petes-ASA(config-group-policy)# vpn-tunnel-protocol ssl-client ssl-clientless l2tp-ipsec 

[/box]

Or, if you really HAVE TO use the ASDM.

Configuration > Remote Access VPN > Network (Client) Access > Group Policies > Select the Group Policy you are using > Edit.

General > More Options > Tick the SSL Options > OK > Apply.

Don’t forget to save your changes! Then try connecting again.
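
If you made the change at the CLI, you can double check it took, then save, like so, (the group policy name is just the example used above);

[box]

Petes-ASA# show run group-policy PNL-GP-ANYCONNECT-ACCESS
Petes-ASA# write mem

[/box]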

Related Articles, References, Credits, or External Links

NA

F5: Setup Basic Web Load Balancing

KB ID 0001698

Problem

In past articles I’ve got my F5 BIG IP appliance up and running, and I’ve built some web servers to test load balancing. Now to actually connect things together and start testing things. Below is my lab setup, I will be deploying simple web load balancing (Static: Round Robin) between three web servers, each serving a simple HTTP web site.

Test F5 to Web Server Connectivity

For obvious reasons the F5 needs to be able to speak to the web servers, so it needs to be on the same network/VLAN and have connectivity. To test that, we can log onto the F5 console directly, and ‘ping‘ the web servers.
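
For example, from the F5 bash prompt, (10.2.0.11 is one of my lab web servers, and I'm assuming the other two are .12 and .13, substitute your own);

[box]

ping -c 4 10.2.0.11
ping -c 4 10.2.0.12
ping -c 4 10.2.0.13

[/box]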

So connectivity is good; let's make sure we can actually see the web content on those boxes. The best tool for that is curl, which will make a web request, and the web server ‘should‘ return some HTML.

[box]curl http://10.2.0.11[/box]

F5 BIG-IP Load Balancing Terminology

Yeah, I said ‘load balancing‘ and not ADC, sue me! There are a number of building blocks that F5 uses, and you need to understand the terminology to put things together. Firstly, let's look at things BEHIND the F5 appliance;

  • Node: An actual machine/appliance, (be that physical or virtual,) that provides some sort of service or collection of services, e.g. a web server, telnet server, FTP site etc.
  • Pool Member: A combination of a Node AND a Port/Service, e.g. 192.168.1.100:80 (IP address and TCP port 80 (or HTTP)).
  • Pool: A logical collection of Pool Members that provide the same service, e.g. a collection of pool members offering a website on TCP port 80.

F5 BIG-IP Adding Nodes

While connected to the web management portal > Local Traffic > Nodes > Create (Note: You can also press the green ‘add’ button on the Node pop-out on newer versions).

Specify a name > Description (optional) > IP address (or FQDN) > ‘Repeat‘ > Continue to add Nodes as required, then click ‘Finished‘.
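
If you'd rather do that from the CLI, tmsh can create the Nodes too; a quick sketch, (the names and addresses are just my lab examples);

[box]

tmsh create ltm node Web-Server-1 address 10.2.0.11
tmsh create ltm node Web-Server-2 address 10.2.0.12
tmsh create ltm node Web-Server-3 address 10.2.0.13

[/box]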

F5 BIG-IP Adding Pools

Now we have our Nodes, we need to create a Pool. Local Traffic > Pools > Create, (again, on newer versions there's a green add button on the pop-out).

Add a Name > Description (Optional) > Add an applicable Health Monitor (in our case http) > Select the ‘Node List’ radio button > Select your first Node > Set the Port/Service  > Add  > Continue to Add the remaining Nodes.

Note: Here is where you add the IPs to the Port/Service and create the Pool Members.

Sorry! Busy Screenshot

When all the Nodes are added > ‘Finished‘.
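
The rough tmsh equivalent, (assuming the same lab addresses, an http health monitor, and a pool name of PNL-WEB-POOL,) would look something like this;

[box]

tmsh create ltm pool PNL-WEB-POOL monitor http members add { 10.2.0.11:80 10.2.0.12:80 10.2.0.13:80 }

[/box]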

Your web pool ‘should‘ show healthy. Note: That does not mean ALL the Nodes are online!

To make sure ‘all’ the Nodes are healthy > Go to the Members Tab.
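
Or, from the CLI, (using the example pool name above);

[box]

tmsh show ltm pool PNL-WEB-POOL members

[/box]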

F5 BIG-IP Virtual Servers

I'm not a fan of the term ‘Virtual Server‘, I prefer Virtual IP (or VIP,) but we are where we are! Above we've looked at things BEHIND the F5; now we need to present those services IN FRONT of the F5, (Note: I don't say publicly, because we deploy plenty of BIG-IP solutions inside networks). So a Virtual Server is the outside IP address or FQDN that a ‘consumer‘ will connect to;

Local Traffic > Virtual Servers > Create.

Supply a name > Description (optional) > Destination Address (the ‘outside facing‘ IP address) > Set the service/port > Scroll down to the bottom.

Set the ‘Default Pool’ to the pool you created (above) > ‘Finished‘.
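
Again, you can do the same job in tmsh; a rough sketch, (the VIP address 10.1.0.100 and the names are just examples);

[box]

tmsh create ltm virtual PNL-WEB-VIP destination 10.1.0.100:80 ip-protocol tcp pool PNL-WEB-POOL

[/box]

Note: Depending on your topology, (e.g. if the F5 is not the web servers' default gateway,) you may also need to add source address translation (SNAT ‘automap‘) to the virtual server, so the return traffic comes back via the F5.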

For a brief overview, or to check what you have created > Click Local Traffic > Network Map. Note: This will look different on older versions of the F5.

Then test the service from the outside. Here each web server serves a different colour page, so I can test it's working properly.

My Web Page Does Not Change? If you keep seeing the same colour/page, it's probably because your chosen browser is ‘caching‘ web content on your test machine; you may need to disable caching on your web browser for an accurate test.
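
Alternatively, curl doesn't cache anything, so repeating the request from a test box should cycle round the Pool Members, (using the example VIP address from above);

[box]

curl http://10.1.0.100
curl http://10.1.0.100
curl http://10.1.0.100

[/box]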

So that's Static Round Robin (Equal Ratio) Based Load Balancing. In the next article I'll look at how you can manipulate the ratios to better serve your hardware and requirements.

Related Articles, References, Credits, or External Links

NA

EVE-NG Deploying F5 BIG-IP

KB ID 0001696

Problem

I already had some F5 Images in my EVE-NG, but I wanted to run version 16.x. However, I didn’t think that was officially supported, so I thought I would try and get it running anyway!

Solution

There's no need to scour the internet for ‘dodgy‘ versions; F5 will quite happily give you the latest version. Just sign up for a free account, and you can download the image. While you are there, you can apply for a trial licence, (or two if you want to test HA).

Important: I use FileZilla to upload images into EVE-NG; make sure your transfers are set to ‘binary‘. I've seen this break things in the past, so mine is already set up to use that by default;

Upload the image into EVE-NG, (I’ve shown the location, on the image below).

Now, SSH into EVE-NG, extract/unzip the image, then copy/rename it to virtioa.qcow2, remove the ZIP file, and finally fix the permissions; (Change the values in bold (below) to match your version);

[box]

cd /opt/unetlab/addons/qemu/bigip-16.0/
unzip BIGIP-16.0.0-0.0.12.ALL.qcow2.zip
mv BIGIP-16.0.0-0.0.12.ALL.qcow2 virtioa.qcow2
rm BIGIP-16.0.0-0.0.12.ALL.qcow2.zip
/opt/unetlab/wrappers/unl_wrapper -a fixpermissions

[/box]

You can now add a BIG-IP LTM VE into your lab.

Select Version 16 > Scroll down.

Change the Console to VNC > Save.

Power it on.

Log in, the DEFAULT USERNAMES AND PASSWORDS are;

Username: root Password: default

Username: admin Password: admin

You will be asked to change the passwords. (Note: The admin password may expire straight away so you will need to change it again when you log into the web console!)

To get access, you will need to configure the Management Network on the F5; to do that, run the ‘config‘ command.

I don't wish to insult your intelligence by walking through these steps: set an IP address and subnet mask on the management port.

In ‘most‘ cases you won't want a default route on the management network, (normally that's set on the ‘External‘ network).
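
If you'd rather not use the ‘config‘ wizard, you can set the same thing from the F5 CLI with tmsh; a rough sketch, (the addresses are just examples, and the management route is optional, as mentioned above);

[box]

tmsh modify sys global-settings mgmt-dhcp disabled
tmsh create sys management-ip 192.168.1.245/24
tmsh create sys management-route default gateway 192.168.1.254

[/box]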

Now browse to the appliance from a host on the management network. You will need to log on as the ‘admin‘ user, and (as I mentioned above) I needed to reset the password again!

Now you can configure the appliance, once your trial licences, (unless you bought some lab licences,) come through.

Related Articles, References, Credits, or External Links

NA

TinyCore Linux: Build a ‘Persistent’ Web Server

KB ID 0001697

Problem

Recently I was building a lab for testing load balancing, and needed some web servers. I could have built three Windows servers, but I wanted to run them in EVE-NG, so they had to be as light as I could make them. I chose TinyCore Linux, (I know there are smaller options, but it's light enough for me to run and work with).

The problem occurs when you reboot the TinyCore host: it (by default) reverts back to its vanilla state, (that's not strictly true, a couple of folders are persistent).

So I had to build a server that would let me SFTP some web content into it and allow me to reboot it without losing the web content, settings, and IP address.

Step 1: Configure TinyCore IP & Web Server

This is a two-step procedure; firstly I'm going to give it a static IP.

[box]

sudo ifconfig eth0 192.168.100.110 netmask 255.255.255.0
sudo route add default gw 192.168.100.1

[/box]

I don't need DNS; if you do, then simply edit the resolv.conf file;

[box]

sudo vi /etc/resolv.conf
Add a value e.g.
nameserver 8.8.8.8

[/box]

If you are scared of the VI editor, see Using the VI Editor (For Windows Types).

To connect via SSH/SFTP you will need openssh installed, and to run the website we will use BusyBox httpd; to install those, do the following;

[box]

tce-load -wi busybox-httpd.tcz
tce-load -wi openssh

[/box]

You will now need to set a password for the root account, (so you can log on and transfer web files in!)

[box]

su
passwd
Type in, and confirm a new password!

[/box]

Start the OpenSSH and TFTP services;

[box]

cd /usr/local/etc/init.d/
./openssh start
cd /etc/init.d/services/
./tftpd start

[/box]

Now create a basic web page, (index.html,) which you can update later. Set up the website, then copy that file to a location that will be persistent, (you will see why later).

[box]

cd /usr/local/httpd/bin
sudo ./busybox httpd -p 80 -h /usr/local/httpd/bin/
sudo vi index.html {ENTER SOME TEXT TO TEST, AND SAVE}
sudo mkdir /mnt/sda1/wwwsite/
sudo cp /usr/local/httpd/bin/index.html /mnt/sda1/wwwsite/index.html

[/box]

At this point, (if you want,) you can use your favourite SFTP client, (I recommend FileZilla or WinSCP,) and copy some live web content into /mnt/sda1/wwwsite/. But ensure the home/landing page is still called index.html!
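
If you just want a quick test page for now, (handy if, like me, you're going to load balance these and want each server to look different,) something like this will do, the colour and text are obviously just examples;

[box]

sudo sh -c 'echo "<html><body style=\"background-color:red;\"><h1>Web Server 1</h1></body></html>" > /mnt/sda1/wwwsite/index.html'

[/box]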

Step 2: Make TinyCore Settings ‘Persistent’

There may be better ways to do this; this just worked for me, and made sense! There's a shell script (bootlocal.sh) that is executed as the TinyCore machine boots, so if you edit that file and put in the commands to configure the IP, copy the website files from the persistent mount folder, start the web server, then start SSH and TFTP, you end up with a server doing what you want every time it boots.

[box]

sudo vi /opt/bootlocal.sh

ADD THE FOLLOWING TO THE BOTTOM OF THE FILE;

sudo ifconfig eth0 192.168.100.110 netmask 255.255.255.0 
sudo route add default gw 192.168.100.1
cp /mnt/sda1/wwwsite/index.html /usr/local/httpd/bin/index.html
cd /usr/local/httpd/bin/
sudo ./busybox httpd -p 80 -h /usr/local/httpd/bin/
cd /usr/local/etc/init.d/
./openssh start
cd /etc/init.d/services/
./tftpd start

[/box]

Save and exit the file, then finally BACKUP THE CHANGES with the following command;

[box]

filetool.sh -b

[/box]
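
To prove it's all worked, reboot the TinyCore host;

[box]

sudo reboot

[/box]

Then, when it's back up, check the page is still being served, e.g. from another machine on the same network, (using the IP address set above);

[box]

curl http://192.168.100.110

[/box]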

Related Articles, References, Credits, or External Links

NA

EVE-NG: Committing / Saving Qemu Virtual Machine Settings

KB ID 0001695

Problem

I've been working on a load balancing lab in EVE-NG this last week or so. I created some web servers, (in TinyCore Linux,) to act as the back-end servers in that lab. (Essentially they each serve a different colour web page, so I can test the load balancing is working OK).

Now I wanted to save the changes I made so that I could redeploy the configured servers to multiple labs. But when you deploy a qemu VM as a node in a lab, EVE-NG copies the VM to the lab, and the changes you make, only apply to the node, in the lab, in the pod, you are working on!

So I wanted to update the ‘Master‘ image in EVE-NG, with the one I configured. Here is how to do that;

Solution

Firstly you need to get your POD NUMBER; you can get that from the user management screen. Below you can see my user, (which you can see is already logged on,) is using pod number 1.

Now you need to get the LAB ID NUMBER. Open the lab > Shut down the machine that you want to save the changes from > Lab Details > Copy the lab ID number.

Lastly you need the NODE ID NUMBER. Either select Nodes and take note of the number, or right click the node and the node ID is shown (in brackets).

Armed with those three pieces of information, SSH into the EVE-NG host, and execute the following commands;

[box]

cd /opt/unetlab/tmp/POD-NUMBER/LAB-ID/NODE-ID/

for example;

cd /opt/unetlab/tmp/1/2277307f-b0bc-45a4-831f-a89a716b5841/3/

[/box]

Now depending on the VM/Appliance in question, it may be called hda.qcow2, or virtioa.qcow2 (a quick ls command will tell you!) Take the name and commit the changes with the following command;

[box]

/opt/qemu/bin/qemu-img commit hda.qcow2

[/box]
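
If your appliance uses the virtio disk instead, it's exactly the same command, just with the other file name;

[box]

/opt/qemu/bin/qemu-img commit virtioa.qcow2

[/box]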

Job done!

Yes, but you wanted three different servers? Correct, I then copied the server (twice), edited the IP address and the web page served on the two copies, and committed the changes back to the original VMs!

Related Articles, References, Credits, or External Links

NA

NGINX Error: ’98: Address already in use’

KB ID 0001694

Problem

After an update yesterday, (WordPress, unrelated,) this website fell over! I rebooted the host, and the site was still down. I reluctantly restored to the previous evening's backup, and powered on the server. Alarmingly, the site was still down!

I logged a call to my VPS provider, and attempted to troubleshoot the problem while I was waiting.

Very soon it became apparent that the server itself was OK, but my web hosting platform (NGINX) was not running, and when I attempted to get it running, this happened;

[box]

Aug 12 13:42:28 localhost nginx[2045]: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
Aug 12 13:42:28 localhost nginx[2045]: nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)
Aug 12 13:42:29 localhost nginx[2045]: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
Aug 12 13:42:29 localhost nginx[2045]: nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)
Aug 12 13:42:29 localhost nginx[2045]: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
Aug 12 13:42:29 localhost nginx[2045]: nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)
Aug 12 13:42:30 localhost nginx[2045]: nginx: [emerg] still could not bind()
Aug 12 13:42:30 localhost systemd[1]: nginx.service: Control process exited, code=exited status=1
Aug 12 13:42:30 localhost systemd[1]: nginx.service: Failed with result 'exit-code'.
Aug 12 13:42:30 localhost systemd[1]: Failed to start A high performance web server and a reverse proxy server.

[/box]

Solution

A Google search kept pointing me towards my NGINX config files being improperly formatted for port 80, but a) NGINX has been running fine since the server was built, and b) I update things regularly! Nevertheless, I wasted an hour and a half going down that road. THIS WAS A BLIND ALLEY, MY NGINX CONFIG FILES WERE FINE!

Other posts were, (more correctly,) telling me something else was using that port (TCP 80, or HTTP). So, to find out what;

[box]

sudo netstat -plant | grep 80

[/box]
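
Note: If your distro doesn't have netstat installed, (it's deprecated on some,) ss will do the same job;

[box]

sudo ss -ltnp | grep ':80'

[/box]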

BOOM! There's my problem: Apache. What's that doing running? I'm pretty sure I (stupidly) managed to install this a while ago, when updating my PHP to the latest version. I'm guessing that this had been a ticking time bomb, and my WordPress update and reboot had caused it to explode! This would also explain why a restore didn't fix the problem!

Let's stop and disable Apache;

[box]

sudo systemctl disable apache2 && sudo systemctl stop apache2

[/box]

And then (fingers crossed) start NGINX;

[box]

sudo service nginx start
sudo service nginx status

[/box]

Site back up and running again (lesson learnt!)
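
As Apache had been grabbing the port at boot, it's also worth making sure NGINX itself is enabled to start automatically, (assuming a systemd based distro);

[box]

sudo systemctl enable nginx
sudo systemctl status nginx

[/box]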

Related Articles, References, Credits, or External Links

NA

Azure: Point to Site VPN From mac OS?

KB ID 0001693

Problem

We mac users always get overlooked. If I had a pound for every time I've heard ‘Yeah, we don't support macs‘, I would be a rich man. But thankfully this usually makes us work things out for ourselves!

So recently I did an article, Azure: Point To Site VPN (Remote Access User VPN), but what if you want to use the same solution for a remote mac user?

Solution

Firstly you will want to download the VPN package (and have a valid client/user certificate, [see the link above]).

Obviously the installer is for Windows, but the ZIP file you download contains a copy of the XML file with the settings in it, and a copy of the Root CA certificate you used.

So your first job is to ‘import‘ the client certificate. It will be in PFX format, (if you followed my instructions,) so double click on it, and when prompted to install it you will need to supply the password you specified when creating the PFX file, (not the mac password).

The engineer in me isn’t quite sure why the client needs the Root CA certificate on it, (because that’s not how certificates work!) But Microsoft insist it’s necessary, so also double click and install the Root CA Certificate, (it’s inside the VPN Package).

You don’t need to install VPN software onto the mac, (it has its own built in). Click the Apple Logo > System Preferences > Network > Add > Interface = VPN > VPN Type = IKEv2 > Service Name = Azure-Client-VPN > Create.

Now open the XML file from within your VPN client software ZIP file, and locate the FQDN of the ‘Gateway‘ address in Azure > Copy it to the clipboard.

Paste the server address into BOTH Server Address AND Remote ID > (Leave Local ID blank for now) > Authentication Settings

WARNING: I'm using mac OS Catalina, so I chose ‘None‘ (NOT CERTIFICATE). But for mac OS Mojave (and older) CHOOSE CERTIFICATE. It's a bug that causes an error (see below) if you don't.

Select > Choose the CLIENT certificate you imported earlier, (take note of the name in brackets, this is the common name on the certificate, you will need it in a minute!) > Continue > OK.

Put the Common Name from the certificate into the Local ID section > Apply > Connect.

All being well it should connect, (though it may prompt you to enter your user password). BY DEFAULT the option ‘Show VPN Status in Menu Bar‘ should be ticked; if it isn't, then tick it.

With that option ticked, you can connect and disconnect the VPN quickly without needing to go back into System Preferences like so;
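
Or, if you prefer the terminal, macOS can drive the same connection with scutil, (assuming you called the service Azure-Client-VPN, as above);

[box]

scutil --nc list
scutil --nc start "Azure-Client-VPN"
scutil --nc stop "Azure-Client-VPN"

[/box]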

Error: VPN Connection, ‘An unexpected error occurred’

Remember above when I said choose ‘None‘ for Catalina, NOT certificate? Well this is what happens if you choose certificate!

Related Articles, References, Credits, or External Links

NA

Azure VPN: Point To Site VPN (Remote Access)

KB ID 0001692

Problem

Given my background, I'm usually more comfortable connecting to Azure with a Route Based VPN from a hardware device, like a Cisco ASA. I got an email this afternoon: a client had a server in a private cloud and a server in Azure, and they needed to transfer files from the Azure server to the server in the private cloud. On further investigation, this client had a Cisco vASA, so a VPN was the best option for them, (probably).

But what if they didn’t? Or what if they were ‘working from home’ and needed to access their Azure servers that were not otherwise publicly accessible?

Well, the Microsoft solution for that is called an ‘Azure Point to Site VPN‘, even though in the current Azure UI they've called it ‘User VPN Configuration‘, because ‘Hey! Screw consistency and documentation that goes out of date every time a developer has a bright idea, and updates the UI‘. (Note: I have a thing about things being changed in GUIs!)

So regardless whether you are on or off the corporate LAN, you can connect to your Azure Virtual Networks.

Azure VPN (Remote Access)

This is not a full Azure tutorial; I'm assuming, as you want to connect to existing Azure resources, that you will already have most of this set up. But just to quickly run through: you will need a Resource Group, and in that Resource Group you will need a Virtual Network. (Note: I like to delete the ‘default‘ subnet and create one with a sensible name).

So far so good. Within your virtual network you will need to create, (if you don't already have one,) a ‘Gateway Subnet‘. To annoy the other network engineers I've made it a /24, but to be honest a /29 is usually good enough.

Now to terminate a VPN, you need a ‘Virtual Network Gateway‘.

Make sure it's set for VPN (Route Based) > Connected to your Virtual Network > Either create (or assign) a public IP to it. I told you I'd be quick; however, the Gateway will take a few minutes to deploy, (time for a coffee).

Azure VPN Certificate Requirement

For the purpose of this tutorial I'll just create some certificates with PowerShell, (a root CA cert, and a client cert signed by that root certificate). This won't scale very well in a production environment; I'd suggest setting up a decent PKI infrastructure, then using auto-enrolment for your users to get client certificates. However, for our run through, execute the following TWO commands;

[box]

$cert = New-SelfSignedCertificate -Type Custom -KeySpec Signature -Subject "CN=Azure-VPN-Root-Cert" -KeyExportPolicy Exportable -HashAlgorithm sha256 -KeyLength 2048 -CertStoreLocation "Cert:\CurrentUser\My" -KeyUsageProperty Sign -KeyUsage CertSign

New-SelfSignedCertificate -Type Custom -DnsName Azure-VPN-Client-Cert -KeySpec Signature -Subject "CN=Azure-VPN-Client-Cert" -KeyExportPolicy Exportable -HashAlgorithm sha256 -KeyLength 2048 -CertStoreLocation "Cert:\CurrentUser\My" -Signer $cert -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2")

[/box]

Now launch ‘certmgr‘ and you will see the two certificates. Firstly, export the client certificate.

Yes you want to export the private key > You want to Save it as a .PFX file > Create a password for the certificate (MAKE NOTE OF IT!) > Save it somewhere you can get to, (you will need it in a minute).

Secondly, export the Root CA certificate.

You DON'T export the private key > Save as Base-64 encoded > Again, save it somewhere sensible; you will also need it in a minute.
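
If you'd rather stay in PowerShell than use the certmgr GUI, something like the following should do both exports; this is just a sketch, assuming the certificate names from the two commands above, and C:\Temp as the output folder;

[box]

$PfxPwd = ConvertTo-SecureString -String "YourPfxPassword" -Force -AsPlainText
Get-ChildItem Cert:\CurrentUser\My | Where-Object {$_.Subject -eq "CN=Azure-VPN-Client-Cert"} | Export-PfxCertificate -FilePath C:\Temp\Azure-VPN-Client-Cert.pfx -Password $PfxPwd

Get-ChildItem Cert:\CurrentUser\My | Where-Object {$_.Subject -eq "CN=Azure-VPN-Root-Cert"} | Export-Certificate -FilePath C:\Temp\Azure-VPN-Root-Cert.cer
certutil -encode C:\Temp\Azure-VPN-Root-Cert.cer C:\Temp\Azure-VPN-Root-Cert-Base64.cer

[/box]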

Open the ROOT CA CERT with Notepad, and copy all the text BETWEEN -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----. Note: This is unlike most scenarios when working with PEM files, where you select everything, (it tripped me up!)

Back in Azure > Select your Virtual Network Gateway > Select ‘User VPN Connection’ (seriously, thanks Microsoft be consistent eh!) > ‘Configure now‘.

Pick an address pool for your remote clients to use, (make sure it does not overlap with any of your assets, and don't use 192.168.1.0/24 or 192.168.0.0/24. Note: These will work, but most home networks use these ranges, and let's not build in potential routing problems before we start!)

Choose IKEv2 and SSTP > Authentication Type = Azure Certificate > Enter your Root CA details, and paste in the PEM text you copied above > Save > Time for another coffee!

When it has stopped deploying, you can download the VPN client software.

Azure Point to Site (User VPN) Client Configuration

So for your client(s) you will need the Client Certificate, (the one in PFX format,*) and the VPN Client software >  Double click the PFX file > Accept ‘Current User‘.

*Note: Unless you deployed user certificates already, and your corporate Root Cert was entered into Azure above.

Type in the certificate password you created above > Accept all the defaults.

Yes.

Now install the Client VPN software, you may get some security warnings, accept them and install.

Now you will have a configured VPN connection. I’m a keyboard warrior so I usually run ncpa.cpl to get to my network settings, (because it works on all versions of Windows back to NT4, and ‘developers’ haven’t changed the way it launches 1006 times!)

Launch the Connection > Connect > Tick the ‘Do not show…‘ option > Continue > If it works, everything will just disappear and you will be connected.

Related Articles, References, Credits, or External Links

NA

Veeam: Backup to Public Cloud?

KB ID 0001691

Problem

I’ve always been a fan of Veeam, I’ve championed it for years, as a consultant and engineer I want solutions that are easy to deploy, administer, and upgrade, that cause no problems. Like all things that are easy to use, and gain a lot of popularity, Veeam is starting to get DESTROYED BY DEVELOPMENT. What do I mean? Well, things that were simple and easy to find now require you to look at knowledge base articles and pull a ‘frowny face’. Also the quality of support has gone dramatically downhill. We stand at the point where another firm can come in and do what Veeam did, (march in and steal all the backup & replication revenue worldwide, with a product that simply works and is easy to use).

I digress (sorry). So you want to backup to public cloud yes?

Solution

Veeam Backup and Recovery Download

Veeam Backup For Azure Download

Veeam Backup for AWS Download

Well then, you log into Veeam, look at your backup infrastructure, and simply add an External Repository and backup to that? NO! That would be common sense, (and the way Veeam used to do things). External Repositories are not for that; Veeam points this out when you try and add one;

So how do you backup to public cloud? (I know other vendors are available, but we are talking primarily about Azure and AWS). Well to do that you need to be more familiar with Scale Out Backup Repositories (SOBR).

With an SOBR you can add ‘cloud storage‘, i.e. Azure Cold Blob storage or AWS S3, as ‘Capacity Tier‘ storage. How is the Capacity Tier used? Well, there are two options, ‘Backup to Capacity after x Days‘ or ‘Backup to Capacity Tier as soon as backups are created‘, like so;

  1. Send your backup to a Scale Out Backup Repository.
  2. The backup gets placed into the Performance Tier.
  3. Option 1: Copy to Cloud after x Days, or Option 2: Copy to cloud immediately.

Note: This is configured on the SOBR configuration NOT on individual backup jobs/sets.

Adding Azure Cold Blob Storage

Well, before you can add cloud storage to a SOBR, you need to add it to Veeam. How's that done? Firstly, you need to create an Azure Storage account.

Then generate an ‘Access Key‘.

Then create a ‘Container‘ in your storage account.
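
If you prefer to script that side of things, the Azure CLI will do all three steps; a rough sketch, (the account, resource group, region, and container names below are just examples);

[box]

az storage account create --name pnlveeamoffload --resource-group PNL-RG --location uksouth --sku Standard_LRS --access-tier Cool
az storage account keys list --account-name pnlveeamoffload --resource-group PNL-RG
az storage container create --name veeam-capacity-tier --account-name pnlveeamoffload --account-key "<key-from-the-previous-command>"

[/box]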

Then within Veeam > Options > Manage cloud credentials > Add > Add Azure Storage Account > Enter the Storage account and Access Key > OK.

Adding ‘Cloud Storage’ as ‘Capacity Tier’ to a Scale Out Backup Repository

Either create a new Scale Out Backup Repository, (Backup Infrastructure > Scale Out Backup Repository,) or edit an existing one. When you get to Capacity Tier > Tick the ‘Extend..’ option > Add > Microsoft Azure Blob Storage.

Azure Blob Storage > Give the storage a name > Next.

Select the storage account you created above > Select your Gateway Server (usually the Veeam B&R server but it does not have to be) > Next > Browse.

Select or create a new folder > Limit the amount of space to use (if required) > Next > Finish.

What about AWS? Well, Microsoft kindly give me a certain amount of ‘free‘ Azure credits every month, so it's easy to showcase their product, (I use this for learning and PNL tutorials,) so Microsoft pretty much get the benefit. I know AWS have a free tier and a trial tier, but honestly, after spending 2 hours trying to find out what you actually get, and whether I was going to get stung on my credit card bill if I did ‘xyz‘, I lost all interest!

AWS, be like Veeam used to be, make it easy! AWS is like flying with Ryanair,

Oh, so you want a seat? That will be an extra £x a month, and every trip to the toilet will be an extra £x a month. Will you be wanting nuts? Because we charge by the nut, and no one knows how many nuts are in each bag, so it will be different every time; and speaking of time, if you want to look at the clock, that will be £x a month also!

People will email me and complain Azure is the same, and to a certain extent I will agree, but nothing will change until public cloud providers start charging fixed prices for things, so IT departments can work out what the Opex is going to be, like private cloud providers do! Of course, working for a private cloud provider, maybe I'm a little biased?

Related Articles, References, Credits, or External Links

NA

PowerCLI: Get All Snapshot Information

KB ID 0001690

Problem

This was asked on EE today, and it was an interesting one, so I wrote it up: how to locate all the Snapshots in your VMware virtual infrastructure, and see how much space they are taking up.

Solution

Use the following PowerCLI;

[box]

Get-Snapshot * | Select-Object -Property VM, Name, SizeGB, Children | Sort-Object -Property sizeGB -Descending | ft -AutoSize

[/box]
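
Note: This assumes you've already got a PowerCLI session connected to your vCenter; if not, connect first, (the server name is obviously just an example);

[box]

Connect-VIServer -Server vcenter.yourdomain.com

[/box]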

Related Articles, References, Credits, or External Links

NA