FortiGate Load Balancing

KB ID 0001762

Problem

I’ve been working through my NSE4, and one of today’s topics was NAT. As an offhand comment, the ‘narrator’ (I say narrator because it’s a monotonous robot AI voice) mentioned FortiGate load balancing.

In the past (with my Cisco hat on) when I’ve been asked about load balancing, I’ve said ‘if you want to load balance, buy a load balancer’. But the FortiGate does try to be ‘all things to all men’, so I wondered just how good a load balancer it can be.

Turns out, quite a decent one. If you just want simple HTTP round robin, it does that; if you want weighted traffic routing, host health monitoring, HTTP cookie persistence, or even SSL offload, it does those too. It’s as good as anything I’ve ever worked on before. Here’s my FortiGate ‘test bench’; you will see I’ve added three web servers (on the right) called Red, Green, and Blue (the significance of which will become apparent). Note: Yes, there’s another web server at the bottom (I’m too lazy to remove it from the lab!)

I’m going to set up simple round robin load balancing between these three web servers, and I’m going to get the FortiGate to monitor their health by simply making sure they respond to ping packets. (Note: it can monitor HTTP availability instead if you want something a little more thorough.)

Solution

This tripped me up for a while! Load balancing is a feature, so you need to turn it on first: System > Feature Visibility > Load Balancing > Enable.

FortiGate Load Balancing: Create a Health Check

Cisco Types: Think of this as a tracked SLA

Policy & Objects > Health Check > Create New > Give it a name > Type = Ping > Interval = 10 > Timeout = 2 > Retry = 3 > OK
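
If it helps to picture what that check is doing, here’s a minimal Python sketch of the same logic (ping each server every 10 seconds, 2 second timeout, mark it down after 3 consecutive failures). The server IPs are placeholders and the ping flags assume a Linux host; this is just an illustration of the behaviour, not how FortiOS implements it.

```python
import subprocess
import time

# Placeholder back-end IPs - swap in your own web servers.
SERVERS = ["192.168.1.11", "192.168.1.12", "192.168.1.13"]
INTERVAL, TIMEOUT, RETRY = 10, 2, 3   # the same values as the health check above


def ping(ip, timeout):
    """Return True if 'ip' answers a single ICMP echo within 'timeout' seconds (Linux ping syntax)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout), ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


failures = {ip: 0 for ip in SERVERS}
while True:
    for ip in SERVERS:
        if ping(ip, TIMEOUT):
            failures[ip] = 0
            print(f"{ip} is UP")
        else:
            failures[ip] += 1
            if failures[ip] >= RETRY:
                print(f"{ip} is DOWN ({failures[ip]} consecutive failures)")
    time.sleep(INTERVAL)
```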

Now create a Virtual Server (not a VIP!) Policy & Objects > Virtual Servers > Create New > Name = Give it a sensible name > Type = HTTP > Interface = Your outside/WAN interface > Virtual Server IP (the external IP!) > Virtual Server Port = 80 > Load Balancing method = Round Robin > Persistence = HTTP cookie > Health Check = Select the one you created above.

Scroll down > Real Servers > Create New.

Add in the first (internal) server IP > Port = 80 > Max Connections = 0 (that’s unlimited) > OK.

Repeat the process to add the remaining servers > OK.

FortiGate Load Balancing: Enable Firewall Policy

Now you need to ‘allow’ traffic in (it is a firewall after all!): Policy & Objects > Firewall Policy (or IPv4 Policy on older firewalls) > Create New > Name = Give it a sensible name > Incoming Interface = Outside > Outgoing Interface = Inside > STOP: change the Inspection Mode to Proxy-based > Destination = Your Virtual Server (it’s not visible unless you have enabled proxy-based inspection!) > Schedule = Always > Service = ALL > Action = ACCEPT > NAT = Enabled > You may also enable AV inspection > OK.

FortiGate Load Balancing: Testing and Tweaking

So from ‘Outside’ let’s hit our load balanced page.

That’s great, but if you hit refresh a few times nothing changes. (In production nothing would change anyway, but to prove my back-end servers are getting used and load balanced, each of mine serves a different coloured page, hence the Red, Green, and Blue server names.) The reason I’m only seeing the blue one is because we enabled ‘HTTP cookie’ persistence. Let’s just nip back onto the firewall and disable that (set it to None > OK).

Now when I refresh my browser I can see it cycling through the back-end servers, as the quick test below also shows.
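
If you’d rather prove it from a script than by mashing F5, here’s a quick Python sketch (using the third-party requests library) that fetches the page twenty times with a persistent cookie jar and twenty times without one. The VIP below is a placeholder for your virtual server address; with ‘HTTP cookie’ persistence enabled the first run should stick to one back-end, and without cookies (or with persistence set to None) you should see all three.

```python
from collections import Counter

import requests  # third-party: pip install requests

VIP = "http://203.0.113.10/"   # placeholder - your virtual server address


def tally(fetch):
    """Fetch the page 20 times and count each distinct response (the first 40 bytes are enough to tell them apart)."""
    return Counter(fetch().text[:40] for _ in range(20))


# A persistent session keeps the persistence cookie, so every request lands on the same server.
session = requests.Session()
print("With cookie persistence:   ", tally(lambda: session.get(VIP, timeout=5)))

# A fresh request each time carries no cookie, so round robin cycles through the back-ends.
print("Without cookie persistence:", tally(lambda: requests.get(VIP, timeout=5)))
```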

FortiGate SSL Offload

Processing SSL requires some CPU power. Some websites (like this one) serve their web pages protected by HTTPS, and the certificate that enables that lives on the web server. For a site like mine, getting about 12k hits a day, that’s fine, but if you are getting hundreds of thousands of hits a minute it’s a MASSIVE drain on CPU resources. That’s what SSL offload is all about: getting another device (in this case the FortiGate) to do all the heavy lifting for you, so the back-end servers can get on with the job of serving web pages.

Upload the Certificate to the FortiGate

For HTTPS you will need a web certificate that will be trusted by your visitors. I’m lazy and tight, so I’ll just create one in Microsoft Certificate Services, but in production you will need a publicly signed certificate. System > Certificates (if you can’t see Certificates, enable it under System > Feature Visibility) > Import > Local Certificate.

Mine’s in PFX format, so I need to select PKCS#12 > Upload the certificate and supply the password > OK.

FortiGate: Enable SSL Offload

On your Virtual Server, change the Type to HTTPS > Virtual Server Port to 443 > Certificate to the one you just uploaded > OK.

We are now serving pages securely even though the web servers are not configured for https.
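
A quick way to confirm the FortiGate really is terminating the TLS session (and presenting the certificate you uploaded) is to pull the certificate details straight from the VIP. A minimal Python sketch, with a placeholder VIP address, and with verification disabled because my lab certificate comes from a private CA:

```python
import socket
import ssl

VIP = "203.0.113.10"   # placeholder - your virtual server IP

# Grab the PEM certificate the FortiGate presents (no validation - it's from a private/lab CA).
print(ssl.get_server_certificate((VIP, 443)))

# Confirm TLS terminates on the FortiGate and see what was negotiated.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((VIP, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=VIP) as tls:
        print("TLS version:", tls.version())
        print("Cipher:     ", tls.cipher()[0])
```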

Related Articles, References, Credits, or External Links

NA

F5: Static Load Balancing (Ratios)

KB ID 0001700

Problem

In the previous post, we deployed a web load balanced solution with three web servers. Out of the box the BIG-IP solution will use Round Robin load balancing and it will treat all Nodes or Pool Members the same (it assigns a RATIO of 1).

Everything gets weighted the same, and the F5 will send requests to the Nodes or Pool members one at a time.

But what if one of those web servers was a beast of a machine, with much better CPU/RAM than all the others? How do you ensure it gets sent the ‘lion’s share’ of the traffic?

Solution

Well, you can simply alter the Ratio for that server; you can do that directly on the Node, or within the Pool on a Pool Member. (That’s why you can see six ratios in the examples I’ve posted.)

What if I change the Ratios on Nodes AND Pool Members? You can do that, but the load balancing method uses one or the other, so they won’t conflict.

So let’s say 10.2.0.11 is a brand new server and has ten times the processing power of the other two nodes, like so:

Local Traffic > Nodes > Select the node in question > Change the Ratio accordingly > Update.

Nothing will happen until you change the load balancing method of the Pool. On the properties of the Pool, change the Load Balancing Method to Ratio (node) > Update.

If you reset the counters and wait a while, you can see now that the server is getting (more or less*) 10 times the amount of traffic.

*Note: The maths will never be perfect, and my web pages are all ‘very slightly’ different, which is amplified over time.
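
If you want to sanity check the expected split, a ratio of 10:1:1 over 12,000 requests works out at roughly 10,000 / 1,000 / 1,000. Here’s a tiny Python sketch of a ratio-style round robin (the node IPs and ratios mirror the example above; this only illustrates the proportions, it is not the BIG-IP’s actual scheduling algorithm):

```python
from collections import Counter
from itertools import cycle

# Node : ratio, mirroring the example above (10.2.0.11 carries ten times the weight).
ratios = {"10.2.0.11": 10, "10.2.0.12": 1, "10.2.0.13": 1}

# One 'round' contains each node repeated by its ratio; cycle through it for every request.
schedule = cycle([node for node, ratio in ratios.items() for _ in range(ratio)])

hits = Counter(next(schedule) for _ in range(12_000))
for node, count in hits.items():
    print(f"{node}: {count} requests (~{count / 12_000:.0%})")
```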

Changing F5 Pool Member Ratios

The process is similar. (If you are following along, you might want to change your Node ratio back to 1; not that it will affect anything, it’s just that if you are like me you will forget!) So now let’s say we’ve got a new server and it’s 10.2.0.13, and we want to change the ratio on the Pool Member, like so:

Open the Pool > Select the Node from here.

Change the Ratio here > Update.

Now change the Load Balancing Method to Ratio (member) > Update. Note: Here, the ratios are shown on the Pool page.

Reset your counters and wait a while; you will see the other server is now getting most of the traffic.

In large production environments, you will probably want to use Dynamic Load Balancing methods, so I’ll look at those next.

Related Articles, References, Credits, or External Links

NA

Load Balance IIS with Microsoft ARR

KB ID 0001573

Problem

If you have a lot of IIS servers and want to load balance between them, then you can either buy a load balancer, or use Microsoft ARR (Application Request Routing). Note: ARR does a lot more than simply load balancing, e.g. it can perform caching, complex web routing, and even SSL offloading. Here we are just looking at load balancing.

I’m going to deploy TWO ARR servers in my DMZ; here I’ve got two ‘back-end’ IIS web servers (you may have many more).

WHY ARR? Rather than use WAP (Web Application Proxy) or a connection broker, ARR is application aware, i.e. it WON’T attempt to serve pages from a broken IIS server, e.g. if the host server is online but the site in IIS is broken.

WHY TWO? Well, we are talking about balancing and availability. I’m deploying two so that in the event one fails, the other will still be online; you can have these running on different hypervisors, or even in different datacenters, for added resiliency.

Deploy Network Load Balancing (NLB)

Our first task is to deploy NLB; this will create a ‘virtual IP’ for both of the ARR servers to use.

NLB is a ‘Feature’. To enable it, launch Server Manager > Manage > Add Roles and Features > Next > Next > Next > Next > Tick ‘Network Load Balancing’ > Next > Next > Finish.

Launch ‘Network Load Balancing Manager’.

New Cluster.

Add in the first host > Connect > Next.

Check the IP > Next.

Add a ‘Cluster IP’ (this is the IP that you will connect to for services, and is the ‘shared’ IP) > OK > Next.

Next.

Next.

Repeat the procedure to add the additional host(s) to the cluster.

You will need to make sure the NLB IP is publicly available, and open HTTP/HTTPS as required. The ARR hosts will also need HTTP/HTTPS (as required) open to the internal IIS servers. I usually test all that at this point, for example with a quick port check like the one below.
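
A quick way to test that from the outside is a simple TCP connect check against the cluster IP. A minimal Python sketch; the cluster IP is a placeholder:

```python
import socket

CLUSTER_IP = "203.0.113.20"   # placeholder - your NLB cluster IP

for port in (80, 443):
    try:
        # A successful TCP handshake is enough to show the port is reachable through the firewall.
        with socket.create_connection((CLUSTER_IP, port), timeout=3):
            print(f"TCP {port}: open")
    except OSError as err:
        print(f"TCP {port}: unreachable ({err})")
```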

Deploy ARR and ‘URLRewrite’ for Load Balancing

ARR and URL Rewrite are both IIS components, but you don’t need to install IIS yourself. You can if you wish, then install URL Rewrite and THEN ARR (in that order!), but it’s much simpler to download and use the ‘IIS Web Platform Installer’.

Launch the Web Platform Installer and do a search for URL > Select URL Rewrite > Add > Repeat the process, searching for ARR, and add Application Request Routing version 3 (not the 2.5 version at the top!) > Next > Follow the wizard and complete the install.

Launch IIS Manager > Now you will see you have a new option ‘Server Farms’ > Create Server Farm.

Give your server farm a name > Next > Add in all the ‘Back-end’ IIS servers > Finish.

You will get a pop-up asking if you want to create a URL rewrite rule. In this case we want a simple rewrite rule as we are doing plain old load balancing and we have no special requirements, so Select YES. (Only click No if you have specific rewrite requirements and you want to set them up manually).

Now test externally. WARNING: don’t expect the page to ‘flip over’ every time. Remember, ARR is caching these web requests, and your browser will also be performing web page caching; use a couple of browsers and wait a minute or two between refreshes to make sure that all the web servers are being used. (Or script the test with cache-busting, as below.)
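
If you’d rather script the external test, adding a unique query string and a no-cache header to every request sidesteps both ARR’s caching and the client’s. A minimal Python sketch, assuming the cluster IP below is yours and each back-end IIS server returns a slightly different page:

```python
from collections import Counter
from urllib.request import Request, urlopen
from uuid import uuid4

URL = "http://203.0.113.20/"   # placeholder - your NLB cluster IP

hits = Counter()
for _ in range(20):
    # A unique query string plus a no-cache header stops ARR (and anything else) serving a cached copy.
    req = Request(f"{URL}?nocache={uuid4()}", headers={"Cache-Control": "no-cache"})
    body = urlopen(req, timeout=5).read()
    hits[body[:60]] += 1   # the first few bytes are enough to tell the back-ends apart

for page, count in hits.items():
    print(count, page)
```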

Related Articles, References, Credits, or External Links

NA

Remote Desktop Services: Balancing Sessions Hosts and Connection Brokers

KB ID 0001424

Problem

I got an email from a colleague who was setting up an RDS farm (2012 R2). He was having some problems and asked me, “If the Connection Broker brokers the connections to the Session Hosts, how do I RDP to the Session Broker?”

This threw me completely. I usually jump on the console in VMware or use a third-party remote management tool; I don’t tend to RDP onto servers. I had fallen into the same trap he had. I assumed: you connect to a SESSION BROKER and it BROKERS YOUR SESSION to the least busy session host (or reconnects your broken sessions).

THIS IS WRONG!

 

How Session Brokers Work

You don’t connect to a session broker (unless you are an admin who is about to do some work on the Session Broker). You connect to a DNS RECORD, and that record points to a SESSION HOST (I know that makes no sense, but bear with me). And you create a DNS record with the SAME NAME for every Session Host, like so:

 

This works because (by default) Windows DNS uses ’round robin’, so if it has multiple values for one DNS name it responds with the first one to the first request, the second one to the second request, and so on.
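
You can watch that rotation from any client with a quick lookup loop. A minimal Python sketch, using the farm name from the example below; note that your local resolver cache may mask the rotation, so an nslookup aimed straight at the DNS server shows it more reliably.

```python
import socket

FARM = "TSFarm.my-domain.com"   # the round robin record from the example below

for attempt in range(1, 4):
    # gethostbyname_ex returns every A record; round robin rotates which one is listed first.
    _, _, addresses = socket.gethostbyname_ex(FARM)
    print(f"Lookup {attempt}: records {addresses} -> client will use {addresses[0]}")
```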

But Pete, Round Robin is Bobbins for Load Balancing? Yes it is, and that’s what the Session Broker is for! In reality, this is what happens:

Here are two scenarios that should clear things up. User1 queries DNS for TSFarm.my-domain.com and gets an IP of 192.168.1.1. They go to that SESSION HOST, then the session host CHECKS WITH THE CONNECTION BROKER, firstly to see if User1 already has a session on another session host; if so, they are reconnected to that session (above, that’s on SESSION HOST 2).

Then User2 attempts to connect to TSFarm.my-domain.com and gets an IP of 192.168.1.2 (because of DNS ’round robin’). They go to that SESSION HOST, then the session host CHECKS WITH THE CONNECTION BROKER, firstly to see if User2 already has a session on another session host; in this case they don’t. But this host already has User1 connected to it, so it redirects User2 onto SESSION HOST 1.

Of course, a user can connect to a SESSION HOST and, after checking with the CONNECTION BROKER, get connected to the host they originally queried if (for example) the other session hosts are busier (and the user has no existing sessions).

But With Server 2012 You Can Do Connection Broker Load Balancing? Yes, you can, but that’s load balancing for the connection brokers, NOT the user sessions!

 

Related Articles, References, Credits, or External Links

Thanks to James White for making me do some work!

Citrix NetScaler – Simple HTTP Site Load Balancing

KB ID 0001188 

Problem

Here is the simplest load balancing scenario I can think of: I’ve got two web servers (on HTTP port 80), and I’m presenting them through my NetScaler as an HTTP Virtual Server.

 

Solution

First we add the ‘back-end’ servers. Connect to the management IP of your NetScaler and log in > Configuration > Traffic Management > Load Balancing > Servers > Add.

Define a name for the first server and enter its IP address > Create.

Repeat to add the second internal web server. 

Now I’m going to group these servers together in a ‘service group’ (you don’t have to; you can present them individually to the virtual server you will create in a minute if you prefer). Configuration > Traffic Management > Load Balancing > Service Groups > Add.

Name the group and set the protocol to HTTP  > OK.

When created, you will see it says ‘No Service Group members’  > Click there.

Select ‘Server Based’ > Click the search arrow.

Tick them all > Select.

Set the port (HTTP is TCP port 80) > Create.

OK.

Now we need to add a monitor; this is what the NetScaler will use to monitor the service availability of your ‘back-end’ servers on TCP port 80 (HTTP). Click Monitors.

This confused me for a while: selecting things on the right drops them at the bottom of the main page > Click ‘No Service Group Monitor Binding’.

NetScaler has a monitor for http pre-configured, so I’m going to use that > Click the search arrow.

Click ‘http’  > Select.

Bind.

Done.
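
By default that built-in http monitor simply sends an HTTP HEAD request and marks the service up when it gets a 200 back. If you ever want to see roughly what the monitor sees, a quick Python sketch against each back-end does much the same thing (the IPs are placeholders for my two web servers):

```python
import http.client

BACKENDS = ["10.1.0.11", "10.1.0.12"]   # placeholders - your two back-end web servers

for ip in BACKENDS:
    conn = http.client.HTTPConnection(ip, 80, timeout=2)
    try:
        conn.request("HEAD", "/")   # the same sort of probe the default http monitor sends
        status = conn.getresponse().status
        print(f"{ip}: HTTP {status} -> {'UP' if status == 200 else 'check the response code'}")
    except OSError as err:
        print(f"{ip}: DOWN ({err})")
    finally:
        conn.close()
```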

Now we tie all that together in a ‘Virtual Server’ > Configuration > Traffic Management  > Load Balancing > Virtual Servers > Add.

Give the Virtual Server a name > Protocol is HTTP > Specify the IP address (this will be the VIP the NetScaler presents to the outside world)  > Port 80 > OK.

Now we need to add the group we created earlier; click where it says ‘No load balancing Virtual Servers Service Group Binding’.

 

Click the search arrow.

Click the group you created earlier > Select.

Bind.

Continue.

Done.

Save your hard work.

You should be green across the board.

To test this I put a different ‘welcome’ page on each of the servers; that way, as I refresh the page, I can see that the NetScaler is doing its job and balancing the requests across both back-end web servers.

 

Related Articles, References, Credits, or External Links

NA