In this post I’ll walk you through Azure load balancers without all the fluff you’ll find elsewhere online. After reading this you’ll understand load balancing concepts in Azure.
Load Balancing Recap
Traditionally, load balancing is exactly what it says on the tin: balancing loads. Anyone who has ever configured a Kemp or similar device will understand that it's essentially a device that acts as a reverse proxy and distributes network or application traffic across a number of servers. Why do we need them? Load balancers are used to increase the capacity (concurrent users) and reliability of applications.
Cheap load balancing works in a round-robin style, trying each node in sequence regardless of state. An intelligent load balancer is aware of node status as it uses probes to monitor servers and won’t bother trying to route traffic to a failed server until it becomes available again. The latter is what you’ll get in Azure.
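To make the difference concrete, here is a minimal Python sketch of a probe-aware round-robin balancer. The class and node names are hypothetical, and the `healthy` dict stands in for what a real probe (e.g. a periodic HTTP check against each node) would maintain:

```python
from itertools import cycle

class ProbedRoundRobin:
    """Round-robin balancer that skips nodes whose health probe failed.

    In a real balancer, 'healthy' would be updated by periodic probes;
    here it is a plain dict for illustration.
    """

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.healthy = {n: True for n in self.nodes}
        self._ring = cycle(self.nodes)

    def next_node(self):
        # Try each node at most once per call so we never loop forever
        # when every node is down.
        for _ in range(len(self.nodes)):
            node = next(self._ring)
            if self.healthy[node]:
                return node
        raise RuntimeError("no healthy nodes available")

lb = ProbedRoundRobin(["web1", "web2", "web3"])
lb.healthy["web2"] = False  # probe marked web2 as down
print([lb.next_node() for _ in range(4)])  # ['web1', 'web3', 'web1', 'web3']
```

A naive ("cheap") balancer is the same loop without the `healthy` check: it would keep handing connections to `web2` even though the node is dead.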
Internet-Facing Vs Internal
In Azure you have two choices. The first is an Internet-facing load balancer. This accepts traffic from a publicly available IP address via its endpoint and routes it to resources on a private range (usually). The second is an internal load balancer, which accepts traffic from one internal network (VNet) and routes it to another. This makes a lot more sense in a real-world scenario, so I've created the following image to put both types of load balancer into context:
In this example we have internet traffic hitting the endpoint of the internet-facing load balancer. Sitting behind the load balancer is a group of web servers running a web application (for example, a website). When the web application wants to write back to a database server, it must traverse an internal load balancer. This internal load balancer has an endpoint configured to accept traffic from the web servers' virtual network; it converts the traffic (using inbound NAT rules) and forwards it to the pool of database servers. I've called them internet-facing and internal for simplicity; Microsoft calls them Application Gateway and Azure Load Balancer:
Internet-Facing Load Balancer = Application Gateway (Layer 7)
Internal Load Balancer = Azure Load Balancer (Layer 4)
Azure Load Balancer and Application Gateway both route network traffic to endpoints, but they target different usage scenarios and handle traffic differently. The following table helps in understanding the difference between the two load balancers:
|Type|Azure Load Balancer|Application Gateway|
|---|---|---|
|IP reservation|Supported|Not supported|
|Load balancing mode|5-tuple (source IP, source port, destination IP, destination port, protocol type)|Round robin; routing based on URL|
Cloud Service boundaries
A ‘Cloud Service’ is the tier of grouped resources within a load-balanced boundary. In the image above, ‘Web Servers’ is one cloud service boundary and ‘Database Servers’ is another.
The public-facing URL for the internet-facing load balancer will take the form <publicDNSname>.<region>.cloudapp.azure.com
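As a quick illustration of that naming pattern, the pieces assemble like this (the DNS label and region below are hypothetical placeholders, not values from this post):

```python
# Hypothetical deployment values, purely for illustration.
public_dns_name = "mywebapp"
region = "northeurope"

fqdn = f"{public_dns_name}.{region}.cloudapp.azure.com"
print(fqdn)  # -> mywebapp.northeurope.cloudapp.azure.com
```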
How Does it Distribute Traffic?
The Azure load balancer uses a hash-based distribution algorithm to choose which server to route traffic to next. By default, it uses a 5-tuple hash composed of source IP, source port, destination IP, destination port, and protocol type to map traffic to available servers. It provides stickiness only within a transport session: packets in the same TCP or UDP session will be directed to the same instance behind the load-balanced endpoint. When the client closes and reopens the connection, or starts a new session from the same source IP, the source port changes, and this may cause the traffic to go to a different endpoint in a different datacenter.
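The behaviour above can be sketched in a few lines of Python. Azure's actual hash is internal to the platform, so this is only an illustrative stand-in (using SHA-256 over the 5-tuple, with made-up IPs and backend names), but it demonstrates the key property: the same 5-tuple always maps to the same backend, while a new session from a new ephemeral source port may map elsewhere:

```python
import hashlib

def pick_backend(src_ip, src_port, dst_ip, dst_port, proto, backends):
    """Map a 5-tuple onto a backend, loosely mimicking hash distribution.

    Not Azure's real algorithm; just a deterministic hash over the
    5-tuple so identical sessions always land on the same backend.
    """
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(backends)
    return backends[index]

backends = ["web1", "web2", "web3"]

# Same session -> same backend every time (stickiness within the session).
a = pick_backend("203.0.113.7", 50123, "10.0.0.4", 80, "TCP", backends)
b = pick_backend("203.0.113.7", 50123, "10.0.0.4", 80, "TCP", backends)
assert a == b

# Reopening the connection picks a new ephemeral source port, so the
# 5-tuple changes and the session may hash to a different backend.
c = pick_backend("203.0.113.7", 50999, "10.0.0.4", 80, "TCP", backends)
print(a, c)
```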
See my Traffic Manager post!