Choosing A Load Balancer Algorithm
This article describes three load balancing algorithms: round robin, least connections, and random. It also explains when you should use each type of algorithm.
Understanding Load Balancing Algorithms
All Load Balancers, whether they are virtual devices in the cloud or physical devices, offer different algorithms for distributing incoming requests to the backend nodes. Selecting the algorithm that best suits the needs of your applications, and that efficiently handles the types of connections they create, helps optimize your environment.
SpinUp Load Balancers support weighted versions of certain algorithms. This means that specific values can be assigned to each of your nodes so that some of them receive more traffic than others. This value can be between 1 and 100, and represents the proportion of traffic that goes to a node. For example, if node A has a weight of 10, and node B has a weight of 1, then node A will receive roughly 10 times as many connections as node B.
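The proportion rule above can be sketched in a few lines of Python. The node names and weights are hypothetical, chosen to match the example:

```python
# Hypothetical node weights illustrating the proportion rule above.
weights = {"node-a": 10, "node-b": 1}

total = sum(weights.values())  # 11
share = {node: weight / total for node, weight in weights.items()}

# node-a is expected to receive 10/11 of the traffic, node-b 1/11 —
# roughly 10 times as many connections for node-a.
```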
Round Robin

Round robin is the most basic load balancing algorithm and works on very simple logic. The Load Balancer assigns a sequential order to the nodes, distributes incoming requests to each node in turn, and then starts again at the top of the list. If you’re managing a small workload and are unsure which algorithm to choose, this default works for most needs. However, round robin can present issues if one server in the group cannot handle as much workload as another: the Load Balancer continues to distribute traffic in turn regardless of whether one node is getting overloaded. For this reason, round robin is best suited for environments where your servers all have equal processing power and memory.
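A minimal Python sketch of the rotation described above (the node names are hypothetical):

```python
from itertools import cycle

# Round-robin sketch: requests walk the node list in a fixed order,
# then start again at the top.
nodes = ["node-a", "node-b", "node-c"]
rotation = cycle(nodes)

def next_node():
    """Return the next node in the fixed rotation."""
    return next(rotation)

# Six requests cycle through the three-node list exactly twice.
assignments = [next_node() for _ in range(6)]
```

Note that nothing in this loop looks at node load, which is exactly why round robin struggles when one server falls behind.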
In cases where the servers have differing capacities to handle work, consider using weighted round robin. The same sequential distribution happens, but the weighted value tells the Load Balancer to keep sending traffic to a higher-weighted node until it has received its proportion of traffic before moving on to the next node. If node A has a weight of 10 and node B has a weight of 1, weighted round robin assigns the first 10 requests to node A, and then the 11th request is assigned to node B.
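One naive way to sketch this is to expand each node into the rotation once per unit of weight; the weights here are hypothetical, mirroring the example above. (Production balancers usually interleave the weighted rotation rather than sending long runs to one node, but the per-cycle proportions come out the same.)

```python
from itertools import cycle

# Naive weighted round robin: each node appears in the rotation
# once per unit of weight (weights here are hypothetical).
weights = {"node-a": 10, "node-b": 1}
expanded = [node for node, weight in weights.items() for _ in range(weight)]
rotation = cycle(expanded)

# With weights 10 and 1, the first ten requests go to node-a
# and the eleventh goes to node-b, matching the example above.
assignments = [next(rotation) for _ in range(11)]
```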
Least Connections

Even if servers with identical capabilities sit behind a Load Balancer, there are still scenarios where one node can end up overutilized, specifically when connections stay open much longer on some servers than on others. To better compensate for these differing connection lengths, least connections is a better fit. This algorithm tracks the current number of open connections on each node and assigns each new request to the node with the fewest open connections at that time.
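A minimal sketch of that bookkeeping, with hypothetical node names:

```python
# Least-connections sketch: track open connections per node and send
# each new request to the node with the fewest.
open_connections = {"node-a": 0, "node-b": 0, "node-c": 0}

def assign_request():
    """Pick the node with the fewest open connections and open one."""
    node = min(open_connections, key=open_connections.get)
    open_connections[node] += 1
    return node

def close_connection(node):
    """Record that a connection on the node has finished."""
    open_connections[node] -= 1

first = assign_request()    # all nodes tied at zero, first node wins
second = assign_request()
close_connection(second)    # the second connection finishes quickly
third = assign_request()    # goes back to the node that just freed up
```

Unlike round robin, the choice here reacts to how long connections actually stay open.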
Weighted least connections can be used for scenarios where the nodes have unequal resources: the system takes into account both your assigned traffic proportions and the number of open connections when distributing the traffic.
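The article doesn’t specify the exact formula the Load Balancer uses to combine the two factors; one common approach is to score each node by open connections divided by weight and pick the lowest score. A sketch under that assumption, with hypothetical values:

```python
# Assumed weighted least-connections score: open connections divided
# by weight, lower is better. Weights and counts are hypothetical.
weights = {"node-a": 10, "node-b": 1}
open_connections = {"node-a": 5, "node-b": 1}

def pick_node():
    """Return the node with the lowest connections-per-weight ratio."""
    return min(weights, key=lambda node: open_connections[node] / weights[node])

# node-a scores 5/10 = 0.5, node-b scores 1/1 = 1.0, so node-a is
# chosen even though it currently has more open connections.
```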
Random

The random algorithm simply distributes requests to one of the Load Balancer’s nodes at random. This method is best suited for scenarios where a large number of requests come in to nodes with similar or identical capacity. When the number of requests is sufficiently large, the traffic ends up distributed evenly across the nodes. This option is best for situations where the fixed sequence used by round robin is not desirable.
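The evening-out effect is easy to demonstrate with a short simulation (hypothetical node names, seeded so the run is reproducible):

```python
import random

# Random distribution sketch: over many requests, traffic evens out
# across identical nodes.
random.seed(42)
nodes = ["node-a", "node-b", "node-c"]

counts = {node: 0 for node in nodes}
for _ in range(30000):
    counts[random.choice(nodes)] += 1

# Each node ends up with roughly a third of the 30,000 requests.
```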