NGINX - HTTPS Load Balancer Configuration

I have created 2 CentOS servers in different zones of the same region and installed NGINX on them. I created instance groups ig1 and ig2 and added those servers to them, then created the external load balancer. I'm able to load the web page using the public static IP, but the result is not as expected. Is there a round-robin method in the LB configuration? If yes, how do I achieve it?

I have set the maximum RPS to 1 on both instance groups and the health check interval to 1 second.
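Roughly, that setup corresponds to gcloud commands like the following (a sketch only; the resource names, zone, and port are placeholders, since the actual commands are not shown in the question):

    # Health check probing every 1 second (placeholder name and port)
    gcloud compute health-checks create http basic-http-check \
        --port=80 \
        --check-interval=1s

    # Attach an instance group with RATE balancing capped at 1 request per second per instance
    gcloud compute backend-services add-backend web-backend-service \
        --global \
        --instance-group=ig1 \
        --instance-group-zone=us-central1-a \
        --balancing-mode=RATE \
        --max-rate-per-instance=1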

The requirement is that each time I refresh the load balancer IP, the page should be served from a different instance. However, I have to refresh the page a number of times before it is served from a different instance. I'm not sure what configuration is missing. Can someone help me with this?

1 Answer

Most load balancers use round-robin distribution.

The GCP HTTP(S) LB has two methods of determining instance load. Within the backend service resource, the balancingMode property selects between requests per second (RATE) and CPU utilization (UTILIZATION) modes.
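For context, a minimal gcloud sketch of switching a backend between the two modes (web-backend-service, ig2, and the zone are placeholder names):

    # RATE mode: capacity expressed as requests per second per instance
    gcloud compute backend-services update-backend web-backend-service \
        --global \
        --instance-group=ig2 \
        --instance-group-zone=us-central1-b \
        --balancing-mode=RATE \
        --max-rate-per-instance=100

    # UTILIZATION mode: capacity expressed as a target backend CPU utilization
    gcloud compute backend-services update-backend web-backend-service \
        --global \
        --instance-group=ig2 \
        --instance-group-zone=us-central1-b \
        --balancing-mode=UTILIZATION \
        --max-utilization=0.8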

You can override round-robin distribution by configuring session affinity. However, note that session affinity works best if you also set the balancing mode to requests per second (RPS).

Session affinity sends all requests from the same client to the same virtual machine instance as long as the instance stays healthy and has capacity.
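To check what a backend service currently uses, something like the following should work (sessionAffinity and backends are fields of the backend service resource; web-backend-service is a placeholder name):

    # Show the current session affinity setting and per-backend balancing modes
    gcloud compute backend-services describe web-backend-service \
        --global \
        --format="yaml(sessionAffinity, backends)"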

================

Now, GCP HTTP(S) LB offers two types of session affinity (a gcloud sketch for enabling either one follows the descriptions below):

a) Client IP affinity: forwards all requests from the same client IP address to the same instance.

Client IP affinity directs requests from the same client IP address to the same backend instance based on a hash of the client's IP address. Client IP affinity is an option for every GCP load balancer that uses backend services.

But when using client IP affinity, keep the following in mind:

The client IP address as seen by the load balancer might not be that of the originating client if the client is behind NAT or makes requests through a proxy. Requests made through NAT or a proxy use the IP address of the NAT router or proxy as the client IP address, which can cause incoming traffic to clump unnecessarily onto the same backend instances.

If a client moves from one network to another, its IP address changes, resulting in broken affinity.

b) Generated cookie affinity: sets a client cookie, then sends all requests with that cookie to the same instance.

When generated cookie affinity is set, the load balancer issues a cookie named GCLB on the first request and then directs each subsequent request that has the same cookie to the same instance. Cookie-based affinity lets the load balancer distinguish different clients that share an IP address, so it can spread those clients across instances more evenly, and it maintains affinity even when a client's IP address changes.

The path of the cookie is always /, so if there are two backend services on the same hostname that enable cookie-based affinity, the two services are balanced by the same cookie.
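To enable either type on an existing backend service, a minimal gcloud sketch (web-backend-service is a placeholder name; adjust it to your backend service):

    # Client IP affinity
    gcloud compute backend-services update web-backend-service \
        --global \
        --session-affinity=CLIENT_IP

    # Generated cookie (GCLB) affinity, with an optional cookie TTL in seconds
    gcloud compute backend-services update web-backend-service \
        --global \
        --session-affinity=GENERATED_COOKIE \
        --affinity-cookie-ttl=3600

The default is --session-affinity=NONE, which keeps the round-robin-style spreading described above.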

===========================

Main sources (GCP HTTP(S) Load Balancing documentation):

Load distribution algorithm

Requests per second