Uneven Load Distribution Across Kubernetes Pods with Multithreaded Execution Using ForkJoinPool


I have a Spring Boot application with a CPU-intensive function that performs 43 independent function calls sequentially, resulting in significant execution time. To optimize performance, I parallelized these calls using a ForkJoinPool. However, in a Kubernetes environment, enabling this multithreaded flow causes approximately 90-95% of requests to be directed to a single pod, leading to uneven load distribution among the pods and eventually to service failure.

Here's how I implemented the multithreaded execution:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Dedicated pool with 2 worker threads for the independent calls
ForkJoinPool forkJoinPool = new ForkJoinPool(2);
CompletableFuture<?> var1 = CompletableFuture.supplyAsync(() -> f(parameters1), forkJoinPool);
CompletableFuture<?> var2 = CompletableFuture.supplyAsync(() -> f(parameters2), forkJoinPool);
// Repeat for var3 to var43...

CompletableFuture<Void> allOfFutures = CompletableFuture.allOf(var1, var2, var3, ..., var43);

try {
    // Wait at most 8 seconds for all 43 futures to complete
    allOfFutures.get(8, TimeUnit.SECONDS);
} catch (InterruptedException e) {
    // Restore the interrupt flag only when the waiting thread itself was interrupted
    Thread.currentThread().interrupt();
} catch (TimeoutException | ExecutionException e) {
    // Handle a timeout or a failure inside one of the calls
}
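
For reference, here is a minimal sketch of the same fan-out with the futures collected in a list instead of 43 separate variables. It assumes the calls share a common parameter type and result type; the names Params, Result, and allParameters are placeholders rather than names from my actual code.

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.stream.Collectors;

ForkJoinPool forkJoinPool = new ForkJoinPool(2);

// Placeholder: the 43 parameter objects gathered into one list
List<Params> allParameters = List.of(parameters1, parameters2 /* ..., parameters43 */);

// Submit each independent call to the same dedicated pool
List<CompletableFuture<Result>> futures = allParameters.stream()
        .map(p -> CompletableFuture.supplyAsync(() -> f(p), forkJoinPool))
        .collect(Collectors.toList());

try {
    // Wait up to 8 seconds for the whole batch to finish
    CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
            .get(8, TimeUnit.SECONDS);
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
} catch (TimeoutException | ExecutionException e) {
    // Handle a timeout or a failure in any individual call
}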

Despite parallelizing the function calls, the load distribution among pods remains uneven: a single pod handles the majority of requests even though the load balancer policy is set to round robin. I'm seeking insight into why this might be happening in a Kubernetes environment.

Any suggestion or solution is highly appreciated. Thanks in advance!
