When I remove cores from a service that connects to Redis with the Go client, the latency between my service and Redis increases. In go-redis, each additional core adds 10 connections to the pool by default, meaning that removing cores also decreases the number of connections. Is there a reason why, in this case, the latency of a request from my service to Redis increases so dramatically? Both Redis and my service run in AWS.
I changed from 4 cores to 1 core and sent 2 requests. Latency for one request became 60ms, whereas with 4 cores it was 2ms. With two cores, latency for one request is 20ms.
Is it possible it is related to epoll?
There's a cost to managing connections.
The Redis library defaults its values based on `GOMAXPROCS` so that it generally scales up and down with your resources. But it's not perfect: no library is going to guess your program's workload accurately. If you find that you're saturating your connection pool when decreasing available cores, manually configure the pool to suit your program. The library doesn't multiplex requests over the same connection, so having fewer connections in the pool means more time spent waiting for a connection to become available.
The settings are here: https://pkg.go.dev/github.com/go-redis/redis/v8#Options
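For example, you can pin the pool size explicitly instead of letting it scale with `GOMAXPROCS`. This is a sketch, not a recommendation: the address and the numeric values below are placeholders you would tune for your own workload.

```go
package main

import (
	"time"

	"github.com/go-redis/redis/v8"
)

func newClient() *redis.Client {
	return redis.NewClient(&redis.Options{
		Addr: "localhost:6379", // placeholder address

		// Fixed pool size instead of the default 10 * GOMAXPROCS,
		// so reducing cores no longer shrinks the pool.
		PoolSize: 40,

		// Keep some connections warm to avoid dial latency on bursts.
		MinIdleConns: 10,

		// How long a request waits for a free connection before erroring.
		PoolTimeout: 5 * time.Second,
	})
}
```

With `PoolSize` set, the pool stays the same size whether you run on 1 core or 4, which removes the pool as a variable when you compare latencies.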
We're not going to be able to recommend the ideal configuration without profiling your application, but take a look at...
If you want to be more accurate, collect CPU and trace profiles for your program to determine what makes sense.
Without code examples or profile data, this is hard to answer accurately. There's also network variance, so you'd need to run tests over a longer period of time to find what the real deviation is.