If you get a relatively large value (e.g. 100 KB) out of Redis concurrently from a high number of goroutines, you see a consistently increasing latency. Why is that, and how can it be handled?
Furthermore, if you run multiple instances of the script (even on different clients, so it is not CPU bound), the latency increases even faster. So each additional client slows down all the other clients.
This should not happen at such a small scale on Redis, right? According to redis-benchmark,
it can handle 100k connections at a huge throughput, and I am using only 1-5 connections.
Here are some results of running the script on just a single client, followed by the script.
0: time: 0ms, val size: 100000
1: time: 37ms, val size: 100000
12: time: 37ms, val size: 100000
13: time: 37ms, val size: 100000
38: time: 37ms, val size: 100000
133: time: 97ms, val size: 100000
145: time: 97ms, val size: 100000
77: time: 97ms, val size: 100000
...
979: time: 235ms, val size: 100000
914: time: 236ms, val size: 100000
957: time: 232ms, val size: 100000
980: time: 235ms, val size: 100000
958: time: 235ms, val size: 100000
954: time: 236ms, val size: 100000
913: time: 236ms, val size: 100000
921: time: 236ms, val size: 100000
961: time: 236ms, val size: 100000
976: time: 236ms, val size: 100000
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	"github.com/redis/go-redis/v9"
)

var ctx = context.Background()

func main() {
	// Create a new Redis client
	rdb := redis.NewClient(&redis.Options{
		Addr: "localhost:6379",
		DB:   5,
	})

	// Store a 100 KB value in Redis
	key := "some:test:key"
	value := strings.Repeat("a", 100000)
	err := rdb.Set(ctx, key, value, 0).Err()
	if err != nil {
		fmt.Println("Failed to set value:", err)
		return
	}

	// Get the value concurrently 1000 times, logging the time each call takes
	for i := 0; i < 1000; i++ {
		go func(i int) {
			t1 := time.Now()
			val, err := rdb.Get(ctx, key).Result()
			if err != nil {
				fmt.Println("Failed to get value:", err)
				return
			}
			t2 := time.Now()
			fmt.Printf("%d: time: %vms, val size: %d\n", i, t2.Sub(t1).Milliseconds(), len(val))
		}(i)
	}

	// Wait for a while to ensure all goroutines have finished
	time.Sleep(2 * time.Second)
}
I don't have a definitive answer, and I see the same slowdown as you, but a couple of things may help explain it.

First, t1 is oddly placed. It doesn't let you measure the time each goroutine spends, but rather the time from when the first t1 was created to when the goroutine ends. Remember that go func(){}() only schedules a goroutine; it will run later. So t1 will probably not vary much across those 1000 iterations.

The second thing is that with 1000 concurrent goroutines, bound almost completely to TCP I/O, the scheduler is probably going crazy. Adding a semaphore to your code allows each goroutine to complete in ~1ms, within the same overall timespan as the original code.

My guess is that Redis is idling while your code runs, but the scheduler is being thrashed.
Obviously I'm measuring different things, but from a goroutine's standpoint there's no latency when talking to Redis.