Maybe I am missing something; this seems too simple. Is it possible to make Redis durable by having a master Redis node duplicate data to a slave Redis node?
My situation: I have a REST endpoint which, upon receiving a request from a client, sticks the payload in a Redis queue and then returns a success (HTTP 200) to the client. If that queue goes down before the message is processed and before an fsync occurred, I've lost that payload and no one knows about it.
I was wondering if, instead, I could simply write to two Redis queues (in different zones), one the master and one the slave. When I write to the 'master', Redis would then automatically write the same element to the slave queue, and only then would the endpoint return an HTTP 200 to the client.
Is this possible? Redis would (i) need a way to write to a slave and (ii) have a synchronous or awaitable API which only returns once there is confirmation that the payload has been written to both the master and the slave. The key here is that Redis lets the caller know that the slave has received the event.
If the client doesn't get an HTTP 200, they know they should send it again. I feel like there are caveats I'm not seeing.
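To make the intended flow concrete, here is a minimal sketch of what the endpoint would do. The `enqueue_and_confirm` callable is hypothetical; it stands in for whatever mechanism would only report success once the payload is on both the master and the slave:

```python
def handle_request(payload, enqueue_and_confirm):
    """enqueue_and_confirm(payload) is a hypothetical callable that
    returns True only once the payload has been confirmed on both
    the master and the slave queue (this API is the question)."""
    if enqueue_and_confirm(payload):
        return 200  # safe to acknowledge: payload exists in two zones
    return 503      # client knows it should retry
```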
Thanks
Short answer: NO, it's NOT possible.
Redis can replicate the data to a slave. However, the replication is asynchronous, which means Redis returns a response to the client before the data has been written to the slave.
Since Redis 3.0, it supports the WAIT command, which blocks the client until the write operations of this client have been replicated to the given number of slaves. This might mitigate the problem: at least you can ensure the write operation has been replicated to several nodes. However, you might still lose your data, because a slave might also go down before it persists the data to disk.
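A sketch of that mitigation, assuming a redis-py-style client object passed in by the caller (`enqueue_with_wait` and `replicated` are illustrative names, not a real API):

```python
def replicated(ack_count, required):
    # WAIT returns the number of slaves that acknowledged the write;
    # treat the write as replicated only if enough of them did.
    return ack_count >= required

def enqueue_with_wait(client, queue, payload, replicas=1, timeout_ms=1000):
    """Push a payload, then block until `replicas` slaves acknowledge it.

    `client` is assumed to be a connected redis-py-style client.
    Returns True if the required number of slaves confirmed in time.
    """
    client.lpush(queue, payload)
    # WAIT <numreplicas> <timeout-ms>: blocks until enough slaves ack,
    # or the timeout expires; returns the number of acknowledging slaves.
    acks = client.execute_command("WAIT", replicas, timeout_ms)
    return replicated(acks, replicas)
```

Note that even a True return only means the slaves received the write; it does not guarantee any of them has persisted it to disk before crashing.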