I have a use case where a message must be consumed by every instance of a service. For example, if my service is running on 5 instances, a message arriving through Kafka needs to be processed on all 5. Because this data is used by many other APIs, we store it in local memory to serve those APIs.
Since this data is read very frequently, I don't want to store it in Redis or some other global cache, which would add the latency and cost of network calls.
I want to create a pipeline where any change made to the data by the third-party service is propagated to all the instances, so that every instance serves the latest data through its APIs.
This isn't possible with Kafka's standard consumption model: within a single consumer group, each message is delivered to exactly one consumer, so it may seem that Kafka isn't the right choice for this case.
I can suggest 3 solutions:
The hack you can do is to consume with a different consumer group on each instance (say, a random UUID generated as the `group.id` when you start polling). Since every group receives its own copy of each message, all instances will get every update; see the sketch below.
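For illustration, here is a minimal sketch of that hack using the plain Java `kafka-clients` consumer. The broker address (`localhost:9092`), the topic name (`data-updates`), and the `ConcurrentHashMap` standing in for your in-memory store are all assumptions; adapt them to your setup.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class BroadcastCacheConsumer {

    // Local in-memory store that this instance's APIs read from (assumed structure).
    private static final Map<String, String> CACHE = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        // Unique group.id per instance: every group receives every message,
        // so each instance gets its own full copy of the stream.
        props.put("group.id", "cache-refresh-" + UUID.randomUUID());
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // A brand-new group has no committed offsets; start from the latest message.
        props.put("auto.offset.reset", "latest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("data-updates")); // assumed topic
            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Apply the update to the local cache the APIs serve from.
                    CACHE.put(record.key(), record.value());
                }
            }
        }
    }
}
```

Two caveats with this approach: each restart creates a new group, so the broker accumulates stale group metadata until `offsets.retention.minutes` expires it, and an instance starting with `auto.offset.reset=latest` misses any updates published while it was down, so it still needs some way to bootstrap its initial state.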