Context: in an invoicing system, sent invoices must have consecutive numbers.
Each invoice has a unique invoice number; for the sake of simplicity, let's say they are I1, I2, I3, and so on. So the first invoice in the system has the number I1, and the number is incremented for every subsequent invoice. Each invoice is then produced to a Kafka topic.
So, one could always calculate the number for the next invoice purely from the contents of this topic, right? (count of invoices in the topic + 1 = next number) We could then call such a system event-sourced.
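The arithmetic in parentheses can be sketched like this, with the single-partition Kafka topic simulated as an in-memory list (`topic_events`, `next_invoice_number`, and `send_invoice` are illustrative names, not a real Kafka API):

```python
# Stand-in for a single-partition Kafka topic: an append-only list of events.
topic_events = []

def next_invoice_number(events):
    """count of invoices in the topic + 1 = next number"""
    return f"I{len(events) + 1}"

def send_invoice(events):
    """Derive the next number from the event log, then append the new event."""
    number = next_invoice_number(events)
    events.append({"invoice_number": number})
    return number

assert send_invoice(topic_events) == "I1"
assert send_invoice(topic_events) == "I2"
assert send_invoice(topic_events) == "I3"
```

The circularity the question describes shows up exactly here: with a real topic, `len(events)` would require consuming the whole topic before each produce.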
But how do you achieve this? To me, this looks like a circular data flow: in order to produce to the topic, I first need to ensure that I have consumed that same whole topic somewhere else.
Am I getting something wrong about event streaming or is it just impossible with Kafka?
Invoices are always assigned a number and sent one by one, not in parallel.
Producers shouldn't care about what has (or hasn't) been consumed.
You seem to simply need to ensure that the producer has acks=1, meaning the broker accepted the message, and that the topic has a single partition to ensure complete ordering.

If you need atomically increasing values across distributed threads/processes, then you'll want to store that number somewhere else rather than rely on the state of the topic (for example, behind a ZooKeeper lock).
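A minimal sketch of that last point, keeping the counter as a single source of truth outside the topic. An in-process `threading.Lock` stands in for a distributed lock such as ZooKeeper's, and `InvoiceNumberer` is an illustrative name, not a real library class:

```python
import threading

class InvoiceNumberer:
    """Counter held outside the topic; the lock serializes read-increment-write."""

    def __init__(self):
        self._lock = threading.Lock()
        self._count = 0

    def next_number(self):
        # Only one thread at a time may advance the counter, so numbers
        # come out consecutive with no gaps or duplicates.
        with self._lock:
            self._count += 1
            return f"I{self._count}"

numberer = InvoiceNumberer()
numbers = []

def worker():
    for _ in range(100):
        numbers.append(numberer.next_number())

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All 400 numbers are unique and form exactly I1..I400.
assert sorted(numbers, key=lambda n: int(n[1:])) == [f"I{i}" for i in range(1, 401)]
```

The producer then asks this service for a number before producing, instead of replaying the topic to count events.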