Disruptor behavior - Drain full buffer before consuming new data

I have the following scenario.

We were load testing our application in the cloud on K8s. Inbound messages arrive over Kafka and we write results back to Kafka. Our architecture is such that Kafka consumer threads push each message onto a disruptor (BlockingWaitStrategy, ring size 512) and a business thread takes messages off the disruptor for processing. To simulate load we primed our Kafka topic (4 partitions) with close to 500K messages while the application was not running, then started the application to gauge the load. The setup looks roughly like the sketch below.
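For reference, a minimal sketch of this setup using the LMAX Disruptor 3.x DSL; `MessageEvent` and the handler body are illustrative placeholders, not our production code:

```java
import com.lmax.disruptor.BlockingWaitStrategy;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.dsl.ProducerType;
import com.lmax.disruptor.util.DaemonThreadFactory;

public class DisruptorSetup {

    // Illustrative event holder; the real event would carry the Kafka record.
    static class MessageEvent {
        String payload;
    }

    public static void main(String[] args) {
        // 512-slot ring buffer with a blocking wait strategy, as described above.
        // ProducerType.MULTI because several Kafka consumer threads publish.
        Disruptor<MessageEvent> disruptor = new Disruptor<>(
                MessageEvent::new,
                512,
                DaemonThreadFactory.INSTANCE,
                ProducerType.MULTI,
                new BlockingWaitStrategy());

        // The business thread: a single event handler draining the ring buffer.
        disruptor.handleEventsWith((event, sequence, endOfBatch) -> {
            // process event.payload and write the result back to Kafka
        });

        // Handlers begin consuming as soon as the disruptor is started.
        RingBuffer<MessageEvent> ringBuffer = disruptor.start();
    }
}
```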

What we saw was that the disruptor fills up completely, with 0 capacity remaining, then drains, and this cycle repeats over and over.

Is this the expected behavior, or are we using the disruptor in the wrong way? Please share your thoughts.

1 Answer

You should start the disruptor before enqueueing messages to it. The events will be processed by the event handlers as soon as they are published to the ring buffer.

The number of events waiting in the ring buffer at any one time will depend on the imbalance between the producers' and the consumers' processing rates.
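One way to observe that imbalance directly is to sample occupancy periodically (a hypothetical monitoring hook; `ringBuffer` is the `RingBuffer` returned by `disruptor.start()`):

```java
// Sample the ring buffer's occupancy to see how far the producers run ahead.
long capacity = ringBuffer.getBufferSize();
long free = ringBuffer.remainingCapacity();
System.out.printf("ring buffer occupancy: %d/%d%n", capacity - free, capacity);
```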

Note that when the ring buffer is full you will experience back pressure: calling the next() method to claim a sequence in the ring buffer will block your producer thread until the consumer frees a slot, as the sketch below shows.
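Continuing the sketch from the question (`ringBuffer` comes from `disruptor.start()`; `messagesFromKafka` stands in for your Kafka poll loop):

```java
// Start the event handlers first; only then begin publishing.
RingBuffer<MessageEvent> ringBuffer = disruptor.start();

for (String message : messagesFromKafka) { // placeholder message source
    // next() claims the next slot; when all 512 slots are occupied it
    // blocks this producer thread until the consumer frees one up.
    long seq = ringBuffer.next();
    try {
        ringBuffer.get(seq).payload = message;
    } finally {
        // publish() makes the slot visible to the event handlers.
        ringBuffer.publish(seq);
    }
}
```

This blocking is what produces the fill-then-drain pattern you observed: with 500K messages already waiting in the topic, the Kafka threads outpace the business thread, the ring fills, the producers stall on next(), and the consumer drains the buffer before they can refill it.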