How do I send lots of messages over NServiceBus without locking the Queue?


So I was doing some performance evaluations of NServiceBus and I realized that it behaves very oddly if you try to send, say, 1000 messages all at the same time... It actually sends them all async (which is fine), but it locks the queue from the handler. The result is that the handler cannot process any messages until the sender has completed sending all of them.

The behavior shows up in two slightly different ways.

  • Inside a handler, if you do a lot of sending, it looks like the receiving queue is locked until the handler completes (so even if you add a thread sleep between each send, the receiver won't start handling messages until the handler completes).

  • If I just send the messages from a newed-up Bus, then a small sleep between sends breaks the relationship, but if I send, say, 1000 messages all at "once", the handler won't get the first one until after the last one is written, even though each one (at that point) should be a separate call.

Is there an undocumented batch-send strategy here, or is something else going on? I understand you wouldn't "want" to do this normally, but understanding what happens during a Send from a handler, or a batch send from a normal Bus, is pretty important to know ;-).


There are 2 best solutions below


NServiceBus message handlers, by default, run wrapped in a TransactionScope. The processing of a message, any updates you do to your business data and any send of new messages will either complete or roll back together. This is what transactional messaging is all about.

If you send 1000 messages in a message handler, then it will not complete until the underlying messaging infrastructure has received all of them successfully. This can take some time, depending on your hardware.

If you want to opt out of this safe-by-default approach, there are several things you can do. You can disable transactional handling for your NServiceBus endpoint, or you can just suppress the ambient transaction scope when sending the messages. Notice, however, that you then no longer have any transactional guarantees: if you get an exception after sending 500 of those 1000 messages, those 500 will be sent, while the other 500 will not.
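The suppression route looks roughly like this. This is only a sketch against the v4/v5-era `IBus` API; `MyCommand`, `StatusUpdate`, and `Sequence` are made-up message types and properties, not part of NServiceBus itself:

```csharp
using System.Transactions;
using NServiceBus;

public class MyHandler : IHandleMessages<MyCommand> // MyCommand is hypothetical
{
    public IBus Bus { get; set; } // injected by NServiceBus

    public void Handle(MyCommand message)
    {
        // Suppress the ambient transaction so each Send is dispatched
        // immediately, instead of being held until the handler's
        // TransactionScope commits.
        using (var scope = new TransactionScope(TransactionScopeOption.Suppress))
        {
            for (var i = 0; i < 1000; i++)
            {
                Bus.Send(new StatusUpdate { Sequence = i }); // hypothetical message
            }
            scope.Complete();
        }
        // Caveat: if the handler throws after this point and the incoming
        // message is retried, these 1000 sends have already gone out and
        // will go out again.
    }
}
```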


One of my team's strategies for this is to break a large batch down into smaller batches, and then have a handler that receives those smaller batches and pushes out an individual event for each item.

Scenario: We have an endpoint that reads a database log file and pushes out a "TransactionOccurred" event for each line of the log file. We then read the log file again after a 10-second timeout and push out another batch of messages.

So, instead of pushing out 5K messages from one handler, we broke it down into 5 batches of 1K apiece and sent each batch as a command. Then we had a handler that received the 1K batch message, looped through it, and published an individual event for each item.
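That batch handler can be sketched as follows, again against the older `IBus` API. `TransactionBatch`, its `Lines` collection, and `TransactionOccurred` are hypothetical names for the 1K-record command and per-line event described above:

```csharp
using NServiceBus;

// Hypothetical command carrying up to 1K log lines per message.
public class TransactionBatchHandler : IHandleMessages<TransactionBatch>
{
    public IBus Bus { get; set; } // injected by NServiceBus

    public void Handle(TransactionBatch message)
    {
        // Fan out: one published event per log line in the batch. Because
        // each batch is its own handler invocation, no single handler holds
        // the queue for the full 5K sends.
        foreach (var line in message.Lines)
        {
            Bus.Publish(new TransactionOccurred { Line = line });
        }
    }
}
```

Keeping each batch small bounds how long any one handler's transaction stays open, which is what caused the original lock-out.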

The issue came in around doing a "publish" for 5K messages at once: several events were being published, and each one had a different set of subscribers, with queues on both the same server and remote servers, which slowed the system down.

With this strategy we were also able to turn the MaximumConcurrencyLevel up a little to process multiple messages at a time, and got higher throughput as a result.

We have done this on a handful of endpoints, and each one is a little different regarding the batch size and the MaximumConcurrencyLevel value. I'd recommend getting a control set of 50-100K messages and moving these values around a little to find what is optimal for your situation.
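For reference, in the v4/v5 era MaximumConcurrencyLevel was set through the TransportConfig section of the endpoint's app.config, roughly like this (the values shown are placeholders to tune per endpoint, not recommendations):

```xml
<configuration>
  <configSections>
    <section name="TransportConfig"
             type="NServiceBus.Config.TransportConfig, NServiceBus.Core" />
  </configSections>
  <!-- MaximumConcurrencyLevel: how many messages the endpoint processes
       in parallel. MaximumMessageThroughputPerSecond="0" means unlimited. -->
  <TransportConfig MaximumConcurrencyLevel="4"
                   MaximumMessageThroughputPerSecond="0" />
</configuration>
```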