I want to pull messages from ActiveMQ into Python and process them in batches: if there are 1000 messages in the queue, I want to dequeue 100 of them, process those, then take the next 100, and so on until all messages are dequeued.
Here is my Python code for the batch listener:
class BatchEventListener(stomp.ConnectionListener):
    def on_message(self, headers, message):
        print('received a message "%s"' % message)

batchLsnr = BatchEventListener()
self.conn = stomp.Connection(host_and_ports=hosts)
self.conn.set_listener('', batchLsnr)
self.batchLsnr = batchLsnr
self.conn.start()
self.conn.connect('username', 'password', wait=True)
self.conn.subscribe(destination='/queue/' + self.queue_name, id=1, ack='auto')
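On top of this listener, the batching I am aiming for looks roughly like the following sketch. It is shown standalone so the batching logic is clear; in real use the class would subclass stomp.ConnectionListener and be registered with conn.set_listener(), and process_batch is a placeholder for my actual processing step:

```python
BATCH_SIZE = 100

class BatchingListener:
    """Buffer incoming messages and hand them off in batches of BATCH_SIZE."""

    def __init__(self, process_batch):
        self.process_batch = process_batch  # called with a list of messages
        self.buffer = []

    def on_message(self, headers, message):
        # Same signature stomp.py uses for listener callbacks.
        self.buffer.append(message)
        if len(self.buffer) >= BATCH_SIZE:
            batch, self.buffer = self.buffer[:BATCH_SIZE], self.buffer[BATCH_SIZE:]
            self.process_batch(batch)

    def flush(self):
        # Process whatever remains as the final (possibly partial) batch.
        if self.buffer:
            self.process_batch(self.buffer)
            self.buffer = []
```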
I wrote a simulator to push messages to ActiveMQ and used it to push 1000 messages. When the consumer starts, the Python code begins pulling data from ActiveMQ, but it pulls more than 100 messages at once: only 100 are processed at a time, yet more than 100 are dequeued. By the time we reach the last batch of 100, no messages are visible in ActiveMQ because they have already been delivered to the Python process.
1. Does stomp hold any messages while dequeuing from ActiveMQ? 2. Does stomp hold any data while a batch is being processed?
You may be seeing the result of prefetch. Try setting the activemq.prefetchSize header in your SUBSCRIBE frame to 1.

Also, try setting your acknowledgement mode to client or client-individual. Using auto will basically trigger the broker to dispatch messages to the client as fast as it can.

Keep in mind that prefetching messages is a performance optimization, so lowering it will potentially result in a performance drop. Of course, performance must be weighed against other factors of functionality. I recommend you test and tune until you meet all of your goals or find an acceptable compromise between them.