I'm currently using Celery 5.2.6 and Redis 6.2.6. With the `task_reject_on_worker_lost` flag enabled, I expect Celery to redeliver a task whose worker died abruptly. However, with Redis as the message broker, the task is not actually redelivered after a worker goes down. With the exact same configuration on RabbitMQ, it works as expected.
Any pointers on how to achieve the same behavior with Redis as message broker?
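For reference, the setup described above looks roughly like this (module, task, and broker URL are illustrative, not taken from my actual project):

```python
# tasks.py -- illustrative sketch of the configuration described above
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

# Acknowledge a task only after it finishes, and ask the broker to
# re-queue tasks whose worker process died before acknowledging them.
app.conf.task_acks_late = True
app.conf.task_reject_on_worker_lost = True

@app.task
def long_running():
    # simulate a long task; kill the worker while this runs
    import time
    time.sleep(60)
```

With RabbitMQ as the broker, killing the worker mid-task causes the message to be re-queued; with Redis it is not.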
I am new to Celery and recently faced the same issue. With the acknowledgment config:
If the broker is Redis:
The task is not re-queued when the worker is killed mid-task and restarted.
But with RabbitMQ:
The task is re-queued and runs.
My environment
Finally, I found this comment in the Celery GitHub issues.
The additional config value `visibility_timeout` in `broker_transport_options` is required for the `redis` broker. I added it to my config and it's working.
FYI, here are my config files:
celery_config.py
app.py
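The file contents are not reproduced above; a minimal sketch of the relevant settings (broker URL and timeout value are illustrative) would be:

```python
# celery_config.py -- illustrative values, not the author's exact file
broker_url = "redis://localhost:6379/0"

task_acks_late = True
task_reject_on_worker_lost = True

# Required for the Redis broker: an unacknowledged message is only
# redelivered after visibility_timeout seconds have elapsed.
broker_transport_options = {"visibility_timeout": 10}
```

```python
# app.py -- illustrative: load the config module into the Celery app
from celery import Celery

import celery_config

app = Celery("app")
app.config_from_object(celery_config)
```

Note that with Redis the redelivery is not immediate, as it is with RabbitMQ: the message comes back only once `visibility_timeout` expires. A short timeout makes redelivery fast for testing, but per the Celery docs it should be longer than your longest expected task (and any ETA/countdown you use), or tasks may be redelivered while still running.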