Can a long running redis job yield the worker and requeue itself?


Is it possible for a job to yield the worker and put itself back to the end of the queue?

Jobs in a Redis queue are processed sequentially, so a long-running job can hog the worker. Is there a pattern by which a job decides it has consumed enough time and yields to the other items in the queue?
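The yield-and-requeue idea can be sketched cooperatively: the job tracks a time budget and, once it is spent, re-enqueues its remaining work instead of finishing. The snippet below is a minimal illustration using a plain in-memory deque as a stand-in for the Redis queue (the names `queue`, `TIME_BUDGET`, and `process_chunk` are made up for the example); in real RQ code the job would call `q.enqueue(...)` on itself from inside the job function instead of `queue.append`.

```python
from collections import deque
import time

# In-memory stand-in for an RQ queue, purely to illustrate the pattern.
queue = deque()

TIME_BUDGET = 0.05  # seconds a job may run before yielding (assumed value)

def process_chunk(items, start=0):
    """Process items from `start`; requeue the remainder if the budget runs out."""
    deadline = time.monotonic() + TIME_BUDGET
    i = start
    while i < len(items):
        items[i] *= 2  # stand-in for the real per-item work
        i += 1
        if time.monotonic() >= deadline:
            # Yield: put the remaining work at the end of the queue and return.
            queue.append((process_chunk, (items, i)))
            return
    # Finished all items; nothing to requeue.

def run_worker():
    """Drain the queue sequentially, like a single RQ worker."""
    while queue:
        func, args = queue.popleft()
        func(*args)

data = list(range(10))
queue.append((process_chunk, (data, 0)))
run_worker()
```

The work completes correctly whether or not the job ever yields, because the requeued continuation carries the resume index with it.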

I note that the rq implementation provides requeue_job for jobs that have "failed". Perhaps that could be used to hack something together?

Or perhaps there is a job timeout that can be leveraged? Or is this line of thinking just another dead end?

1 Answer
There is a job timeout parameter:

job = q.enqueue(count_words_at_url, 'http://stackoverflow.com', timeout=180)

The default timeout is 180 seconds if you do not set it explicitly.

If the job times out, the job will move to the failed queue.

You can later requeue failed jobs (note that newer RQ releases removed get_failed_queue; there the FailedJobRegistry serves the same purpose):

from redis import Redis
from rq import get_failed_queue

r = Redis()
failed_queue = get_failed_queue(r)  # queue holding timed-out and failed jobs
print(failed_queue.count)           # number of failed jobs
for job_id in failed_queue.job_ids:
    failed_queue.requeue(job_id)    # push the job back onto its origin queue