Celery connecting to rabbitmq-server instead of redis-server


I have a Django application in which I want to configure Celery to run background tasks.

Packages:

  1. celery==4.2.1

  2. Django==2.1.3

  3. Python==3.5

  4. Redis-server==3.0.6

The Celery configuration in the settings.py file is:

from celery.schedules import crontab  # needed for the beat schedule below

CELERY_BROKER_URL = 'redis://localhost:6379'

CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Asia/Kolkata'
CELERY_BEAT_SCHEDULE = {
    'task-number-one': {
            'task': 'app.tasks.task_number_one',
            'schedule': crontab(minute='*/1'),
    },
}

And celery.py file:

from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from django.conf import settings
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings.prod')

app = Celery('project')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings')

# Load task modules from all registered Django app configs.
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))

When I run: celery -A project worker -l info -B -E

it connects to the RabbitMQ server instead of the Redis server, as shown below:

 -------------- celery@user-desktop v4.2.1 (windowlicker)
---- **** ----- 
--- * ***  * -- Linux-4.15.0-39-generic-x86_64-with-Ubuntu-18.04-bionic 2018-11-21 12:04:51
-- * - **** --- 
- ** ---------- [config]
- ** ---------- .> app:         project:0x7f8b80f78d30
- ** ---------- .> transport:   amqp://guest:**@localhost:5672//
- ** ---------- .> results:     redis://localhost:6379/
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: ON
--- ***** ----- 
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery


[tasks]
  . app.tasks.task_number_one
  . project.celery.debug_task

[2018-11-21 12:04:51,741: INFO/Beat] beat: Starting...

The same happened in the production environment. In production I have deployed the Django application with Gunicorn and Nginx, and now I want to implement some method to run periodic background tasks, as the django-crontab package is not working.

Problem:

  1. What is the problem with celery configuration?

  2. Could anyone please recommend a method to run periodic background task?

Note: I have tried implementing supervisor, but it seems supervisor is not compatible with Python 3, and therefore I could not configure it.
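For the production question, one supervisor-free option on a systemd distribution such as Ubuntu 18.04 is a systemd unit that keeps the worker (with embedded beat, via -B) running; the unit path, user, directories, and project name below are all assumptions that need adjusting:

```ini
# /etc/systemd/system/celery.service (hypothetical paths and values)
[Unit]
Description=Celery worker with embedded beat
After=network.target redis-server.service

[Service]
User=www-data
WorkingDirectory=/srv/project
ExecStart=/srv/project/venv/bin/celery -A project worker -l info -B -E
Restart=always

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now celery`. Note that -B embeds the beat scheduler in the worker, which is only safe with a single worker instance; with multiple workers, run beat as its own service.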

There are 6 answers below.

BEST ANSWER

The setting name for the broker URL changed in v4: because your celery.py calls app.config_from_object('django.conf:settings') without namespace='CELERY', Celery reads the old-style setting names, so it should be BROKER_URL and not CELERY_BROKER_URL. (The result backend already uses the old-style name CELERY_RESULT_BACKEND, which is why the startup banner shows Redis for results but amqp for the transport.)
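A minimal sketch of the corrected settings.py lines, assuming celery.py keeps app.config_from_object('django.conf:settings') with no namespace argument:

```python
# Old-style (pre-4.0) setting names, which are what Celery reads when
# config_from_object is called without a namespace:
BROKER_URL = 'redis://localhost:6379'             # was CELERY_BROKER_URL
CELERY_RESULT_BACKEND = 'redis://localhost:6379'  # unchanged: already old-style
```

Note that the beat schedule's old-style name is CELERYBEAT_SCHEDULE (no underscore between CELERY and BEAT), so the CELERY_BEAT_SCHEDULE dict in the question may be silently ignored for the same reason.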

ANSWER

After changing BROKER_URL to CELERY_BROKER_URL, you also have to change this line in celery.py:

app = Celery('proj')

Add backend='redis://localhost' and broker='redis://' so it looks like this:

app = Celery('proj', backend='redis://localhost', broker='redis://')

now it will work :)

ANSWER

If you use Redis as the broker and queue tasks with the .delay() method, and you get a strange error 111 (connection refused) to RabbitMQ (which you don't use at all), try .apply_async() instead.

This behavior happened for me in production.
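For context, in Celery's documented API, t.delay(*args, **kwargs) is shorthand for t.apply_async(args=args, kwargs=kwargs), so both normally target the same broker; the relationship can be mimicked in a stand-alone sketch (illustrative only, not Celery code):

```python
def apply_async(args=(), kwargs=None, **options):
    # Stand-in for Task.apply_async: pretend to enqueue on the broker.
    return ("enqueued", tuple(args), dict(kwargs or {}))

def delay(*args, **kwargs):
    # Stand-in for Task.delay: documented shorthand for apply_async.
    return apply_async(args=args, kwargs=kwargs)

# Both calls produce the same enqueue request:
assert delay(1, 2, x=3) == apply_async(args=(1, 2), kwargs={"x": 3})
```

If the two methods really behave differently in your deployment, that usually points to the broker configuration being resolved at different times (e.g. before vs. after Django settings are loaded), not to a difference in the methods themselves.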

ANSWER

Replace CELERY_BROKER_URL = 'redis://localhost:6379' with BROKER_URL = 'redis://localhost:6379'. This worked for me.

ANSWER

If you copied the content of celery.py from the official Celery docs (https://docs.celeryproject.org/en/latest/django/first-steps-with-django.html), try changing the following line from

app.config_from_object('django.conf:settings', namespace='CELERY')

to

app.config_from_object('django.conf:settings', namespace='')
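To see why the namespace argument matters, here is a small stand-alone mimic (illustrative only, not Celery internals) of how a namespace filters and renames Django settings into Celery config keys:

```python
def load_config(settings, namespace=None):
    # With namespace='CELERY', only CELERY_-prefixed settings are used and
    # the prefix is stripped (CELERY_BROKER_URL -> broker_url).
    # Without a namespace, names are taken as-is, so old-style BROKER_URL
    # wins and CELERY_BROKER_URL is never consulted for the broker.
    if namespace:
        prefix = namespace + "_"
        return {k[len(prefix):].lower(): v
                for k, v in settings.items() if k.startswith(prefix)}
    return {k.lower(): v for k, v in settings.items()}

django_settings = {
    "CELERY_BROKER_URL": "redis://localhost:6379",
    "BROKER_URL": "amqp://guest@localhost//",  # Celery's amqp default style
}

assert load_config(django_settings, "CELERY")["broker_url"].startswith("rediss") is False
assert load_config(django_settings, "CELERY")["broker_url"].startswith("redis")
assert load_config(django_settings)["broker_url"].startswith("amqp")
```

This mirrors the question's symptom: with no namespace, the CELERY_BROKER_URL setting is never mapped onto the broker, and the amqp default wins.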

ANSWER

In my case the issue was using CELERY_BROKER_URL as the environment variable carrying the rediss endpoint, and then building the Django setting CELERY_BROKER_URL from it:

if is_env_var_set("CELERY_BROKER_URL"):  # bug: checks a different variable than the one read below
    CELERY_BROKER_URL = (
        "rediss://" +
        os.getenv("CELERY_BROKER_ENDPOINT") +
        ":6379" + "/0" +
        "?ssl_cert_reqs=required"
    )
else:
    CELERY_BROKER_URL = 'redis://localhost:6379/0'

Setting the environment variable:

export CELERY_BROKER_URL="xyz.redis.region.oci.oraclecloud.com"

This caused Celery to use the amqp scheme instead of the defined rediss scheme.

CELERY_BROKER_URL is a special environment variable that Celery itself reads; using it to inject the Redis endpoint caused this difficult-to-debug issue.

Solution

if is_env_var_set("CELERY_BROKER_ENDPOINT"):
    CELERY_BROKER_URL = (
        # note the rediss:// instead of redis:// to use SSL
        "rediss://" +
        os.getenv("CELERY_BROKER_ENDPOINT") +
        ":6379" + "/0" +
        "?ssl_cert_reqs=required"
    )
else:
    CELERY_BROKER_URL = 'redis://localhost:6379/0'
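The fix above can be sketched as a small helper that never reuses the reserved CELERY_BROKER_URL name for a scheme-less endpoint (is_env_var_set and CELERY_BROKER_ENDPOINT are the answer's own assumed names, replaced here with a plain dict lookup):

```python
import os

def build_broker_url(env=None):
    # Read the hostname-only endpoint from CELERY_BROKER_ENDPOINT, never
    # from CELERY_BROKER_URL, which Celery interprets as a full URL.
    env = os.environ if env is None else env
    endpoint = env.get("CELERY_BROKER_ENDPOINT")
    if endpoint:
        # rediss:// (note the double s) selects a TLS connection.
        return "rediss://" + endpoint + ":6379/0?ssl_cert_reqs=required"
    return "redis://localhost:6379/0"

CELERY_BROKER_URL = build_broker_url()
```

With this split, the environment carries only the endpoint hostname, and the scheme is always attached in settings, so Celery never sees a scheme-less value and silently falls back to amqp.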