[Django]-Celery not connecting to redis server

16👍

Changing localhost to 127.0.0.1 solved the problem in my case:

CELERY_BROKER_URL = 'redis://127.0.0.1:6379'
CELERY_RESULT_BACKEND = 'redis://127.0.0.1:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True
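
For context, these CELERY_-prefixed names are picked up by a celery.py along these lines. A minimal sketch; the package name myproject is a placeholder, not part of the original answer:

# myproject/celery.py  ("myproject" is a placeholder package name)
import os

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

app = Celery("myproject")
# namespace="CELERY" makes Celery read the CELERY_-prefixed settings above
# (CELERY_BROKER_URL, CELERY_RESULT_BACKEND, ...) from django.conf:settings.
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()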

2👍

I am using Celery & Redis. I had the same issue, and I resolved it like this:

In your celery.py, go to the line

app.config_from_object("django.conf:settings", namespace="CELERY")

You just have to remove namespace="CELERY", so that finally your code reads

app.config_from_object("django.conf:settings")

This is working perfectly in my case.
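
For what it's worth, a hedged sketch of why this change matters: without namespace="CELERY", Celery reads its own (non-namespaced) setting names from Django's settings module, and since Django only exposes upper-case settings, the broker entry would typically use the old-style name. Double-check against the Celery docs for your version:

# myproject/settings.py (relevant lines only; old-style, non-namespaced names)
# Assumption: config_from_object("django.conf:settings") with no namespace
# picks these up -- verify against the Celery docs for your version.
BROKER_URL = "redis://127.0.0.1:6379/0"
CELERY_RESULT_BACKEND = "redis://127.0.0.1:6379/0"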

1👍

If you look closely at your celery output from celery@octopus, you’ll see that it is connected to an amqp broker and not a redis broker: amqp://guest:**@localhost:5672//. This means that your octopus worker has been configured somewhere to point to a RabbitMQ broker, not a Redis broker. To correct this, you’ll have to find where that RabbitMQ broker setting lives and see how it is being pulled into Celery, because that broker_url tells us that Celery is being reconfigured elsewhere, or that other settings are being applied on the server.
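
One quick way to see which broker configuration is actually winning is to load the Celery app and print its resolved settings. A small sketch, assuming a project package named myproject (a placeholder, not from the original question):

# Run inside a Django shell (python manage.py shell); "myproject" is a placeholder.
from myproject.celery import app

print(app.conf.broker_url)      # amqp://guest:**@localhost:5672// means the RabbitMQ config is winning
print(app.conf.result_backend)  # redis://... means the Redis result backend is applied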

👤2ps

1👍

I was having the same problem. I solved it by running Celery with the right settings. Your configuration is absolutely right; the problem lies in how you are running Celery. You might be providing the wrong application module, or not providing it at all, when running Celery.

You need to specify --app=<module_which_contains_your_celery_file>

The exact syntax is the following:

celery --app=<APPLICATION> worker

When celery is run with no --app or -A option, it will use the default transport url amqp://guest:**@localhost:5672//.

This can also be observed in your celery logs.

In the first celery log:

- ** ---------- .> transport:  amqp://guest:**@localhost:5672//
- ** ---------- .> results:     disabled://

And in the other (working) celery log:

- ** ---------- .> transport:   redis://localhost:6379//
- ** ---------- .> results:     redis://localhost:6379/
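
So, as a concrete usage example (myproject is only a placeholder for whichever package contains your celery.py):

celery --app=myproject worker --loglevel=info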

0👍

Please update CELERY_BROKER_URL and CELERY_RESULT_BACKEND like the following (note the /0 database suffix):

CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'

You can check the Celery documentation regarding using Redis as a broker here.

👤ruddra

0👍

In the interest of closing out this question, I will answer it. To be honest, I am not sure how I fixed the problem, but it just went away after a few changes and reboots of my system. The setup is still the same as above.

I later discovered that I had a module-naming issue: two modules had the same name. Once I corrected that, most of my other Celery problems went away. To be clear, though, the Redis/Celery part was working before I fixed the module naming issue.

Thanks to everyone who posted suggestions to my question!

0👍

Completely unrelated to how you solved it, but I had a similar issue where the “transport” URL in the config was pointing at port 5672 (RabbitMQ’s default) instead of Redis’s 6379; the results URL was correct. While debugging earlier I had removed namespace from app.config_from_object. Putting it back solved my issue. I’m leaving this here for anybody who makes the same mistake and ends up here.
