Question:
I'm trying to run an example from the Celery documentation.
I run: celeryd --loglevel=INFO
/usr/local/lib/python2.7/dist-packages/celery/loaders/default.py:64: NotConfigured: No 'celeryconfig' module found! Please make sure it exists and is available to Python.
"is available to Python." % (configname, )))
[2012-03-19 04:26:34,899: WARNING/MainProcess]
-------------- celery@ubuntu v2.5.1
---- **** -----
--- * *** * -- [Configuration]
-- * - **** --- . broker: amqp://guest@localhost:5672//
- ** ---------- . loader: celery.loaders.default.Loader
- ** ---------- . logfile: [stderr]@INFO
- ** ---------- . concurrency: 4
- ** ---------- . events: OFF
- *** --- * --- . beat: OFF
-- ******* ----
--- ***** ----- [Queues]
-------------- . celery: exchange:celery (direct) binding:celery
tasks.py:
# -*- coding: utf-8 -*-
from celery.task import task

@task
def add(x, y):
    return x + y
run_task.py:
# -*- coding: utf-8 -*-
from tasks import add
result = add.delay(4, 4)
print (result)
print (result.ready())
print (result.get())
In the same folder, celeryconfig.py:
CELERY_IMPORTS = ("tasks", )
CELERY_RESULT_BACKEND = "amqp"
BROKER_URL = "amqp://guest:guest@localhost:5672//"
CELERY_TASK_RESULT_EXPIRES = 300
When I run run_task.py, the Python console prints:
eb503f77-b5fc-44e2-ac0b-91ce6ddbf153
False
and these errors appear on the celeryd server:
[2012-03-19 04:34:14,913: ERROR/MainProcess] Received unregistered task of type 'tasks.add'.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you are using relative imports?
Please see http://bit.ly/gLye1c for more information.
The full contents of the message body was:
{'retries': 0, 'task': 'tasks.add', 'utc': False, 'args': (4, 4), 'expires': None, 'eta': None, 'kwargs': {}, 'id': '841bc21f-8124-436b-92f1-e3b62cafdfe7'}
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/worker/consumer.py", line 444, in receive_message
self.strategies[name](message, body, message.ack_log_error)
KeyError: 'tasks.add'
Please explain what the problem is.
Answer 1:
You can see the current list of registered tasks in the celery.registry.TaskRegistry class. It could be that your celeryconfig (in the current directory) is not in PYTHONPATH, so Celery can't find it and falls back to defaults. Simply specify it explicitly when starting Celery:
celeryd --loglevel=INFO --settings=celeryconfig
You can also set --loglevel=DEBUG, and you should see the problem immediately.
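As a quick sanity check, you can also print the registry yourself. This is a minimal sketch assuming a modern app-style setup (on Celery 2.x the registry lives in celery.registry instead); the broker URL matches the question:
# inspect_registry.py -- hypothetical helper, not from the answer
from celery import Celery

app = Celery('proj', broker='amqp://guest:guest@localhost:5672//')
app.config_from_object('celeryconfig')  # pulls in CELERY_IMPORTS = ("tasks",)

if __name__ == '__main__':
    # app.tasks is the task registry; its keys are the registered names
    for name in sorted(app.tasks):
        print(name)
If tasks.add does not show up here, the worker will not know about it either.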
Answer 2:
I had the same problem: the reason for "Received unregistered task of type.." was that the celeryd service didn't find and register the tasks on service start (by the way, their list is visible when you start ./manage.py celeryd --loglevel=info).
These tasks should be declared in CELERY_IMPORTS = ("tasks",) in the settings file.
If you have a special celery_settings.py file, it has to be declared on celeryd service start as --settings=celery_settings (a module name, without the .py extension), as digivampire wrote.
Answer 3:
I think you need to restart the worker server. I ran into the same problem and solved it by restarting.
Answer 4:
Whether you use CELERY_IMPORTS or autodiscover_tasks, the important point is that the tasks can be found, and the names of the tasks registered in Celery should match the names the workers try to fetch.
When you launch Celery, say with celery worker -A project --loglevel=DEBUG, you should see the names of the tasks. For example, if I have a debug_task task in my celery.py:
[tasks]
. project.celery.debug_task
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap
If you can't see your tasks in the list, please check that your Celery configuration imports the tasks correctly, either in --settings, --config, celeryconfig or config_from_object.
If you are using celery beat, make sure the task name (task) you use in CELERYBEAT_SCHEDULE matches the name in the Celery task list.
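To make that concrete, here is a minimal sketch of a celery.py that would produce the project.celery.debug_task entry in the listing above (the module layout and names are assumptions):
# project/celery.py -- hypothetical layout matching the [tasks] listing
from celery import Celery

app = Celery('project')
app.config_from_object('celeryconfig')  # same effect as --config on the CLI

@app.task
def debug_task():
    print('debug task ran')
Started with celery worker -A project --loglevel=DEBUG, the worker registers this task under the name project.celery.debug_task, and that exact string is what beat or any caller must use.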
Answer 5:
I also had the same problem; I added
CELERY_IMPORTS = ("mytasks",)
in my celeryconfig.py file to solve it. (Note the trailing comma: without it, ("mytasks") is just a string rather than a tuple.)
Answer 6:
For me this error was solved by ensuring the app containing the tasks was included under Django's INSTALLED_APPS setting.
Answer 7:
Using --settings did not work for me. I had to use the following to get it all to work:
celery --config=celeryconfig --loglevel=INFO
Here is the celeryconfig file that has the CELERY_IMPORTS added:
# Celery configuration file
BROKER_URL = 'amqp://'
CELERY_RESULT_BACKEND = 'amqp://'
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'America/Los_Angeles'
CELERY_ENABLE_UTC = True
CELERY_IMPORTS = ("tasks",)
My setup was a little trickier because I'm using supervisor to launch Celery as a daemon.
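For reference, a minimal supervisor program section for this kind of setup could look as follows; the paths, user, and program name are assumptions, not from the answer:
[program:celery]
command=/usr/local/bin/celery worker --config=celeryconfig --loglevel=INFO
; the folder that contains celeryconfig.py and tasks.py
directory=/opt/myproject
user=celery
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/celery/worker.log
The directory line matters: supervisor must start the worker where celeryconfig.py is importable, or you are back to the unregistered-task error.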
Answer 8:
I had this problem mysteriously crop up when I added some signal handling to my Django app. In doing so I converted the app to use an AppConfig, meaning that instead of simply reading 'booking' in INSTALLED_APPS, it read 'booking.app.BookingConfig'.
Celery doesn't understand what that means, so I added INSTALLED_APPS_WITH_APPCONFIGS = ('booking',) to my Django settings, and modified my celery.py from
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
to
app.autodiscover_tasks(
    lambda: settings.INSTALLED_APPS + settings.INSTALLED_APPS_WITH_APPCONFIGS
)
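For context, this is where that line lives in the standard Django wiring (a sketch; proj is an assumed project name):
# proj/celery.py -- standard Django setup with the extended autodiscover call
import os

from celery import Celery
from django.conf import settings

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

app = Celery('proj')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(
    lambda: settings.INSTALLED_APPS + settings.INSTALLED_APPS_WITH_APPCONFIGS
)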
Answer 9:
I had the same problem running tasks from Celery Beat. Celery doesn't like relative imports, so in my celeryconfig.py I had to explicitly set the full package name:
app.conf.beat_schedule = {
    'add-every-30-seconds': {
        'task': 'full.path.to.add',
        'schedule': 30.0,
        'args': (16, 16)
    },
}
Answer 10:
app = Celery('proj',
             broker='amqp://',
             backend='amqp://',
             include=['proj.tasks'])
Make sure you pass include=['proj.tasks']. Then go to the top-level directory and run
celery -A app.celery_module.celeryapp worker --loglevel=info
not
celery -A celeryapp worker --loglevel=info
In your celeryconfig.py, set imports = ("path.ptah.tasks",), and invoke the task from another module!
Answer 11:
I did not have any issue with Django, but encountered this when I was using Flask. The solution was setting the config option:
celery worker -A app.celery --loglevel=DEBUG --config=settings
while with Django, I just had:
python manage.py celery worker -c 2 --loglevel=info
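Going back to the Flask case, a typical wiring behind that command looks like the following sketch; app.py, settings, and the task are assumptions, not from the answer:
# app.py -- hypothetical Flask + Celery wiring; 'app.celery' on the CLI
# resolves to this module's 'celery' attribute
from flask import Flask
from celery import Celery

flask_app = Flask(__name__)
celery = Celery(flask_app.import_name)
celery.config_from_object('settings')  # equivalent to --config=settings above

@celery.task
def ping():
    return 'pong'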
Answer 12:
What worked for me was to add an explicit name to the Celery task decorator. I changed my task declaration from @app.task to @app.task(name='module.submodule.task').
Here is an example:
# test_task.py (before)
@celery.task
def test_task():
    print("Celery Task !!!!")

# test_task.py (after)
@celery.task(name='tasks.test.test_task')
def test_task():
    print("Celery Task !!!!")
Answer 13:
This, strangely, can also be caused by a missing package. Run pip to install all necessary packages:
pip install -r requirements.txt
autodiscover_tasks wasn't picking up tasks that used missing packages.
Answer 14:
If you are running into this kind of error, there are a number of possible causes, but the solution I found was that my celeryd config file in /etc/defaults/celeryd was configured for standard use, not for my specific Django project. As soon as I converted it to the format specified in the Celery docs, all was well.
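For comparison, the docs' Django-style format looks roughly like this (a sketch; every path and name here is an assumption):
# /etc/default/celeryd
CELERYD_NODES="worker1"
CELERY_BIN="/usr/local/bin/celery"
CELERY_APP="proj"
# the directory containing manage.py
CELERYD_CHDIR="/opt/myproject/"
CELERYD_OPTS="--concurrency=4"
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_USER="celery"
CELERYD_GROUP="celery"
CELERYD_CHDIR is usually the piece that fixes the unregistered-task error: the init script must change into the project directory before the worker starts.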
Answer 15:
The solution for me was to add this line to /etc/default/celeryd:
CELERYD_OPTS="-A tasks"
Because when I ran these commands:
celery worker --loglevel=INFO
celery worker -A tasks --loglevel=INFO
only the latter command showed task names at all.
I also tried adding a CELERY_APP line to /etc/default/celeryd, but that didn't work either:
CELERY_APP="tasks"
Answer 16:
I had the issue with PeriodicTask classes in django-celery: while their names showed up fine when starting the celery worker, every execution triggered:
KeyError: u'my_app.tasks.run'
My task was a class named 'CleanUp', not just a method called 'run'.
When I checked the 'djcelery_periodictask' table I saw outdated entries, and deleting them fixed the issue.
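If you prefer the Django shell to raw SQL, here is a sketch using django-celery's model (the task path is the hypothetical one from the error above):
# Django shell: compare what beat will send with what the worker registers
from djcelery.models import PeriodicTask

for pt in PeriodicTask.objects.all():
    print(pt.name, '->', pt.task)

# then delete the stale rows, e.g.:
# PeriodicTask.objects.filter(task='my_app.tasks.run').delete()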
Answer 17:
Just to add my two cents for my case with this error...
My path is /vagrant/devops/test, with app.py and __init__.py in it.
When I run
cd /vagrant/devops/ && celery worker -A test.app.celery --loglevel=info
I get this error. But when I run it like
cd /vagrant/devops/test && celery worker -A app.celery --loglevel=info
everything is OK.
Answer 18:
I found that one of our programmers had added the following line to one of the imported modules:
os.chdir(<path_to_a_local_folder>)
This caused the Celery worker to change its working directory from the project's default working directory (where it could find the tasks) to a different one (where it couldn't).
After removing this line of code, all tasks were found and registered.
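A cheap way to catch this class of problem is a throwaway task that reports the worker's working directory; a sketch, not from the answer:
# hypothetical debug task -- call it and compare the result with your project root
import os

from celery import shared_task

@shared_task
def where_am_i():
    return os.getcwd()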
Answer 19:
Celery doesn't support relative imports, so in my celeryconfig.py I needed absolute imports:
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'add_num': {
        'task': 'app.tasks.add_num.add_nums',
        'schedule': timedelta(seconds=10),
        'args': (1, 2)
    }
}
Answer 20:
An additional item for a really useful list.
I have found Celery unforgiving about errors in tasks (or at least I haven't been able to trace the appropriate log entries): it simply doesn't register them. I have had a number of issues running Celery as a service, predominantly permissions related.
The latest was about permissions writing to a log file. I had no issues in development or when running Celery at the command line, but the service reported the task as unregistered.
I needed to change the log folder permissions so that the service could write to it.
Answer 21:
My 2 cents:
I was getting this in a Docker image using Alpine. The Django settings referenced /dev/log for logging to syslog. The Django app and the celery worker were both based on the same image. The entrypoint of the Django app image launched syslogd on start, but the one for the celery worker did not. This caused things like ./manage.py shell to fail, because there wasn't any /dev/log. The celery worker was not failing; instead, it silently ignored the rest of the app launch, which included loading shared_task entries from applications in the Django project.
Answer 22:
In my case the error occurred because one container created files in a folder that was mounted on the host file system with docker-compose.
I just had to remove the files created by the container on the host system, and I was able to launch my project again.
sudo rm -Rf foldername
(I had to use sudo because the files were owned by the root user.)
Docker version: 18.03.1
Answer 23:
If you use autodiscover_tasks, make sure that the functions you want registered live in tasks.py, not in any other file; otherwise Celery cannot find them.
Using app.register_task will also do the job, but it seems a little naive.
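For completeness, a sketch of that manual route (register_task is available in Celery 4+; the names here are assumptions):
# explicit registration with a class-based task
from celery import Celery, Task

app = Celery('proj', broker='amqp://')

class AddTask(Task):
    name = 'proj.add'  # this exact string is what workers and callers look up

    def run(self, x, y):
        return x + y

add = app.register_task(AddTask())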
Please refer to the official specification of autodiscover_tasks:
def autodiscover_tasks(self, packages=None, related_name='tasks', force=False):
"""Auto-discover task modules.
Searches a list of packages for a "tasks.py" module (or use
related_name argument).
If the name is empty, this will be delegated to fix-ups (e.g., Django).
For example if you have a directory layout like this:
.. code-block:: text
    foo/__init__.py
       tasks.py
       models.py
    bar/__init__.py
       tasks.py
       models.py
    baz/__init__.py
       models.py
Then calling ``app.autodiscover_tasks(['foo', 'bar', 'baz'])`` will
result in the modules ``foo.tasks`` and ``bar.tasks`` being imported.
Arguments:
packages (List[str]): List of packages to search.
This argument may also be a callable, in which case the
value returned is used (for lazy evaluation).
related_name (str): The name of the module to find. Defaults
to "tasks": meaning "look for 'module.tasks' for every
module in ``packages``."
force (bool): By default this call is lazy so that the actual
auto-discovery won't happen until an application imports
the default modules. Forcing will cause the auto-discovery
to happen immediately.
"""
Answer 24:
I encountered this problem as well, but it is not quite the same, so just FYI. Recent upgrades cause this error message due to this decorator syntax:
ERROR/MainProcess] Received unregistered task of type 'my_server_check'.
@task('my_server_check')
had to be changed to just
@task()
No clue why.
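A plausible explanation, though the answer doesn't confirm it: newer Celery expects a custom name as a keyword argument, so the positional form no longer sets the task name:
# assumed equivalent if you still want the explicit name (hypothetical)
@task(name='my_server_check')
def my_server_check():
    ...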