Django 1.6 + RabbitMQ 3.2.3 + Celery 3.1.9 - why does my celery worker die with WorkerLostError: Worker exited prematurely: signal 11 (SIGSEGV)?

Posted 2019-06-27 14:44

This seems to address a very similar issue, but doesn't give me quite enough insight: https://github.com/celery/billiard/issues/101. It sounds like it might be a good idea to try a non-SQLite database...

I have a straightforward celery setup with my django app. In my settings.py file I set a task to run as follows:

from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'sync_database': {
        'task': 'apps.data.tasks.celery_sync_database',
        'schedule': timedelta(minutes=5),
    },
}
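
The scheduled task itself lives in apps/data/tasks.py. The sketch below is only an assumption of its shape; nothing but the dotted name apps.data.tasks.celery_sync_database appears in the schedule above, and the body is a placeholder.

# apps/data/tasks.py -- hypothetical sketch; the task body is a placeholder.
from __future__ import absolute_import

from celery import shared_task


@shared_task
def celery_sync_database():
    # ... pull remote data and synchronize the local database ...
    pass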

I have followed the instructions here: http://celery.readthedocs.org/en/latest/django/first-steps-with-django.html
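For reference, following those instructions gives a project-level myproj/celery.py roughly like the sketch below. This mirrors the documented Celery 3.1 setup (the worker output further down does list myproj.celery.debug_task), but it is a sketch rather than a verbatim copy of my file.

# myproj/celery.py -- sketch of the app module from the linked docs
from __future__ import absolute_import

import os

from celery import Celery
from django.conf import settings

# Make sure Django settings are loaded before the Celery app is configured.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproj.settings')

app = Celery('myproj')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))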

I am able to open two new terminal windows and run celery processes as follows:

ONE - the celery beat process which is required for scheduled tasks and will put the task on the queue:

PROMPT> celery -A myproj beat
celery beat v3.1.9 (Cipater) is starting.
__    -    ... __   -        _
Configuration ->
    . broker -> amqp://myproj@localhost:5672//
    . loader -> celery.loaders.app.AppLoader
    . scheduler -> djcelery.schedulers.DatabaseScheduler

    . logfile -> [stderr]@%INFO
    . maxinterval -> now (0s)
[2014-02-20 16:15:20,085: INFO/MainProcess] beat: Starting...
[2014-02-20 16:15:20,086: INFO/MainProcess] Writing entries...
[2014-02-20 16:15:20,143: INFO/MainProcess] DatabaseScheduler: Schedule changed.
[2014-02-20 16:15:20,143: INFO/MainProcess] Writing entries...
[2014-02-20 16:20:20,143: INFO/MainProcess] Scheduler: Sending due task sync_database (apps.data.tasks.celery_sync_database)
[2014-02-20 16:20:20,161: INFO/MainProcess] Writing entries...

TWO - the celery worker, which should take the task off the queue and run it:

PROMPT> celery -A myproj worker -l info

 -------------- celery@Jons-MacBook.local v3.1.9 (Cipater)
---- **** -----
--- * ***  * -- Darwin-13.0.0-x86_64-i386-64bit
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         myproj:0x1105a1050
- ** ---------- .> transport:   amqp://myproj@localhost:5672//
- ** ---------- .> results:     djcelery.backends.database:DatabaseBackend
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ----
--- ***** ----- [queues]
 -------------- .> celery           exchange=celery(direct) key=celery


[tasks]
  . apps.data.tasks.celery_sync_database
  . myproj.celery.debug_task

[2014-02-20 16:15:29,402: INFO/MainProcess] Connected to amqp://myproj@127.0.0.1:5672//
[2014-02-20 16:15:29,419: INFO/MainProcess] mingle: searching for neighbors
[2014-02-20 16:15:30,440: INFO/MainProcess] mingle: all alone
[2014-02-20 16:15:30,474: WARNING/MainProcess] celery@Jons-MacBook.local ready.

When the task gets sent, however, it appears that about 50% of the time the worker runs the task and the other 50% of the time I get the following error:

[2014-02-20 16:35:20,159: INFO/MainProcess] Received task: apps.data.tasks.celery_sync_database[960bcb6c-d6a5-4e32-8267-cfbe2b411b25]
[2014-02-20 16:36:54,561: ERROR/MainProcess] Process 'Worker-4' pid:19500 exited with exitcode -11
[2014-02-20 16:36:54,580: ERROR/MainProcess] Task apps.data.tasks.celery_sync_database[960bcb6c-d6a5-4e32-8267-cfbe2b411b25] raised unexpected: WorkerLostError('Worker exited prematurely: signal 11 (SIGSEGV).',)
Traceback (most recent call last):
  File "/Users/jon/dev/vpe/VAN/lib/python2.7/site-packages/billiard/pool.py", line 1168, in mark_as_worker_lost
    human_status(exitcode)),
WorkerLostError: Worker exited prematurely: signal 11 (SIGSEGV).

I am developing on a MacBook Pro running OS X Mavericks.

Versions: Celery 3.1.9, RabbitMQ 3.2.3, Django 1.6.

Note that I am using django-celery 3.1.9 and have the djcelery app enabled.
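
The relevant settings, as implied by the broker, scheduler, and result-backend lines in the output above, look roughly like this. It is a sketch: anything not visible in the logs is an assumption.

# settings.py -- sketch of the django-celery configuration implied by the logs;
# values not shown in the output above are assumptions.
INSTALLED_APPS = (
    # ...
    'djcelery',
    'apps.data',
)

BROKER_URL = 'amqp://myproj@localhost:5672//'                          # shown by beat and the worker
CELERY_RESULT_BACKEND = 'djcelery.backends.database:DatabaseBackend'   # "results:" line in the worker banner
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'         # "scheduler ->" line in the beat output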

1 Answer

闹够了就滚 · 2019-06-27 15:27

When I switched from SQLite to PostgreSQL, the problem disappeared.
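
For reference, the switch amounts to pointing Django's default database at PostgreSQL in settings.py; a minimal sketch follows (database name, user, and password are placeholders, not values from the question).

# settings.py -- default database switched from SQLite to PostgreSQL
# (NAME/USER/PASSWORD below are placeholders).
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'myproj',
        'USER': 'myproj',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}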
