Asynchronous task queue processing of an in-memory data structure

Published 2019-06-10 19:50

I have a singleton in-memory data structure inside my Django project (a kind of kd-tree that needs to be accessed from all across the project).

For those who don't know Django, I believe the same issue would appear with regular Python code.

I know the Singleton pattern is considered evil, and I'm looking for better ways to implement this, but my question here is about a different topic:

I instantiate the singleton in my code by calling Singleton.instance(), which returns the object correctly; it then lives somewhere in the memory of my ./manage.py runserver process.
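For concreteness, here is a minimal sketch of the pattern I mean (the real class wraps the kd-tree; the names below are illustrative, not my actual code):

```python
class Singleton:
    _instance = None

    @classmethod
    def instance(cls):
        # Lazily create the one shared object on first access.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        # Stand-in for the real kd-tree structure.
        self.kd_tree = []
```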

The problem is that I am doing some asynchronous processing with Celery on this same singleton data structure (such as reconstructing the kd-tree).

BUT a Celery worker runs the code in a different process, and therefore in a different memory space, which means it operates on a totally different instance of the Singleton.
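In other words, a task like the following (hypothetical name and import path; rebuild() stands in for my actual reconstruction code) builds its own copy of the singleton inside the worker, and the runserver process never sees the result:

```python
from celery import shared_task

from myapp.singleton import Singleton  # hypothetical import path

@shared_task
def rebuild_tree():
    # This runs inside the Celery worker process, so instance() lazily
    # creates a brand-new Singleton in the worker's memory space; the
    # Django server process keeps its own, unchanged copy.
    tree = Singleton.instance()
    tree.rebuild()  # hypothetical method that reconstructs the kd-tree
```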

What would be the best design pattern for this issue? I have thought of doing all the processing related to my data structure inside the Django project itself (without Celery), but what I like very much about Celery is that the processing on the data structure can take a long time (around 30 seconds), and it needs to handle concurrency nicely (there could be several simultaneous requests to reconstruct the kd-tree).
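For the concurrency part, one thing I have considered (just a sketch, assuming a shared cache backend such as Redis or Memcached) is using Django's cache as a cross-process lock so that only one rebuild runs at a time:

```python
from celery import shared_task
from django.core.cache import cache

from myapp.singleton import Singleton  # hypothetical import path

@shared_task
def rebuild_tree_safely():
    # cache.add is atomic on shared backends (Redis, Memcached): it
    # returns True only for the one caller that set the lock key.
    # The timeout is a safety net in case the worker dies mid-rebuild.
    if not cache.add("kdtree-rebuild-lock", "1", timeout=60):
        return  # another rebuild is already in progress
    try:
        Singleton.instance().rebuild()  # hypothetical rebuild method
    finally:
        cache.delete("kdtree-rebuild-lock")
```

But this only serializes the rebuilds; it still doesn't give the server process access to the tree rebuilt inside the worker, which is the core of my question.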

I would be very glad to have some insights on this, since I haven't made any progress these last 3 days. Thanks a lot.
