I am using python-rq to pass domains through a queue and scrape them with Beautiful Soup, running multiple workers to get the job done. I started 22 workers, and all 22 are registered in the rq dashboard. But after some time the workers stop by themselves and are no longer displayed in the dashboard, even though webmin still shows all of the worker processes as running. The crawling speed has also dropped, which suggests the workers really aren't doing any work. I have tried launching the workers both under supervisor and with nohup; in both cases they stop by themselves.
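For reference, this is roughly how I run the workers under supervisor (a simplified sketch; the queue name, paths, and program name here are placeholders, not my exact config):

```ini
; /etc/supervisor/conf.d/rq-workers.conf  (illustrative placeholder config)
[program:rq-worker]
command=/usr/local/bin/rq worker scrape-queue
numprocs=22
process_name=%(program_name)s-%(process_num)s
directory=/home/user/scraper
autostart=true
autorestart=true
stopsignal=TERM
```

Even with `autorestart=true`, the workers disappear from the rq dashboard after a while.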
What could cause this? Why do the workers stop by themselves? And how many workers can reasonably be run on a single server?
Additionally, whenever a worker is unregistered from the rq dashboard, the failed job count increases, and I don't understand why.
Any help would be appreciated. Thank you.