I have a web crawler script that spawns at most 500 threads; each thread requests data from a remote server, and each server's reply differs from the others in content and size.
I'm setting the thread stack size to 756 KB:
threading.stack_size(756*1024)
This lets me create the number of threads I need and complete most of the jobs and requests. But some servers' responses are larger than others, and when a thread receives one of those, the script dies with SIGSEGV.
Stack sizes larger than 756 KB make it impossible to have the required number of threads running at the same time.
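For context, the thread setup is roughly like the sketch below (fetch_url and urls are simplified stand-ins for the real crawler code):

    import threading

    # must be called before the threads are started to take effect
    threading.stack_size(756 * 1024)

    def fetch_url(url):
        # placeholder for the real request/parse logic
        ...

    urls = [...]  # up to 500 URLs handled concurrently
    threads = [threading.Thread(target=fetch_url, args=(u,)) for u in urls]
    for t in threads:
        t.start()
    for t in threads:
        t.join()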
Any suggestions on how I can keep this stack size without crashes? And how can I find out how much stack a given thread is actually using?
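The only thing I've been able to sketch so far is counting the Python frame depth of each thread as a rough proxy, something like the code below. It relies on sys._current_frames(), which is CPython-specific, and it says nothing about the C stack that actually overflows; threading.stack_size() with no argument only returns the configured size, not the amount in use.

    import sys
    import threading

    def frame_depths():
        """Return {thread_name: python_frame_depth} as a rough proxy for stack use."""
        names = {t.ident: t.name for t in threading.enumerate()}
        depths = {}
        for ident, frame in sys._current_frames().items():
            depth = 0
            while frame is not None:
                depth += 1
                frame = frame.f_back
            depths[names.get(ident, ident)] = depth
        return depths

    print(threading.stack_size())  # configured size in bytes (0 means platform default)
    print(frame_depths())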