Can libcurl be used to make multiple concurrent requests?

Posted 2019-08-09 09:23

Question:

I am using libcurl for one of my projects. I know the curl command-line tool is not meant to make multiple concurrent requests, but does libcurl support it?

I know there are other tools like ab, but libcurl provides many features they lack. Again, I know I can run curl from a script to fire off multiple requests, but that's not what I am looking for.

I could not find a satisfactory answer for this except this one, and it is not conclusive.

I should be able to use multiple handles for multiple connections.

Has anyone tried this? Are there any gotchas I need to look out for?
I should be able to do something like this:

 my_app --total_connections 1000 --concurrency 100 <Other libcurl options> url

Answer 1:

To test what you are looking for, I wrote a little C program. It executes 10 HTTP GET requests using libcurl in a loop. The loop is parallelized using OpenMP (if available).

To run it, save it in a file called, for example, parallel_curl_test.cpp and compile it twice. First with OpenMP for the parallel version:

 g++ parallel_curl_test.cpp -fopenmp $(pkg-config --libs --cflags libcurl) -o parallel_curl

and a second time without OpenMP for the sequential version:

 g++ parallel_curl_test.cpp $(pkg-config --libs --cflags libcurl) -o sequential_curl

Here is the code:

#include <stdio.h>
#include <curl/curl.h>
#include <sys/time.h>   /* for gettimeofday() and struct timeval */

void curl_request();
size_t write_data(void *, size_t, size_t, void *);

static struct timeval tm1;
static int num_requests = 10;

static inline void start()
{
    gettimeofday(&tm1, NULL);
}

static inline void stop()
{
    struct timeval tm2;
    gettimeofday(&tm2, NULL);
    unsigned long long t = 1000 * (tm2.tv_sec - tm1.tv_sec) + (tm2.tv_usec - tm1.tv_usec) / 1000;
    printf("%d requests in %llu ms\n",num_requests , t);
}

int main()
{
    /* Initialize libcurl once, before any threads exist:
       curl_global_init() is not thread-safe. */
    curl_global_init(CURL_GLOBAL_ALL);

    start();
    #pragma omp parallel for
    for(int n = 0; n < num_requests; ++n) {
        curl_request();
    }
    stop();

    curl_global_cleanup();
    return 0;
}

void curl_request()
{
    CURL *curl;
    CURLcode res;

    /* Each thread gets its own easy handle; easy handles must not be
       shared between threads. */
    curl = curl_easy_init();
    if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com");
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_data);
        res = curl_easy_perform(curl);
        if(res != CURLE_OK)
            fprintf(stderr, "curl_request() failed: %s\n",
                    curl_easy_strerror(res));

        curl_easy_cleanup(curl);
    }
}

/* Discard the response body so it is not printed to stdout. */
size_t write_data(void *buffer, size_t size, size_t nmemb, void *userp)
{
    return size * nmemb;
}

The output for ./parallel_curl will look like this:

10 requests in 657 ms

The output for ./sequential_curl will look something like:

10 requests in 13794 ms

As you can see, parallel_curl, which uses concurrency, finished significantly faster than sequential_curl, which ran sequentially.

Thus the answer to your question is: Yes!
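For completeness: besides running easy handles on separate threads, libcurl also ships its own mechanism for driving many transfers concurrently from a single thread, the multi interface. The following is only a minimal sketch (it fetches http://example.com ten times and discards the bodies, mirroring the program above); see the curl_multi_* man pages for the full picture:

#include <stdio.h>
#include <curl/curl.h>

#define NUM_HANDLES 10

/* Discard the response body, as in the example above. */
static size_t discard(void *buffer, size_t size, size_t nmemb, void *userp)
{
    return size * nmemb;
}

int main()
{
    CURL *handles[NUM_HANDLES];
    CURLM *multi;
    int still_running = 0;

    curl_global_init(CURL_GLOBAL_ALL);
    multi = curl_multi_init();

    /* One easy handle per transfer; all are driven by a single thread. */
    for(int i = 0; i < NUM_HANDLES; i++) {
        handles[i] = curl_easy_init();
        curl_easy_setopt(handles[i], CURLOPT_URL, "http://example.com");
        curl_easy_setopt(handles[i], CURLOPT_WRITEFUNCTION, discard);
        curl_multi_add_handle(multi, handles[i]);
    }

    /* Pump the transfers until all of them have finished. */
    do {
        CURLMcode mc = curl_multi_perform(multi, &still_running);
        if(mc == CURLM_OK && still_running)
            /* Wait (up to 1s) for activity on any of the transfers. */
            mc = curl_multi_wait(multi, NULL, 0, 1000, NULL);
        if(mc != CURLM_OK)
            break;
    } while(still_running);

    for(int i = 0; i < NUM_HANDLES; i++) {
        curl_multi_remove_handle(multi, handles[i]);
        curl_easy_cleanup(handles[i]);
    }
    curl_multi_cleanup(multi);
    curl_global_cleanup();
    return 0;
}

Since everything runs on one thread, this approach can scale to far more concurrent transfers than one thread per request would.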

Of course, sequential execution could also be made much more efficient using pipelining, keep-alive connections, and reuse of resources, for example by reusing one easy handle for all requests, as sketched below. But that is another question.
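As a brief illustration of that last point (a sketch only, reusing the same http://example.com URL and discard callback as above): performing consecutive requests on one easy handle lets libcurl keep the TCP connection alive instead of reconnecting for every request.

#include <stdio.h>
#include <curl/curl.h>

static size_t discard(void *buffer, size_t size, size_t nmemb, void *userp)
{
    return size * nmemb;
}

int main()
{
    curl_global_init(CURL_GLOBAL_ALL);

    CURL *curl = curl_easy_init();
    if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, discard);

        /* Reusing the same handle lets libcurl keep the connection
           open between requests instead of reconnecting each time. */
        for(int n = 0; n < 10; n++) {
            CURLcode res = curl_easy_perform(curl);
            if(res != CURLE_OK)
                fprintf(stderr, "request %d failed: %s\n",
                        n, curl_easy_strerror(res));
        }
        curl_easy_cleanup(curl);
    }

    curl_global_cleanup();
    return 0;
}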



Tags: c curl libcurl