How to make batch HTTP calls in Java

Posted 2019-07-13 18:11

I am trying to access another service over HTTP to fetch data using HttpClient. The URI looks like endpoint:80/.../itemId.

I am wondering if there is a way to make a batch call that specifies a set of itemIds. I did find a suggestion to call .setHeader(HttpHeaders.CONNECTION, "keep-alive") when creating the request. If I do that, how do I release the client after getting all the data?

Also, it seems this approach still has to wait for one response before sending the next request. Is it possible to do this asynchronously, and how? By the way, it seems I cannot use AsyncHttpClient in this case for some reason.

Since I know almost nothing about HttpClient, this question may look dumb. I really hope someone can help me solve the problem.

1 Answer
三岁会撩人
#2 · 2019-07-13 18:21

API Support on the Server

There is a small chance that the API supports requesting multiple IDs at a time (e.g. using a URL of the form http://endpoint:80/.../itemId1,itemId2,itemId3). Check the API documentation to see if this is available, because if so that would be the best solution.
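
If the API does offer something like that, a single request covers the whole batch. Here is a minimal sketch, assuming a hypothetical comma-separated-ID endpoint (the /items/ path and the item IDs are placeholders, not from your API):

import java.io.*;
import java.util.*;
import org.apache.http.client.methods.*;
import org.apache.http.impl.client.*;
import org.apache.http.util.*;

// ...
List<String> itemIds = Arrays.asList("itemId1", "itemId2", "itemId3"); // placeholders
String uri = "http://endpoint:80/items/" + String.join(",", itemIds);  // hypothetical path
try (CloseableHttpClient httpClient = HttpClients.createDefault();
     CloseableHttpResponse response = httpClient.execute(new HttpGet(uri))) {
    // one response carries the data for all requested items
    String body = EntityUtils.toString(response.getEntity());
    // ... parse body
} catch (IOException ex) {
    // handle I/O errors
}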

Persistent Connections

It looks like Apache HttpClient uses persistent ("keep-alive") connections by default (see the Connection Management tutorial linked in @kichik's comment). The logging facilities can help you verify that connections are actually being reused across requests.

To release the client, use the close() method. From 2.3.4. Connection manager shutdown:

When an HttpClient instance is no longer needed and is about to go out of scope it is important to shut down its connection manager to ensure that all connections kept alive by the manager get closed and system resources allocated by those connections are released.

CloseableHttpClient httpClient = <...>
httpClient.close();

Persistent connections eliminate the overhead of establishing new connections, but as you've noted the client will still wait for a response before sending the next request.
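
Putting these two points together, here is a minimal sketch (again with placeholder IDs and a hypothetical path) that reuses a single client, and therefore its persistent connection, for sequential requests and releases it via try-with-resources, which calls close() automatically:

import java.io.*;
import org.apache.http.client.methods.*;
import org.apache.http.impl.client.*;
import org.apache.http.util.*;

// ...
String[] itemIds = { "itemId1", "itemId2", "itemId3" }; // placeholders
try (CloseableHttpClient httpClient = HttpClients.createDefault()) {
    for (String itemId : itemIds) {
        HttpGet httpget = new HttpGet("http://endpoint:80/items/" + itemId); // hypothetical path
        try (CloseableHttpResponse response = httpClient.execute(httpget)) {
            // fully consuming the entity lets the connection be kept alive and reused
            String body = EntityUtils.toString(response.getEntity());
            // ... parse body
        }
    }
} catch (IOException ex) {
    // handle I/O errors
}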

Multithreading and Connection Pooling

You can make your program multithreaded and use a PoolingHttpClientConnectionManager to control the number of connections made to the server. Here is an example based on 2.3.3. Pooling connection manager and 2.4. Multithreaded request execution:

import java.io.*;
import org.apache.http.*;
import org.apache.http.client.*;
import org.apache.http.client.methods.*;
import org.apache.http.client.protocol.*;
import org.apache.http.impl.client.*;
import org.apache.http.impl.conn.*;
import org.apache.http.protocol.*;

// ...
PoolingHttpClientConnectionManager cm =
        new PoolingHttpClientConnectionManager();
cm.setMaxTotal(200); // increase max total connection to 200
cm.setDefaultMaxPerRoute(20); // increase max connection per route to 20
CloseableHttpClient httpClient = HttpClients.custom()
        .setConnectionManager(cm)
        .build();

String[] urisToGet = { ... };
// start a thread for each URI
// (if there are many URIs, a thread pool would be better)
Thread[] threads = new Thread[urisToGet.length];
for (int i = 0; i < threads.length; i++) {
    HttpGet httpget = new HttpGet(urisToGet[i]);
    threads[i] = new Thread(new GetTask(httpClient, httpget));
    threads[i].start();
}
// wait for all the threads to finish
// (join() throws InterruptedException; handle or declare it in the enclosing method)
for (int i = 0; i < threads.length; i++) {
    threads[i].join();
}

class GetTask implements Runnable {
    private final CloseableHttpClient httpClient;
    private final HttpContext context;
    private final HttpGet httpget;

    public GetTask(CloseableHttpClient httpClient, HttpGet httpget) {
        this.httpClient = httpClient;
        this.context = HttpClientContext.create();
        this.httpget = httpget;
    }

    @Override
    public void run() {
        try {
            CloseableHttpResponse response = httpClient.execute(
                httpget, context);
            try {
                HttpEntity entity = response.getEntity();
                // process the entity here (e.g. read and parse the response body)
            } finally {
                response.close();
            }
        } catch (ClientProtocolException ex) {
            // handle protocol errors
        } catch (IOException ex) {
            // handle I/O errors
        }
    }
}

Multithreading will help saturate the link (keep as much data flowing as possible) because while one thread is sending a request, other threads can be receiving responses and utilizing the downlink.
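
If there are many URIs, the thread pool mentioned in the comment above is the more practical option. A sketch using an ExecutorService with the same httpClient, urisToGet and GetTask as in the example above:

import java.util.concurrent.*;

// ...
ExecutorService executor = Executors.newFixedThreadPool(20); // roughly the per-route limit
for (String uri : urisToGet) {
    executor.execute(new GetTask(httpClient, new HttpGet(uri)));
}
executor.shutdown();                            // stop accepting new tasks
executor.awaitTermination(1, TimeUnit.MINUTES); // wait for the submitted tasks to finish
httpClient.close();                             // then release the connection manager
// (awaitTermination() and close() throw checked exceptions; handle or declare them
//  in the enclosing method)

Sizing the executor close to the connection manager's per-route limit keeps threads from sitting idle while they wait for a connection from the pool.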

Pipelining

HTTP/1.1 supports pipelining, which sends multiple requests on a single connection without waiting for the responses. The Asynchronous I/O based on NIO tutorial has an example in section 3.10. Pipelined request execution:

HttpProcessor httpproc = <...>
HttpAsyncRequester requester = new HttpAsyncRequester(httpproc);
HttpHost target = new HttpHost("www.apache.org");
List<BasicAsyncRequestProducer> requestProducers = Arrays.asList(
    new BasicAsyncRequestProducer(target, new BasicHttpRequest("GET", "/index.html")),
    new BasicAsyncRequestProducer(target, new BasicHttpRequest("GET", "/foundation/index.html")),
    new BasicAsyncRequestProducer(target, new BasicHttpRequest("GET", "/foundation/how-it-works.html"))
);
List<BasicAsyncResponseConsumer> responseConsumers = Arrays.asList(
    new BasicAsyncResponseConsumer(),
    new BasicAsyncResponseConsumer(),
    new BasicAsyncResponseConsumer()
);
HttpCoreContext context = HttpCoreContext.create();
// 'pool' is the non-blocking connection pool (a BasicNIOConnPool in the full example linked below)
Future<List<HttpResponse>> future = requester.executePipelined(
    target, requestProducers, responseConsumers, pool, context, null);
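
All pipelined responses come back through that single future; a minimal sketch of collecting them (in real code, future.get() blocks and should get a timeout plus handling for InterruptedException and ExecutionException):

List<HttpResponse> responses = future.get();
for (HttpResponse response : responses) {
    System.out.println(response.getStatusLine());
}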

There is a full version of this example in the HttpCore Examples ("Pipelined HTTP GET requests").

Older web servers may be unable to handle pipelined requests correctly.
