Does anyone know a standard way to batch HTTP requests, i.e. send multiple atomic HTTP requests in one round trip?
We need such a mechanism in our REST API implementation for performance reasons: it could dramatically reduce the number of round trips the client has to make to consume the API.
Thanks in advance,
Shay
You create a batch request by calling new_batch_http_request() on your service object, which returns a BatchHttpRequest object, and then calling add() for each request you want to execute. You may pass a callback with each request; it is called with that request's response. The callback's arguments are a unique request identifier for the API call, a response object containing the API call's response, and an exception object that is set if the API call raised one. After you've added the requests, call execute() to send them. execute() blocks until all callbacks have been called.
References:
You can also try this: https://developers.google.com/api-client-library/python/guide/batch
https://cloud.google.com/storage/docs/json_api/v1/how-tos/batch
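For concreteness, here is a minimal sketch of that flow using the google-api-python-client against the Cloud Storage JSON API (the bucket names are made up, and credentials/authorization setup is omitted):

    from googleapiclient.discovery import build

    def handle_response(request_id, response, exception):
        # Called once per request in the batch.
        if exception is not None:
            print(f"{request_id} failed: {exception}")
        else:
            print(f"{request_id} succeeded: {response}")

    # Build a service object for the Cloud Storage JSON API
    # (authentication setup is omitted in this sketch).
    service = build("storage", "v1")

    batch = service.new_batch_http_request(callback=handle_response)
    batch.add(service.buckets().get(bucket="bucket-one"), request_id="bucket-one")
    batch.add(service.buckets().get(bucket="bucket-two"), request_id="bucket-two")

    # Sends all added requests in a single HTTP round trip and blocks
    # until every callback has been called.
    batch.execute()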
There's an official HTTP way to do that, called HTTP pipelining. But you may have more problems with the browser side than with the server side, so you may only be able to use it if you have a high level of control over the client side.
XHR does not always allow pipelining, and AFAIK you have no control over the underlying HTTP connections from JavaScript, so a basic Ajax/jQuery implementation cannot exist. But you may find some more advanced options with Comet and the Bayeux protocol, which emulate bidirectional, long-lived TCP connections and should certainly reduce the TCP round trips.
I'm not a Comet specialist, but you may find useful information in this article on Comet & HTTP pipelining. To my understanding, most of this is highly experimental, but at least you could have a nice fallback to 'classical' Comet when HTTP pipelining is not available. This would maybe need a retag or a new question.
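To make the pipelining idea concrete, here is a minimal raw-socket sketch in Python (the host and paths are placeholders, and it assumes the server actually honours HTTP/1.1 pipelining, which many servers and proxies do not):

    import socket

    HOST = "example.com"

    # Two requests written back-to-back on one connection, without waiting
    # for the first response; the second one asks the server to close.
    requests_on_the_wire = (
        f"GET /first HTTP/1.1\r\nHost: {HOST}\r\n\r\n"
        f"GET /second HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
    )

    with socket.create_connection((HOST, 80)) as sock:
        sock.sendall(requests_on_the_wire.encode("ascii"))

        # Responses come back in the same order on the same connection.
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)

    print(b"".join(chunks).decode("latin-1", errors="replace"))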
If using dedicated 'aggregate' resources as fumanchu said above does not work for you, you can also try moving representations of your less volatile resources into caches to reduce the load on your system. For example, HTML pages on the 'human' Web often include loads and loads of images, and the many sub-requests are of no concern there because caches absorb most of them.
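As a rough illustration only (the answer does not prescribe a framework; Flask and the /countries resource here are assumptions), marking a rarely changing resource as cacheable could look like this:

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/countries")
    def countries():
        # Reference data that rarely changes: let clients and intermediary
        # caches keep it for a day instead of re-fetching it every time.
        response = jsonify(["AR", "BR", "IL", "US"])
        response.headers["Cache-Control"] = "public, max-age=86400"
        return response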
That's a problem with REST: resources are exposed at the entity level, and the REST idea is to have each URL uniquely identify a resource. Of course you can introduce aggregate resources. For example, www.yoursite.com/customerA?include=Orders,Faults,Incidents returns the XML for CustomerA, but also returns the customer's Orders, Faults, and Incidents as embedded collections.
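A hedged sketch of that aggregate resource (Flask, the in-memory data, and JSON instead of the XML mentioned above are all illustrative assumptions):

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    CUSTOMERS = {"customerA": {"id": "customerA", "name": "Customer A"}}
    RELATED = {
        "Orders": {"customerA": [{"id": 1, "total": 99.5}]},
        "Faults": {"customerA": []},
        "Incidents": {"customerA": [{"id": 7, "severity": "low"}]},
    }

    @app.route("/<customer_id>")
    def customer(customer_id):
        body = dict(CUSTOMERS[customer_id])
        # ?include=Orders,Faults,Incidents embeds the requested collections,
        # so the client gets everything in a single round trip.
        for name in request.args.get("include", "").split(","):
            if name in RELATED:
                body[name.lower()] = RELATED[name].get(customer_id, [])
        return jsonify(body)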
Define a new resource that contains the data the client wants. See http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven#comment-743
If you're looking at REST-based services or an API of some kind, there are the beginnings of a standard here: http://www.odata.org/documentation/odata-version-3-0/batch-processing/
And an implementation by Google here https://cloud.google.com/storage/docs/json_api/v1/how-tos/batch
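For a feel of the wire format, here is a hedged sketch of an OData-style $batch call (the service URL and entity sets are invented): two GETs are wrapped as application/http parts inside a single multipart/mixed POST, so the server sees one round trip:

    import requests

    BOUNDARY = "batch_36522ad7-fc75-4b56-8c71-56071383e77b"

    body = (
        f"--{BOUNDARY}\r\n"
        "Content-Type: application/http\r\n"
        "Content-Transfer-Encoding: binary\r\n"
        "\r\n"
        "GET /service/Customers('ALFKI') HTTP/1.1\r\n"
        "Accept: application/json\r\n"
        "\r\n"
        f"--{BOUNDARY}\r\n"
        "Content-Type: application/http\r\n"
        "Content-Transfer-Encoding: binary\r\n"
        "\r\n"
        "GET /service/Orders(10643) HTTP/1.1\r\n"
        "Accept: application/json\r\n"
        "\r\n"
        f"--{BOUNDARY}--\r\n"
    )

    response = requests.post(
        "https://example.com/service/$batch",
        data=body,
        headers={"Content-Type": f"multipart/mixed; boundary={BOUNDARY}"},
    )
    print(response.status_code)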