What might cause the big overhead of making an HTTP request?

Published 2019-04-09 00:45

Question:

When I send/receive data using HttpWebRequest (on Silverlight) in small blocks, I measure a throughput of only 500 bytes/s over a "localhost" connection. When sending the data in large blocks, I get 2 MB/s, which is some 5000 times faster.

Does anyone know what could cause this enormous overhead?

Additional info:

  • I'm using the HTTP POST method
  • I did the performance measurement on both Firefox 3.6 and Internet Explorer 7. Both showed similar results.
  • My CPU is only about 10% loaded (quad core, so roughly 40% of one core)
  • WebClient showed similar results
  • WCF/SOAP showed similar results

Update: The Silverlight client-side code I use is essentially my own implementation of the WebClient class. I wrote it because I noticed the same performance problem with WebClient, and I thought that HttpWebRequest would let me tune away the performance issue. Regrettably, this did not work. The implementation is as follows:

public class HttpCommChannel
{
    public delegate void ResponseArrivedCallback(object requestContext, BinaryDataBuffer response);

    public HttpCommChannel(ResponseArrivedCallback responseArrivedCallback)
    {
        this.responseArrivedCallback = responseArrivedCallback;
        this.requestSentEvent = new ManualResetEvent(false);
        this.responseArrivedEvent = new ManualResetEvent(true);
    }

    public void MakeRequest(object requestContext, string url, BinaryDataBuffer requestPacket)
    {
        responseArrivedEvent.WaitOne();
        responseArrivedEvent.Reset();

        this.requestMsg = requestPacket;
        this.requestContext = requestContext;

        this.webRequest = WebRequest.Create(url) as HttpWebRequest;
        this.webRequest.AllowReadStreamBuffering = true;
        this.webRequest.ContentType = "text/plain";
        this.webRequest.Method = "POST";

        this.webRequest.BeginGetRequestStream(new AsyncCallback(this.GetRequestStreamCallback), null);
        this.requestSentEvent.WaitOne();
    }

    void GetRequestStreamCallback(IAsyncResult asynchronousResult)
    {
        System.IO.Stream postStream = webRequest.EndGetRequestStream(asynchronousResult);

        postStream.Write(requestMsg.Data, 0, (int)requestMsg.Size);
        postStream.Close();

        requestSentEvent.Set();
        webRequest.BeginGetResponse(new AsyncCallback(this.GetResponseCallback), null);
    }

    void GetResponseCallback(IAsyncResult asynchronousResult)
    {
        HttpWebResponse response = (HttpWebResponse)webRequest.EndGetResponse(asynchronousResult);
        Stream streamResponse = response.GetResponseStream();
        Dim.Ensure(streamResponse.CanRead);
        byte[] readData = new byte[streamResponse.Length];
        Dim.Ensure(streamResponse.Read(readData, 0, (int)streamResponse.Length) == streamResponse.Length);
        streamResponse.Close();
        response.Close();

        webRequest = null;
        responseArrivedEvent.Set();
        responseArrivedCallback(requestContext, new BinaryDataBuffer(readData));
    }

    HttpWebRequest webRequest;
    ManualResetEvent requestSentEvent;
    BinaryDataBuffer requestMsg;
    object requestContext;
    ManualResetEvent responseArrivedEvent;
    ResponseArrivedCallback responseArrivedCallback;
}

I use this code to send data back and forth to an HTTP server.

Update: after extensive research, I conclude that the performance problem is inherent to Silverlight v3.

Answer 1:

Quite possibly you're witnessing the effects of the Nagle algorithm; try:

this.webRequest.ServicePoint.UseNagleAlgorithm = false;

Also, the Expect100Continue 'handshake' is relevant to SOAP service call performance:

this.webRequest.ServicePoint.Expect100Continue = false;

UPDATE:

Just realised that ServicePoint isn't available in the Compact Framework. However, you can prove the point by doing:

ServicePointManager.UseNagleAlgorithm = false;

or by changing the relevant setting in the app config file, or whatever the equivalent is in Silverlight.
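On the full .NET Framework (not Silverlight's subset) the two settings mentioned above can be applied process-wide via ServicePointManager before any request is created. A minimal sketch, assuming a hypothetical localhost endpoint:

```csharp
using System.Net;

class NagleDemo
{
    static void Main()
    {
        // Disable Nagle's algorithm process-wide: small writes go out
        // immediately instead of being coalesced into larger packets.
        ServicePointManager.UseNagleAlgorithm = false;

        // Skip the "Expect: 100-continue" handshake, saving one round
        // trip per POST before the request body is sent.
        ServicePointManager.Expect100Continue = false;

        // Requests created afterwards pick up both settings.
        HttpWebRequest request =
            (HttpWebRequest)WebRequest.Create("http://localhost/echo");
        request.Method = "POST";
        // ... write the request body and read the response as usual.
    }
}
```

Both properties must be set before the first request to a given host, because the settings are captured when the ServicePoint for that host is created.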



Answer 2:

I suspect that your problem is simply latency. Any message takes some time to get to the server, and be parsed and processed and a response generated, and the response takes some time to return to the client, and be parsed into a usable response. Your performance is most likely being dominated by round-trip time.
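To see why round-trip time dominates, note that throughput is roughly request size divided by round-trip time. The numbers below are illustrative assumptions, not measurements from the question:

```csharp
using System;

// Back-of-envelope: with a fixed per-request round trip, throughput
// scales linearly with how much data each request carries.
double rttSeconds = 0.1;          // assumed ~100 ms per request/response pair
double smallBlock = 512;          // bytes per small request
double largeBlock = 512 * 1024;   // bytes per large request

Console.WriteLine(smallBlock / rttSeconds); // ~5 KB/s, regardless of bandwidth
Console.WriteLine(largeBlock / rttSeconds); // ~5 MB/s over the same link
```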

Fundamentally any interface that crosses a communication boundary - be it inter-process or inter-machine - should be 'chunky' not 'chatty'. Send as much information in each request as you can, and get as much data in response as you can. It may seem trivial on the same machine, but I saw a tenfold improvement of performance in an application server by batching up commands in a worker process, rather than making a callback from the worker process into the main server process for each command.

You really answered your own question by indicating that you get much better performance when you use large block sizes.
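A "chunky" interface can be sketched as a simple batching wrapper: buffer records client-side and send many of them in one POST, so the per-request latency is amortized. All names here are illustrative, not from the question's code:

```csharp
using System.Collections.Generic;
using System.Text;

class BatchingSender
{
    readonly List<string> pending = new List<string>();
    const int BatchSize = 100; // tune to taste

    public void Enqueue(string record)
    {
        pending.Add(record);
        if (pending.Count >= BatchSize)
            Flush();
    }

    public void Flush()
    {
        if (pending.Count == 0)
            return;

        // One round trip carries the whole batch; the fixed per-request
        // cost is paid once per BatchSize records instead of once each.
        byte[] payload = Encoding.UTF8.GetBytes(string.Join("\n", pending));
        PostToServer(payload);
        pending.Clear();
    }

    // Single HTTP POST; body elided — e.g. an HttpWebRequest as in the question.
    void PostToServer(byte[] payload) { }
}
```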



Answer 3:

As Mike Dimmick mentioned in his answer, latency can cause problems. In addition, with very small payloads, the allocation of a thread for execution (even with a thread pool), followed by the establishment of the connection, accounts for a much larger percentage of the total time taken than it does on the bulk-payload route.