Download large file from HTTP with resume/retry support

Posted 2019-02-04 07:12

How can I implement downloading a large file (~500 MB) over HTTP in my application? I want to support automatic resume/retry, so that when the connection is dropped, my application can try to reconnect to get the file and avoid re-downloading the part it has already received, if possible (I know this depends on the server as well).

This is similar to the behaviour in download managers and some browsers.

3 Answers
何必那么认真
Answer 2 · 2019-02-04 07:31

You can implement downloading from a web server in C# from scratch in one of two ways:

  1. Using the high-level APIs in System.Net, such as the HttpWebRequest, HttpWebResponse, FtpWebRequest, and other classes.

  2. Using the low-level APIs in System.Net.Sockets, such as the TcpClient, TcpListener, and Socket classes.

The advantage of the first approach is that you typically don't have to worry about low-level plumbing such as preparing and interpreting HTTP headers, or handling proxies, authentication, caching, etc. The high-level classes do this for you, which is why I prefer this approach.
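For contrast, here is a rough sketch of what the second, low-level route involves: you have to compose the HTTP request yourself, including the Range header that makes resuming possible. The host, path, and offset below are hypothetical, and the actual send/parse steps are only indicated in comments.

```csharp
using System;
using System.Text;

class RawRangeRequest
{
    // Build a minimal HTTP/1.1 GET with a Range header to resume at 'offset'.
    static string BuildRequest(string host, string path, long offset)
    {
        var sb = new StringBuilder();
        sb.Append("GET ").Append(path).Append(" HTTP/1.1\r\n");
        sb.Append("Host: ").Append(host).Append("\r\n");
        if (offset > 0)
            sb.Append("Range: bytes=").Append(offset).Append("-\r\n");
        sb.Append("Connection: close\r\n\r\n");
        return sb.ToString();
    }

    static void Main()
    {
        string request = BuildRequest("example.com", "/big.bin", 1024);
        Console.Write(request);
        // Sending it would then be a matter of:
        //   using (var client = new System.Net.Sockets.TcpClient("example.com", 80))
        //   using (var ns = client.GetStream())
        //   {
        //       byte[] bytes = Encoding.ASCII.GetBytes(request);
        //       ns.Write(bytes, 0, bytes.Length);
        //   }
        // ...after which you must parse the status line and headers yourself.
    }
}
```

With the high-level classes, all of this (and proxy handling, redirects, chunked encoding, and so on) is done for you.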

Using the first approach, typical code to prepare an HTTP request to download a file from a URL looks something like this:

HttpWebRequest request = (HttpWebRequest)WebRequest.Create(Url);
if (UseProxy)
{
    request.Proxy = new WebProxy(ProxyServer + ":" + ProxyPort.ToString());
    if (ProxyUsername.Length > 0)
        request.Proxy.Credentials = new NetworkCredential(ProxyUsername, ProxyPassword);
}

// ask the server to start from where the previous attempt stopped
if (BytesRead > 0) request.AddRange(BytesRead);

WebResponse response = request.GetResponse();
if (!resuming)
{
    Size = (int)response.ContentLength;
    SizeInKB = Size / 1024;
}

// the server advertises partial-download support via the Accept-Ranges header
acceptRanges = String.Compare(response.Headers["Accept-Ranges"], "bytes", true) == 0;

// create network stream
ns = response.GetResponseStream();

At the end of the above code, you get a network-stream object which you can then use to read the bytes of the remote file as if you were reading any other stream object. Whether the remote URL supports resuming partial downloads, by allowing you to read from an arbitrary position, is determined by the "Accept-Ranges" HTTP header as shown above. If this value is set to anything other than "bytes", then this feature won't be supported.
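To turn that stream into an actual resumable download, one possible shape (the method and variable names below are my own, not from the answer) is a retry loop that appends to a local file and re-issues the request with AddRange after a failure:

```csharp
using System;
using System.IO;
using System.Net;

class ResumableDownload
{
    // Where to restart from: the size of whatever we already have on disk.
    public static long ResumeOffset(string path) =>
        File.Exists(path) ? new FileInfo(path).Length : 0;

    // Sketch of a retry loop around the request code shown above.
    public static void Download(string url, string path, int maxRetries = 5)
    {
        for (int attempt = 0; attempt < maxRetries; attempt++)
        {
            try
            {
                long offset = ResumeOffset(path);
                var request = (HttpWebRequest)WebRequest.Create(url);
                if (offset > 0) request.AddRange(offset); // resume point

                using (var response = request.GetResponse())
                using (var ns = response.GetResponseStream())
                using (var fs = new FileStream(path, FileMode.Append))
                {
                    var buffer = new byte[64 * 1024];
                    int read;
                    while ((read = ns.Read(buffer, 0, buffer.Length)) > 0)
                        fs.Write(buffer, 0, read);
                }
                return; // finished
            }
            catch (WebException) { /* connection dropped; loop retries */ }
            catch (IOException)  { /* stream error; loop retries */ }
        }
        throw new Exception("Download failed after " + maxRetries + " attempts.");
    }

    static void Main()
    {
        // Local demonstration of the resume offset only (no network needed):
        string tmp = Path.GetTempFileName();
        File.WriteAllBytes(tmp, new byte[1234]);
        Console.WriteLine(ResumeOffset(tmp)); // prints 1234
        File.Delete(tmp);
        Console.WriteLine(ResumeOffset(tmp)); // prints 0
    }
}
```

Note that this blindly trusts the local file's length; a more careful implementation would also check that the server answered 206 Partial Content before appending, and fall back to restarting from zero otherwise.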

In fact, this code is part of a bigger open-source download manager that I'm trying to implement in C#. You may refer to that application and see if anything in it is helpful to you: http://scavenger.codeplex.com/

唯我独甜
Answer 3 · 2019-02-04 07:31

There is an open-source .NET HTTP file downloader with automatic resume/retry support (when the server supports it). It looks like exactly what you need; you can try it:

https://github.com/Avira/.NetFileDownloader

劫难
Answer 4 · 2019-02-04 07:48

There's an Internet Download Manager COM API to help you handle this. Download IDMCOMAPI.zip, then use TlbImp (the Type Library Importer) to import the file IDManTypeInfo.tlb from the extracted zip.

cmd:

C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\TlbImp.exe (FilePathGoesHere)\IDManTypeInfo.tlb

Take note:

1.) You should run the Command Prompt as Administrator.

2.) Some files are served without resume capability.
