How do I implement client-side bandwidth throttling?

Posted 2019-08-07 16:39

Question:

I am tasked with writing a client-side data download system (on Linux) that uses FTP or HTTP to download terabyte-sized data from external partners to our local site. Our company's network admin tells me that I cannot exceed a certain bandwidth. What is the best way for me to implement such a system? Are there existing libraries for this?

I am open to writing my own FTP and HTTP clients (in either C or Java on Linux) but would prefer to stay out of the kernel. I know that I can limit the rate at which my FTP/HTTP client calls read() on a socket, but what happens if the server side calls write() faster than my limit?

Answer 1:

You could build another layer on top of an InputStream: in the read method, count the bytes handed out so far. If the bytes-per-second rate exceeds your limit, let the download thread sleep for a while; TCP's flow control does the rest.
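
A minimal sketch of that idea in Java. The class name, the cap value, and the structure are illustrative only, not taken from any particular library:

import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Wraps any InputStream and sleeps whenever the observed average
// download rate exceeds a byte-per-second cap.
public class ThrottledInputStream extends FilterInputStream {

    private final long maxBytesPerSecond;   // cap agreed with the network admin
    private final long startNanos = System.nanoTime();
    private long bytesRead = 0;

    public ThrottledInputStream(InputStream in, long maxBytesPerSecond) {
        super(in);
        this.maxBytesPerSecond = maxBytesPerSecond;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        if (n > 0) {
            bytesRead += n;
            throttle();
        }
        return n;
    }

    @Override
    public int read() throws IOException {
        int b = super.read();
        if (b >= 0) {
            bytesRead++;
            throttle();
        }
        return b;
    }

    // Sleep until the average rate since the start drops back under the cap.
    private void throttle() throws IOException {
        double elapsedSeconds = (System.nanoTime() - startNanos) / 1e9;
        double expectedSeconds = (double) bytesRead / maxBytesPerSecond;
        long sleepMillis = (long) ((expectedSeconds - elapsedSeconds) * 1000);
        if (sleepMillis > 0) {
            try {
                Thread.sleep(sleepMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IOException("Throttled read interrupted", e);
            }
        }
    }
}

While the reader sleeps, the kernel's receive buffer fills up, the advertised TCP window shrinks, and the sender is forced to slow down, which answers the "what if the server writes faster" part of the question.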



Answer 2:

I know Apache JMeter simulates slow connections. You could take a look at its code.



Answer 3:

If you know the network path delay, you could just set your TCP receive buffer size to the desired bandwidth-delay product. That will throttle the sender. The resulting value may be too small for your platform, though, so it may adjust the value upwards; check it after you set it.
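
For example, a rough Java sketch. The numbers are made up: assume a 10 Mbit/s cap and a 50 ms round-trip time, giving a bandwidth-delay product of about 10,000,000 / 8 * 0.05 = 62,500 bytes; the host name is a placeholder:

import java.net.InetSocketAddress;
import java.net.Socket;

public class ReceiveBufferThrottle {
    public static void main(String[] args) throws Exception {
        int bandwidthDelayProduct = 62500;   // bytes, from the made-up numbers above

        Socket socket = new Socket();
        // Must be set before connect() so it can influence the advertised TCP window.
        socket.setReceiveBufferSize(bandwidthDelayProduct);
        socket.connect(new InetSocketAddress("ftp.example.com", 21));

        // The platform may round the value up or down; check what we actually got.
        System.out.println("Effective receive buffer: "
                + socket.getReceiveBufferSize() + " bytes");
        socket.close();
    }
}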

Does your netadmin know that TCP automatically shares bandwidth fairly?



Answer 4:

Are you open to off-the-shelf GUI or command-line products? FileZilla provides this. There is also a Linux command-line client called lftp. Its net:limit-total-rate setting limits the total transfer rate; since the client supports multiple transfers at a time, there is also a per-connection net:limit-rate setting.
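
For instance (the host, the paths, and the rate value are placeholders; lftp takes the limit in bytes per second):

lftp -e "set net:limit-total-rate 1000000; mirror /data/incoming /local/incoming; quit" ftp.example.com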



Answer 5:

To keep it simple: if you are on Linux, you could just use wget instead of reinventing the wheel. Take a look at the --limit-rate switch.
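
For example (the host and path are placeholders; the k suffix means kilobytes per second):

wget --limit-rate=500k ftp://ftp.example.com/data/bigfile.tar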

But back on topic :) This answer could get you started: How can I implement a download rate limited in Java?