How to measure network throughput during runtime

Posted 2019-06-01 09:14

Question:

I'm wondering how best to measure network throughput at runtime. I'm writing a client/server application (both in Java). The server regularly sends messages (of compressed media data) over a socket to the client, and I would like to adjust the compression level used by the server to match the network quality.

So I would like to measure the time a big chunk of data (say 500 KB) takes to completely reach the client, including all delays in between. Tools like Iperf don't seem to be an option because they do their measurements by generating their own traffic.

The best idea I could come up with is: somehow determine the clock difference between client and server, include a server send timestamp with each message, and then have the client report back to the server the difference between this timestamp and the time the client received the message. The server can then determine how long it took a message to reach the client.
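For illustration, a minimal sketch of that idea could look like the following (class and method names are made up for this example); note that the raw difference the client measures still contains the unknown clock offset between the two machines:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical helper illustrating the timestamp idea: the server prefixes each
// message with its send time, and the client reports back the observed difference.
public class TimestampedMessages {

    // Server side: write a send timestamp, the payload length, and the payload itself.
    public static void sendMessage(DataOutputStream out, byte[] payload) throws IOException {
        out.writeLong(System.currentTimeMillis()); // server clock at send time
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
    }

    // Client side: read one message and return (client receive time - server send time).
    // This raw difference still includes the clock offset between the two machines;
    // it only becomes a true one-way delay after that offset has been estimated.
    public static long receiveAndMeasure(DataInputStream in) throws IOException {
        long serverSendTime = in.readLong();
        int length = in.readInt();
        byte[] payload = new byte[length];
        in.readFully(payload);
        return System.currentTimeMillis() - serverSendTime;
    }
}
```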

Is there an easier way to do this? Are there any libraries for this?

Answer 1:

A simple solution:

Save a timestamp on the server before you send a defined number of packets.

Then send the packets to the client and have the client report back to the server when it has received the last packet.

Save a new timestamp on the server when the client has answered.

All you need to do now is determine the RTT and subtract RTT/2 from the difference between the two timestamps.

This should get you a fairly accurate measurement.
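For concreteness, here is a minimal sketch of these steps in Java; the one-byte probe/acknowledgement protocol and all names are assumptions made for this example, not part of the answer itself:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of the suggested measurement. The client is assumed to echo a single
// byte for the RTT probe and to send one acknowledgement byte after it has
// received the whole block.
public class ThroughputProbe {

    // Estimate the round-trip time with a tiny probe message.
    static long measureRttMillis(DataOutputStream out, DataInputStream in) throws IOException {
        long start = System.nanoTime();
        out.writeByte(1);          // 1-byte ping
        out.flush();
        in.readByte();             // client echoes it back immediately
        return (System.nanoTime() - start) / 1_000_000;
    }

    // Send `data`, wait for the client's acknowledgement, and compute throughput
    // from (elapsed time - RTT/2).
    static double measureThroughputKbPerSec(DataOutputStream out, DataInputStream in,
                                            byte[] data) throws IOException {
        long rtt = measureRttMillis(out, in);

        long start = System.nanoTime();
        out.write(data);           // the defined number of packets / bytes to measure
        out.flush();
        in.readByte();             // client acknowledges the last packet
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        // Subtract RTT/2 to remove the time the acknowledgement spent travelling back.
        long oneWayMillis = Math.max(1, elapsedMillis - rtt / 2);
        return (data.length / 1024.0) / (oneWayMillis / 1000.0);
    }
}
```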