I'm wondering how best to measure network throughput at runtime. I'm writing a client/server application (both in Java). The server regularly sends messages (of compressed media data) over a socket to the client, and I would like to adjust the compression level used by the server to match the network quality.
So I would like to measure the time a big chunk of data (say, 500 KB) takes to completely reach the client, including all delays in between. Tools like iperf don't seem to be an option because they take their measurements by generating their own traffic.
The best idea I could come up with is this: somehow determine the clock offset between client and server, include a server send timestamp with each message, and have the client report back to the server the difference between that timestamp and the time the client received the message. The server can then work out how long each message took to reach the client.
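To make the idea concrete, here is a rough, untested sketch of what I have in mind (the class and method names like `ThroughputProbe`, `sendChunk`, and the `clockOffsetMs` parameter are just placeholders, and the clock offset itself would have to be estimated separately, e.g. with an NTP-style ping exchange):

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;

// Placeholder sketch: timestamp each chunk on the server, have the client
// report the observed (raw) one-way delay back, and derive a throughput
// estimate on the server.
public class ThroughputProbe {

    // Server side: prefix each chunk with its send timestamp.
    // Wall-clock millis are used because System.nanoTime() is not
    // comparable across machines.
    public static void sendChunk(DataOutputStream out, byte[] chunk) throws Exception {
        out.writeLong(System.currentTimeMillis()); // server send timestamp
        out.writeInt(chunk.length);
        out.write(chunk);
        out.flush();
    }

    // Client side: read one chunk and report (receiveTime - sendTimestamp)
    // back to the server, e.g. on a feedback/control stream.
    public static void receiveChunk(DataInputStream in, DataOutputStream feedback) throws Exception {
        long sentAt = in.readLong();
        int length = in.readInt();
        byte[] chunk = new byte[length];
        in.readFully(chunk);                                   // blocks until the whole chunk has arrived
        long rawDelayMs = System.currentTimeMillis() - sentAt; // still includes the clock offset
        feedback.writeLong(rawDelayMs);
        feedback.writeInt(length);
        feedback.flush();
    }

    // Server side: subtract the separately estimated clock offset and
    // turn the remaining transit time into a throughput figure.
    public static double estimateThroughputKBps(long rawDelayMs, int bytes, long clockOffsetMs) {
        long transitMs = Math.max(1, rawDelayMs - clockOffsetMs);
        return (bytes / 1024.0) / (transitMs / 1000.0);
    }
}
```

The server would then feed the resulting throughput estimate into whatever logic picks the compression level for the next chunks.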
Is there an easier way to do this? Are there any libraries for this?