I'm sending multiple UDP packets consecutively to a remote PC. The problem is that if the amount of data is too high, some device somewhere along the path experiences buffer overflow and drops packets. I intend to limit/throttle/control the sending rate of the UDP packets. Can somebody give me some guidance on how to find the optimal sending rate/interval?
By the way, please stop suggesting TCP instead of UDP. The objective is not to send data reliably, but to measure maximum throughput.
Trial and error. Period.
- Set up a second connection (UDP- or TCP-based) that you use ONLY to send control commands.
- Send stats there about missing packets etc. Then both sides can decide whether the data rate is too high.
- Possibly start low, then increase the data rate until you see missing packets.
NEVER (!) assume all packets will arrive. That means you need (!) a way to re-request missing packets (see the sketch below). Even under perfect conditions, packets will get lost sometimes.
If loss is OK and only needs to be minimized, a statistical approach is pretty much the only way I see to handle this.
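As a minimal sketch of the re-request idea, assuming a 4-byte serial number at the start of each data packet and a separate UDP control socket; the address, port, and packet layout here are illustrative, not taken from the question:

    import socket, struct

    CTRL_ADDR = ("192.0.2.1", 9001)   # hypothetical sender-side control endpoint

    def request_retransmit(ctrl_sock, missing_serials):
        """Send a NACK listing missing serial numbers back to the sender."""
        # Pack the count, followed by each missing 32-bit serial number.
        msg = struct.pack("!I", len(missing_serials))
        msg += b"".join(struct.pack("!I", s) for s in missing_serials)
        ctrl_sock.sendto(msg, CTRL_ADDR)

    # Usage: after noticing gaps in the received serial numbers, e.g.
    # request_retransmit(ctrl_sock, [1042, 1043, 1050])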
Despite your request that I not suggest TCP instead of UDP, I have to. In the same paragraph you state that the main purpose of your test is to measure throughput, i.e. bandwidth, and the only way to do that properly without re-inventing the whole TCP stack is to actually USE the TCP stack.
Large parts of TCP are designed to deal with flow control, and when TCP streams are used you get exactly what you need: the maximum bandwidth for the given connection, with ease and without reinventing the wheel.
If this answer doesn't suit you, that probably means you have to re-state your requirements; as written, they are in conflict.
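As an illustration of that approach, here is a minimal sketch (in Python, with placeholder host/port values) that measures throughput by simply blasting data over a TCP connection for a fixed time and letting TCP's congestion control find the rate:

    import socket, time

    def tcp_sink_server(port=5001):
        """Accept one connection and discard everything it sends."""
        with socket.create_server(("", port)) as srv:
            conn, _ = srv.accept()
            with conn:
                while conn.recv(65536):
                    pass

    def tcp_throughput_client(host="192.0.2.1", port=5001, seconds=10):
        """Send zero-filled chunks for `seconds` and report the achieved rate."""
        chunk = b"\x00" * 65536
        sent = 0
        with socket.create_connection((host, port)) as s:
            start = time.monotonic()
            while time.monotonic() - start < seconds:
                s.sendall(chunk)
                sent += len(chunk)
            elapsed = time.monotonic() - start
        print(f"throughput: {sent / elapsed / 1e6:.1f} MB/s")

Run the sink on the remote PC, the client on the local one, and the printed figure is the bandwidth TCP converged to on that path.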
Try this then:
- Start with packets of 1KB size (for example).
- For that size, calculate how many packets per second it is OK to send. For example, 1 Gbit Ethernet gives roughly 100 MB/s of usable bandwidth, which is about 100,000 packets of 1 KB per second.
- Create a packet whose first 4 bytes are a serial number; the rest can be anything. If you are just testing, fill it with zeroes or random data.
- On the sending side, create packets and push them at the previously calculated RATE for one second. Measure the time actually spent, and Sleep() for the rest of the time, waiting for the next time slot.
- On the receiving end, gather packets and look at their serial numbers. If packets are missing, send some info about it to the sender over the second connection (see the receiver sketch at the end of this answer).
- The sender, on being informed about lost packets, should do something like RATE = RATE * 0.9, i.e. reduce the sending rate to 90% of the previous one.
- The sender should gradually increase the rate (say by 1%) every few seconds as long as it doesn't get any 'lost packets' messages.
- After some time your RATE will converge to what you wanted in the first place (a sender-side sketch of this loop follows right after this list).
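Here is a minimal sender-side sketch of that loop, assuming Python and a plain UDP control socket for loss reports; the addresses, ports, and initial rate are placeholder values, not something from the question:

    import socket, struct, time

    def run_sender(dest=("192.0.2.1", 9000), ctrl_port=9001, rate=100_000):
        data_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        ctrl_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        ctrl_sock.bind(("", ctrl_port))      # loss reports arrive here
        ctrl_sock.setblocking(False)

        payload = b"\x00" * 1020             # 4-byte serial + 1020 bytes = 1 KB packet
        serial = 0
        while True:
            slot_start = time.monotonic()
            for _ in range(rate):            # push RATE packets in this one-second slot
                data_sock.sendto(struct.pack("!I", serial & 0xFFFFFFFF) + payload, dest)
                serial += 1
            # Check the control channel for any "lost packets" report.
            lost = False
            try:
                while ctrl_sock.recv(64):
                    lost = True
            except BlockingIOError:
                pass
            if lost:
                rate = max(1000, int(rate * 0.9))   # back off to 90% of the current rate
            else:
                rate = int(rate * 1.01)             # creep up by ~1% (the answer suggests only every few seconds)
            # Sleep out whatever is left of the one-second time slot.
            remaining = 1.0 - (time.monotonic() - slot_start)
            if remaining > 0:
                time.sleep(remaining)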
Some considerations:
- If the back connection is TCP, you'll have some overhead there.
- If the back connection is UDP, you can also have dropped packets there (because you are flooding the channel), and the sender might never learn that packets were dropped.
- The algorithm above won't solve the missing-data or out-of-order-data issues; it will just measure the throughput.
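And a matching receiver-side sketch that gathers packets, detects gaps in the serial numbers, and reports them back over the control connection; again, all names and ports are illustrative:

    import socket, struct

    def run_receiver(data_port=9000, ctrl_addr=("192.0.2.1", 9001)):
        data_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        data_sock.bind(("", data_port))
        ctrl_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

        expected = 0                          # next serial number we expect to see
        while True:
            packet, _ = data_sock.recvfrom(2048)
            serial = struct.unpack("!I", packet[:4])[0]
            if serial > expected:
                # Gap detected: tell the sender how many packets went missing.
                missing = serial - expected
                ctrl_sock.sendto(struct.pack("!I", missing), ctrl_addr)
            # Late or out-of-order packets are simply ignored in this sketch.
            expected = max(expected, serial + 1)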