I'm sending multiple UDP packets consecutively to a remote PC. The problem is that if the amount of data is too high, some device somewhere along the channel experiences buffer overflow. I intend to limit/throttle/control the sending rate of the UDP packets. Can somebody give me some guidance on how to find the optimal sending rate/interval?
By the way, please stop suggesting TCP instead of UDP. The objective is not to send data reliably, but to measure maximum throughput.
Trial and error. Period.
NEVER (!) assume all packets will arrive. That means you need (!) a way to ask again for missing packets. Even under perfect conditions, packets will sometimes get lost.
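One common way to make missing packets detectable is to prepend a sequence number to every datagram; the receiver then knows exactly which numbers it never saw and can request them again over a back channel. A minimal Python sketch (the 4-byte header layout, address, and port are illustrative assumptions, not something from the question):

    import socket
    import struct

    HEADER = struct.Struct("!I")  # assumed framing: 4-byte big-endian sequence number

    def send_numbered(sock, addr, payloads):
        """Send each payload prefixed with its sequence number."""
        for seq, data in enumerate(payloads):
            sock.sendto(HEADER.pack(seq) + data, addr)

    def missing_sequences(received_seqs, expected_count):
        """Sequence numbers the receiver never saw, i.e. the ones to ask for again."""
        return sorted(set(range(expected_count)) - set(received_seqs))

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_numbered(sock, ("192.0.2.10", 9000), [b"x" * 1000 for _ in range(100)])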
If loss is OK and only needs to be minimized, a statistical approach is pretty much the only way I see to handle this: measure the loss you actually get at different rates and pick the rate accordingly.
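For example, "trial and error" with a statistical flavor could look like the sketch below: try a set of candidate rates, measure the loss fraction at each one over several runs so a single lucky or unlucky burst doesn't mislead you, and keep the fastest rate whose average loss stays under your tolerance. The measure_loss callback is an assumed hook that runs one trial at the given rate and returns the observed loss fraction; it is not defined in the question.

    def best_rate(candidate_rates, measure_loss, max_loss=0.01, trials=5):
        """Highest candidate rate whose average measured loss stays below max_loss,
        averaged over several trials to smooth out random variation."""
        acceptable = []
        for rate in sorted(candidate_rates):
            losses = [measure_loss(rate) for _ in range(trials)]
            if sum(losses) / len(losses) <= max_loss:
                acceptable.append(rate)
        return max(acceptable) if acceptable else None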
Try this then:

- send packets at the current RATE for the length of one time slot, then Sleep() for the rest of the time, waiting for the new time slot
- when the receiver reports missing packets over the back connection, set RATE = RATE * .9, i.e. reduce the sending rate to 90% of the previous one

Some considerations:

- if the back connection is TCP, you'll have some overhead there
- if the back connection is UDP, you can also have dropped packets there (because you are flooding the channel) and the sender would never know that those packets were dropped
- the algorithm above won't solve the missing-data or out-of-order-data issues; it will just measure the throughput
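A rough sketch of that loop in Python, assuming a get_loss_report() hook that tells the sender whether the receiver saw losses during the last slot (the slot length, packet size, destination address, and the feedback hook are all assumptions for illustration):

    import socket
    import time

    SLOT = 0.1            # assumed length of one time slot, in seconds
    PACKET = b"x" * 1400  # assumed payload per datagram

    def run_sender(dest, initial_rate, get_loss_report):
        """Send `rate` packets per slot, sleep out the rest of the slot,
        and cut the rate to 90% whenever the receiver reports loss."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        rate = initial_rate
        while True:
            start = time.monotonic()
            for _ in range(int(rate)):
                sock.sendto(PACKET, dest)
            # sleep for the rest of the time, waiting for the new time slot
            remaining = SLOT - (time.monotonic() - start)
            if remaining > 0:
                time.sleep(remaining)
            # feedback over the back connection: drop to 90% of the previous rate
            if get_loss_report():
                rate *= 0.9

The rate this settles at is your throughput estimate; as noted above, nothing here recovers lost or reordered data.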
Despite your request that I not suggest TCP instead of UDP, I have to. In the same paragraph you state that the main purpose of your test is to measure throughput, that is, bandwidth, and the only way to do that properly without reinventing the whole TCP stack is to actually USE the TCP stack.
Large parts of TCP are designed to deal with flow control, and when TCP streams are used you'll get exactly what you need: the maximum bandwidth for the given connection, easily and without reinventing the wheel.
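For instance, a throughput measurement over TCP can be as simple as timing a bulk transfer and dividing bytes by seconds; the host, port, and transfer size below are placeholder assumptions, and the remote end is assumed to be something that just drains the connection:

    import socket
    import time

    def measure_tcp_throughput(host, port, total_bytes=100 * 1024 * 1024):
        """Push total_bytes through a TCP connection and report the achieved
        rate; TCP's flow and congestion control find the sustainable bandwidth."""
        chunk = b"x" * 65536
        sent = 0
        with socket.create_connection((host, port)) as sock:
            start = time.monotonic()
            while sent < total_bytes:
                sock.sendall(chunk)
                sent += len(chunk)
            elapsed = time.monotonic() - start
        return sent / elapsed  # bytes per second

    # e.g. with `nc -l 9000 > /dev/null` (or similar) listening on the remote PC:
    rate = measure_tcp_throughput("192.0.2.10", 9000)
    print(f"throughput: {rate * 8 / 1e6:.1f} Mbit/s")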
If this answer doesn't suit you, that probably means you need to restate your requirements for the problem; as written, they are in conflict.