Which is the best approach to send large UDP packets?

Published 2019-05-07 15:38

Question:

I have an Android application that needs to send data over UDP every 100 milliseconds. Each UDP packet averages 15,000 bytes, and packets are sent as broadcast.

Every 100 milliseconds, the lines below are executed in a loop:

DatagramPacket sendPacket = new DatagramPacket(sendData, sendData.length, broadcast, 9876); 
clientSocket.send(sendPacket);

The application starts off working fine, but after about a minute the frequency of received packets decreases until packets no longer arrive at the destination.

The theoretical limit (on Windows) for the maximum size of a UDP packet is 65,507 bytes.

I know the typical MTU of a network is 1,500 bytes, and when I send a bigger packet it is broken into several fragments; if one fragment does not reach the destination, the whole packet is lost.

I do not understand why the packets arrive correctly for the first minute or so and then stop arriving. So I wonder: what would be the best approach to solve this problem?

Answer 1:

It's exactly the problem you described: each datagram you broadcast is split into 44 packets, and if any one of them is lost, the whole datagram is lost. As soon as you have enough traffic to cause, say, 1% packet loss, you see about 35% datagram loss; 2% packet loss means about 60% datagram loss.
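The arithmetic behind those figures is straightforward: a datagram survives only if all of its fragments do. A small sketch of the calculation (class and method names are just for illustration):

```java
// Probability that a fragmented datagram is lost, given independent
// per-packet loss: a datagram survives only if all fragments survive.
public class DatagramLoss {
    static double datagramLossRate(double packetLoss, int fragments) {
        return 1.0 - Math.pow(1.0 - packetLoss, fragments);
    }

    public static void main(String[] args) {
        // A 65,507-byte datagram over a 1,500-byte MTU splits into ~44 fragments.
        System.out.printf("1%% packet loss -> %.0f%% datagram loss%n",
                100 * datagramLossRate(0.01, 44));
        System.out.printf("2%% packet loss -> %.0f%% datagram loss%n",
                100 * datagramLossRate(0.02, 44));
    }
}
```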

You need to keep your broadcast datagrams small enough not to fragment. If you have a stream of 65,507-byte chunks such that you need the whole chunk for the data to be useful, and you cannot change that, then naive UDP broadcast was a bad choice.

I'd have to know a lot more about the specifics of your application to make a sensible recommendation. But if you have a chunk of data around 64 KB such that you need the whole chunk for it to be useful, and you can't change that, then you should use an approach that divides the data into pieces with some redundancy, so that some pieces can be lost. With erasure coding, you can divide 65,507 bytes of data into 46 chunks of 1,490 bytes each, such that the original data can be reconstructed from any 44 chunks. This tolerates moderate datagram loss at the cost of only about a 4% increase in data size.
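A full erasure code is beyond a forum answer, but the prerequisite for any such scheme is splitting the large buffer into datagrams that fit under the MTU, each tagged so the receiver can reassemble them. A minimal sketch (the 8-byte header layout and class name are hypothetical, not from the question):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Split a large payload into chunks small enough to avoid IP fragmentation.
// 1472 = 1500 (Ethernet MTU) - 20 (IPv4 header) - 8 (UDP header); we reserve
// 8 more bytes for a hypothetical application header: chunk index + chunk count.
public class Chunker {
    static final int MAX_PAYLOAD = 1472 - 8;

    static List<byte[]> split(byte[] data) {
        int count = (data.length + MAX_PAYLOAD - 1) / MAX_PAYLOAD;
        List<byte[]> chunks = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            int off = i * MAX_PAYLOAD;
            int len = Math.min(MAX_PAYLOAD, data.length - off);
            ByteBuffer buf = ByteBuffer.allocate(8 + len);
            buf.putInt(i);      // chunk index
            buf.putInt(count);  // total chunk count
            buf.put(data, off, len);
            chunks.add(buf.array());
        }
        return chunks;
    }
}
```

Each chunk would then be sent as its own `DatagramPacket`, and an erasure-coding layer would add the redundant chunks on top of this framing.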



Answer 2:

TCP is used instead of UDP precisely when you need reliable, correctly ordered delivery. But assuming you really need UDP for broadcasting, you could:

  1. Debug the network to see how and where packets are lost, or whether it is the receiver that is clogged or lagging. Often you don't have control over these things, though. Is a Wi-Fi network involved? If so, it is hard to get good QoS.

  2. Do something at the application layer to ensure ordering and reliable delivery. For example, SIP normally runs over UDP, but the protocol uses transactions and sequence numbers so that clients and servers retransmit messages as needed.
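     The simplest building block for that is a sequence number prepended to each datagram, letting the receiver detect gaps and request or await retransmission. A sketch under hypothetical framing (the 8-byte header and class name are illustrative):

     ```java
     import java.nio.ByteBuffer;

     // Sketch: prepend a sequence number so the receiver can detect loss,
     // the way SIP-style protocols track messages at the application layer.
     public class SeqNumbers {
         static long nextSeq = 0;
         static long lastSeen = -1;

         static byte[] wrap(byte[] payload) {
             ByteBuffer buf = ByteBuffer.allocate(8 + payload.length);
             buf.putLong(nextSeq++);  // sequence number header
             buf.put(payload);
             return buf.array();
         }

         // Returns how many datagrams went missing since the last one seen.
         static long gap(byte[] datagram) {
             long seq = ByteBuffer.wrap(datagram).getLong();
             long missed = (lastSeen < 0) ? 0 : seq - lastSeen - 1;
             lastSeen = seq;
             return missed;
         }
     }
     ```

     A real protocol would pair this with acknowledgements and a retransmit timer on the sender side.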

  3. Implement packet loss concealment: using some maths, the receiver can recreate a lost packet, analogous to how a RAID array can lose a drive and still function.

The fact that your setup works fine for a minute and then degrades hints at either network congestion or software congestion on the sender or receiver side.

Can you do some packet captures with Wireshark and share the results?