I would like to simulate packet delay and loss for UDP and TCP on Linux to measure the performance of an application. Is there a simple way to do this?
iptables(8) has a statistic match module that can be used to match every nth packet. To drop matched packets, just append -j DROP.
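A minimal sketch of what that looks like (the interface name and the every-10th-packet figure are assumptions for illustration):

    # Drop every 10th packet arriving on eth0
    iptables -A INPUT -i eth0 -m statistic --mode nth --every 10 --packet 0 -j DROP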
You can try NIST Net (http://snad.ncsl.nist.gov/nistnet/). It's a fairly old NIST project (last release in 2005), but it works for me.
netem leverages functionality already built into Linux and userspace utilities to simulate networks. This is actually what Mark's answer refers to, by a different name.
The examples on their homepage already show how you can achieve what you've asked for:
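A minimal sketch along those lines, assuming eth0 is the interface you want to shape (the delay and loss figures are illustrative):

    # Add 100ms of delay to every packet leaving eth0
    tc qdisc add dev eth0 root netem delay 100ms
    # Later, change the existing rule to also drop 1% of packets
    tc qdisc change dev eth0 root netem delay 100ms loss 1%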
Note that you should use tc qdisc add if you have no rules for that interface, or tc qdisc change if you already have rules for that interface. Attempting to use tc qdisc change on an interface with no rules will give the error "RTNETLINK answers: No such file or directory".
An easy-to-use network fault injection tool is Saboteur, which can simulate a variety of network faults.
For dropped packets I would simply use iptables and the statistic module.
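The rule this answer refers to appears to have been lost in extraction; a plausible reconstruction, given the 1% figure mentioned below (the INPUT chain is an assumption):

    # Drop incoming packets with a 1% probability
    iptables -A INPUT -m statistic --mode random --probability 0.01 -j DROP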
The above will drop an incoming packet with a 1% probability. Be careful: with a probability much above about 0.14, most of your TCP connections will most likely stall completely.
Take a look at man iptables and search for "statistic" for more information.
One of my colleagues uses tc to do this. Refer to the man page for more information. You can see an example of its usage here.
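For reference, a single tc/netem invocation can emulate both delay and loss at once; a minimal sketch, assuming eth0 and illustrative figures:

    # Add 100ms +/- 10ms of delay plus 1% packet loss on eth0
    tc qdisc add dev eth0 root netem delay 100ms 10ms loss 1%
    # Remove the emulation when finished
    tc qdisc del dev eth0 root netem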