I would like to simulate packet delay and loss for UDP and TCP on Linux to measure the performance of an application. Is there a simple way to do this?
This tutorial on networked physics simulations includes a C++ class in its sample code that simulates latency and packet loss on a UDP connection, and it may serve as a guide. See the public latency and packetLoss variables of the Connection class, found in the Connection.h file of the downloadable source code.
I haven't tried it myself, but this page lists plugin modules for Linux's built-in iptables packet-filtering system. One of the modules, called "nth", lets you set up a rule that drops packets at a configurable rate. It might be a good place to start, at least; a rough sketch is below.
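For illustration only (not tested here either): the standalone "nth" patch has since been folded into mainline iptables as the statistic match, so dropping one in every ten incoming packets would look roughly like this:

    # drop packet 0 of every group of 10 packets arriving on INPUT (~10% loss)
    iptables -A INPUT -m statistic --mode nth --every 10 --packet 0 -j DROP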
One of the most widely used tools in the scientific community for this purpose is DummyNet. Once you have installed the ipfw kernel module, introducing a 50 ms propagation delay between two machines takes only a couple of commands, and adding a plr (packet loss rate) parameter to the same pipe also gives you 50% packet loss; both are sketched below.
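A minimal sketch, assuming stock dummynet/ipfw syntax; the pipe number and the 10.0.0.x addresses are placeholders for your own two machines:

    # route traffic between the two hosts through dummynet pipe 1
    ipfw add pipe 1 ip from 10.0.0.1 to 10.0.0.2
    # 50 ms propagation delay on the pipe
    ipfw pipe 1 config delay 50ms
    # same delay plus a 50% packet loss rate (plr)
    ipfw pipe 1 config delay 50ms plr 0.5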
More details can be found here.