Simulate delayed and dropped packets on Linux

Posted 2020-01-23 03:20

I would like to simulate packet delay and loss for UDP and TCP on Linux to measure the performance of an application. Is there a simple way to do this?

9 Answers
贼婆χ
#2 · 2020-01-23 04:23

This tutorial on networking physics simulations includes a C++ class in its sample code for simulating latency and packet loss on a UDP connection, and may offer some guidance. See the public latency and packetLoss variables of the Connection class in the Connection.h file of the downloadable source code.

混吃等死
#3 · 2020-01-23 04:25

I haven't tried it myself, but this page lists plugin modules for Linux's built-in iptables packet-filtering system. One of them, called "nth", lets you set up a rule that drops a configurable fraction of packets. It might be a good place to start, at least.
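For what it's worth, the standalone "nth" patch has since been folded into the statistic match that ships with modern iptables, so you no longer need a separate plugin. A minimal sketch of both of its modes (the chain, rates, and counts here are only illustrative):

# Drop every 10th incoming packet (deterministic "nth" mode)
iptables -A INPUT -m statistic --mode nth --every 10 --packet 0 -j DROP

# Drop incoming packets at random with 10% probability
iptables -A INPUT -m statistic --mode random --probability 0.1 -j DROP

Remember to delete the rules again (iptables -D INPUT ...) once you are done measuring.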

虎瘦雄心在
#4 · 2020-01-23 04:26

One of the most widely used tools in the scientific community for this purpose is DummyNet. Once you have installed the ipfw kernel module, introducing 50 ms of propagation delay between two machines takes just two commands:

./ipfw pipe 1 config delay 50ms
./ipfw add 1000 pipe 1 ip from $IP_MACHINE_1 to $IP_MACHINE_2

To also introduce 50% packet loss, run:

./ipfw pipe 1 config plr 0.5
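One caveat, based on my understanding of dummynet: each pipe config replaces the pipe's previous settings, so if you want the delay and the loss in effect at the same time, combine them in a single command (same pipe number and machines as above):

# 50 ms delay and 50% packet loss rate on the same pipe
./ipfw pipe 1 config delay 50ms plr 0.5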

More details are available in the DummyNet documentation.
