tcpulse: A TCP/UDP load generator that provides fine-grained, flow-level control

43 points by y_uuki | 7 comments | 6/8/2025, 9:19:40 PM | github.com

Comments (7)

T3OU-736 · 30d ago
Looks like a neat tool, and a valuable addition to the network bandwidth/latency/performance toolkit.

(Admittedly, it has been a while since I have done this.) It did feel odd that there was no mention of time discipline for the client/server and its impact on finer-grained stats. Perhaps at least mention that a running, stable NTP setup (or, ideally, PTP, though that is a fair bit more involved) is strongly recommended, with NTP's own jitter kept low?
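For anyone following up on this: a quick way to sanity-check local time discipline is to query the time daemon directly. This sketch assumes chronyd; on an ntpd host, `ntpq -p` gives comparable per-peer jitter figures.

```shell
# Check how well the local clock is disciplined before trusting
# sub-millisecond latency stats (assumes chronyd is running):
chronyc tracking    # "System time" offset and root dispersion
chronyc sources -v  # per-source jitter and reachability
```

If the system-time offset or source jitter is on the same order as the latencies you are measuring, the finer-grained percentiles should be treated with suspicion.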

PeterWhittaker · 31d ago
Any possibility of using it with diodes? I've used iperf for this (two separate instances), but iperf2 doesn't support this. We've also written our own in Rust, tightly coupled to our needs, but if there are better tools out there, well, we'd be silly not to investigate. Thanks!
y_uuki · 30d ago
Currently tcpulse only supports a `pingpong` mode, so true one-way (“diode”) transfers aren’t available out of the box. That said, the design is simple enough that you could add an option or a wrapper to lock the flow in one direction and implement a diode mode. Happy to review a PR if someone wants to add it.
bwen · 31d ago
Cool little tool! Is it possible for the server to also print out performance metrics for each peer it is connected to?
y_uuki · 31d ago
Thank you! No, tcpulse only measures and prints metrics on the client side, not the server side. However, since each client outputs its own measurement results, running multiple clients effectively gives you performance metrics per peer.
defenestrated · 31d ago
Any idea how this differs from iperf?
y_uuki · 31d ago
iperf3 is a link “speedometer” – spin it up between two hosts, crank -P or -u -b, and it tells you max TCP/UDP throughput (and jitter/loss if you like).

tcpulse is a fine-grained traffic “microscope” – you dial exact CPS or concurrent sockets, spray dozens of targets from one client, and get p90/p95/p99 latencies per flow.

Use iperf3 for a quick bandwidth check; use tcpulse when you need repeatable, controlled connection patterns and detailed latency stats across many backends.
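To make the contrast concrete, here is roughly what each invocation looks like. The iperf3 flags are standard; the tcpulse flag spellings below are illustrative assumptions, not confirmed syntax, so check `tcpulse --help` or the README for the real interface.

```shell
# Quick bandwidth check with iperf3:
iperf3 -s                       # server side
iperf3 -c server -P 8           # client: 8 parallel TCP streams
iperf3 -c server -u -b 100M     # client: UDP at 100 Mbit/s, reports jitter/loss

# Controlled connection patterns with tcpulse (flag names illustrative):
tcpulse serve -l 0.0.0.0:9100           # serve side
tcpulse connect --rate 1000 server:9100 # client: pin connections/sec, get latency percentiles
```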