Show HN: AsyncFlow – stop guessing p95 latency, simulate your system before prod

gb00 | 8/24/2025, 4:39:02 PM
Hi HN, I’ve been working on *AsyncFlow*, an open-source simulator for async/distributed backends built on top of SimPy.

The idea is simple: instead of pushing code straight to prod and hoping p95/p99 latency and throughput hold, you can model your topology, generate workloads, and run scenarios offline.

The workflow:
- Define the topology in YAML or Python: for example client → load balancer → servers, plus the edges between them
- Generate stochastic traffic (Poisson arrivals, exponential service times)
- Run the simulation and collect metrics
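To make the YAML option concrete, a topology file could look something like this. The field names below are purely illustrative; this is a hypothetical schema sketch, not AsyncFlow's actual format.

```yaml
# Hypothetical schema -- illustrative only, not AsyncFlow's real config format.
topology:
  nodes:
    - {id: client, type: generator, arrivals: poisson, rate: 100}  # req/s
    - {id: lb, type: load_balancer, policy: round_robin}
    - {id: srv-1, type: server, service: exponential, mean_ms: 8}
    - {id: srv-2, type: server, service: exponential, mean_ms: 8}
  edges:
    - client -> lb
    - lb -> srv-1
    - lb -> srv-2
```
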

Metrics out of the box:
- Latency distribution (p50, p95, p99, min/max)
- Throughput (per server and system-wide)
- Queue lengths / ready queues
- RAM in use
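For reference, the tail percentiles above can be computed from raw latency samples with the nearest-rank method. This is just a sketch of the general technique; AsyncFlow may use a different estimator.

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest sample with at least p% of values <= it."""
    s = sorted(samples)
    k = math.ceil(p / 100 * len(s)) - 1  # 1-based rank -> 0-based index
    return s[max(k, 0)]

latencies_ms = [5, 7, 8, 9, 12, 15, 20, 35, 80, 120]
p50 = percentile(latencies_ms, 50)  # 12: half the samples are at or below it
p95 = percentile(latencies_ms, 95)  # 120: with only 10 samples, p95 is the max
```

With small sample counts the high percentiles collapse onto the maximum, which is why simulators run thousands of requests before quoting a p99.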

The idea is not to replace production monitoring, but to give engineers a "what-if" playground before deployment:
• What if latency on a link spikes for 30s?
• What if I add or remove a server behind the LB?
• How does p99 shift if service times degrade?
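The last question can be answered with a few lines of queueing simulation. Below is a minimal single-server sketch in plain Python (not AsyncFlow's API) with Poisson arrivals and exponential service times: degrading the mean service time from 10 ms to about 11 ms barely moves the average, but the p99 grows sharply because tail latency blows up as utilization approaches 1.

```python
import random

def mm1_latencies(arrival_rate, service_rate, n, seed=7):
    """Single-server queue: Poisson arrivals, exponential service times."""
    rng = random.Random(seed)
    clock = 0.0    # arrival clock (seconds)
    free_at = 0.0  # time the server next becomes idle
    out = []
    for _ in range(n):
        clock += rng.expovariate(arrival_rate)  # next Poisson arrival
        start = max(clock, free_at)             # queue if the server is busy
        free_at = start + rng.expovariate(service_rate)
        out.append(free_at - clock)             # queueing delay + service time
    return out

def p99(samples):
    s = sorted(samples)
    return s[int(0.99 * len(s)) - 1]

# 80 req/s against a 100 req/s server (~80% utilization) ...
baseline = p99(mm1_latencies(arrival_rate=80, service_rate=100, n=20_000))
# ... versus the same load when service degrades to 90 req/s (~89% utilization)
degraded = p99(mm1_latencies(arrival_rate=80, service_rate=90, n=20_000))
```

Running both scenarios with the same seed makes the comparison deterministic, so the p99 shift reflects the service-rate change rather than sampling noise.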

It’s still early-stage, but I’d love feedback on whether this approach makes sense for pre-production capacity planning and what scenarios would be most useful to support.

Repo in the first comment!
