Show HN: AsyncFlow – Event-loop aware simulator for async distributed systems

gb00 | 8/19/2025, 4:14:00 PM | github.com
Hi HN,

I’m a theoretical-physics–turned–software engineer, and over the last several months I built AsyncFlow, an open-source discrete-event simulator for asynchronous distributed backends.

Idea: create a digital twin of your service before you deploy. You define a topology in YAML (client → optional load balancer → N servers → network edges with stochastic latency) and a workload driven by a stochastic request generator. The simulator runs full request lifecycles through each server’s event loop with explicit models for CPU work (blocking), I/O waits (non-blocking), and RAM residency.
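The core mechanic, a single event loop where CPU work blocks and I/O waits don't, can be illustrated with a toy model. This is a hypothetical sketch for intuition, not AsyncFlow's actual code or API; it assumes one fixed CPU phase followed by one I/O phase per request:

```python
def simulate(arrivals, cpu_ms, io_ms):
    """Toy single-event-loop model: CPU work serializes on the loop,
    while I/O waits overlap freely. Returns completion time (ms) per request."""
    loop_free = 0.0                  # when the event loop next becomes idle
    completions = []
    for t in sorted(arrivals):
        start = max(t, loop_free)            # queue behind earlier CPU work
        loop_free = start + cpu_ms           # CPU blocks the loop
        completions.append(loop_free + io_ms)  # I/O does not block it
    return completions

# Three simultaneous arrivals, 10 ms CPU + 50 ms I/O each:
# latencies fan out by the CPU time only, since I/O overlaps.
print(simulate([0, 0, 0], 10, 50))  # → [60, 70, 80]
```

Even this toy version shows why an event-loop-aware model matters: with the same total work, latency under load is dominated by the serialized CPU slices, not the I/O.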

What you get: production-like metrics (p50/p95/p99 latency, RPS, queue lengths) and plots to answer questions like “How many cores do I need to keep p99 < 100 ms?” without spinning up infra or running an expensive load test.
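For readers unfamiliar with tail-latency metrics: given a list of simulated per-request latencies, p50/p95/p99 can be computed with a nearest-rank percentile. A minimal standalone helper (not AsyncFlow's implementation) looks like:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest value with at least p% of
    the samples at or below it."""
    s = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(s)) - 1)
    return s[k]

latencies_ms = list(range(1, 101))        # stand-in for simulator output
p50 = percentile(latencies_ms, 50)        # → 50
p99 = percentile(latencies_ms, 99)        # → 99
```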

Example with output (2-server LB): https://github.com/AsyncFlow-Sim/AsyncFlow/tree/main/example...

Quick start

pip install asyncflow-sim

Repo & README: https://github.com/AsyncFlow-Sim/AsyncFlow

Why this vs load testing? Load tests (Locust, k6) measure a real system. AsyncFlow simulates one that doesn’t exist yet, so you can compare designs and size capacity offline, then validate with a load test later.
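A closed-form queueing baseline is a useful sanity check on any simulator's sizing answers. Under M/M/1 assumptions (Poisson arrivals, exponential service, which real async workloads violate — hence the case for simulating), response time is exponential with rate μ − λ, so its 99th percentile is ln(100)/(μ − λ). A hedged back-of-envelope sketch, with all numbers below purely illustrative:

```python
import math

def mm1_p99_ms(arrival_rps, service_rps):
    """M/M/1 response time is Exp(mu - lambda); its 99th percentile
    is ln(100)/(mu - lambda). Returns p99 in milliseconds."""
    if arrival_rps >= service_rps:
        return math.inf                      # unstable: queue grows without bound
    return math.log(100) / (service_rps - arrival_rps) * 1000

def servers_needed(total_rps, per_server_rps, target_p99_ms, max_servers=1000):
    """Smallest server count whose evenly split load meets the p99 target."""
    for c in range(1, max_servers + 1):
        if mm1_p99_ms(total_rps / c, per_server_rps) <= target_p99_ms:
            return c
    raise ValueError("target unreachable even at max_servers")

# 1000 RPS total, 200 RPS capacity per server, p99 < 100 ms:
print(servers_needed(1000, 200, 100))  # → 7
```

Where the simulator's answer diverges sharply from this analytic bound, that gap itself is informative: it usually points at event-loop blocking or queueing effects the closed form can't capture.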

Status (alpha):

- Works end-to-end; strict Pydantic schemas; ready-made plots; Pythonic builder.
- Limits: simple network (latency + optional drops), single event loop per server, linear endpoint flows, no thread concurrency, single core per server.
- Roadmap: Monte Carlo runs w/ confidence intervals, event injection (server down/up, network spikes), cache step, richer network model, richer node structure.

I’d love feedback on:

- YAML vs Python builder ergonomics
- Which metrics/plots you want by default
- Realistic knobs you're missing for what-if capacity planning

MIT-licensed. Thanks!
