Show HN: Kodosumi – Open-source runtime for AI agents

Padierfind | 9/14/2025, 10:31:26 AM | kodosumi.io
Hi Hacker News, I’m part of the team behind Kodosumi. We built it because deploying agentic services in production was much harder than building proof-of-concepts. Many frameworks let you build agents, but when you want them to run long tasks, scale across machines, or be observed and monitored reliably, things fall apart.

Kodosumi is our answer to that. It’s an open-source runtime that uses Ray + FastAPI + Litestar to let you:

Deploy “agents” and “flows” and expose them as APIs with minimal configuration (a single YAML file)

Scale horizontally for bursty workloads and long-running tasks without losing stability

Monitor real-time status, logs, and dashboards, so you can see what’s going on under the hood

Avoid vendor lock-in (you can plug in your own LLMs, vector stores, frameworks) and deploy on cloud, on-prem, Docker, Kubernetes, etc.
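To make the “single YAML” point above concrete, here is a rough sketch of what a deployment config in this style could look like. This is an illustrative example only, not Kodosumi’s actual schema: every field name here (service, entrypoint, scaling, observability, llm) is an assumption.

```yaml
# Hypothetical deployment config -- illustrative sketch, not Kodosumi's real schema.
# All field names below are assumptions for the sake of the example.
service:
  name: research-agent
  entrypoint: agents.research:flow   # module:callable exposing the agent flow
scaling:
  min_replicas: 1                    # scale down when idle
  max_replicas: 16                   # burst capacity for spiky, long-running workloads
observability:
  logs: true                         # stream logs to the dashboard
  dashboard: true
llm:
  provider: self-hosted              # bring your own model endpoint (no vendor lock-in)
  endpoint: http://localhost:8000/v1
```

The idea such a file captures: one declarative document describes the entrypoint, scaling bounds, and observability switches, and the runtime handles the rest.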

We’re still early, so some parts are in active development. But if you are working with AI agents — especially ones that need to run reliably over long durations / scale up under load — we think Kodosumi could save you a lot of infra pain.

Happy to answer questions, discuss comparisons to e.g. LangChain, Ray Serve, or custom setups, and hear criticisms.
