Show HN: Sim Studio – Open-Source Agent Workflow GUI
Our repo is https://github.com/simstudioai/sim, docs are at https://docs.simstudio.ai/introduction, and we have a demo here: https://youtu.be/JlCktXTY8sE?si=uBAf0x-EKxZmT9w4
Building reliable, multi-step agent systems with current frameworks often gets complicated fast. In OpenAI's 'practical guide to building agents', they claim that the non-declarative approach and single multi-step agents are the best path forward, but from experience and experimentation, we disagree. Debugging these implicit flows across multiple agent calls and tool uses is painful, and iterating on the logic or prompts becomes slow.
We built Sim Studio because we believe defining the workflow explicitly and visually is the key to building more reliable and maintainable agentic applications. In Sim Studio, you design the entire architecture out of agent blocks, each with a system prompt, a choice of models (hosted, or local via Ollama), tools with granular control over tool use, and structured output.
We have plenty of pre-built integrations that you can use as standalone blocks or as tools for your agents. The nodes are all connected with if/else conditional blocks, LLM-based routing, loops, and branching logic for specialized agents.
Also, the visual graph isn't just for prototyping; it's actually executable. You can run simulations of a workflow 1, 10, or 100 times to see how a small change to a system prompt, the underlying model, or a tool call impacts the overall performance of the workflow.
You can trigger the workflows manually, deploy as an API and interact via HTTP, or schedule the workflows to run periodically. They can also be set up to trigger on incoming webhooks and deployed as standalone chat instances that can be password or domain-protected.
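To make that concrete, here's a rough sketch of what calling a deployed workflow over HTTP could look like. The endpoint path, header name, and payload shape below are illustrative assumptions, not the actual API; check the docs for the real interface.

```ts
// Hypothetical sketch of invoking a workflow deployed as an API.
// The URL path, auth header, and body shape are assumptions for illustration only;
// see https://docs.simstudio.ai for the actual deployment interface.
const response = await fetch(
  "https://your-sim-studio-host/api/workflows/<workflow-id>/execute",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-API-Key": "<your-api-key>", // assumed auth scheme
    },
    body: JSON.stringify({ input: "Summarize the latest support tickets" }),
  }
);
const result = await response.json();
console.log(result);
```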
We have granular trace spans, logs, and observability built-in so you can easily compare and contrast performance across different model providers and tools. All of these things enable a tighter feedback loop and significantly faster iteration.
So far, users have built deep research agents to detect application fraud, chatbots to interface with their internal HR documentation, and agents to automate communication between manufacturing facilities.
Sim Studio is Apache 2.0 licensed, and fully open source.
We're excited about bringing a visual, workflow-centric approach to agent development. We think it makes building robust, complex agentic workflows far more accessible and reliable. We'd love to hear the HN community's thoughts!
In the application, this translates into features like custom tool calling with code execution, JSON Schema input for response formats, and more. I'd love to hear your thoughts after using Sim Studio - let us know how we compare to the other workflow builders!
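As a small illustration of the response-format piece, a structured output could be constrained with a standard JSON Schema along these lines (the fields here are made up for the example; the exact input format Sim Studio expects is in the docs):

```json
{
  "type": "object",
  "properties": {
    "sentiment": { "type": "string", "enum": ["positive", "neutral", "negative"] },
    "summary": { "type": "string" },
    "confidence": { "type": "number", "minimum": 0, "maximum": 1 }
  },
  "required": ["sentiment", "summary"]
}
```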
In my experience so far it's not just complicated, but effectively impossible. I struggle to get a single agent to reliably & consistently use tools, and adding n+1 agents is an error multiplier.
Do you mind elaborating on what differentiates Sim Studio from n8n, Flowise, RAGFlow and other open source flow based AI automation platforms?
For instance, n8n has a "memory" parameter, which is not an inherent parameter of LLMs. You can inject your agent's memories into the agent message history (or system prompt) - which is the most common scenario - but we give you control over that. We want to provide visibility, so everything that's exposed on the workflow canvas is exactly what's being executed in the background. Also, we think it's faster and more intuitive to get your workflow up and running in Sim Studio. I'd love your feedback, though! What do you think?
If I run Sim Studio with docker compose, how do I point it to the existing `ollama serve` instance running on the host?
I looked in settings (in the workspace UI) but don't see anywhere to configure the ollama endpoint.
For Ollama running on your host machine, you'll need to modify the Docker configuration, since by default the app looks at http://localhost:11434, which points to localhost inside the container, not your host. You can either add `extra_hosts` with `host.docker.internal:host-gateway` to your docker compose service and set the env var `OLLAMA_HOST=http://host.docker.internal:11434`, or just run `docker compose up --profile local-cpu -d --build --network=host` when running the compose command.
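For example, a minimal compose override along these lines should do it (the service name `simstudio` is an assumption here; use whatever your compose file calls the app service):

```yaml
# docker-compose.override.yml
# Assumes the app service is named "simstudio"; adjust to match your compose file.
services:
  simstudio:
    environment:
      # Point the app at the Ollama instance running on the Docker host
      - OLLAMA_HOST=http://host.docker.internal:11434
    extra_hosts:
      # Map host.docker.internal to the host gateway (needed on Linux)
      - "host.docker.internal:host-gateway"
```

With that file next to the main compose file, a plain `docker compose up -d --build` should pick up the override automatically.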
Will add this to the readme and add some UI so it's easily configurable! Let me know if you have any issues.
Then I went to localhost:3000/w/
Then I added an Agent block. I expected ollama (or my ollama models) to show up in the drop-down, but I only see the hosted models.
I even tried editing `sim/providers/ollama/index.ts`:
Any ideas? (BTW, I did NOT run `--profile local-cpu` because I didn't want to run ollama in a Docker container, as it's already running on the host.)
- the models from my local ollama show up in the logs, and
- the models don't show up in the model drop-down in the Agent block.
AFAICT the only impact of `--profile local-cpu` is starting a docker container with ollama running.
If you want a custom integration, you can either request it or, if you're running locally, follow the really thorough instructions in the repo's contributing guide for adding a tool/block. You can use this to extend the platform for yourself or contribute integrations back to the main repo. Hope that helps!
A quick glance at GitHub suggests that the GitHub package for the Docker image is missing; let me know if you need help with that, happy to contribute!
I’m conflicted because n8n does feel like the right level of abstraction, but the UI and dated JS runtime environment are horrible. I don’t really want to write my own memory functionality for my AI agents, but I’m wondering if it’s worth it just to have a nicer UI and a more modern JS env.
This space is REALLY struggling to graduate from Gradio-like design sensibilities.
That being said, I'm looking forward to playing with this, congrats on the launch!
I have been looking for a good solution in this increasingly crowded space, and if I could offer a word of unsolicited advice, it would be to ensure the documentation is top-notch, truthful (some competitors mention non-existent features in their docs), and includes a relatively detailed roadmap.
Good luck with Sim Studio. I may try it out in a few weeks!
Right now my solution is to build extensions that I can manually start in my browser. But using extensions to gather and export data, plus maintaining them, is a bit of a pain.