ah27182 · 2d ago
The CLI for this feels extremely buggy. I'm attempting to build the application, but the screen is flickering like crazy: https://streamable.com/d2jrvt
davidgomes · 2d ago
We use both Claude 4 and Gemini by default (for different tasks). But the idea is you can self-host this and use other models (and even BYOM - bring your own models).
csomar · 1d ago
Average experience for AI-made/related products.
ecb_penguin · 1d ago
Exactly. Non-AI projects have always been easy to build without issues. That's why we have so many build systems. We perfected it the first try and then made lots of new versions based on that perfect Makefile.
- https://github.com/appdotbuild/agent
- https://github.com/appdotbuild/platform
And we also blogged[1] about how the whole thing works. We're very excited about getting this out, but we still have a ton of improvements we'd like to make. Please let us know if you have any questions!
[1]: https://www.app.build/blog/app-build-open-source-ai-agent
We have baseline Cursor rules included in case you want to hack on this manually: https://github.com/appdotbuild/agent/tree/main/agent/trpc_ag...
Where we are tied down is the LLM provider - you will need to supply your own API keys for Anthropic / Gemini.
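For reference, here's a minimal sketch of the keys the agent expects at startup - the variable names below are assumptions (the usual Anthropic / Gemini SDK defaults), so check the repo's README for the exact ones:

    import os

    # Assumed environment variable names (the usual Anthropic / Gemini SDK defaults);
    # the agent's README is the source of truth for what it actually reads.
    anthropic_key = os.environ.get("ANTHROPIC_API_KEY")  # Claude: core codegen
    gemini_key = os.environ.get("GEMINI_API_KEY")        # Gemini: vision / validation

    if not (anthropic_key and gemini_key):
        raise SystemExit("Set both API keys before running the agent.")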
We did a couple of runs on top of Ollama + Gemma - expect support for local LLMs. Can't swear on the timeline, but one of our core contributors recently built a water-cooled rig with a bunch of 3090s, so my guess is "pretty soon".
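For the curious, those experiments boil down to pointing an OpenAI-compatible client at Ollama's local endpoint - a rough sketch (the endpoint is Ollama's default, the model tag is whatever you pulled, and none of this is wired into the agent yet):

    # Talk to a local Gemma through Ollama's OpenAI-compatible API.
    # Requires a running `ollama serve` with a Gemma model pulled, plus `pip install openai`.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # Ollama's default local endpoint
        api_key="ollama",                      # any non-empty string works locally
    )

    resp = client.chat.completions.create(
        model="gemma3",  # assumed tag; use whichever one you pulled
        messages=[{"role": "user", "content": "Generate a minimal tRPC router."}],
    )
    print(resp.choices[0].message.content)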
We have a similar wrapper for local LLMs on the roadmap.
If you use the CLI only, we run Claude 4 + Gemini on the backend, with Gemini serving most of the vision tasks (frontend validation) and Claude doing the core codegen.
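Conceptually the split looks something like the sketch below - simplified and illustrative, not the agent's actual interfaces (the Task shape and task kinds are made up for the example):

    # Simplified sketch of the model split described above: vision-style checks go to
    # Gemini, core codegen goes to Claude. Task shape and routing are illustrative only.
    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Task:
        kind: str                      # e.g. "codegen" or "frontend_validation"
        prompt: str
        screenshot: Optional[bytes] = None

    def route(
        task: Task,
        call_claude: Callable[[str], str],
        call_gemini: Callable[[str, Optional[bytes]], str],
    ) -> str:
        if task.kind == "frontend_validation":
            # Vision task: Gemini checks the rendered frontend against the prompt.
            return call_gemini(task.prompt, task.screenshot)
        # Everything else is core code generation, handled by Claude.
        return call_claude(task.prompt)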