Show HN: Container Use for Agents

72 aluzzardi 17 6/5/2025, 5:44:27 PM github.com ↗

Comments (17)

lmeyerov · 1d ago
Interesting. I have been doing a poor man's version of this with multiple git clone folders and 'docker compose -p'. Making that smoother is attractive, esp if it can be made opaque for our more junior teammates.

On one hand, I have been curious about getting multiple agents to work on the same branch, but realized I can just wait until they do that natively.

More broadly, all this feels like a dead end. I think OpenAI and GitHub are right to push toward remote development, so local setups like these won't matter. E.g., mark up a PR or branch in GitHub, come back as necessary, and do it all from my phone. If I want an IDE, it can be remote SSH.

shykes · 1d ago
Hi all, we open sourced this live on stage today at AI Engineer World Fair (great event by the way).

If you're interested, here's the keynote recording: https://www.youtube.com/live/U-fMsbY-kHY?t=3400s

steeve · 1d ago
Very cool that this runs as an MCP server, and a very cool demo.
dboreham · 1d ago
Seems odd that the LLM is so clever it can write programs to drive any API, but so dumb that it needs a new special-purpose protocol proxy to access anything behind such an API...
sharifhsn · 1d ago
It’s about resilience. LLMs are prone to hallucinations. Although they can be very intelligent, they don’t have 100% correct output unaided. The protocol helps increase the resilience of the output so that there’s more of a guarantee that the LLM will stay within the lines you’ve drawn around it.
beardedwizard · 1d ago
That's really not true. Context is one strategy to keep a model's output constrained, and tool calling allows dynamic updates to context. MCP is a convenience layer around tool calls and the systems they integrate with.
nsonha · 17h ago
> LLM is so clever it can write programs to drive any API

It is not. Name one piece of software that has an LLM generating code on the fly to call APIs. Why do people have this delusion?

TeMPOraL · 10h ago
Every runtime executing LLMs with support for tools does it, starting with the first update to the ChatGPT app/webapp that made use of the earliest version of "function calling". Even earlier, there were third-party runtimes/apps (including scripts people made for themselves) that used OpenAI models via the API, with a prompt teaching the LLM a syntax it could use to "shell out", which the runtime would scan for.

If you tell a model it can use some syntax, e.g. `:: foo(arg1, arg2) ::`, to cause the runtime to call an API, and then, based on the context of the conversation, the model outputs `:: get_current_weather("Poland/Warsaw") ::`, that is "generating code on the fly to call APIs". How `:: get_current_weather("Poland/Warsaw") ::` gets turned into a bunch of cURL invocations against e.g. the OpenWeather API is an implementation detail of the runtime.
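The scan-and-dispatch loop described above can be sketched in a few lines. This is a minimal, hypothetical runtime, not any real product's implementation: the `:: name("arg") ::` syntax is taken from the comment, while the handler registry and the `get_current_weather` stub are illustrative stand-ins.

```python
import re

# Registry of "API" handlers the runtime exposes to the model.
# The handler here is an illustrative stub, not a real weather API call.
HANDLERS = {
    "get_current_weather": lambda location: f"weather for {location}: sunny",
}

# Matches the inline call syntax described above: :: name("arg") ::
CALL_RE = re.compile(r'::\s*(\w+)\(\s*"([^"]*)"\s*\)\s*::')

def dispatch(model_output: str) -> str:
    """Scan model output for the tool-call syntax and replace each
    call with the result of invoking the registered handler."""
    def run(match: re.Match) -> str:
        name, arg = match.group(1), match.group(2)
        handler = HANDLERS.get(name)
        if handler is None:
            return f"[unknown tool: {name}]"
        return handler(arg)
    return CALL_RE.sub(run, model_output)
```

So `dispatch('It is :: get_current_weather("Poland/Warsaw") :: today.')` returns `It is weather for Poland/Warsaw: sunny today.` Built-in function calling moves this parsing into the model provider's API, but the shape is the same.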

ErikBjare · 10h ago
This is basically just function calling?
nsonha · 6h ago
No, the person I replied to argued that tool calling or MCP is unnecessary, because why not just have the LLM generate arbitrary code on the fly to do anything instead. They think there should be just one tool: eval.

Surprisingly many people say this. I usually ask them if they have seen a non-toy product that works like that, because everything is tool calling afaik.

rahimnathwani · 1d ago
I'm curious: what do containers add over and above whatever you'd get using worktrees on their own?
shykes · 1d ago
They're complementary. git worktrees isolate file edits; containers isolate execution: building, testing, running dev instances, etc.

container-use combines both forms of isolation: containers and git worktrees in a seamless system that agents can use to get work done.
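The pairing described above can be sketched roughly as: create a dedicated worktree for the agent, then run its build/test commands in a container that mounts only that worktree. This is a hypothetical sketch of the idea, not container-use's actual implementation; the paths, image, and `make test` command are illustrative, and a dry-run flag is used so the commands are only constructed, not executed.

```python
import subprocess

def agent_environment(repo: str, branch: str, image: str, dry_run: bool = True):
    """Sketch: pair a git worktree (file isolation) with a container
    (execution isolation). All paths and flags are illustrative."""
    worktree = f"../{branch}-worktree"
    cmds = [
        # File isolation: a dedicated worktree and branch per agent.
        ["git", "-C", repo, "worktree", "add", worktree, "-b", branch],
        # Execution isolation: the container sees only the worktree.
        ["docker", "run", "--rm",
         "-v", f"{worktree}:/workspace", "-w", "/workspace",
         image, "make", "test"],
    ]
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds
```

Each agent gets its own worktree plus container, so parallel agents can't clobber each other's files or processes.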

brunoqc · 1d ago
I would guess isolation/safety.
kamikaz1k · 1d ago
Page is crashing my mobile chrome.
akshayKMR · 1d ago
Freezing for me on Safari desktop too. I think the culprit is the SVG-based demo in the README.md.
shykes · 1d ago
Sorry about that! We'll fix it.
meling · 1d ago
On iPad as well.