Launch HN: Cua (YC X25) – Open-Source Docker Container for Computer-Use Agents
Check out our demo to see it in action: https://www.youtube.com/watch?v=Ee9qf-13gho, and for more examples - including Tableau, Photoshop, CAD workflows - see the demos in our repo: https://github.com/trycua/cua.
For Computer-Use AI agents to be genuinely useful, they must interact with your system's native applications. But giving full access to your host device is risky. What if the agent's process gets compromised, or the LLM hallucinates and leaks your data? And practically speaking, do you really want to give up control of your entire machine just so the agent can do its job?
The idea behind c/ua is simple: let agents operate in a mirror of the user’s system - isolated, secure, and disposable - so users can fire-and-forget complex tasks without needing to dedicate their entire system to the agent. By running in a virtualized environment, agents can carry out their work without interrupting your workflow or risking the integrity of your system.
While exploring this idea, I discovered Apple's Virtualization.framework and realized it offered fast, lightweight virtualization on Apple Silicon. This led us to build a high-performance virtualization layer and, eventually, a computer-use interface that allows agents to interact with apps just like a human would - without taking over the entire system.
As we built this, we decided to open-source the virtualization core as a standalone CLI tool called Lume (Show HN here: https://news.ycombinator.com/item?id=42908061). c/ua builds on top of Lume, providing a full framework for running agent workflows inside secure macOS or Linux VMs, so your system stays free for you to use while the agent works its magic in the background.
With Cua you can build an AI agent within a virtual environment to:

- navigate and interact with any application's interface;

- read screen content and perform keyboard/mouse actions;

- switch between applications and self-debug when needed;

- operate in a secure sandbox with controlled file access.

All of this occurs in a fully isolated environment, ensuring your host system, files, and sensitive data remain completely secure, while you continue using your device without interruption.
People are using c/ua to:

- Bypass CryptoJS-based encryption and anti-bot measures to interact with modern web apps reliably;

- Automate Tableau dashboards and export insights via Claude Desktop;

- Drive Photoshop for batch image editing by prompt;

- Modify 3D models in Fusion 360 with a CAD Copilot;

- Extract data from legacy ERP apps without brittle screen-scraping scripts.
We’re currently working on multi-VM orchestration for parallel agentic workflows, Windows and Linux VM support, and episodic and long-term memory for CUA Agents.
On the open-source side, c/ua is 100% free under the MIT license - run it locally with any LLM you like. We’re also gearing up a hosted orchestration service for teams who want zero-ops setup (early access sign-ups opening soon).
We’d love to hear from you. What desktop or legacy apps do you wish you could automate? Any thoughts, feedback, or horror stories from fragile AI automations are more than welcome!
I don’t know if this is a problem you’ve faced, but I’m curious: how do LLM tool devs handle authn/authz? Do host apps normally forward a token or something? Is there a standard commonly used? What if the tool needs some permissions to act on the user’s behalf?
I'm also working on a blog post that touches on this - particularly in the context of giving agents long-term and episodic memory. Should be out next week!
We, at NonBioS.ai [AI Software Dev], built something like this from scratch for Linux VMs, and it was a heavy lift. We could have used you guys if we had known about it. But I can see this being immediately useful in a ton of places.
We’re currently focused on macOS but planning to support Linux soon, so I’d love to hear more about your use case. Feel free to reach out at founders@trycua.com - always great to learn from others building in this space.
We covered this a fair bit on our blogs: - https://www.nonbios.ai/post/why-nonbios-chose-cloud-vms-for-... - https://www.nonbios.ai/post/private-linux-vms-for-every-nonb...
This is like an OS developer who has never heard of Linux.
First time: it opened a macOS VM and started to do stuff, but it got ahead of itself and started typing things in the wrong place. So now that VM has a Finder window open, with a recent file that's called
The second and third times, it launched the VM but failed to do anything, showing these errors: This was using the Gradio interface, with the agent loop provider as OMNI and the model as gemma3:4b-it-q4_K_M. These versions:
Stay tuned - we're also releasing support for UI-Tars-1.5 7B this week! It offers excellent speed and accuracy, and best of all, it doesn't require bounding box detection (Omni) since it's a pixel-native model.
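To illustrate the difference: a bounding-box pipeline (Omni) first detects UI elements and has the model pick one, while a pixel-native model emits screen coordinates directly. Here is a minimal sketch of the pixel-native case, assuming (purely for illustration - check the actual model card) a model that returns coordinates normalized to [0, 1):

```python
def to_pixels(norm_x: float, norm_y: float, width: int, height: int):
    """Map normalized model-emitted coordinates to concrete pixels.
    Assumes the model emits values in [0, 1); real models may use
    other ranges (e.g. 0-1000), so this is an illustrative sketch."""
    return round(norm_x * width), round(norm_y * height)

# e.g. a click target on a 1440x900 guest display
print(to_pixels(0.5, 0.25, 1440, 900))  # (720, 225)
```

The appeal of pixel-native grounding is that this conversion is the whole pipeline: no separate detector has to run before the model can act.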
Feel free to ping me on Discord (I'm francesco there) - happy to hop on a quick call to help debug: https://discord.com/invite/mVnXXpdE85
I reckon I could run this for buying fashion drops, is this a use case y'all have seen?
I wanted to look at a Docker alternative to e2b
The LLM interacts with the VM through a structured virtual computer interface (cua-computer and cua-agent). It’s a high-level abstraction that lets the agent act (e.g., “open Terminal”, “type a command”, “focus an app”) and observe (e.g., current window, file system, OCR of the screen, active processes) in a way that feels a lot more like using a real computer than parsing raw data.
So under the hood, yes, screen+metadata are used (especially with the Omni loop and visual grounding), but what the model sees is a clean interface designed for agentic workflows - closer to how a human would think about using a computer.
If you're curious, the agent loops (OpenAI, Anthropic, Omni, UI-Tars) offer different ways of reasoning and grounding actions, depending on whether you're using cloud or local models.
https://github.com/trycua/cua/tree/main/libs/agent#agent-loo...
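The act/observe split described above can be sketched as a simple loop. Note that every name below is illustrative - this is not the real cua-agent or cua-computer API, just the shape of the pattern:

```python
# Hypothetical sketch of an observe -> decide -> act loop; none of
# these names come from the actual cua SDKs.
def run_agent(computer, policy, max_steps=10):
    """Drive the sandboxed computer until the policy signals done."""
    for _ in range(max_steps):
        observation = computer.observe()   # window title, OCR text, etc.
        action = policy(observation)       # e.g. ("type", "ls"), or None
        if action is None:
            return observation             # task finished
        computer.act(action)
    return computer.observe()

# A stub computer and a trivial hand-written policy, just to show the flow;
# in practice the policy would be an LLM call.
class StubComputer:
    def __init__(self):
        self.state = {"window": "Finder", "actions": []}
    def observe(self):
        return dict(self.state)
    def act(self, action):
        self.state["actions"].append(action)
        if action == ("focus", "Terminal"):
            self.state["window"] = "Terminal"

def policy(obs):
    if obs["window"] != "Terminal":
        return ("focus", "Terminal")
    if ("type", "ls") not in obs["actions"]:
        return ("type", "ls")
    return None  # done

final = run_agent(StubComputer(), policy)
print(final["window"])  # Terminal
```

The different agent loops (OpenAI, Anthropic, Omni, UI-Tars) essentially vary what `observe` returns and how the model grounds its chosen action, but the control flow stays this shape.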
Second, as a user, you’d want to handle the case where some or all of these have been fully compromised. Surreptitiously, super-intelligently, and partially or fully autonomously, one container or many may gain access to otherwise isolated networks within homes, corporate networks, or some device in a high-security area with access to nuclear weapons, biological weapons, the electrical grid, our water supply, our food supplies, manufacturing, or even some other key vulnerability we’ve discounted, like a toy.
While providing more isolation is good, there is no amount of caution that can prevent calamity when you give everyone a Pandora’s box. It’s like giving someone a bulletproof jacket to protect them from fox tapeworm cancer or hyper-intelligent, time-traveling, timespace-manipulating super-Ebola.
That said, it’s the world we live in now, where we’re in a race to our demise. So, thanks for the bulletproof jacket.
We're designing with that in mind: think fine-grained permissioning, auditability, and minimizing surface area. But it’s still early, and a lot of it depends on how teams end up using CUAs in practice.
Thank you, and go Cua!
Agents seem exciting to us because of cases like this: have you ever tried getting an 80-year-old man to figure out how to pay his town taxes online? Or to register for some obscure permit?
We hope agents will be able to guide these users to some degree. So many users struggle with basic information and interfaces.
Picture this:
User walks up to kiosk. Wants to pay property tax bill. They have to study the kiosk/website homepage, sift through dozens or hundreds of options/menus/pages (or go through "wizards") to get to the right page for their issue. Then they have to figure out how to use that page!
These kiosks/websites usually support many functions, not just paying property tax.
So the user gets frustrated and says, "I just want to pay my property tax."
Enter the agent.
Anything that "improves access to public services" is what our customers are paying for. And we definitely see this as a viable option.
- Open-source from the start. Cua’s built under an MIT license with the goal of making Computer-Use agents easy and accessible to build. Cua's Lume CLI was our first step - we needed fast, reproducible VMs with near-native performance to even make this possible.
- Native macOS support. As far as we know, we’re the only ones offering macOS VMs out of the box, built specifically for Computer-Use workflows. And you can control them with a PyAutoGUI-compatible SDK (cua-computer) - so things like click, type, scroll just work, without needing to deal with any inter-process communication.
- Not just the computer/sandbox, but the agent too. We’re also shipping an Agent SDK (cua-agent) that helps you build and run these workflows without having to stitch everything together yourself. It works out of the box with OpenAI and Anthropic models, UI-Tars, and basically any VLM if you’re using the OmniParser agent loop.
- Not limited to Linux. The hosted version we’re working on won’t be Linux-only - we’re going to support macOS and Windows too.
In the meantime, I’ll give this a shot on macOS tonight. Congrats!
Also, let us know on Discord once you’ve tried out c/ua locally on macOS: https://discord.com/invite/mVnXXpdE85
(I am not affiliated)
Also, is the project still active? No commits for 2 months is odd for a YC startup in current batch :)
https://news.ycombinator.com/threads?id=SkylerJi
https://news.ycombinator.com/threads?id=zwenbo
https://news.ycombinator.com/threads?id=ekarabeg
https://news.ycombinator.com/threads?id=jameskuj
Here's what you guys need to understand:
(1) Not everyone spends hours on Hacker News—many casual users have no idea about the culture of this place re voting rings, booster comments, and so on.
(2) Many people enjoy congratulating their friends when they reach a major milestone.
(3) Other sites have a culture where this kind of thing is fine.
HN is different, of course, and we tell founders to stop this from happening. In fact, I basically yell it at them in the Launch HN guide: https://news.ycombinator.com/yli.html#noboost. I also yell it at them in person every chance I get—I do my best to scare them! But if you think that including something in a list of rules plus repeating it over and over in person is sufficient to get a message across, may I introduce you to the Measure Zero Effect: no matter how often you repeat something, the set of users who receive the message has measure zero (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...)
As it happens, I saw those comments in the thread (mostly the same ones you listed), marked them offtopic, and emailed the founders as soon as I could:
"Btw, did you send a message to batchmates/friends about this thread? I'm seeing a lot of booster comments in there now. This is not good for you! (See https://news.ycombinator.com/yli.html.)
Fortunately though, there are a lot of organic comments as well so I can just move the booster ones lower down and they shouldn't harm anything. Still, if you have a way to tell your friends not to do that, it would be good. Send them to https://news.ycombinator.com/yli.html as well, if you like :) - the text about that is repeated and in a bold font for a reason!"
They replied that their Discord was probably spreading word of the launch and they'd add a message asking people to stop. After that, it mostly stopped.
Seriously though, this kind of behavior should be considered a violation of the social contract.
Would love to chat sometime!
Feel free to join our Discord so we can chat more: https://discord.com/invite/mVnXXpdE85
Also built something on top of Browser Use (Nanobrowser) and Docker.
https://github.com/reindent/nanomachine
Just finished adding planning and shell capabilities.
Let's chat: @reindentai on X.