Show HN: Open-source alternative to ChatGPT Agents for browsing
We are Winston, Edward, and James, and we built Meka Agent, an open-source framework that lets vision-based LLMs execute tasks directly on a computer, just like a person would.
Backstory:
In the last few months, we've been building computer-use agents used by various teams for QA testing, but we realized that the underlying browsing frameworks aren't quite good enough yet.
As such, we've been working on a browsing agent.
We achieved 72.7% on WebArena compared to the previous state of the art set by OpenAI's new ChatGPT agent at 65.4%. You can read more about it here: https://github.com/trymeka/webarena_evals.
Today, we are open sourcing Meka, our state-of-the-art agent, to allow anyone to build their own powerful, vision-based agents from scratch. We provide the groundwork for the hard parts, so you don't have to:
* True vision-based control: Meka doesn't just read HTML. It looks at the screen, identifies interactive elements, and decides where to click, type, and scroll.
* Full computer access: It's not sandboxed in a browser. Meka operates with OS-level controls, allowing it to handle system dialogs, file uploads, and other interactions that browser-only automation tools can't.
* Extensible by design: We've made it easy to plug in your own LLMs and computer providers.
* State-of-the-art performance: 72.7% on WebArena
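The architecture described above can be sketched as a simple perceive-decide-act loop with pluggable providers. This is a minimal illustrative sketch, not Meka's actual API: the `VisionModel`, `Computer`, and `Action` names and the `run_task` helper are all hypothetical, invented here to show how a vision model provider and a computer provider could be swapped independently.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Action:
    kind: str            # "click", "type", or "done"
    x: int = 0
    y: int = 0
    text: str = ""

# Hypothetical provider interfaces -- the real framework's will differ.
class VisionModel(Protocol):
    def next_action(self, screenshot: bytes, goal: str) -> Action: ...

class Computer(Protocol):
    def screenshot(self) -> bytes: ...
    def click(self, x: int, y: int) -> None: ...
    def type_text(self, text: str) -> None: ...

def run_task(model: VisionModel, computer: Computer, goal: str,
             max_steps: int = 25) -> bool:
    """Screenshot -> model decides from pixels -> OS-level action,
    until the model says it's done or the step budget runs out."""
    for _ in range(max_steps):
        action = model.next_action(computer.screenshot(), goal)
        if action.kind == "done":
            return True
        if action.kind == "click":
            computer.click(action.x, action.y)
        elif action.kind == "type":
            computer.type_text(action.text)
    return False
```

Because both sides are behind small interfaces, swapping in a different LLM or a different machine backend (local desktop, remote VM) only means implementing one protocol.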
Our goal is to enable developers to create repeatable, robust tasks on any computer just by prompting an agent, without worrying about the implementation details.
We’d love to get your feedback on how this tool could fit into your automation workflows. Try it out and let us know what you think.
You can find the repo on GitHub and get started quickly with our hosted platform, https://app.withmeka.com/.
Thanks, Winston, Edward, and James
This seems pretty scary. Just recently an AI wiped a company database: https://fortune.com/2025/07/23/ai-coding-tool-replit-wiped-d...
Can it be installed on a conventional (personal or work) desktop?
I would definitely be happy to be wrong and to have missed something here!
Also, for the task I gave it, this was the result:
I was unable to retrieve any live fare data because both airline sites became unworkable in the remote session (xxxx selectors would not stay open; xxxxsearch could not be completed before the session ended). Below is a blank comparison table you can fill in once you gather the prices manually:
is that the current state of best-in-class computer-use agents? or is it more of a "we need to modify it until it is good for our use case"?
trying to provide helpful feedback and honest curiosity, this is awesome work
1. Proxy support for sites that block the user
2. Browser extensions support for uBlock, password managers, etc.
3. CAPTCHA solving
1. We have proxy support right now, and most traffic is already being proxied today. We might allow fine-tuning of this over time.
2. We have plans to allow this, but it's not currently available.
3. We are leveraging some anti-bot/CAPTCHA solving, but I do believe this will be a never-ending problem in some sense.
Out of curiosity, what do you think contributed to this working better than even OpenAI agent or some of the other tools out there?
I'm not that familiar with how OpenAI and other agents like Browser Use currently work, but is this, in your opinion, the most important factor?
> An infrastructure provider that exposes OS-level controls, not just a browser layer with Playwright screenshots. This is important for performance as a number of common web elements are rendered at the system level, invisible to the browser page
IMO, the combination of having an "evaluator model" at the end to verify that the intent of the task was fulfilled, and using multiple models that look over each other's work at every step, was helpful - lots of human organization analogies there, like "trust but verify" and pair programming. Memory management was also very key.
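The "trust but verify" pattern described here can be sketched in a few lines. This is a hedged illustration, not code from the repo: `propose` stands in for the worker agent, `evaluate` for the separate evaluator model, and the retry-with-feedback loop is one plausible way to wire them together.

```python
def trust_but_verify(propose, evaluate, task, max_attempts=3):
    """Run the worker, then ask a separate evaluator whether the task's
    *intent* was actually satisfied; retry with the evaluator's feedback
    if not. Returns the accepted result, or None if attempts run out.

    propose(task, feedback) -> result
    evaluate(task, result)  -> (accepted: bool, feedback: str)
    """
    feedback = None
    for _ in range(max_attempts):
        result = propose(task, feedback)
        accepted, feedback = evaluate(task, result)
        if accepted:
            return result
    return None
```

The key design point is that the evaluator judges the outcome against the original intent, not against the worker's own plan, which catches cases where the agent "finishes" without actually completing the task.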
I did YC back in S16 and was just reminiscing with a friend about how startups felt so different back then.
Examples include form filling, sales prospecting, lead enrichment, or even just keeping track of prices of important things.
Over time, we do expect the cost of tokens for these models to decrease drastically. Powerful vision models are still relatively new compared to generic text-only LLMs. There is definitely a lot of room for optimization that we expect will come quickly!
Does it use openrouter for model selection? Which models did you achieve the webarena result with? Are there any open source models which are any good for this?
Unfortunately, we didn't try it out with open source models, but you are welcome to pull the repo and try with any model that has good visual grounding! (I heard UI-TARS and the latest Qwen visual model are quite good)
1. Accuracy (does it do what we want)
2. Reliability (does it consistently do what we want)
3. Speed (does it do what we want fast)
We're mostly focused on solving 1 and maybe in some capacity 2.
The belief here is that models are going to get better. With that, smaller models will become more capable, which will result in speedups automatically.
So yes, I will concur that speed is probably not the main strength of our framework right now, but I believe we will get there with time.