Void: Open-source Cursor alternative

547 sharjeelsayed 236 5/8/2025, 4:35:34 PM github.com ↗

Comments (236)

BeetleB · 5h ago
Feedback 1: The README really needs more details. What does it do/not do? Don't assume people have used Cursor. If it is a Cursor alternative, does it support all of Cursor's features?

As a non-Cursor user who does AI programming, there is nothing there to make me want to try it out.

Feedback 2: I feel any new agentic AI tool for programming should have a comparison against Aider[1] which for me is the tool to benchmark against. Can you give a compelling reason to use this over Aider? Don't just say "VSCode" - I'm sure there are extensions for VSCode that work with Aider.

As an example of the questions I have:

- Does it have something like Aider's repomap (or better)?

- To what granularity can I limit the context?

[1] https://aider.chat/

andrewpareles · 5h ago
Thanks for the feedback. We'll definitely add a feature list. To answer your question, yes - we support Cursor's features (quick edits, agent mode, chat, inline edits, links to files/folders, fast apply, etc) using open source and openly-available models (for example, we haven't trained our own autocomplete model, but you can bring any autocomplete model or "FIM" model).

We don't have a repomap or codebase summary - right now we're relying on .voidrules and Gather/Agent mode to look around to implement large edits, and we find that works decently well, although we might add something like an auto-summary or Aider's repomap before exiting Beta.

Regarding context - you can customize the context window and the amount of token space reserved for each model. You can also use "@ to mention" to include entire files and folders, limited to the context window length. (You can also customize the model's reasoning ability, think tags to parse, tool-use format (Gemini/OpenAI/Anthropic), FIM support, etc.)

throwup238 · 5h ago
An important Cursor feature that no one else seems to have implemented yet is documentation indexing. You give it a base URL and it crawls the site and generates embeddings for API documentation, guides, tutorials, specifications, RFCs, etc. in a very language-agnostic way. An agent tool to do fuzzy or full-text search over those same docs would also be nice. Referring to those @docs in the context works really well to ground the LLMs and eliminate API hallucinations.

Back in 2023 one of the Cursor devs mentioned [1] that they first convert the HTML to markdown, then do n-gram deduplication to remove nav, headers, and footers. The state of the art for chunking has probably gotten a lot better since then, though.

[1] https://forum.cursor.com/t/how-does-docs-crawling-work/264/3
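The dedup step they describe can be sketched roughly: shingle each page into word-level n-grams, count how many pages share each shingle, and drop lines whose shingles appear on nearly every page (nav, headers, footers). A hypothetical minimal version (not Cursor's actual code; the function names and thresholds are made up):

```python
from collections import Counter

def ngrams(text, n=8):
    # Word-level n-grams ("shingles") of a chunk of text
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def strip_boilerplate(pages, n=8, threshold=0.8):
    """Drop lines whose shingles recur across most pages (likely nav/footer)."""
    # Document frequency: how many distinct pages contain each shingle
    doc_freq = Counter()
    for page in pages:
        doc_freq.update(ngrams(page, n))
    cutoff = threshold * len(pages)

    cleaned = []
    for page in pages:
        kept = []
        for line in page.splitlines():
            grams = ngrams(line, n)
            # Keep a line unless every one of its shingles is near-universal
            if not grams or any(doc_freq[g] < cutoff for g in grams):
                kept.append(line)
        cleaned.append("\n".join(kept))
    return cleaned
```

Real implementations would add things like fuzzy shingle matching and per-site tuning, but the document-frequency idea is the core of it.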

andrewpareles · 3h ago
This is a good point. We've stayed away from documentation, assuming it's more of a browser-agent task, and I agree with other commenters that this would make a good MCP integration.

I wonder if the next round of models trained on tool-use will be good at looking at documentation. That might solve the problem completely, although OSS and offline models will need another solution. We're definitely open to trying things out here, and will likely add a browser-using docs scraper before exiting Beta.

RobinL · 4h ago
I agree that on the face of it this is extremely useful. When I tried it on multiple libraries, though, it was a complete failure: it couldn't crawl fairly standard MkDocs and Sphinx sites. I guess it's better for the 'built in' ones that they've pre-indexed.
throwup238 · 3h ago
I use it mostly to index stuff like Rust docs on docs.rs and rendered mdbooks. The RAG is hit or miss but I haven’t had trouble getting things indexed.
steveharman · 4h ago
Just use the Context7 MCP ? Actually I'm assuming Void supports MCP.
Aeroi · 13m ago
Can you elaborate on how Context7 handles document indexing and web crawling? If I connect to the MCP server, will it be able to crawl websites fed to it?
andrewpareles · 3h ago
Agreed - this is one of the better solutions today.
satvikpendem · 3h ago
> The README really needs more details. What does it do/not do? Don't assume people have used Cursor. If it is a Cursor alternative, does it support all of Cursor's features?

That's all in the website, not the README, but yes a bulleted list or identical info from the site would work well.

_345 · 5h ago
Am I the only one who has had bad experiences with Aider? Each time I've tried it, I had to wrestle with and beg the AI to do what I wanted, almost always ending with me just taking over and doing it myself.

If nearly every time I use it to accomplish something it gets 40-85% correct and I have to go in and fix the remaining 15-60%, what is the point? It's as slow as writing the code by hand, if not slower, and my flow with Continue is simply better:

1. Ctrl+L a block of code

2. Ask a question or give a task

3. Read what it says, apply the change myself with Ctrl+C, and tweak the one or two little things it inevitably misunderstood about my system and its requirements

CuriouslyC · 3h ago
Aider is quite configurable; look at the leaderboard and copy one of the high-performing model/config setups. Additionally, autoload files such as the README and your project's coding guidelines.

Aider's killer features are integration of automated lint/typecheck/test and fix loops with git checkpointing. If you're not setting up these features you aren't getting the full value proposition from it.
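A minimal `.aider.conf.yml` along those lines might look like this (the option names follow Aider's config docs; the specific files and commands are placeholders for whatever your project uses):

```yaml
# Always load project conventions into the chat context
read: [README.md, CONVENTIONS.md]

# Lint/typecheck after each edit and let the model fix failures
auto-lint: true
lint-cmd: "ruff check --fix"

# Run the test suite after each change and feed failures back
auto-test: true
test-cmd: "pytest -q"

# Commit each successful change so edits can be rolled back with git
auto-commits: true
```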

attentive · 5h ago
That depends on models you use and your prompts.

Use gemini-2.5pro or sonnet3.5/3.7 or gpt-4.1

Be as specific and detailed in your prompts as you can. Include the right context.

dingnuts · 4h ago
and what do you do if you value privacy and don't want to share everything in your project with silicon valley, or you don't want to spend $8/hr to watch Claude do your hobby for you?

I'm getting really tired of AI advocates telling me that AI is as good as the hype if I just pay more (say, fellow HNer, your paycheck doesn't come from those models, does it?), or use the "right" prompt. Give some examples.

If I have to write a prompt as long as Claude's system prompt in order to get reliable edits, I'm not sure I've saved any time at all.

At least you seem to be admitting that aider is useless with local models. That's certainly my experience.

wredcoll · 4h ago
I haven't used local models. I don't have the 60+ GB of VRAM to do so.

I've tested Aider with Gemini 2.5 using prompts as basic as 'write a ts file with puppeteer to load this url, click on button identified by x, fill in input y, loop over these urls' and it performed remarkably well.

LLM performance is 100% dependent on the model you're using, so you can hardly generalize from a small model you run locally on a CPU.

mp5 · 1h ago
Local models just aren't there yet for running on a laptop without extra hardware.

We're hoping one of the big labs will distill a ~8B to ~32B parameter model that performs at SOTA on benchmarks! That would be huge for cost, and would probably make it reasonable for most people to code with agents in parallel.

BeetleB · 2h ago
> At least you seem to be admitting that aider is useless with local models. That's certainly my experience.

I don't see it as an admission. I'd wager 99% of Aider users simply haven't tried local models.

Although I would expect they would be much worse than Sonnet, etc.

> I'm getting really tired of AI advocates telling me that AI is as good as the hype if I just pay more (say, fellow HNer, your paycheck doesn't come from those models, does it?), or use the "right" prompt. Give some examples.

Examples? Aider is a great tool and much (probably most) of it is written by AI.

wkat4242 · 1h ago
This is exactly the issue I have with Copilot in Office. It doesn't learn from my style, so I have to be very specific about how I want things. At that point it's quicker to just write it myself.

Perhaps when it can dynamically learn on the go this will be better. But right now it's not terribly useful.

mp5 · 1h ago
I really wonder why dynamic learning hasn't been explored more. It would be a huge moat for the labs (everyone would have to host and continually train their own model with a major lab). It seems like it would make the AI way smarter, too.
NewsaHackO · 3h ago
Is this post just you yelling at the wind? What does this have to do with the post you replied to?
olalonde · 7h ago
It feels like everyone and their mother is building coding agents these days. Curious how this compares to others like Cline, VS Code Copilot's Agent mode, Roo Code, Kilo Code, Zed, etc. Not to mention those that are closed source, CLI based, etc. Any standout features?
andrewpareles · 6h ago
Void dev here! The biggest players in AI code today are full IDEs, not just extensions, and we think that's because they simply feel better to use by having more control over the UX.

There are certainly a lot of alternatives that are plugins(!), but our differentiation right now is being a full open source IDE and having all the features you get out of the big players (quick edits, agent mode, autocomplete, checkpoints).

Surprisingly, all of the big IDEs today (Cursor/Windsurf/Copilot) route every message you send through their own backend, and there is no open source full IDE alternative (besides Void). With Void, your connection to providers is direct, and it's a lot easier to spin up your own models/providers and host locally or use whatever provider you want.

We're planning on building Git branching for agents in the next iteration when LLMs are more independent, and controlling the full IDE experience for that will be really important. I worry plugins will struggle.

nico · 4h ago
> and there is no open source full IDE alternative (besides Void).

And Zed: https://zed.dev

Yesterday on the front page of HN:

https://news.ycombinator.com/item?id=43912844
