Feedback 1: The README really needs more details. What does it do/not do? Don't assume people have used Cursor. If it is a Cursor alternative, does it support all of Cursor's features?
As a non-Cursor user who does AI programming, there is nothing there to make me want to try it out.
Feedback 2: I feel any new agentic AI tool for programming should have a comparison against Aider[1] which for me is the tool to benchmark against. Can you give a compelling reason to use this over Aider? Don't just say "VSCode" - I'm sure there are extensions for VSCode that work with Aider.
As an example of the questions I have:
- Does it have something like Aider's repomap (or better)?
Thanks for the feedback. We'll definitely add a feature list. To answer your question, yes - we support Cursor's features (quick edits, agent mode, chat, inline edits, links to files/folders, fast apply, etc) using open source and openly-available models (for example, we haven't trained our own autocomplete model, but you can bring any autocomplete model or "FIM" model).
We don't have a repomap or codebase summary - right now we're relying on .voidrules and Gather/Agent mode to look around to implement large edits, and we find that works decently well, although we might add something like an auto-summary or Aider's repomap before exiting Beta.
Regarding context - you can customize the context window and reserved amount of token space for each model. You can also use "@ to mention" to include entire files and folders, limited to the context window length. (you can also customize the model's reasoning ability, think tags to parse, tool use format (gemini/openai/anthropic), FIM support, etc).
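The bookkeeping this implies is simple to sketch. Here's a minimal illustration (not Void's actual code; `count_tokens` is a stand-in for whatever tokenizer the model uses) of fitting @-mentioned files into a budget:

```python
def fit_to_context(files, context_window, reserved_output, count_tokens):
    """Greedily include @-mentioned files until the remaining token budget
    (context window minus the space reserved for the model's output) is spent.

    files: list of (name, text) pairs, in mention order.
    """
    budget = context_window - reserved_output
    included = []
    for name, text in files:
        cost = count_tokens(text)
        if cost <= budget:
            included.append((name, text))
            budget -= cost
    return included
```

A real implementation would likely truncate rather than skip oversized files, but the budget arithmetic is the same.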
throwup238 · 63d ago
An important Cursor feature that no one else seems to have implemented yet is documentation indexing. You give it a base URL and it crawls and generates embeddings for API documentation, guides, tutorials, specifications, RFCs, etc in a very language agnostic way. That plus an agent tool to do fuzzy or full text search on those same docs would also be nice. Referring to those @docs in the context works really well to ground the LLMs and eliminate API hallucinations
Back in 2023 one of the cursor devs mentioned [1] that they first convert the HTML to markdown then do n-gram deduplication to remove nav, headers, and footers. The state of the art for chunking has probably gotten a lot better though.
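The comment only describes the approach at a high level, but the core idea is easy to sketch (this is an illustration, not Cursor's implementation): shingle each converted markdown page into word n-grams, then drop lines whose n-grams recur across many pages, since nav, headers, and footers repeat while real content doesn't:

```python
from collections import Counter

def ngrams(line, n=5):
    """Word-level n-gram shingles for one line of markdown."""
    words = line.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def strip_boilerplate(pages, n=5, threshold=0.5):
    """Drop lines whose shingles appear on more than `threshold` of all pages.

    pages: list of pages, each a list of markdown lines.
    """
    gram_doc_freq = Counter()
    for lines in pages:
        seen = set()
        for line in lines:
            seen |= ngrams(line, n)
        gram_doc_freq.update(seen)  # count pages, not occurrences
    cutoff = threshold * len(pages)
    cleaned = []
    for lines in pages:
        kept = []
        for line in lines:
            grams = ngrams(line, n)
            # Keep short lines (no shingles) and lines whose shingles
            # are mostly unique to this page.
            if not grams or sum(gram_doc_freq[g] > cutoff for g in grams) < len(grams) / 2:
                kept.append(line)
        cleaned.append(kept)
    return cleaned
```

Chunking and embedding would then run on the cleaned pages.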
The continue.dev plugin for Visual Studio Code provides documentation indexing. You provide a base URL and a tag. The plugin then scrapes the documentation and builds a RAG index. This allows you to use the documentation as context within chat. For example, you could ask "@godotengine what is a sprite?"
conartist6 · 63d ago
So this is why everything is going behind Anubis then?
GreenWatermelon · 63d ago
Nah, Anubis combats systematic scraping of the web by data scrapers, not actual user agents.
conartist6 · 63d ago
A scraper in this case is the agent of the user. Doesn't make it not a scraper that can and will get trapped.
lgiordano_notte · 63d ago
Cursor’s doc indexing is actually one of the few AI coding features that feels like it saves time. Embedding full doc sites, deduping nav/header junk, then letting me reference @docs inline actually improves context grounding instead of guessing APIs.
steveharman · 63d ago
Just use the Context7 MCP? Actually, I'm assuming Void supports MCP.
gesman · 63d ago
Context7 is missing lots of info from the repos it indexes, and it's getting bloated with similar-sounding repos, which is becoming confusing for LLMs.
Aeroi · 63d ago
Can you elaborate on how Context7 handles document indexing and web crawling? If I connect to the MCP server, will it be able to crawl websites fed to it?
andrewpareles · 63d ago
Agreed - this is one of the better solutions today.
andrewpareles · 63d ago
This is a good point. We've stayed away from documentation, assuming that it's more of a browser-agent task, and I agree with other commenters that this would make a good MCP integration.
I wonder if the next round of models trained on tool-use will be good at looking at documentation. That might solve the problem completely, although OSS and offline models will need another solution. We're definitely open to trying things out here, and will likely add a browser-using docs scraper before exiting Beta.
RobinL · 63d ago
I agree that on the face of it this is extremely useful. I tried using it for multiple libraries though, and it was a complete failure: it failed to crawl fairly standard MkDocs and Sphinx sites. I guess it's better for the 'built in' ones that they've pre-indexed.
throwup238 · 63d ago
I use it mostly to index stuff like Rust docs on docs.rs and rendered mdbooks. The RAG is hit or miss but I haven’t had trouble getting things indexed.
I've used both Cursor and Aider, but I've always wanted something simple that I have full control over, if only to understand how they work. So I made a minimal coding agent (with edit capability) that is fully functional using only seven tools: read, write, diff, browse, command, ask, and think.
I can just disable `ask` tool for example to have it easily go full autonomous on certain tasks.
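A dispatch loop for that kind of minimal agent might look like this sketch (the tool names come from the comment above, but the implementations and call shape here are made up; only three of the seven tools are shown):

```python
import subprocess
from pathlib import Path

def tool_read(path):
    """read: return a file's contents for the model."""
    return Path(path).read_text()

def tool_write(path, content):
    """write: replace a file's contents with model output."""
    Path(path).write_text(content)
    return f"wrote {len(content)} bytes to {path}"

def tool_command(cmd):
    """command: run a shell command and return combined output."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

TOOLS = {"read": tool_read, "write": tool_write, "command": tool_command}

def dispatch(call, enabled):
    """Route one model-emitted tool call, e.g. {"tool": "read", "args": [...]}.
    Tools not in `enabled` (like 'ask', for autonomous runs) are rejected."""
    name = call["tool"]
    if name not in enabled:
        return f"error: tool '{name}' is disabled"
    return TOOLS[name](*call.get("args", []))
```

The "full autonomous" switch is then just `dispatch(call, enabled=ALL_TOOLS - {"ask"})`.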
Will check this out. I like to have a bit more control over my stack if possible.
satvikpendem · 63d ago
> The README really needs more details. What does it do/not do? Don't assume people have used Cursor. If it is a Cursor alternative, does it support all of Cursor's features?
That's all on the website, not in the README, but yes, a bulleted list or the same info from the site would work well.
_345 · 63d ago
Am I the only one who has had bad experiences with Aider? Each time I've tried it, I had to wrestle with and beg the AI to do what I wanted, almost always ending with me just taking over and doing it myself.
If nearly every time I use it to accomplish something it gets it 40-85% correct and I have to go in and fix the remaining 15-60%, what is the point? It's as slow as hand-writing code then, if not slower, and my flow with Continue is simply better:
1. CTRL L block of code
2. Ask a question or give a task
3. I read what it says and then apply the change myself by CTRL C and then tweaking the one or two little things it inevitably misunderstood about my system and its requirements
CuriouslyC · 63d ago
Aider is quite configurable, you need to look at the leaderboard and copy one of the high performing model/config setups. Additionally, you need to autoload files such as the readme and coding guidelines for your project.
Aider's killer features are integration of automated lint/typecheck/test and fix loops with git checkpointing. If you're not setting up these features you aren't getting the full value proposition from it.
larusso · 63d ago
Never used the tool. But it seems both Aider and Cursor are not at their strongest out of the box? I've read similar things about Cursor needing custom configuration so it picks up coding guidelines, etc. Is there some kind of agreed, documented best-practice standard, or just trial-and-error practices from users sharing them?
CuriouslyC · 63d ago
Aider's leaderboard is a baseline "best practice" for model/edit format/mode selection. Beyond that, it's basically whatever you think are best practices in engineering and code style, which you should capture in documents that can serve double duty both for AI and for human contributors. Given that a lot of this stuff is highly contentious it's really up to you as to pick and choose what you prefer.
attentive · 63d ago
That depends on models you use and your prompts.
Use Gemini 2.5 Pro, Sonnet 3.5/3.7, or GPT-4.1.
Be as specific and detailed in your prompts as you can. Include the right context.
dingnuts · 63d ago
and what do you do if you value privacy and don't want to share everything in your project with silicon valley, or you don't want to spend $8/hr to watch Claude do your hobby for you?
I'm getting really tired of AI advocates telling me that AI is as good as the hype if I just pay more (say, fellow HNer, your paycheck doesn't come from those models, does it?), or use the "right" prompt. Give some examples.
If I have to write a prompt as long as Claude's system prompt in order to get reliable edits, I'm not sure I've saved any time at all.
At least you seem to be admitting that aider is useless with local models. That's certainly my experience.
wredcoll · 63d ago
I haven't used local models. I don't have the 60+gb of vram to do so.
I've tested Aider with Gemini 2.5 with prompts as basic as 'write a ts file with puppeteer to load this url, click on button identified by x, fill in input y, loop over these urls' and it performed remarkably well.
LLM performance is 100% dependent on the model you're using, so you can hardly generalize from a small model you run locally on a CPU.
mp5 · 63d ago
Local models just aren't there yet in terms of being able to host locally on your laptop without extra hardware.
We're hoping that one of the big labs will distill an ~8B to ~32B parameter model that performs at SOTA on benchmarks! This would be huge for cost, and would probably make it reasonable for most people to run coding agents in parallel.
This is exactly the issue I have with copilot in office. It doesn't learn from my style so I have to be very specific how I want things. At that point it's quicker to just write it myself.
Perhaps when it can dynamically learn on the go this will be better. But right now it's not terribly useful.
mp5 · 63d ago
I really wonder why dynamic learning hasn't been explored more. It would be a huge moat for the labs (everyone would have to host and dynamically train their own model with a major lab). Seems like it would make the AI way smarter too.
BeetleB · 63d ago
> At least you seem to be admitting that aider is useless with local models. That's certainly my experience.
I don't see it as an admission. I'd wager 99% of Aider users simply haven't tried local models.
Although I would expect they would be much worse than Sonnet, etc.
> I'm getting really tired of AI advocates telling me that AI is as good as the hype if I just pay more (say, fellow HNer, your paycheck doesn't come from those models, does it?), or use the "right" prompt. Give some examples.
Examples? Aider is a great tool and much (probably most) of it is written by AI.
NewsaHackO · 63d ago
Is this post just you yelling at the wind? What does this have to do with the post you replied to?
olalonde · 64d ago
It feels like everyone and their mother is building coding agents these days. Curious how this compares to others like Cline, VS Code Copilot's Agent mode, Roo Code, Kilo Code, Zed, etc. Not to mention those that are closed source, CLI based, etc. Any standout features?
andrewpareles · 63d ago
Void dev here! The biggest players in AI code today are full IDEs, not just extensions, and we think that's because they simply feel better to use by having more control over the UX.
There are certainly a lot of alternatives that are plugins(!), but our differentiation right now is being a full open source IDE and having all the features you get out of the big players (quick edits, agent mode, autocomplete, checkpoints).
Surprisingly, all of the big IDEs today (Cursor/Windsurf/Copilot) route every message you send through their backend, and there is no open source full-IDE alternative (besides Void). With Void, your connection to providers is direct, and it's a lot easier to spin up your own models/providers and host locally or use whatever provider you want.
We're planning on building Git branching for agents in the next iteration when LLMs are more independent, and controlling the full IDE experience for that will be really important. I worry plugins will struggle.
nico · 63d ago
> and there is no open source full IDE alternative (besides Void).
And Emacs, also mentioned in that thread (by me, but still).
swyx · 63d ago
this joke could not have been more perfectly set up if it were staged. thanks for the guffaw.
andrewpareles · 63d ago
I should have been more careful with my wording - I was talking about major VS Code-based IDEs as alternatives. Zed is very impressive, and we've been following them since before Void's launch!
_kidlike · 63d ago
Maybe I live in a bubble, but it's surprising to me that nobody mentions Jetbrains in all these discussions. Which in my professional working experience are the only IDEs anyone uses :shrug:
TingPing · 63d ago
I’m not sure I’ve met a Jetbrains user in projects I’ve worked on. It’s a paid product so just has a small userbase.
Their tools are wildly popular in many spaces. It isn't for everyone though. It's totally believable in your circle no one uses their tools, but it isn't niche.
whstl · 63d ago
Funny enough, I know a lot of people who work at JetBrains, but only a few end-users.
Their user base is completely different. And we’re both in a bubble, I reckon. IntelliJ people also only know a few VSCode users!
weberer · 63d ago
Pycharm is extremely popular in the data science world. The Community Edition is free and has 99% of the features most people need. Even when developing with Cursor, I find myself going back to Pycharm just to use the debugger, which I greatly prefer to the debugger used in these VS Code forks.
TingPing · 63d ago
Lately I’ve only tried CLion, which has no free version. Personally it didn’t function as well as VS Code for C++.
> The biggest players in AI code today are full IDEs, not just extensions,
Claude Code (neither IDE nor extension) is rapidly gaining ground, its biggest current limitation being cost, which is likely to get resolved sooner rather than later (Gemini Code, anyone?). You're right about right now, but with the pace at which things are moving, the trends are honestly more relevant than the status quo.
andrewpareles · 63d ago
Just want to share our thinking on terminal-based tools!
We think in 1-2 years people will write code at a systems level, not a function level, and it's not clear to us that you can do that with text. Text-based tools like Claude Code work in our text-based-code systems today, but I think describing algorithms to a computer in the future might involve more diagrams, and terminal will not be ideal. That's our reasoning against building a tool in the terminal, but it clearly works well today, and is the simplest way for the labs to train/run terminal tool-use agents.
bcrosby95 · 63d ago
Diagrams are great at providing a simplified view of things but they suck ass when it comes to providing details.
There's a reason why fully creating systems from them died 20 years ago - and it wasn't just because the code gen failed. Finding a bug in your spec when it's a mess of arrows and connections can be nigh impossible.
Go image search "complex unreal blueprint".
andrewpareles · 63d ago
This is completely true, and it's a really common objection.
I don't imagine people will want to fully visualize codebases in a giant unified diagram, but I find it hard to imagine that we won't have digests and overviews that at least stray from plaintext in some way.
I think there are a lot of unexplored ways of using AI to create an intelligent overview of a repo and its data structures, or a React project and its components and state, etc.
ayewo · 63d ago
> I think there are a lot of unexplored ways of using AI to create an intelligent overview of a repo and its data structures, or a React project and its components and state, etc.
Sounds exactly like what DeepWiki is doing from the Devin AI Agent guys: https://deepwiki.com
abnercoimbre · 63d ago
Terminals aren't too far away from evolving [0] beyond UTF-8 characters. Therefore I suspect IDEs and CLIs will continue their turf wars as always.
> hard to imagine that we won't have digests and overviews
FYI, this is currently a dead link - wasn't sure if typo, so googled and confirmed, looks like you're down at the moment. Hopefully not a painful fix on a Sunday.
conartist6 · 63d ago
You have lost all connection to reality.
Hey by the way I hear all communication between people is going to shift to pictograms soon. You know -- emoji and hieroglyphs. Text just isn't ideal, you know
km144 · 63d ago
Every system can be translated to text though. If there is one thing LLMs have essentially always been good at, it is processing written language.
opdahl · 63d ago
> Claude Code (neither IDE nor extension) is rapidly gaining ground
What makes you say that? From what I’m observing it doesn’t seem to be talked much about at all.
jjani · 63d ago
Spending too much time on HN and other spaces (including offline) where people talk about what they're doing. Making LLM-based things has also been my job since pretty much the original release of GPT3.5 which kicked off the whole industry, so I have an excuse.
The big giveaway is that everyone who has tried it agrees that it's clearly the best agentic coding tool out there. The very few who change back to whatever they were using before (whether IDE fork, extension or terminal agent), do so because of the costs.
Relevant post on the front page right now: A flat pricing subscription for Claude Code [0]. The comment section supports the above as well.
I personally tried it and found it way more confusing than using Cursor with Claude 3.7 Sonnet. The CLI interface seems to lend itself more to «vibe coding», where you never actually work with or look at the actual code. That is why I think Cursor and IDEs are more popular than CLI-only tools.
olalonde · 63d ago
Claude Code's announcement earned 2k+ points on HN when it launched (7th most popular HN submission this year).
Together with 3.7 Sonnet. And the claim was that it is rapidly gaining ground, not that it sparked initial interest. I still don’t see much proof of adoption. This is actually the first I’ve heard about anyone actually actively using it since its launch.
linsomniac · 63d ago
>This is actually the first I’ve heard about anyone actually actively using it
I've been reaching for Claude Code first for the last couple of weeks. They offered me a $40 credit maybe 6 weeks ago, after I tried it but didn't really use it; since then I've been using it a lot. I've spent that credit and another $30, and it's REALLY good. One thing I like about Claude Code is that you can "/init" and it will create a "CLAUDE.md" that saves off its understanding of the code, and then you can modify it to give it some working knowledge.
I've also tried Codex with OpenAI and o4-mini, and it works very well too, though I have had it crash on me which claude has not.
I did try Codex with Gemini 2.5 Pro Preview, but it is really weird. It seems to not be able to do any editing, it'll say "You need to make these edits to this file (and describe high level fixes) and then come back when you're done and I'll tell you the edits to do to this other file." So that integration doesn't seem to be complete. I had high hopes because of the reviews of the new 2.5 Pro.
I also tried some Claude-like use in the AI panel in Zed yesterday and made a lot of good progress; it seemed to work pretty well, but then at some point it zeroed out a couple of files. I think I might have reached a token limit: it was saying "110K out of 200K", but then something else said "120K", and I wonder if that confused it. With Codex you can compact the history; I didn't see that in Zed. Then at some point my Zed switched from editing to needing me to accept every change. I used nearly the entire trial Zed allowance yesterday asking it to implement a Galaga-inspired game, with varying success.
fuzzythinker · 63d ago
I don't know Claude Code, so if it's "neither IDE nor extension", what is it?
girvo · 63d ago
It's a CLI tool
bglusman · 63d ago
The versioning and git branching sounds really neat, I think! Can you say more about that? Curious if you've looked at/are considering using Jujutsu/JJ[0] in addition or instead of git for this, I've played with it some, but been considering trying it more with new AI coding stuff, it feels like it could be a more natural fit than actually creating explicit commits for every change, while still tracking them all? Just a thought!
Interesting, thanks for sharing! We planned on spinning up a new Git branch and shallow Git clone (or possibly a worktree/something more optimized) for each agent, and also adding a small auto-merge-with-LLM flow, although something more granular like this might feel better. If we don't use a versioning tool like JJ at first (we may just use Git for simplicity), we will certainly consider it later on, or might end up building our own.
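For what it's worth, the per-agent isolation part of that plan can be sketched with `git worktree` (the branch naming, paths, and naive merge policy here are made up for illustration):

```python
import subprocess

def run_git(repo, *args):
    """Run a git command in `repo`, raising on failure."""
    return subprocess.run(
        ["git", "-C", repo, *args],
        check=True, capture_output=True, text=True,
    ).stdout

def spawn_agent_worktree(repo, agent_id, base="HEAD"):
    """Give one agent its own branch + worktree so parallel edits don't collide."""
    branch = f"agent/{agent_id}"
    path = f"{repo}-wt-{agent_id}"
    run_git(repo, "worktree", "add", "-b", branch, path, base)
    return path  # the agent works only inside this directory

def merge_agent_branch(repo, agent_id):
    """Naive merge back; a real flow might hand conflicts to an LLM to resolve."""
    run_git(repo, "merge", "--no-edit", f"agent/{agent_id}")
```

Worktrees share one object store, so this is cheaper than a full clone per agent.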
danenania · 63d ago
If you're open to something CLI-based, my project Plandex[1] offers git-based branching (and granular versioning) for AI coding. It also has a sandbox (also built on git) that keeps cumulative changes separate from project files until they're ready to apply.
Isn't continue.dev also open source and not using 'their backend' when sending stuff? I haven't used it in a while, but I know it had support for Llama, local models for tab completion, etc.
andrewpareles · 63d ago
Continue is doing great work, but they're an extension (plugin)!
miroljub · 63d ago
What’s wrong with a plugin? I don’t see the benefit of an IDE over a plugin.
mp5 · 63d ago
The extensions API lets you control the sidebar, but you basically don't have control over anything in the editor. We wouldn't have been able to build our inline edit feature, or our navigation UI if we were an extension.
LiveTheDream · 63d ago
Continue.dev is an extension and it does inline edits just fine in VS Code and IntelliJ.
mp5 · 63d ago
Big fan of Continue btw! There's a small difference in how we handle inline edits - if you've used inline edits in Cursor/Windsurf/Void you'll notice that a box appears above the text you are selecting, and you can type inside of it. This isn't possible with VS Code extensions alone (you _have_ to type into the sidebar).
esperent · 63d ago
Is inline edits the same as diff edits? In that case I think Cline and Roo can do it as well.
mp5 · 63d ago
If I understand your question correctly - Cline and Roo both display diffs by using built-in VS Code components, while Cursor/Windsurf/Void have built their own custom UI to display diffs. Very small detail, and just a matter of preference.
esperent · 63d ago
It's about whether the tool can edit just a few lines of the file, or whether it needs to stream the whole file every time - in effect, editing the whole file even though the end result may differ by just a few lines.
I think editing just a part of the file is what Roo calls diff editing, and I'm asking if this is what the person above means by line edits.
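For context, the "diff editing" idea - having the model emit a small search/replace block rather than streaming the whole file back - can be sketched in a few lines (an illustration of the technique, not Roo's or Aider's actual implementation):

```python
def apply_search_replace(source, search, replace):
    """Apply one model-emitted SEARCH/REPLACE edit.

    The model only streams the snippet to find and its replacement,
    so output tokens scale with the edit, not the file size. Requiring
    exactly one match guards against ambiguous or stale edits.
    """
    if source.count(search) != 1:
        raise ValueError("search block must match exactly once")
    return source.replace(search, replace, 1)
```

Real tools add fuzzier matching (whitespace-insensitive, line-anchored) so slightly stale snippets still apply.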
vunderba · 63d ago
I think it'd be worthwhile to call out in a FAQ/comparison table specifically how something like an "AI powered IDE" such as Cursor/Void differs from just using an IDE + a full-featured agentic plugin (VS Codium + Cline).
monkpit · 63d ago
I agree. Having used Cline, I'm not sure what advantages this would offer, but I would like to know (beyond things like "it's got an open source IDE" - Cline has that too, precisely because I can use it in my open source IDE).
lnxg33k1 · 63d ago
>> The biggest players in AI code today are full IDEs, not just extensions
Are you sure? I have some expertise with my IDE, some other extension which solve problems for me, a wide range of them, I've learnt shortcuts, troubleshooting, where and who ask for help, but now you're telling me that I am better off leaving all that behind, and it's better for me? ;o
SegmentTree · 63d ago
I think it's worth mentioning that the Theia IDE is a fully open source VS Code-compatible IDE (not a fork of VS Code) that's actively adding AI features with a focus on transparency and hackability.
andrewpareles · 63d ago
We considered Theia, and even building our own IDE, but obviously VSCode is just the most popular. Theia might be a good play if Microsoft gets more aggressive about VSCode forks, although it's not clear to us that people will be spending their time writing code in 1-2 years. Chances are definitely not 0 that we end up moving away from VSCode as things progress.
conartist6 · 63d ago
It's the most popular because the tech is decades old. You're all rushing to copy obsolete technology. Now we have 10 copies of an obsolete technology.
We used to know better
conartist6 · 63d ago
I mean I guess I should thank the 10 teams who forked VSCode for proving beyond all reasonable doubt that VSCode is architecturally obsolete. I was already trying to make that argument, but the 10 forks do it so much better.
huevosabio · 63d ago
So this is closer to Zed than Cursor/Windsurf/Continue, right?
edit: ahh just saw that it is also a fork of VS Code, so it is indeed OSS Cursor
andrewpareles · 63d ago
Yep, Void is a VSCode fork, but we're definitely not wed to VSCode! Building our own IDE/browser-port is not out of the picture. We'll have to see where the next iteration of tool-use agents takes us, but we strongly feel writing typescript/rust/react is not the endgame when describing algorithms to a computer, and a text-based editor might not be ideal in 10 years, or even 2.
jeron · 63d ago
OpenAI chose to acquire Windsurf for $3B instead of building something like Void - a very curious decision. Awesome project, will be following this closely.
jadbox · 64d ago
My 2c: I rarely need agent mode. As an older engineer, I usually know exactly what needs to be done and have no problem describing to the LLM what to do to solve what I'm aiming for. Agent mode seems more for novice developers who are unsure how tasks need to be broken down and what strategy solves them.
fellowmartian · 64d ago
I’m a senior engineer and I find myself using agents all the time. Working on huge codebases or experimenting with different languages and technologies makes everybody “novice”.
azinman2 · 63d ago
Can you give some examples of how you use it? I'm used to asking for very specific things, but less so full on agent mode.
hadlock · 63d ago
Agent mode seems to be better at realizing all the places in the code base that need to be updated, particularly if the feature touches 5+ files, whereas plain edit mode starts to struggle with features that touch 2-3 files. "Every 60 ticks, predict which items should get cached based on user direction of travel, then fetch, transform and cache them. When new items need to be drawn, check the cache first and draw from there, otherwise fetch and transform on demand." This touches the core engine, user movement, file operations, graphics, etc., and agent mode seems to have no problem with it at all.
maronato · 63d ago
Personally, I’ve found agents to be a great “multitasking” tool.
Let’s say I make a few changes in the code that will require changes or additions to tests. I give the agent the test command I want it to run and the files to read, and let it cycle between running tests and modifying files.
While it’s doing that, I open Slack or do whatever else I need to do.
After a few minutes, I come back, review the agent’s changes, fix anything that needs to be fixed or give it further instructions, and move to the next thing.
andrewpareles · 63d ago
I think a good use of time while waiting for an LLM is to ask another LLM for something. Until then Slack will do :)
ivape · 64d ago
"Novice mode" has always been true for the newcomer. When I was new, I really was at the mercy of:
1) Authority (whatever a prominent evangelist developer was peddling)
2) The book I was following as a guide
3) The tutorial I was following as a guide
4) The consensus of the crowd at the time
5) Whatever worked (SO, brute force, whatever library, whatever magic)
It took a long ass time before I got to throw all five of those things out (throw the map away). At the moment, #5 on that list is AI (whatever works). It's a Rite of Passage, and because so much of being a developer involves autodidacticism, this is a valley you must go through. Even so, it's pretty cool when you make it out of that valley (you can do whatever you want without any anxiety about is this the right path?). You are never fearful or lost in the valley(s) for the most part afterward.
boredtofears · 64d ago
If you use AI agents for all your work as a novice do you ever make it out of the valley?
ivape · 64d ago
Yeah.
Most people have not deployed enough critical code that was mostly written with AI. It's when that stuff breaks, and they have to debug it with AI, that's when they'll have to contend with the blood, sweat, and tears. That's when the person will swear that they'll never use AI again. The thing is, we can never not use AI ever again. So, this is the trial by fire where many will figure out the depth of the valley and emerge from it with all the lessons. I can only speculate, but I suspect the lessons will be something along the lines of "some things should use less AI than others".
I think it's a cool journey, best of luck to the AI-first crowd, you will learn lessons the rest of us are not brave enough to embark on. I already have a basket of lessons, so I travel differently through the valley (hint: My ship still has a helm).
juliushuijnk · 63d ago
> that's when they'll have to contend with the blood, sweat, and tears.
Or, most software will become immutable. You'll just replace it.
You'll throw away the mess, and let a newer LLM build a better version in a couple of days. You ask the LLM to write down the specs for the newer version based on the old code.
If that is true, then the crowd that is not brave enough to do AI-first will just be left behind.
spookie · 63d ago
Slaves to the system.
Do we really want that? To be beholden to the hands of a few?
Hell, you can't even purchase a GPU with high enough VRAM these days for an acceptable amount of money. In part because of geopolitics. I wonder how many more restrictions are to come.
There's a lot of FOMO going around, those honing their programming skills will continue to thrive, and that's a guarantee. Don't become a vassal when you can be a king.
aerhardt · 63d ago
The scenario you paint sounds very implausible for non-trivial applications, but even if it ends up becoming the development paradigm, I doubt anyone will be "left behind" as such. People will have time to re-skill. The question is whether some will ever want to or would prefer to take up woodworking.
skeledrew · 63d ago
Whether one takes up woodworking or not depends on whether development was primarily for profit, with little to no intrinsic enjoyment of the role.
weq · 63d ago
Coding and woodworking are similar from my perspective; they are both creative arts. I like coding in different languages; woodworking is simply a physical manifestation of the same impulse. A world where you only need agents is not a world where nerds will be employed. Traditional nerds can't stand out from the crowd anymore.
This is peak AI; it only goes downhill from here in terms of quality, and the AI-first flows will be replaceable. The offshored teams we have suffered with for years (google-first programmers) will be the first to be replaced. And developers will continue, working around the edges. The difference will be that startups won't be able to use technology hoarding to stifle competition, unless they make themselves immune from the AI vacuums.
I can appreciate the comments further up about how AI can help unravel the mysteries of a legacy codebase. Being able to ask questions about code in quick succession will make us feel more confident. AI is lossy, hard to direct, yet always very confident. We have 10k-line functions in our legacy code that nest and nest. How confident are you letting AI refactor that code without oversight and shipping it to a customer? So far I'm not; maybe I don't know the best models and tools to use and how to apply them, but if even one of those logic branches gets hallucinated, I'm in for a very bumpy ride. Watching non-technical people at my org get frustrated and stuck with it in a loop is a lot more common than the successes, which seems to be the opposite of the experienced engineers who use it as a tool, not a savior. But every situation is different.
If you think your company can be a differentiator in the market because it has access to the same AI tools as every other company? Well, we'll see about that. I believe there has to be more.
I'm an experienced engineer of 30+ years. Technology comes and goes. AI is just another tool in the chest. I use it primarily because I don't have to deal with ads. I also use it to be an electrical engineer, designing circuits in areas I'm not familiar with. I can see the novice side of the coin very simply: it feels like you have superpowers because you just don't know enough about the subject to be aware of anything else. It has sped up the learning cycle considerably because of its conversational nature. After a few years of projects, I know how to ask better questions to get better results.
conartist6 · 61d ago
Just... WAT.
That's like saying "I'll just burn down my house because I can replace it. Anyone who repairs their house will be left behind."
It's true, you can replace it, so I can't put my finger on what has been stopping people from burning their houses down instead of, say, spring cleaning
JoshuaDavid · 63d ago
> Or, most software will become immutable. You'll just replace it.
The joys of dependency hell combined with rapid deprecation of the underlying tooling.
ivape · 63d ago
If that is true, then the crowd that is not brave enough to do AI-first will just be left behind.
Not even, devoured might be more apt. If I'm manually moving through this valley and a flood is coming through, those who are sticking automatic propellers and navigation systems on their ship are going to be the ones that can surf the flood and come out of the valley. We don't know, this is literally the adventure. I'm personally on the side of a hybrid approach. It's fun as hell, best of luck to everyone.
It's in poor taste to bring up this example, but I'll mention it as softly as I can. There were some people that went down looking for the Titanic recently. It could have worked, you know what I mean? These are risks we all take.
Quoting the Admiral from the StarCraft: Brood War cinematic (I'm a learned person):
"... You must go into this with both eyes open"
dbalatero · 63d ago
> It's in poor taste to bring up this example, but I'll mention it as softly as I can. There were some people that went down looking for the Titanic recently. It could have worked, you know what I mean?
Not sure if you drew the right conclusion from that one.
conartist6 · 61d ago
What leads you to believe non-AI coding is over?
I'm not using AI and I'm still an incredibly high velocity engineer because I own my codebase. I've written each line ten times over, like a player who has become highly skilled at one particular game.
andrekandre · 61d ago
> that's when they'll have to contend with the blood, sweat, and tears. That's when the person will swear that they'll never use AI again.
sounds like me after my first hangover...
dakiol · 64d ago
Same here. It’s fine for me to use the ChatGPT web interface and switch between it and my IDE/editor.
Context switching is not the bottleneck. I actually like to go away from the IDE/keyboard to think through problems in a different environment (so a voice version of chatgpt that I can talk to via my smartwatch while walking and see some answers either on my smartglasses or via sound would be ideal… I don’t really need more screen (monitor) time)
divan · 63d ago
I use ChatGPT's voice mode almost exclusively through Ray-Ban Meta glasses (especially when outside/cycling).
johnisgood · 63d ago
> Same here. It’s fine for me to use the ChatGPT web interface and switch between it and my IDE/editor.
I do this all the time, and I am completely fine with it. Sure, I need to pay more attention, but I think it does more good than harm.
CuriouslyC · 63d ago
Sorry to say but this workflow just isn't great unless you're working on something where AI models aren't that helpful -- obscure language/libraries/etc where they hallucinate or write non-idiomatic solutions if left to run much by themselves. In that case, you want the strong human review loop that comes from crafting the context via copy paste and inspecting the results before copying back.
For well trodden paths that AI is good at, you're wasting a ton of time copying context and lint/typechecking/test results and copying back edits. You could probably double your productivity by having an agentic coding workflow in the background doing stuff that's easy while you manually focus on harder problems, or just managing two agents that are working on easy code.
jjani · 63d ago
You would like to, or you're actually doing that right now?
DoesntMatter22 · 63d ago
Man that workflow is brutal
pdntspa · 63d ago
20-year engineer here; all my life I've dreamed of having something I could ask general questions about a codebase and get back a cohesive, useful answer. And that future is now.
econ · 63d ago
I would put it more generically: I love that one can now ask as many dumb questions as it takes about anything.
With humans there is a point where even the most patient teacher has to move on to other things. Learning is best when one is curious about something, and curiosity is most often specific. (When it's generic, one can just read the manual.)
volkk · 64d ago
kind of ironic, because the novices are the ones that absolutely should be doing things by hand to get better at the craft.
skeledrew · 63d ago
The day will come when only a few need to be "better at the craft". Just as with Assembly and even C.
spookie · 63d ago
C is still in the top 5 most used languages by any metric.
volkk · 63d ago
no one writes assembly, but good engineers still understand how things work under the hood
SkyBelow · 64d ago
One benefit is when working on multiple code bases where the size of the code base outstrips the time spent working in it, so there is still a gap of knowledge. Agents don't guarantee the correctness of a search the way an old search field does, but they offer a much more expressive way to do searches and queries in a code base.
Now that I think about it, I might have only ever used agents for searching and answering questions, not for producing code. Perhaps I don't trust the AI to build a good enough structure, so while I'll use AI, it is one file at a time sort of interaction where I see every change it makes. I should probably try out one of these agent based models for a throw away project just to get more anecdotes to base my opinion on.
victorbjorklund · 64d ago
I don't agree. I use agents all the time. I say exactly what the agent should do, but often changes need to be made in more than one place in the code base. Could I prompt it for every change, one at a time per file? Sure, but it is faster to prompt an agent for it.
SkyPuncher · 63d ago
I couldn't use AI code without agentic mode.
At its most basic, agentic mode is necessary for building the proper context. While I might know the solution at a high level, I need the agent to explore the code base to find things I reference and bring them into context before writing code.
Agentic mode is also super helpful for getting LLMs from "99%" correct code to "100%" correct code. I'll ask them to do something to verify their work. This is often when the agent realizes it hallucinated a method name or used a directionally correct, but wrong column name.
mulmen · 63d ago
I think this perspective is better characterized as “solo” and not “old”. I don’t think your age is relevant here.
Senior engineers are not necessarily old but have the experience to delegate manageable tasks to peers including juniors and collaborate with stakeholders. They’re part of an organization by definition. They’re senior to their peers in terms of experience or knowledge, not age.
Agentic AIs slot into this pattern easily.
If you are a solo dev you may not find this valuable. If you are a senior then you probably do.
neutronicus · 63d ago
My main interest in agent mode is deputizing the C++ compiler to tell the LLM about everything it has hallucinated.
rcarmo · 63d ago
Considering that Agent Mode saves me a lot of hassle doing refactoring ("move the handler to file X and review imports", "check where this constant is used and replace it with <another> for <these cases>", etc.), I'd say you are missing the point...
I actually flip things - I do the breakdown myself in a SPEC.md file and then have the agent work through it. Markdown checklists work great, and the agent can usually update/expand them as it goes.
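To make that concrete for readers who haven't tried it, a SPEC.md of the kind described might look like this (file names and tasks are invented for illustration):

```markdown
# SPEC: extract auth handlers

- [ ] Move `loginHandler` and `logoutHandler` from `routes.ts` into `handlers/auth.ts`
- [ ] Update imports in `routes.ts` and anywhere else the handlers are referenced
- [ ] Replace the `LEGACY_AUTH` constant with `AUTH_V2` everywhere except the migration script
- [ ] Run the test suite and fix any failures introduced by the move
```

The agent checks items off as it completes them and can append follow-up tasks it discovers along the way.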
jmvldz · 64d ago
Coding agents are the future and it's anyone's game right now.
The main reason I think there is such a proliferation is it's not clear what the best interface to coding agents will be. Is it in Slack and Linear? Is it on the CLI? Is it a web interface with a code editor? Is it VS Code or Zed?
Just like everyone has their favored IDE, in a few years time, I think everyone will have their favored interaction pattern for coding agents.
Product managers might like Devin because they don't need to set up an environment. Software engineers might still prefer Cursor because they want to edit the code and run tests on their own.
Cursor has a concept of a shadow workspace and I think we're going to see this across all coding agents. You kick off an async task in whatever IDE you use and it presents the results of the agent in an easy to review way a bit later.
As for Void, I think being open source is valuable on its own. My understanding is Microsoft could enforce license restrictions at some point down the road to make Cursor difficult to use with certain extensions.
The weird thing is, the biggest reason I don't use Cursor much is that they just distribute this AppImage, which doesn't install or add itself to the Ubuntu app menu. It just sits there, and when I try to run it I get:
The setuid sandbox is not running as root. Common causes:
* An unprivileged process using ptrace on it, like a debugger.
* A parent process set prctl(PR_SET_NO_NEW_PRIVS, ...)
Failed to move to new namespace: PID namespaces supported, Network namespace supported, but failed: errno = Operation not permitted
I have to go Googling, then realize I have to run it with
Often I'm lazy to do all of this and just use the Claude / ChatGPT web version and paste code back and forth to VS code.
The effort required to start Cursor is the reason I don't use it much. VS Code is an actual, bona fide installed app with an icon that sits on my screen; I just click it to launch it. So much easier, even if I have to write code manually.
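For what it's worth, an AppImage can be registered in the app menu by hand with a desktop entry. A sketch (paths are placeholders, and `--no-sandbox` is the usual, security-weakening workaround for that Chromium setuid sandbox error):

```ini
# ~/.local/share/applications/cursor.desktop
[Desktop Entry]
Type=Application
Name=Cursor
Exec=/home/me/Apps/cursor.AppImage --no-sandbox %U
Icon=/home/me/Apps/cursor-icon.png
Categories=Development;IDE;
```

After saving the file, most desktops pick it up on the next menu refresh (or after `update-desktop-database`).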
NoahKAndrews · 63d ago
AppImageLauncher improves the AppImage experience a lot, including making sure they get added to the menu. I'm not sure if it makes launching without the sandbox easier or not.
kristopolous · 64d ago
Void has been around since last year.
I'm working on an agnostic unified framework to make contexts transferrable between these tools.
This will permit zero friction, zero interruption transitions without any code modification.
Should have something to show by next week.
Hit me up if you're interested in working on this problem - I'm tired of cowboying my projects.
nowittyusername · 63d ago
I've tried many AI coding IDEs; the best ones, like RooCode, are good simply because they don't gimp your context. Modern models are already more than capable for many coding tasks - you just need to leave them alone and let them use their full context window, and all will go well. If you hear about a bad experience with any of these IDEs, most of the time it's because the tool is limiting use of context or mismanaging related functions.
cryptoz · 64d ago
Yup - honestly the space is still so open right now, everyone is trying haha. It's gotten quite hard to keep track of different models and their strengths/weaknesses, much less the IDE and editor space! I have no idea which of these AI editors would suit me best, and a new one comes out like every day.
I'm still in vim with copilot and know I'm missing out. Anyway I'm also adding to the problem as I've got my own too (don't we all?!), at https://codeplusequalsai.com. Coded in vim 'cause I can't decide on an editor!
jmvldz · 64d ago
This is cool! I like that you have a visual element for the agent working on multiple tickets at a time.
cryptoz · 63d ago
Thanks! And yeah, it really is satisfying watching the tickets move from column to column "all on their own" as the work gets done!
ramoz · 63d ago
You forgot the best one to compare against - Claude Code.
Back in 2023 one of the cursor devs mentioned [1] that they first convert the HTML to markdown then do n-gram deduplication to remove nav, headers, and footers. The state of the art for chunking has probably gotten a lot better though.
[1] https://forum.cursor.com/t/how-does-docs-crawling-work/264/3
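A rough sketch of that kind of cross-page n-gram deduplication (purely illustrative; this is not Cursor's actual implementation): n-grams that recur on most crawled pages are almost certainly nav, header, or footer chrome, so they can be detected by document frequency and stripped before chunking.

```python
from collections import Counter

def ngrams(tokens, n=8):
    """All length-n token windows in a page, as a set."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def boilerplate_ngrams(pages, n=8, min_frac=0.6):
    """N-grams appearing on at least min_frac of pages are treated as chrome."""
    doc_freq = Counter()
    for tokens in pages:
        doc_freq.update(ngrams(tokens, n))  # count each n-gram once per page
    cutoff = max(2, int(min_frac * len(pages)))
    return {g for g, c in doc_freq.items() if c >= cutoff}

def strip_boilerplate(tokens, junk, n=8):
    """Drop every token covered by an occurrence of a boilerplate n-gram."""
    drop = [False] * len(tokens)
    for i in range(len(tokens) - n + 1):
        if tuple(tokens[i:i + n]) in junk:
            drop[i:i + n] = [True] * n
    return [t for t, d in zip(tokens, drop) if not d]
```

Each page's converted markdown would be tokenized first; whatever survives the strip goes on to chunking and embedding.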
I wonder if the next round of models trained on tool-use will be good at looking at documentation. That might solve the problem completely, although OSS and offline models will need another solution. We're definitely open to trying things out here, and will likely add a browser-using docs scraper before exiting Beta.
https://docs.cursor.com/context/@-symbols/@-docs
I can just disable the `ask` tool, for example, to have it easily go fully autonomous on certain tasks.
Have a look at https://github.com/aperoc/toolkami to see if it might be useful for you.
That's all on the website, not in the README, but yes, a bulleted list or identical info from the site would work well.
If nearly every time I use it to accomplish something it gets it 40-85% correct and I have to go in to fix the remaining 15-60%, what is the point? It's as slow as hand-writing code then, if not slower, and my flow with Continue is simply better:
1. Ctrl+L a block of code
2. Ask a question or give a task
3. Read what it says, then apply the change myself with Ctrl+C, tweaking the one or two little things it inevitably misunderstood about my system and its requirements
Aider's killer features are integration of automated lint/typecheck/test and fix loops with git checkpointing. If you're not setting up these features you aren't getting the full value proposition from it.
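The lint/test-and-fix loop described above can be sketched generically. Here `ask_model`, `apply_edit`, and `run_checks` are hypothetical stand-ins for the LLM call, the file patcher, and the lint/typecheck/test command; this is an illustration of the pattern, not Aider's code:

```python
def edit_test_loop(ask_model, apply_edit, run_checks, task, max_rounds=5):
    """Propose an edit, run the checks, feed any failure output back to the
    model, and stop once the checks pass. A real tool like Aider would also
    git-commit a checkpoint after each applied edit so a bad round is easy
    to roll back."""
    prompt = task
    for _ in range(max_rounds):
        edit = ask_model(prompt)
        apply_edit(edit)
        ok, output = run_checks()  # e.g. run pytest/eslint via subprocess
        if ok:
            return True
        prompt = f"{task}\n\nThe checks failed with:\n{output}\nFix the code."
    return False
```

The point of the loop is exactly the value proposition above: the model sees its own failures and self-corrects without the human shuttling error output back and forth.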
Use gemini-2.5pro or sonnet3.5/3.7 or gpt-4.1
Be as specific and detailed in your prompts as you can. Include the right context.
I'm getting really tired of AI advocates telling me that AI is as good as the hype if I just pay more (say, fellow HNer, your paycheck doesn't come from those models, does it?), or use the "right" prompt. Give some examples.
If I have to write a prompt as long as Claude's system prompt in order to get reliable edits, I'm not sure I've saved any time at all.
At least you seem to be admitting that aider is useless with local models. That's certainly my experience.
I've tested Aider with Gemini 2.5 with prompts as basic as 'write a TS file with Puppeteer to load this URL, click on the button identified by x, fill in input y, loop over these URLs' and it performed remarkably well.
LLM performance is 100% dependent on the model you're using, so you can hardly generalize from a small model you run locally on a CPU.
We're hoping that one of the big labs will distill an ~8B to ~32B parameter model that performs at SOTA on benchmarks! This would be huge for cost, and would probably make it reasonable for most people to run coding agents in parallel.
There's no such thing as a "right prompt". It's all snake oil. https://dmitriid.com/prompting-llms-is-not-engineering
Perhaps when it can dynamically learn on the go this will be better. But right now it's not terribly useful.
I don't see it as an admission. I'd wager 99% of Aider users simply haven't tried local models.
Although I would expect they would be much worse than Sonnet, etc.
> I'm getting really tired of AI advocates telling me that AI is as good as the hype if I just pay more (say, fellow HNer, your paycheck doesn't come from those models, does it?), or use the "right" prompt. Give some examples.
Examples? Aider is a great tool and much (probably most) of it is written by AI.
There are certainly a lot of alternatives that are plugins(!), but our differentiation right now is being a full open source IDE and having all the features you get out of the big players (quick edits, agent mode, autocomplete, checkpoints).
Surprisingly, all of the big IDEs today (Cursor/Windsurf/Copilot) route every message you send through their own backend, and there is no fully open source IDE alternative (besides Void). Your connection to providers is direct with Void, and it's a lot easier to spin up your own models/providers and host locally, or use whatever provider you want.
We're planning on building Git branching for agents in the next iteration when LLMs are more independent, and controlling the full IDE experience for that will be really important. I worry plugins will struggle.
And Zed: https://zed.dev
Yesterday on the front page of HN:
https://news.ycombinator.com/item?id=43912844
Their tools are wildly popular in many spaces. It isn't for everyone though. It's totally believable in your circle no one uses their tools, but it isn't niche.
Their user base is completely different. And we're both in a bubble, I reckon. IntelliJ people also only know a few VSCode users!
Claude Code (neither IDE nor extension) is rapidly gaining ground, its biggest current limitation being cost, which is likely to get resolved sooner rather than later (Gemini Code, anyone?). You're right about the right now, but with the pace at which things are moving, the trends are honestly more relevant than the status quo.
We think in 1-2 years people will write code at a systems level, not a function level, and it's not clear to us that you can do that with text. Text-based tools like Claude Code work in our text-based-code systems today, but describing algorithms to a computer in the future might involve more diagrams, and the terminal will not be ideal for that. That's our reasoning against building a tool in the terminal, but it clearly works well today, and is the simplest way for the labs to train/run terminal tool-use agents.
There's a reason why fully generating systems from diagrams died 20 years ago, and it wasn't just because the code gen failed. Finding a bug in your spec when it's a mess of arrows and connections can be nigh impossible.
Go image search "complex unreal blueprint".
I don't imagine people will want to fully visualize codebases in a giant unified diagram, but I find it hard to imagine that we won't have digests and overviews that at least stray from plaintext in some way.
I think there are a lot of unexplored ways of using AI to create an intelligent overview of a repo and its data structures, or a React project and its components and state, etc.
Sounds exactly like what DeepWiki is doing from the Devin AI Agent guys: https://deepwiki.com
> hard to imagine that we won't have digests and overviews
100% agreed here.
Disclosure: I'm the author of the project below.
[0] https://terminal.click
Hey by the way I hear all communication between people is going to shift to pictograms soon. You know -- emoji and hieroglyphs. Text just isn't ideal, you know
What makes you say that? From what I’m observing it doesn’t seem to be talked much about at all.
The big giveaway is that everyone who has tried it agrees that it's clearly the best agentic coding tool out there. The very few who change back to whatever they were using before (whether IDE fork, extension or terminal agent), do so because of the costs.
Relevant post on the front page right now: A flat pricing subscription for Claude Code [0]. The comment section supports the above as well.
[0] https://news.ycombinator.com/item?id=43931409
https://hn.algolia.com/?q=claude+code
I've been reaching for Claude Code first for the last couple of weeks. They offered me a $40 credit after I tried it and didn't really use it, maybe 6 weeks ago, but since then I've been using it a lot. I've spent that credit and another $30, and it's REALLY good. One thing I like about Claude Code is you can "/init" and it will create a "CLAUDE.md" that saves off its understanding of the code, and then you can modify it to give it some working knowledge.
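For non-users: CLAUDE.md is a plain Markdown file at the repo root that the agent re-reads at the start of each session. A hand-edited one might look like this (project details are invented for illustration):

```markdown
# CLAUDE.md

## Commands
- Build: `npm run build`
- Test: `npm test -- --watch=false`

## Conventions
- TypeScript strict mode; no `any`
- API handlers live in `src/handlers/`; put tests alongside as `*.test.ts`

## Gotchas
- Files under `test/data/` are generated; edit the generator, not the files
```

Anything recorded here survives between sessions, which is what gives the agent "working knowledge" instead of rediscovering the project each time.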
I've also tried Codex with OpenAI and o4-mini, and it works very well too, though I have had it crash on me, which Claude has not.
I did try Codex with Gemini 2.5 Pro Preview, but it is really weird. It seems to not be able to do any editing, it'll say "You need to make these edits to this file (and describe high level fixes) and then come back when you're done and I'll tell you the edits to do to this other file." So that integration doesn't seem to be complete. I had high hopes because of the reviews of the new 2.5 Pro.
I also tried some Claude-like use in the AI panel in Zed yesterday and made a lot of good progress; it seemed to work pretty well, but then at some point it zeroed out a couple of files. I think I might have reached a token limit: it was saying "110K out of 200K" but then something else said "120K", and I wonder if that confused it. With Codex you can compact the history; I didn't see that in Zed. Then at some point my Zed switched from editing to needing me to accept every change. I used nearly the entire Zed trial allowance yesterday asking it to implement a Galaga-inspired game, with varying success.
[0]https://github.com/jj-vcs/jj
1 - https://github.com/plandex-ai/plandex
I think editing just a part of the file is what Roo calls diff editing, and I'm asking if this is what the person above means by line edits.
Are you sure? I have some expertise with my IDE and a wide range of other extensions which solve problems for me; I've learnt shortcuts, troubleshooting, and where and who to ask for help, but now you're telling me that I am better off leaving all that behind, and it's better for me? ;o
We used to know better
edit: ahh just saw that it is also a fork of VS Code, so it is indeed OSS Cursor
Let’s say I make a few changes in the code that will require changes or additions to tests. I give the agent the test command I want it to run and the files to read, and let it cycle between running tests and modifying files.
While it’s doing that, I open Slack or do whatever else I need to do.
After a few minutes, I come back, review the agent’s changes, fix anything that needs to be fixed or give it further instructions, and move to the next thing.
1) Authority (whatever a prominent evangelist developer was peddling)
2) The book I was following as a guide
3) The tutorial I was following as a guide
4) The consensus of the crowd at the time
5) Whatever worked (SO, brute force, whatever library, whatever magic)
It took a long ass time before I got to throw all five of those things out (throw the map away). At the moment, #5 on that list is AI (whatever works). It's a Rite of Passage, and because so much of being a developer involves autodidacticism, this is a valley you must go through. Even so, it's pretty cool when you make it out of that valley (you can do whatever you want without any anxiety about is this the right path?). You are never fearful or lost in the valley(s) for the most part afterward.
Most people have not deployed enough critical code that was mostly written with AI. It's when that stuff breaks, and they have to debug it with AI, that's when they'll have to contend with the blood, sweat, and tears. That's when the person will swear that they'll never use AI again. The thing is, we can never not use AI ever again. So, this is the trial by fire where many will figure out the depth of the valley and emerge from it with all the lessons. I can only speculate, but I suspect the lessons will be something along the lines of "some things should use less AI than others".
I think it's a cool journey, best of luck to the AI-first crowd, you will learn lessons the rest of us are not brave enough to embark on. I already have a basket of lessons, so I travel differently through the valley (hint: My ship still has a helm).
Or, most software will become immutable. You'll just replace it.
You'll throw away the mess, and let a newer LLM build a better version in a couple of days. You ask the LLM to write down the specs for the newer version based on the old code.
Do we really want that? To be beholden to the hands of a few?
Hell, you can't even purchase a GPU with high enough VRAM these days for an acceptable amount of money, in part because of geopolitics. I wonder how many more restrictions are to come.
There's a lot of FOMO going around, those honing their programming skills will continue to thrive, and that's a guarantee. Don't become a vassal when you can be a king.
This is peak AI, it only goes downhill from here in terms of quality, the AI first flows will be replaceable. Those offshored teams that we have suffered with for years now will be primarly replaced (google first programmers). And developers will continue, working around the edges. The differences will be that startups wont be able to use technology horading to stifle competition, unless they make themselves immune from the ai vacumes.
I can appreciate the comments further up around how AI can help unravel the mysterys of a legacy codebase. Being able to ask questions about code in quick succession will mean that we will feel more confident. AI is lossy, hard to direct, yet very confident always. We have 10k line functions in our legacy code that nests and nest. How confident are you to let ai go and refactor this code without oversight and ship to a customer? Thus far im not, maybe i dont know the best model and tools to use and how to apply them, but even if one of those logic branches gets hallucinated im in for a very bumpy ride. Watching non-technical people at my org get frusted and stuck with it in a loop is a lot more common then the successes which seem to be the opposite of the experienced engineers who use it as a tool, not a savour. But every situation is different.
If you think you company can be a differentiator in the market because it has access to the same AI tool as every other company? We'll well see about that. I believe there has to be more.
Im an experienced engineer of 30+yrs. Technology comes and goes. AI is just a another tool in the chest. I use it primarily because i dont have to deal with ads. I also use it to be a electrical engineer, designing circuts in areas i am not familiar with. I can see very simply the noivce side of the coin, it feels like you have super powers because you just dont know enough about the subject to be aware of anything else. Its sped up the learning cycle considerably beacause of the conversational nature. After a few years of projects, i know how to ask better questions to get better results.
That's like saying "I'll just burn down my house because I can replace it. Anyone who repairs their house will be left behind."
It's true, you can replace it, so I can't put my finger on what has been stopping people from burning their houses down instead of, say, spring cleaning
The joys of dependency hell combined with rapid deprecation of the underlying tooling.
Not even, devoured might be more apt. If I'm manually moving through this valley and a flood is coming through, those who are sticking automatic propellers and navigation systems on their ship are going to be the ones that can surf the flood and come out of the valley. We don't know, this is literally the adventure. I'm personally on the side of a hybrid approach. It's fun as hell, best of luck to everyone.
It's in poor taste to bring up this example, but I'll mention it as softly as I can. There were some people that went down looking for the Titanic recently. It could have worked, you know what I mean? These are risks we all take.
Quoting the Admiral from Starcraft Broodwars cinematic (I'm a learned person):
"... You must go into this with both eyes open"
Not sure if you drew the right conclusion from that one.
I'm not using AI and I'm still an incredibly high velocity engineer because I own my codebase. I've written each line ten times over, like a player who has become highly skilled at one particular game.
Context switching is not the bottleneck. I actually like to go away from the IDE/keyboard to think through problems in a different environment (so a voice version of chatgpt that I can talk to via my smartwatch while walking and see some answers either on my smartglasses or via sound would be ideal… I don’t really need more screen (monitor) time)
I do this all the time, and I am completely fine with it. Sure, I need to pay more attention, but I think it does more good than harm.
For well trodden paths that AI is good at, you're wasting a ton of time copying context and lint/typechecking/test results and copying back edits. You could probably double your productivity by having an agentic coding workflow in the background doing stuff that's easy while you manually focus on harder problems, or just managing two agents that are working on easy code.
With humans there is this point where even the most patient teacher has to move on to do other things. Learning is best when one is curios about something and curiosity is more often specific. (When generic one can just read the manual)
Now that I think about it, I might have only ever used agents for searching and answering questions, not for producing code. Perhaps I don't trust the AI to build a good enough structure, so while I'll use AI, it is one file at a time sort of interaction where I see every change it makes. I should probably try out one of these agent based models for a throw away project just to get more anecdotes to base my opinion on.
At it's most basic, agentic mode is necessary for building the proper context. While I might know the solution at the high level, I need the agent to explore the code base to find things I reference and bring them into context before writing code.
Agentic mode is also super helpful for getting LLMs from "99%" correct code to "100%" correct code. I'll ask them to do something to verify their work. This is often when the agent realizes it hallucinated a method name or used a directionally correct, but wrong column name.
Senior engineers are not necessarily old but have the experience to delegate manageable tasks to peers including juniors and collaborate with stakeholders. They’re part of an organization by definition. They’re senior to their peers in terms of experience or knowledge, not age.
Agentic AIs slot into this pattern easily.
If you are a solo dev you may not find this valuable. If you are a senior then you probably do.
I actually flip things - I do the breakdown myself in a SPEC.md file and then have the agent work through it. Markdown checklists work great, and the agent can usually update/expand them as it goes.
The main reason I think there is such a proliferation is it's not clear what the best interface to coding agents will be. Is it in Slack and Linear? Is it on the CLI? Is it a web interface with a code editor? Is it VS Code or Zed?
Just like everyone has their favored IDE, in a few years time, I think everyone will have their favored interaction pattern for coding agents.
Product managers might like Devin because they don't need to set up an environment. Software engineers might still prefer Cursor because they want to edit the code and run tests on their own.
Cursor has a concept of a shadow workspace, and I think we're going to see this across all coding agents. You kick off an async task in whatever IDE you use, and it presents the agent's results in an easy-to-review way a bit later.
As for Void, I think being open source is valuable on its own. My understanding is that Microsoft could enforce license restrictions at some point down the road to make Cursor difficult to use with certain extensions.
Another YC backed open source VS Code is Continue: https://www.continue.dev/
(Caveat: I am a YC founder building in this space: https://www.engines.dev/)
For real. I think it's because code editors seem to be in that perfect intersection of:
- A tool for programmers. Programmers like building for programmers.
- A tool for productivity. Companies will pay for productivity.
- A tool that's clearly AI-able. VCs will invest in AI tools.
- A tool with plenty of open source lift. The clear, common (and extreme?) example of this being forking VSCode.
Add to that the recent purchase of VSCode-fork [1] Windsurf for $3 billion [2] and I suspect we will see many more of these.
[1]: https://windsurf.com/blog/why-we-built-windsurf#:~:text=This...
[2]: https://community.openai.com/t/openai-is-acquiring-windsurf-...
The effort required to start Cursor is the reason I don't use it much. VS Code is an actual, bona fide installed app with an icon that sits on my screen; I just click it to launch it. So much easier, even if I have to write code manually.
I'm working on an agnostic, unified framework to make contexts transferable between these tools.
This will permit zero-friction, zero-interruption transitions without any code modification.
Should have something to show by next week.
Hit me up if you're interested in working on this problem - I'm tired of cowboying my projects.
I'm still in Vim with Copilot and know I'm missing out. Anyway, I'm also adding to the problem as I've got my own tool too (don't we all?!), at https://codeplusequalsai.com. Coded in Vim 'cause I can't decide on an editor!