I find it strange how most of these terminal-based AI coding agents have ended up with these attempts at making text UIs flashy. Tons of whitespace, line art, widgets, ascii art and gradients, and now apparently animations. And then what you don't get is the full suite of expected keybindings, tab completion, consistent scrollback, or even flicker-free text rendering. (At least this one seems to not be written with node.js, so I guess there's some chance that the terminal output is optimized to minimize large redraws?).
So they just don't tend to work at all like you'd expect a REPL or a CLI to work, despite having exactly the same interaction model of executing command prompts. But they also don't feel at all like fullscreen Unix TUIs normally would, whether we're talking editors or reader programs (mail readers, news readers, browsers).
Is this just all the new entrants copying Claude Code, or did this trend get started even earlier than that? (This is one of the reasons Aider is still my go-to; it looks and feels the way a REPL is supposed to.)
citizenpaul · 18h ago
Well this specific tool is by a company called Charm that has the mission statement of making the command line glamorous. They have been around for several years prior to the LLM craze.
They make a CLI framework for golang along with tools for it.
reactordev · 17h ago
That’s right. Charm has been making pretty TUIs since the group's beginning. Bubble Tea and VHS are amazing. Everyone should try them.
citizenpaul · 9h ago
Also their website features a public ssh CLI interface that is a cool demo of what the framework can do. Go hug it to death HN!
stavros · 16h ago
Oooh yes, VHS is amazing.
thinkxl · 16h ago
I came here to say this. They have been building very cool CLI projects, and those projects end up composing new, bigger projects. This is their latest one (that I know of), which uses most of the other projects they created before.
They didn't make this project flashy specifically (it's like Claude Code, which I don't think is flashy at all), but every single one of their other projects is like this.
breuleux · 17h ago
What bothers me is that what I like about terminals is the scrolling workflow of writing commands and seeing my actions and outputs from various sources and programs sequentially in a log. So what I want is a rich full-HTML multi-program scrolling workflow. Instead, people are combining the worst of both worlds. What are they doing? Give me superior UI in a superior rendering system, not inferior UI in an inferior rendering system, god damn it.
dedpool · 16h ago
You can run it inside the terminal while still using your code editor with full support for diffs and undo. It works seamlessly with IDEs like Cursor AI or VSCode, allowing multiple agents to work on different tasks at the same time, such as backend and frontend. The agents can also read each other’s rules, including Cursor rules and Crush Markdown files.
mccoyb · 16h ago
Say more about what you mean by "multi-program scrolling workflow", if you don't mind
teraflop · 16h ago
I think what they mean by "multi-program scrolling workflow" is just what you ordinarily get in a terminal window. You run command A, and below it you see its output. You run command B, and below it you see its output. And you can easily use the scroll bar to look at earlier commands and their output.
The parent commenter seems to be asking for the same thing, but with rich text/media.
breuleux · 16h ago
I mean a session that isn't limited to interaction with a single program. For example, if I have an agent in this paradigm, I want to easily interleave prompting with simple commands like `ls`, all in the same history. That's not what I'm getting with apps like claude code or crush. They just take over the entire terminal, and crush even quits without leaving a trace.
matsimitsu · 6h ago
You are probably looking for Mods (https://github.com/charmbracelet/mods), their other CLI AI Agent tool that's not a TUI, but a commandline interface to AI Agents.
mccoyb · 15h ago
Thanks, thinking about this - I think there are several ways to serve this concern: one could be having an agent as a server, where "prompting" is just a command `prompt ...` sent to the agent as a server. Then you never leave your terminal.
Presumably, you'd want this, but with some sort of interface _like_ these TUI systems?
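To make that concrete, here is a rough sketch of the idea. Everything in it is hypothetical: the `prompt` command name, the localhost port, and the request shape are made up for illustration, not taken from any existing agent.

```go
// prompt.go - hypothetical "prompt" command that talks to an agent running as a local
// server, so prompting stays an ordinary command in your normal shell scrollback.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func main() {
	// Everything after the command name is the prompt text.
	prompt := strings.Join(os.Args[1:], " ")

	// Assumed agent endpoint; a real agent server would define its own API.
	body, _ := json.Marshal(map[string]string{"prompt": prompt})
	resp, err := http.Post("http://localhost:8765/prompt", "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Fprintln(os.Stderr, "is the agent server running?", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	// Print the reply to stdout so it interleaves with ls, grep, etc. in the same history.
	io.Copy(os.Stdout, resp.Body)
}
```

Usage would be something like `prompt refactor the parser to return errors instead of panicking`, freely interleaved with ordinary shell commands.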
wredcoll · 6h ago
> Thanks, thinking about this - I think there are several ways to serve this concern: one could be having an agent as a server, where "prompting" is just a command `prompt ...` sent to the agent as a server. Then you never leave your terminal.
> Presumably, you'd want this, but with some sort of interface _like_ these TUI systems?
The whole advantage of a shell, and why people get so obsessed with it, is that it provides a standard interface to every program. TUIs break that by substituting their own UI.
They also provide a standard output... limitation, really, which means you can suddenly do pipes.
The future should look to capture both of those benefits.
Everyone will get super mad, but html is honest to god a great way to do this. Imagine an `ls` that output html to make a pretty display of your local files, but if you wanted to send the list to another program you pass some kind of css selector `ls | .main-list.file-names.name` or something. It's not exactly beautiful but you get the idea.
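As a rough illustration of that idea (not any existing tool; the `htmlpick` name, the imagined `ls --html`, and the goquery dependency are just one way to sketch it), a filter that reads HTML on stdin and prints the text of whatever matches a CSS selector could look like this:

```go
// htmlpick.go - hypothetical filter: `ls --html | htmlpick ".file-names .name"`
// Reads HTML from stdin, prints the text content of nodes matching the CSS selector.
package main

import (
	"fmt"
	"os"
	"strings"

	"github.com/PuerkitoBio/goquery"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: htmlpick <css-selector>")
		os.Exit(2)
	}

	doc, err := goquery.NewDocumentFromReader(os.Stdin)
	if err != nil {
		fmt.Fprintln(os.Stderr, "parse error:", err)
		os.Exit(1)
	}

	// For each matching node, emit its trimmed text, one per line, so the result
	// is plain text again and can keep flowing through normal pipes.
	doc.Find(os.Args[1]).Each(func(_ int, s *goquery.Selection) {
		fmt.Println(strings.TrimSpace(s.Text()))
	})
}
```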
breuleux · 14h ago
> one could be having an agent as a server, where "prompting" is just a command `prompt ...` sent to the agent as a server
Yeah, I prefer something like that (which should be strictly easier to make than these elaborate TUIs). I could also be interested in a shell that supports it natively, e.g. with a syntax such as `-- this is my prompt! it's not necessary to quote stuff`. I'd also enjoy an HTML shell that can output markdown, tables and interactive plots rather than trying to bend a matrix of characters to do these things (as long as it's snappy and reliable). I haven't looked very hard, so these things might already exist.
spenczar5 · 9h ago
Yes, I think that sounds like Warp.
Personally, I find warp a bit disorienting, but it is indeed a more integrated approach.
__jonas · 18h ago
Nah, this type of text UI has been charmbracelet's whole thing since before AI agents appeared.
I quite like them, unlike traditional TUIs, the keybindings are actually intuitively discoverable, which is nice.
Arubis · 18h ago
I suspect some of it is that these interfaces are rapidly gaining adherents (and developers!) whose preference and accustomed usage is more graphically IDE-ish editors. Not everyone lives their life in a terminal window, even amongst devs. (Or so I’m told; I still have days where I don’t bother starting X/Wayland)
That package _does_ just run it in vterm, and it just adds automatic code links (the @path/to/file syntax), and a few more conveniences.
umanwizard · 17h ago
Good to know! Perhaps I’ll install it.
segmondy · 18h ago
You are showing how young you are. ;-) As someone who grew up in the BBS era, I'm glad this is back; colorful text-based stuff brings back joyful memories. I'm building my own terminal CLI coding agent. My plan is to make it this colorful with ascii art when I'm done; I'm focused on features now.
smokel · 18h ago
They are easier to make than full-fledged user interfaces, so you get to see more of them.
drdaeman · 12h ago
Well, they all seem to have issues with multi-line selection, as those get all messed up with decorations, panes and whatever noise is there. To the best of my awareness, the best a TUI can do is to implement its own selection (so, alt-screen, mouse tracking, etc. - plenty of stuff to handle, including all the compatibility quirks) and use OSC 52 for clipboard operations, but that loses the native look-and-feel and terminal configuration.
(Technically, WezTerm's semantic zones should be the way to solve this for good - but that's WezTerm-only, I don't think any other terminal supports those.)
On the other hand, with GUIs this is not an issue at all. And YMMV, but for me copying snippets, bits of responses and commands is a very frequent operation for any coding agent, TUI, GUI or CLI.
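For what it's worth, the OSC 52 piece itself is tiny; a minimal sketch of emitting it from Go (the escape sequence is standard, but whether it actually reaches the system clipboard depends on the terminal emulator and its settings):

```go
// osc52.go - copy a string to the terminal clipboard via the OSC 52 escape sequence.
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	text := "hello from the TUI"
	encoded := base64.StdEncoding.EncodeToString([]byte(text))
	// OSC 52: ESC ] 52 ; c ; <base64 payload> BEL -- "c" targets the clipboard selection.
	// Many emulators disable this by default for security reasons.
	fmt.Printf("\x1b]52;c;%s\x07", encoded)
}
```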
knoopx · 14h ago
This is debatable; a proper TUI has the same complexities as conventional UIs, plus legacy rendering.
wonger_ · 16h ago
Flashy TUIs have been around for a few years. Check out the galleries for TUI frameworks:
https://ratatui.rs/showcase/apps/
https://github.com/charmbracelet/bubbletea/tree/main/example...
https://textual.textualize.io/
I've been drafting a blog post about their pros and cons. You're right, text input doesn't feel like a true REPL, probably because they're not using readline. And we see more borders and whitespace because people can afford the screen space.
But there are perks like mouse support, discoverable commands, and color cues. Also, would you prefer someone make a mediocre TUI or a mediocre GUI for your workflows?
JeanMertz · 18h ago
For what it's worth, this is exactly why I am working on Jean-Pierre[0], pitched as:
> A command-line toolkit to support you in your daily work as a software programmer. Built to integrate into your existing workflow, providing a flexible and powerful pair-programming experience with LLMs.
The team behind DCD[1] are funding my work, as we see a lot of potential in a local-first, open-source, CLI-driven programming assistant for developers. This is obviously a crowded field, and growing more crowded by the day, but we think there's still a lot of room for improvement in this area.
We're still working on a lot of the fundamentals, but are moving closer to supporting agentic workflows similar to Claude Code, but built around your existing workflows, editors and tools, using the Unix philosophy of DOTADIW.
We're not at a state where we want to promote it heavily, as we're introducing breaking changes to the file format almost daily, but once we're a bit further along, we hope people find it as useful as we have in the past couple of months, integrating it into our existing terminal configurations, editors and local shell scripts.
[0]: https://github.com/dcdpr/jp [1]: https://contract.design
I have a sneaking suspicion Claude Code is a TUI just because that's more convenient for running on ephemeral VMs (no need to load a desktop OS, instant ssh compatibility) and that they didn't realize everyone would be raw dogging --dangerously-skip-permissions on their laptop's bare metal OS
Oras · 17h ago
It feels like going back to NC again (Norton Commander)
bdhcuidbebe · 18h ago
> I find it strange how most of these terminal-based AI coding agents have ended up with these attempts at making text UIs flashy.
It's next-gen script kids.
jrm4 · 18h ago
If true, GOOD.
I 100% unironically believe we're better off with more script kiddies today, not fewer.
wglb · 15h ago
Out of curiosity, why is that?
anp · 11h ago
Not GP but as a 90s kid who didn’t learn programming until my 20s my impression is that script kiddies with cute hacks are a huge source of creativity. And maybe even playful tools, which I personally enjoy. Something about not focusing on deep programming or tech maybe makes it easier to just have fun and accomplish your goals.
jrm4 · 11h ago
Yup, similar to other comment.
I fear that today's kids are too compliant about everything; the script kiddie ethos at the time still wasn't primarily clear fraud, just chaos. Which, yeah, I think we could use a little of that now.
anthk · 17h ago
Uhm, you forgot ANSI animations from BBS, stuff like the BB demo from AALIB, aafire, midnight commander with tons of colours, mocp with the same...
Flashy stuff for the terminal isn't new. Heck, in the late 90's/early 00's everyone tried e17 and eterm at least once. And then KDE3 with XRender extensions brought more fancy stuff to terminals and the like, plus compositor effects with xcompmgr and, later, compiz.
But I'm old fashioned. I prefer iomenu+xargs+nvi and custom macros.
JTbane · 17h ago
Some really old heads told me that syntax highlighting is a gimmick. I can't imagine looking at code without it.
troupo · 15h ago
People have been making TUIs since time immemorial.
Discover the joys of Turbo Vision and things like Norton Commander, DOS Navigator, Word Perfect etc.
The problem is that most current tools can't get either the TUI part or the terminal part right.
layer8 · 14h ago
I wouldn’t describe those traditional TUIs as trying to be flashy, though. They were largely utilitarian.
bananapub · 16h ago
Why is it strange? They're making it look slick and powerful and scifi and cool, as an (extremely successful) marketing gimmick.
tptacek · 19h ago
One nice thing about this is that it's early days for this, and the code is really clear and schematic, so if you ever wanted a blueprint for how to lay out an agent with tool calls and sessions and automatic summarization and persistence, save this commit link.
The big question - which one of these new agents can consume local models to a reasonable degree? I would like to ditch the dependency on external APIs - willing to trade some performance in exchange.
jasonm23 · 19h ago
Crush has an open issue (2 weeks) to add Ollama support - it's in progress.
ggerganov · 17h ago
They should add "custom endpoint" support instead [0].
[0] https://github.com/microsoft/vscode/issues/249605
It's basic: edit the config file. I just downloaded it; the file is ~/.cache/share/crush/providers.json.
Add your own provider or edit an existing one.
Edit api_endpoint, done.
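A provider entry in that file has roughly this shape (an Ollama example; the model id, context window, and cost fields are illustrative):

```
{
  "providers": {
    "ollama": {
      "type": "openai",
      "base_url": "http://localhost:11434/v1",
      "api_key": "ollama",
      "models": [
        {
          "id": "llama3.2:3b",
          "model": "Llama 3.2 3B",
          "context_window": 131072,
          "default_max_tokens": 4096,
          "cost_per_1m_in": 0,
          "cost_per_1m_out": 0
        }
      ]
    }
  }
}
```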
Aperocky · 18h ago
nice, that would be my reason to use Crush.
tempodox · 17h ago
Me too.
0x457 · 19h ago
Most of these agents work with any OpenAI compatible endpoints.
oceanplexian · 19h ago
Actually not really.
I spent at least an hour trying to get OpenCode to use a local model and then found a graveyard of PRs begging for Ollama support or even the ability to simply add an OpenAI endpoint in the GUI. I guess the maintainers simply don't care. Tried adding it to the backend config and it kept overwriting/deleting my config. Got frustrated and deleted it. Sorry but not sorry, I shouldn't need another cloud subscription to use your app.
Claude code you can sort of get to work with a bunch of hacks, but it involves setting up a proxy and also isn't supported natively and the tool calling is somewhat messed up.
Warp seemed promising, until I found out the founders would rather alienate their core demographic despite ~900 votes on the GH issue to allow local models https://github.com/warpdotdev/Warp/issues/4339. So I deleted their crappy app, even Cursor provides some basic support for an OpenAI endpoint.
spmurrayzzz · 18h ago
> I spent at least an hour trying to get OpenCode to use a local model and then found a graveyard of PRs begging for Ollama support
Almost from day one of the project, I've been able to use local models. Llama.cpp worked out of the box with zero issues, same with vllm and sglang. The only tweak I had to make initially was manually changing the system prompt in my fork, but now you can do that via their custom modes features.
The ollama support issues are specific to that implementation.
simonw · 18h ago
I still haven't seen any local models served by Ollama handle tool calls well via that OpenAI endpoint. Have you had any success there?
bachittle · 18h ago
LM Studio is probably better in this regard. I was able to get LM Studio to work with Cursor, a product known to specifically avoid giving support to local models. The only requirement is that if it uses servers as a middle-man, which is what Cursor does, you need to port forward.
What happens if you just point it at its own source and ask it to add the feature?
segmondy · 13h ago
It will add the feature. I saw OpenAI make the claim that developers are adding their own features, saw Anthropic make the same claim, and Aider's Paul often says Aider wrote most of the code. I started building my own coding CLI for the fun of it, and then I thought, why not have it start developing features, and it does too. It's as good as the model. For ish and giggles, I just downloaded Crush, pointed it to a local qwen3-30b-a3b, which is a very small model, and had it load the code, refactor itself and point out bugs. I have never used LSP, and just wanted to see how it performs compared to treesitter.
segmondy · 13h ago
All of them, you can even use claude-code with a local model:
https://github.com/musistudio/claude-code-router
https://aider.chat/docs/llms.html
https://aider.chat/docs/llms/lm-studio.html
navanchauhan · 19h ago
sst/opencode
metadat · 18h ago
But only a few models can actually execute commands effectively.. what is it, Claude and Gemini? Did I miss any?
rekram1-node · 17h ago
Kimi K2, Qwen3-Coder
metadat · 15h ago
Qwen3 issued such stupid commands I wouldn't count it in the same universe as Claude or Gem. I still need to test out Kimi, though- Cheers!
jmj · 14h ago
Seconding Kimi K2
tough · 11h ago
glm 4.5 air too
beanjuiceII · 11h ago
has a ton of bugs
tough · 11h ago
works quite reliably with claude-code-router and any agentic SOTA model (not necessarily anthropic)
cristea · 19h ago
I would love a comparison between all these new tools, like this with Claude Code, opencode, aider and cortex.
I just can’t get an easy overview of how each tool works and is different
riotnrrd · 19h ago
One of the difficulties -- and one that is currently a big problem in LLM research -- is that comparisons with or evaluations of commercial models are very expensive. I co-wrote a paper recently and we spent more than $10,000 on various SOTA commercial models in order to evaluate our research. We could easily (and cheaply) show that we were much better than open-weight models, but we knew that reviewers would ding us if we didn't compare to "the best."
Even aside from the expense (which penalizes universities and smaller labs), I feel it's a bad idea to require academic research to compare itself to opaque commercial offerings. We have very little detail on what's really happening when OpenAI for example does inference. And their technology stack and model can change at any time, and users won't know unless they carefully re-benchmark ($$$) every time you use the model. I feel that academic journals should discourage comparisons to commercial models, unless we have very precise information about the architecture, engineering stack, and training data they use.
tough · 11h ago
You have to separate the model from the interface, imho.
You can totally evaluate these as GUIs, CLIs and TUIs with more or fewer features and connectors.
Model quality is about benchmarks.
Aider is great at showing benchmarks for its users.
gemini-cli now tells you the % of correct tool calls at the end of a session.
imjonse · 17h ago
This used to be opencode but was renamed after some fallout between the devs I think.
https://x.com/thdxr/status/1933561254481666466 https://x.com/meowgorithm/status/1933593074820891062 https://www.youtube.com/watch?v=qCJBbVJ_wP0
Gemini summary of the above:
- Kujtim Hoxha creates a project named TermAI using open-source libraries from the company Charm.
- Two other developers, Dax (a well-known internet personality and developer) and Adam (a developer and co-founder of Chef, known for his work on open-source and developer tools), join the project.
- They rebrand it to OpenCode, with Dax buying the domain and both heavily promoting it and improving the UI/UX.
- The project rapidly gains popularity and GitHub stars, largely due to Dax and Adam's influence and contributions.
- Charm, the company behind the original libraries, offers Kujtim a full-time role to continue working on the project, effectively acqui-hiring him.
- Kujtim accepts the offer. As the original owner of the GitHub repository, he moves the project and its stars to Charm's organization. Dax and Adam object, not wanting the community project to be owned by a VC-backed company.
- Allegations surface that Charm rewrote git history to remove Dax's commits, banned Adam from the repo, and deleted comments that were critical of the move.
- Dax and Adam, who own the opencode.ai domain and claim ownership of the brand they created, fork the original repo and launch their own version under the OpenCode name.
- For a time, two competing projects named OpenCode exist, causing significant community confusion.
- Following the public backlash, Charm eventually renames its version to Crush, ceding the OpenCode name to the project now maintained by Dax and Adam.
canadaduane · 7h ago
This is like game of thrones, dev edition. Thanks for the background.
/me up and continues search for good people and good projects.
beanjuiceII · 11h ago
yea two of the devs did a crazy rug pull
tough · 11h ago
lmao weirdest stuff ever on X and it seems like nobody cares anymore?
paradite · 17h ago
The performance not only depends on the tool, it also depends on the model, and the codebase you are working on (context), and the task given (prompt).
And all these factors are not independent. Some combinations work better than others. For example:
- Claude Sonnet 4 might work well for feature implementation on backend Python code using Claude Code.
- Gemini 2.5 Pro works better for bug fixes on frontend React codebases.
...
So you can't just test the tools alone and keep everything else constant. Instead you get a combinatorial explosion of tool * model * context * prompt to test.
16x Eval can tackle parts of the problem, but it doesn't cover factors like tools yet.
https://eval.16x.engineer/
Pros:
- Beautiful UI
- Useful sidebar, keep track of changed files, cost
- Better UX for accepting changes (has hotkeys, shows nicer diff)
Cons:
- Can't combine models. Claude Code using a combination of Haiku for menial search stuff and Sonnet for thinking is nice.
- Adds a lot of unexplained junk binary files in your directory. It's probably in the docs somewhere I guess.
- The initial init makes some CHARM.md that tries to be helpful, but everything it had did not seem like helpful things I want the model to know. Simple stuff, like, my Go tests use PascalCasing, e.g. TestCompile.
- Ctrl+C to exit crashed my terminal.
jimmcslim · 14h ago
> The initial init makes some CHARM.md
Oh god please no... can we please just agree on a standard for a well-known single agent instructions file, like AGENT.md [1] perhaps (and yes, this is the standard being shilled by Amp for their CLI tool, I appreciate the irony there). Otherwise we rely on hacks like this [2]
[1] https://ampcode.com/AGENT.md
[2] https://kau.sh/blog/agents-md/
I’ve been playing with Crush over the past few weeks and I’m genuinely bullish on its potential.
I've been following Charm for some time and they’re one of the few groups that get DX and that consistently ship tools that developers love. Love seeing them joining the AI coding race. Still early days, but this is clearly a tool made by people who actually use it.
bazhand · 58m ago
Very interesting to see so many new TUI tools for llm.
Opencode allows auth via Claude Max, which is a huge plus over requiring API (ANTHROPIC_API_KEY)
anonzzzies · 19h ago
Another one, but indeed very nice looking. Will definitely be testing it.
What I miss from all of these (EDIT: I see opencode has this for GitHub) is being able to authenticate with the monthly paid services: GitHub Copilot, Claude Code, OpenAI Codex, Cursor etc etc
That would be the best addition; I have these subscriptions and might not like their interfaces, so it would be nice to be able to switch.
MrGreenTea · 6h ago
I don't think most of these allow other tools to "use" the monthly subscription. Because of that you need an API key and have to pay per tokens. Even Claude code for a while did not use your Claude subscription.
anonzzzies · 4h ago
But now they have a subscription for Claude Code, Copilot has a sub, and some others too. They might not allow it, but whatever; we are paying, so what's the big deal.
NitpickLawyer · 17h ago
> LSP-Enhanced: Crush uses LSPs for additional context, just like you do
This is the most interesting feature IMO, interested to see how this pans out. The multiple sessions / project also seems interesting.
esafak · 17h ago
There are LSP MCPs so you can use them with other agents too.
NitpickLawyer · 17h ago
I'm not really into golang, but if I read this [1] correctly, they seem to append the LSP stuff to every prompt, and automatically after each tool that supports it? It seems a bit more "integrated" than just an MCP.
[1] - https://github.com/charmbracelet/crush/blob/317c5dbfafc0ebda...
remarkably both Cursor and Zed do this (GUI, not CLI)
mbladra · 19h ago
Woah I love the UI. Compared to the other coding agents I've used (eg. Claude Code, aider, opencode) this feels like the most enjoyable to use so far..
Anyone try switching LLM providers with it yet? That's something I've noticed to be a bit buggy with other coding agents
bachittle · 18h ago
Bubble Tea has always been an amazing TUI. I find React TUI (which is what Claude Code uses) to be buggy and always have to work against it.
tbeseda · 18h ago
Agreed. Charm has a solid track record of great TUIs. While I appreciate a good DSL, I don't think React for a TUI (via ink) is working out well.
petesergeant · 8h ago
Yes, me too. The inline syntax highlighting is very nice. I hope CC steals liberally.
jekude · 19h ago
Charmbracelet is amazing. Will there be an equivalent of Claude Code's CLAUDE.md files?
An unfortunate clash. I can say from experience that the sst version has a lot of issues that would benefit from more manpower, even though they are working hard. If only they could resolve their differences.
threecheese · 18h ago
I’m definitely interested as well. This is the other side of the sst/charm ‘opencode-ai’ fork we’ve been expecting, and I can’t wait to see how they are differentiating. Talented teams on all sides, glad to see indie dev shops getting involved (guess you could include Warp or Sourcegraph here as well, though their funding models are quite different).
eddyg · 19h ago
One big benefit of opencode is that it lets you authenticate to GitHub Copilot. This lets you switch between all the various models Copilot supports, which is really nice.
indigodaddy · 17h ago
What if you don’t have a copilot plan, can you still authenticate to your GitHub account and get some free tier level services ?
bluelightning2k · 15h ago
Is this the company that did shady things by buying an open source repo and kicking out the contributors? Something to do with OpenCode or SST or something, idk. Could be a different company?
teh_klev · 12h ago
Yes, this is that company. This is the "original":
https://github.com/sst/opencode
They mention FreeBSD support in the README so that adds a couple points
throitallaway · 19h ago
All of Charmbracelet's open source stuff is Go-based, which supports a bunch of platforms.
graemep · 19h ago
This is not open source.
bsideup · 18h ago
"the Charm stuff" it is built on (and anyone else can use) is :)
And that's the stuff most of the industry (e.g. GitHub CLI) is using, btw.
Also, it looks like Crush has an irrevocable eventual fallback to MIT[1] allowing them to develop in open so you basically get all the bells and whistles available. We probably couldn't ask for more :)
This one feels refreshing. It’s written in Go, and the TUI is pretty slick. I’ve been running Qwen Coder 3 on a GPU cluster with 2 B200s at $2 per hour, getting 320k context windows and burning through millions of tokens without paying closed labs for API calls.
segmondy · 13h ago
how many tk/sec are you getting on that setup especially when you have 100k+ tokens?
yahoozoo · 14h ago
Are you using a service for the GPU cluster?
pizzalife · 19h ago
Beautiful terminal interface, well done. For people using Crush, how do you feel it compares to Claude Code or Cursor?
dgunay · 17h ago
Played with it a bit. So far it lacks some key functionality for my use case: I need to be able to launch an interactive session with a prefilled prompt. I like to spawn tmux sessions running an agent with a prompt in a single command, and then check in on it later if any followup prompting is needed.
Other papercuts: no up/down history, and the "open editor" command appears to do nothing.
Still, it's a _ridiculously_ pretty app. 5 stars. Would that all TUIs were this pleasing to look at.
bewuethr · 16h ago
There is history scrolling; you have to focus the chat with Tab first.
cedws · 18h ago
I'm happy to see some LLM tooling in Go, I really don't want to touch anything to do with JavaScript/npm/Python if I can help it.
I'm guessing building this is what Charm raised the funds for.
dwaltrip · 18h ago
Can’t hate on JS anymore, we have typescript now :)
cedws · 18h ago
If anything it makes me hate it more, because now you have a variety of build systems, even more node_modules heaviness, and ample opportunities for supply chain attacks via opaque transpiled npm packages.
SwiftyBug · 15h ago
I completely agree with you, but let's not pretend that Go's dependency manager is free from supply chain attack vulnerabilities. The whole module mirror shenanigans took a hit on my trust in Go's module management.
cedws · 10h ago
Go still has one of the best supply chain security stories of any language.
8n4vidtmkvmk · 18h ago
Fwiw, I compile to readable JS. No point minifying. If someone wants to use it in an app, they will do so anyway
jemiluv8 · 13h ago
Wondered what Claude Code would look like if it was built by the people over at charmbracelet. I suppose this is it.
The terminal is so ideal for agentic coding, and the more interactive the better.
I personally would like to be able to enter multiline text in a more intuitive way in Claude Code. This nails it.
LouisvilleGeek · 17h ago
Looks "Glamourous" but lacks the basics:
- Up/down history
- Copy text
Other than these issues feels much nicer than Claude Code since the screen does not shake violently.
indigodaddy · 17h ago
Is Groq still "free"? Anyone tested Crush with a Groq free key to see how much mileage you can get out of it?
ryanmcbride · 16h ago
Trying this on Windows after installing from npm, and when it asks for my ChatGPT API key, it doesn't seem to let me paste it, or type anything, or submit at all. Just sits there until I either go back or force quit.
edit: setting the key as an env variable works tho.
rawkode · 15h ago
How to handle an OSS dispute where you're the baddy: rebrand and hope nobody notices.
Let's not forget they're the company that bought an OSS project, OpenCode, and tried to "steal" it
mccoyb · 15h ago
You're talking about the wrong company. This is not the company you think it is. These are the creators of `bubbletea`, a popular TUI framework in Go.
nsonha · 2h ago
I think you're pretty clueless to make that claim; they bought something, not stole it, and one of the 3 core contributors (who is also the original creator of the project) agreed. You should form an opinion based on logic and facts, not who you follow.
n00bskoolbus · 13h ago
I'm unfamiliar with this. How would they steal it if they bought the open source project?
xxzure · 17h ago
So what are the differences between this one, Claude Code and Gemini CLI?
bluelightning2k · 15h ago
Claude Code and Gemini CLI (and OpenAI Codex) are first party from the respective companies. But also kind of products - in extreme cases people pay $200/month for Claude Code and get $thousands and thousands of usage. There's product bundling there beyond just the interface.
I think Claude Code specifically has a reputation for being a 1st class citizen - as in the model is trained and evalled on that specific toolcall syntax.
yoavm · 16h ago
Claude Code uses Claude, Gemini CLI uses Gemini, and this one can be configured to use any model you want.
Why does this require XCode 26 to be installed instead of XCode 16?
Silly
ianbutler · 18h ago
I am starring this just for the aesthetic, absolutely nailed it.
apwell23 · 19h ago
Sucks that I can't use any of these because Claude Code has me in golden handcuffs. I don't care about the CLI, but as a hobbyist I can't afford to call LLM APIs directly.
epiccoleman · 19h ago
I've been meaning to try out Opencode on the basis of this comment from a few weeks back where one of the devs indicated that Claude Pro subscriptions worked with Opencode:
> opencode kinda cheats by using Antropic client ID and pretending to be Claude Code, so it can use your existing subscription. [1]
I'd definitely like to see Anthropic provide a better way for the user's choice of clients to take advantage of the subscription. The way things stand today, I feel like I'm left with no choice but to stick to Claude Code for sonnet models and try out cool tools like this one with local models.
Now, with all that said, I did recently have Claude code me up a POC where I used Playwright to automate the Claude desktop app, with the idea being that you could put an API in front of it and take advantage of subscription pricing. I didn't continue messing with it once the concept was proved, but I guess if you really wanted to you could probably hack something together (though I imagine you'd be giving up a lot by ramming interactions through Claude Desktop in this manner). [2]
I thought Claude Code (sub) could work with alternate UIs, no? Eg doesn't Neovim have a Claude Code plugin? I want to say there are one or two more as well.
Though I think in Neovim's case they had to reverse engineer the API calls for Claude Code. Perhaps that's against the TOS.
Regardless, I have the intention to make something similar, so hopefully it's not against the TOS lol.
lvl155 · 18h ago
Someone please make/release a Rust CLI. OpenAI what are you doing with Codex?
Can you integrate this with/forward through a GitHub Copilot subscription?
insane_dreamer · 15h ago
Other than switching LLMs, if I'm already mostly using Claude, any reason to use this over CC?
One problem with these agents is that the tokens aren't covered by your Claude Max subscription. (Same reason I use CC instead of Zed's AI agent.)
827a · 17h ago
One thing I'm curious about: Assuming you're using the same underlying models, and putting obvious pricing differences aside: What is the functional difference between e.g. Charm and Claude Code? And Cursor, putting aside the obvious advantages running in a GUI brings and that integration.
Is there secret sauce that would make one better than the other? Available tools? The internal prompting and context engineering that the tool does for you? Again, assuming the model is the same, how similar should one expect the output from one to another be?
segmondy · 13h ago
the secret sauce will be the available tools, prompting, context engineering, etc, yup whatever "agentic algorithm" has been built in.
azemetre · 16h ago
I would honestly think no; what couldn't be reverse engineered eventually? We see this all the time.
Am curious about such results though, it's one thing to think, it's another to know! :D
827a · 10h ago
Yeah and as far as I know, both Claude Code and obviously Crush here are open source. Cursor isn't, but their code is probably just sitting in javascript in the application bundle and should be reversible if it actually mattered?
m3kw9 · 18h ago
Why not use Aider?
rane · 18h ago
Aider is not agentic.
Alifatisk · 19h ago
Oh, it’s by Charm!
verdverm · 19h ago
I don't get why terminal agents are so popular of late. Having spent more than a decade in terminal based development (vi*), and now fully moved over to a real IDE (vs code), it seems bonkers to me. The IDE is so much more... integrated
mbladra · 19h ago
At this point, TUI's still feel like the most streamlined interface for coding agents. They're inherently lighter weight, and generally more true to the context of dev environments.
y42 · 19h ago
"Feels like" is a subjective measure. For example, Gemini CLI does feel inherently lighter than something like VS Code. But why should it? It's just a chat interface with a different skin.
I'm also not sure whether Gemini CLI is actually better aligned with the context of development environments.
Anyway—slightly off-topic here:
I’m using Gemini CLI in exactly the same way I use VS Code: I type to it. I’ve worked with a lot of agents across different projects—Gemini CLI, Copilot in all its LLM forms, VS Code, Aider, Cursor, Claude in the browser, and so on. Even Copilot Studio and PowerAutomate—which, by the way, is a total dumpster fire.
From simple code completions to complex tasks, using long pre-prompts or one-shot instructions—the difference in interaction and quality between all these tools is minimal. I wouldn’t even call it a meaningful difference. More like a slight hiccup in overall consistency.
What all of these tools still lack, here in year three of the hype: meaningful improvements in coding endurance or quality. None of them truly stand out—at least not yet.
verdverm · 19h ago
> None of them truly stand out
I don't think any will ever truly stand out from the others. Seems more like convergence than anything else.
threetonesun · 19h ago
I like them because the interface is consistent regardless of what editor/IDE I'm using. Also frequently I use it to do stuff like convert files, or look at a structure and then make a shell script to modify it in some way, in which case an IDE is just overhead, and the output is just something I would run in the terminal anyway.
kermatt · 19h ago
Integration trades flexibility for convenience.
For me, a terminal environment means I can use any tool or tech, without it having to be compatible with the IDE. Editors, utilities, and runtimes can be chosen freely, and I'm responsible for ensuring they can interop.
IDEs bring convenience by integrating all of that, so the choice is up to the user: a convenient self-contained environment, vs a more custom self-assembled one.
Choose your own adventure.
verdverm · 18h ago
VS Code has the terminal(s) right there, I'm not missing out on any tool or tech
What I don't have to do is context switch between applications or interfaces
In other comments I relayed the sentiment that I enjoy not having to custom assemble a dev environment and spend way too much time making sure it works again after some plugin updates or neovim changes their APIs and breaks a bunch of my favorite plugins
smlacy · 19h ago
Because integrating directly with a very large variety of editors & environments is actually kind of hard? Everyone has their own favorite development environment, and by pulling the LLM agents into a separate area (i.e. a terminal app) you can quickly get to "works in all environments". Additionally, this also implies "works with no dev environment at all". For example, vibe coding a simple HTML-only webpage. All you need is terminal+browser.
verdverm · 19h ago
All of the IDEs already have the AI integrations, so there's no work to do. It's not like a TUI spares you the equivalent work an IDE needs to integrate a new model; it's the same config for that task.
> works with no dev environment at all
The terminal is a dev environment, my IDE has it built in. Copilot can read both the terminal and the files in my project, it even opens them and shows me the diff as it changes them. No need to switch context between where I normally code and some AI tool. These TUIs feel like the terminal version of the webapp, where I have to go back and forth between interfaces.
tptacek · 19h ago
The words "the AI integrations" are doing some weird work there, right? Agents all have opinionated structure, which changes how effective they are at working on different kinds of problems.
verdverm · 19h ago
By AI (model) integrations, it's mostly pointing at a URL and what name the API keys are under in ENV. Everyone has pretty much adopted MCP for agents at this point, so also very standardized integration there too
tptacek · 18h ago
MCP is a pretty minor detail relative to all the other design decisions that go into an agent? A serious agent is not literally just glue between programs and an upstream LLM.
nartho · 19h ago
It seems like you might have missed the gap between vi and modern terminal-based development. Neovim with plugins is absolutely amazing and integrated; there are even options like LazyVim that do all the work for you. I took the opposite journey and went from IDE to Neovim and I'm glad I did. VS Code is a bunch of stuff badly cobbled together in a web app, running in Electron. It's a resource hog and it gets quite slow in big projects. Neovim had a much higher learning curve but is so much more powerful than VS Code or even JetBrains stuff in my opinion, and so much snappier too.
verdverm · 19h ago
> It seems like you might have missed the gap between vi and modern terminal based development.
No, I used neovim and spent way too much time trying to turn it into an IDE, even with the prepackaged setups out there
VS Code is sitting below 5% CPU and 1G of memory, so I'm not seeing the resource hog you are talking about. LSPs typically use more resources (and they run outside the editor, so they're the same for both).
Aperocky · 19h ago
Neo(lazy)vim user here.. not sure what I'm missing from IDE...
Language server, check. Plugin ecosystem, check. Running tests on demand, check. Lua sucks but that's an acceptable compromise, as Vimscript is worse.
verdverm · 19h ago
I was on Neovim in the end, and 100% agree Lua is so much better than Vimscript, but now I don't need either. I spend no time trying to match what an IDE can do in the terminal and get to spend that time building the things I'm actually interested in. I recalled Linus saying the reason he (at the time) used Fedora was that it just worked and he could spend his time on the kernel instead of tinkering to get Linux working. This is one of the biggest reasons I stopped using (neo)vim.
I had lots of problems with plugins in the ecosystem breaking, becoming incompatible with others, or often falling into unmaintained status. Integrations with external SaaS services are much better too
Also information density (and ease of access) as a peer comment has mentioned
Aperocky · 18h ago
> match what an IDE can do in the terminal and get to spend that time building the things
This is a common complaint but I haven't done any setup for months.. And installing a language server because I need to write typescript is just <leader>cm and then lllll on the servers I need.
__alexs · 19h ago
Mice are good and the terminal doesn't make the best use of the information density possible on modern displays.
hnlmorg · 19h ago
My experience is the opposite. Terminal UIs make better use of information density because they don't have the ridiculous padding between widgets and other graphical chrome that modern GUIs have.
Also terminals support mice. Have done for literally decades.
Ultimately though, it just boils down to personal preference
nartho · 19h ago
Modern terminal emulators run at native resolution and support window splitting. You can have the exact same information density (I'd argue that a nice neovim environment has more information density than most IDEs since vs code and jetbrains seem to love putting extra space and padding everywhere now.)
__alexs · 15h ago
Terminal emulators can only do exactly that. Emulate increasingly large terminals. They are almost fundamentally unable to render something smaller than a single character.
procone · 19h ago
You can use a mouse with a terminal.
Also one could argue that you opt into your own level of information density.
epiccoleman · 19h ago
For me, the workflow that Claude Code provides via VSCode plugins or even IntelliJ integration is great. TUI for talking to the agent and then some mild GUI gloss around diffs and such.
I like terminal things because they are easy to use in context wherever I need them - whether that's in my shell locally or over SSH, or in the integrated terminal in whatever IDE I happen to be using.
I use vim if I need to make a quick edit to a file or two.
Idk, terminal just seems to mesh nicely into whatever else I'm doing, and lets me use the right tool for the job. Feels good to me.
verdverm · 19h ago
My VS Code has a terminal and can remote into any machine and edit code / terminal there.
What I don't get is going back to terminal first approaches and why so many companies are putting these out (except that it is probably (1) easy to build (2) everyone is doing it hype cycle). It was similar when everyone was building ChatGPT functions or whatever before MCP came out. I expect the TUI cycle will fade as quickly as it rose
tuckerman · 17h ago
I think a fairly large number of people using IDEs use them for the writing-code part almost exclusively, e.g. they don't use many/any of the other "integrated" features like VCS integration, build integration, etc. As an example, I think most people I've seen use VS Code still use git via the CLI (less sure, but I'd guess most of them use a separate terminal even).
I don't know for sure or have anything besides anecdotal evidence but I'd wager this is a majority of vscode users.
dgunay · 15h ago
I like them because they're easier to launch multiple instances of and take fewer resources. Being able to fire agents off into tmux sessions to tackle small-fry issues that they can usually oneshot is a powerful tool to fight the decay of a codebase from high prio work constantly pushing out housekeeping.
esafak · 17h ago
I think it lets developers concentrate their energy on improving the agentic experience, which matters more right now. It's hard to keep up with all the models, which the developers have to write support code for. Once the products mature, I bet they'll go visual again.
jasonm23 · 19h ago
it's the flexibility, no need for packaged extensions, just compose / pipe / etc..
iLoveOncall · 19h ago
I also use IDEs and I think people who use terminal-based editors are lunatics but I prefer terminal-based coding agents (I don't use them a lot to be fair).
It's easier to see the diff file by file and really control what the AI does IMO.
On another note VS Code is not an IDE, it's a text editor.
verdverm · 18h ago
Copilot opens the files and shows me the diff, that is not missing in the IDE
Perhaps your definition of IDE is more restrictive. I see VS Code as my environment where I develop with masses of integrations
anthk · 17h ago
Terminal-based editors can work as an IDE too, with diff and the like. Emacs is like that, and it has Magit, ediff and who knows what. And Vim can do the same, of course.
rvz · 19h ago
> a real IDE (vs code)
Much better to use Neovim than a very clunky slow editor like VS Code or Jetbrains just to edit a text file.
The keyboard is far faster than clicking everywhere with the mouse.
airstrike · 19h ago
This gets said a lot, but it's not like vscode doesn't have keyboard support.
verdverm · 18h ago
I wouldn't have made the switch if I was not able to keep my vim movements and modes. There's a great extension `vscodevim` that even implements some of the common extras like vim-motion
verdverm · 18h ago
I use the Vim plugin to keep my keyboard navigation, editor modes, and such... best of both worlds.
I think the sentiment that VS Code is clunky and slow is outdated. I have seen no noticeable impact since moving over from neovim
immibis · 19h ago
Not new to AI agents, either. I'm sure you can set up vim to be like an IDE, but unless you're coding over ssh, I don't know why it's preferable to an actual IDE (even one with vim bindings). GUIs are just better for many things.
If the optimal way to do a particular thing is a grid of rectangular characters with no mouse input, nothing prevents you having one of those in your GUI where it makes sense.
For instance, you can look up the documentation for which keys to press to build your project in your TUI IDE, or you can click the button that says "build" (and hover over the button to see which key to press next time). Why is typing :q<enter> better than clicking the "X" in the top-right corner? Obviously, the former works over ssh, but that's about it.
Slowness is an implementation detail. If MSVC6 can run fast enough on a computer from 1999 (including parsing C++) then we should be able to run things very fast today.
procone · 19h ago
Clicking X at the top right corner... Not exactly muscle memory. Way slower than ;q
airstrike · 19h ago
Wait til you find out you can do ;q in vscode too
verdverm · 18h ago
I primarily code over ssh with VS Code Remote to cloud vm instances
anthk · 17h ago
Ever heard of Emacs? WPE for terminals as a C/C++ IDE? Free Pascal?
Being an IDE for a terminal doesn't mean you can't have menus or that everything must be driven with vi modal keys and commands.
dennisy · 18h ago
Looks cool, but do we need another one of these?
gustavojoaquin · 18h ago
based
Nullabillity · 19h ago
RIP charm, I guess.
mcpar-land · 18h ago
A little disappointed to see Charm hop on AI as well.
anthk · 17h ago
No, thanks. I prefer the old way. Books, some editor (I like both Emacs and nvi), books with exercises, and maybe some autocomplete setup for function names/procedures, token words and the like.
So they just don't tend to work at all like you'd expect a REPL or a CLI to work, despite having exactly the same interaction model of executing command prompts. But they also don't feel at all like fullscreen Unix TUIs normally would, whether we're talking editors or reader programs (mail readers, news readers, browsers).
Is this just all the new entrants copying Claude Code, or did this trend get started even earlier than that? (This is one of the reasons Aider is still my go-to; it looks and feels the way a REPL is supposed to.)
They make a CLI framework for golang along with tools for it.
They didn't do it flashy for this project specifically (like Claude Code, which I don't think is flashy at all) but every single one of their other projects are like this.
The parent commenter seems to be asking for the same thing, but with rich text/media.
Presumably, you'd want this, but with some sort of interface _like_ these TUI systems?
The whole advantage to a shell, and why people get so obsessed with it, is that they provide a standard interface to every program. Tuis break that by substituting their own ui.
They also provide a standard output... limitation, really, which means you can suddenly do pipes.
The future should look to capture both of those benefits.
Everyone will get super mad, but html is honest to god a great way to do this. Imagine a `ls` that output html to make a prettt display of your local files, but if you wanted to send the list to another program you pass some kind of css selector `ls | .main-list.file-names.name` or something. It's not exactly beautiful but you get the idea.
Yeah, I prefer something like that (which should be strictly easier to make than these elaborate TUIs). I could also be interested in a shell that supports it natively, e.g. with a syntax such as `-- this is my prompt! it's not necessary to quote stuff`. I'd also enjoy an HTML shell that can output markdown, tables and interactive plots rather than trying to bend a matrix of characters to do these things (as long as it's snappy and reliable). I haven't looked very hard, so these things might already exist.
Personally, I find warp a bit disorienting, but it is indeed a more integrated approach.
I quite like them, unlike traditional TUIs, the keybindings are actually intuitively discoverable, which is nice.
(Technically, WezTerm's semantic zones should be the way to solve this for good - but that's WezTerm-only, I don't think any other terminal supports those.)
On the other hand, with GUIs this is not an issue at all. And YMMV, but for me copying snippets, bits of responses and commands is a very frequent operation for any coding agent, TUI, GUI or CLI.
https://ratatui.rs/showcase/apps/
https://github.com/charmbracelet/bubbletea/tree/main/example...
https://textual.textualize.io/
I've been drafting a blog post about their pros and cons. You're right, text input doesn't feel like a true REPL, probably because they're not using readline. And we see more borders and whitespace because people can afford the screen space.
But there's perks like mouse support, discoverable commands, and color cues. Also, would you prefer someone make a mediocre GUI or a mediocre GUI for your workflows?
> A command-line toolkit to support you in your daily work as a software programmer. Built to integrate into your existing workflow, providing a flexible and powerful pair-programming experience with LLMs.
The team behind DCD[1] are funding my work, as we see a lot of potential in a local-first, open-source, CLI-driven programming assistant for developers. This is obviously a crowded field, and growing more crowded by the day, but we think there's still a lot of room for improvement in this area.
We're still working on a lot of the fundamentals, but are moving closer to supporting agentic workflows similar to Claude Code, but built around your existing workflows, editors and tools, using the Unix philosophy of DOTADIW.
We're not at a state where we want to promote it heavily, as we're introducing breaking changes to the file format almost daily, but once we're a bit further along, we hope people find it as useful as we have in the past couple of months, integrating it into our existing terminal configurations, editors and local shell scripts.
[0]: https://github.com/dcdpr/jp [1]: https://contract.design
^ for the uninitiated
Its next gen script kids.
I 100% unironically believe we're better off more script kiddies today, not fewer.
I fear that todays kids are too compliant on everything; the script kiddie ethos at the time still wasn't primarily clear fraud, just chaos. Which, yeah, I think we could use a little of that now.
Flashy stuff for the terminal it's not new. Heck, in late 90's/early 00's everyone tired e17 and eterm at least once. And then KDE3 with XRender extensions with more fancy stuff on terminals and the like, plus compositor effects with xcompmgr and later, compiz.
But I'm old fashioned. I prefer iomenu+xargs+nvi and custom macros.
Discover the joys of Turbo Vision and things like Norton Commander, DOS Navigator, Word Perfect etc.
They problem is that most current tools can neither do the TUI right or the terminal part right.
https://share.cleanshot.com/XBXQbSPP
[0] https://github.com/microsoft/vscode/issues/249605
``` { "providers": { "ollama": { "type": "openai", "base_url": "http://localhost:11434/v1", "api_key": "ollama", "models": [ { "id": "llama3.2:3b", "model": "Llama 3.2 3B", "context_window": 131072, "default_max_tokens": 4096, "cost_per_1m_in": 0, "cost_per_1m_out": 0 } ] } } } ```
it's basic, edit the config file. I just downloaded it, ~/.cache/share/crush/providers.json add your own or edit an existing one
Edit api_endpoint, done.
I spent at least an hour trying to get OpenCode to use a local model and then found a graveyard of PRs begging for Ollama support or even the ability to simply add an OpenAI endpoint in the GUI. I guess the maintainers simply don't care. Tried adding it to the backend config and it kept overwriting/deleting my config. Got frustrated and deleted it. Sorry but not sorry, I shouldn't need another cloud subscription to use your app.
Claude code you can sort of get to work with a bunch of hacks, but it involves setting up a proxy and also isn't supported natively and the tool calling is somewhat messed up.
Warp seemed promising, until I found out the founders would rather alienate their core demographic despite ~900 votes on the GH issue to allow local models https://github.com/warpdotdev/Warp/issues/4339. So I deleted their crappy app, even Cursor provides some basic support for an OpenAI endpoint.
Almost from day one of the project, I've been able to use local models. Llama.cpp worked out of the box with zero issues, same with vllm and sglang. The only tweak I had to make initially was manually changing the system prompt in my fork, but now you can do that via their custom modes features.
The ollama support issues are specific to that implementation.
https://github.com/musistudio/claude-code-router
https://aider.chat/docs/llms.html
https://aider.chat/docs/llms/lm-studio.html
I just can’t get an easy overview of how each tool works and is different
Even aside from the expense (which penalizes universities and smaller labs), I feel it's a bad idea to require academic research to compare itself to opaque commercial offerings. We have very little detail on what's really happening when OpenAI for example does inference. And their technology stack and model can change at any time, and users won't know unless they carefully re-benchmark ($$$) every time you use the model. I feel that academic journals should discourage comparisons to commercial models, unless we have very precise information about the architecture, engineering stack, and training data they use.
you can totally evaluate these as GUI's, and CLI's and TUI's with more or less features and connectors.
Model quality is about benchmarks.
aider is great at showing benchmarks for their users
gemini-cli now tells you % of correct tools ending a session
https://x.com/thdxr/status/1933561254481666466 https://x.com/meowgorithm/status/1933593074820891062 https://www.youtube.com/watch?v=qCJBbVJ_wP0
Gemini summary of the above:
- Kujtim Hoxha creates a project named TermAI using open-source libraries from the company Charm.
- Two other developers, Dax (a well-known internet personality and developer) and Adam (a developer and co-founder of Chef, known for his work on open-source and developer tools), join the project.
- They rebrand it to OpenCode, with Dax buying the domain and both heavily promoting it and improving the UI/UX.
- The project rapidly gains popularity and GitHub stars, largely due to Dax and Adam's influence and contributions.
- Charm, the company behind the original libraries, offers Kujtim a full-time role to continue working on the project, effectively acqui-hiring him.
- Kujtim accepts the offer. As the original owner of the GitHub repository, he moves the project and its stars to Charm's organization. Dax and Adam object, not wanting the community project to be owned by a VC-backed company.
- Allegations surface that Charm rewrote git history to remove Dax's commits, banned Adam from the repo, and deleted comments that were critical of the move.
- Dax and Adam, who own the opencode.ai domain and claim ownership of the brand they created, fork the original repo and launch their own version under the OpenCode name.
- For a time, two competing projects named OpenCode exist, causing significant community confusion.
- Following the public backlash, Charm eventually renames its version to Crush, ceding the OpenCode name to the project now maintained by Dax and Adam.
/me ups and continues the search for good people and good projects.
And all these factors are not independent. Some combinations work better than others. For example:
- Claude Sonnet 4 might work well with feature implementation on backend Python code using Claude Code.
- Gemini 2.5 Pro works better for bug fixes on frontend React codebases.
...
So you can't just test the tools alone and keep everything else constant. Instead you get a combinatorial explosion of tool * model * context * prompt to test.
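To make that explosion concrete, here's a toy sketch of the grid you'd have to cover; every tool, model, codebase, and task name below is made up purely for illustration:

```
package main

import "fmt"

// Each eval run is a (tool, model, codebase, task) tuple, so the grid
// multiplies out fast even with a handful of options per axis.
func main() {
	tools := []string{"Claude Code", "Crush", "Aider", "Gemini CLI"}
	models := []string{"sonnet-4", "gemini-2.5-pro", "gpt-4.1", "qwen3-coder"}
	codebases := []string{"backend/python", "frontend/react", "infra/terraform"}
	tasks := []string{"feature", "bugfix", "refactor"}

	runs := 0
	for _, t := range tools {
		for _, m := range models {
			for _, c := range codebases {
				for _, task := range tasks {
					_ = fmt.Sprintf("%s + %s on %s doing %s", t, m, c, task)
					runs++
				}
			}
		}
	}
	// 4 * 4 * 3 * 3 = 144 runs, before repeating anything for variance.
	fmt.Println("eval runs needed:", runs)
}
```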
16x Eval can tackle parts of the problem, but it doesn't cover factors like tools yet.
https://eval.16x.engineer/
https://dictionary.cambridge.org/dictionary/english/glamorou...
Pros:
- Beautiful UI
- Useful sidebar, keep track of changed files, cost
- Better UX for accepting changes (has hotkeys, shows nicer diff)
Cons:
- Can't combine models. Claude Code using a combination of Haiku for menial search stuff and Sonnet for thinking is nice.
- Adds a lot of unexplained junk binary files in your directory. It's probably in the docs somewhere I guess.
- The initial init makes some CHARM.md that tries to be helpful, but nothing it had seemed like helpful things I want the model to know. Just simple stuff, like that my Go tests use PascalCasing, e.g. TestCompile.
- Ctrl+C to exit crashed my terminal.
Oh god please no... can we please just agree on a standard for a well-known single agent instructions file, like AGENT.md [1] perhaps (and yes, this is the standard being shilled by Amp for their CLI tool, I appreciate the irony there). Otherwise we rely on hacks like this [2]
[1] https://ampcode.com/AGENT.md
[2] https://kau.sh/blog/agents-md/
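For what it's worth, hacks like [2] generally boil down to keeping one canonical instructions file and pointing the tool-specific names at it. A minimal sketch of that idea (the alias file names are assumptions about what each tool looks for, not taken from [2]):

```
package main

import (
	"fmt"
	"os"
)

// Keep one canonical AGENT.md and symlink the tool-specific names to it,
// so every agent reads the same instructions.
func main() {
	canonical := "AGENT.md"
	for _, alias := range []string{"CLAUDE.md", "GEMINI.md"} {
		// Tolerate re-runs: ignore "already exists" errors.
		if err := os.Symlink(canonical, alias); err != nil && !os.IsExist(err) {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```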
I've been following Charm for some time and they're one of the few groups that get DX and consistently ship tools that developers love. Love seeing them join the AI coding race. Still early days, but this is clearly a tool made by people who actually use it.
Opencode allows auth via Claude Max, which is a huge plus over requiring API (ANTHROPIC_API_KEY)
What I miss from all of these (EDIT: I see opencode has this for GitHub) is the ability to authenticate with the monthly paid services: GitHub Copilot, Claude Code, OpenAI Codex, Cursor, etc.
That would be the best addition; I have these subscriptions and might not like their interfaces, so it would be nice to be able to switch.
This is the most interesting feature IMO, interested to see how this pans out. The multiple sessions / project also seems interesting.
[1] - https://github.com/charmbracelet/crush/blob/317c5dbfafc0ebda...
[1]: https://github.com/sst/opencode
[0]: https://github.com/opencode-ai/opencode
https://github.com/charmbracelet/crush/pulse/monthly
https://github.com/sst/opencode/pulse/monthly
An unfortunate clash. I can say from experience that the sst version has a lot of issues that would benefit from more manpower, even though they are working hard. If only they could resolve their differences.
https://github.com/sst/opencode
Also, it looks like Crush has an irrevocable eventual fallback to MIT [1], allowing them to develop in the open, so you basically get all the bells and whistles. We probably couldn't ask for more :)
[1] https://github.com/charmbracelet/crush/blob/317c5dbfafc0ebda...
Other papercuts: no up/down history, and the "open editor" command appears to do nothing.
Still, it's a _ridiculously_ pretty app. 5 stars. Would that all TUIs were this pleasing to look at.
I'm guessing building this is what Charm raised the funds for.
The terminal is so ideal for agentic coding and the more interactive the better. I personally would like to be able to enter multiline text in a more intuitive way in claude code. This nails it.
- Up / down history
- Copy text
Other than these issues, it feels much nicer than Claude Code, since the screen does not shake violently.
edit: setting the key as an env variable works tho.
Let's not forget they're the company that bought an OSS project, OpenCode, and tried to "steal" it
I think Claude Code specifically has a reputation for being a 1st class citizen - as in the model is trained and evalled on that specific toolcall syntax.
Silly
> opencode kinda cheats by using Anthropic client ID and pretending to be Claude Code, so it can use your existing subscription. [1]
I'd definitely like to see Anthropic provide a better way for the user's choice of clients to take advantage of the subscription. The way things stand today, I feel like I'm left with no choice but to stick to Claude Code for sonnet models and try out cool tools like this one with local models.
Now, with all that said, I did recently have Claude code me up a POC where I used Playwright to automate the Claude desktop app, with the idea being that you could put an API in front of it and take advantage of subscription pricing. I didn't continue messing with it once the concept was proved, but I guess if you really wanted to you could probably hack something together (though I imagine you'd be giving up a lot by ramming interactions through Claude Desktop in this manner). [2]
[1]: https://news.ycombinator.com/item?id=44488262
[2]: https://github.com/epiccoleman/claude-automator
Though I think in Neovim's case they had to reverse engineer the API calls for Claude Code. Perhaps that's against the TOS.
Regardless, I intend to make something similar, so hopefully it's not against the TOS lol.
https://github.com/aws/amazon-q-developer-cli
One problem with these agents is that the tokens aren't covered by your Claude Max subscription. (Same reason I use CC instead of Zed's AI agent.)
Is there secret sauce that would make one better than the other? Available tools? The internal prompting and context engineering that the tool does for you? Again, assuming the model is the same, how similar should one expect the output from one to another be?
Am curious about such results; it's one thing to think, it's another to know! :D
I'm also not sure whether Gemini CLI is actually better aligned with the context of development environments.
Anyway—slightly off-topic here:
I’m using Gemini CLI in exactly the same way I use VS Code: I type to it. I’ve worked with a lot of agents across different projects—Gemini CLI, Copilot in all its LLM forms, VS Code, Aider, Cursor, Claude in the browser, and so on. Even Copilot Studio and PowerAutomate—which, by the way, is a total dumpster fire.
From simple code completions to complex tasks, using long pre-prompts or one-shot instructions—the difference in interaction and quality between all these tools is minimal. I wouldn’t even call it a meaningful difference. More like a slight hiccup in overall consistency.
What all of these tools still lack, here in year three of the hype: meaningful improvements in coding endurance or quality. None of them truly stand out—at least not yet.
I don't think any will ever truly stand out from the others. Seems more like convergence than anything else.
For me, a terminal environment means I can use any tool or tech, without it being compatible with the IDE. Editors, utilities, and runtimes can be chosen, and I'm responsible for ensuring they can interop.
IDEs provide convenience by integrating all of that, so the choice is up to the user: a convenient, self-contained environment vs. a more custom, self-assembled one.
Choose your own adventure.
What I don't have to do is context switch between applications or interfaces
In other comments I relayed the sentiment that I enjoy not having to custom-assemble a dev environment and spend way too much time making sure it works again after some plugin updates, or after Neovim changes its APIs and breaks a bunch of my favorite plugins.
> works with no dev environment at all
The terminal is a dev environment, my IDE has it built in. Copilot can read both the terminal and the files in my project, it even opens them and shows me the diff as it changes them. No need to switch context between where I normally code and some AI tool. These TUIs feel like the terminal version of the webapp, where I have to go back and forth between interfaces.
No, I used neovim and spent way too much time trying to turn it into an IDE, even with the prepackaged setups out there
VS Code is sitting below 5% CPU and 1 GB of memory; I'm not seeing the resource hog you are talking about. LSPs typically use more resources (and they run outside the editor and are the same for both).
Language server: check. Plugin ecosystem: check. Running tests on demand: check. Lua sucks, but that's an acceptable compromise, as Vimscript is worse.
I had lots of problems with plugins in the ecosystem breaking, becoming incompatible with others, or often falling into unmaintained status. Integrations with external SaaS services are much better too
Also information density (and ease of access) as a peer comment has mentioned
This is a common complaint, but I haven't done any setup for months. And installing a language server because I need to write TypeScript is just <leader>cm and then lllll on the servers I need.
Also terminals support mice. Have done for literally decades.
Ultimately though, it just boils down to personal preference
I like terminal things because they are easy to use in context wherever I need them - whether that's in my shell locally or over SSH, or in the integrated terminal in whatever IDE I happen to be using.
I use vim if I need to make a quick edit to a file or two.
Idk, terminal just seems to mesh nicely into whatever else I'm doing, and lets me use the right tool for the job. Feels good to me.
What I don't get is going back to terminal-first approaches and why so many companies are putting these out (except that it is probably (1) easy to build and (2) an everyone-is-doing-it hype cycle). It was similar when everyone was building ChatGPT functions or whatever before MCP came out. I expect the TUI cycle will fade as quickly as it rose.
I don't know for sure or have anything besides anecdotal evidence but I'd wager this is a majority of vscode users.
It's easier to see the diff file by file and really control what the AI does IMO.
On another note VS Code is not an IDE, it's a text editor.
Perhaps your definition of IDE is more restrictive. I see VS Code as my environment where I develop with masses of integrations
Much better to use Neovim than a very clunky, slow editor like VS Code or JetBrains just to edit a text file.
The keyboard is far faster than clicking everywhere with the mouse.
I think the sentiment that VS Code is clunky and slow is outdated. I have seen no noticeable impact since moving over from neovim
If the optimal way to do a particular thing is a grid of rectangular characters with no mouse input, nothing prevents you having one of those in your GUI where it makes sense.
For instance, you can look up the documentation for which keys to press to build your project in your TUI IDE, or you can click the button that says "build" (and hover over the button to see which key to press next time). Why is typing :q<enter> better than clicking the "X" in the top-right corner? Obviously, the former works over ssh, but that's about it.
Slowness is an implementation detail. If MSVC6 can run fast enough on a computer from 1999 (including parsing C++) then we should be able to run things very fast today.
Being an IDE for the terminal doesn't mean you can't have menus, or that everything must be driven with vi modal keys and commands.