Zed: High-performance AI Code Editor
747 points by vquemener | 422 comments | 5/7/2025, 6:38:40 AM | zed.dev
I worked with Antonio on prototyping the extensions system[0]. In other words, Antonio got to stress test the pair programming collaboration tech while I ran around in a little corner of the zed codebase and asked a billion questions. While working on zed, Antonio taught me how to talk about code and make changes purposefully. I learned that the best solution is the one that shows the reader how it was derived. It was a great summer, as far as summers go!
I'm glad the editor is open source and that people are willing to pay for well-engineered AI integrations; I think originally, before AI had taken off, the business model for zed was something along the lines of a per-seat model for teams that used collaborative features. I still use zed daily and I hope the team can keep working on it for a long time.
[0]: Extensions were originally written in Lua, which didn't have the properties we wanted, so we moved to Wasm, which is fast + sandboxed + cross-language. After I left, it looks like Max and Marshall picked up the work and moved from the original serde+bincode ABI to Wasm interface types, which makes me happy: https://zed.dev/blog/zed-decoded-extensions. I have a blog post draft about the early history of Zed and how extensions with direct access to GPUI and CRDTs could turn Zed from a collaborative code editor into a full-blown collaborative application platform. The post needs a lot of work (and I should probably reach out to the team) before I publish it. And I have finals next week. Sigh. Some day!
I've been trying to be active, create issues, help in any way I can, but the focus on AI tells me Zed is no longer an editor for me.
A feature that people are paying $0 for?
Do you think GPL3 will serve as an impediment to their revenue or future venture fundraising? I assume not, since Cursor and Windsurf were forks of MIT-licensed VS Code. And both of them are entirely dependent on Microsoft's goodwill to continue developing VS Code in the open.
Tangentially, do you think this model of "tool" + "curated model aggregator" + "open source" would be useful for other, non-developer fields? Would an AI art tool with sculpting and drawing benefit from being open source? I've talked with VCs who love open developer tools but hate the idea of open creative tools for designers, illustrators, filmmakers, and other creatives. I don't quite get it, because Blender and Krita have millions of users. Comfy is kind of in that space; it's just not very user-friendly.
To be clear, by "the user" I'm referring to the Cursor devs. This is the terminology of many F/OSS licenses.
In theory everyone can fork Chrome and Android, in practice none of the forks can keep up with Google's resources, unless they are Microsoft or Samsung.
Good luck on finals!
I learned something from that code, cool stuff!
One question: how do you handle cutting a new breaking change in wit? Does it take a lot of time to deal with all the boilerplate when you copy things around?
I check back on the GitHub issue every few months and it just has more votes and more supportive comments, but no acknowledgement.
Hopefully someone can rescue us from the sluggish VS Code.
https://github.com/zed-industries/zed/issues/7992
I have a 1440p monitor and I'm seeing this issue.
Example Zed screenshot, using "Ayu Light": https://i.ibb.co/Nr8SjvR/Screenshot-from-2024-07-28-13-11-10...
Same code in VS Code: https://i.ibb.co/YZfPXvZ/Screenshot-from-2024-07-28-13-13-41...
The setting on macOS was called "use font smoothing when available".
In my opinion, this type of graphics work is not the core functionality of a text editor; the same problems have already been solved in libraries. There is no reason to reinvent that wheel... or if there is, please mention why.
In a world full of electron based apps, I appreciate anyone who dares to do things differently.
try it and see. i bet that helps/fixes at least some of you suffering from this.
A text area anchored correctly to the top and left of the window would absolutely not move no matter how tall or wide the window is, and no display scaling setting would impact that.
I have the same issue with macOS in general, and I don't understand how anyone can use it on a normal DPI monitor.
I'm guessing Zed implemented their own text rendering without hinting, without subpixel rendering, or without both.
I have had similar blurring problems with a certain monitor (1920x1200 27"), which were resolved by changing some sharpening settings in the monitor itself. Strangely, that setting did not look good on my colleague's MacBook, who was also often using that monitor, while the original settings looked fine, so we had to change the settings back and forth every time the other person used it. I don't think I was using Zed at the time; other apps had that issue.
This is because macOS does not support subpixel rendering or hinting.
https://github.com/waydabber/BetterDisplay
(Or are you using it in vertical orientation?)
It looks like the relevant work needs to be done upstream.
I don't know the internals of Zed well, but it seems entirely plausible they're doing text rendering from scratch.
Apple has removed support for font rendering methods which make text on non-integer scaled screens look sharper. As a result, if you want to use your screen without blurry text, you have to use 1080p (1x), 4k (2x 1080p), 5k (2x 1440p) or 6k screens (or any other screens where integer scaling looks ok).
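To make the integer-scaling point concrete, here's a small sketch of my own (an illustration, not anything Apple ships): a glyph edge at logical pixel x lands at x * scale device pixels, so at integer scales every edge hits a whole device pixel, while at 1.5x many edges land mid-pixel and must be anti-aliased, which reads as blur.

```python
# Toy illustration: which glyph edges land on fractional device
# pixels at a given display scale factor? Fractional edges have to
# be anti-aliased across two device pixels, hence blurry text on
# non-integer-scaled screens.

def fractional_edges(scale, logical_pixels=8):
    edges = [x * scale for x in range(logical_pixels + 1)]
    return [e for e in edges if e != int(e)]

assert fractional_edges(1.0) == []  # 1x: every edge is pixel-aligned
assert fractional_edges(2.0) == []  # 2x ("Retina"): still aligned
print(fractional_edges(1.5))        # 1.5x: [1.5, 4.5, 7.5, 10.5]
```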
To see the difference, try connecting a Windows/Linux machine to your monitor and comparing how text looks with how the same screen renders on a macOS device.
Using pixel fonts at any non-integer multiple of the native resolution will always result in horrible font rendering; I don't care what OS you're on.
I use macOS on all kinds of displays as I move throughout the day; some of them are 1x, some are 2x, and some are somewhere in between. Using a vector font in Zed looks fine on all of them. It did not look fine when I used a pixel font that I created for myself, but that's how pixel fonts work, not the fault of macOS.
1) No hinting
2) No subpixel rendering
It is greyscale font rendering, yes, but it is coloring those pixels based on subpixel information.
I have used retina displays of various sizes -- but after a while I just set them down to half their resolution usually (i.e. I do not use the 200% scaling from the OS, rather set them to be 1440p (or lower on 13inch laptops)). I have not seen an advantage to retina displays.
> I have used retina displays of various sizes -- but after a while I just set them down to half their resolution usually (i.e. I do not use the 200% scaling from the OS, rather set them to be 1440p (or lower on 13inch laptops)). I have not seen an advantage to retina displays.
(not parent commenter, but hold same opinion)
Apparently all editors bothered doing it, except Zed.
From the Issue:
> Zed looks great on my MacBook screen, but looks bad when I dock to my 1080p monitor. No other editor has that problem for some reason.
If they're running everything on the GPU then their SDF text rendering needs more work to be resolution independent. I'm assuming they use SDFs, or some variant of that.
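For readers unfamiliar with the technique: SDF text stores, per texel, the signed distance to the glyph outline, and a shader thresholds that distance with a small smoothing band to get edges at any zoom. Here's a toy sketch of my own (a circle standing in for a glyph, not Zed's actual pipeline); if the smoothing band isn't scaled with resolution, small text comes out blurry, which is one way SDF rendering goes wrong.

```python
import math

# Toy signed-distance-field "glyph": a filled circle of radius r.
# Negative distance = inside the glyph, positive = outside.

def sdf_circle(x, y, cx=0.0, cy=0.0, r=1.0):
    return math.hypot(x - cx, y - cy) - r

def coverage(d, smoothing=0.1):
    # Smoothstep from opaque (inside) to transparent (outside)
    # across an edge band of width 2*smoothing. The band width must
    # track the on-screen pixel size, or edges look soft.
    t = min(max((d + smoothing) / (2 * smoothing), 0.0), 1.0)
    return 1.0 - t * (t * (3 - 2 * t))

assert sdf_circle(0, 0) < 0    # center is inside the glyph
assert sdf_circle(2, 0) > 0    # far point is outside
assert coverage(-1.0) == 1.0   # deep inside: fully opaque
assert coverage(1.0) == 0.0    # far outside: fully transparent
```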
Really, the screen isn't the issue given that on other editors OP says it is fine.
Knuth would be angry reading this :)
The restore checkpoint/redo is too linear for my lizard brain. Am I wrong to want a tree-based agentic IDE? Why has nobody built it?
They fixed that with the new agent panel, which now works more like the other AI sidebars.
I was (mildly) annoyed by that too. The new UI still has rough edges but I like the change.
If you're working on stuff like marketing websites that are well represented in the model dataset then things will just fly, but if you're building something that is more niche it can be super important to tune the context -- in some cases this is the differentiating feature between being able to use AI assistance at all (otherwise the failure rate just goes to 100%).
Fully agreed. This was the killer feature of Zed (and locally-hosted LLMs). Delete all tokens after the first mistake spotted in generated code. Then correct the mistake and re-run the model. This greatly improved code generation in my experience. I am not sure if cloud-based LLMs even allow modifying assistant output (I would assume not since it becomes a trivial way to bypass safety mechanisms).
In general they do. For each request, you include the complete context as JSON, including previous assistant output. You can change that as you wish.
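Here's a minimal sketch of what that looks like in practice (the `{"role": ..., "content": ...}` message shape is the common convention; adapt it to your provider's SDK). To "delete all tokens after the first mistake", you truncate the last assistant message at the correction point and resend the whole list:

```python
# Hedged sketch: chat-completion APIs are stateless -- each request
# carries the full conversation, so previous assistant output is
# just data you can edit before re-running the model.

def retry_from_correction(messages, corrected_prefix):
    """Replace the last assistant turn with a corrected prefix and
    drop everything after it, ready to re-send for continuation."""
    last_assistant = max(
        i for i, m in enumerate(messages) if m["role"] == "assistant"
    )
    return messages[:last_assistant] + [
        {"role": "assistant", "content": corrected_prefix}
    ]

history = [
    {"role": "user", "content": "Write a factorial function."},
    {"role": "assistant",
     "content": "def factorial(n):\n    return n * factorial(n)"},
]
# Cut at the first mistake (missing base case), fix it, re-run.
fixed = retry_from_correction(
    history, "def factorial(n):\n    if n <= 1:\n        return 1\n"
)
assert len(fixed) == 2
assert "if n <= 1" in fixed[-1]["content"]
```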
edit: actually it is still possible to include text threads in there
Oops, I guess.
So you could manage the context with great care, then go over to the editor and select specific regions and then "pull in" the changes that were discussed.
I guess it was silly that I was always typing "use the new code" in every inline assist message. A hotkey to "pull new code" into a selected region would have been sweet.
I don't really want to "set it and forget it" and then come back to some mega diff that is like 30% wrong. Especially right now where it keeps getting stuck and doing nothing for 30m.
Vote/read-up here for the feature on Zed: https://github.com/zed-industries/zed/issues/17455
And here on VSCode: https://github.com/microsoft/vscode/issues/20889
I would recommend you check it out if you've been frustrated by the other options out there - I've been very happy with it. I'm fairly sure you can't have git-like dag trees, nor do I think that would be particularly useful for AI based workflow - you'd have to delegate rebasing and merge conflict resolution to the agent itself... lots of potential for disaster there, at least for now.
What I don't like in the last update is that they removed the multi-tabs in the assistant. Previously I could have multiple conversations going and switch easily, but now I can only do one thing at a time :(
Haven't tried the assistant2 much, mostly because I'm so comfy with my current setup
You will not catch me using the words "agentic IDE" to describe what I'm doing because its primary purpose isn't to be used by AI any more than the primary purpose of a car is to drive itself.
But yes, what I am doing is creating an IDE where the primary integration surface for humans, scripts, and AIs is not the 2D text buffer, but the embedded tree structure of the code. Zed almost gets there and it's maddening to me that they don't embrace it fully. I think once I show them what the stakes of the game are they have the engineering talent to catch up.
The main reason it hasn't been done is that we're still all basically writing code on paper. All of the most modern tools that people are using are still basically just digitizations of punchcard programming. If you dig down through all the layers of abstraction, at the very bottom is line and column, that telltale hint of paper's two-dimensionality. And because line and column get baked into every integration surface, the limitations of IDEs are the limitations of paper. When you frame the task of programming as "write a huge amount of text out on paper" it's no wonder that people turn to LLMs to do it.
When the integration layer uses the tree as the primary representation, you get to stop worrying about a valid tree constantly blinking into and out of existence, which is conceptually what happens when someone types code syntax left to right. They put an opening brace in, then later a closing brace; in between, a valid tree representation has ceased to exist.
That's possible because the source of truth for the IDE's state is an immutable concrete syntax tree. It can be immutable without ruining our costs because it has btree amortization built into it. So basically you can always construct a new tree with some changes by reusing most of the nodes from an old tree. A version history would simply be a stack of these tree references.
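The structural-sharing idea can be sketched in a few lines (my own toy illustration, much simpler than a real B-tree-amortized CST): an edit rebuilds only the spine from the changed node up to the root and reuses every untouched subtree, so version history is just a stack of root references.

```python
# Toy persistent (immutable) tree with structural sharing.

class Node:
    __slots__ = ("label", "children")
    def __init__(self, label, children=()):
        self.label = label
        self.children = tuple(children)

def replace_child(root, path, new_node):
    """Return a new root with the node at `path` (a list of child
    indices) replaced, sharing all untouched subtrees with `root`."""
    if not path:
        return new_node
    kids = list(root.children)
    kids[path[0]] = replace_child(kids[path[0]], path[1:], new_node)
    return Node(root.label, kids)

v1 = Node("fn", [Node("params"), Node("body", [Node("stmt")])])
v2 = replace_child(v1, [1, 0], Node("stmt'"))
history = [v1, v2]  # version history: just a stack of roots

assert v2.children[0] is v1.children[0]  # "params" subtree is shared
assert v1.children[1].children[0].label == "stmt"  # old version intact
```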
How can I follow up on what you're building? Would you be open to having a chat? I've found your GitHub, but let me know if there's a better way to contact you.
https://github.com/helix-editor/helix/discussions/4037
> For the nth time, it's about enabling inline suggestions and letting anything, either LSP or Extensions use it, then you don't have to guess what the coolest LLM is, you just have a generic useful interface for LLM's or anything else to use.
An argument I would agree with is that it's unreasonable to expect Helix's maintainers to volunteer their time toward building and maintaining functionality they don't personally care about.
[1]: https://microsoft.github.io/language-server-protocol/specifi...
These last two months I've been trialing both Neovim and Zed alongside Helix. I know I should probably just use Neovim since, once set up properly, it can do anything and everything. But configuring it has brought little joy. And once set up to do the same as Helix out of the box, it's noticeably slower.
Zed is the first editor I've tried that actually feels as fast as Helix while also offering AI tooling. I like how integrated everything is. The inline assistant uses context from the chat assistant. Code blocks are easy to copy from the chat panel to a buffer. The changes made by the coding agent can be individually reviewed and accepted or rejected. It's a lot of small details done right that add up to a tool that I'm genuinely becoming confident about using.
Also, there's a Helix keymap, although it doesn't seem as complete as the Vim keymap, which is what I've been using.
Still, I hope there will come a time when Helix users can have more than just Helix + Aider, because I prefer my editor inside a terminal (Helix) rather than my terminal inside an editor (Zed).
https://github.com/helix-editor/helix/pull/8675
Also, the Helix way, thus far, has been to build an LSP for all the things, so I guess you'd make a copilot LSP (I bet there already is one).
The only project I know of that recognizes this is https://github.com/SilasMarvin/lsp-ai, which pivoted away from completions to chat interactions via code actions.
I don't know the LSP spec well enough to know if these sort of complex interactions would work with it, but it seems super out of scope for it imo.
And yet, it's hard to ignore the fact that coding practices are undergoing a once-in-a-generation shift, and experienced programmers are benefiting most from it. Many of us had to ditch the comfort of terminal editors and switch to Microsoft's VSCode clones just to have these new incredible powers and productivity boosts.
Having AI code assistants built into the fast terminal editor sounds like a dream. And editors like Helix could totally deliver here if the authors were a bit more open to the idea.
edit: they updated the AI panel! looking good!
Man, so true. I tried this out a while back and it was pretty miserable to find docs, apis, etc.
IIRC they even practice a lot of bulk reexports and glob imports and so it was super difficult to find where the hell things come from, and thus find docs/source to understand how to use something or achieve something.
Super frustrating because the UI of Zed was so damn good. I wanted to replicate hah.
Have you had a chance to try the new panel? (The OP is announcing its launch today!)
The announcement is about it reaching prod release, but they emailed people to try it out in the preview version.
edit: yes i missed something. i see the new feature. hell yeah!
Check out the video in the blog post to see the new one in action!
Editing and deleting not only your messages but also the LLM's messages should be trivial.
One of the coolest things about LLM tech is that it's stateless, yet we leave that value on the floor when UIs act like it's not.
Press the 3-dots menu in the upper right of the panel, and then choose "New Text Thread" instead of "New Thread".
EDIT: just gave it a shot and I get "unsupported GPU" as an error, informing me that my GPU needs Vulkan support.
Their detection must be wrong because this is not true. And like I said, other applications don't have this problem.
For one, not all applications are GPU accelerated.
Two, their UX may need to be improved for a specific hardware configuration. I have used Zed with good performance on Intel dGPU, AMD dGPU, and Intel iGPU without issue — my guess is a missing dependency?
I don't care about Zed fixing anything - they're Zed's issues, not mine. All I'm saying is that contrary to what someone else said about the software being "fast" I tried it and at startup, it was unusably slow. I'm what you would call a failed conversion.
> Also, how is whether the project is volunteer-run relevant? Would you file a support ticket for commercial software you use saying "it's slow" and then when they follow up asking for details about your setup, you say "sorry, you don't get free QA work from me"
So this is kind of needlessly antagonistic imo - the point between the lines is that FOSS projects run by volunteers get a lot more grace than venture backed companies that go on promotion blitzes talking about their performance.
Error message, hardware configuration, done.
From my perspective that is not something you do for zed, but something you do for your distro and hardware.
And ofc, your first comment was fine either way. But the attitude of the latter is just poor.
How about "I'm getting <1FPS perf on {specs}" instead of the snark.
The antagonistic part is assuming your specific Linux configuration is innately Zed’s issue. It’s possible simply mentioning it to them would lead you quickly and easily to a solution, no free labor needed. It’s possible Zed is prepared to spend their vast VC resources on fixing your setup, even—which seems to be what you expect. Point being there’s a middle ground where you telling Zed “hey it didn't work well for me” gives Zed the chance to resolve any issues on their end in order to properly convert you, if you truly are interested in trying their editor. You don’t need to respond to the suggestion with a lecture on how companies exploit free volunteer labor and anything short of software served up on a silver platter would make you complicit. It’s really a little absurd.
If I had to guess, your system globally or their rendering library specifically is probably stuck on llvmpipe.
seems like you needing a GPU would be your issue
Putting together a high quality, actionable bug report is a much higher bar that can often feel like screaming at the clouds.
I’m genuinely curious what you are getting out of it
As a Linux user, I am sadly accustomed to some software working in only a just-so configuration. A datapoint that the software is still Mac first development is useful to know. Zed might still be worth trying, but I have to temper my enthusiasm from the headline announcement of, “everything is great”.
I'm on PopOS and the issue ended up being DRI_PRIME.
Might be worth trying `DRI_PRIME=0 zed`.
At least it did a month or so ago, and at that time I couldn't figure out a practical use for the LLM-integration either so I kind of just went back to dumb old vim and IDEA Ultimate.
When it's fast it's pretty snappy though. I recently put revisiting emacs on my todo list; I should add taking Zed out for another round as well.
Edit: I just saw your edit to your reply here[1] and that's indeed what's happening. Now the question is “why does that happen?”.
[1]: people experiencing sluggishness on Linux are almost certainly hit by a bug that makes the rendering fall back to llvmpipe (that is, CPU rendering) instead of Vulkan rendering, but macOS shouldn't have this kind of problem.
Iced, being used by System76's COSMIC EPOCH, is not great in what regards? Serious question.
IMO Slint is milestones ahead and better. They've even built out solid extensions for using their UI DSL, and they have pages and pages of docs. Of course everything has tradeoffs, and their licensing is funky to me.
Calling iced not useful reads like an uninformed take
examples beyond tiny todo app/best practices would be a great start.
> Tutorials? That's for users to write.
sure, and how's that going for them? there are near zero tutorials out there, and as someone looking to build a desktop tool in rust, they've lost me. maybe i'm not important enough for them and their primary goal is to intellectually gatekeep this tool from the vast majority for a long time, in which case, mission accomplished
> sure, and how's that going for them? there are near zero tutorials out there, and as someone looking to build a desktop tool in rust, they've lost me. maybe i'm not important enough for them and their primary goal is to intellectually gatekeep this tool from the vast majority for a long time, in which case, mission accomplished
26.5k stars on github and a flourishing community of users, which grows noticeably larger every day. new features basically every week. bug fixes sometimes fixed in literal minutes.
it's not a matter of gatekeeping, but a matter of resources. iced is basically the brainchild of a single developer (plus core team members who tackle some bits and pieces of the codebase but not frequently), who already has a day time job and is doing this for free. would you rather him write documentation—which you and I could very well write—or keep adding features so the library can get to 1.0?
I encourage you to look for evidence that invalidates your biases, as I'm confident you'll find it. and you might just love the library and the community. I promise you a warm welcome when you join us on discord ;-)
here are a few examples of bigger apps you can reference:
https://github.com/squidowl/halloy
https://github.com/hecrj/icebreaker
https://github.com/hecrj/holodeck
and my smaller-scale examples (I'm afraid my own big app is proprietary):
https://github.com/airstrike/iced_receipts a simple app showing how to manage multiple screens for CRUD-like flows
https://github.com/airstrike/pathfinder/ a simple app showing how to draw on a canvas
https://github.com/airstrike/iced_openai a barebones app showing how to make async requests
https://github.com/airstrike/tabular a somewhat complex custom widget example
This single-handedly convinced me not to rely on anything using Iced. I have no patience left for projects with that low a bus factor.
I'll be waiting for you on Discord ;-) my username is the same there so ping me if you need anything
and I forgot to link to a ridiculously cool new feature that dropped last week: time travel debugging for free
https://github.com/iced-rs/iced/pull/2910
check out the third and fourth videos!
UI frameworks typically need more than just the type of documentation that Rust docs provide. We see this with just about every UI framework around.
Just write some tutorials already.
Tutorials might be nice, but the library is evolving fast. I'm happier the core team spent time working on an animations API and debugging (including time travel) since the last release instead of working on guides for beginners.
Maybe that changes after 1.0.
Until then, countless users have learned to use it. Also iced is more a library than a framework. There's no right answer to the problems you'll be trying to solve, so writing guides on "best practices" is generally unhelpful if not downright harmful.
And countless others have requested exactly what I'm saying here. Cuts both ways.
> There's no right answer to the problems you'll be trying to solve
There's no right answer in e.g AppKit or UIKit, but having actual guides for those ecosystems has been crucial for their uptake/usage over the past decade or so. UI frameworks and libraries are not like standard developer tools and need additional documentation.
I wouldn’t hold my breath. GPUI is built specifically for Zed. It is in its monorepo without separate releases and lots of breaking changes all the time. It is pretty tailored to making a text editor rather than being a reusable GUI framework.
i think there's some desire from within zed to making this a real thing for others to reuse.
That kind of setup is fine for internal use, but it’s not how you'd structure a library meant for stable, external reuse. Until they split it out, version it properly, and stop breaking stuff all the time, it's hard to treat GPUI as a serious general-purpose option.
Waiting for Robius / Makepad to mature a bit more. Looks very promising.
Went from Atom, to VSC, to Vim and finally to Zed. Never felt more at home. Highly recommend giving it a try.
AFAIK there is overlap between Atom's and Zed's developers. They built Electron to build Atom. For Zed they built gpui, which renders the UI on the GPU for better performance. In case you are looking for an interesting candidate for building multi-platform GUIs in Rust, you can try gpui yourself.
I highly doubt that, unless you consider bitblting to be hardware accelerated.
When people say "GPU render" they mean 3D accelerators from the line of Voodoo onwards, not regular 2D graphics cards.
But apropos TFA, it's nice to see that telemetry is opt-in, not opt-out.
Subscribed to their paid plan just to keep the lights on and hoping it will get even better in the future.
It's open source, builds extremely well out of the box, and the UI is declarative.
Also I don't want to pay with my private data from some of my systems. So I don't ever want to sign in on those systems and just have a useless button sitting there.
One way you could use LLMs w/o inducing brain mush would be for code or design reviews, testability, etc.
If you see codebases you like, stash them away for AI explanation later.
https://github.com/zed-industries/zed/issues/12325
https://github.com/zed-industries/zed/issues/6756
This was a long time ago, but the way I did it was to use XcodeGen (1) and a simple Makefile. I have an example repo here (2) but it was before Swift Package Manager (using Carthage instead). If I remember correctly XcodeGen has support for Swift Package Manager now.
On top of that I was coding in VS Code at the time, and just ran `make run` in the terminal pane when I wanted to run the app.
Now, with SwiftUI, I'm not sure how it would be to not use Xcode. But personally, I've never really vibed with Xcode, and very much prefer using Zed...
1: https://github.com/yonaskolb/XcodeGen 2: https://github.com/LinusU/Soon
Tried using Zed on Linux (Pop!_OS, Nvidia) several months ago; it was terribly slow, ~1s to open the right-click context menu.
I've spent some time debugging this, and turns out that my GPU drivers are not the best with my current pop os release, but I still don't understand how it might take so long and how GPU is related to right clicking.
Switched back to emacs, love every second. :)
I'm not sure if title referring to actual development speed or the editor performance.
p.s. I play top games on Linux, all is fine with my GPU & drivers.
It seems Vulkan support, the only GPU rendering API Zed uses, isn't well supported by any of the Debian derivatives. The libraries are only installed and working in Ubuntu 24.04 in Gnome Wayland sessions for example (Ubuntu 24.04 doesn't have KDE new enough for Wayland support).
And there are also bugs in Zed's automatic GPU selection that will intermittently cause it to pick the wrong GPU in a system with multiple (e.g. a discrete GPU and a motherboard with integrated graphics). Vulkan can only run on the primary rendering GPU, but Zed doesn't always pick that one, and doesn't seem to try any others after the first one it picks, so it just falls back to emulated rendering.
For reference, I had to spend 4 days getting Zed to install as part of a Nix home-manager config with nixGL because out of the box it failed to use the GPU on 2 of 3 systems. But after forcing it to use the right GPU with a wrapper that had Vulkan support (a nixGL wrapper) all 3 systems worked fine (so it's a Zed assumption/bug problem).
Also, the fact that Zed without Vulkan-supported hardware rendering is unusably slow is a big problem. It's far slower than anything else on the system and cranks the CPU to 100% with its "emulated GPU" workaround. That's not acceptable; they really need to get at least basic performance for the seeming majority of target systems that don't or can't meet the hardware rendering requirements.
I will keep playing around with it to see if it's worth switching (from JetBrains WebStorm).
Nvidia drivers in particular are terrible on Linux, so what OP is describing is likely some compatibility/version issue.
That is why I commented, since I was a bit disappointed.
These simple, composable tools can be utilized well enough by increasingly powerful LLM(s), especially Gemini 2.5 pro to achieve most tasks in a consistent, understandable way.
More importantly - I can just switch off the 'ask' tool for the agent to go full turbo mode without frequent manual confirmation.
I just released it yesterday, have a look at https://github.com/aperoc/toolkami for the implementation if you think it is useful for you!
Yours is the full agent, though... Nice.
[1] https://github.com/karthink/gptel
[2] https://github.com/dolmens/gptel-aibo
[3] https://github.com/lizqwerscott/mcp.el
It's like lisp's original seven operators: quote, atom, eq, car, cdr, cons and cond.
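Just to make the analogy concrete, here's a toy rendering of those seven primitives (a sketch in Python, obviously not a real Lisp): the point is how small a complete kernel can be.

```python
# Toy versions of McCarthy's seven Lisp primitives, using Python
# lists as s-expressions.

def quote(x): return x                       # return data unevaluated
def atom(x): return not isinstance(x, list)  # is it not a list?
def eq(a, b): return atom(a) and a == b      # are two atoms equal?
def car(xs): return xs[0]                    # head of a list
def cdr(xs): return xs[1:]                   # tail of a list
def cons(x, xs): return [x] + xs             # prepend an element
def cond(*clauses):                          # first true clause wins
    for test, result in clauses:
        if test():
            return result()

assert car(cons("a", ["b", "c"])) == "a"
assert cdr(["a", "b", "c"]) == ["b", "c"]
assert atom("a") and not atom(["a"])
assert cond((lambda: False, lambda: 1), (lambda: True, lambda: 2)) == 2
```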
And I still can't stop smiling just watching the agent go full turbo mode when I disable the `ask` tool.
you can choose which tools are used in zed by creating a new "tools profile" or editing an existing one (also you can add new tools using MCP protocol)
Even though they brought back text threads, the context is no longer included (or include-able!) as context in the inline assist. That means that you can no longer select code, hit ctrl+enter, and type "use the new code" or whatever.
I wish there was a way to just disable the agent panel entirely. I'm so uninterested in magical shit like cursor (though claude code is tasteful IMO).
There is also the "+" button to add files, threads etc, though it would be nice if it could also be done through slash commands.
I opened a previous agent thread and it gave me the option to include both threads to the context of the inline prompt (the old text thread was included and I had to click to exclude it, the new thread was grayed out and I had to click to include it).
edit: yup, they fixed it 2 days ago
It looks like I was 2 days out of date, and updating fixed it for me.
I’d love a nvim plugin that is more or less just a split chat window that makes it easy to paste code I’ve yanked (like yank to chat) add my commentary and maybe easily attach other files for context. That’s it really.
https://github.com/yetone/avante.nvim
Then, connect it using this line: `client = MCPClient(server_url=server_url)` (https://github.com/aperoc/toolkami/blob/e49d3797e6122fb54ddd...)
Happy to help further if you run into issues.
MCP Clients and servers can support both sse or stdio
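A rough sketch of the distinction (the `MCPClient` class below is a hypothetical stand-in, not the real toolkami API): stdio means the client spawns the server as a subprocess and talks over its pipes, while SSE means the client connects to a remote HTTP endpoint.

```python
# Hypothetical sketch of transport selection for an MCP-style
# client: pass a URL for SSE, or a command for a stdio subprocess.

class MCPClient:
    def __init__(self, server_url=None, command=None):
        if (server_url is None) == (command is None):
            raise ValueError("pass exactly one of server_url or command")
        self.transport = "sse" if server_url else "stdio"
        self.target = server_url or command

assert MCPClient(server_url="http://localhost:8000/sse").transport == "sse"
assert MCPClient(command=["python", "server.py"]).transport == "stdio"
```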
The goal is composable semantic routing -- seamless traversal between different tools through things like saved outputs and conversational partials.
Routing similar to pipewire, conversation chains similar to git, and URI addressable conversations similar to xpath.
This is being built application down to ensure usability, design sanity and functionality.
While the initial 400 error is a bummer, I am actually surprised by, and admire, its persistence in trying to create the file and in the end finding a way to do so. It forgot to define a couple of things in the code, which was trivial to fix; after that the code was working.
If you're okay sharing the conversation with us, would you mind pressing the thumbs-down button at the bottom of the thread so that we can see what input led to the 400?
(We can't see the contents of the thread unless you opt into sharing it with the thumbs-down button.)
I used github copilot's sonnet 3.7. I now tried copilot's sonnet 3.5 and it seems to work, so it was prob a 3.7 issue? It did not let me try zed's sonnets, so I don't know if there is a problem with zed's 3.7 (I thought I could still do 50 prompts with a free account, but maybe that's not for the agent?).
(I've yet to dive deep into AI coding tools and currently use Zed as an 'open source Sublime Text alternative' because I like the low latency editing.)
I don't know what Zed's doing under the hood but the diffing tool has yet to fail on me (compared to multiple times per conversation in Cursor). Compared to previous Zed AI iterations, this one edits files much more willingly and clearly communicates what it's editing. It's also faster than Claude Code at getting up to speed on context and much faster than Cursor or Windsurf.
Apart from that, it's a hell of a lot better than alternatives, and my god is it fast. When I think about the perfect IDE (for my taste), this is getting pretty close.
Anyway you can always make your prompts to do or not do certain actions, they are adding more features, if you want you can ignore some of them - this is not contradictory.
Ah! So you can get that experience with the agent panel (despite "agent" being in the name).
If you click the dropdown next to the model (it will say "Write" by default) and change it from "Write" to "Minimal" then it disables all the agentic tool use and becomes an ordinary back-and-forth chat with an LLM where you can add context manually if you like.
Also, you can press the three-dots menu in the upper-right and choose New Text Thread if you want something more customizable but still not agentic.
I’ve been using PyCharm Professional for over a decade (after an even longer time with emacs).
I keep trying to switch to vscode, Cursor, etc. as they seem to be well liked by their users.
Recently I’ve also tried Zed.
But the Jetbrains suite of tools for refactoring, debugging, and general “intelligence” keep me going back. I know I’m not the only one.
For those of you that love these vscode-like editors that have previously used more integrated IDEs, what does your setup look like?
But Zed is a complete rewrite, which on one hand makes it super-fast, but on the other it is still super-lacking in integration with the existing vsix extensions, language servers, and what not. Many authors in this forum totally fail to see that SublimeText 4 is also super ultra fast compared to Electron-based editors, but is not even close in terms of supported extensions.
The whole Cursor hysteria may abruptly end with CoPilot/Cline/Continue advancing, and honestly, having used both, there isn't much difference in the final result, should you know what you are doing.
https://aider.chat/docs/usage/watch.html
[0] https://plugins.jetbrains.com/plugin/20540-windsurf-plugin-f...
I've heard decent things about the Windsurf extension in PyCharm, but not being able to use a local LLM is an absolute non-starter for me.
At the moment I’m using Claude Code in a dedicated terminal next to my Jetbrains IDE and am reasonably happy with the combination.
I've learned to work around the loss of some functionality over the past 6 months since I've switched and it hasn't been too bad. The AI features in Zed have been great and I'm looking forward to the debugger release so I can finally run and debug tests in Zed.
I used to have one of these and recently got an M1 Max machine - the performance boost is seriously incredible.
The throttling on those late-game intel macs is hardcore - at one point I downloaded Hot[1], which is a menu bar app that shows you when you're being throttled. It was literally all the time that the system was slowing itself down due to heat. I eventually just uninstalled it because it was a constant source of frustration to know I was only ever getting 50% performance out of my expensive dev laptop.
[1]: https://github.com/macmade/Hot
This isn't a great solution, but in cases where I've wanted to try out Cursor on a Java code base, I just open the project in both IDEs. I'll do AI-based edits with Cursor, and if I need to go clean them up or, you know, write my own code, I'll just switch over to IntelliJ.
Again, that's not the smoothest solution, but the vast majority of my work lately has been in Javascript, so for the occasional dip into Java-land, "dual-wielding" IDEs has been workable enough.
Cursor/Code handle JS codebases just fine - Webstorm is a little better maybe, but not the "leaps and bounds" difference between Code and IntelliJ - so for JS, I just live in Cursor these days.
vscode running a typescript extension (cline, gemini, cursor, etc.) to achieve LLM-enhanced coding is probably the least efficient way to do it in terms of CPU usage, but the features they bring are what actually speed up your development tasks, not the "responsiveness" of it all. It seems that we're making text editing and HTML rendering out to be a giant lift on the system when it's really not a huge part of the equation for most people using LLM tooling in their coding workflows.
Maybe I'm wrong but when I looked at zed last (about 2 months ago) the AI workflow was surprisingly clunky and while the editor was fast, the lack of tooling support and model selection/customization left me heading back to vscode/cline which has been getting nearly two updates per week since that time - each adding excellent new functionality.
Does responsiveness trump features and function?
I'm curious what you think of this launch! :D
We've overhauled the entire workflow - the OP link describes how it works now.
This is clearly a Markdown backend problem, but not really relevant in the editor arena, except maybe to realize that the editor "shell" latency is just a part of the overall latency problem.
I still keep it around as I do with other editors that I like, and sometimes use it for minor things, while waiting to get something good.
On this note, I think there's room for an open source pluggable PKM as an alternative to Obsidian and think Zed is a great candidate. Unfortunately I don't have time to build it myself just yet.
I'm also super interested in building this. OTOH Obsidian has a huge advantage for its plugin ecosystem because it is just so hackable.
One of the creators of Zed talked about their experience building Atom - at the time the plugin API was just wide open (which resulted in a ton of cool stuff, but also made it harder to keep building). They've taken a much stricter Plugin API approach in Zed vs. Atom, but I think the former approach is working out well for Obsidian's plugin ecosystem.
Notably it does not include the ability to add any features, or configure any settings. In VSCode the block edit function and multi-cursor weren't part of the original tool, but were available as extensions. While Zed has that particular feature right out of the box, there's no ability to add features like that via extensions in Zed.

Also, Zed doesn't even seem to have the concept of per-buffer settings or "effective" settings that differ from what's on disk. It's why you can't set tabs vs spaces for indentation in a single buffer; you can only set it globally (for example).

That's probably why they don't allow extensions that do things like add predefined key maps, or associate new files with a language (without defining a new grammar), or apply a pre-defined set of settings automatically (e.g. autodetect indentation type? Vim/Emacs modeline parsing?). Almost all of even the simplest VSCode, Emacs, and Vim/Neovim extensions/packages/plugins make use of this concept, which is why it's wild Zed doesn't even (seemingly) have it, let alone allow extensions to use it.
In fact, I'd argue Zed doesn't actually have an extension system at all. It has a completion system (LSP servers), a language addition system (tree sitter grammar and/or LSP server), and a themeing system. It just combines all three into a single list it somewhat misleadingly calls "extensions". But it's missing the ability for "extensions" to do any of the most basic things every other tool assumes is table stakes for an extension system.
So far the only editor I've found that does this is Typora.
If you like Zed's collaboration features, I wrote a plugin that make Obsidian real-time collaborative too. We are very inspired by their work (pre agent panel...). The plugin is called Relay [0].
[0] https://relay.md
The pricing page was not linked on the homepage. Maybe it was, maybe it wasn't, but it surely was not obvious to me.
Regardless of how good the software is or pretends to be, I just do not care about landing pages anymore. The pricing page essentially tells me what I am actually dealing with. I first knew about Zed when it was being advertised during the "written in Rust because it makes us better than everyone" trend everyone was doing. Now, it is LLM-based.
Absolutely not complaining about them. Zed did position themselves well to take the crown of the multi-billion-dollar industry AI code editors have become. I had to write this wall of text because I just wanted to drop the pricing page link and help people make their own decision, but I have to reply to "what's your point" comments, and this should demonstrate that I have no point aside from dropping a link.
> ... 3. Baked into a closed-source fork of an open-source fork of a web browser
I laughed out loud at this one.
You can sign up for the beta here - https://zed.dev/debugger - or build from source right now.
The free pricing is a bit confusing: it says 50 prompts/month, but also BYO API keys.
So even if I use my own API keys, the prompts will stop at 50 per month?
Also, since it’s open source, couldn’t just someone remove the limit? (I guess that wouldn’t work if the limit is of some service provided by Zed)
Two nitpicks:
1) the terminal is not picking up my regular terminal font, which messes up the symbols for my zsh prompt (is there a way to fix this?)
2) the model, even though it's suggesting very good edits, and gives very precise instructions with links to the exact place in the code where to make the changes, is not automatically editing the files (like in the video), even though it seems to have all the Write tools enabled, including editing - is this because of the model I'm using (qwen3:32b)? or something else?
Edit: a 3rd, big one. I had a JS file, made a small one-line change, and when I saved the file, the editor automatically, and without warning, changed all the single quotes to double quotes. I didn't notice at first, committed, made other commits, then opened a PR; that's when I realized all the quote changes. It took me a while to figure out how they happened, until I started a new branch with the original file, made one change, saved, and then I saw it.
Can this behavior be changed? I find it very strange that the editor would just change a whole file like that
2. Not sure.
3. For most languages, the default is to use prettier for formatting. You can disable `format_on_save` globally, per-language, and per-project depending on your needs. If you ever need to save without triggering formatting, use "workspace: save without formatting".
Prettier is /opinionated/ -- and its default is `singleQuote` = false, which can be quite jarring if unexpected. Prettier will look for and respect various configuration files (.prettierrc, .editorconfig, via package.json, etc.) so projects can set their own defaults (e.g. `singleQuote = true`). Zed can also be configured to override the prettier config via Zed settings, but I usually find that's more trouble than it's worth.
If you have another formatter you prefer (a language server or an external cli that will format files piped to stdin) you can easily have zed use those instead. Note, you can always manually reformat with `editor: format` and leave `format_on_save` off by default if that's more your code style.
- https://zed.dev/docs/configuring-zed#terminal-font-family
- https://zed.dev/docs/configuring-zed#format-on-save
- https://prettier.io/docs/configuration
- https://zed.dev/docs/languages/yaml#prettier-formatting
- https://zed.dev/docs/configuring-zed#formatter
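As a concrete sketch of the above (key names follow the Zed and Prettier docs linked here, but treat the exact values as assumptions that may have drifted), disabling format-on-save for one language in Zed's settings.json looks something like:

```json
{
  // Keep format-on-save everywhere else, but turn it off for JavaScript,
  // so prettier never silently rewrites quotes on save.
  "languages": {
    "JavaScript": {
      "format_on_save": "off"
    }
  }
}
```

Alternatively, a project-level `.prettierrc` with `{ "singleQuote": true }` keeps format-on-save but makes prettier preserve single quotes.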
It would be nice for prettier to throw a user warning before making a ton of changes on save for the first time, and also let them know where they can configure it
I also laughed at the dig on VSCode at the start. For the unaware, the team behind Zed was originally working on Atom.
There are dozens of possible build tools for C and C++, all with complex syntax and most with mandatory user-provided input to configure the build. For anything beyond simple syntax highlighting, you need to be able to parse, in context, all the multi-file cross references and inputs that can only come from building the entire project with preprocessing and then parsing the LLVM IR (the intermediate representation, not the AI kind of LLM). For most nontrivial projects, a compilation cycle can be 10 minutes to 4+ hours, and requires the specific settings you want to build with. Breaking it down per-file also doesn't work, because you'd have to do a complete dry-run execution of the build system just to get the specific toolchain build settings for each file. And remember, there are dozens of possible build tools that your tool now has to emulate a dry run of.
Most tools I've seen can only make a half attempt at C/C++ as a result, and usually the solutions scale incredibly poorly. The basic CTags for example, that just indexes symbols in your project source code, easily generates a >4 GB database file on something like a Yocto build. Which is why they invented Exuberant CTags that uses a binary database to try and speed it up. But even still, you're getting almost no useful context from results, and it has a very long lag in response when you do ask something.
The AI LLM support for C and C++ seems able to make guesses with the partial info that's available to it, whether that's only the one file of context or the whole project (very uncommon), but it has the lowest successful output rate of any context helper I've ever used.
Here's a nice recent post about it: https://felix-knorr.net/posts/2025-03-16-helix-review.html
I'm catching up on Zed architecture using deepwiki: https://deepwiki.com/zed-industries/zed
But I got back on the horse & broke out Zed this weekend, deciding that I'd give it another shot, and this time be more deliberate about providing context.
My first thought was that I'd just use Zed's /fetch and slam some crates.io docs into context. But there were dozens and dozens of pages to cover the API surface, and I decided that while this might work, it wasn't a process I would ever be happy repeating.
So, I went looking for some kind of Crates.io or Rust MCP. It's a pretty early-looking effort, but I found cratedocs-mcp. It can search crates, look up docs for crates, and look up specific members in crates; that seems like it might be sufficient, maybe it might help. Pulled it down, built it... https://github.com/d6e/cratedocs-mcp
Then check the Zed docs for how to use this MCP server. Oh man, I need to create my own Zed extension to use an MCP service? Copy paste this postgres-context-extension? Doesn't seem horrendous, but I was pretty deflated at this side-quest continuing to tack on new objectives & gave up on the MCP idea. It feels like there should be some kind of builtin glue that lets Zed add MCP servers via configuration, instead of via creating a whole new extension!!
On the plus side, I did give DeepSeek a try and it kicked out pretty good code on the first try. Definitely some bits to fix, but pretty manageable I think, seems structurally reasonably good?
I don't really know how MCP tool integration works in the rest of the AI ecosystem, but this felt sub-ideal.
The extensions are just for more ease of use as they install the server as well. A one click solution.
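For what it's worth, configuration-only glue does appear to exist: recent Zed builds accept MCP servers directly in settings.json under `context_servers`. A hedged sketch (the exact schema, and the cratedocs-mcp launch arguments here, are assumptions; check the current Zed docs):

```json
{
  "context_servers": {
    "cratedocs": {
      // Hypothetical launch command; point "path" at wherever you
      // built the cratedocs-mcp binary, and pass its stdio flag.
      "command": {
        "path": "/path/to/cratedocs-mcp",
        "args": ["stdio"]
      }
    }
  }
}
```

With that in place, the server's tools should show up in the agent panel without writing an extension.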
VS Code forks (Cursor and Windsurf) were extremely slow and buggy for me (much more so than VS Code, despite using only the most vanilla extensions).
Personally, I just use the terminal for my build tools and Zed talks to clangd just fine for autocomplete etc.
It supports extensions for languages such as Java and seemingly that extension can build code, too.
Zed also contains Git-support out of the box, which sounds pretty much like a lightweight IDE.
I have run into some problems with it on both Linux and Mac where zed hangs if the computer goes to sleep (meaning when the computer wakes back up, zed is hung and has to be forcibly quit).
Haven't tried the AI agent much yet though. Was using CoPilot, now mostly Claude Code, and the Jetbrains AI agent (with Claude 3.7).
But I'm not sure how to get predictions working.
When the predictions on-ramp window popped up asking if I wanted to enable it, I clicked yes and then it prompted me to sign in to Github. Upon approving the request on Github, an error popover over the prediction menubar item at the bottom said "entity not found" or something.
Not sure if that's related (Zed shows that I'm signed in despite that) but I can't seem to get prediction working. e.g. "Predict edit at cursor" seems to no-op.
Anyways, the onboarding was pretty sweet aside from that. The "Enable Vim mode" on the launch screen was a nice touch.
You have to use the command palette to run "assistant: show configuration" to set up almost any API or integration except the Zed "Zeta AI", and that configuration directly conflicts with the authentication needed for the Zed login. So you currently can't use a third-party authenticated AI engine and the Zed Collaboration features at the same time.
Once you've set up the third-party AI configuration, you then have to open settings.json and copy the "features" section from the default settings into it manually. Ignore the blog posts and docs from Zed; they're all wrong now that they've completely changed/broken everything with the "Zeta AI" release. In the copied "features" section of your settings.json, you have to set the value for predictions to the name of your third-party AI engine. Good luck guessing what the right string value is; the values for each engine aren't documented anywhere I can find.
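For reference, the copied section ends up looking roughly like this. The provider strings are exactly the under-documented part, so treat these values as assumptions rather than a definitive list:

```json
{
  "features": {
    // "zed" and "copilot" are values I've seen referenced;
    // strings for other engines are the part you have to guess.
    "edit_prediction_provider": "copilot"
  }
}
```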
Basically, by default:
- You have the chat
- Inline edits you do use the chat as context
And that is extremely powerful. You can easily dump stuff into the chat, and talk about the design, and then implement it via surgical inline edits (quickly).
That said, I wasn't able to switch to Zed fully from Goland, so I was switching between the two, and recently used Claude Code to generate a plugin for Goland that does chat and inline edits similarly to how the old Zed AI assistant did it (not this newly launched one) - with a raw markdown editable chat, and inline edits using that as context.
Cline's an Agent, and you chat with it, based on which it makes edits to your files. I don't think it has manual inline edit support?
What I'm talking about is that you chat with it, you're done chatting, you select some text and say "rewrite this part as discussed" and only that part is edited. That's what I mean with inline edits.
For Agentic editing I'm happy with Claude Code.
But some of the basic features that are missing are painful. Like manually switching indentation settings on a file: not supported by Zed at all, you have to write an entire editorconfig file for your project instead, or change your global settings.
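A minimal sketch of that editorconfig workaround (standard .editorconfig keys, which Zed picks up per-project; the glob choices here are just examples):

```ini
# .editorconfig at the project root
root = true

# Spaces for most files...
[*]
indent_style = space
indent_size = 2

# ...but tabs where the format demands them.
[Makefile]
indent_style = tab
```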
I do really love how simple some of the GUI config options are though. Just a single value to hide the useless Collaboration panel and button. And having a default settings file and default keymap file you can open to look at and copy from when modifying your own is so helpful; it makes it much easier to customize a key binding here or there from the defaults, unlike the GUI-fied VSCode mess where you can't even easily see what's been manually modified.
https://zed.dev/blog/fastest-ai-code-editor
It's fast-paced, yet it doesn't gloss over anything I'd find important. It shows clearly how to use it, shows a realistic use case, e.g. the model adding some nonsense, but catching something the author might have missed, etc. I don't think I've seen a better AI demo anywhere.
Maybe the bar is really low that I get excited about someone who demos an LLM integration for programmers to actually understand programming, but hey.
When any video starts by asking AI "Make me a todo app" I lose interest right away.
That feature + native Git support has fully replaced VSCode for me.
Starting out with a much smaller ecosystem than already-popular alternatives is a totally normal part of the road to success. :)
One thing that works in favour of Zed, which previous IDEs didn't have, is that it's a lot easier to program things today, because of AI. It may even be possible to port many of the more popular extensions from VSCode to Zed with relatively low investment.
If the community goodwill can be maintained, and they can expand their extension system capabilities, the community will probably catch them up to the effective VSCode extension library size pretty quickly (at least for 95% of users' needs). But I'm seeing a lot of indications they're headed toward enshittification before they even get fully off the ground. I'm just hoping they avoid the obvious pitfalls of prioritizing only profitable new features and flavors of the month when their only real path to success relies so much on tons of unpaid open source community contributions.
To date I know of barely anyone using it.
VSCode kind of had Atom’s audience to build off of, and other editors don’t always have that runway.
Does it not do incremental edits like Cursor? It seems like the LLM is typing out the whole file internally for every edit instead of diffs, and then re-generates the whole file again when it types it out into the editor.
We actually stream edits and apply them incrementally as the LLM produces them.
Sometimes we've observed the architect model (what drives the agentic loop) decide to rewrite a whole file when certain edits fail for various reasons.
It would be great if you could press the thumbs-down button at the end of the thread in Zed so we can investigate what might be happening here!
Firstly, when navigating in a large python repository, looking up references was extremely slow (sometimes on the order of minutes).
Secondly, searching for a string in the repo would sometimes be incorrect (e.g. I know the string exists but Zed says there aren't any results, as if a search index hasn't been updated). These two issues made it unusable.
I've been using PyCharm recently and found it to be far superior to anything else for Python. JetBrains builds really solid software.
That's nice for the chat panel, but the tab completion engine surprisingly still doesn't officially support a local, private option.[0]
Especially with Zed's Zeta model being open[1], it seems like there should be a way to use that open model locally, or what's the point?
[0]: https://github.com/zed-industries/zed/issues/15968
[1]: https://zed.dev/blog/edit-prediction
I might be missing the obvious, and I get no standard exists, but why aren't AI coding assistants just plugins?
I don't actually know their thinking but I know that for the VSCode ones (fork or extension), I tend to have at least 2 AIs at any point in time and compare them in my daily work. Probably when this field matures, lock-in will be more common, and you need control of the entire editor for that.
Now I'm excited that they actually have a Cursor-like agentic mode.
But the suggestions are still just nowhere near as "smart" as the ones from Cursor. I don't know if that's model selection or what. I can't even tell which model is being used for the suggestions.
Today I'm trying to use the Agentic stuff, I added an MCP server, and I keep getting non-stop errors even though I started the Pro trial.
First error: It keeps trying to connect to Copilot even though I cancelled my Copilot subscription. So I had to manually kill the Copilot connection.
Second Error: Added the JIRA MCP (it's working since Zed lists all the available tools in the MCP) and then asked a basic question (give me the 5 most recent tickets). Nope. Error interacting with the model, some OAuth error.
Third Weirdness (not error): Even though I'm on a Pro trial, the "Zed" agent configuration says "You have basic access to models from Anthropic through the Zed Free AI Plan" – aren't I on a Pro trial? I want to give you money guys, please, let me do that. I want to encourage a high performance editor to grow.
I'm not even trying to do anything fancy. I just am on a pro trial. Shouldn't this be the happiest of happy paths? Zed should use whatever the Pro stuff gives you, without any OAuth errors, etc. How can I help the Zed team debug this stuff? Not even sure where to start.
I also added an elixir RuleSet (I THINK it's being used, but can't easily tell).
Still missing the truly fast and elegant suggestions from Cursor (especially when Cursor suggests _removing_ lines, haven't seen that in Zed yet). But I can see it getting there.
Some agents stuff also worked well. I had it fix two elixir warnings and a rust warning in our NIF.
Unrelated to Zed, I find myself in the awkward position of maintaining a (very small) rust file in our code base without ever having coded rust. And any changes, upgrades, etc are done via AI.
So far it seems to work (according to our unit tests) and the library isn't in any critical path. But it's a new world :-)
Also, Zed still seems to only give me access to "basic" models even though I'm in the pro tier trial. Not sure if that's a bug.
Edit: Sorry, apparently this is supported. I'll give it a go!
I switched to cursor earlier this year to try out LLM assisted development and realised how much I now despise vscode. It’s slow, memory hungry, and just doesn’t work as well (and in a keyboard centric way) as Zed.
Then a couple of weeks ago, I switched back to Zed, using the agents beta. AI in Zed doesn’t feel quite as polished as cursor (at least, edit predictions don’t feel as good or fast), but the agent mode works pretty well now. I still use cursor a little because anything that isn’t vscode or pycharm has imho a pretty bad Python LSP experience (those two do better because they use proprietary LSP’s), but I’m slowly migrating to full stack typescript (and some Gleam), so hope to fully ditch cursor in favour of Zed soon.
Other than that a beautiful editor.
[0] : https://github.com/zed-industries/zed/pull/29496
```
"openai": {
  "api_url": "https://openrouter.ai/api/v1",
  "version": "1",
  "available_models": [
    {
      "name": "anthropic/claude-3.7-sonnet:beta",
      "max_tokens": 200000
    },
    ...
```
Just change api_url in the zed settings and add models you want manually.
https://openrouter.ai/models?fmt=cards&providers=OpenAI
If they had focused on
1. Feature-parity with the top 10 VSCode extensions (for the most common beaten path — vim keybindings, popular LSPs, etc) and
2. Implemented Cursor's Tab
3. A simple chat interface to which I can easily add context from the currently loaded repo,
I would switch in a heartbeat.
I _really_ want something better than VSCode and nvim. But this ain't it. While "agentic coding" is a nice feature, especially so for "vibe coding projects", I (and most of my peers) don't rely on it that much for daily driving our work. It's nice for having less critical things going on at once, but as long as I'm expected to produce code, the features highlighted above are what _effectively_ make me more productive.
1. Zed has been working great for me for ~1.5 years while I ignored its AI features (I only started using Zed's AI features in the past 2 weeks). Vim keybindings are better IMHO than every other non-vim editor and the LSP's I've used (typescript, clangd, gleam) have worked perfectly.
2. The edit prediction feature is almost there. I do still prefer Cursor for this, but its not so far ahead that I feel like I want to use Cursor and personally I find Zed to be a much more pleasant editor to use than vscode.
3. When you switch the agent panel from "write" to "ask" mode, it's basically that, no?
I'm not into vibe coding at all; I think AI code is still 90% trash. But I do find it useful for certain tasks, repetitive edits, and boilerplate, or just for generating a first pass at a React UI while I do the logic. For this, Zed's agent feature has worked very well, and I quite like the "follow mode" as a way to see what the AI is changing so I can build a better mental model of the changes I'm about to review.
I do wish there was a bit more focus on some core editor features: ligatures still don't fully work on Linux; why can't I pop the agent panel (or any other panel for that matter) into the center editor region, or have more than one panel docked side by side on one of the screen sides? But overall, I largely have the opposite opinion and experience from you. Most of my complaints from last year have been solved (various vim compatibility things), or are in progress (debugger support is on the way).
Huh?
Yes it's not the modern human but I think that's close enough.
I work at Zed and I like using Rust daily for my job, but outside work I also like Elm, and Zig, and am working on https://www.roc-lang.org
sorry, but to me it is just pure garbage.
Is this what happens to people who choose to learn Rust?
Joking aside, this is interesting, but I'm not sure what the selling point is versus most other AI IDEs out there? While it's great that you support ollama, practically speaking, approximately nobody is getting much mileage out of local models for complex coding tasks, and the privacy issues for most come from the LLM provider rather than the IDE provider.