Warp Code: the fastest way from prompt to production

41 points by brainless | 50 comments | 9/3/2025, 3:31:31 PM | warp.dev ↗

Comments (50)

serjester · 7h ago
Desperate pivot aside, I don't see how anyone competes with the big labs on coding agents. They can serve the models at a fraction of the API cost, can trivially add post-training to fill gaps, and have way deeper enterprise penetration.
ianbutler · 6h ago
Specialization into specific parts of the life cycle, specific technologies and integration into specific systems.

Things like self hosting and data privacy, model optionality too.

Plenty of companies still don't want to ship their code, agreement or not, over to these vendors, or be locked into their specific model.

oceanplexian · 6h ago
I feel like it's totally the opposite.

The differentiator is the fact that the scaling myth was a lie. The GPT-5 flop should make that obvious enough. These guys are spending billions and can't make the models show more than a few % improvement. You need to actually innovate, e.g. tricks like MoE, tool calling, better cache utilization, concurrency, better prompting, CoT, data labeling, and so on.

Not two weeks ago, some Chinese academics put out a paper called Deep Think With Confidence in which they coaxed GPT-OSS-120B into thinking a little longer, making it perform better on benchmarks than it did when OpenAI released it.
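If I read the paper right, the core trick is just filtering sampled reasoning traces by their own confidence before voting. A rough sketch of that idea (my paraphrase, not the paper's actual code; names are made up):

    # Rough sketch of confidence-filtered voting in the spirit of
    # Deep Think With Confidence (a paraphrase, not the paper's code).
    from dataclasses import dataclass
    from collections import Counter

    @dataclass
    class Trace:
        answer: str          # final answer extracted from a sampled reasoning trace
        mean_logprob: float  # average token log-probability, used as a confidence proxy

    def confident_vote(traces: list[Trace], keep_ratio: float = 0.5) -> str:
        # Keep only the most confident fraction of traces, then majority-vote.
        ranked = sorted(traces, key=lambda t: t.mean_logprob, reverse=True)
        kept = ranked[: max(1, int(len(ranked) * keep_ratio))]
        return Counter(t.answer for t in kept).most_common(1)[0][0]

    # Example with made-up traces:
    print(confident_vote([Trace("42", -0.3), Trace("41", -1.2), Trace("42", -0.5)]))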

kristopolous · 5h ago
I've pitched this to people working there multiple times. Warp is not just a terminal; it's a full stack of interaction, and they have more of the development-cycle vertical to leverage.

You need different relationships at different parts of the cycle: coding, ideation, debugging, testing, etc. Cleverly sharing context while maintaining different flows and respecting that relationship hygiene is the key. Most of the VS Code extensions now do this by selecting different system prompts for different "personas".
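To make the persona idea concrete, here's roughly what I mean (the names are invented for illustration, not any extension's actual config):

    # Illustrative only: pick a system prompt per development phase while
    # reusing the same shared project context across all of them.
    PERSONAS = {
        "ideation":  "You are a product-minded architect. Ask clarifying questions first.",
        "coding":    "You are a careful implementer. Prefer small, reviewable diffs.",
        "debugging": "You are a methodical debugger. Reproduce the bug before fixing it.",
        "testing":   "You are a test engineer. Focus on edge cases and failure modes.",
    }

    def build_messages(phase: str, shared_context: str, user_msg: str) -> list[dict]:
        # Only the persona changes between phases; the context travels with every request.
        return [
            {"role": "system", "content": PERSONAS[phase]},
            {"role": "system", "content": f"Project context:\n{shared_context}"},
            {"role": "user", "content": user_msg},
        ]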

I used to (6 months ago) compare these agentic systems to John Wayne as a contract programmer: parachuting into a project, firing off his pistol, shooting the criminals and the mayor, and burning the barn down, all while you're yelling at it to behave better.

There are contexts and places where this can be more productive. Warp is one of them if executed with clean semantic perimeters. It's in a rather strong position for it, and it's an obvious loyalty builder.

wrs · 7h ago
(1) Aside from having a worse (sorry, “lighter weight”) editor, how is this functionally different from Cursor?

(2) A Microsoft VP of product spends enough time writing code to be a relevant testimonial?

sudhirb · 6h ago
For me, the USP Warp used to have was generating shell commands from prompts inside the terminal - but Cursor has had this in its embedded terminal for a while now, so increasingly I find myself using Ghostty instead.
kachapopopow · 7h ago
I switched to this and honestly, it more or less feels the same as Claude Code, except with a fancy UI and built-in MCP servers for automated memory management. But I am sticking with it so I don't have to deal with vendor lock-in (I heavily disagree with what Anthropic is doing when it comes to 'safety').
Esophagus4 · 7h ago
The difference to me is that I can quickly switch in and out of “AI mode” with Warp. So it’s a terminal when I want that, and it’s an AI assistant when I want that.

With Claude Code, you’re stuck in AI mode all the time (which is slow for running vanilla terminal commands) or you have to have a second window for just terminal commands.

Edit: just read some documentation saying Claude has a “bash mode” where it will actually pass through the commands, so off to try that out now.

CuriouslyC · 7h ago
All these monolithic agents are so wasteful. Having an agent orchestration service is so much more efficient and maintainable. My work-in-progress Rust agent takes less CPU/memory for a whole swarm than one Claude Code instance.
all2 · 7h ago
Would you be willing to share? I've been poking at this kind of thing, but I haven't had great success rolling my own agents.
CuriouslyC · 7h ago
My Rust agent is closed-source (at least right now; we'll see), but I'm happy to discuss details of how stuff works to get you going in the right direction.
all2 · 2h ago
I'd be glad to hear more. I'm not certain what I would even ask, as the space is really fuzzy (prompting and all that).

I've got an Ollama instance (24GB VRAM) I want to leverage to try and reduce dependency on Claude Code. Even the tech stack seems unapproachable. I've considered LiteLLM, router agents, micro-agents (smallest slice of functionality possible), etc. I haven't wrapped my head around it all the way, though.

Ideally, it would be something like:

    UI <--> LiteLLM
               ^
               |
               v
            Agent Shim
Where the UI is probably aider or something similar. Claude Code muddies the differentiation between UI and agent (with all the built-in system-prompt injection). I imagine I would like to move system-prompt injection / agent CRUD into the agent shim.
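Something like this minimal shim is what I'm picturing (assuming LiteLLM's usual OpenAI-compatible proxy endpoint; the URL and model name are placeholders, not a working config):

    # Minimal agent-shim sketch: the shim owns the system prompt and agent
    # config, and the UI just sends plain chat messages.
    import requests

    LITELLM_URL = "http://localhost:4000/v1/chat/completions"  # placeholder endpoint
    AGENT_SYSTEM_PROMPT = "You are a coding agent. Use the provided tools to edit files."

    def shim_chat(user_messages: list[dict], model: str = "ollama/qwen2.5-coder") -> str:
        # System-prompt injection happens here, so the UI (aider, etc.)
        # never needs to know which agent persona is active.
        payload = {
            "model": model,
            "messages": [{"role": "system", "content": AGENT_SYSTEM_PROMPT}, *user_messages],
        }
        resp = requests.post(LITELLM_URL, json=payload, timeout=120)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]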

I'm just spitballing here.

Thoughts? (my email is in my profile if you would prefer to continue there)

CuriouslyC · 2h ago
I also have a 24GB card. Local LLMs are great for a lot of things, but I wouldn't route coding questions to them; the time/$ tradeoff isn't worth it. Also, don't use LiteLLM, it's just bad; Bifrost is the way.

You can use an LLM router to direct questions to an optimal model on a price/performance Pareto frontier. I have a plugin for Bifrost that does this, Heimdall (https://github.com/sibyllinesoft/heimdall). It's very beta right now, but the test coverage is good; I just haven't paved the integration pathway yet.
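The routing idea itself is simple enough to sketch generically (this is not Heimdall's or Bifrost's actual logic, just the shape of it; the numbers are made up):

    # Generic price/performance routing sketch: among models that clear the
    # task's quality bar, pick the cheapest.
    from dataclasses import dataclass

    @dataclass
    class Model:
        name: str
        quality: float        # rough benchmark score in [0, 1], made up here
        cost_per_mtok: float  # dollars per million tokens, made up here

    MODELS = [
        Model("small-local", 0.55, 0.0),
        Model("mid-hosted", 0.75, 0.5),
        Model("frontier", 0.92, 8.0),
    ]

    def route(required_quality: float) -> Model:
        candidates = [m for m in MODELS if m.quality >= required_quality]
        if not candidates:  # nothing clears the bar: fall back to the strongest model
            return max(MODELS, key=lambda m: m.quality)
        return min(candidates, key=lambda m: m.cost_per_mtok)

    print(route(0.7).name)  # "mid-hosted"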

I've got a number of products in the works to manage context automatically, enrich/tune RAG, and provide enhanced code search. Most of them are public, and you can poke around and see what I'm doing. I plan on doing a number of launches soon, but I like to build rock-solid software, and rapid agentic development really creates a large manual QA/acceptance-eval burden.

all2 · 24m ago
So there is no place for a local LLM in code dev. Bummer. I was hoping to get past the 5-hour limits on Claude Code with local models.
seunosewa · 6h ago
What is the best thing you've built with the swarm? Is it measurably better in any way?
CuriouslyC · 5h ago
I'm using the swarm to build ~20 projects in parallel, some even released, and some draft papers are done. Take a look at the products gallery on my site (research papers linked on the research tab): https://sibylline.dev/products/
Aeolun · 7h ago
Wasn’t Warp an (Electron-based) terminal?

Why suddenly agentic coding?

kamikazeturtles · 7h ago
The real question is: how does a startup that offers a terminal as its product command a $280 million valuation and need close to 100 employees?
wmf · 7h ago
This announcement is the answer: It's not a terminal any more; it's an IDE.
xnx · 7h ago
AIDE
mig1 · 6h ago
TM
bdcravens · 5h ago
While the value is questionable, it's more than just a dumb terminal, and they have better revenue as a percentage of valuation than Anthropic.
melaniecrissey · 6h ago
Warp was never Electron-based. It's built with Rust.
sys13 · 7h ago
There was an intermediate step where they introduced AI command generation (super useful). Agentic coding follows naturally from that.
mungaihaha · 7h ago
No. It's built atop Rust, IIRC.
tills13 · 6h ago
> 97% acceptance rate

This concerns me, given what I've seen generated by these tools. In 10? 5? 1? year(s), are we going to see an influx of CVEs, or the hiring of senior+ level developers solely for the purpose of cleaning up these messes?

TheNewsIsHere · 6h ago
Insofar as CVEs are issued for proprietary software, I would expect that the owning organization would not be inclined to blame AI code unless they think they can pass the buck.

But as for eventually having to hire senior developers to clean up the mess, I do expect that. Most organizations that think they can build and ship reliable products without human experts probably won’t be around long enough to be able to have actual CVEs issued. But larger organizations playing this game will eventually have to face some kind of reckoning.

STELLANOVA · 6h ago
I am not really convinced that rate is any higher than without AI tooling. CVEs existed before AI tools, with only humans generating code...
whywhywhywhy · 6h ago
Why would you need a human to fix it if you know what the CVE is?
animex · 7h ago
The pivot is likely because there are more VC dollars there.

It is a handy AI CLI for any terminal. I've been using the "terminal" app for a few months and found it to be a very competent coding tool. I kept giving feedback to the team that they should "beef up" the coding side, because this was my daily driver for writing code until Claude Code and Opus 4 came along. The interface is still a bit janky, because I think it's trying to predict whether you're typing a console command or talking to it with a new prompt (it tries to dynamically assess that, but often enough it crosses the streams). Regardless, I highly recommend checking it out; I've had some great success with it.
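The crudest version of that command-vs-prompt guess would be something like the following (purely illustrative, not how Warp actually decides):

    # Purely illustrative: treat input as a shell command if its first word
    # resolves to an executable on PATH, otherwise treat it as a prompt.
    import shutil

    def looks_like_command(line: str) -> bool:
        words = line.strip().split(maxsplit=1)
        return bool(words) and shutil.which(words[0]) is not None

    print(looks_like_command("git status"))                # True on most systems
    print(looks_like_command("explain this stack trace"))  # False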

alvis · 6h ago
The part I don't get is the pricing. Seems like its pricing is solely based on requests. Then how would someone use GPT-4.1 when Opus is charging the same price???
giancarlostoro · 7h ago
Pro Tip: Ask the agent or another LLM to generate an agent prompt describing what you want to build, then tweak it as needed, and use that prompt. I've had decent success prompting Junie (JetBrains AI) a few times because of this.
DannyBee · 5h ago
Quickly entering "I'm calling today to talk to you about your car's agentic coding" territory.
barrrrald · 6h ago
No idea how the product itself works, but they have set a new standard for small-startup launch videos with this:

https://www.youtube.com/watch?v=9jKOVAa1KAo

pseufaux · 7h ago
If warp had just stuck to being a decent terminal emulator with great UI, I would be using it without question. This AI nonsense is why I don't even consider them an option.
Esophagus4 · 7h ago
The slick AI integration is exactly why I use it over the other thousand terminal emulators.

Claude Code can replicate some of the behavior, but it’s too slow to switch in and out of command / agent flows.

wahnfrieden · 7h ago
Claude Code and Codex provide something like $5000 of tokens for $200. How will any other offering that depends on their models ever compete with that, except by luring suckers or tire-kickers?
wmf · 6h ago
Assuming that's a temporary situation, they can paper it over with VC funding.
wahnfrieden · 6h ago
Why do you think it’s temporary?
bdcravens · 5h ago
Same reason that any giveaways by startups are.
wahnfrieden · 4h ago
They already have 40% margins on inference. Even if they give less with their own subscriptions, they may continue to have such margins on the API, handicapping competitor tools.
bt1a · 7h ago
Optimizing for the shortest path from idea to prod sounds a tad warped, if I may.
gandalfgeek · 7h ago
> Initialize projects with their own WARP.md files (compatible with Agents.MD, Claude.MD and cursor rules).

Can we please standardize this and just have one markdown file that all the agents can use?

kpen11 · 7h ago
The standardization is AGENTS.md, mentioned in the compatibility list. See https://agents.md/
cbm-vic-20 · 5h ago
Off-topic: that site could just be a static page; does it really need to be a Next.js app? https://github.com/openai/agents.md/
wahnfrieden · 7h ago
Edit: never mind / can’t delete because HN
Esophagus4 · 7h ago
I think this is a different Warp…
orliesaurus · 6h ago
Warp started well and now lost their way. Fuck them. Ghostty all the way.
LambdaComplex · 2h ago
> Warp started well

Did they? Their original product was a terminal emulator, with built-in telemetry, that required you to create an account to use.

orliesaurus · 1h ago
You could eventually remove all that, and it was nice to have features like a clean, modern UI, although you could argue nothing beats iTerm2.