As expected, many graybeard gatekeepers are telling others not to use LLMs for any type of coding or assistance.
sdsd · 8h ago
Oof, this comes at a hard moment in my Claude Code usage. I'm trying to have it help me debug some Elastic issues on Security Onion but after a few minutes it spits out a zillion lines of obfuscated JS and says:
Error: kill EPERM
at process.kill (node:internal/process/per_thread:226:13)
at Ba2 (file:///usr/local/lib/node_modules/@anthropic-ai/claude-code/cli.js:506:19791)
at file:///usr/local/lib/node_modules/@anthropic-ai/claude-code/cli.js:506:19664
at Array.forEach (<anonymous>)
at file:///usr/local/lib/node_modules/@anthropic-ai/claude-code/cli.js:506:19635
at Array.forEach (<anonymous>)
at Aa2 (file:///usr/local/lib/node_modules/@anthropic-ai/claude-code/cli.js:506:19607)
at file:///usr/local/lib/node_modules/@anthropic-ai/claude-code/cli.js:506:19538
at ChildProcess.W (file:///usr/local/lib/node_modules/@anthropic-ai/claude-code/cli.js:506:20023)
at ChildProcess.emit (node:events:519:28) {
errno: -1,
code: 'EPERM',
syscall: 'kill'
}
I'm guessing one of the scripts it runs kills Node.js processes, and that inadvertently kills Claude as well. Or maybe it feels bad that it can't solve my problem and commits suicide.
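Purely guessing at the mechanism here, but a hypothetical cleanup script along these lines would do it, since the claude CLI itself runs under Node (psutil and the process name are my illustration, not from the actual session):

```python
# Hypothetical cleanup script (my guess at the mechanism, not from the session):
# indiscriminately killing every "node" process would also take down the claude
# CLI, which itself runs under Node.
import psutil  # assumes psutil is installed

for proc in psutil.process_iter(["pid", "name"]):
    if proc.info["name"] == "node":
        try:
            proc.kill()  # would hit Claude Code's own process too
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            pass  # permission failures here look a lot like the EPERM above
```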
In any case, I wish it would stay alive and help me lol.
schmookeeg · 4h ago
Claude and some of the edgier parts of localstack are not friends either. It's pretty okay at rust which surprised me.
It makes me think that the language/platform/architecture that is "most known" by LLMs will soon be the preferred -- sort of a homogenization of technologies by LLM usage. Because if you can be 10x as successfully vibey in, say, nodejs versus elixir or go -- well, why would you opt for those in a greenfield project at all? Particularly if you aren't a tech shop and that choice allows you to use junior coders as if they were midlevel or senior.
actsasbuffoon · 2h ago
This mirrors a weird thought I’ve had recently. It’s not a thing I necessarily agree with, but just an idea.
I hear people say things like, “AI isn’t coming for my job because LLMs suck at [language or tech stack]!”
And I wonder, does that just mean that other stacks have an advantage? If a senior engineer with Claude Code can solve the problem in Python/TypeScript in significantly less time than you can solve it in [tech stack] then are you really safe? Maybe you still stack up well against your coworkers, but how well does your company stack up against the competition?
And then the even more distressing thought accompanies it: I don’t like the code that LLMs produce because it looks nothing like the code I write by hand. But how relevant is my handwritten code becoming in a world where I can move 5x faster with coding agents? Is this… shitty style of LLM generated code actually easier for code agents to understand?
Like I said, I don’t endorse either of these ideas. They’re just questions that make me uncomfortable because I can’t definitively answer them right now.
majormajor · 51m ago
All the disadvantages of those stacks still exist.
So if you need to avoid GC issues, or have robust type safety, or whatever it is, to gain an edge in a certain industry or scenario, you can't just switch to the vibe tool of choice without (best case) giving up $$$ to pay to make up for the inefficiency or (worst case) having more failures that your customers won't tolerate.
But this means the gap between the "hard" work and the "easy" work may become larger - compensation included. Probably most notably in FAANG companies where people are brought in expected to be able to do "hard" work and then frequently given relatively-easy CRUD work in low-ROI ancillary projects but with higher $$$$ than that work would give anywhere else.
And the places currently happy to hire disaffected ex-FAANG engineers who realized they were being wasted on polishing widgets may start having more hiring difficulty as the pipeline dries up. Like trying to hire for assembly or COBOL today.
hoyo1s · 44m ago
Sometimes one just needs [language or tech stack] to do something, especially for performance/security considerations.
For now, LLMs still suffer from hallucination and a lack of generalizability. The large amount of code generated is sometimes not a benefit but a technical debt.
LLMs are good for quick, open-ended prototype web applications, but if we need a stable, consistent, maintainable, secure framework, or scientific computing, pure LLMs are not enough; one can't vibe everything without checking the details.
dgunay · 2h ago
Letting go of the particulars of the generated code is proving difficult for me. I hand edit most of the code my agents produce for taste even if it is correct, but I feel that in the long term that's not the optimal use of my time in agent-driven programming. Maybe the models will just get so good that they know how I would write it myself.
bilekas · 1h ago
I would argue this approach will help you in the long term with code maintainability. Which I feel will be one of the biggest issues down the line with AI generated codebases as they get larger.
monkpit · 1h ago
The solution is to codify these sorts of things in prompts, tool use, and gateways like linters, etc. You have to let go…
bilekas · 1m ago
What do you mean, "you have to let go"?
I use some ai tools and sometimes they're fine, but I won't in my lifetime anyway hand over everything to an AI, not out of some fear or anything, but even purely as a hobby. I like creating things from scratch, I like working out problems, why would I need to let that go?
fragmede · 2h ago
LLMs write Python and TypeScript well because of all the examples in their training data. But what if we made a new programming language whose goal was to be optimal for an LLM to generate? Would it be closer to assembly? If we project that the future is vibe coded, and we scarcely look at the outputted code, testing instead that the output correctly matches the input rather than reading the code, what would that language look like?
majormajor · 49m ago
What is it that you think would make a certain non-Python language "more optimal" for an LLM? Is there something inherently LLM-friendly about certain language patterns or is "huge sets of training examples" and "a robust standard library" (the latter to conserve tokens/attention vs having to spit out super-verbose 20x longer assembly all day) all "optimality" means?
alankarmisra · 1h ago
They’d presumably do worse. LLMs have no intrinsic sense of programming logic. They are merely pattern matching against a large training set. If you invent a new language that doesn’t have sufficient training examples for a variety of coding tasks, and is syntactically very different from all the existing languages, the LLMs wouldn’t have enough training data and would do very badly.
metrix · 1h ago
I have thought the same thing. How would it be created? Would an LLM come up with the language, or would a dev design a language for an LLM?
How do we get the LLM to gain knowledge on this new language that we have no example usage of?
hoyo1s · 51m ago
Strict type-checking, and at least some dependent types and inductive types.
yc-kraln · 6h ago
I get this issue when it uses sudo to run a process with root privileges, and then times out.
triyambakam · 7h ago
I would try upgrading or wiping away your current install and re-installing it. There might be some cached files somewhere that are in a bad state. At least that's what fixed it for me when I recently came across something similar.
sixtyj · 8h ago
Jumping to another LLM helps me find out what happened. *This is not official advice :)
idontwantthis · 8h ago
I have had zero good results with any LLM and Elasticsearch. Everything it spits out is a hallucination because there aren’t very many examples of anything complete and in context on the internet.
OtherShrezzing · 8h ago
I think it’s just that the base model is good at real world coding tasks - as opposed to the types of coding tasks in the common benchmarks.
If you use GitHub Copilot - which has its own system level prompts - you can hotswap between models, and Claude outperforms OpenAI’s and Google’s models by such a large margin that the others are functionally useless in comparison.
paool · 33m ago
It's not just the base model
Try using Opus with Cline in VS Code. Then use Claude Code.
I don't know the best way to quantify the differences, but I know I get more done in CC.
ec109685 · 8h ago
Anthropic has opportunities to optimize their models / prompts during reinforcement learning, so the advice from the article to stay close to what works in Claude code is valid and probably has more applicability for Anthropic models than applying the same techniques to others.
With a subscription plan, Anthropic is highly incentivized to be efficient in their loops beyond just making it a better experience for users.
badestrand · 36m ago
I read all the praise about Claude Code, tried it for a month and was very disappointed. For me it doesn't work any better than Cursor's sidebar and has worse UX on top. I wonder if I am doing something wrong because it just makes lots of stupid mistakes when coding for me, in two different code bases.
ahmedhawas123 · 7h ago
Thanks for sharing this. At a time where this is a rush towards multi-agent systems, this is helpful to see how an LLM-first organization is going after it. Lots of the design aspects here are things I experiment with day to day so it's good to see others use it as well
A few takeaways for me from this
(1) Long prompts are good - and don't forget basic things like explaining in the prompt what the tool is, how to help the user, etc
(2) Tool calling is basic af; you need more context (when to use, when not to use, etc)
(3) Using messages as the state of the memory for the system is OK; I've thought about fancy ways (e.g., persisting dataframes, parsing variables between steps, etc.), but it seems like as context windows grow, messages should be OK (rough sketch below)
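Roughly this is all I mean by messages-as-memory (my own illustration with the OpenAI Python SDK; the model name and system prompt are placeholders):

```python
# Minimal sketch (my illustration, not from the blog post): the whole "memory"
# of the agent is just this list of messages, re-sent on every call.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "system", "content": "You are a coding agent for project X."}]

def run_turn(user_input: str) -> str:
    messages.append({"role": "user", "content": user_input})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    # Appending the assistant turn back onto the list IS the state update:
    # no dataframes, no separate variable store, just messages.
    messages.append({"role": "assistant", "content": answer})
    return answer
```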
chazeon · 57m ago
I want to note that long prompts are good only if the model is optimized for them. I have tried to swap the underlying model for Claude Code. Most local models, even those claimed to work with long context and tool use, don't work well when the instructions become too long. This becomes an issue for tool use: it works well in small chatbot-type conversation demos, but at Claude Code's prompt lengths it just fails, either forgetting what tools are there, forgetting to use them, or returning the wrong formats. Only the models by OpenAI and Google's Gemini kind of work, but not as well as Anthropic's own models. Besides, they feel much slower.
nuwandavek · 5h ago
(author of the blogpost here)
Yeah, you can extract a LOT of performance from the basics and don't have to do any complicated setup for ~99% of use cases. Keep the loop simple, have clear tools (it is ok if tools overlap in function). Clarity and simplicity >>> everything else.
samuelstros · 4h ago
does a framework like vercel's ai sdk help, or is handling the loop + tool calling so straightforward that a framework is overcomplicating things?
for context, i want to build a claude code like agent in a WYSIWYG markdown app. that's how i stumbled on your blog post :)
ahmedhawas123 · 2h ago
Function / tool calling is actually super simple. I'd honestly recommend either doing it through a single LLM provider (e.g., OpenAI or Gemini) without a hard framework first, and then moving to one of the simpler frameworks if you feel the need to (e.g., LangChain). Frameworks like LangGraph and others can get really complicated really quickly.
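A bare-bones version of the loop, to show how little a framework actually buys you (a rough sketch against the OpenAI chat completions API; the read_file tool and the model name are just placeholders I made up):

```python
# Sketch of "no framework" tool calling against a single provider. The loop is:
# call the model, run any tool it requests, feed the result back, repeat.
import json
from openai import OpenAI

client = OpenAI()

def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

tools = [{
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read a UTF-8 text file and return its contents.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

messages = [{"role": "user", "content": "Summarize what main.py does."}]

while True:
    resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
    msg = resp.choices[0].message
    if not msg.tool_calls:          # no tool requested, so this is the final answer
        print(msg.content)
        break
    messages.append(msg)            # keep the assistant's tool-call turn in history
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = read_file(**args)  # a real agent would dispatch on call.function.name
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
```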
the_mitsuhiko · 8h ago
Unfortunately, Claude Code is not open source, but there are some tools to better figure out how it is working. If you are really interested in how it works, I strongly recommend looking at Claude Trace: https://github.com/badlogic/lemmy/tree/main/apps/claude-trac...
It dumps out a JSON file as well as a very nicely formatted HTML file that shows you every single tool and all the prompts that were used for a session.
It's all how the base model has been trained to break tasks into discrete steps and work through them patiently, with some robustness to failure cases.
That repository does not contain the code. It's just used for the issue tracker and some example hooks.
CuriouslyC · 7h ago
It's a javascript app that gets installed on your local system...
the_mitsuhiko · 6h ago
I'm aware of how it works since I have been spending a lot of time over the last two months working with Claude's internals. If you have spent some time with it, you know that it is a transpiled and minified mess that is annoyingly hard to detangle. I'm very happy that claude-trace (and claude-bridge [1]) exists because it makes it much easier to work with the internals of Claude than if you have to decompile it yourself.
That's been DMCA'd since you posted it. Happen to know where I can find a fork?
mlrtime · 57m ago
Just search dnakov/claude-code mirror and there is a path to the source code, I found it in 2 minutes.
koakuma-chan · 5h ago
> That's been DMCA'd since you posted it.
I know, thus the :trollface:
> Happen to know where I can find a fork?
I don't know where you can find a fork, but even if there is a fork somewhere that's still alive, which is unlikely, it would be for a really old version of Claude Code. You would probably be better off reverse engineering the minified JavaScript or whatever that ships with the latest Claude Code.
throwaway314155 · 4h ago
Gotcha, I misunderstood.
alex1138 · 8h ago
What do people think of Google's Gemini (Pro?) compared to Claude for code?
I really like a lot of what Google produces, but they can't seem to keep a product that they don't shut down and they can be pretty ham-fisted, both with corporate control (Chrome and corrupt practices) and censorship
CuriouslyC · 7h ago
Gemini is amazing for taking a merge file of your whole repo, dropping it in there, and chatting about stuff. The level of whole codebase understanding is unreal, and it can do some amazing architectural planning assistance. Claude is nowhere near able to do that.
My tactic is to work with Gemini to build a dense summary of the project and create a high level plan of action, then take that to gpt5 and have it try to improve the plan, and convert it to a hyper detailed workflow xml document laying out all the steps to implement the plan, which I then hand to claude.
This avoids pretty much all of Claude's unplanned bumbling.
seanwessmith · 2h ago
mind typing this up? i've got a basic GPT -> Claude workflow going for now
I should mention I made that one for my research/stats workflow, so there's some specific stuff in there for that, but you can prompt chat gpt to generalize it.
koakuma-chan · 8h ago
I don't think Gemini Pro is necessarily worse at coding, but in my experience Claude is substantially better at "terminal" tasks (i.e. working with the model through a CLI in the terminal) and most of the CLIs use Claude, see https://www.tbench.ai/leaderboard.
jsight · 8h ago
For the web ui (chat)? I actually really like gemini 2.5 pro.
For the command line tool (claude code vs gemini code)? It isn't even close. Gemini code was useless. Claude code was mostly just slow.
upcoming-sesame · 5h ago
You mean Gemini CLI. Yeah it's confusing
jsight · 4h ago
Thanks, that's the one!
Herring · 5h ago
Yeah I was also getting much better results on the Gemini web ui compared to the Gemini terminal. Haven't gotten to Claude yet.
esafak · 1h ago
I used to like it a lot but I feel like it got dumber lately. Am I imagining things or has anyone else observed this too?
jonfw · 8h ago
Gemini is better at helping to debug difficult problems that require following multiple function calls.
I think Claude is much more predictable and follows instructions better- the todo list it manages seems very helpful in this respect.
divan · 7h ago
In my recent tests I found it quite smart at analyzing the bigger picture (e.g. "hey, the test is failing not because of that, but because the whole assumption has changed, so let me rewrite this test from scratch"). But it also got stuck a few times ("I can't edit the file, I'm stuck, let me try something completely different"). The biggest difference so far is the communication style - it's a bit.. snarky? E.g. comments like "yeah, tests are failing - as I suspected". Why the f would it suspect a failing test on a project it's seeing for the first time? :D
Keyframe · 8h ago
It's doing rather well at thinking, but not at coding. When it codes, often enough it runs in circles and ignores input. Where I find it useful is reading through larger codebases and distilling what I need to find out from them. I even call Gemini from Claude to consult it on certain things. Opus is also like that btw, but a bit better at coding. Sonnet, though, excels at coding, in my experience.
yomismoaqui · 8h ago
According to the guys from Amp, Claude Sonnet/Opus are better at tool use.
nicce · 7h ago
If you could control the model with system instructions, it would be very good. But so far I have failed miserably; the model is too verbose and helpful.
ezfe · 8h ago
Gemini frequently didn't write code for me for no explicable reason, and just talked about a hypothetical solution. Seems like a tooling issue though.
djmips · 8h ago
Sounds almost human!
stabbles · 8h ago
In my experience it's better at lower level stuff, like systems programming. A pass afterwards with claude makes the code more readable.
filchermcurr · 7h ago
The Gemini CLI tool is atrocious. It might work sometimes for analyzing code, but for modifying files, never. The inevitable conclusion of every session I've ever tried has been an infinite loop. Sometimes it's an infinite loop of self-deprecation, sometimes just repeating itself to failure, usually repeating the same tool failure until it catches it as an infinite loop. Tool usage frequently (we're talking 90% of the time) fails. It's also, frankly, just a bummer to talk to. The "personality" is depressed, self-deprecating, and just overall really weird.
That's been my experience, anyway. Maybe it hates me? I sure hate it.
klipklop · 3h ago
This matches my experience with it. I won’t let it touch any code I have not yet safely checked in before firing up Gemini. It will commonly get into a death loop mid session that can’t be recovered from.
KaoruAoiShiho · 8h ago
It sucks.
KaoruAoiShiho · 6h ago
Lol downvoted, come on anyone who has used gemini and claude code knows there's no comparison... gimme a break.
bitpush · 6h ago
You're getting downvoted because of the curt "it sucks", which shows a level of shallowness in your understanding.
Nothing in the world is simply outright garbage. Even the seemingly worst products exist for a reason and are used for a variety of use cases.
So, take a step back and reevaluate whether your reply could have been better. Because it simply "just sucks".
itbeho · 34m ago
I use Claude code with Elixir and Phoenix. It's been mostly great but after a short time into a project it seems to break something unrelated to the task at hand.
mike1o1 · 31m ago
If you haven’t yet, you should try out the usage_rules mix package. I mostly use Ash, which has great support for usage rules, and it’s a night and day difference in effectiveness. Tidewave is also really nice as an MCP, as it lets the agent query hexdocs or your schema directly.
I've literally built the entire MVP of my startup on Claude Code and now have paying customers. I've got an existential worry that I'm going to have a SEV incident that will trigger a house of falling cards, but until then I'm constantly leveraging Claude for fixing security vulnerabilities, implementing test-driven-development, and planning out the software architecture in accordance with my long-term product roadmap. I hope this story becomes more and more common as time passes.
ComputerGuru · 7h ago
> but until then I'm constantly leveraging Claude for fixing security vulnerabilities
That it authored in the first place?
dpe82 · 7h ago
Do you ever fix your own bugs?
janice1999 · 6h ago
Humans have the capacity to learn from their own mistakes without redoing a lifetime of education.
ComputerGuru · 7h ago
Bugs, yes. Security vulnerabilities? Rarely enough that it wouldn’t make my HN list. It’s not remotely hard to avoid the most common issues.
lajisam · 7h ago
“Implementing test-driven development, and planning out software architecture in accordance with my long-term product roadmap” can you give some concrete examples of how CC helped you here?
1zael · 34m ago
Yeah, so I continuously maintain a claude.md file with the feature roadmap for my product (which changes every week but acts as a source of truth). I feed that into a Claude software architecture agent that I created, which reviews proposed changes for my current feature build against the longer-term roadmap to ensure I 1) don't create tech debt with my current approach and 2) identify opportunities to parallelize work that could help with multiple upcoming features at once.
I have also a code reviewer agent in CC that writes all my unit and integration tests, which feeds into my CI/CD pipeline. I use the "/security" command that Claude recently released to review my code for security vulnerabilities while also leveraging a red team agent that tests my codebase for vulnerabilities to patch.
I'm starting to integrate Claude into Linear so I can assign Linear tickets to Claude to start working on while I tackle core stuff. Hope that helps!
lifestyleguru · 7h ago
duh, I ordered Claude Code to simply transfer money monthly to my bank account and it does.
imiric · 7h ago
Well, don't be shy, share what CC helped you build.
1zael · 24m ago
Answered above, but to be concrete on features --> it helped me build an end-to-end multi-stage pipeline architecture for video and audio transcription, LLM analysis, content generation, and evals. It took care of stuff like Postgres storage and pgvector for RAG-powered semantic search, background job orchestration with intelligent retry logic, Celery workers for background jobs, and MCP connectors.
orsorna · 7h ago
You're speaking to a wall. For whatever reason, the type of people to espouse the wonders of their LLM workflow never reveal what kind of useful output they get from it, never mind substantiate their claims.
turnsout · 1h ago
There’s still a stigma. I think people are worried that if it gets out that their startup was built with the help of an LLM, they’ll lose customers who don’t want to pay for something “vibe coded.”
Honestly I don’t think customers care.
mlrtime · 55m ago
I use the analogy of how online dating started. I remember [some] people were embarrassed to say they met online, so they would make up a story. We're in that phase of AI development; it will pass.
foobarbecue · 7h ago
> I hope this story becomes more and more common as time passes.
Why????????????
Why do you want devs to lose cognizance of their own "work" to the point that they have "existential worry"?
Why are people like you trying to drown us all in slop? I bet you could replace your slop pile with a tenth of the lines of clean code, and chances are it'd be less work than you think.
Is it because you're lazy?
1zael · 27m ago
Congratulations, you replace my pile of "slop" (which really is functional, tight code written by AI in 1/1000th of the time it would take me to write it) with your "shorter" code that has the exact same functionality and performance. Congrats? The reality is no one (except in the case of like competitive programming) cares about the length of your code so long as it's maintainable.
BeetleB · 6h ago
> I bet you could replace your slop pile with a tenth of the lines of clean code, and chances are it'd be less work than you think.
Actually, no. When LLMs produce good, working code, it also tends to be efficient (in terms of lines, etc).
May vary with language and domain, though.
stavros · 6h ago
Eh, when is that, though? I'm always worrying about the bugs that I haven't noticed if I don't review the changes. The other day, I gave it a four-step algorithm to implement, and it skipped three of the steps because it didn't think they were necessary (they were).
BeetleB · 6h ago
Hmm...
It may be the size of the changes you're asking for. I tend to micromanage it. I don't know your algorithm, but if it's complex enough, I may have done 4 separate prompts - one for each step.
stavros · 6h ago
It was really simple, just traversing a list up and down twice. It just didn't see the reason why, so it skipped it all (the reason was to prevent race conditions).
foobarbecue · 6h ago
Isn't it easier to just write the code???
BeetleB · 5h ago
Depends on the algorithm. When you've been coding for a few decades, you really, really don't want to write yet another trivial algorithm you've written multiple tens of times in your life. There's no joy in it.
Let the LLM do the boring stuff, and focus on writing the fun stuff.
Also, setting up logging in Python is never fun.
foobarbecue · 2h ago
Right-- it's only really capable of trivial code and boilerplate, which I usually just copy from one of my older programs, examples in docs, or a highly-ranked recent SO answer. Saves me from having to converse with an expensive chatbot, and I don't have to worry about random hallucinations.
If it's a new, non-trivial algorithm, I enjoy writing it.
Mallowram · 7h ago
second
erelong · 2h ago
> The main takeaway, again, is to keep things simple.
if true this seems like a bloated approach but tbh I wouldn't claim to know totally how to use Claude like the author here...
I find you can get a lot of mileage out of "regular" prompts, I'd call them?
Just asking for what you need one prompt at a time?
I still can't visualize how any of the complexity on top of that like discussed in the article adds anything to carefully crafted prompts one at a time
I also still can't really visualize how claude works compared to simple prompts one at a time.
Like, wouldn't it be more efficient to generate a prompt and then check it by looping through the appendix sections ("Main Claude Code System Prompt" and "All Claude Code Tools"), or is that basically what the LLM does somewhat mysteriously (it just works)? So like "give me while loop equivalent in [new language I'm learning]" is the entirety of the prompt... then if you need to you can loop through the appendix section? Otherwise isn't that a massive over-use of tokens, and the requests might even be ignored because they're too complex?
The control flow eludes me a bit here. I otherwise get the impression that the LLM does not use the appendix sections correctly by adding them to prompts (like, couldn't it just ignore them at times)? It would seem like you'd get more accurate responses by separating that from whatever you're prompting and then checking the prompt through looping over the appendix sections.
Does that make any sense?
I'm visualizing coding an entire program as prompting discrete pieces of it. I have not needed elaborate .md files to do that, you just ask for "how to do a while loop equivalent in [new language I'm learning]" for example. It's possible my prompts are much simpler for my uses, but I still haven't seen any write-ups on how people are constructing elaborate programs in some other way.
Like how are people stringing prompts together to create whole programs? (I guess is one question I have that comes to mind)
I guess maybe I need to find a prompt-by-prompt breakdown of some people building things to get a clearer picture of how LLMs are being used
zackify · 1h ago
How you see and use it is the same way I do. So interested to hear other replies
zackify · 1h ago
Wow. Auto correct. I meant “interested”
athrowaway3z · 8h ago
> "THIS IS IMPORTANT" is still State of the Art
Had similar problems until I saw the advice "Don't say what it shouldn't, but focus on what it should".
i.e. make sure when it reaches for the 'thing', it has the alternative in context.
Haven't had those problems since then.
amelius · 7h ago
I mean, if advice like this worked, then why wouldn't Anthropic let the LLM say it, for instance?
syntaxing · 8h ago
I don’t know if I’m doing something wrong. I was using Sonnet 4 with GitHub Copilot. A week ago I switched to Claude Code. I find GitHub Copilot solves problems and bugs way better than Claude Code. For some reason, Claude Code seems very lazy. Has anyone experienced something similar?
libraryofbabel · 8h ago
The consensus is the opposite: most people find copilot does less well than Claude with both using sonnet 4. Without discounting your experience, you’ll need to give us more detail about what exactly you were trying to do (what problem, what prompt) and what you mean by “lazy” if you want any meaningful advice though.
sojournerc · 6h ago
Where do you find this "consensus"?
rsanek · 4h ago
Read HN threads, talk to people using AI a lot. I have the same perception.
sojournerc · 1h ago
Got it, anecdata.
StephenAshmore · 8h ago
It may be a configuration thing. I've found quite the opposite. Github Copilot using Sonnet 4 will not manage context very well, quite frequently resorting to running terminal commands to search for code even when I gave it the exact file it's looking for in the copilot context. Claude code, for me, is usually much smarter when it comes to reading code and then applying changes across a lot of files. I also have it integrated into the IDE so it can make visual changes in the editor similar to GitHub Copilot.
syntaxing · 8h ago
I do agree with you; GitHub Copilot uses more tokens, like you mentioned, with redundant searches. But at the end of the day, it solves the problem. Not sure if the cost outweighs the benefit, though, compared to Claude Code. Going to try Claude Code more and see if I'm prompting it incorrectly.
cosmic_cheese · 8h ago
I haven’t tried other LLMs but have a fair amount of experience with Claude Code, and there are definitely times when you have to be explicit about the route you want it to take and tell it to not take shortcuts.
It’s not consistent, though. I haven’t figured out what they are but it feels like there are circumstances where it’s more prone to doing ugly hacky things.
wordofx · 7h ago
I have most of the tools setup so I can switch between them and test which is better. So far Amp and Claude Code are on top. GH Copilot is the worst. I know MS is desperately trying to copy its competitors but the reality is, they are just copying features. They haven’t solved the system prompts. So the outcomes are just inferior.
gervwyk · 8h ago
We’re considering building a coding agent for Lowdefy[1], a framework that lets you build web apps with YAML config.
For those who’ve built coding agents: do you think LLMs are better suited for generating structured config vs. raw code?
My theory is that agents producing valid YAML/JSON schemas could be more reliable than code generation. The output is constrained, easier to validate, and when it breaks, you can actually debug it.
I keep seeing people creating apps with vibe coder tools but then get stuck when they need to modify the generated code.
Curious if others think config-based approaches are more practical for AI-assisted development.
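To make the validation point concrete, here's a rough sketch (the schema and block types are made-up, heavily simplified stand-ins, not Lowdefy's actual ones): anything the agent emits can be checked mechanically, and the error message can go straight back to the agent.

```python
# Rough sketch of why constrained output is attractive: parse, validate, and
# turn failures into precise feedback. Schema and YAML below are hypothetical.
import yaml                      # pyyaml
from jsonschema import validate, ValidationError

page_schema = {
    "type": "object",
    "required": ["id", "type", "blocks"],
    "properties": {
        "id": {"type": "string"},
        "type": {"type": "string"},
        "blocks": {"type": "array"},
    },
}

generated = """
id: customer_list
type: PageHeaderMenu
blocks:
  - id: customers_table
    type: AgGridAlpine
"""  # imagine this came from the agent

try:
    validate(instance=yaml.safe_load(generated), schema=page_schema)
    print("config is valid")
except ValidationError as err:
    # the error message is precise enough to hand straight back to the agent
    print(f"Config invalid: {err.message}")
```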
This is essential to productivity for humans and LLMs alike. The more reliable your edit/test loop, the better your results will be. It doesn't matter if it's compiling code, validating yaml, or anything else.
To your broader question. People have been trying to crack the low-code nut for ages. I don't think it's solvable. Either you make something overly restrictive, or you are inventing a very bad programming language which is doomed to fail because professional coders will never use it.
gervwyk · 7h ago
Good point. I'm making the assumption that if the LLM has a more limited feature space to produce as output, then the output is more predictable, and thus changes are faster to comprehend. Similar to when devs use popular libraries: there is a well-known abstraction, therefore less "new" code to comprehend, as I see familiar functions, making the code predictable to me.
ec109685 · 8h ago
I wouldn’t get hung up on one-shotting anything. Output to a format that can be machine verified, ideally a format there are plenty of industry examples for.
Then add a grader step to your agentic loop that is triggered after the files are modified. Give feedback to the model if there are any errors and it will fix them.
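Roughly this shape (a sketch of the idea only; agent_edit_files and send_to_model are hypothetical stand-ins for whatever your loop already does, and py_compile is just one example of a machine check; linters or tests work the same way):

```python
# Grader step sketch: after each edit, run a verifier and push any errors back
# into the conversation until the check comes back clean.
import subprocess

def grade(path: str) -> str:
    """Run a verifier over a modified file; an empty string means it passed."""
    proc = subprocess.run(
        ["python", "-m", "py_compile", path],
        capture_output=True, text=True,
    )
    return proc.stderr.strip()

for _ in range(5):                      # bounded retries
    agent_edit_files()                  # hypothetical: the model edits files here
    errors = grade("generated/module.py")
    if not errors:
        break
    # hypothetical helper: append the errors as feedback in the agent loop
    send_to_model(f"Your last edit failed verification:\n{errors}\nPlease fix it.")
```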
amelius · 8h ago
How do you specify callbacks?
Config files should be mature programming languages, not Yaml/Json files.
gervwyk · 8h ago
Callback: Blocks (React components) can register events with action chains (a sequential list of async functions) that will be called when the event is triggered. So it is defined in the react component. This abstraction of blocks, events, actions, operations and requests are the only abstraction required in the schema to build fully functional web apps.
Might sound crazy, but we built full web apps in just YAML. Been doing this for about 5 years now and it helps us scale to build many web apps, fast, that are easy to maintain. We at Resonancy[1] have found many benefits in doing so. I should write more about this.
I made insane progress with CC over last several weeks, but lately have noticed progress stalling.
I’m in the middle of some refactoring/bug fixing/optimization but it’s constantly running into issues, making half-baked changes, not able to fix regressions, etc. Still trying to figure out how to make it do a better job. Might have to break it into smaller chunks or something. Been a pretty frustrating couple of weeks.
If anyone has pointers, I’m all ears!!
jampa · 1h ago
I felt that, too. It turns out I was getting 'too comfortable' while using CC. The best way is to treat CC like a junior engineer and overexplain things before letting it do anything. With time, you start to trust CC, but you shouldn't do that because it is still the same LLM when you started.
Another thing is that before, you were in a greenfield project, so Claude didn't need any context to do new things. Now, your codebase is larger, so you need to point out to Claude where it should find more information. You need to spoon-feed the relevant files with "@" where you want it to look up things and make changes.
If you feel Claude is lazy, force it to use more thinking budget "think" < "think hard" < "think harder" < "ultrathink.". Sometimes I like to throw "ultrathink" and do something else while it codes. [1]
I've seen context forge has a way to use hooks to keep CC going after context condensing. Are there any other patterns or tools people are using with CC to keep it on task, with current context, until it has a validated completion of its task? I feel like we have all these tools separately but nothing brings it all together and also isn’t crazy buggy.
kroaton · 7h ago
Load up the context with your information + task list (broken down into phases).
Have Sonnet implement phase one tasks and mark phase 1 as done. Go into planning mode, have Opus review the work (you should ideally also review it at this point).
Double press escape and go back to the point in the conversation where you loaded up the context with your information + task list.
Tell it to do phase 2.
Repeat until you run out of usage.
conception · 5h ago
Yes, I can manage CC through a task list, but there’s nothing technically stopping all your steps from happening automatically. That tool just doesn’t exist yet as far as I can tell, but it’s not a very advanced tool to build. I’m surprised no one has put those steps together.
Also if the task runs out of context it will get progressively worse rather than refresh its own context from time to time.
kroaton · 7h ago
From time to time, go into Opus planning mode, have it review your entire codebase and tell it to go file by file and look for bugs, security issues, logical problems, etc.
Have it make a list. Then load up the context + task list...
It's more interesting to compare what Gemini CLI and Codex CLI did wrong (though I haven't used either of them in weeks to months).
marmalade2413 · 7h ago
I would be remiss if, after reading this, I didn't point people towards talk-box (https://github.com/rich-iannone/talk-box) from one of the creators of great tables.
diego_sandoval · 8h ago
It shocks me when people say that LLMs don't make them more productive, because my experience has been the complete opposite, especially with Claude Code.
Either I'm worse than them at programming, to the point that I find an LLM useful and they don't, or they don't know how to use LLMs for coding.
timr · 8h ago
It depends very much on your use case, language popularity, experience coding, and the size of your project. If you work on a large, legacy code base in COBOL, it's going to be much harder than working on a toy greenfield application in React. If your prior knowledge writing code is minimal, the more amazing the results will seem, and vice-versa.
Despite the persistent memes here and elsewhere, it doesn't depend very much on the particular tool you use (with the exception of model choice), how you hold it, or your experience prompting (beyond a bare minimum of competence). People who jump into any conversation with "use tool X" or "you just don't understand how to prompt" are the noise floor of any conversation about AI-assisted coding. Folks might as well be talking about Santeria.
Even for projects that I initiate with LLM support, I find that the usefulness of the tool declines quickly as the codebase increases in size. The iron law of the context window rules everything.
Edit: one thing I'll add, which I only recently realized exists (perhaps stupidly) is that there is a population of people who are willing to prompt expensive LLMs dozens of times to get a single working output. This approach seems to me to be roughly equivalent to pulling the lever on a slot machine, or blindly copy-pasting from Stack Overflow, and is not what I am talking about. I am talking about the tradeoffs involved in using LLMs as an assistant for human-guided programming.
ivan_gammel · 8h ago
Overall I would agree with you, but I start feeling that this „iron law“ isn’t as simple as that. After all, humans have limited „context window“ too — we don’t remember every small detail on a large project we have been working on for several years. Loose coupling and modularity helps us and can help LLM to make the size of the task manageable if you don’t ask it to rebuild the whole thing. It’s not the size that makes LLMs fail, but something else, probably the same things where we may fail.
timr · 8h ago
Humans have a limited short-term memory. Humans do not literally forget everything they've ever learned after each Q&A cycle.
(Though now that I think of it, I might start interrupting people with “SUMMARIZING CONVERSATION HISTORY!” whenever they begin to bore me. Then I can change the subject.)
ivan_gammel · 7h ago
LLMs do not „forget“ everything completely either. Probably all major tools by now consume information from some form of memory (system prompt, Claude.md, project files etc) before your prompt. Claude Code rewrites the Claude.md, ChatGPT may modify the chat memory if it finds it necessary etc.
timr · 7h ago
Writing stuff in a file is not “memory” (particularly if I have to do it), and in any case, it consumes context. Overrun the context window, and the tool doesn’t know about what is lost.
There are various hacks these tools take to cram more crap into a fixed-size bucket, but it’s still fundamentally different than how a person thinks.
ivan_gammel · 6h ago
> Writing stuff in a file is not “memory”
Do you understand what you just said? A file is, by definition, a way to organize data in a computer's memory. When you write instructions to an LLM, they persistently modify your prompts, making the LLM „remember" certain stuff like coding conventions or explanations of your architectural choices.
> particularly if I have to do it
You have to communicate with the LLM about the code. You either do it persistently (it must remember) or contextually (it should know it only in the context of the current session). So the word „particularly" is out of place here. You choose one way or another, instead of being able to just say that some information is important or unimportant long-term. This communication would happen with humans too. LLMs have a different interface for it, more explicit (giving the perception of more effort, when it is in fact the same; and let's not forget that the LLM is able to decide itself whether to remember something or not).
> and in any case, it consumes context
So what? Generalization is an effective way to compress information. Because of it persistent instructions consume only a tiny fraction of context, but they reduce the need for LLM to go into full analysis of your code.
> but it’s still fundamentally different than how a person thinks.
Again, so what? Nobody can keep the entire code base in short-term memory. It should not be the expectation to have this ability, nor should it be considered a major disadvantage not to have it. Yes, we use our „context windows" differently in the thinking process. What matters is what information we pack in there and what we make of it.
BeetleB · 6h ago
Both true and irrelevant.
I've yet to have the "forgets everything" issue be a limiting factor. In fact, when using Aider, I aggressively ensure it forgets everything several times per session.
To me, it's a feature, not a drawback.
I've certainly had coworkers who I've had to tell, "Look, will you forget about X? That use case, while it looks similar, is actually quite different in assumptions, etc. Stop invoking your experiences there!"
majormajor · 41m ago
> It is extremely important to identify the most important task the LLM needs to perform and write out the algorithm for it. Try to role-play as the LLM and work through examples, identify all the decision points and write them explicitly. It helps if this is in the form of a flow-chart.
I get lost a bit at things like this, from the link. The lessons in the article match my experience with LLMs and tools around them (see also: RAG is a pain in the ass and vector embedding similarity is very far from a magic bullet), but the takeaway - write really good prompts instead of writing code - doesn't ring true.
If I need to write out all the decision points and steps of the change I'm going to make, why am I not just doing it myself?
Especially when I have an editor that can do a lot of automated changes faster/safer than grep-based text-first tooling? If I know the language the syntax isn't an issue; if I don't know the language it's harder to trust the output of the model. (And if I 90% know the language but have some questions, I use an LLM to plow through the lines I used to have to go to Google for - which is a speedup, but a single-digit-percentage one.)
My experience is that the tools fall down pretty quickly because I keep trying to make them to let me skip the details of every single task. That's how I work with real human coworkers. And then something goes sideways. When I try to pseudocode the full flow vs actually writing the code I lose the speed advantage, and often end up with a nasty 80%-there-but-I-don't-really-know-how-to-fix-the-other-20%-without-breaking-the-80% situation because I noticed a case I didn't explicitly talk about that it guessed wrong on. So then it's either slow and tedious or `git reset` and try again.
(99% of these issues go away when doing greenfield tooling or scripts for operations or prototyping, which is what the vast majority of compelling "wow" examples I've seen have been, but only applies to my day job sometimes.)
Aurornis · 8h ago
I’ve found LLMs useful at some specific tasks, but a complete waste of time at others.
If I only ever wrote small Python scripts, did small to medium JavaScript front end or full stack websites, or a number of other generic tasks where LLMs are well trained I’d probably have a different opinion.
Drop into one of my non-generic Rust codebases that does something complex and I could spend hours trying to keep the LLM moving in the right direction and away from all of the dead ends and thought loops.
It really depends on what you’re using them for.
That said, there are a lot of commenters who haven’t spent more than a few hours playing with LLMs and see every LLM misstep as confirmation of their preconceived ideas that they’re entirely useless.
SXX · 8h ago
This heavily depends on what project and stack you are working on. LLMs are amazing for building MVPs or self-contained micro-services on modern, popular, and well-defined stacks. Every single dependency, legacy or proprietary library, and every extra MCP makes it less usable. It gets much worse if the codebase itself is legacy, unless you can literally upload documentation for each used API into context.
A lot of programmers work on maintaining huge monolith codebases, built on top of 10-year-old tech using obscure proprietary dependencies. Usually they don't have most of the code to begin with, and the APIs are often not well documented.
breuleux · 4h ago
Speaking for myself, LLMs are reasonably good at writing tests or adapting existing structures, but they are not very good at doing what I actually want to do (design, novelty, trying to figure out the very best way to do a thing). I gain some productivity from the reduction of drudgery, but that's never been much of a bottleneck to begin with.
The thing is, a lot of the code that people write is cookie-cutter stuff. Possibly the entirety of frontend development. It's not copy-paste per se, but it is porting and adapting common patterns on differently-shaped data. It's pseudo-copy-paste, and of course AI's going to be good at it, this is its whole schtick. But it's not, like, interesting coding.
jsight · 8h ago
What is performance like for you? I've been shocked at how many simple requests turn into >10 minutes of waiting.
If people are getting faster responses than this regularly, it could account for a large amount of the difference in experiences.
totalhack · 8h ago
Agree with this, though I've mostly been using Gemini CLI. Some of the simplest things, like applying a small diff, take many minutes as it loses track of the current file state and takes minutes to figure it out or fail entirely.
tjr · 8h ago
What do you work on, and what do LLMs do that helps?
(Not disagreeing, but most of these comments -- on both sides -- are pretty vague.)
SXX · 8h ago
For one, LLMs are good for building game prototypes. When all you care about is checking whether something is fun to play, it really doesn't matter how much tech debt you generate in the process.
And you start from scratch all the time, so you can generate all the documentation before you ever start to generate code. And when the LLM slop becomes overwhelming, you just drop it and go check the next idea.
lambda · 7h ago
It can be more than one reason.
First of all, keep in mind that research has shown that people generally overestimate the productivity gains of LLM coding assistance. Even when using a coding assistant makes them less productive, they feel like they are more productive.
Second, yeah, experience matters, both with programming and LLM coding assistants. The better you are, the less helpful the coding assistant will be; it can take less work to just write what you want than to convince an LLM to do it.
Third, some people are more sensitive to the kinds of errors or style that LLMs tend to use. I frequently can't stand the output of LLMs, even if it technically works; it doesn't live up to my personal standards.
pton_xd · 7h ago
> Third, some people are more sensitive to the kind of errors or style that LLMs tend to use. I frequently can't stand the output of LLMs, even if it technically works; it doesn't live to to my personal standards.
I've noticed the stronger my opinions are about how code should be written or structured, the less productive LLMs feel to me. Then I'm just fighting them at every step to do things "my way."
If I don't really have an opinion about what's going on, LLMs churning out hundreds of lines of mostly-working code is a huge boon. After all, I'd rather not spend the energy thinking through code I don't care about.
Uehreka · 7h ago
> research has shown that people generally overestimate the productivity gains of LLM coding assistance.
I don’t think this research is fully baked. I don’t see a story in these results that aligns with my experience and makes me think “yeah, that actually is what I’m doing”. I get that at this point I’m supposed to go “the effect is so subtle that even I don’t notice it!” But experience tells me that’s not normally how this kind of thing works.
Perhaps we’re still figuring out how to describe the positive effects of these tools or what axes we should really be measuring on, but the idea that there’s some sort of placebo effect going on here doesn’t pass muster.
ta12653421 · 8h ago
Productivity boost is unbelievable!
If you handle it right, it's a boon - it's like having 3 junior devs at hand. And I'm talking about using the web interface.
I guess most people are not paying and therefore can't use the project space (which is one of the best features), which unleashes its full magic.
Even if I'm currently without a job, I'm still paying because it helps me.
ta12653421 · 7h ago
LOL
why do I get downvoted for explaining my experience? :-D
pawelduda · 5h ago
Because you posted a success story about LLM usage on HN
ta12653421 · 5h ago
Well, understood, but that part between the lines is not my fault?
pawelduda · 3h ago
Nah, never implied that
socalgal2 · 8h ago
I’m trying to learn jj. Both Gemini and ChatGPT gave me incorrect instructions 4 of 5 times
That's because jj is relatively new, and constantly changing. The official tutorial is (by their own admission), out of date. People's blog posts are fairly different in what commands/usage they recommend, as well.
I know it, because I recently learned jj, with a lot of struggling.
If a human struggles learning it, I wouldn't expect LLMs to be much better.
esafak · 1h ago
That's ironic considering jj is supposed to make version control easier.
dsiegel2275 · 8h ago
Agreed. I only started using Claude Code about a week and a half ago and I'm blown away by how productive I can be with it.
pawelduda · 8h ago
I've had occasions where a relatively short prompt saved me an entire day of debugging and fixing things, because it was a tech stack I barely knew. The most impressive part was when CC knew the changes might take some time to apply and just used `sleep 60; check logs;` 2-3 times and then started checking elsewhere to see if something was stuck. It was; CC cleaned it up, and a minute later someone pinged me that it works.
cpursley · 8h ago
I feel like I could have written this myself; I'm truly dumbfounded. Maybe I am just a crappy coder but I don't think I'd be getting such good results with Claude Code if I were.
d-lisp · 7h ago
Basic engineering skills (frontend development, Python, even some kind of high-level 3D programming) are covered. If you do C/C++, or even Java in a preexisting project, then you will have a hard time constantly explaining to the LLM why <previous answer> is absolute nonsense.
Every time I tried LLMs, I had the feeling of talking with an ignoramus trying to sound VERY CLEVER: terrible mistakes at every line, surrounded with punchlines, rocket emojis, and tons of bullshit. (I'm partly kidding.)
Maybe there are situations where LLMs are useful, e.g. if you can properly delimit and isolate your problem; but when you have to write code that is meant to mess with the internals of some piece of software, they don't do well.
It would be nice to hear from both the "happy users" and the "discontent users" of LLMs about the contexts in which they experimented with them, to be better informed on this question.
AaronAPU · 7h ago
If you’re working with a massive, complicated C++ repository, you have to take the time to collect the right context and describe the problem precisely enough. Then you should actually read the code to verify it even makes sense. And at that point, if you’re a principal-level developer, you could just as easily do it yourself.
But the situation is very different if you’re coding slop in the first place (front end stuff, small repo simple code). The LLMs can churn that slop out at a rapid clip.
exe34 · 8h ago
it makes me very productive with new prototypes in languages/frameworks that I'm not familiar with. conversely, a lot of my work involves coding as part of understanding the business problem in the first place. think making a plot to figure out how two things relate, and then based on the understanding trying out some other operation. it doesn't matter how fast the machine can write code, my slow meat brain is still the bottleneck. the coding is trivial.
wredcoll · 8h ago
The best part about llm coding is that you feel productive even when you aren't, makes coding a lot more fun.
myflash13 · 8h ago
CC is so damn good I want to use its agent loop in my agent loop. I'm planning to build a browser agent for some specialized tasks and I'm literally just bundling a docker image with Claude Code and a headless browser and the Playwright MCP server.
apwell23 · 8h ago
cool
radleta · 8h ago
I’d be curious to know what MCPs you’ve found useful with CC. Thoughts?
nuwandavek · 5h ago
(blogpost author here)
I actually found none of them useful. I think MCP is an incomplete idea. Tools and the system prompt cannot be so cleanly separated (at least not yet). Just slapping on tools hurts performance more than it helps.
I've now gone back to just using vanilla CC with a really really rich claude.md file.
roflyear · 7h ago
Claude Code is hilarious because often it'll say stuff that's basically "that's too hard, here's a bandaid fix" and implement it lol
sergiotapia · 8h ago
Is Claude Code better than Amp?
whoknowsidont · 7h ago
It's not that good, most developers are just really that subpar lol.
HacklesRaised · 8h ago
Delusional asshats trying to draft the grift?
dingnuts · 8h ago
the article says CC doesn't use RAG, but then describes how it uses tools to Retrieve context to Aid Generation... RAG
what am I missing here?
edit: lol I "love" that I got downvoted for asking a simple question that might have an open answer. "be curious" says the rules. stay classy HN
ebzlo · 8h ago
Yes technically it is RAG, but a lot of the community is associating RAG with vector search specifically.
dingnuts · 8h ago
it does? why? the term RAG as I understand it leaves the methodology for retrieval vague so that different techniques can be used depending on the, er, context.. which makes a lot more sense to me
koakuma-chan · 8h ago
> why?
Hype. There's nothing wrong with using, e.g., full-text search for RAG.
nuwandavek · 5h ago
(blogpost author here)
You're right! I did make the distinction in an earlier draft, but decided to use "RAG" interchangeably with vector search, as it is popularly known today in code-gen systems. I'd probably go back to the previous version too.
But I do think there is a qualitative difference between getting candidates and adding them to context before generating (retrieval augmented generation) vs the LLM searching for context till it is satisfied.
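A toy way to see the contrast (my illustration; embed, vector_store, and llm are hypothetical placeholders, not a real API):

```python
# Classic RAG front-loads one retrieval step; the agentic style hands the model
# a search tool it can call repeatedly until it is satisfied.
import subprocess

def classic_rag(query: str) -> str:
    # One retrieval step up front: candidates are fixed before generation starts.
    chunks = vector_store.search(embed(query), k=10)
    return llm(f"Context:\n{chunks}\n\nQuestion: {query}")

def grep_tool(pattern: str) -> str:
    """A Grep-style tool the model can call as many times as it wants,
    refining its own searches (the Claude Code style)."""
    out = subprocess.run(["grep", "-rn", pattern, "."],
                         capture_output=True, text=True)
    return out.stdout[:4000]  # truncated so results fit in context
```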
BoorishBears · 8h ago
If you want to be really stringent, RAG originally referred to going from the user query to retrieving information directly based on that query, then passing it to an LLM. With CC, the LLM takes the raw user query and then crafts its own searches.
But realistically lots of RAG systems have LLM calls interleaved for various reasons, so what they probably mean is not doing the usual chunking + embeddings thing.
theptip · 8h ago
Yeah, TFA clearly explains their point. They mean RAG=vector search, and contrast this with tool calling (eg Grep).
LaGrange · 8h ago
[flagged]
dang · 8h ago
Please don't post unsubstantive comments to Hacker News, and especially not putdowns.
The idea here is: if you have a substantive point, make it thoughtfully. If not, please don't comment until you do.
I appreciate the vague negative takes on tools like this where it feels like there is so much hype it's impossible to have a different opinion. "It's bad" is perfectly substantive in my opinion; this person tried it, didn't like it, and doesn't have much more to say because of that, but it's still a useful perspective.
Is this why HN is so dang pro-AI? the negative comments, even small ones, are moderated away? explains a lot TBH
danielbln · 8h ago
There is no value in a single poster saying "it's bad". I don't know this person, there is zero context on why I should care that this user thinks it's bad. Unless they state why they think it's bad, it adds nothing to the conversation and is just noise
dang · 2h ago
HN is by no means "pro-AI". It's sharply divided, and (as always with these things) each side assumes the other side is dominant.
h4ch1 · 8h ago
I think this comment would be a little better by specifying WHY it's bad instead of just a "it's bad" like it's a Twitter thread.
LaGrange · 8h ago
The subject is pretty exhausted. The reason I posted "it's bad" is that, honestly, expanding on it just feels like a waste of time and energy. The point is demonstrating that this _isn't_ a consensus, and not much more than that.
Edit: bonus points if this gets me banned.
dang · 2h ago
(We don't ban people for posting like this!)
If it felt like a waste of time and energy to post something substantive, rather than the GP comment (https://news.ycombinator.com/item?id=44998577), then you should have just posted nothing. That comment was obviously neither substantive nor thoughtful. This is hardly a borderline call!
We want substantive, thoughtful comments from people who do have the time and energy to contribute them.
Btw, to avoid a misunderstanding that sometimes shows up: it's fine for comments to be critical; that is, it's possible to be substantive, thoughtful, and critical all at the same time. For example, I skimmed through your account's most recent comments and saw several of that kind, e.g. https://news.ycombinator.com/item?id=44299479 and https://news.ycombinator.com/item?id=42882357. If your GP comment had been like that, it would have been fine; you don't have to like Claude Code (or whatever the $thing is).
exe34 · 8h ago
that wasn't a negative comment though. a negative comment would explain what they didn't like about it. this was the digital equivalent of flytipping.
on_the_train · 7h ago
The lengths people will go to in order to avoid writing code are astonishing
apwell23 · 7h ago
writing code is not the fun part of coding. I only realized that after using claude code.
And the places currently happy to hire disaffected ex-FAANG engineers who realized they were being wasted on polishing widgets may start having more hiring difficulty as the pipeline dries up. Like trying to hire for assembly or COBOL today.
For now, LLMs still suffer from hallucination and a lack of generalizability. The large amount of generated code is sometimes not a benefit but a technical debt.
LLMs are good for quick, open-ended prototype web applications, but if we need a stable, consistent, maintainable, secure framework, or scientific computing, pure LLMs are not enough; one can't vibe everything without checking the details.
I use some AI tools and sometimes they're fine, but I won't, in my lifetime anyway, hand over everything to an AI. It's not out of some fear or anything; even purely as a hobby, I like creating things from scratch, I like working out problems, so why would I need to let that go?
How do we get the LLM to gain knowledge on this new language that we have no example usage of?
If you use GitHub Copilot - which has its own system level prompts - you can hotswap between models, and Claude outperforms OpenAI’s and Google’s models by such a large margin that the others are functionally useless in comparison.
Try using opus with cline in vs code. Then use Claude code.
I don't know the best way to quantify the differences, but I know I get more done in CC.
With a subscription plan, Anthropic is highly incentivized to be efficient in their loops beyond just making it a better experience for users.
A few takeaways for me from this: (1) Long prompts are good - and don't forget basic things like explaining in the prompt what the tool is, how to help the user, etc. (2) Tool calling is basic af; you need more context (when to use, when not to use, etc.) (3) Using messages as the state of the memory for the system is OK; I've thought about fancy ways (e.g., persisting dataframes, passing variables between steps, etc.), but it seems like as context windows grow, messages should be OK
for context, i want to build a claude code like agent in a WYSIWYG markdown app. that's how i stumbled on your blog post :)
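On (2) concretely: the tool description is where the "when to use / when not to use" context lives. A hypothetical example in the Anthropic-style name/description/input_schema shape (the wording and limits are made up):

grep_tool = {
    "name": "grep",
    "description": (
        "Search file contents with a regex. Use this to locate where a symbol "
        "or string is defined or referenced. Do NOT use it to read whole files "
        "(use read_file) or to list directories (use ls). Results are capped, "
        "so prefer specific patterns over broad ones."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "pattern": {"type": "string", "description": "Regex to search for"},
            "path": {"type": "string", "description": "Directory to search in"},
        },
        "required": ["pattern"],
    },
}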
It dumps out a JSON file as well as a very nicely formatted HTML file that shows you every single tool and all the prompts that were used for a session.
You can see the system prompts too.
It's all how the base model has been trained to break tasks into discrete steps and work through them patiently, with some robustness to failure cases.
That repository does not contain the code. It's just used for the issue tracker and some example hooks.
[1]: https://github.com/badlogic/lemmy/tree/main/apps/claude-brid...
I know, thus the :trollface:
> Happen to know where I can find a fork?
I don't know where you can find a fork, but even if there is a fork somewhere that's still alive, which is unlikely, it would be for a really old version of Claude Code. You would probably be better off reverse engineering the minified JavaScript or whatever that ships with the latest Claude Code.
I really like a lot of what Google produces, but they can't seem to keep a product around without eventually shutting it down, and they can be pretty ham-fisted, both with corporate control (Chrome and corrupt practices) and censorship
My tactic is to work with Gemini to build a dense summary of the project and create a high level plan of action, then take that to gpt5 and have it try to improve the plan, and convert it to a hyper detailed workflow xml document laying out all the steps to implement the plan, which I then hand to claude.
This avoids pretty much all of Claude's unplanned bumbling.
I should mention I made that one for my research/stats workflow, so there's some specific stuff in there for that, but you can prompt chat gpt to generalize it.
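The workflow document itself doesn't need to be fancy; a hypothetical skeleton (tag names invented, details made up) is roughly:

<workflow>
  <context>Dense project summary from the Gemini step goes here.</context>
  <step id="1">
    <goal>Add a retry wrapper around the export call</goal>
    <files>src/export.py, tests/test_export.py</files>
    <done_when>The test suite passes and the new test covers the timeout path</done_when>
  </step>
  <step id="2">...</step>
</workflow>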
For the command line tool (claude code vs gemini code)? It isn't even close. Gemini code was useless. Claude code was mostly just slow.
I think Claude is much more predictable and follows instructions better- the todo list it manages seems very helpful in this respect.
That's been my experience, anyway. Maybe it hates me? I sure hate it.
Nothing in the world is simply outright garbage. Even the seemingly worst products exist for a reason and are used for a variety of use cases.
So, take a step back and reevaluate whether your reply could have been better, because simply saying "it just sucks" isn't enough.
https://hexdocs.pm/usage_rules/readme.html
That it authored in the first place?
I have also a code reviewer agent in CC that writes all my unit and integration tests, which feeds into my CI/CD pipeline. I use the "/security" command that Claude recently released to review my code for security vulnerabilities while also leveraging a red team agent that tests my codebase for vulnerabilities to patch.
I'm starting to integrate Claude into Linear so I can assign Linear tickets to Claude to start working on while I tackle core stuff. Hope that helps!
Honestly I don’t think customers care.
Why????????????
Why do you want devs to lose cognizance of their own "work" to the point that they have "existential worry"?
Why are people like you trying to drown us all in slop? I bet you could replace your slop pile with a tenth of the lines of clean code, and chances are it'd be less work than you think.
Is it because you're lazy?
Actually, no. When LLMs produce good, working code, it also tends to be efficient (in terms of lines, etc).
May vary with language and domain, though.
It may be the size of the changes you're asking for. I tend to micromanage it. I don't know your algorithm, but if it's complex enough, I may have done 4 separate prompts - one for each step.
Let the LLM do the boring stuff, and focus on writing the fun stuff.
Also, setting up logging in Python is never fun.
If it's a new, non-trivial algorithm, I enjoy writing it.
if true this seems like a bloated approach but tbh I wouldn't claim to know totally how to use Claude like the author here...
I find you can get a lot of mileage out of "regular" prompts, I'd call them?
Just asking for what you need one prompt at a time?
I still can't visualize how any of the complexity on top of that like discussed in the article adds anything to carefully crafted prompts one at a time
I also still can't really visualize how claude works compared to simple prompts one at a time.
Like, wouldn't it be more efficient to generate a prompt and then check it by looping through the appendix sections ("Main Claude Code System Prompt" and "All Claude Code Tools"), or is that basically what the LLM does somewhat mysteriously (it just works)? So like "give me while loop equivalent in [new language I'm learning]" is the entirety of the prompt... then if you need to you can loop through the appendix section? Otherwise isn't that a massive over-use of tokens, and the requests might even be ignored because they're too complex?
The control flow eludes me a bit here. I otherwise get the impression that the LLM does not use the appendix sections correctly by adding them to prompts (like, couldn't it just ignore them at times)? It would seem like you'd get more accurate responses by separating that from whatever you're prompting and then checking the prompt through looping over the appendix sections.
Does that make any sense?
I'm visualizing coding an entire program as prompting discrete pieces of it. I have not needed elaborate .md files to do that, you just ask for "how to do a while loop equivalent in [new language I'm learning]" for example. It's possible my prompts are much simpler for my uses, but I still haven't seen any write-ups on how people are constructing elaborate programs in some other way.
Like how are people stringing prompts together to create whole programs? (I guess is one question I have that comes to mind)
I guess maybe I need to find a prompt-by-prompt breakdown of some people building things to get a clearer picture of how LLMs are being used
Had similar problems until I saw the advice "Don't say what it shouldn't do, but focus on what it should".
i.e. make sure when it reaches for the 'thing', it has the alternative in context.
Haven't had those problems since then.
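A made-up example of that reframing: instead of "don't use a global for the cache", say "store the cache on the request object that's already passed into the handler". When the model reaches for the global, the alternative is already sitting in context.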
It’s not consistent, though. I haven’t figured out what they are but it feels like there are circumstances where it’s more prone to doing ugly hacky things.
For those who’ve built coding agents: do you think LLMs are better suited for generating structured config vs. raw code?
My theory is that agents producing valid YAML/JSON schemas could be more reliable than code generation. The output is constrained, easier to validate, and when it breaks, you can actually debug it.
I keep seeing people creating apps with vibe coder tools but then get stuck when they need to modify the generated code.
Curious if others think config-based approaches are more practical for AI-assisted development.
[1] https://github.com/lowdefy/lowdefy
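A minimal sketch of the "constrained and easy to validate" part, assuming pyyaml and jsonschema, with a made-up schema standing in for a real page definition:

import yaml
from jsonschema import validate, ValidationError

page_schema = {  # hypothetical schema for a lowdefy-style page config
    "type": "object",
    "properties": {
        "id": {"type": "string"},
        "blocks": {"type": "array", "items": {"type": "object"}},
    },
    "required": ["id", "blocks"],
}

def check_generated_config(text):
    try:
        doc = yaml.safe_load(text)
        validate(instance=doc, schema=page_schema)
        return None  # valid: accept the config
    except (yaml.YAMLError, ValidationError) as err:
        return str(err)  # feed this back to the model for another pass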
This is essential to productivity for humans and LLMs alike. The more reliable your edit/test loop, the better your results will be. It doesn't matter if it's compiling code, validating yaml, or anything else.
To your broader question. People have been trying to crack the low-code nut for ages. I don't think it's solvable. Either you make something overly restrictive, or you are inventing a very bad programming language which is doomed to fail because professional coders will never use it.
Then add a grader step to your agentic loop that is triggered after the files are modified. Give feedback to the model if there are any errors and it will fix them.
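A minimal sketch of that loop (agent, grade_files, and the prompt plumbing are placeholders, not any particular framework):

def run_with_grader(task, agent, grade_files, max_rounds=5):
    feedback = None
    for _ in range(max_rounds):
        prompt = task if feedback is None else f"Fix these errors:\n{feedback}"
        agent.run(prompt)            # the model edits files here
        feedback = grade_files()     # e.g. linter, schema check, test suite output
        if not feedback:             # empty feedback means everything passed
            return True
    return False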
Config files should be mature programming languages, not Yaml/Json files.
Might sound crazy, but we've built full web apps in just YAML. Been doing this for about 5 years now and it helps us scale to build many web apps, fast, that are easy to maintain. We at Resonancy[1] have found many benefits in doing so. I should write more about this.
[1] - https://resonancy.io
I’m in the middle of some refactoring/bug fixing/optimization but it’s constantly running into issues, making half-baked changes, not able to fix regressions, etc. Still trying to figure out how to make it do a better job. Might have to break it into smaller chunks or something. It’s been a pretty frustrating couple of weeks.
If anyone has pointers, I’m all ears!!
Another thing is that before, you were in a greenfield project, so Claude didn't need any context to do new things. Now, your codebase is larger, so you need to point out to Claude where it should find more information. You need to spoon-feed the relevant files with "@" where you want it to look up things and make changes.
If you feel Claude is lazy, force it to use a larger thinking budget: "think" < "think hard" < "think harder" < "ultrathink". Sometimes I like to throw in "ultrathink" and do something else while it codes. [1]
[1]: https://www.anthropic.com/engineering/claude-code-best-pract...
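Concretely, a prompt in that spirit might look like (paths made up): "ultrathink: the regression is in @src/billing/invoice.py; compare it against @tests/test_invoice.py and fix only the rounding path, without touching the API layer."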
Give programming a try, you might like it.
Next…
Also if the task runs out of context it will get progressively worse rather than refresh its own context from time to time.
Either I'm worse than them at programming, to the point that I find an LLM useful and they don't, or they don't know how to use LLMs for coding.
Despite the persistent memes here and elsewhere, it doesn't depend very much on the particular tool you use (with the exception of model choice), how you hold it, or your experience prompting (beyond a bare minimum of competence). People who jump into any conversation with "use tool X" or "you just don't understand how to prompt" are the noise floor of any conversation about AI-assisted coding. Folks might as well be talking about Santeria.
Even for projects that I initiate with LLM support, I find that the usefulness of the tool declines quickly as the codebase increases in size. The iron law of the context window rules everything.
Edit: one thing I'll add, which I only recently realized exists (perhaps stupidly) is that there is a population of people who are willing to prompt expensive LLMs dozens of times to get a single working output. This approach seems to me to be roughly equivalent to pulling the lever on a slot machine, or blindly copy-pasting from Stack Overflow, and is not what I am talking about. I am talking about the tradeoffs involved in using LLMs as an assistant for human-guided programming.
(Though now that I think of it, I might start interrupting people with “SUMMARIZING CONVERSATION HISTORY!” whenever they begin to bore me. Then I can change the subject.)
There are various hacks these tools take to cram more crap into a fixed-size bucket, but it’s still fundamentally different than how a person thinks.
Do you understand yourself what you just said? A file is, by definition, a way to organize data in the memory of a computer. When you write instructions to an LLM, they persistently modify your prompts, making the LLM „remember“ certain stuff like coding conventions or explanations of your architectural choices.
> particularly if I have to do it
You have to communicate with the LLM about the code. You either do it persistently (it must remember) or contextually (it should know something only in the context of the current session). So the word „particularly“ is out of place here. You choose one way or the other instead of being able to just say whether some information is important or unimportant long-term. This communication would happen with humans too. LLMs have a different interface for it, more explicit (giving the perception of more effort, when it is in fact the same; and let's not forget that the LLM is able to decide itself whether to remember something or not).
> and in any case, it consumes context
So what? Generalization is an effective way to compress information. Because of it, persistent instructions consume only a tiny fraction of the context, but they reduce the need for the LLM to go into a full analysis of your code.
> but it’s still fundamentally different than how a person thinks.
Again, so what? Nobody can keep the entire code base in short-term memory. It should not be the expectation to have this ability, nor should it be considered a major disadvantage not to have it. Yes, we use our „context windows“ differently in a thinking process. What matters is what information we pack there and what we make of it.
I've yet to see the "forgets everything" be a limiting factor. In fact, when using Aider, I aggressively ensure it forgets everything several times per session.
To me, it's a feature, not a drawback.
I've certainly had coworkers who I've had to tell, "Look, will you forget about X? That use case, while it looks similar, is actually quite different in assumptions, etc. Stop invoking your experiences there!"
I get lost a bit at things like this, from the link. The lessons in the article match my experience with LLMs and tools around them (see also: RAG is a pain in the ass and vector embedding similarity is very far from a magic bullet), but the takeaway - write really good prompts instead of writing code - doesn't ring true.
If I need to write out all the decision points and steps of the change I'm going to make, why am I not just doing it myself?
Especially when I have an editor that can do a lot of automated changes faster/safer than grep-based text-first tooling? If I know the language the syntax isn't an issue; if I don't know the language it's harder to trust the output of the model. (And if I 90% know the language but have some questions, I use an LLM to plow through the lines I used to have to go to Google for - which is a speedup, but a single-digit-percentage one.)
My experience is that the tools fall down pretty quickly because I keep trying to make them to let me skip the details of every single task. That's how I work with real human coworkers. And then something goes sideways. When I try to pseudocode the full flow vs actually writing the code I lose the speed advantage, and often end up with a nasty 80%-there-but-I-don't-really-know-how-to-fix-the-other-20%-without-breaking-the-80% situation because I noticed a case I didn't explicitly talk about that it guessed wrong on. So then it's either slow and tedious or `git reset` and try again.
(99% of these issues go away when doing greenfield tooling or scripts for operations or prototyping, which is what the vast majority of compelling "wow" examples I've seen have been, but only applies to my day job sometimes.)
If I only ever wrote small Python scripts, did small to medium JavaScript front end or full stack websites, or a number of other generic tasks where LLMs are well trained I’d probably have a different opinion.
Drop into one of my non-generic Rust codebases that does something complex and I could spent hours trying to keep the LLM moving in the right direction and away from all of the dead ends and thought loops.
It really depends on what you’re using them for.
That said, there are a lot of commenters who haven’t spent more than a few hours playing with LLMs and see every LLM misstep as confirmation of their preconceived ideas that they’re entirely useless.
A lot of programmers work on maintaining huge monolith codebases, built on top of 10-year-old tech using obscure proprietary dependencies. Usually they don't have most of the code to begin with, and the APIs are often not well documented.
The thing is, a lot of the code that people write is cookie-cutter stuff. Possibly the entirety of frontend development. It's not copy-paste per se, but it is porting and adapting common patterns on differently-shaped data. It's pseudo-copy-paste, and of course AI's going to be good at it, this is its whole schtick. But it's not, like, interesting coding.
If people are getting faster responses than this regularly, it could account for a large amount of the difference in experiences.
(Not disagreeing, but most of these comments -- on both sides -- are pretty vague.)
And you start from scratch all the time, so you can generate all the documentation before you ever start to generate code. And when the LLM slop becomes overwhelming, you just drop it and go check the next idea.
First of all, keep in mind that research has shown that people generally overestimate the productivity gains of LLM coding assistance. Even when using a coding assistant makes them less productive, they feel like they are more productive.
Second, yeah, experience matters, both with programming and LLM coding assistants. The better you are, the less helpful the coding assistant will be, it can take less work to just write what you want than convince an LLM to do it.
Third, some people are more sensitive to the kind of errors or style that LLMs tend to use. I frequently can't stand the output of LLMs, even if it technically works; it doesn't live to to my personal standards.
I've noticed the stronger my opinions are about how code should be written or structured, the less productive LLMs feel to me. Then I'm just fighting them at every step to do things "my way."
If I don't really have an opinion about what's going on, LLMs churning out hundreds of lines of mostly-working code is a huge boon. After all, I'd rather not spend the energy thinking through code I don't care about.
I don’t think this research is fully baked. I don’t see a story in these results that aligns with my experience and makes me think “yeah, that actually is what I’m doing”. I get that at this point I’m supposed to go “the effect is so subtle that even I don’t notice it!” But experience tells me that’s not normally how this kind of thing works.
Perhaps we’re still figuring out how to describe the positive effects of these tools or what axes we should really be measuring on, but the idea that there’s some sort of placebo effect going on here doesn’t pass muster.
I guess most people are not paying and therefore can't use the project space (which is one of the best features), which unleashes its full magic.
Even though I'm currently without a job, I'm still paying because it helps me.
https://jj-vcs.github.io/jj/
I know it, because I recently learned jj, with a lot of struggling.
If a human struggles learning it, I wouldn't expect LLMs to be much better.
Every time I tried LLMs, I had the feeling of talking with an ignoramus trying to sound VERY CLEVER: terrible mistakes on every line, surrounded by punchlines, rocket emojis and tons of bullshit. (I'm partly kidding).
Maybe there are situations where LLMs are useful, e.g. if you can properly delimit and isolate your problem; but when you have to write code that is meant to mess with the internals of some piece of software, they don't do well.
It would be nice to hear from both the "happy users" and the "discontented users" of LLMs in what context they experimented with them, to be better informed on this question.
But the situation is very different if you’re coding slop in the first place (front end stuff, small repo simple code). The LLMs can churn that slop out at a rapid clip.