In my experience there are really only three true prompt engineering techniques:
- In-context learning (providing examples, AKA one-shot or few-shot vs. zero-shot)
- Chain of thought (telling it to think step by step)
- Structured output (telling it to produce output in a specified format like JSON)
Maybe you could add what this article calls Role Prompting to that. And RAG is its own thing where you're basically just having the model summarize the context you provide. But really, everything else just boils down to telling it what you want in clear, plain language.
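For what it's worth, here is a minimal sketch of those three techniques stacked in one prompt; the classification task and the build_prompt helper are invented for illustration, not something from the article:

    # Sketch combining the three techniques in one prompt: few-shot examples
    # (in-context learning), an explicit "think step by step" instruction
    # (chain of thought), and a requested JSON shape (structured output).
    def build_prompt(ticket_text: str) -> str:
        return "\n".join([
            "Classify the support ticket's sentiment and urgency.",
            "Think step by step, then finish with your answer as JSON:",
            '{"sentiment": ..., "urgency": ...}',
            "",
            'Ticket: "The export button has been broken for a week."',
            'Answer: {"sentiment": "negative", "urgency": "high"}',
            "",
            'Ticket: "Love the new dashboard, curious when dark mode lands."',
            'Answer: {"sentiment": "positive", "urgency": "low"}',
            "",
            f'Ticket: "{ticket_text}"',
            "Answer:",
        ])

    print(build_prompt("I can't log in and I have a demo in an hour."))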
dachris · 4h ago
Context is king.
Start out with TypeScript and have it answer data science questions - it won't know its way around.
Start out with Python and ask the same question - great answers.
LLMs can't (yet) really transfer knowledge between domains, you have to prime them in the right way.
lexandstuff · 10h ago
Even role prompting is totally useless imo. Maybe it was a thing with GPT3, but most of the LLMs already know they're "expert programmers". I think a lot of people are just deluding themselves with "prompt engineering".
Be clear with your requirements. Add examples, if necessary. Check the outputs (or reasoning trace if using a reasoning model). If they aren't what you want, adjust and iterate. If you still haven't got what you want after a few attempts, abandon AI and use the reasoning model in your head.
dimitri-vs · 7h ago
It's become more subtle but still there. You can bias the model towards more "expert" responses with the right terminology. For example, a doctor asking a question will get a vastly different response than a normal person. A query with emojis will get more emojis back. Etc.
denhaus · 9h ago
As a clarification, we used fine tuning more than prompt engineering because low or few-shot prompt engineering did not work for our use case.
>We test three representative tasks in materials chemistry: linking dopants and host materials, cataloging metal-organic frameworks, and general composition/phase/morphology/application information extraction. Records are extracted from single sentences or entire paragraphs, and the output can be returned as simple English sentences or a more structured format such as a list of JSON objects. This approach represents a simple, accessible, and highly flexible route to obtaining large databases of structured specialized scientific knowledge extracted from research papers.
faustocarva · 10h ago
Did you find it hard to create structured output while also trying to make it reason in the same prompt?
demosthanos · 9h ago
You use a two-phase prompt for this. Have it reason through the answer and respond with a clearly-labeled 'final answer' section that contains the English description of the answer. Then run its response through again in JSON mode with a prompt to package up what the previous model said into structured form.
The second phase can be with a cheap model if you need it to be.
faustocarva · 8h ago
Great, will try this! But, in a chain-based prompt or full conversational flow?
demosthanos · 7h ago
You can do this conversationally, but I've had the most success with API requests, since that gives you the most flexibility.
Pseudo-prompt:
Prompt 1: Do the thing, describe it in detail, end with a clear summary of your answer that includes ${THINGS_YOU_NEED_FOR_JSON}.
Prompt 2: A previous agent said ${CONTENT}, structure as JSON according to ${SCHEMA}.
Ideally you use a model in Prompt 2 that supports JSON schemas so you have 100% guarantee that what you get back parses. Otherwise you can implement it yourself by validating it locally and sending the errors back with a prompt to fix them.
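A rough sketch of that two-phase flow in Python; call_llm is a stand-in for whatever client/SDK you actually use, and the answer fields are invented for the example:

    import json

    ANSWER_KEYS = {"summary", "severity", "affected_files"}  # hypothetical fields

    def call_llm(prompt: str, cheap: bool = False) -> str:
        raise NotImplementedError  # stand-in for your real API call

    def two_phase(task: str, max_fix_attempts: int = 3) -> dict:
        # Phase 1: free-form reasoning that ends in a clearly labeled final answer.
        analysis = call_llm(
            task + "\n\nThink it through in detail, then end with a section titled "
            "'FINAL ANSWER' that includes: " + ", ".join(sorted(ANSWER_KEYS)) + "."
        )
        # Phase 2: a cheaper model just packages the previous answer as JSON.
        prompt = (
            "A previous agent said:\n\n" + analysis +
            "\n\nReturn only a JSON object with exactly these keys: " + str(sorted(ANSWER_KEYS))
        )
        for _ in range(max_fix_attempts):
            raw = call_llm(prompt, cheap=True)
            try:
                obj = json.loads(raw)
                if set(obj) == ANSWER_KEYS:
                    return obj
                error = "wrong keys: " + str(sorted(obj))
            except json.JSONDecodeError as exc:
                error = str(exc)
            # Validation failed locally: send the error back with a fix-it prompt.
            prompt = "Your previous output was invalid (" + error + "). Fix it:\n\n" + raw
        raise ValueError("no valid JSON after retries")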
faustocarva · 1m ago
Thanks!
haolez · 15h ago
Sometimes I get the feeling that making super long and intricate prompts reduces the cognitive performance of the model. It might give you a feel of control and proper engineering, but I'm not sure it's a net win.
My usage has converged to making very simple and minimalistic prompts and doing minor adjustments after a few iterations.
taosx · 14h ago
That's exactly how I started using them as well. 1. Give it just enough context, the assumptions that hold and the goal. 2. Review answer and iterate on the initial prompt. It is also the economical way to use them. I've been burned one too many times by using agents (they just spin and spin, burn 30 dollars for one prompt and either mess the code base or converge on the previous code written ).
I also feel the need to caution others that letting the AI write lots of code in your project makes it harder to advance it, evolve it, and just move on with confidence (code you didn't think about and write yourself doesn't stick as well in your memory).
apwell23 · 10h ago
> they just spin and spin, burn 30 dollars for one prompt and either mess the code base or converge on the previous code written ).
My experience as well. I fear admitting this for fear of being labeled a luddite.
scarface_74 · 11h ago
How is that different than code I wrote a year ago or when I have to modify someone else’s code?
For another kind of task, a colleague had written a very verbose prompt. Since I had to integrate it, I added some CRUD ops for prompts. For a test, I made a very short one, something like "analyze this as a <profession>". The output was pretty much comparable, except that the output on the longer prompt contained (quite a few) references to literal parts of that prompt. It wasn't incoherent, but it was as if that model (gemini 2.5, btw) has a basic response for the task it extracts from the prompt, and merges the superfluous bits in. It would seem that, at least for this particular task, the model cannot (easily) be made to "think" differently.
conception · 8h ago
I’d have to hunt, but there is evidence that using the vocabulary of an expert versus a layman will produce better results. Which makes sense, since spaces where people talk "normally" are more likely to be incorrect, whereas places where people speak in the professional vernacular are more likely to be correct. And the training will associate them together in their spaces.
pjm331 · 12h ago
Yeah I had this experience today where I had been running code review with a big detailed prompt in CLAUDE.md but then I ran it in a branch that did not have that file yet and got better results.
matt3210 · 8h ago
At what point does it become programming in legalese?
efitz · 4h ago
It already did. Programming languages already are very strict about syntax; professional jargon is the same way, and for the same reason: it eliminates ambiguity.
wslh · 14h ago
Same here: it starts with a relatively precise need, keeping a roadmap in mind rather than forcing one upfront. When it involves a technology I'm unfamiliar with, I also ask questions to understand what certain things mean before "copying and pasting".
I've found that with more advanced prompts, the generated code sometimes fails to compile, and tracing the issues backward can be more time consuming than starting clean.
lodovic · 12h ago
I use specs in markdown for the more advanced prompts. I ask the LLM to refine the markdown first and add implementation steps, so I can review what it will do. When it starts implementing, I can always ask it to "just implement step 1, and update the document when done". You can also ask it to verify if the spec has been implemented correctly.
heisenburgzero · 7h ago
In my own experience, if the problem is not solvable by an LLM, no amount of prompt "engineering" will really help. The only way to solve it would be by partially solving it (breaking it down into sub-tasks / examples) and letting it run its miles.
I'd love to be wrong though. Please share if anyone has a different experience.
TheCowboy · 7h ago
I think part of the skill in using LLMs is getting a sense for how to effectively break problems down, and also getting a sense of when and when not to do it. The article also mentions this.
I think we'll also see ways of restructuring, organizing, and commenting code to improve interaction with LLMs. And also expect LLMs to get better at doing this, and maybe suggesting ways for programmers to break problems down that it is struggling with.
stets · 7h ago
I think the intent of prompt engineering is to get better solutions quicker, in formats you want. But yeah, ideally the model just "knows" and you don't have to engineer your question
ColinEberhardt · 15h ago
There are so many prompting guides at the moment. Personally I think they are quite unnecessary. If you take the time to use these tools, build familiarity with them and the way they work, the prompt you should use becomes quite obvious.
Disposal8433 · 12h ago
It reminds me that we had the same hype and FOMO when Google became popular. Books were being written on the subject and you had to buy those or you would become a caveman in the near future. What happened is that anyone could learn the whole thing in a day and that was it; no need to debate whether you would miss anything if you didn't know all those tools.
wiseowise · 2h ago
You’re only proving the opposite: there’s definitely a difference between “experienced Google user” and someone who just puts random words and expects to find what they need.
marliechiller · 1h ago
Is there? I feel like google has optimised heavily for the caveman input rather than the enlightened search warrior nowadays
verbify · 7h ago
I certainly have better Google fu than some relatives who are always asking me to find something online.
sokoloff · 11h ago
I think there are people for whom reading a prompt guide (or watching an experienced user) will be very valuable.
Many people just won't put any conscious thought into trying to get better on their own, though some of them will read or watch one thing on the topic. I will readily admit to picking up several useful tips from watching other people use these tools and from discussing them with peers. That's improvement that I don't think I achieve by solely using the tools on my own.
awb · 2h ago
Many years ago there were guides on how to write user stories: “As a [role], I want to be able to do [task] so I can achieve [objective]”, because it was useful to teach high-level thinkers how to communicate requirements with less ambiguity.
It may seem simple, but in my experience even brilliant developers can miss or misinterpret unstructured requirements, through no fault of their own.
TheCowboy · 6h ago
It's at least useful for seeing how other people are being productive with these tools. I also sometimes find a clever idea that improves what I'm already doing.
And documenting the current state of this space as well. It's easy to have tried something a year ago and assume the tools are still bad at it.
I also usually prefer researching an area before reinventing the wheel by trial and error myself. I appreciate when people share what they've discovered with their own time, as I don't always have all the time in the world to explore it as I would if I were still a teen.
orochimaaru · 12h ago
A long time back, for my MS in CS, I took a science of programming course. Its approach to verification has helped me craft prompts when I do data engineering work. Basically:
Given input (…) and preconditions (…) write me spark code that gives me post conditions (…). If you can formally specify the input, preconditions and post conditions you usually get good working code.
1. The Science of Programming, David Gries
2. Verification of Concurrent and Sequential Systems
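As a sketch of what such a specification-style prompt can look like (the DataFrame, columns and conditions here are invented, not from the course or the comment):

    # Specification-style prompt: input, preconditions, postconditions.
    def spec_prompt(inputs, preconditions, postconditions):
        pre = "\n".join("- " + p for p in preconditions)
        post = "\n".join("- " + p for p in postconditions)
        return ("Write PySpark code.\n\n"
                "Input: " + inputs + "\n\n"
                "Preconditions:\n" + pre + "\n\n"
                "Postconditions:\n" + post + "\n")

    print(spec_prompt(
        inputs="a DataFrame `orders` with columns (order_id, customer_id, amount, ts)",
        preconditions=["amount is non-null and >= 0", "ts is a UTC timestamp"],
        postconditions=[
            "one row per customer_id with total_amount = sum(amount)",
            "rows sorted by total_amount descending",
        ],
    ))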
bsoles · 9h ago
There is no such thing as "prompt engineering". Since when did the ability to write proper and meaningful sentences become engineering?
This is even worse than "software engineering". The unfortunate thing is that there will probably be job postings for such things and people will call themselves prompt engineers for their extraordinary abilities for writing sentences.
NitpickLawyer · 3h ago
> Since when did the ability to write proper and meaningful sentences become engineering?
Since what's proper and meaningful depends on a lot of variables. Testing these, keeping track of them, logging and versioning take it from "vibe prompting" to "prompt engineering" IMO.
There are plenty of papers detailing this work. Some things work better than others (telling it "do this and this" works better than "don't do this" - the pink elephants thing). Structuring is important. Style is important. Order of information is important. Re-stating the problem is important.
Then there are quirks within model families. If you're running an API-served model you need internal checks to make sure the new version still behaves well on your prompts. These checks and tests are "prompt engineering".
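For example, a minimal sketch of such a regression check (call_llm is a placeholder, and the test cases and expected intents are invented):

    # Prompt regression check: pin known inputs and expectations, re-run them
    # whenever the prompt text or the served model version changes.
    import json

    CASES = [
        ("Refund please, my card was charged twice.", "refund"),
        ("How do I reset my password?", "account_help"),
    ]

    def call_llm(prompt: str) -> str:
        raise NotImplementedError  # stand-in for the real API call

    def test_intent_prompt_still_behaves():
        failures = []
        for text, expected in CASES:
            out = json.loads(call_llm('Classify the intent as JSON {"intent": ...}: ' + text))
            if out.get("intent") != expected:
                failures.append((text, expected, out))
        assert not failures, "prompt regressions: " + repr(failures)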
I feel a lot of people take the knee-jerk reaction to the hype and miss critical aspects because they want to dunk on the hype.
wiseowise · 2h ago
God, do you get off on the word "engineer"? Is it cultural?
SchemaLoad · 3h ago
AI sloperators are desperate to make it look like they are actually doing something.
yowlingcat · 7h ago
I would caution against thinking it's impossible even if it's not something you've personally experienced. Prompt engineering is necessary (but not sufficient) to creating high leverage outcomes from LLMs when solving complex problems.
Without it, the chances of getting to a solution are slim. With it, the chances of getting to 90% of a solution and needing to fine tune the last mile are a lot higher but still not guaranteed. Maybe the phrase "prompt engineering" is bad and it really should be called "prompt crafting" because there is more an element of craft, taste, and judgment than there is durable, repeatable principles which are universally applicable.
yuvadam · 12h ago
Seems like so much over (prompt) engineering.
I get by just fine with pasting raw code or errors and asking plain questions, the models are smart enough to figure it out themselves.
Kiyo-Lynn · 7h ago
At first I kept thinking the model just wasn't good enough, it just couldn’t give me what I wanted.
But over time, I realized the real problem was that I hadn’t figured out what I wanted in the first place.
I had to make my own thinking clear first, then let the AI help organize it.
The more specific and patient I am, the better it responds.
leshow · 13h ago
using the term "engineering" for writing a prompt feels very unserious
vunderba · 10h ago
I came across a pretty amusing analogy back when prompt "engineering" was all the rage a few years ago.
> Calling someone a prompt engineer is like calling the guy who works at Subway an artist because his shirt says ‘Sandwich Artist.’
All jokes aside, I wouldn't get too hung up on the title; the term engineer has long since been diluted to the point of meaninglessness.
https://jobs.mysubwaycareer.eu/careers/sandwich-artist.htm
https://en.wikipedia.org/wiki/Audio_engineer
Because you’ll hurt OP's huge ego. God forbid you put the godly title of ENGINEER near something as trivial as a sandwich.
ozim · 10h ago
Because your imagination stopped at a chat interface asking for funny cat pictures.
There are prompts to be used with APIs and inside automated workflows, and more besides.
dwringer · 11h ago
Isn't this basically the same argument that comes up all the time about software engineering in general?
leshow · 10h ago
I have a degree in software engineering and I'm still critical of its inclusion as an engineering discipline, just given the level of rigour that's applied to typical software development.
When it comes to "prompt engineering", the argument is even less compelling. It's like saying typing in a search query is engineering.
klntsky · 3h ago
Googling pre-LLMs was a required skill. Prompting is not just for search if you build LLM pipelines. Costs can commonly be optimized 2x if you know what you're doing.
theanonymousone · 3h ago
I understand your point, but don't we already have e.g. AWS engineers? Or I believe SAP/Tableau/.. engineers?
kovac · 9h ago
IT is where words and their meanings come to die. I wonder if words ever needed to mean something :p
morkalork · 12h ago
For real. Editing prompts bears no resemblance to engineering at all; there is no accuracy or precision. Say you have a benchmark to test against and you're trying to make an improvement. Will your change to the prompt make the benchmark go up? Down? Why? Can you predict? No, it is not a science at all. It's just throwing shit and examples at the wall in hopes and prayers.
echelon · 12h ago
> Will your change to the prompt make the benchmark go up? Down? Why? Can you predict? No, it is not a science at all.
Many prompt engineers do measure and quantitatively compare.
morkalork · 12h ago
Me too, but it's after the fact. I make a change, then measure; if it doesn't improve, I roll back. But it's as good as witchcraft or alchemy. Will I get gold with this adjustment? Nope, still lead. Try variation #243 next.
a2dam · 12h ago
This is literally how the light bulb filament was discovered.
MegaButts · 11h ago
And Tesla famously described Edison as an idiot for this very reason. Then Tesla revolutionized the way we use electricity while Edison was busy killing elephants.
nexoft · 11h ago
"prompt engineering" ....ouch. I would say "common sense"\
also the problem with software engineering is that there is an inflation of SWE, too much people applying for it for compensation level rather than being good at it and really liking it, we ended up having a lot of bad software engineers that requires this crutch crap.
adamhartenz · 11h ago
"Common sense" doesn't exist. It is a term people use when they can't explain what they actually mean.
marliechiller · 1h ago
Not sure I fully agree - sometimes maybe, but I think in the majority of cases it's used when people feel they don't need to explain exactly what they mean because it should already be obvious to most people.
Example: "Always look when you cross the road" is a snippet of common sense, with failure to heed it potentially resulting in you being hit by a car. Even a 4-year-old wouldn't need the latter explanation, but most people could articulate it if they needed to. It's just a way of speeding up communication.
ozim · 10h ago
Also, how can common sense exist with an LLM?
There is no common sense with it - it is just an illusion.
wiseowise · 2h ago
Presumably it is your common sense.
wiseowise · 2h ago
The less you use “crutches” the better you get, right? Judging by your comment, you don’t use Google, Stack Overflow, public forums (for programming assistance), books, courses, correct?
jjmarr · 11h ago
Markets shift people to where they are needed with salaries as a price signal.
There aren't enough software engineers to create the software the world needs.
the_d3f4ult · 10h ago
>There aren't enough software engineers to create the software the world needs.
I think you mean "to create the software the market demands." We've lost a generation of talented people to instagram filters and content feed algorithms.
ozim · 10h ago
To maintain ;) the software.
ozim · 10h ago
Lots of those "prompt engineering" things would be nice to teach to business people, as they seem to lack common sense.
Like writing out clear requirements.
akkad33 · 15h ago
I'd rather write my own code than do all that
Lich · 7h ago
Yeah I don’t get it. By the time I’m done writing all these prompts to get what I want, refining over and over not to mention the waiting time for the characters to appear on the screen, I could have written what I want myself. I find LLMs more useful as a quick documentation search and for the most basic refactoring and template building.
echelon · 12h ago
Your boss (or CEO) probably wouldn't.
coffeefirst · 9h ago
Writing the code would also save you time over whatever I just read.
vrnvu · 3h ago
We are teaching programmers how to prompt engineer instead of programming…
What a world we live in.
awb · 2h ago
Your native language is now a programming language.
1_08iu · 2h ago
If people feel that they need to learn different language patterns in order to communicate effectively in their native language with an LLM then I'm not sure if I agree. I think that if your native language truly was a programming language then there wouldn't be any need for prompt engineering.
Regardless, I think that programmers are quite well-suited to the methods described in the article, but only in the same way that programmers are generally better at Googling things than the average person; they can imagine what the system needs to see in order to produce the result they want even if that isn't necessarily a natural description of their problem.
wiseowise · 2h ago
What shall we teach them instead? Creating bazillion CRUDs? Serializing/Deserializing JSON?
neves · 14h ago
Any tip that my fellow programmers find useful that's not in the article?
didibus · 14h ago
Including a coding style guide can help the code look like what you want. Also include an explanation of the project structure and the overall design of the code base. Always specify what libraries it should make use of (or it'll bring in anything, or implement stuff a library already provides).
You can also make the AI review itself. Have it modify code, then ask it to review the code, then ask it to address the review comments, and iterate until it has no more comments.
Use an agentic tool like Claude Code or Amazon Q CLI. Then ask it to run tests after code changes and to address all issues until the tests pass. Make sure to tell it not to change the test code.
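A rough sketch of that self-review loop (call_llm stands in for the real model or agent call, and the "NO ISSUES" stopping convention is just an illustrative choice):

    def call_llm(prompt: str) -> str:
        raise NotImplementedError  # stand-in for the real API or agent call

    def review_loop(task: str, max_rounds: int = 3) -> str:
        # Initial implementation; style guide and project structure go in the task text.
        code = call_llm(task)
        for _ in range(max_rounds):
            review = call_llm("Review this code and list concrete issues, or reply "
                              "'NO ISSUES' if there are none:\n\n" + code)
            if "NO ISSUES" in review.upper():
                break
            code = call_llm("Address these review comments without changing the tests:\n" +
                            review + "\n\nCode:\n" + code)
        return code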
taosx · 14h ago
Unless your employer pays for you to use agentic tools, avoid them. They burn through money and tokens like there's no tomorrow.
trcf22 · 12h ago
I found that presenting your situation and asking for a plan/ideas + « do not give me code. Make sure you understand the requirements and ask questions if needed.» works much better for me.
It also allows me to more easily control what the LLM will do and not end up reviewing and throwing away 200 lines of code.
In a Next.js + Vitest context, I try to really outline which tests I want and give it proper data examples so that it doesn't cheat by mocking fake objects.
I do not buy into the whole you’re a senior dev etc. Most people use Claude for coding so I guess it’s engrained by default.
At this point it's a bit dogfooded, so why not just ask the LLM for a good prompt for the project you're working on?
MontagFTB · 7h ago
Is there some way to quantify the effects of these prompt modifications versus not including them? How do we know the models are producing better output?
ofrzeta · 14h ago
In the "Debugging example", the first prompt doesn't include the code but the second does? No wonder it can find the bug! I guess you may prompt as you like, as long as you provide the actual code, it usually finds bugs like this.
About the roles: Can you measure a difference in code quality between the "expert" and the "junior"?
sherdil2022 · 17h ago
So it looks like we all need to have a firm understanding and tailor our prompts now to effectively use LLMs. Isn't this all subjective? I get different answers based upon how I word my question. Shouldn't things be a little bit more objective? Isn't it random that I get different results based upon just wording? This whole thing is just discombobulating to me.
fcoury · 15h ago
And to add to it, here's my experience: sometimes you spend a lot of time on this upfront prompt engineering and get bad results and sometimes you just YOLO it and get good results. It's hard to advocate for a determined strategy for prompt engineering when the tool you're prompting itself is non-deterministic.
Edit: I also love that the examples come with "AI’s response to the poor prompt (simulated)"
prisenco · 15h ago
Also that non-determinism means every release will change the way prompting works. There's no guarantee of consistency like an API or a programming language release would have.
jwr · 7h ago
I find the name "prompt engineering" so annoying. There is no engineering in throwing something at the wall and seeing if it sticks. There are no laws or rules that one can learn. It's not science, and it is certainly not engineering.
dimitri-vs · 7h ago
It's really just technical writing. The majority of tricks from the GPT-4 era are obsolete with reasoning models.
b0a04gl · 14h ago
tighter prompts, scoped context and enforced function signatures. let it selfdebug with eval hooks. consistency > coherence.
m3kw9 · 9h ago
The “you are a world class expert...” prompt to me is more like a superstitious thing a sports player does before they play. I’ve used it and it still gives similar results, but maybe on a good day (that random seed) it will give me superior results.
Avalaxy · 13h ago
I feel like all of this is nonsense for people who want to pretend they are very good at using AI. Just copy pasting a stack trace with error message works perfectly fine, thank you.
yoyohello13 · 12h ago
Seriously, I seem to get good results by just "being a good communicator". Not only that, but as the tools get better, prompt engineering should get less important.
abletonlive · 11h ago
In most HN LLM Programming discussion there's a vocal majority saying LLMs are useless. Now we have this commenter saying all they need to do is vibe and it all works out.
WHICH IS IT?
rognjen · 11h ago
I downvoted your comment because of your first sentence. Your point is made even without it.
groby_b · 13h ago
"State your constraints and requirements well and exhaustively".
Meanwhile, I just can't get over the cartoon implying that a React Dev is just a Junior Dev who lost their hoodie.
bongodongobob · 10h ago
None of this shit is necessary. I feel like these prompt collections miss the point entirely. You can talk to an LLM and reason about stuff going back and forth. I mean, some things are nice to one shot, but I've found keeping it simple and just "being yourself" works much better than a page long prompt.