At Amazon, some coders say their jobs have begun to resemble warehouse work
580 points by milkshakes on 5/25/2025, 11:48:30 AM | 867 comments | nytimes.com
That just strikes me as an odd thing to say. I’m convinced that this is the dividing line between today’s software engineers and tomorrow’s AI engineers (in whatever form that takes - prompt, vibe, etc.) Reed’s statement feels very much like a justification of “if it compiles, ship it!”
> “It would be crazy if in an auto factory people were measuring to make sure every angle is correct,” he said, since machines now do the work. “It’s not as important as when it was group of ten people pounding out the metal.”
Except that the machines doing that work aren’t regularly hallucinating angles, spurious welding joints, etc.
Who's filling that role in this brave new world?
The robot operates deterministically: it has a fixed input and a fixed output. This is what makes it reliable.
Your “AI coder” is nothing like that. It’s non-deterministic on its best day, and it gets everything thrown at it, making it even more of a coin toss. This seriously undermines any expectation of reliability.
The guy’s comparison shows a lack of understanding of either of the systems.
I think this inversion is what a lot of people are missing, or just don't understand (because they don't understand what code is or how it works).
Industrial automation works by taking a rigorously specified design developed by engineers and combining it with rigorous quality control processes to ensure the inputs and outputs remain within tolerances. You first have to have a rigorous spec; then you can design a process for manufacturing a lot of widgets while checking 1 out of every 100 of them for their tolerances.
You can only get away with not measuring a given angle on widget #13525 because you're producing many copies of exactly the same thing and you measured that angle on widget #13500 and widget #13400 and so on and the variance in your sampled widgets is within the tolerances specified by the engineer who designed the widget.
There's no equivalent to the design stage or to the QC stage in the vibe-coding process advocated for by the person quoted above.
I never said it is. The code is the code that controls the robot and makes it behave deterministically.
To put it simply, the chances that an LLM will output the same result every time given the same input are low. The LLM does not operate deterministically, unlike the manufacturing robot, which will output the same door panel every single time. Or as ChatGPT put it:
> The likelihood of an LLM like ChatGPT generating the exact same code for the same prompt multiple times is generally low.
Technically correct is the least useful kind of correct when it's wrong in practice. And in practice the process AI coding tools use to generate code is not deterministic, which is what matters. To make matters worse for the comparison with a manufacturing robot, even the input is never the same. While a robot gets the exact command for a specific motion and the exact same piece of sheet metal, in the same position, a coding AI is asked to work with varied inputs and on varied pieces of code.
Even stamping metal could be called "non-deterministic" since there are guaranteed variations, just within determined tolerances. Does anyone define tolerances for generated code?
That's why the comparison shows a lack of understanding of either of the systems.
First you're assuming a brand new conversation: no context. Second you're assuming a local-first LLM because a remote one could change behavior at any time. Third, the way the input is expressed is inexact, so minor differences in input can have an effect. Fourth, if the data to be operated on has changed you will be using new parts of the model that were never previously used.
But I understand how nuance is not as exciting as using the word WRONG in all caps.
Addressing your comment: there was no assumption or indication on my part that determinism only applies to a new "conversation". Any interaction with an LLM is deterministic for a given seed value, same conversation or not. Yes, I'm talking about local systems, because how are you going to know what is going on on a remote system? On a local system, with a local LLM, if the input is expressed in the same way, the output will be generated in the same way, for the whole token context and so on. That means that, for a given seed value, after "hi" the model may say "hello", then the human's response may be "how ya doin'", and then the model would say "so so, how ya doin'?", and every single time, if the human or agent inputs the same tokens, the model will output the same tokens, for that seed value.

This is not really up for question, or in doubt, or really anything to disagree about. Am I not being clear? You can ask your local LLM or remote LLM and they will certainly confirm that the process by which a language model generates text is deterministic, by definition. Same input means same output; again I must mention that the exception is hardware bit flips, such as those caused by cosmic rays, and that's just to emphasize how very deterministic LLMs are. Of course, as you may know, online providers stage and mix LLMs, so for sure you are not going to be able to see this by playing with ChatGPT, Grok, Gemini, or whatever other online LLMs you are familiar with. If you have a system capable of offline or non-remote inference, you can see for yourself that you are wrong when you say that LLMs are non-deterministic.
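To make that concrete, here is a minimal sketch (assuming a local Hugging Face model such as gpt2 and the transformers library; the model name is just for illustration): fix the seed, feed the same tokens, and on the same hardware you get the same tokens back on every run.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    def reply(prompt: str) -> str:
        set_seed(0)                                    # fixed seed: sampling becomes repeatable
        inputs = tok(prompt, return_tensors="pt")
        out = model.generate(**inputs, do_sample=True, max_new_tokens=20)
        return tok.decode(out[0], skip_special_tokens=True)

    # Same input, same seed, same hardware -> same output, every run.
    print(reply("hi") == reply("hi"))                  # True

Hosted chatbots won't show you this, for the reasons above: you don't control the seed, the sampling settings, or which model variant the provider routes you to.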
When many humans interact with the same model, then maybe the model should try different seed values, and make measurements. When model interaction is limited to a single human, then maybe the model should try different seed values, and make measurements.
An entire generation of devs grew up using unaudited, unverified, unknown-license code, which, at a moment's notice, can be sold to a threat actor.
And I've seen devs try to add packages to the project without even considering the source. Using forks of forks of forks, without considering the root project. Or examining if it's just a private fork, or what is most active and updated.
If you don't care about that code, why care about AI code? Or even your own?
After a month, I can say that the inmates run that whole ecosystem, from the language spec, to the interpreter, to packaging. And worse, the tools for everyone else have to cater to them.
I can see why someone who has never had a stable foundation to build a project on would view vibe coding as a good idea. When you're working in an ecosystem where any project can break at any time because some dependency pushed a breaking minor version bundled with a security fix for a catastrophic exploit, rolling the LLM gacha to see if it can get it working isn't the worst idea.
https://www.youtube.com/watch?v=Uo3cL4nrGOk
(you've probably already seen it--everyone else has. But if not, you're in for a treat)
[0] https://pluralistic.net/2022/04/17/revenge-of-the-chickenize...
[1] https://pluralistic.net/2024/08/02/despotism-on-demand/
It’s been a joke for decades and decades that “engineer” is used to church up any job, including “domestic engineering” (housekeeping/homemaking).
Whether people do, or not, is a different question.
Us?
(Yeah, we’re fucked)
I just finished creating a multiplayer online party game using only Claude Code. I didn't edit a single line. However, there is no way someone who doesn't know how to code could get where I am with it.
You have to have an intuition about the sources of a problem. You need to be able to at least glance at the code and understand when and where the AI is flailing, so you know to backtrack or reframe.
Without that you are as likely as not to totally mess up your app. Which also means you need to understand source control, when to save, and how to test methodically.
For example in your case there is the concept of message routing where a message that gets sent to the room is copied to all the participants.
You have timers, animation sheets, events, triggers, etc. A question that extracts such architectural decisions and relevant pieces of code will help the user understand what they are actually doing and also help debug the problems that arise.
It will of course take them longer, but it is possible to get there.
Hypothetically, if you codified the architecture as a form of durable meta tests, you might be able to significantly raise the ceiling.
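For instance (a hypothetical sketch, not anything from the project above): a layering rule written as an ordinary test, so whatever an agent produces gets checked against the intended architecture on every run. The ui/ and db/ layer names are made up for illustration.

    # Hypothetical "architecture as a durable meta test": fail CI whenever a
    # module in the ui/ layer imports anything from the db/ layer.
    import ast
    import pathlib

    FORBIDDEN = {"ui": {"db"}}  # layer -> layers it must never import

    def test_layering():
        for layer, banned in FORBIDDEN.items():
            for path in pathlib.Path(layer).rglob("*.py"):
                tree = ast.parse(path.read_text())
                for node in ast.walk(tree):
                    if isinstance(node, ast.Import):
                        roots = {alias.name.split(".")[0] for alias in node.names}
                    elif isinstance(node, ast.ImportFrom):
                        roots = {(node.module or "").split(".")[0]}
                    else:
                        continue
                    assert not (roots & banned), f"{path} imports forbidden layer(s): {roots & banned}"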
Decomposing to interfaces seems to actually increase architectural entropy instead of decrease it when Claude Code is acting on a code base over a certain size/complexity.
By "I didn't edit a single line", do you still prompt the agent to fix any issues you found? If so, is that consided an edit?
I think only once did I ever give it an instruction that was related to a handful of lines (There certainly were plenty of opportunities, don't get me wrong).
When troubleshooting I did occasionally read the code. There was an issue with player-to-player matching where it was just kind of stuck, and I gave it a simpler solution (conceptually, not actual code) that worked for the design constraints.
I did find myself hinting/telling it to do things like centralize the CSS.
It was a really useful exercise in learning. I'm going to write an article about it. My biggest insight is that "good" architecture for a current-generation AI is probably different than for humans because of how attention and context work in the models/tools (at least for the current Claude Code). Essentially "out of sight, out of mind" creates a dynamic where decomposing code leads to an increase in entropy when a model is working on it.
I need to experiment with other agentic tools to see how their context handling impacts possible scope of work. I extensively use GitHub Copilot, but I control scope, context, and instructions much tighter there.
I hadn't really used hands-off automation much in the past because I didn't think the models were at a level where they could handle a significantly sized unit of work. Now they can, with large caveats. There also is a clear upper bound with Claude Code, but that can probably be significantly improved by better context handling.
"You read manuals?!?"
"... Yeah? (pause) Wait, you don't?!?!?"
/s
"Teach me how to use Linux [but I hate reading documentation]".
It infuriated me.
From a higher up’s perspective what they do is not that different from vibe coding anyway. They pick a direction, provide a high level plan and then see as things take shape, or don’t. If they are unhappy with the progress they shake things up (reorg, firings, hirings, adjusting the terminology about the end goal, making rousing speeches, etc)
They might realise that they bet on the wrong horse when the whole site goes down and nobody inside the company can explain why. Or when the hackers eat their face and there are too many holes to even say which one they did come through. But these things regularly happen already with the current processes too. So it is more of a difference in degree, not kind.
Your point about management being vibe coding is spot on. I have hired people to build something and just had to hope that they built it the way I wanted. I honestly feel like AI is better than most of the outsourced code work I do.
One last piece, if anyone does have trouble getting value out of AI tools, I would encourage you to talk to/guide them like you would a junior team member. Actually "discuss" what you're trying to accomplish, lay out a plan, build your tests, and only then start working on the output. Most examples I see of people trying to get AI to do things fail because of poor communication.
Building the thing may be the primary objective, but you will eventually have to rework what you've built (dependency changes, requirement changes,...). All the craft is for that day, and whatever goes against that is called technical debt.
You just need to make some tradeoffs between getting the thing out as fast as possible and being able to alter it later. It's a spectrum, but instead of discussing it with the engineers, most executive suites (and their managers) want to hand down edicts from on high.
This is so good I just wanted to quote it so it showed up in this thread twice. Very well said.
Any manufacturing process is subject to quality controls. Machines are maintained. Machine parts are swapped out long before they lead to out-of-tolerance work. Process outputs are statistically characterised, measured and monitored. Measurement equipment is recalibrated on a schedule. 3d printed parts are routinely X-rayed to check for internal residue. If something can go wrong, it sure as hell is checked.
Maybe things that can't possibly fail are not checked, but the class of software that can't possibly fail is currently very small, no matter who or what generates it.
Software isn't like that. Because code is relatively easy to reuse, novelty tends to dominate new code written. Software developers are acting like integrators in at least partly novel contexts, not stamping out part number 100k of 200k that are identical.
I do think modern ML has a place as a coding tool, but these factory like conceptions are very off the mark imo.
On the software side, the THERAC story is absolutely terrifying - you replace a physical interlock with a software-based one that _can't possibly go wrong_ and you get a killing machine that would probably count as unethical for executions of convicted terrorists.
I am a strong proponent of hardware level interlocks for way more mundane things than that. It helps a lot in debugging to narrow down the possible states of things.
A few things on this illusion:
* Any manufacturer will do everything in their power to avoid meeting anything but the barest minimums of standards due to budget concerns
* QA workers are often pressured to let small things fly and cave easily because they simply do not get paid enough to care and know they won't win that fight unless their employer's product causes some major catastrophe that costs lives
* Most common goods and infrastructure are built by the lowest bidder with the cheapest materials using underpaid labor, so as for "quality" we're already starting at the bottom.
There is this notion that because things like ISO and QC standards exist, people follow them. The enforcement of quality is weak and the reach of any enforcing bodies is extremely short when pushed up against the wall by the teams of lawyers afforded to companies like Boeing or Stellantis.
I see it too regularly at my job not to call out this idea that quality control is anything but smoke and mirrors, deployed with minimal effort and maximum reluctance. Hell, it's arguably the reason why I have a job, since about 75% of the machines I walk in the door to fix broke because they were improperly maintained, poorly implemented, or sabotaged by an inept operator. It leaves me embittered, to be honest, because it doesn't have to be this way, and the only reason why it is boils down to greed and mismanagement.
Perhaps this is industry dependent?
In my country’s automotive industry, quality control standards have risen a lot in the past few decades. These days consumers expect the doors and sunroof not to leak, no rust even after 15 years being kept outdoors, and the engine to start first time even after two weeks in an airport carpark.
How is this achieved? Lots of careful quality checking.
For context, I am in the US and in a position to see what goes on behind the scenes in most of the major auto-maker factories and some aerospace, but that's about as far as I can talk about it, since some of them are DoD contractors.
Quality Control is a valuable tool when deployed correctly and, itself, monitored for consistency and areas where improvement can happen. There is what I consider a limp-wristed effort to improve QC in the US, but in the end, it's really about checking some bureaucratic box as opposed to actually making better product, although sometimes we get lucky and the two align.
Can you define the differences between "real" QC and other versions? Does this imply a "fake" QC? Does that mean that our auto and aerospace manufacturers can't hold themselves to the same quality standards as Big Pharma, since both are ultimately trying to achieve the same goal in avoiding the litigation that comes with putting your customers at risk?
Let's not pretend that pharma co's have never side-stepped regulation or made decisions that put swaths of the population in a position to harm themselves.
My argument was dispelling the general idea that just because rules are in place, they are being followed. Believe me, I'd love to live in that world, but have seen little evidence that we do.
Your

> Any manufacturer will do everything in their power to avoid meeting anything but the barest minimums of standards due to budget concerns

set the adversarial bar, and OP was just countering in kind.
The post they are complaining about was a driveby dismissive statement that didn't add anything to the discussion whatsoever.
Please do not conflate the two.
in capitalistic countries, yes
Software doesn’t exactly work the same way. You can make “AI” that operates more like [0,1], but at the end of the day the computer still operates on {0,1}.
Code already lets us automate work away! I can stamp out ten instances of a component or call a function ten times and cut my manual labor by 90%
I'm not saying AI has nothing to add, but the "assembly line" analogy - where we precisely factor out mundane parts of the process to be automated - is what we've been doing this whole time
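A toy example of what I mean (render_widget is made up for illustration): the tenth instance costs nothing once the mundane part is factored into a function.

    # Factor the repeated, mundane part into one function and "stamp out"
    # as many instances as you like.
    def render_widget(widget_id: int) -> str:
        return f"<div class='widget' id='widget-{widget_id}'></div>"

    page = "\n".join(render_widget(i) for i in range(10))  # ten instances, one definition
    print(page)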
AI demands a whole other analogy. The intuitions from automating factories really don't apply, imo.
Here's one candidate: AI is like gaining access to a huge pool of cheap labor, doing tasks that don't lend themselves to normal automation. Something like when manufacturing got offshored to China in the late 20th century
If you're chronically doing something mundane in software development, you're doing something wrong. That was true even before AI.
Sure, if you're stuck in a horrible legacy code base, it's harder. But you can _still_ automate tedious work, given you can manage to put in the proverbial stop for gas. I've seen loads of developers just happily copy paste things together, not stopping to wonder if it was perhaps time to refactor.
I'll admit that assuming it's correct, an AI can type faster than me. But time spent typing represents only a fraction of the software development cycle.
But, it'll take another year or two on the hype cycle for the gullible managers being sold AI to realise this fully.
I spent quite a bit of time as a CTO, and at some point there's a conversation about the business value of refactoring. That's a great conversation to have I think, it should ultimately be about business value, but good code vs bad code is a bit hard to quantify. What I usually reached for is that refactoring brings down lead time of changes, i.e. makes them faster. Tougher story these days I guess :D
I've found that it's very hard for people to conceptualize what else it would be that we're spending our time doing.
But the truth is that the way the computer works is alien, and anything useful becomes very complex. So we've come up with all those abstractions, embedded them in programming languages with which we create more abstractions, trying to satisfy real-world constraints. It's an imaginary world which is very hard to depict to other people. It's not purely abstract like mathematics, nor is it fully physical like mechanics.
The issue with LLMs is that whatever they produce has a great chance of being distorted. At first glance it looks correct, but the more you add to it, the more visible the flaws become, until you're left with a Frankenstein monster.
But to your last part, this is why I think the worst fears I see from programmers (here and in real life) are unlikely to be a lasting problem. If you're right - and I think you are - then the direction things are headed as-is, with increasingly less sophisticated people relying more and more on AIs to build an increasingly large portion of software, is going to result in big messes of unworkable software. But if so, people are going to get wise to that and stop doing it. It won't be tenable for companies to go to market with "Frankenstein monsters" in the long term.
The key is to look through the tumultuous phase and figure out what it's gonna look like after that. Of course this is a very hard thing to predict! But here are the outcomes I personally put the most weight on:
1. AIs might really get good enough that none of us write code anymore, in the same way that it's quite rare to write assembly code now.
In this case, I think entrepreneurship or research will be the way to go. We'll be able to do so much more if software is truly easy to create!
2. We're still writing, editing, and debugging code artifacts, but with much better tools.
In this case, I think actually understanding how software works will be a very valuable skill, as knocking out subtly broken software will be a dime a dozen, while getting things working well will be a differentiator.
Honestly I don't put much weight on the version of this where nobody is doing anything because AI is running everything. I recognize that lots of smart people disagree with me about this, but I remain skeptical.
I don't have much hope for that, because the move from assembly to higher level programming languages is a result of finding patterns that are highly deterministic. It's the same as metaprogramming currently. It's not much about writing the code to solve a problem, but to find the hidden mechanism behind a common class of problems and then solve that instead. Then it becomes easier to solve each problem inside the class. LLMs are not reliable for that.
> 2. We're still writing, editing, and debugging code artifacts, but with much better tools.
I'd put a lot more weight on that, but we already have a lot of tooling that we don't even use (or replicate across software ecosystems). I'd care much more about a nice debugger for Go than LLM tooling. Or a modern Smalltalk.
But as you point out, the issue is not tooling. It's understanding. And LLMs can't help with anything if you're not improving that.
[0]: https://lisp-docs.github.io/cl-language-reference/chap-6/g-c...
I think you and I probably mostly agree on where things are heading, except that just inferring from your comment, I might be more bullish than you on how much AIs will help us develop those "much better tools".
This is one of the reasons I like the movie Hackers - the visualizations are terrible if you take it at face value, but if you think of it as a representation of what's going on inside their minds it works a whole lot better, especially compared to the lines-of-code-scrolling-past version usually shown in other movies/tv.
For anyone who doesn't know what I'm talking about, just the hacking scenes: https://youtu.be/IESEcsjDcmM?t=135
Do you really want your auto tool makers to not ensure the angle of the tools are correct _before_ you go and build 10,000 (misshaped) cars?
I’m not saying we don’t embrace tooling and automation as appropriate at the next level up, but sheesh that is a misguided analogy.
This is, I think very important especially for non-technical managers to grasp (lol, good luck with that).
Do they YOLO the angles of tools and then produce 10,000 misshapen cars? Yes. But do they also sell those cars? Impressively, also yes, at least up until a couple months ago. Prior to Elon's political shenanigans of the last few months consumers were remarkably tolerant of Tesla's QC issues.
They are.
Mechanical engineers measure more angles and measurements than a consultant might guess - it's a standard part of quality control, although machines often do the measuring, with the occasional human sampling as a back-up. You'd be surprised just how much effort goes into getting things correct, such as _packs of kitkats_ or _cans of coke_.
If getting your angles wrong risks human lives, the threat of prosecution usually makes the angles turn out right, but if all else fails, recalls can happen because the gas pedal can get stuck in the driver-side floor carpet.
Assembly-line engineering has your favour that (A) CNC machines don't randomly hallucinate; they can fail or go out of tolerance, but usually in predictable ways and (B) you can measure a lot of things on an assembly line with lasers as the parts roll through.
It was thankfully a crazy one-off that someone didn't check that _the plugs were put back into the door_, but that could be a sign of bad engineering culture.
To someone who used to automate assembly plants, this sounds like a rationalization from someone who has never worked in manufacturing. Quality people rightly obsess over whether or not the machine is making “every angle” correct. Imagine trying to make a car where parts don’t fit together well. Software tends to have even more interfaces, and more failure modes.
I’ve also worked in software quality and people are great at rationalizing reasons for not doing the hard stuff, especially if that means confronting an undesired aspect of their identity (like maybe they aren’t as great of a programmer as they envision). We should strive to build processes that protect us from our own shortcomings.
Don't have to imagine, just walk over to your local Tesla dealership.
The thing that gets me is how everyone is attaching subsidized GPU farms to their workflows, organizations and code bases like this is just some regulated utility.
Sooner or later this whole LLM thing will get monetized or die. I know that people are willing to push below-par work. I didn't know people were ready to put on the leash of some untested new sort of vendor lock-in so willingly, and even argue this is the way. Some may even get the worst of the two worlds and end up on the hook for a new class of sticker shock, pay up, and later have these products fail out from under them, left out to dry.
Someone will pay for these models: either the investors, or users so dependent they'll pay whatever price is charged.
This article (and several that follow) explain his ideas better than this out of context quote.
https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/
The other thing that I have noticed is that '(misplaced) trust erodes controls'. "Hey, the code hasn't broken for 6 months, so let's remove ABC and DEF controls", and then boom goes the app (because we used to test integration, but 'come on - no need for that').
Now.. this is probably the paranoid (audit/sec) in me, but stuff happens, and history repeats itself.
Also.. Devs are a cost center, not a profit center. They are "value enablers", not "value adders". Like everything and everyone else, if something can be replaced with something 'equally effective' and cheaper, it is simply a matter of time.
I feel that companies want to both run for this new gold-rush, while at the same time do it slowly and see if this monster bites (someone else first).
I don’t understand this and I think it would require breaking my brain in order to.
A person pays the company to provide a service.
The developers create and maintain (part of) that service.
The value of the products coming from research and development are not on the spreadsheet everyone is looking at. The cost to develop them is.
If it’s not on the spreadsheet, it doesn’t exist to the people who make the decisions about money. They have orders to cut spending and that’s what they’ll do.
This may sound utterly insane, but business management is a degree and job. Knowledge about what you are managing is secondary to knowing how to manage.
That’s why there is an entire professional class of people who manage the details of a project. They also don’t need to know what they are getting details for. Their job is to make numbers for the managers.
At no point does anyone actually care what these numbers mean for the company as a whole. That’s for executives.
Many executives just look at spreadsheets to make decisions.
But it does mean that the value that software brings isn't directly attributable to investment in software (as far as the business can see). And being more invisible means that it tends to get the shaft somewhat on the business side of things, because the costs are still fully visible.
Audit, Security, IT (internal infra people), cleaning personnel, SPEND money.
Sales, Product Development, MAKE money.
Once the "developers" can charge per-hour to the clients, then we love them because they BRING money. But those 'losers' that slow down the 'sales of new features' with their 'stupid' checks and controls and code-security-this and xss-that, are slowing down the sales, so they SPEND money and slow down the MAKING of money.
Now, in our minds, it is clear that those 'losers' who do the code check 'stuff' are making sure that whoever buys today, will come and buy again tomorrow. But as it has been discussed here, the CEOs need to show results THIS quarter, so fire 20% of the security 'losers' to reduce HR costs, hire 5 prompt engineers to pump out new features, and pray to your favourite god that things don't go boom :)
Meanwhile most CEOs have a golden parachute, so it is definitely worth the risk!
Software works the same way. It’s even more automated than auto factories: assembly is 100% automated. Design is what we get paid to do, and that requires understanding, just like the engineers at Ford need to understand how their cars work.
I haven't touched CAD for a couple of years, but I get the impression that (inevitably) the generative design hype significantly exceeds the current capability.
[0] https://zoo.dev/design-studio
Is it though? It could be interpreted as an acknowledgement. Five years from now, testing will be further improved, yet the same people will be able to take over your iPhone by sending you a text message that you don't have to read. It's like expecting AI to solve the spam email problem, only to learn that it does not.
It's possible to say "we take the security and privacy of our customers seriously" without knowing how the code works. That's the beauty of AI. It legitimizes and normalizes stupid humans without measurably changing the level of human stupidity or the quality or efficiency of the product.
Sold! Sold! Sold!
If you want to draw parallels between software delivery and automotive delivery then most of what software engineers do would fall into the design and development phases. The bit that doesn’t: the manufacturing phase - I.e., creating lots of copies of the car - is most closely modelled by deployment, or distribution of deliverables (e.g., downloading a piece of software - like an app on your phone - creates a copy of it).
The “manufacturing phase” of software is super thin, even for most basic crud apps, because every application is different, and creating copies is practically free.
The idea that because software goes through a standardised workflow and pipeline over and over and over again as it’s built it’s somehow like a factory is also bullshit. You don’t think engineers and designers follow a standardised process when they develop a new car?
It would be crazy for auto factory workers to check every angle. It is absolutely not crazy for designers and engineers to have a deep understanding of the new car they’re developing.
The difference between auto engineering and software engineering is that in one your final prototype forms the basis for building out manufacturing to create copies of it, whereas in the other your final prototype is the only copy you need and becomes the thing you ship.
(Shipping cadence is irrelevant: it still doesn’t make software delivery a factory.)
This entire line of reasoning is… not really reasoning. It’s utterly vacuous.
This is not true from a manager's perspective (indoctrinated by Taylorism). From a manager's perspective, development is manufacturing, and underlying business process is the blueprint.
I don't think it's bs. The pipeline system is almost exactly like a factory. In fact, the entire system we've created is probably what you get when cost of creating a factory approaches instantaneous and free.
The compilation step really does correspond to the "build" phase in the project lifecycle. We've just completely automated it by this point.
What's hard for people to understand is that the bit right before the build phase that takes all the man-hours isn't part of the build phase. This is an understandable mistake, as the build phase in physical projects takes most of the man-hours, but it doesn't make it any more correct.
Apparently the Cybertruck did not. And that sort of speaks for itself.
The vast majority of software, especially since waterfall methods were largely abandoned, has the planning being done at the same time as the "execution". Many edge cases aren't discovered until the programmer says "oh, huh, what about this other case that the specs didn't consider?" And outsourcing then became costly because that feedback loop for the spec-refinement ran really slowly, or not at all. Spend lots of money, find out you got the wrong thing later. So good luck with complex, long-running projects without deeply understanding the system.
Alternately, compare to something more bespoke and manual like building a house, where the tools are less precise and more of the work is done in the field. If you don't make sure all those angles are correct, you're gonna get crappy results.
(The most common answer here seems to be "just tell the agent what was wrong and let it iterate until it fixes it." I think it remains to be seen how well "find out everything that is wrong after all the code is written, and then tell the coding agent(s) to fix all of those" will work in practice. If nothing else, it will require a HUGE shift in manual testing appetite. Maybe all the software engineers turn into QA engineers + deployment engineers.)
Any data on that? I see everyone trying to outsource as much as they can. Sure, now it is moving toward AI, but every company I walk into has tens to thousands of FTEs in outsourcing countries.
I see most Fortune 1000 companies here doing some type of agile planning/execution which is in fact more waterfall. The people here in the west are more management and client-facing; the rest is 'thrown over the fence'.
Outsourcing means laying off your FTEs and shoving the entire project over to a WITCH consulting shop.
And all because the MBAs yearn for freedom from dependencies and thus reality.
That couldn't be any further from the truth.
Take a decent enterprise CNC machine (look in youtube, lots of videos) that is based on servos, not the stepper motor amateur machines. That servo-based machine is measuring distances and angles hundreds of times per second, because that is how it works. Your average factory has a bunch of those.
Whoever said that should try getting their head out of their ass at least every other year.
Not really. More like, if the fopen works fine, don't bother looking how it does so.
SWE is going to look more like QA. I mean, as a SWE if I use the webrtc library to implement chat and it works almost always but just this once it didn't, it is likely my manager is going to ask me to file a bug and move on.
Yeah, but there's still something checking the angles. When an LLM writes code, if it's not the human checking the angles, then nothing is, and you just hope that the angles are correct, and you'll get your answer when you're driving 200 km/h on the Autobahn.
They need to read “the code is the design”. When you are cutting the door for the thousandth car, you are past design and into building.
For us building is automatic - take code turn into binary.
The reason we measure the door is that we are at the design stage and you need to make sure everything fits.
Agree, this comment makes no sense.
So you are telling me, your AI code passes a six sigma grade of quality control?
I have a bridge to sell you. No, Bridges!
It's even funnier when you consider that Toyota has learned how bad of an idea lean manufacturing/6-Sig/5S can be thanks to the pandemic - they're moving away from it in some degrees, now.
What Toyota realized in 2011 due to the Fukushima disaster however is that this completely fails for computer chips because the pipeline is too long. So they kept JIT for steel, plastic parts etc but for microcontrollers, power supply chips, etc they stockpile large quantities.
Six Sigma came out of Motorola, who still practice it today.
It was then adopted by the likes of GE, before finding its way into the automotive and many other manufacturing industries.
>Harper Reed is an American entrepreneur
Ah, that's a more realistic indicator of his biases. Either there's some misunderstanding, or he's incorrect, or he's being dishonest; it's my job to make sure the code that I ship is correct.
This is along the same lines as why I don't expect the syntactical features to break. I assume they work. You have to accept that some abstractions work, and build on top of them.
We will reach some point where we will have to assume AI is generating the correct code.
You are correct tho. I do think that we are approaching the point of "If it compiles, ship it"
Without proper understanding of CS, this is what we get. Lack of rigour.
This is frankly the most idiotic statement I have heard about programming yet.
That quote really misrepresents his writing.
And just as the proliferation of factories abroad has made it cheap and easy for entrepreneurs to manufacture physical products, the rise of A.I. is likely to democratize software-making, lowering the cost of building new apps. “If you’re a prototyper, this is a gift from heaven,” Mr. Willison said. “You can knock something out that illustrates the idea.”
Why do they cite bloggers who relentlessly push this technology rather than interviewing a representative selection of programmers?
To be clear, I'm sure some critical software engineering jobs will be replaced by AI though. Just not in the way that zealots want us to think. From the looks of it right now, AI is far from replacing software engineers in terms of competence. The utter incompetence was in full public display just last week [1]. But none of that will matter to greedy corporate executives, who will prioritize short-term cost savings. They will hop from company to company, personally reaping the benefits while undermining essential systems that users and society rely on with AI slop. That's part of the reason why the C-suites are overhyping the technology. After all, no rich executive has faced consequences for behaving this way.
[1]: https://news.ycombinator.com/item?id=44050152
I recently had the (dis)pleasure of fixing a bug in a codebase that was vibe coded.
It ends up being a collection of disorganized business problems converted into code, without any kind of structure.
Refinements are implemented as super-narrow patches, resulting in complex and unorganized code, whereas a human developer might take a step back to try and extract more common patterns.
And once you reach the limit of the context window you're essentially stuck, as the LLM can no longer keep track of its patches.
English (or all spoken human language) is not precise enough to articulate what you want your code to do, and more importantly, a lot of time and experience precedes code that a senior developer writes.
If you want to have this senior developer 'vibe' code, then you'll need to have a way to be more precise in your prompts, and be able to articulate all learnings from your past mistakes and experience.
And that is incredibly heavy. Remember, this is opposite from answering 'why did you write it like this'. This is an endless list of items that say 'don't do this, but this, in this highly specific context'.
The problem with vibe coding is more behavioral, I think: the person most likely to jump on the bandwagon to avoid writing some code themselves is probably not the one thinking about long-term architecture and craftsmanship. It’s a laziness enhancer.
Reading "couldn't" as "you would technically not be able to do it because of the complexity or intricacy of the problem": how did you guarantee that the change offered by the AI made proper sense and didn't leave out critical patterns that were too complex for you to detect?
Your comment makes it sound like you're now dependent on AI to refactor again if dire consequences are detected way down the line (in a few months for instance), and the problem space is already just not graspable by a mere human. Which sounds really bad if that's the case.
There's some fidelity loss, but it works for text, because there's quite often so much redundancy.
However, I'm not sure this technique could work on code.
But if someone else were to do it for me I would gratefully review the merge request.
For me personally, the activation energy is higher when reviewing: it’s fun to come up with the solution that ends up being used, not so fun to come up with a solution that just serves as a reference point for evaluation and then gets immediately thrown away. Plus, I know in advance that a lot of cycles will be wasted on trying to understand how someone else’s vision maps onto my solution, especially when that vision is muddy.
Submitting LLM barf for review and not reviewing it should be grounds for termination. The only way I can envision LLM barf being sustainable, or plausible, is if you removed code review altogether.
What does it mean to have to review your own code as a separate activity? Do many people contribute code that they wrote but… never read?
> Submitting LLM barf
Oh right…
If you need an example, it's easy to add a debugging/logging statement like `console.log`, but if the coder committed and submitted the log statement, then they clearly didn't review the code at all, and there are probably much bigger code issues at stake. This is a problem even without LLMs.
If person A committed code that looks bad to person B, it just means person A commits bad code by the standard of person B, not that person A “does not review own code”.
Maybe it’s a subjective difference, same as you could call someone “rude” or you could say the same person “didn’t think before saying”.
The concept and design were by that point iterated on, so it doesn’t happen that I need to rewrite a significant amount of code.
If you can do it using an LLM in a few hours however, suddenly making your life, and the lives of everyone that comes after you, easier becomes a pretty simple decision.
AI is a sharp tool, use it well and it cuts. Use it poorly and it'll cut you.
Helping you overcome the activation barrier to make that refactor is great, if that truly is what it is. That is probably still worth billions in the aggregate, given git is considered billion-dollar software.
But slop piled on top of slop piled on top of slop is only going to compound all the bad things we already knew about bad software. I have always enjoyed the anecdote that in China, Tencent had over 6k mediocre engineers servicing QQ then hired fewer than 30 great ones to build the core of WeChat...
AI isn't exactly free and software maintenance doesn't scale linearly
While that is true, AI isn’t going to make the big difference here. Whether the slop is written by AI or 6000 mediocre engineers is of no matter to the end result. One might argue that if it were written by AI at least those engineers could do something useful with their lives.
There's a difference between not intellectually understanding something and not being able to refactor something because if you start pulling on a thread, you are not sure what will unravel!
And often there just isn't time allocated in a budget to begin an unlimited game of bug testing whack-a-mole!
My confidence in LLMs is not that high and I use Claude a lot. The limitations are very apparent very quickly. They're great for simple refactors and doing some busy work, but if you're refactoring something you're too afraid to do by hand then I fear you've simply deferred responsibility to the LLM - assuming it will understand the code better than you do, which seems foolhardy.
A lot of repetitive slight variations on the same easy to describe change sounds pretty good to ask an LLM to do quickly.
I’m very much able to understand the result and test for consequences, I wouldn’t think of putting code I don’t understand in production.
Not necessarily. It may have refactored the codebase in a way that is more organized and easier to follow.
> how did you guarantee that the change offered by the AI made proper sense and didn't leave out critical patterns that were too complex for you to detect ?
Perhaps extensive testing? Or a prayer.
It only looks effective if you remove learning from the equation.
It's the wrong tool for the job, that's what it is.
I kind of view this use case as an enhanced code linter
I quit my last very good job because I became so fed up with this situation. It was bad enough before the CTO started using LLMs. It was ABSURD after.
(This was a YC company that sold quickly after I quit, at a loss, presumably because they didn't know what else to do)
Someone holding a hammer by the head and failing at getting the nail in doesn't mean a hammer is a bad tool for nailing.
No part of what I said suggested the tool wasn't capable of being a useful tool.
Do you expect an incoming collapse of modern society?
That's the only case where LLMs would be "not there anymore." Even if this current hype train dies completely, there will still be businesses providing LLM inference, just far fewer new models. Thinking LLMs would be "not there anymore" is even more delusional than thinking the programmer as a job would cease to exist due to LLMs.
It's effective on things that would take months/years to learn, if someone could reasonably learn it on their own at all. I tried vibe coding a Java program as if I was pair programming with an AI, and I encountered some very "Java" issues that I would not have even had the opportunity to get experience in unless I was lucky enough to work on a Fortune 500 Java codebase.
AI doesn't work in a waterfall environment. You have to be able to rapidly iterate, sometimes in a matter of hours, without bias and/or emotional attachment.
What do you mean? There is no difference between waterfall or agile in what you do during a few hours.
Throw a big-context-window model like Gemini at it to document the architecture, unless good documentation exists. Then modify that document and use it to drive development of new or modified code.
Many big waterfall projects already have a process for this - use the AI instead of marginally capable offshore developers.
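A rough sketch of that flow, assuming the google-generativeai Python client and an illustrative model name and source layout (both may differ from whatever you actually use): concatenate as much of the codebase as fits and ask the long-context model for an architecture document to iterate on.

    # Rough sketch, under the assumptions above: build one big prompt from the
    # source tree and ask for an architecture document to drive later changes.
    import pathlib
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")          # assumption: key supplied via config/env
    model = genai.GenerativeModel("gemini-1.5-pro")  # assumption: a large-context model

    sources = "\n\n".join(
        f"# FILE: {p}\n{p.read_text(errors='ignore')}"
        for p in pathlib.Path("src").rglob("*.py")   # adjust the glob to your codebase
    )

    prompt = (
        "Document the architecture of this codebase: major modules, "
        "data flow, external interfaces, and invariants.\n\n" + sources
    )
    print(model.generate_content(prompt).text)

The resulting document, checked into the repo and edited by hand, is then the artifact that drives new or modified code, as described above.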
Are you being ironic?
Trying to get some third party hardware working with raspi
The hardware provider provides 2 separate code bases with separate documentation but only supports the latest one.
I literally had to force-feed the newer code base into ChatGPT, and then feed in working example code to get it going, else it constantly referenced the wrong methods.
If I had just kept going code / output / repeat it might eventually have stumbled on the answer, but it was way off.
Recently, I was listening to a podcast about realistic real-world uses for an LLM. One of them was a law firm trying to review details of a case to determine a strategy. One of the podcasters recoiled in horror: "An LLM is writing your briefs?" They replied: "No, no. We use it to generate ideas. Then we select the best." It was experts (lawyers, in this case) using an LLM as a tool.
"Technology acts as an amplifier of human intentions"
...so, if someone is just doing a sloppy job, AI-assisted vibe coding will enable them to do it faster.
In any case, that's a very rare and specific corner case you mention; a dev can go a decade or two (or a lifetime or two) without ever encountering a similar requirement. If it is supposed to be a convincing argument for the almighty LLM, it certainly isn't.
You can have AI generate almost anything, but even AI is limited in understanding requirements; if you cannot articulate what you want very precisely, it's difficult to get "AI" to help you with that.
And it’s better in the long run.
https://news.ycombinator.com/item?id=44050152
Part of the reason is that the reporters are themselves not in the trenches coding to be skeptical of the claims they hear.
Exactly. And this is why I feel like we are going to go full circle on this. We've seen this cycle in our industry a couple times now:
"Formal languages are hard, wouldn't it be great if we could just talk in English and the computer would understand what we mean?" -> "Natural languages are ambiguous and not precise, wouldn't it be great if we could use a formal langue so that the computer can understand precisely what we mean?"
The eternal hope is that someday, somehow, we will be able to invent a natural language way of communicating something precise like a program, and it's just not going to happen.
Why do I think this? Because we can't even use natural language to communicate unambiguously between intelligent people. Our most earnest attempt at this, the law, is so fraught with ambiguity there's an entire profession dedicated to arguing in the gray area. So what hope do we have controlling machines precisely in this way? Are future developers destined to be equivalent to lawyers, who have to essentially debate the meaning of a program before it's compiled, just to resolve the ambiguities? If that's where this ends up, I will be very sad indeed.
My take is more nuanced.
First, there is some evidence [1] that human language is neither necessary nor sufficient to enable what we experience as "thinking".
Second, our intuition, thinking etc are communicated via natural languages and imagery which form the basis for topics in the humanities.
Third, from communication via natural language slowly emerges symbolism and formalism, which codifies intuitions in a manner that is operational and useful.
As an example, socratic dialog was a precursor to euclidean geometry which operationally codifies our intuitions of space around us in a manner which becomes useful.
However, formalism is stale, as there are always new worlds we experience which cannot be captured by any formalism. The genius of the human brain, which is not yet captured in LLMs, is to be able to create symbolisms of these worlds almost on demand.
ie, if we were to order in terms of expressive power, it would be something like:
1) perception, cognition, thinking, imagination
2) human language
3) formal languages and computers codifying worlds experienced via 1) and 2)
Meanwhile, there is a provocative hypothesis [2] which argues that our "thinking" process lies outside computation as we know it.
[1] https://www.nature.com/articles/s41586-024-07522-w
[2] https://www.amazon.com/Emperors-New-Mind-Concerning-Computer...
[3] https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
Future developers? You sound like you've never programmed in C++.
What we operationally mean by "precise" involves formalism; i.e., there is an inherent contradiction between precision and natural languages.
I think the closest might be the constructed language "Ithkuil". Learning it is....difficult, to put it mildly.
https://ithkuil.net/
this matches the description of every codebase (except one) I came across in my 30-year career
One thing especially, is the loss of knowledge about the codebase. While there was always some stackoverflow-coding, when seeing a weird / complicated piece of code, I used to be able to ask the author why it was like that. Now, I sometimes get the answer "idk, its what chatgpt gave me".
"Code increases in complication to the first level where it is too complicated to understand. It then hovers around this level of complexity as developers fear to touch it, pecking away here and there to add needed features."
http://h2.jaguarpaw.co.uk/posts/peter-principle/
Seems like I'm not the first one to notice this either:
https://nigeltao.github.io/blog/2021/json-with-commas-commen...
LLMs will produce code with good structure, as long as you provide that architecture beforehand.
I can only accept that as true if I also accept the fact that the vast majority of jobs won’t be on a good team
I've noticed this over the last decade where tech people (of which I am one) have considered themselves above the problems of ordinary workers such as just affording to live. I really started to notice this in the lead up to the 2016 election where many privileged people did not recognize or just immediately dismissed the genuine anger and plight of working people.
This dovetails into the myth of meritocracy and the view that not having enough money or lacking basic necessities like food or shelter is a personal, moral failure and not a systemic problem.
Tech people in the 2010s were incredibly privileged. Earnings kept going up. There was seemingly infinite demand for our services. Life was in many ways great. The pandemic was the opportunity for employers to rein in runaway (from their perspective) labor costs.
Permanent layoff culture is nothing more than suppressing wages. The facade of the warm, fuzzy Big Tech employer is long gone. They are defense contractors now. Google, Microsoft or Amazon are indistinguishable from Boeing, Lockheed Martin and Northrop Grumman.
So AI won't immediately replace you. It'll start by 4 engineers with AI being able to do the job that was previously done by 5. Laying off that one person saves that money directly but also suppresses the wages of the other 4 who won't be asking for raises. They're too afraid of losing their jobs. Then it'll be 3. Then 2.
A lot of people, particularly here on HN, are going to find out just how replaceable they are and how aligning with the interests of the very wealthiest was a huge mistake. You might get paid $500K+ a year but you are still a worker. Your interests align with nurses, teachers, baristas, fast food workers and truck drivers, not the Peter Thiels of the world.
In the future no one will have to code. We'll compile the business case from UML diagrams!
But these groups also don't have strong unions and generally don't have the class consciousness you are talking about, especially as the pay increases.
The "middle class" is propaganda we've been fed for decades by our governments, the media and the very wealthy. It's just another way to pit workers against one another, like how the flames of white supremacy were fanned after the slaves were freed so poor whites wouldn't socially align with freed slaves. It's why politics now tends to focus on socially divisive issues rather than economics, like abortion, LGBTQIA+ people, migrants, Islamophobia, etc.
Doctors are workers. Lawyers are workers. Professional athletes are workers.
They could buy/build a decent home in a safe neighborhood, had decent health care, good schools for their kids, disposable income for leisure.
Now, every single fucking productivity gain is going for the finance overlords.
And it's an understandable impulse, but at some point you'd think people would learn instead of being mesmerized by the promise of slightly better treatment by the higher classes in exchange for pushing down the rest of the working class.
Now it's our turn as software engineers to swallow that bitter pill.
They have already learned, fortunately the entire 20th century was devoted to this. That is why any worker with even a little bit of brain and ability to learn perceives socialists as the main threat to his well-being.
Of course, it's not like one needs to be a Marxist or even any other sort of socialist to see the whole "employers are screwing their employees" thing, since I do doubt that many employees working on Amazon's tech like AWS and whatnot would subscribe to the ideology. In fact, that has become a fairly popular position even outside of traditionally leftist politics.
Only leftist politics will actually be able to address the issues. Of course people will just mindlessly scream "sOcIaLiSm!" at anything and everyone, but it ultimately doesn't matter. We still have to be optimistic, once enough people have the vocabulary to think about their increasingly parlous circumstance, things will change in our direction inevitably.
It’s like comparing a machine gun to a matchlock pistol and saying they’re the same thing.
My boss wanted me to put blockchain in everything (so he could market this to our clients). I printed a small sign and left it on my desk. Every time someone asked me about blockchain, I would point to the sign: "We don't need blockchain!"
AI is extremely new, impressive and is already changing things.
How can that be in any way similar to crypto and VR?
It isn't true.
The tool he's bragging about mostly went about changing JDK8 to JDK17 in a build config file and, if you're lucky, tweaking log4j versions. 4,500 years my ass. It was more regex than AI.
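A minimal sketch of the kind of regex-driven bump being described (the file name and patterns here are hypothetical illustrations, not Amazon's actual tooling):

    import re
    from pathlib import Path

    # Hypothetical Gradle build file; Amazon's internal agent is not public.
    build_file = Path("build.gradle")
    text = build_file.read_text()

    # Bump the JDK source/target compatibility from 1.8 to 17.
    text = re.sub(r"(sourceCompatibility\s*=\s*)['\"]?1\.8['\"]?", r"\g<1>'17'", text)
    text = re.sub(r"(targetCompatibility\s*=\s*)['\"]?1\.8['\"]?", r"\g<1>'17'", text)

    # Tweak log4j-core to a patched 2.x release.
    text = re.sub(r"(org\.apache\.logging\.log4j:log4j-core:)[\d.]+", r"\g<1>2.17.1", text)

    build_file.write_text(text)

That's roughly the scale of change being talked about, which is why the 4,500-year figure deserves skepticism.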
It doesn't matter if it's not working perfectly yet. It's only a few years since GPT-3 came out.
Having a reasonable chat with a computer was also not possible at all. AI right now already feels like another person.
You bought gasoline for the first cars in a pharmacy.
The Facebook of AI is coming and it's going to be much, much worse, and it'll still have the problems you describe.
Nonetheless, they're different things.
Just because they struggle with that makes all the other things wrong? Underwhelming?
I don't think so.
> Just because they struggle with that makes all the other things wrong?
No, it makes their grandiose claims very tenuous. But for me, yes I'm very underwhelmed by what AI is capable of. I think it's a useful tool, like a search box, but that's it at this point. That they are branding it as a Ph.D. researcher is just blowing smoke.
Not only did Claude respond correctly, I also wrote it in German.
And I often switch between English and German for single words, just because AI is already that good.
This wasn't even imaginable a few years back. We had 0 technology for that.
I created single-page HTML/JavaScript pages for small prototypes within 10 minutes with very little reprompting.
I can literally generate an image of whatever I want with just text or voice.
I can have a discussion with my smartphone with voice.
Jules detected my programming language and how to build and run my project, just from me writing 'generate a GitHub action for my project and add a basic test to it'.
I don't get how people can't be amazed by these results.
What do you normally do when you try it out?
You're missing the point though. If it worked for you when you tried it, that's great, but we know these tools are stochastic, so that's not enough to call it "solved". That I tried it and it didn't work tells me it's not. And that's actually worse than it not working at all, because it leads to the false confidence you are expressing.
The strawberry example highlights that, like all abstractions, AI is a leaky one. It's not the oracle they're selling it as; it's just another interface you have to know the internal details of to use properly beyond the basics. Which means that ultimately the only people who are really going to be able to wield AI are those who understand what I mean when I say "leaky abstraction". And those people can already program anyway, so what are we even solving by going full natural language?
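One of those internal details is tokenization: the model sees chunks of characters, not individual letters, which is presumably why letter-counting trips it up. A rough way to peek at that, assuming the third-party tiktoken package (my own example, not something from this thread):

    # Assumes `pip install tiktoken`; exact splits vary by encoding/model.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("strawberry")
    # The word arrives as a few opaque chunks, not as a sequence of letters.
    print([enc.decode([t]) for t in tokens])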
> I created single-page HTML/JavaScript pages for small prototypes within 10 minutes with very little reprompting.
You can achieve similar results with better language design and tooling. Really this is more of an indictment of Javascript as a language. Why couldn't you write the small prototype in 10 minutes in Javascript?
> I can literally generate an image of whatever I want with just text or voice.
Don't get me wrong, I enjoy doing this all the time. But AI images are kind of a different breed, because art doesn't ever have to be "right". But still, just like generative text, AI images are impressive only up to the point where details start to matter. Like hands. Or character consistency. Multiple characters in a scene. Even the latest models still have the problem of not doing what you tell them to do.
> I can have a discussion with my smartphone with voice.
Yes, talking to your phone is a neat trick, but is it a discussion? Interacting with AI often feels more like an improv session, where it just makes things up and rolls with the vibes. Every time (and yes, I mean every time) I ask it about topics I'm an expert in, it gets things wrong. Often subtly, but importantly, still wrong. It feels less like a discussion and more like I'm being slowly gaslit. This is not impressive, it's frustrating.
I think you get my point. Yes, AI can be impressive at first, but when you start using it more and more you start to see all the cracks. And it would be fine if those cracks were being patched, or even if they just acknowledged them and the limitations of LLMs. But they won't do that; instead what they are doing is pretending they don't exist, pretending these LLMs are actually good, and pushing AI into literally everything and anything. They've been promising exponential improvements for years now, and still the same problems from day 1 persist.
And we now live at such a fast pace that it feels like AI should be perfect already, which of course it's not.
I also see that an expert can leverage AI a lot better, because you still need to know enough to build good things without it.
But AI progresses very fast and has already achieved things we previously had no answer for at all.
What it can already do is still very much in line with what I assume/expect from the current hype.
Llama made our OCR 20% better just by using it. I prompted plenty of code snippets and stuff, which saved me time and was fun. Including Python scripts for a small ML pipeline, and I normally don't write Python.
It's the first tech demo ever which people around me just 'got' after I showed it to them.
It's the first chatbot I have seen which doesn't flake out after my second question.
ChatGPT pushed billions into new compute. Blackwell is the first chip to hit the lower estimate for brain-scale compute performance.
It changed the research field of computational linguistics.
I believe it's essential to keep a very close eye on it and to try things out regularly, otherwise it will suddenly roll over us.
I really enjoy using Claude.
Edit: and ELI5 on research papers. That's so, so good.
They would point to the increasing model sizes and the ability to solve various benchmarks as proof of that exponential rise, but things have really tapered off since then in terms of how rapidly they are changing. I was promised a Ph.D.-level researcher. A 20% better result on your OCR is not that. That's not to say it isn't a good thing and an improvement, but it's not what they are selling.
> Blackwell is the first chip to hit the lower estimation for brain compute performance.
What does that even mean? That's just more hype and marketing.
But hey, it seems I can't convince you of my enthusiasm regarding AI. That's fine; I'll still play around with it often and look forward to its progress.
Regarding your researcher: NotebookLM is great and you might need to invest a few hundred bucks to really try out more.
We will see where it's going anyway.
Indeed, which is exactly what's happening with AI.
It stopped being fun to code around that point. Too many i's to dot to make management happy.
However, this has to be substantive code review by technical peers who actually care.
Unit tests also need to be valued as integral to the implementation task. The author writes the unit tests. It helps to guide the thought process. You should not offload unit tests to an intern as "scutwork".
If your code is sloppy, a stylistic mess, and unreviewed, then I am going to put it behind an interface as best I can, refer to it as "legacy", rely on you for bugfixes (I'm not touching that stinking pile), and will probably try to rally people behind a replacement.
And frankly, giving ownership to code ("it's yours") has, also in my experience, been an excellent way to give an engineer "pride of ownership". No one wants to have that "stinking pile".
In my experience unit tests have been simply a questionable yardstick management uses to feel at ease shipping code.
"98% code coverage with unit tests? Sounds good. It must be 98% bug-free — ship it."
Not that anyone ever exactly said that but that's essentially what is going on.
Code reviews seem to bring out the code-nazi types. Code reviews then break any goodwill between team members.
I preferred when I would go to a co-worker's office and talk through an issue, and come up with a plan to solve the problem. We trusted each other to execute.
When code reviews became a requirement it seemed to suck the joy out of being a team.
Too frequently a code review would turn into "here's how I would have implemented it therefore you're wrong, rewrite it this way, code does not pass."
Was that a shitty code reviewer? Maybe. But that's just human nature — the kind of behaviors that code "gate keeping" invites.
Once upon a time (my career spanned 26 years) code reviews and unit tests were alien. I enjoyed my job much more then.
It's hard, though, to keep code reviews from turning into style and architecture reviews. Code reviewing for style is subjective. (And if someone on the team regularly produces very poor quality code, code review isn't the vehicle for fixing that.) Code reviewing for architecture is expensive; settle on a design before producing production-ready code.
My $0.02 from the other side of the manager/programmer fence.
"All changes are reviewed by a subject matter expert who verifies that the change meets the planned activity as described in the associated issue/ticket. Changes are not deployed to production environments until authorized by a subject matter expert after review. An independent reviewer evaluates changes for production impact before the change is deployed..."
If you are doing code review already, might as well leverage it here.
That's exactly right. After said process, it comes down to trusting your coworkers to execute capably. And if you don't think coworker is capable, say so (or if they're junior, more prudently hand them the simpler tasks — perhaps code review behind their back and let it go if the code is "valid" — even if it is not the Best Way™ in your opinion.)
I usually give up, stop arguing why it is actually better than the way the gatekeepers suggest and redo my code, less time wasted.
Coverage results don't mean much. It takes some experience to know how easy it is to introduce a major bug with 100% test coverage (a quick sketch of this follows below).
Tests are supposed to tell you if a piece of code works as it should. But I have found no good way of judging how well a test suite actually works. You somehow need tests for tests and to version the test suite.
An overemphasis on testing also makes the code very brittle and a pain to work with. Simple refactorings and text changes require dozens of tests to be fixed. Library changes break things in weird ways.
Unless I know the system being tested, I take no interest in tests.
There are clever, hacky ways to test systems that will never pass the "100% coverage" requirement and are a joy to work with. But they're the exception.
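A minimal illustration of that first point (my own sketch, not code from any project discussed here): the test below executes every line, so coverage tools report 100%, yet the bug sails through.

    # apply_discount has a sign error, but the weak assertion never notices it.
    def apply_discount(price: float, percent: float) -> float:
        # Bug: should be price * (1 - percent / 100); this overcharges instead.
        return price * (1 + percent / 100)

    def test_apply_discount():
        # Runs every line of apply_discount, so line coverage is reported as 100%.
        assert apply_discount(100.0, 10.0) > 0

Coverage tells you the code was exercised; it says nothing about whether the assertions are worth anything.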
It's synonymous with LOC. Don't bring it up anywhere.
However, usually no one really cares about testing at all. Also many projects are internal, not critical, etc.
Move fast, break things, deliver crappy software.
Tests that are written to comply with a policy that requires that all components must have a unit test, and that test must be green, could be good. Often, they are just more bullshit from the bullshit factory that is piling up and drowning the product, the workers, the management, and anyone else who comes too close.
I feel that it’s still correct to call both of these things tests, because in isolation, they do the same thing. It’s the structure they’re embedded in that is different.
This codebase was quick to deploy at Microsoft. We'd roll out every week, compared to other projects that took months to roll out with a tangled release pipeline.
Anyways, I left for a startup and most of this fast-moving team dissolved, so the Ruby codebase has been cast aside in favor of projects with tangled release pipelines.
https://techcommunity.microsoft.com/blog/adforpostgresql/how...
In a decade and a half we had very few issues, all easy to handle, and the app has its own clustering via Hazelcast, so it's pretty robust with minimal resources. Simply nothing the business could point a finger at and complain about. Since it was mostly just me, it was a pretty low-cost solution that could be bent to literally any requirement pretty quickly.
Come 2025, it's now part of an agile team and agile efforts, it all runs on OpenShift (which adds nothing good but a lot of limitations), and we waste maybe 0.5-1 man-day each week just on various agile meetings that add zero velocity or efficiency. In fact we are much slower (not only due to agile: the technology landscape has become more and more hostile to literally any change, friction for anything is massive compared to a decade ago, and there is nothing I can do about that).
I understand being risk averse against new unknown stuff, but something that proved its worth over 15 years?
Well, it ain't my money being spent needlessly. I don't care, and I find life fulfillment completely outside of work (the only healthy approach for devs in places like banking megacorps). But smart or effective it ain't.
It's really frustrating; I'm all for some bit of code not needing a test, but it should be because the code doesn't need to be unit tested. Breaking unit testing, not knowing how to fix it, and removing all testing is not a good reason.
Yes, of course, some rockets may explode (almost 10 soon), or some people may have accidents, but that's OK from their perspective.
Unfortunately, management often dictates this for all engineers.
As an employee at a company with a similar attitude, I cannot agree more with this.
A burning need to dominate in a misguided attempt to fill the gaping void inside
Broken and hurting people spreading their pain as they flail blindly for relief
Our world creaks, cracks splintering out like spider thread
The foundations tremble
In a way it's only fair. Automation has made a lot of jobs obsolete or miserable. Software devs are a big contributor to automation, so we shouldn't be surprised that we are finally managing to automate our own jobs away.
Yeah the consistent "reporting" of "status" on "stand-ups" where you say some filler to get someone incapable of understanding what it is that you're doing off your back for 24 more hours has consistently been one of the most useless and unpleasant parts of the job.
This sucks for the 50% or so who are like you, but there's another 50% who won't really get much done otherwise, either because they don't know what to do and aren't self-motivated or capable enough to figure it out (common) or because they're actively cheating you and barely working (less common)
Idk I barely ever work with people who are like this, and if people become like this, it's usually obvious to everyone that it's happened and they get a talking to in the office then get shown the door
The mediocre unmotivated person is dragging down the other, killing their motivation. You'd be better off without them even if you couldn't replace them.
In my experience it is human nature to think you are doing something that people around you can't do or don't understand. "The graveyard is full of irreplaceable people," as the old saying goes. Sometimes the people you report to are morons, but if you consistently report to a moron, it's time for introspection. There's more you can do than just suffer through that. One place to start is to have some charity for the people you work with.
I am not special and make no claims of it; I am entirely replaceable and I'd make no claims to the contrary.
This has nothing to do with me or anyone like me, and everything to do with the "adult daycare" style of project managers.
I'm tired of re-iterating to non-technical project managers the status of tickets, why things are "blocked", or why the ask isn't feasible given the constraints, over and over again. Time is a flat circle.
If they understood the problem scope better, such questions would not arise. I know this from experience.
The majority of them are completely stateless and I'll repeat things daily for weeks on end, explaining the same things over and over again, while they make 0 effort to "unblock" issues.
I've had one good project manager in my career who advocated for his technical staff and understood both the project and the business deeply; he was invaluable and a pleasure to work with.
I've had many many others that served no tangible purpose whatsoever.
My frustration is ostensibly there is a purpose for these jobs beyond employing people with the role of "attending meetings"; I've rarely seen it.
But it's never true. Team A depends on Team B, who is busy with work for Team C, and none of these teams are talking to each other because they're too busy writing code. Team D just lost two people and can't make the date that they promised, which sets Teams E and F back a few months unless we can figure it out. Or they're behind because they up and decided to do a big refactoring in the middle of the project without telling anyone. Or people just estimated poorly, like orders-of-magnitude poorly, and while the marketing team is ready, and the trade shows are scheduled, and the factory is ramping the device that the software should be flashed on, but the software won't be ready for another three months.
I empathize with engineers since I was once one, and can understand why some of them see us as adversarial. We tend to interact with them in places that Software Engineers hate, like in meetings and standups and via "update" E-mail blasts. Or we're sending them JIRA tickets which they also hate. I do my best to shield my teams from these things that I know they don't like, but sometimes they have to happen.
But I fall short of declaring the 1990s or 2000s or 2010s the glory days and saying that now things suck. I think part of it is nostalgia bias. I can think of a job where I spent 4 years and list all the good parts of the experience. But I suspect I'm glossing over a lot of mediocre or negative stuff.
At any rate I still like the work today. There are still generally hard challenges that you can overcome, people that depend on you, new technologies to learn about. Generically good stuff.
I guess these strategies boil down to having some MBA on top versus an engineer who has no board of MBAs to bow down to. I strive to stay with privately owned companies for this reason, but of course these are less loud on the internet, so you can easily miss them while jobhunting.
I crave novelty and have a love for bad technology. I was an early nodejs adopter and loved ES4, but the newer versions of the language are too easy to use lol!
The weird thing about this is, many developers wanted this. They wanted the theater of Agile, JIRA tickets, points etc.
I'm in the same boat, and I still need to squeeze out another 10 years or so, but I'm personally working on multiple side projects so I can get out of this boring, mundane shit.
From my integrations POV, eBay was ahead of their time with their data structure and pushed for deprecation fast so as not to keep the debt. Amazon, on the other hand, only looks more modern by instantly acquiring new market fields and then throwing a lot of money at facading over the mess. Every contact there, like key account managers, was usually pushed for numbers; this has nothing to do with coders being coders.
Bosses always look for ways to instantly measure coders' output, which is just a short-sighted way of thinking. My coworkers were measured by lines of code, obviously. I wonder how you measure great engineering.
So no, this has not changed: you can still work uninterrupted on stuff for months or years if you want and skip these places, maybe even having proved over your career that your previous designs stay stable for years to come.
Eventually everyone was expected to understand a good deal of the code they were working on. The analyst and the coder became the same person.
I'm deeply skeptical that the kind of people who enjoy software development are the same kind of people who enjoy steering and proofing LLM-generated code. Unlike the analyst and the coder, this strikes me as a very different skill set.
indeed. people generally hate foreign/alien code, or rather - love their style too much. it is not hard to recognize this pattern - ive seen it with colleagues, with my students, with some topnotch 10x-coders back in the day. so proofing is a skill one perhaps develops by teaching others do things right, but is not something most people entertain about.
on the other hand, people who lack time and patience to implement complex stuff may benefit from this process. particularly if they are good code-readers, and some seasoned devs become such people. i can see little chance they wont be using llms to spit code out.
but the two groups largely don't overlap and are as different as astronomers and astronauts.
The real software engineering role, with architecture, customer management, discovery phase, risk analysis and all the other kind of stuff, not yet.
Reading and debugging slop code is not the same thing, not even close.
Not everyone gets to code the next ground breaking algorithm at some R&D department.
Most programming tasks are rather repetitive, and in many countries hardly anyone looks up to software developers; it is just another blue-collar job.
And in many cultures, if you don't go into management after about five years, it is usually seen as a failure to grow in your career.
I don't see how that's possible. Wouldn't such a norm result in something like a 7:1 ratio of managers to engineers (i.e., assuming a 40-ish year career, the first 5 years spent as an engineer and the remaining 35 as a manager)? For team managers, I've generally seen around a 1:10 ratio of managers to engineers. So a 7:1 ratio of managers to engineers just doesn't seem plausible, even including non-people-leaders in management.
The mindset, mentality, and culture required to build new software for an ambiguous problem are different from the mentality needed to produce boilerplate code or maintain an existing codebase. The latter is pure execution; the former is more like R&D.
What does that mean?
It's these sorts of jobs that will be replaced by AI and a vibe coder, which will cost much less because you don't need as much experience or expertise.
Seeing Like a State by James Scott
https://en.wikipedia.org/wiki/Seeing_Like_a_State
Explains a lot of the confusing stuff I've experienced, in that eureka sort of way.
Like, they hadn't realized they were turning humans into compilers for abstract concepts, yet now they are telling humans to get tf out of the way of AI
I'm not sure what: "'deskilling' to something reliable through bureaucratic procedures" ... means.
I'm the Managing Director of a small company and I'm pretty sure you are digging at the likes of me (int al) - so what am I doing wrong?
From the 19th century onwards, businesses have wanted to replace high-skilled craftsmen with low-skilled workers who would simply follow a repeatable process. A famous example is Ford. Ford didn't want an army of craftsmen, who each knew how to build a car. He wanted workers to stay at one station and perform the same single action all day. The knowledge of how to build a car would be in the system itself, the individual workers didn't have to know anything. This way, the workers have limited leverage because they are all replaceable, and the output is all standardized.
You can see this same approach everywhere. McDonalds for instance, or Amazon warehouses, or call centers.
We give a shit.
I hypothesize that it takes some period of time for vibe-coding to slowly "bit rot" a complex codebase with abstractions and subtle bugs, slowly making it less robust and more difficult to maintain, and more difficult to add new features/functionality.
So while companies may be seeing what appears to be increases in output _now_, they may be missing the increased drag on features and bugfixes _later_.
Imagine a future where the prompts become the precious artifact. Where we regularly `rm -rf *` the entire code base and regenerate it from the original prompts, perhaps when a better model becomes available. We stop fretting about code structure or hygiene because it won't be maintained by developers. Code is written for readability and auditability. So instead of finding the right abstractions that allow the problem to be elegantly implemented, the focus is on allowing people to read the code and audit that it does what it says it does. No DSLs, just plain readable code.
Because if you let every stakeholder add their requirements to the prompts, without checking that it doesn't contradict others, you'll end up with a disaster.
So you need someone able to gather all the requirements and translate it in a way that the machine (the AI) can interpret to produce the expected result (a ephemeral codebase).
Which means you now have to carefully maintain your prompts to be certain about the outcome.
But if you still need someone to fix the codebase later in the process, you need people with two sets of skills (prompts and coding) when, with the old model, you only needed coding skills.
They burn a pile of money. Maybe it’s their life savings, their parents’ money or their friends or some unlucky investors. But they go in thinking they’re going to join the privileged labourers without putting any of the time to develop the skills and without paying for that labour. GenAI the whole thing. And they post about it on socials like they’re special or something.
Then boom. A month later. “Can everyone stop hacking me already, I can’t make this stop. Why is this happening?”
Mostly I feel sorry for the people who get duped into paying for this crap and have their data stolen.
There’s like almost zero liability for messing around like this.
People may worry that the "ASM" codebase will bit-rot and no one will be able to understand the compiler output or add new features to the ASM codebase.
and probably, to some extent, all involved (depending on how delusional they are) know that it's simply an excuse to do layoffs (replaced by offshoring) by artificially "raising the bar" to something unrealistic for most people
Technology always automates jobs away. I had a dedicated database systems team 25 years ago that was larger than an infrastructure team managing 1000x more stuff today. Dev teams are bloated in most places, today.
The working class is those who own no significant means of production and thus must sell their labor at whatever price the market bears.
That the market for SE labor is good (for the workers) doesn't mean SEs don't need to work to earn money.
Within the bourgeoisie, you can distinguish:
Petite bourgeoisie: small business owners, shopkeepers
Haute bourgeoisie: industrialists, financiers
Managerial class (in some frameworks): high-paid non owners who control labor
Within the proletariat, you can distinguish:
Lumpenproletariat[3]: unemployed, precarious
Skilled laborers vs unskilled laborers
Labor aristocracy: better-paid, sometimes ideologically closer to capital
https://en.wikipedia.org/wiki/Proletariat [1]
https://en.wikipedia.org/wiki/Bourgeoisie [2]
https://en.wikipedia.org/wiki/Lumpenproletariat [3]
> Within the bourgeoisie, you can distinguish: [...] Managerial class [...] non owners who control labor
Contradiction?
Managerial Class != Bourgeoisie
This was a loose usage of the term “bourgeoisie”, meant in the sociological rather than economic sense. Sorry.
In late capitalism, the PMC (Professional Managerial Class) occupies a weird liminal space:
Economically they're proletariat
Socially/culturally they're aligned with bourgeois values
Politically they often act in defense of capital (because of career dependency)
Hence: managerial class != bourgeoisie, even if they act like them or aspire to be them.
The distinction here is "do you get your money from owning assets or do you get your money from working" because where you get your money is where you get your incentives and the incentives of owning are opposite the incentives of working in many important regards.
The economy is inhabited by people who work for a living but it is controlled by people who own things for a living. That's not a conspiracy theory, it's the definition of capitalism. If you do not own things for a living and do not know people who do, spend some time pondering "the control plane." It should seem like an alien world at first, but it's an alien world with a wildly outsize impact on your life and it behooves you to understand it in broad strokes even if you aren't trying to climb into it.
I have a bridge to sell you if you're interested. Let me know.
Is the concept of "intellectual capital" a figment of my imagination, or a flaw of the traditional class identifiers? Or both?
What’s “the factory” for software? Our equivalent of the factory is the organization we work in, and our connection to the people who turn our software into money.
You can write software at home by yourself, just like you can do machining on your own. But there are a lot of steps between making software, or machining, and turning that output into money. By working for a company, you can have other people handle those steps. The tradeoff is that this structure is something owned by someone else.
Therein lies the propaganda.
The means of producing an AI is a huge data centre for training. Having a lot of money but no chips of any kind wouldn't get you an AI. We had money 10 years ago, but it did not turn into AIs.
American popular usage departs from traditional economic, role-based class analysis in favor of income-based "class" terminology. Instead of defining the middle class as the petit bourgeois who apply their own labor to their own capital in production (or who otherwise have a relation to the economy that depends on both applying labor and owning non-financial means of production), it defines the middle class as the segment around the median income, which sits almost entirely within the traditional working class.
This is a product of a deliberate effort to redefine terminology to impair working class solidarity, not some kind of accident.
https://en.wikipedia.org/wiki/Professional%E2%80%93manageria...
I agree. Let's hope it will have much less impact on the 21st century.
Core economic concepts are things like elasticity of demand, market equilibrium, externality, market failure, network effect, opportunity cost and comparative advantage, and AFAIK Marx and his follower had essentially no role in explaining or introducing any of those.
If this seems like an absurd comparison, I would suggest reading both Philosophiæ Naturalis Principia Mathematica and Das Kapital.
Is the average salary of a software engineer closer to that of:
A) a coal miner with a $60,000/y salary, or
B) Elon Musk: $381,000,000,000?
Sources: - https://www.indeed.com/career/software-engineer/salaries
- https://www.glassdoor.com/Salaries/coal-miner-salary-SRCH_KO...
- https://finance.yahoo.com/news/elon-musk-rich-6-8-170106956....
Is the average amount of properties (1-2) owned by a software developer closer to those of:
A) a worker at Walmart
B) Mark Zuckerberg?
> Well, for the time SEs are substantially better paid than working-class jobs, they are not the working class.
That's what they have been telling SEs to prevent us from unionizing :) All so they can put you where you stand now, when they (wrongly) think they don't need you. SE jobs are working class jobs, and have always been.
I don't think it makes sense to group the "don't have to go to work anymore" people with the "can buy anything" people, but they don't have a lot in common with the working class, either.
To what extent are SWEs working class? I guess that depends on how many of them still have to go to work. A salary of $350k certainly puts you on the road to never having to work again.
You use that word, it does not mean what you think it means when you immediately talk about income.
Do you own businesses, land, investments, and other forms of capital that generate wealth independently of direct labor? Enough wealth so you don't have to work for the foreseeable future? Is this the average software engineer for you (in or out of a union)? Because that's the definition of NOT being a part of the working class.
You're comparing those in unions, who are more likely to be in the video game industry or in government, to FAANGs like Amazon, where you work day and night for a 4-year vesting offer that pays out very little until the 4th year, and where the average worker lasts less than 2 years.
Please tell me a union SWE shop that has better benefits and comp than I get?
Also "working class" has a historical, social component, by which programmers are certainly not included.
When ownership of things can keep you and your family fed, clothed, and sheltered in comfort, you're part of the owning class. If it can't, you're a worker. Maybe a skilled worker, maybe a highly paid worker, maybe a worker that owns a lot of expensive 'tools' or credentials, or licenses, or a company truck, or a trillion worthless diluted startup shares that have an EV of ~$50, but you're still a worker.
If you're the owner of a small owner-operated business, and the business will go kaput because you didn't show up to do work, you're also a worker. The line is drawn at the point where most of your contribution to it is your own (or other peoples') capital, not your own two-hands labour.
Now, if you're some middle manager, with no meaningful ownership stake - you are still a worker. You still need to go to work to get your daily bread. It just so happens that your job is imposing the will of the owners on workers underneath you.
If you have somewhere between $5M and $10M in a HCoL American city, you are probably no longer working class insofar as you could quit, get on ACA healthcare, and rent a decent house or buy / mortgage a decent house and live a pretty comfortable life indefinitely. But you're on the very low end of not-working-class and are living a modest life (if you quit and stop drawing a salary).
If you have under that threshold (in a big expensive US city), you are probably still working class.
A lot of software engineers can get to $5M-$10M range in like 10-30 years depending on pay and savings rate. But also a lot of software engineers operate their budgets almost paycheck-to-paycheck, and will never get there.
$5-$10M for 30 years, but only if you save every penny in between? Wow, that's very impressive and totally life-changing! Reminds me of the story how millennials are not able to afford buying a house because of avocado toast!
Over 50% of that $160k floor (presumably a conservative withdrawal rate on the low end of that range) is eaten up by just housing and private or ACA insurance.
So your housing costs for a 1k-2k sqft spot, all in (rent, or if owning, then insurance, upkeep, etc.), run you something like $50k+; your health insurance for two people on ACA costs you like $40k/yr assuming kids are out of the picture (more if not); and you have a decent chunk left over to spend on living a decent life, but not egregiously large amounts. You're not flying first class, probably not taking more than 2 big vacations a year, driving a nice but not crazy expensive car, etc.
If you elect to leave the big expensive US city, then of course you can do it with substantially lower amounts (especially so long as you can swing ACA subsidies and are willing to risk your "not-working-class" existence on the whims of the govt continuing that program).
Obviously if you live in some place (read: everywhere except the US?) where the floor for medical costs of two people not working but still having income from capital isn't around $40k/yr, then the amount can go wayyyy down.
Oh, really? Is that why both "white-collar worker" and "blue-collar worker" contain the word "worker"? Working class is everyone who has to work for their money. Can most programmers, on a whim, afford to never work again? An average programmer's salary is 2x the average coal miner's. A CEO is nowadays paid 339 times the salary of their average worker https://www.epi.org/publication/ceo-pay-in-2023/.
Programmers are just one prolonged sickness or medical debt away from being destitute, the same as every other member of the working class. Lawyers, teachers, doctors, programmers, those are all working class, along with agriculture, mining, utilities and all people who have to get up and work for their daily bread and a roof over their head. Sure, there is a discrepancy in pay, but it's not as glaring as it is between a worker and the oligarchs like Trump and Elon Musk. The biggest con in society is that you are so far distanced from the obscene wealth of the rich, that it's not in your face to see how little you have and how much they do.
Both the guy in an old Dodge and the guy in the new Tesla are stuck in traffic, and you fail to realize there are people out there right now flying on a private jet for a cocktail. You think the guy living in an apartment is so much different from the guy living in a house in suburbia? How about the guy whose real-estate company bought the whole development and is now cranking up prices?
You make $200k yearly as a welder? Still working class.
You own a small business with 10 workers working for you? Still working class.
You manage a team of devs in a FAANG and are doing alright for yourself? Still working class.
Your parents donated a wing to Yale and own a hotel chain? Not working class.
Your savings account and stocks generate enough for you that you never have to work again? You are not working class.
This is because, wealth-wise, you are still closer to an unemployed person on benefits than to the CEO of a multinational company, and that's a fact.
The objective level of reproduction of labor force is about $2 per day. Cheaper for warm climates, slightly more expensive for cold ones.
So by that logic there is no working class in the US whatsoever because you don't have to work to survive. At all. Maybe half a year in your entire lifetime.
You just choose to spend all your money on things you don't need to survive, that's the only reason you needed to work. But that doesn't make you a worker class any more than Elon Musk becomes a worker class by buying 10 companies like Twitter.
So, using your logic, "You are making more than 50 cents an hour? You're not working class. You don't have to work most of your life to survive yourself or to provide for your children. You're closer to Elon Musk than to workers forced to work for $2 a day to survive."
I also like making random numbers up. Here are other numbers. 420. 1337. 1911.
> So by that logic there is no working class in the US whatsoever because you don't have to work to survive. At all. Maybe half a year in your entire lifetime.
I have no words to express how weak this argument is. The US has MOSTLY working-class people because fewer and fewer people can survive on their salaries.
https://www.cbsnews.com/news/cost-of-living-income-quality-o...
> So, using your logic, "You are making more than 50 cents an hour? You're not working class. You don't have to work most of your life to survive yourself or to provide for your children. You're closer to Elon Musk than to workers forced to work for $2 a day to survive."
I am not sure if this sentence is a troll or comes from genuine misunderstanding. I don't know what to advise. I genuinely chuckled. Here are three numbers, elementary math:
7.25
53
1 600 000
Which two of these numbers are closer to one another? 7.25 and 53, right (I hope)? Well, let's look what those numbers mean:
7.25 - minimum wage https://www.dol.gov/general/topic/wages/minimumwage
53 - average hourly salary of a software engineer: https://www.indeed.com/career/software-engineer/salaries
1 600 000 - average hourly wage of Elon Musk: https://moneyzine.com/personal-finance/how-much-does-elon-mu...
So...who is a software engineer closer in terms of income to? Elon Musk or a minimum wage worker?
("A CEO is nowadays paid 339 the salary of their average worker" you say? If we are nitpicking, that's obviously false; only a tiny, tiny fraction of CEOs are paid that well.)
Aside from that, I'd wager a rather large fraction of HN can easily afford never to work again. This place is crawling with millionaires and they're definitely not embarrassed about it, temporarily or otherwise. Good luck convincing them.
We are nitpicking, and you are wrong:
https://therealnews.com/average-ceo-makes-339-times-more-tha...
https://www.epi.org/publication/ceo-pay-in-2023/
https://www.statista.com/statistics/261463/ceo-to-worker-com...
"In 2022, it was estimated that the CEO-to-worker compensation ratio was 344.3 in the United States. This indicates that, on average, CEOs received more than 344 times the annual average salary of production and nonsupervisory workers in the key industry of their firm."
> Aside from that, I'd wager a rather large fraction of HN can easily afford never to work again. This place is crawling with millionaires and they're definitely not embarrassed about it, temporarily or otherwise. Good luck convincing them.
You can wager whatever you want, but statistically you'd be wrong.
https://www.bbc.com/worklife/article/20240404-global-retirem...
https://www.cbsnews.com/news/retirement-crisis-savings-short...
So, while 350 seems like a small number, these 350 companies employ the largest chunk of the US workforce, and that's why they are the most representative to do the study with.
And that's my point! If you select a random worker in the US, there is a HUGE chance they are employed by Amazon, Walmart or one of the other giants. And there is a HUGE chance their salary is 339 times less than their CEO's.
The boundary between working class and not working class is not at the 99th percentile where you would have it. The diminishing marginal utility of money means you get 90% of the security of being wealthy at 0.1% of the net worth of a billionaire.
Bullshit.
https://edition.cnn.com/2024/07/23/business/inflation-cost-o...
https://ssti.org/blog/large-percentage-americans-report-they...
https://www.cnbc.com/2023/01/18/amid-inflation-more-middle-c...
Between 30 and 70% of Americans can't make ends meet. What used to be called the "middle class" is disappearing, making way for only the 1%-ers and us, the rest. The fact that I drive a Tesla and some guy drives a Dodge doesn't mean we are not both stuck in traffic while some schmuck flies on their private jet to reach their yacht.
Yes, the people who were prone to dying already did so years ago. But the rate of long term disability in every single country is skyrocketing.
The average person has had 4.7 covid infections by now. Now look into the literal thousands of studies of long term effects of that.
Future generations will never forgive us for throwing them under the bus.
Please don't put others down like that on HN. It's mean and degrades the community.
https://news.ycombinator.com/newsguidelines.html
> “It’s more fun to write code than to read code,” said Simon Willison, an A.I. fan who is a longtime programmer and blogger, channeling the objections of other programmers. “If you’re told you have to do a code review, it’s never a fun part of the job. When you’re working with these tools, it’s most of the job.”
> This shift from writing to reading code can make engineers feel as if they are bystanders in their own jobs. The Amazon engineers said that managers have encouraged them to use A.I. to help write one-page memos proposing a solution to a software problem and that the artificial intelligence can now generate a rough draft from scattered thoughts.
> They also use A.I. to test the software features they build, a tedious job that nonetheless has forced them to think deeply about their coding.
Maybe I'm weird, but chasing down bugs is like solving a puzzle. Writing green-field code is maybe a little bit enjoyable, but especially in areas I know well, it's mostly boring now. I'd rather do just about anything than write another iteration of a web form or connect some javascript widget to some other javascript widget in the framework flavor of the week. To some extent, then, working with LLMs has restored some of the fun of coding because it takes care of the tedious part, and I get to solve the interesting problems.
I solve a problem, let the AI mull on the next bit, solve another problem etc.
Like, what exactly is Amazon giving us here? I don't get it. Also, I want to see Andy Jassy start writing some code or fixing issues/bugs for the next 5-10 years and have it reviewed by anonymous engineers before I take any word from him. These sleazy marketing/sales dudes claim garbage about things they do not do or know how to do, but the media picks up everything they say. It is like my grandmother, who never went to school, telling me that brain surgery is slow and needs more productivity or more people will die, and that those doctors need to adapt. The shameless behavior of these marketing/sales idiots, as well as the dark side of the media, has reached a new extreme in this AI bubble.
Meanwhile, I can see from the comments how a lot of HNers totally agree with everything this salesy guy says as if it were holy Bible verse, and my colleague was sending me freaked-out texts about how he is planning to switch careers because the Amazon Super Boss is talking about vibe coding now. He calmed down after I told him these dudes are mostly sales/MBA types who never wrote code or fixed issues, the same way our PO doesn't know the difference between var and const.
The problem with that is that you don't want to get caught doing that directly.
So you need to hire a sleazy offshore firm to launder that for you.
Or (faster and cheaper) use a sleazy "AI" to launder it.
Google had too much class (and comfortable market position) to launch the sleazy "AI" gold rush.
But plenty of upstarts currently see an irresistible opportunity to get rich, by leveraging automated mass copyright violation, while handwaving "AI".
But I don't want to confuse the point, and accidentally be thinking "plagiarism of code isn't important, because code isn't important" (or the limiting factor).
For one reason, code is important, and valuable...
There's plenty of evidence throughout many types of software development to suggest that a lot of money has to be plowed into simply producing bulk of code -- excluding domain or market understanding, requirements analysis, holistic system design, etc.
A typical growth startup wasn't in a hiring frenzy for great analytical minds, but simply to scale up production of raw bulk of code (and connecting together legally-obtained off-the-shelf pieces). And they were often tracking quantitative metrics on that bulk, and consciously aware of how much all those code-monkey salaries were costing on the balance sheet. They paid it because code bulk was very necessary, the way they were working.
There's also plenty of evidence of the popularity of copy&paste reuse for many years. Including from StackOverflow.
Copying entire projects and subsystems has been much-much less common, partly due to the aforementioned not wanting to get caught doing it. But "AI" laundering is paving the way. See all the blog posts and videos about "I created an app/game/site in an hour with AI".
We can also look at startup schools of thought, where execution is widely regarded to be everything (ideas are a dime a dozen). There is plenty of thinking that churning out code is one of the most time-consuming or frequently-blocking parts of execution.
In both startups and established corporate environments, there are normally big lags between when a need for specific code is identified, and when that code can be delivered. Doesn't that look like a limiting factor, or at least it must be important.
I think that's enough reason that we not dismiss the value of code, nor dismiss the importance of plagiarism of code.
Based on my life experience from the last 15 years, you should assume that any "open source" code you leave online is going to be plagiarized heavily. It's unfortunate, but the hard truth for a billion different reasons.
It is simply a shift from shovel-ware to a service model, and does what DRM did for the music businesses.
Personally, I often release under Apache license as both a gift and a curse... Given it is "free" for everyone including the competition, and the net gain for bandits is negative three times the time cost (AI obfuscation would also cost resources.)
The balance I found was not solving corporate "hard" problems for free, and focusing on features that often coincidentally overlap with other community members. Thus, our goals just happen to align, and I am glad if people can make money recycling the work. =3
Why bandits are not as relevant as people assume:
"The Basic Laws of Human Stupidity" (Carlo M. Cipolla)
https://harmful.cat-v.org/people/basic-laws-of-human-stupidi...
In my opinion, less restrictions on all users naturally fragments any attempt to lock-down generic features in the ecosystem.
Submarine Patent attacks using FOSS code as a delivery mechanism is far more of a problem these days, as users often can't purchase rights even if trying to observe legal requirements. The first-to-file laws also mean published works may be captured even if already in the public domain.
It is a common error to think commercial entities are somehow different from the university legal outreach departments. Yet they often offer the same legal peril, and unfeasible economic barriers to deployment.
Best regards, =3
https://www.youtube.com/watch?v=i8ju_10NkGY
I think I see where you're coming from. I'd imagine in some applications there is a very niche/specific implementation on GitHub that is both expensive to reimplement and not permissively licensed. I can see the "AI laundering" angle there.
No, I think Google was just slow this time
But decisions might've come down to what keeps the ad revenue machine fed. Or it might not have made it to decisions.
If they were slow, wasn't it only in being prepared for when the boat got rocked? Until the boat got rocked, they had a monopoly, so why rock it themselves?
I wouldn't write in my promotion packet, "Disrupted our golden cash cow, and put our dominance and entire company at risk, before someone else did, because it would happen eventually, you'll thank me later, pls promo kthx."
That hits hard from my youth (and I'm only in my early 30s). Can we all please actively communicate to people that open source is not a business model, and any large firms promoting it are likely just looking for free work? Like communicate that in an empowering way?
Yeah, it would probably decimate open source contributions in the short term. But honestly, our field would be so much healthier overall in the long term.
The same could be said about copyrights/patents (another point argued over the years). If you get rid of these protections for everyone, the only people that will end up on top are the large companies with lots of resources that can just take your ideas and not compensate you.
But isn't that exactly what we have now with the existing patents and copyrights implementation?
Maybe some people are getting some money, but I know if Amazon (random example company name) violate copyright or a patent of mine, I don't have the resources to take them to court over it much less win.
The firms that will fight them are few and far between, and are priced accordingly.
We all need to wake up to this.
That's not what people are using AI for, though? The typical use case is "write a program that does [insert business problem]". I highly doubt there's a codebase that solves that exact business problem.
ChatGPT was hugely irresponsible in many ways and rests solely on the shoulders of Sam Altman, someone allegedly implicated in various schemes.
The game is to learn new tools quickly and learn to use them better than most of your peers, then stay quietly a bit ahead. But know you have to keep doing this forever. Or to work for yourself or in an environment where you get the gains, not the employer. But "work for yourself" probably means direct competition with others who are just as expert as you with AI, so that's no panacea.
The first is that the entire global codebase starts to become an unstable shitpile, and eventually critical infrastructure starts collapsing in a kind of self-inflicted Y2k event. Experienced developers will be rehired at astronomical rates to put everything back together, and then everyone will proceed more cautiously. (Perhaps.)
The second is that AI is just about good enough and things muddle along in a not-great-not-terrible way. Dev status and salaries drop slowly, profits increase, reliability and quality are both down, but not enough to cause serious problems.
The third is that the shitpile singularity is avoided because AI gets much better at coding much more quickly, and rapidly becomes smarter than human devs. It gets good enough to create smart specs with a better-than-human understanding of edge cases, strategy, etc, and also good enough to implement clean code from those specs.
If this happens, development as we know it would end, because the concept of a codebase would become obsolete. The entire Internet would become dynamic and adaptive, with code being generated in real time as requirements and conditions evolve.
I'm sure this will happen eventually, but current LLMs are hilariously short of it.
So for now there's a gap between what CEOs believe is happening - option 3. And what is really happening - option 1.
I think a shitpile singularity is quite likely within a couple of years. But if there's any sane management left it may just about be possible to steer into option 2.
Just like clothing and textile work. They are getting cheaper and cheaper, true, but even with centuries of automation, they are still getting shittier in the process.
For the first time now I can feel the joy of what I do slipping away from me. I don't even mind my employer capturing more productivity, but I do mind if all the things I love about the job are done by robots instead.
Maybe I'm in the minority but I love writing code! I love writing tests! If I wanted to ask for this stuff to be done for me, I would be a manager!
Now, I'll need to use gen AI to replace the fun part of the job, or I'll be put out to pasture.
It's not a future I look forward to, even if I'm able to keep up and continue working in the industry.
This will work until the capitalists realize the stock market lets plebs do well and they unlist the best companies.
Marx is the originator of precisely none of those thoughts, you couldn't find an economist that disagrees with them. "Unions" is also not the obvious solution for the problems of an individual. Unless you have a specific, existing union with a contact phone number that you're referring to, one that has a track record of making sure that individuals are not affected negatively by technological progress over the span of their entire careers, you're just lazily talking shit.
If it's the solution, so much easier than keeping ahead of the technology treadmill, and it's so obvious to you, it's strange that you haven't set up the One Big Union yet and fixed all the problems.
Right, but the observation here is that many, maybe most, individuals in a particular field are having this same problem of labor autonomy and exploitation. So... unions are pretty good for that.
SWE is somewhat unique in that, despite us being the lowest level assembly-line type worker in our field, we get paid somewhat well. Yes, we're code monkeys, but well-paid code monkeys. With a hint of delusions of grandeur.
> Companies will always try to capture the productivity gains from a new tool or technique
Ie "capitalists" are not rewarded for deploying capital and mitigating risk but for extracting as much from the labor as possible. And yes Marx is absolutely the "originator" of these ideas and yes absolutely you ask any orthodox economist (and many random armchair economists on here) they will deny it till they're blue in the face. In fact you're doing it now :)
https://en.m.wikipedia.org/wiki/Labor_theory_of_value
Edit: it's the same thing that plagues the rest of American civil society: "voting against your [communal] interests because someone convinced you that you're exceptional". I.e., who needs unions when I'm a 10x innovator/developer? Well, I guess enjoy your LLM overlords then, Mr. 10x <shrug>.
That's true. And employers have been consistently the one with more bargaining power, and that's why our wages haven't kept up with the productivity gains. This is also known as productivity-pay-gap.
We, the working class, are supposed to be paid roughly 50% more than we are paid now, if the gains from productivity were properly distributed. But they are not, concentrated to a large extent in the owning class, which is what's unfair and why we, the workers, should unite to get what's rightfully ours.
https://www.epi.org/productivity-pay-gap/
Interesting to see A imagine what B meant, then assert that A believes some metric will always go up because they always saw it go up. It's not clear what they meant, making this response as nonsensical as the comment it replies to. An AI-level exchange.
I know reading skills are in short supply in a group of people that only read code but I thought it was pretty obvious what I was alluding to. But even if it weren't (admittedly you have to have actually read Marx for it to jump out at you) by the time you responded there was another comment that very clearly spells it out, complete with citations.
This kind of statement does not make a point, nor is it appealing to engage with. Good luck with whatever.
> It's not clear what they meant, making this response as nonsensical as the response. An AI level exchange.
Does this kind of statement make a point? Is it appealing to engage with?
I saw this on Reddit and it captured this phenomenon beautifully: you're not a victim here, you're just starting a fight and then losing that fight.
Or, you know, being a member of society, you can find other members of society who feel like you, and organize together to place demands on employers that...you know...stops them from exploiting you.
- That's how you got the weekend: https://www.bbc.com/worklife/article/20200117-the-modern-phe...
- And that's how you got the 8-hour working day: https://en.wikipedia.org/wiki/Eight-hour_day_movement
- And that's how you got children out of the factories: https://en.wikipedia.org/wiki/Child_labour
But, you know, you can always hustle against your fellow SEs, and try to appease your masters. Where others work the bare minimum of 8 hours, why not work 12, and also on the weekend? It's also fine.
Generating shareholder value is very important for the well-being of society! /s
In fact, most people would have a version of this pyramid in order of importance:
1. Personal mental and physical well-being and the same for your loved ones
2. Healthy and functioning society and robust social safety nets, e.g retirement, paid leave, social housing, public transport etc
...
1337. The composition of sand on Mars
...
...
...
...
4206919111337. Shareholder value
"Stonks go up" is not a proxy for success. Success is when pharma executives don't tremble like the villains they are from hearing the name of Mario's little brother. Success is when normal people get from the social contract at least as much as they put in. If we, the people, get less than from the social contract that we put in, as we nowadays observe, I can guarantee you we will break down the social contract, and the ones having most to lose from that are your precious stakeholders.
AI blew up and suddenly I'm seeing seasoned people talking about KLOC like in the 90s.
I think this attitude has been taken too far to the point that people (especially senior+ engineers) end up spending massive amounts of time debating and aligning things that can just be done in far less time (especially with AI, but even without it). And these big companies need to change that if they want to get their productivity back. From the article:
> One engineer said that building a feature for the website used to take a few weeks; now it must frequently be done within a few days. He said this is possible only by using A.I. to help automate the coding and by cutting down on meetings with colleagues to solicit feedback and explore alternative ideas.
At least something good comes out of this.
And these people have become advocates in their respective companies, so everyone ends up following inaccurate claims about productivity improvements. These are the same people quoting Google's CEO, who says that 30% of newly generated code at Google is written by AI, with no way to validate or refute it. Just blindly quote a company with a huge conflict of interest in this field and you'll look smarter than you are.
This is where we're at today. I understand these are great tools, but all I see around me is madness. Anyone who works with these tools on a daily basis knows it: knows what it means, and how misleading they can be.
But hey, everyone says we must use them ...
Exactly, but I would go further: anyone who has worked in a big corp knows that the other, non-"generating new code" part is usually pretty inefficient, and I would argue AI is going to change that too. So there will be much less of that abstract yapping in endless meetings, or fewer people involved in it.
People think all devs work at FAANG-like companies, when there are loads of companies where devs are treated like dirt; now one FAANG company is just catching up with that reality.
Where business people seem to never be measured in any way.
If requirements were shitty - well, the dev team did a bad job - never that the business made stupid decisions on even stupider timelines.
I've tried interviewing at that place, and haven't ever felt that kind of hostility - one guy even wanted me to prove that k-means in 2D is in P. This, I later found, was the topic of a key paper in this niche of ML theory.
Without sacrificing code quality, it only makes coding more productive _if you already know_ what you're doing.
This means that while it has a big potential for experienced programmers (making them push out more good code), you cannot replace them by an army of code monkeys with LLMs and expect good software.
This seems like a crazy solution to a situation.
"Good" software only matters in narrow use cases--look at how much money and how many contracts companies like Deloitte and Accenture make/have.
Sure, you can't "vibe" slop your way to a renderer for a AAA title, but the majority of F500s have no conception of quality and do not care nor know any "better."
One specific endpoint has a function that takes 2-3 minutes to execute. When I was hitting the endpoint I was getting a timeout error. I asked Claude to fix it, and it came up with a huge change in the front end (polling) and backend (celery) to support asynchronous operations.
I instead increased the nginx timeout from 1 minute to 5 minutes. Problem solved without introducing 100x the complexity.
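For reference, the relevant knobs are nginx's proxy timeout directives; a minimal sketch, assuming a standard proxy_pass setup (the location path and upstream name below are made up for illustration):

    location /api/slow-report/ {
        proxy_pass http://app_backend;
        # proxy_read_timeout defaults to 60s; raise it so the 2-3 minute job can finish
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
    }

(If uWSGI or FastCGI sits between nginx and Django, the corresponding uwsgi_read_timeout or fastcgi_read_timeout directives would need the same bump.)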
You're just waiting for this to deteriorate: at some point that 5-minute timeout won't be long enough, and you'll have to come up with something else again.
And more generally, there are multiple issues with properly handling long-running actions - you want a good UX for these actions in your app, they may need to be queued or cancellable, etc. I don't know the exact situation, but I assume this is a small app, so you don't care much about that. But in any serious application where UX is important and you don't want someone to submit 10,000 such requests (or 10,000 users submitting at the same time) to blow up your backend, the sane design is to do this asynchronously, with some other mechanism to manage the actions.
No not really. The engineer’s job is work within the constraints first and when a big change is really warranted, pursue it.
It’s like having a structural issue that can be fixed with a reinforcement, while an automated system suggests demolishing and rebuilding part of the structure altogether. It’s not that clear cut. You may be right, but I wouldn’t jump to conclusions like this.
Disclaimer: not one of those "you didn't prompt right whenever LLMs fail" people, but just something I've noticed in my use of LLMs - they usually don't have the problem context so can't really tell you "should we even be going down this path"
But I was trying to simulate the vibe coding experience where you are completely clueless and rely exclusively on the LLM. It works, but it creates even more work (we call it debt) for the future.
PS I was using Claude within Copilot, so it could definitely see the docker & nginx configuration.
If what you wanted was a simple duct-tape quick fix, I'm sure you could have asked for that and Claude would have recommended what you did: increasing the timeout window, which I guess "fixes" the problem.
What is the risk introduced by a long request that requires you to increase to your code complexity by 100x?
Sure, I could also completely dump Django and build from scratch with provision for 10B concurrent users.
My point is that when coding, business context is needed. I was expecting to see from the model what the root cause was and clear statement of the assumptions used when providing the solution. Instead, what I got was "this is a long request, here is the most likely solution for long requests, enjoy your new 1000 loc and dependencies"
I wonder if this is something you could get Claude to think about by adding some rules about your business context, how it should always prefer the simplest solution no matter what.
"Business context is needed." Then why don't you provide that context? You expect Claude to read your mind? What are we talking about here?
Dude, be humble. If all you want to do is to argue instead of having a productive discussion, this is not the place for you.
Out of curiosity I asked Claude.
“I have an application, it only has a few users, hosted on nginx. There is 1 endpoint which can take up to 5 minutes to execute. What’s the best way to fix this?”
Response:
“Immediate fix… Increase nginx timeout settings” (it then explains how).
I happen to have a php project here hosted on nginx with docker locally for running and testing. So I asked cursor the exact same question. It suggested refactoring to make the call async, or changing nginx. I said change nginx. It updated the config and rebuilt docker compose to make sure the change happened.
It’s always a user issue when it comes to using AI.
Perl programmers are as always ahead of the curve. Writing code that looks like LLM slop before LLMs!
I found this ultra-depressing, and far from what coding was for me - a creative role with great autonomy. Coding was always solving problems, and never felt like some sort of assembly line. But in a lot of companies, this is how it was constructed, with PMs setting up sprints and points, etc.
Similarly, I spoke to a doctor about how much they loved being able to work remotely at their role - with 2-3 days a week where they just responded to email and saw patients over telehealth. It felt very "ticket" focused and not at all the high status doctor role I imagined.
I suspect that both those roles will be lost to AI. If your role is taking a ticket that says "the box should be green, not red", and nothing more, that's the sort of thing that AI is very capable of.
Based on my experience with sprint teams, breaking things down into just a couple hours of work per ticket implies that someone else is doing an enormous amount of prep work to create a dozen tickets per feature. I agree that your friend is performing the work of a development system. I've heard this called "programming in Developer" as opposed to whatever language the developer is using.
It's incredibly frustrating to try and get anything done in a team like that. The reality of most software jobs I've had is that problem discovery is part of it. The only people who know the code well enough to know the problems it has are the developers.
Where do I sign up?
Now we’re going to set up a whiteboard test here and you can demonstrate to us your best copying and pasting.”
“errrr, do I do any actual coding in the job?”
“Well, yes, inasmuch as anyone does these days. It’s mostly copying and pasting though, but hey that’s what coding IS now, right?”
“OK are you ready for your coding test, here it is: what key is COPY? And what key is PASTE?”
“Bad programmers worry about the code. Good programmers worry about data structures and their relationships.” — Linus Torvalds
But the interesting work: thinking about software architecture, about implementation strategies for large projects, or about finding a good debugging strategy for bugs in large code bases, that work is still for me.
(That said I'm all for more dystopian stories so we can get past this AI-replacing-coders fad)
maybe 10 years ago. HR is like a rabid dog, fighting for every dollar.
Typically if you squeeze every penny, you end up with shit. You can do it, but those costs don't just - poof - disappear. They might, and often do, become more difficult to measure. So if you're the measuring guy, it might look great to you.
But also humans are messy apparently, so rather than try to introspect about what went wrong to make themselves a better manager, or improve their hiring process, or figure out how to remove bad managers, they punt: "Humans are messy. Metrics are actionable. Humans aren't metrics. shrug."
Humans can be replaced, sure, but you can't replace a surgeon with a landscaper -- but that's not going to stop some managers from trying!
As for development becoming more factory-like: not sure about that. You have the same leverage as a developer, and you can just go work for the other guy.
No, why would there be? Unless you are spiritual, there isn't any reason any of the physical processes that make up human thought can't be done artificially, probably much more efficiently. Society needs to confront the myth that automation will always open up more jobs that need human labor. It's comforting for people who hate the idea of UBI or other safety nets that people can keep "retraining". Eventually there will be nothing to retrain to (at least nothing of economic value).
Or maybe it is? When I have some time, I'll find out.
https://www.youtube.com/watch?v=J-PTzq1bv9M
They’ve opted into shitty working conditions for years and played a pretty big part in spreading those conditions to other places.
I have found that former Amazon employees can get jobs at other FAANG companies. But smaller/medium companies that compete on culture rather than pay, don’t tend to want to hire those folks.
Why generate that memo at all - maybe scattered thoughts would be enough and the AI could help managers understand them. Then again, if the goal is to replace the engineers and not the other way round, then this makes sense.
We know AIs are fine for bouncing ideas off of, but they’re not senior-level architects yet.
What needs to happen is the education of "junior programmers" needs to be revamped to embrace generative AI. In the same way we embraced google or stackoverflow. We're at a weird transition state where the juniors are being taught to code with an abacus, while the industry has moved on to different tools. Generative AI feels taboo in education circles instead of embraced.
Now there will eventually be a generation of coders just "born" into AI, etc, and they will do great in this new ecosystem. Eventually education will catch up. But the cohort currently coming up as juniors will feel the most pain.
No they don't. They need to actually learn how to use their brains first.
Then the internet came, and it felt like 'cheating'.
Then forums came and it felt like cheating. Then SO, and so on and so forth.
Now AI is eating the [software] world, and to a lot of people, it feels like cheating. I am just amazed at what I can build.
In 10-15 years software will become a commodity, along with books/stories and maybe even music/art. I don't know what it will look like. But darn, I'm excited to be here to experience it.
That’s no longer true. And that democratizes these skills, which I agree could be a great thing.
But do you agree that it’s important for kids to learn to think critically and systematically? Because it’s super hard to stay motivated to learn those things when LLMs do that for you (and you’re too young to tell when they’re doing a bad job of it).
Nah, only the teachers feel this; gen AI is extremely popular among students. In fact, you'd look weird if you didn't use gen AI for schoolwork.
What we need is teaching them how to use gen AI effectively.
The current LLM-based "AI" isn't good enough, and we're already seeing way too many people unable to code without the assistance of an AI agent. Sure, many of these people couldn't code at all before, or only very poorly, but at least their output was limited. We're producing way too much code (and too much content in general). The heavy leaning into AI at this point is going to set us back 10-15 years, for a short-term profit. It's the dotcom bubble all over again in that respect. Way too many unskilled people are producing garbage code, and there aren't enough skilled people around to fix it, because the output volume is too high.
> Google recently told employees it would soon hold a companywide hackathon in which one category would be creating A.I. tools that could “enhance their overall daily productivity,” according to an internal announcement. Winning teams will receive $10,000.
If it's really that great, why the competition? Shouldn't this happen pretty organically? Companies are pushing "AI" hard, way too hard; it's not yet at the point where it can realistically deliver what's expected on the business side. I think even Google developers know this, but hey, $10,000 is $10,000.
I'm very concerned that we are eroding trust, safety and quality long term, for a short-term profit. It's not that LLMs can't be helpful, save money or improve quality, but you have to be a fairly skilled developer to get those advantages safely.
Source: I worked there.
Personally I find babysitting AI quite boring. It's easy to just stop caring about the quality of the output when one just is wage slaving it, and the process itself is no longer satisfying.
I am sick of these verbose articles that boil down to nothing basically. What the f does it mean to "produce code"? Like are we just churning out LoCs daily just for the sake of doing so?
The cog in machine effect has always been there in the corporate world, but somehow it feels like the technique has been refined in the last couple of years.
All these narratives about user freedom, for any purpose etc. are just propaganda these days.
The thing I don't understand is why anyone thinks this is an improvement. I think anyone that's written code knows that writing code is a lot more fun than reading code, but for some reason we're delegating to the AI the actual enjoyable task and turning ourselves into glorified code reviewers. Except we don't actually review the code, it seems, we just go on with bug ridden monstrosities.
I fail to see why anyone would want this to be the future of code. I don't want my job to be reviewing LLM slop to find hallucinations and security vulnerabilities, only to try to tweak the 20,000 word salad prompt.
IME there's an inverse relationship between how excited the person is about AI coding and their seniority as an engineer.
Juniors (and non-coders) love it because now they can suddenly do things they didn't know how to do before.
Seniors don't love it because it gets in their way, causes coworkers to put up low-quality AI-generated code for peer review, and struggles with any level of complexity or nuance.
My fear is that AI code assistants will inadvertently stop people from progressing from Junior --> Senior since it's easier to put out work without really understanding what you're doing. Although I guess I could have said the same thing about Stack Overflow 10 years ago.
A slightly less bleak example is data analysis. When I am analyzing some dataset for work or home, being able to skip over the “rote” parts of the work is invaluable. Examples off the top of my head being: when the data isn’t in quite the right structure, or I want to add a new element to a plot that’s not trivial. It still has to be done with discipline and in a way that you can be confident in the results. I’ll generally lock down each code generation to only doing small subproblems with clearly defined boundaries. That generally helps reduce hallucinations, makes it easier to write tests if applicable and makes it easier to audit the code myself.
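A concrete (hypothetical) sketch of what a small, clearly bounded subproblem looks like in practice: ask only for one pure reshaping helper, then pin it with a quick sanity test before trusting it.

    import pandas as pd

    def widen_monthly(df: pd.DataFrame) -> pd.DataFrame:
        # Pivot long-format (date, metric, value) rows into one column per metric.
        return df.pivot_table(index="date", columns="metric", values="value")

    # quick check on a toy frame before the generated helper touches real data
    sample = pd.DataFrame({
        "date": ["2024-01", "2024-01", "2024-02"],
        "metric": ["visits", "signups", "visits"],
        "value": [100, 8, 120],
    })
    wide = widen_monthly(sample)
    assert wide.loc["2024-01", "signups"] == 8
    assert wide.loc["2024-02", "visits"] == 120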
All of that said, I want to make clear that I agree that your vision of software engineering becoming LLM code-review hell sounds like… well, hell. I'm in no way advocating that the software engineering industry should become that. Just wanted to throw in my two cents.
As a comparison point, I've gone through over 12,000 games on Steam. I've seen endless games where large portions are LLM-generated: images, code, models, banner artwork, writing. None of it is worth engaging with, because every single one is a bunch of disjointed pieces shoved together.
Codebases are going to be exactly the same: a bunch of different components and services put together with zero design principle or cohesion in mind.
I am not a professional programmer, and I tend to bounce between lots of different languages depending on the problem I'm trying to solve.
If I had all of the syntax for a given language memorized, I can imagine how an LLM might not save me that much time. (It would still be helpful for e.g. tracking down bugs, because I can describe a problem and e.g. ask the AI to take a first pass through the codebase and give me an idea of where to look.)
However, I don't have the syntax memorized! Give me some Python code and I can probably read it, but ask me to write some code from scratch and, before LLMs, I would have needed to dive into the language documentation or search Stack Overflow. LLMs, and Claude Code in particular, have probably 10x'd what I am capable of, because I can describe the function I want and have the machine figure out the minutiae of syntax. Afterwards, I can read what it produced and either (A) ask it to change something specific or (B) edit the code by hand.
I also do find writing code to be less enjoyable than reading/editing code, for the reason described above.
Generally you spend 80% of the time wrangling abstractions, especially in a mature project. The coding part is often a quick mental break where you're just doing translation. Checking the syntax is a quick action that no one minds.
That's kind of what I mean by "syntax". For example, "how do I find a value that matches X in this specific type of data structure?" AI is very good at this and it's a huge time saver for me. But I can imagine how it might be less helpful if I did this full time.
You talk of memorizing syntax like it's a challenge, but excluding a small number of advanced languages, no programmer thinks syntax is hard. And if you don't understand the basics, how can you expect to be able to tell whether the solution an LLM presents is decent and not rife with bugs (security and otherwise)?
I guess my issue is that people are confusing a shortcut with actually being able to do the thing. If you can't remember syntax, I don't really want your code anywhere I care about.
That way you know you're (usually) strictly making an improvement.
This feels like we are forcing people who rather look at code to start talking in plain language, which not every dev likes or is proficient in.
Devs won’t be replaced by AI. Devs will be replaced by people that can (and want to) speak to LLMs.
Hum yeah, because it's insanely hard to properly review a CR that's more than a few pages long?
Will SDE interviews change? Are these companies gearing up to let AI engineers in? I highly doubt this is ever going to happen.
There is a quote by Buffett that I think applies to a lot of scenarios, not just investing: 'to be fearful when others are greedy and to be greedy only when others are fearful.'
ML, IoT, 5G, blockchain, etc.: so many great technologies had their Gaussian-curve moment during the greed phase. But these things take a back seat after that.
https://gmplib.org/
Granlund's gcc optimizations probably save Amazon millions in electricity each year. But evidently they don't care about real programmers.
... you will use them anyway because, customer service or no, there’s a good chance you don’t have a choice that doesn’t cost half again as much. (Regional availability may vary.)
It has changed dramatically over the past 5 or so years into this.
https://geizhals.eu/
Amazon is not nearly the cheapest or most reliable one for hardware.
https://geizhals.eu/supermicro-h13ssl-n-bulk-mbd-h13ssl-n-b-...
The vendors with the cheapest price have good customer reviews as well, unlike Amazon, which has terrible ones.
It's interesting how the trend which took over manufacturing is now also taking over software development. This trend towards low-quality mass production is taking over everything.
I also looked at the study and noticed a few aspects that were surprising:
(a) some of the 95% CIs crossed zero, meaning no benefit is a possible interpretation in figures 6 and 7
(b) did anyone account for what happens when two workers in different experimental groups sit next to each other? I imagine it was likely common for people in the experimental group to run Copilot queries for their friends.
(c) for experienced workers, the mean value is actually negative (lol) in the unweighted data in Figure 7
There are a lot of other subtleties in the interpretation of this paper.
"...developers who were less productive before the experiment are significantly more likely to accept any given suggestion...."
...curious lol
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4945566
> The engineers said that the company had raised output goals and had become less forgiving about deadlines.
There are two issues this article brings to mind:
1. Feels like we are back when lines of code was a measure of productivity.
2. We’ve heard this tune before, and the problem we have now is that you don’t understand what you didn’t write. Software development is in large part about understanding the system from the highest level to its smallest details, and this takes away a key part of how our world works, in favor of a “cattle not pets” view of code.
Now, if you don’t expect your programmers to have an understanding of the system they built, and you treat code as disposable, then you’ll center around a world where folks aren’t trained to learn a system, and I don’t see that as a world that is compatible with increased reliance on A.I.
I'm generally amazed at the JS stuff it can come up with from scratch, but asking it to implement features is often not quite great.
Far more often it can't even use the damn edit tool given to the LLM! I have to deal with some of the "excrement" left behind, maybe do a git reset, and then carefully fix the code.
I'm not surprised the managerial class has gotten high on the kool-aid of replacing workers...
The engineers are smart and fast folks, which helps a great deal. But don’t seek true creativity and deep expertise at Amazon, unless we’re talking about legacy L6+ (or likely even L7+) level engineers who get to have the leeway needed. The picker robots I had been working with were created, originally, by a Netherlands company, not Amazon. Amazon’s job was scaling it all up for their massive warehouse operation. It was a bunch of hacks on top of hacks. Heck, not even Harel state machines (which is kind of what one would expect to have on a properly designed event-driven robot that’s not overly tied to its PLC implementation, where things can be modularized, in and out, as needed). Just a bunch of switch statements and a menagerie of globals, with only two key people intimately familiar with most of its key code structure, one of whom decided to leave the group in frustration.
In fairness, the mess didn’t originate just at Amazon, but there never was a technical push at Amazon to have the initial work done in the right way. Not the software part anyway.
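(For anyone unfamiliar with the contrast being drawn: a Harel statechart organizes behavior into named states with explicit events and transitions, plus the nesting and orthogonal regions that the toy sketch below omits, instead of a flat pile of switch statements over globals. The picker states here are purely hypothetical, not Amazon's actual code.)

    class State:
        # Base state: unknown events are ignored and the machine stays put.
        def on_event(self, event):
            return self

    class Idle(State):
        def on_event(self, event):
            return Picking() if event == "tote_arrived" else self

    class Picking(State):
        def on_event(self, event):
            if event == "item_gripped":
                return Placing()
            if event == "grip_failed":
                return Faulted()
            return self

    class Placing(State):
        def on_event(self, event):
            return Idle() if event == "item_placed" else self

    class Faulted(State):
        def on_event(self, event):
            return Idle() if event == "operator_reset" else self

    class PickerRobot:
        def __init__(self):
            self.state = Idle()

        def handle(self, event):
            self.state = self.state.on_event(event)

    robot = PickerRobot()
    for ev in ("tote_arrived", "grip_failed", "operator_reset"):
        robot.handle(ev)
        print(ev, "->", type(robot.state).__name__)

The point isn't this particular pattern; it's that every transition lives in one reviewable place instead of being smeared across a menagerie of globals.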
I can see that the craziness not only didn’t abate but has gotten worse. The money was good but it’s not worth the stress and other frustrations IMO, unless you’re a noncontractor and at least a technical program manager or above, where you won’t be coding as much as reviewing others’ code anyway.
Otherwise, I can totally understand why people just work there for a couple years, to get their resume looking good and then leave
Of course I'm fantasizing and we sadly don't live in this kind of world (Amazon or Google would stomp you if you come for their users) but hey a man can dream.
- High degree of warehouse injuries (https://www.nytimes.com/2024/12/16/business/economy/amazon-w...)
- Making its delivery drivers piss in bottles (https://www.forbes.com/sites/katherinehamilton/2023/05/24/de...)
- Illegally busting unions (https://www.thenation.com/article/activism/amazon-union-bust...)
- Forcing people back into their offices on five day RTO (https://www.businessinsider.com/amazon-5-day-rto-mandate-pol...)
Is now making its white-collar employees' life resemble ...warehouse work? Unthinkable!
Usage of a tool becoming a performance metric is a clear sign that the tool isn't doing what it's promised to do. This is managers desperately trying to prevent their investment in GenAI from blowing up in their faces.
This is as clear an admission as any that the supposed productivity or quality gains from LLM coding aren't showing up in the results. If a tool is effective, management generally doesn't have to micromanage and force employees into using it.
LLMs were trialed as customer service bots, which backfired (for the most part). Now they're exploring different areas.
In fact, a common meme within Google is that Engineers are mere plumbers routing protos from one place to another.
(Given how brittle these systems are though, I'm not sure if I trust LLMs to be better; but they may be more suited for dealing with un-interactive delayed-feedback baroque systems that are the norm in such places.)
I'm getting 503s ("Sorry! Something went wrong on our end") on searches and "We're sorry, an error has occurred. Please reload this page and try again" on product pages.
The AIs should probably fix that.
Study paid by MS finds that MS tool is improving productivity...
You need way way less people for that.
I've been in Amazon for close to a decade, and I constantly think "I can't believe X hasn't been automated in the 30 years that Amazon has existed and is still done on Excel".
Most engineers will work on new features for at least half of the year, and I personally work on brand new projects or at least features constantly.
[1] https://docbarraiser.com/annual-business-planning-what-the-o...
horrified to think the totes are done the same way.
I can safely say this article is bullshit. While there are a lot of programs ongoing to allow builders to adopt GenAI tooling, and while there is definitely a lot of communication ongoing around those programs, nobody is, at all, forced to use any of the AI tools. None are installed by default or enabled by default anywhere, and everyone is free to completely ignore them.
That said, is there an increase in expectations? Yes. But that's just normal Amazon in an employer's market, and has nothing to do with LLMs and GenAI.
The comparison to Microsoft where we can witness in public the .Net maintainers fighting with the shit code generated by Copilot on their repos is ridiculous. Amazon is probably one of the companies pushing the least and being the most prudent about GenAI adoption.
Q is installed by default in all browsers on Amazon laptops now, and literally cannot be uninstalled. If you don’t have it installed in your IDE, you get a non-dismissible popup nagging you to install it until you do. Many teams are being told they must use AI every single day (some VPs have sent out org-wide emails saying that AI must be used), and engineers have to tell their managers how they are making use of it day-to-day. In my org, OP1 docs must include at least one section about how the team will increase use of AI. Hackathons aren’t allowed to happen anymore unless they are AI-themed. I could keep going. Amazon is absolutely forcing AI usage, and the article undersells how egregious it is.
There is absolutely no company-wide mandate to use GenAI. If some SDM is pushing it on his SDEs, that's an outlier and on that person alone.
There is an STeam goal for adoption and usage. There is a QS dashboard for SDMs to see statistics on their org's adoption and abandonment rates. There is BT guidance being propagated out to VPs and directors on how to roll out programs. As placardloop said, there was a mandatory OP1 FAQ question on GenAI usage.
You said “None are installed by default or enabled by default anywhere” - this is also false. I’m looking at an installed by default (and uninstallable) AI browser addon on my work laptop right now.
It’s not hearsay, you’re either commenting in bad faith or you’re just clueless about what’s going on at your own company.
> There is absolutely no company-wide mandate
And now you’re just moving the goalposts.
I don't recognise this at all, not in myself, neither in the well functioning teams I've been a part of.
Executing and reviewing code is the job. That's the core thing we do. Writing code is just a fluent activity, like walking or absentmindedly spinning a pen between the fingers.
Only because the people responding have never done warehouse work.
Am I accountable for the angles or not? Don’t think so! My job is just to bolt it on.
Is a developer not accountable for the code, regardless of how it was created?
Seems indeed to be like warehouse work, which is why web developers will be the first to be affected by AI.
Doesn't matter if you are "senior" or "staff" in web development. AI is already at senior/staff level there.
Because it’s quite different…
LLMs need to get better at it - more precision, less hallucination, larger context windows, and stricter reasoning. We also need a clearer and more domain-specific vocabulary when working with them.
In short LLM will end up as one more programming language - but one that’s easier for humans to understand, since it operates using the terminology of the problem domain.
But humans will still need to do the thinking: the what-ifs, the abstractions, the reasoning.
That’s where AI becomes unsettling - too many people have been trained not to think at all.
The tote goes onto a conveyor where it is shot forwards, backwards and sideways, all around the warehouse until it reaches the compiler station. There, another worker will lint each Code with a lint roller and compile them by selecting the appropriately sized cardboard box.
Next, this executable bundle will be linked with its customer, possibly by means of a plane or 18-wheeler truck, with last-mile linking performed by a smaller vehicle suitable for urban traffic ...
Bruh, no competitor is going to set up an army of datacenters and warehouses on the gargantuan scale of amazon anytime soon... What do customers want? How about fixing your search function accuracy and policing the disgusting influx of scam products with fake reviews!!! Ugh. How about you get AI working on THAT jackknife...
Considering Amazon kills and steals all kinds of small businesses and ideas that would be a godsend for Americans.
Google, Microsoft, OpenAI, Oracle, and xAI covet what AWS has. They are absolutely coming for AWS.
Three?
I, for one, look forward to the contracting gigs fixing garbage code. :)
The pull back on this will be brutal.
Have coders really psyopped themselves into thinking their job is somehow that much more special than the rest simply because it paid better due to temporary market conditions?
I thought that was a joke where everyone was in on it, not that they were serious. I assumed it was clear we're all replaceable cogs in a machine, barring a few exceptions of brilliant and innovative people.
Yes. We don't need to pay $$$ for simply changing elements on a page or adopting the next web framework to replace another one. The hype around many web technologies, which lots of developers have fallen for, also contributed to the low quality of the software you use right now.
All of this work paying developers to over-engineer inefficient solutions and giving them a false sense of meaningful work contributed to the "psyop" of how highly inflated their salaries were in the ZIRP era.
And AI has shown which developer jobs it is really good at, and it is consistently good at web developer roles.
So I'd expect those roles to be significantly less valuable.
I am not a software engineer and have never felt stable in my 30 year career.
It always feels like the rug could get pulled out from under me at any time.
So what? It is still better than working in a coal mine. It is still more interesting than working at a gas station.
Hard to feel sorry for people basically complaining the work ping pong table doesn't have quite the quality ping pong balls they were expecting.
It is an interesting mix of being both super elitist and completely infantile at the same time.
Assemblers, Linkers, Compilers, Copying, Macros, Makefiles, Code gen, Templates, CI & CD, Editors, Auto-complete, IDEs are all terms that describe types of automation in software development.
LLM-generated code is another form of automation, but it certainly isn't the first. Right now most of the problems are when it is inappropriately used. The same applies to other forms of automation too - but LLMs are really hyped up right now. Probably it will go through the normal hype cycle where there'll be a crash, and then a plateau where LLMs are used to drive productivity but expectations of their capability are more aligned to reality.
The whole field is about automating yourself out of a job, and it's right in the name.
https://grodiko.fr/informatique?lang=en
This German site claims that "Informatik", which is practically the same word, is a contraction of "Information" and "Mathematik":
https://www.pctipp.ch/praxis/gewusst/kam-begriff-informatik-...
https://www.larousse.fr/dictionnaires/francais/informatique/...
This cites the same person (Philippe Dreyfus) but with "automatic":
https://www.caminteresse.fr/societe/quelle-est-lorigine-du-m...
> It was not until 1962 that "informatique" was heard of again in the media. "Informatique" is in fact the term first used by a French scientist to designate the automatic processing of data ("traitement automatique des données"). That scientist was Philippe Dreyfus, founder of the company SIA, an acronym for "Société d'Informatique Appliquée".
And then enters the dictionary in 1966.
In 1957 in Germany, Karl Steinbuch described Informatik as "Automatische Informationsverarbeitung" (automatic information processing).
Another option would be to join forces to collectively demand more equitable distribution of the fruits of technological development. Sadly it doesn't seem to be very popular.
Strange enough the people that have the most to gain from keeping things the same, are really successful at convincing the masses who have the most to benefit from change in this regard to vote against it.
https://pjhollis123.medium.com/careful-mate-that-foreigner-w...
The problem I have with unions is that they can be too unreasonable. They're too much on the other side, they're too hardline just like the ultracapitalists/neoliberals but on the other side. In a good system we wouldn't have to fight for our rights because we'd already have them anyway.
You have fallen for capitalist propaganda. Time to re-evaluate.
Note: I'm not living in the US obviously :)
I do say a balance because of course we're not living in a communist state. So even with a socialist government there is still capitalism. Just not unrestrictedly so as it is in the US.
I'm not sure how it works in the US, but in our company the union is mainly bitching about stupid stuff like breastfeeding rooms (when there are no women who bring their babies to work anyway - they just work from home after their 6-month maternity leave). All our basic rights are already sorted: we can't work too many hours, we have unlimited sick leave (though of course validated by a doctor for long absences), we're entitled to a lot of money when fired, etc. But this is all national-law-level stuff, not industry level.
Having strong and independent unions is how you keep a good socialist government. Almost anytime you hear “With a good government, you don’t need <whatever>”, you are hearing a recipe for guaranteeing that good government is an exceptional, transitory state. If your society isn't prepared for bad government, it will have one sooner than you’d like, and it will be difficult to dislodge it.
A true committed exclusionary socialist.
The bad thing is they converted the welfare room for this which was used all the time :(
I really needed that place because of the move to hotdesking so I'm constantly sitting besides blabby sales people. Formerly we had an IT floor where people knew concentration is sometimes needed. So I'd go there to sit in silence and de-stress for a while.
But I have to say the company is good otherwise, I told them about my difficulty and the H&S people let me work from home much more than others.
I hate the way companies are going back from full remote to hybrid hotdesking though because that is the worst of both worlds.
Nothing about your example is an overreach of unions. In fact, it is a perfect example of the value of organised labour.
In honour of recent comments by dang, I won't be as direct as I'd historically be and instead invite you to think about - in the grand scheme of things - how accessibility, including expressing mothers, may be a societal and absolute good.
As a secondary exercise, maybe it's worth thinking about the ethics of presence sensors.
You claim you’re trying to balance individualism and collectivism but don’t actually support things that make collectivism work so you end up de facto supporting individualism.
It's a way to support individualism while allowing people to feel extra good about themselves for supporting collectivist ideas, on paper.
But where I live we just have strong labour rights from the government so individual unions fighting for each type of labour's rights are not needed as much. Sometimes they are, when there are specific risks like chemicals that they work with. But for overall "not get taken advantage of" stuff, it's just not needed so much.
Is that dramatic? No.
More specifically: Things can be inevitable and also horrible. It is not some kind of cognitive dissonance to care about people losing their livelihoods and also agree that it had to happen. We can structure society to help people, but instead we hate the imaginary stereotypes we've generalized every conceivable grouping of real people into, politics being the most obvious example, but we do it with everything, as you have.
The electrician doesn't "deserve" punishment for "advocating" away the jobs replaced by electricity. The engineer doesn't "deserve" punishment for "advocating" away the jobs replaced by engineering. A person isn't an asshole who deserves his family to suffer because he committed his life to learning to write application servers, or whatever.
It might have been a selling point, but the status quo is that we are inventing new jobs faster than phasing out old ones. The new jobs aren't necessarily more enjoyable, though, and there are no more smoking breaks.
It is not in fact law in the US.
If directors consistently chose "do less with more" - they'd certainly lose under virtually any legal standard?
Edit: I guess it's technically Michigan law, but as far as I'm aware is de facto? Even Aronson v. Lewis wouldn't allow that. (IANAL)
Modern AI encroaches upon what software engineers consider to be interesting work, and also adds more of what they find less enjoyable — using natural language instead of formal language (aka code) for detailed specification — which creates a conflict that didn’t previously exist in software technology.
If you are the person who lost their job, you get all the downside.
Overall, over the whole of the economy, the entire population, and a reasonable period of time, this increasing efficiency is a core driver of the annual overall increase in wealth we know as economic growth.
When an economy is growing, there is in general demand for workers, and so pay and conditions are encouraged; when an economy is shrinking, there is less demand than supply, and pay and conditions are discouraged.
This is only true while wealth inequality is decreasing, which it is not.
If everyone is becoming better off, but at different rates such that there is an increase in inequality, then everyone still experiences economic growth.
Thought experiment.
We have two people, one with 1000 wealth one with 100.
We have 10% growth per year.
So we see;
1000 -> 1100 -> 1210 -> 1331
100 -> 110 -> 121 -> 133.1
Difference in wealth;
900 -> 990 -> 1089 -> 1198
The ratio of wealth remains 10:1, but the absolute difference becomes larger and larger.
I do not know, and I would like to know, how numbers for wealth inequality are being computed.
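As a sanity check on the arithmetic, here is a minimal Python sketch of the same hypothetical numbers (not real inequality data): equal growth rates keep the ratio at 10:1 while the absolute gap widens.

    rich, poor, rate = 1000.0, 100.0, 0.10  # hypothetical starting wealth and growth rate
    for year in range(4):
        print(f"year {year}: rich={rich:>6.1f} poor={poor:>5.1f} "
              f"ratio={rich / poor:.1f} gap={rich - poor:.1f}")
        rich *= 1 + rate
        poor *= 1 + rate

Whether you track the ratio or the absolute gap obviously changes the picture, which is presumably part of why published inequality numbers are hard to compare.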
Web dev for e-commmerce displaced brick and mortar retail. Web dev for streaming displaced video rentals and audio sales.
Ergo, web devs are directly contributing to the outcomes that e-commerce enables.
If it sounds like I'm including a lot of jobs, it's because every non-service job in the history of the post-industrial revolution economy has revolved around making things more efficient. Software development is not some uniquely evil domain.
FWIW, I spent many years as a cashier. It's not something I find inherently more valuable to the world. If we could trust people not to steal, we wouldn't need them.
Now what needs to be done is to give back the profits to everyone, inclusively, as a kind of "universal basic income", so that we all enjoy it together, and not just the billionaires
It will change the job yes but it also can mean the job can go in new directions because we can do more with less.
This is naive of course. Once you have identified yourself as corporate servants (like for example the CPython developers) the companies will disrespect you and fire you when convenient (as has happened at Google and Microsoft).
It will cause a displacement of job types for sure. But I think it means change more than decline. When industrialisation happened, lots of factory workers were afraid of their jobs and also lost them. But these days nobody even wants to do a menial factory job, slaving away on the production line for minimum wage. In fact most people have a far better life now than the masses did before industrialisation. We also had the computer automation that made entire classes of jobs obsolete. Yet it's almost impossible to find skilled workers in Holland now.
And companies need customers with purchasing power. They can't replace everyone with AI because there will be nobody left with money to sell things to. In the end there will be another balance. The interim time, that's the difficult part. Though it is temporary, it can really hurt specific people.
But I don't see AI as a downward spiral that will never recover. In the end it will enable us to look more towards the future (and I am by no means an "AI bro"; I think the current capabilities of AI have been ridiculously overhyped).
I think we need to redraw society too to compensate. Things like universal basic income, better welfare etc. Here in Europe we already have that but under the neoliberal regimes of the last 20 years (and the fallout from the American banking crisis), things have been austerised too much.
In America this won't happen as it seems to go only the other way (very hardline capitalism, with a fundamentalist almost taliban-like religious streak) but well, it's what they voted for.
The electrician is more like the person laying fibre optic cable.
Yes? I know I did, still do, and will continue to at least.
Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
https://news.ycombinator.com/newsguidelines.html
Only a minority of dev jobs are automating people out of work. There are entirely new industries like game dev that can't exist without it.
Software development has gained such a political whipping-boy status, you'd be forgiven for forgetting it's been the route to the middle classes for a lot of people who would otherwise be too common, weird or foreign.