The Evolution of Software Development: From Machine Code to AI Orchestration

14 points by guptadeepak | 17 comments | 5/29/2025, 8:25:34 PM | guptadeepak.com ↗


proc0 · 3d ago
One recurring pattern I see in current AI predictions is a kind of contradictory idea of AI being really intelligent agents, but also AI needing to be checked carefully. Either they're intelligent enough not to need humans, or we are changing the definition of intelligence.

For example, in the article:

> We're approaching an inflection point where the barrier to creating software will be primarily conceptual rather than technical.

And then...

> Developers will need to audit AI-generated code for vulnerabilities, implement security best practices, and ensure compliance with increasingly complex regulations.

Again, if AI is going to be so good at coding, why would it not be able to implement best practices and generate perfectly compliant code with a few prompts? I think it's interesting that the promise of AI clearly implies it will do everything humans can, yet I keep reading how engineers still need to check what the AI is doing. It's like self-driving cars that still need you to have your eyes on the road and hands on the wheel. It seems the implied promise of the technology cannot quite reach its destination.

If we use the metaphor of the bird and the airplane, we're basically expecting airplanes to fly like birds: take off from the ground, flap their wings. Airplanes are much faster than birds, but they need a runway for takeoff and lots of fuel. Similarly, current LLMs can synthesize huge amounts of text, summarize it, etc., but they have cognitive limitations that are crucial to solving problems the way humans do.

I think there is something beyond this metaphor though. I think the brain is tapping into some algorithm from which mathematical reasoning emerges. This algorithm has side-effects that look like human reasoning, and it's also the missing ingredient to make machines properly communicate and collaborate with humans (and also allow them to be properly agentic).

BoiledCabbage · 21h ago
> a kind of contradictory idea of AI being really intelligent agents, but also AI needing to be checked carefully. Either they're intelligent enough not to need humans, or we are changing the definition of intelligence.

Why is everyone so flummoxed by this? Your coworker is intelligent, but still needs code reviews.

Why is it that whenever people think of artificial intelligence, the only options they see are dumb-as-a-rock pure parrot or omniscient god? No intelligence on earth falls into either of those categories, but those are the only two options people can use to visualize AI, and if it's not one to them, it must be the other.

Intelligence can exist without being a perfect god. I feel like people have watched too much sci-fi.

JambalayaJimbo · 6h ago
Rubber-stamp PRs have been the norm at every single place I have worked. No one has the time or mental energy to read others' code.

Also why would we not just get another AI to do code review? It would be significantly faster and cheaper than a human, if equivalent.

sarchertech · 20h ago
That’s the thing, though: coworkers don’t actually need code reviews.

Years ago reviews of every code change before deployment weren’t a thing. Most of the time it was fine.

Today, it’s fairly rare for a code review to find a serious bug in a PR.

The majority of PRs from the majority of developers work fine. They do what they’re supposed to and they don’t bring production crashing down.

That’s not close to true for AI.

keiferski · 21h ago
I’m not a programmer so take my metaphor with a grain of salt.

I am, however, a writer, and so your comment made me think of the difference between writing and editing. It’s perfectly possible (and common, even) for someone to be an amazing writer but a terrible editor, and vice versa. The writer focuses on production, and typically even thinking about editing during that process is detrimental to the quality of the work.

The AI-human coding situation seems relatively analogous to this.

skydhash · 22h ago
Are they tools? Then we can apply the usual reliability measure: how often do they help us do something more efficiently instead of hindering us?

Are they assistants, a step beyond tools? Then, after training, they can provide more contextual help, and we can offload menial tasks to them while we do the more abstract thinking.

So far there’s no answer. They promise us assistants while delivering something that’s worse than any tooling. It’s impressive tech, but not that useful on its own.

proc0 · 21h ago
I think assistant is a good description. There are several sci-fi universes where these two kinds of AI exist: one like what we have now (which is already incredible, really), and another that has sentience and doesn't even consider itself artificial, but rather consciousness in a different substrate. I think the game Mass Effect has this, Warhammer 40K as well, and some others. That's the distinction we need: assistant smart algorithms vs. human-like artificial minds.
localghost3000 · 21h ago
I’ve warmed to this tech a bit, but christ would I like to hear more takes from dudes who aren’t running a fucking AI company. It’s impossible to take anything they say as anything other than a god damn ad.
visarga · 21h ago
I recognize some GPT-isms in this article. "Holistic Thinking" and "Ethical Considerations" being things humans are necessary for... it likes to say that a lot. I don't agree much; it reads like superficial chanting about LLM limitations.

What I consider things AI can't do without humans:

- responsibility and accountability, because we have bodies, we can be punished

- the very specific life experience we have which is essential for grounding AI, like imagine your experience on the job after a few years

- expressing our preferences, nobody can do that for us, we are supposed to ask and evaluate the outcome ourselves

- walking, touching, and accessing the world physically; we can implement and test ideas for real, and validate AI work concretely; AI without validation is just an ungrounded idea generator

- providing the opportunities for AI to generate value; we bring the problems, we collect the outcomes of AI work; AI can't create value on its own

So accountability, tacit experience, telos, validation, opportunities for value creation.

userbinator · 21h ago
s/Evolution/Devolution/g
guptadeepak · 3d ago
I've been building software for over two decades, from debugging assembly code in India to now running AI companies. The pace of abstraction in our field continues to accelerate in ways that fundamentally change what it means to be a developer.

Major tech companies are already generating 25-30% of their code through AI. At GrackerAI and LogicBalls, we're experiencing this shift firsthand. What previously took weeks can now be prototyped in hours.

Three key insights from this transformation:

Architecture becomes paramount: AI can generate functional code, but designing robust distributed systems and making trade-offs between performance, cost, and maintainability remains distinctly human.

Quality assurance complexity scales: As more code becomes AI-generated, ensuring security, maintainability, and efficiency requires deeper expertise. The review process becomes more critical than initial coding.

Human-AI collaboration evolves: We're moving from imperative programming (telling computers how) to declarative (describing what) to natural language goal specification.

The most interesting challenge: while AI excels at pattern matching, true innovation—creating entirely new paradigms—remains human.

For those integrating AI into development workflows: what unexpected quality challenges have you discovered between AI-generated code and existing systems?

Deepak

skydhash · 22h ago
> We're moving from imperative programming (telling computers how) to declarative (describing what) to natural language goal specification.

We've had that for a long time with Prolog (53 years ago), which is just a formal notation for logical propositions. Lambda calculus isn't imperative either; you're describing relations between inputs and outputs.

The more complex the project, the more detailed the spec needs to be, and the more efficient code is, compared to natural language, at getting the details across.
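For readers less familiar with the imperative/declarative distinction the thread keeps invoking, here is a minimal Python sketch (with SQL standing in for Prolog-style declarativeness; the data and names are made up for illustration). Both approaches answer the same question; one spells out *how*, the other states *what*:

```python
import sqlite3

orders = [("alice", 120), ("bob", 80), ("carol", 150)]

# Imperative: spell out HOW -- iterate, test, accumulate.
big_imperative = []
for name, total in orders:
    if total > 100:
        big_imperative.append(name)

# Declarative: state WHAT -- the SQL engine decides how to find it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (name TEXT, total INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", orders)
big_declarative = [row[0] for row in
                   conn.execute("SELECT name FROM orders WHERE total > 100")]

print(big_imperative)   # ['alice', 'carol']
print(big_declarative)  # ['alice', 'carol']
```

Natural-language goal specification would be a third step on this spectrum ("show me the big orders"), which is exactly where the ambiguity the parent comment worries about creeps in.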

userbinator · 20h ago
we're experiencing this shift firsthand

I read that as "we're experiencing this shit firsthand"... and I'd agree with that assessment. Software has gone quantity-over-quality and AI is only going to accelerate that decline.

proc0 · 3d ago
> while AI excels at pattern matching, true innovation—creating entirely new paradigms—remains human.

If humans remain in the loop, the promise of AI is broken. The alternative is that AI is still narrow AI and we're just applying it to natural language and parts of software engineering.

However, the idea that AI is a revolution implies it will take over absolutely everything. If AI keeps improving in the same direction, the prediction is that it will even be innovating and doing all of the creative and architectural work.

Saying there is a middle ground is basically admitting AI is not good enough and we are not on the track that will produce AGI (which is what I think so far).

Terretta · 1d ago
> If humans remain in the loop, the promise of AI is broken.

In any craft, if assistants remain in the loop, the promise of mastery is broken.

Or is it?

In the contemporary art world, artists and their workshops enjoy a remarkably symbiotic relationship. ... It can be difficult, however, from our contemporary perspective to reconcile the group mentality of workshop practice with the pervasive characterization of individual artistic talent. This enduring belief in the singular “genius” of artists ... is a construct slowly being dismantled through scholarly probing of the origins and functions of the renaissance workshop.

The modern engineer would do well to model after Raphael:

Soon after he arrived in Rome, Raphael established a vibrant network of artists* who were able to channel his “brand” and thereby meet (or at the very least, attempt to meet) the extraordinary demand for his work.

https://www.khanacademy.org/humanities/renaissance-reformati...

* read: agents

proc0 · 1d ago
The difference here is that there wouldn't even be a need for Raphael, at least if all the projections are right as to where AI is going.

Replacing humans with other humans is one thing; replacing humans with machines is on a completely different level. Anyone who says we will work alongside AI hasn't thought this through. In contrast to the industrial revolution, where machines did things humans aren't capable of, i.e. lifting heavy things, bending and shaping steel, mixing tons of cement, etc., AI is taking over what makes humans unique as a species: our cognitive abilities.

Of course, all of this hinges on whether AI will reach this level of reasoning and cognition, which right now is not certain. LLMs did scale up to impressive and surprising abilities, but it's not clear whether more scaling will produce actually intelligent agents that can correct themselves and produce reliable output. Not to mention the compute cost, which is orders of magnitude more than the human brain's and will be a huge limitation.

kriro · 22h ago
When I was teaching my first AI 101 class (must have been around 2010) I ended the first lecture with a reading assignment of "Man–Computer Symbiosis" by J.C.R. Licklider and asked the students to discuss if the future will be all AI or AI assistants. I still recommend this paper today and personally think if there's a path to AGI there will be a longish period of symbiosis before and not just a paradigm shift overnight.