AI Changes Everything

52 points by tosh · 6/4/2025, 7:37:56 AM · lucumr.pocoo.org

Comments (86)

jordwest · 1d ago
Great article, I also have no doubt we're in for some serious changes in society. But I disagree with the conclusion. I think the ideal place to be is neither optimistic nor pessimistic, but neutral. A conviction that the future is bright isn't enough to actually make the future bright.

I was optimistic when Facebook first came out, and it turned into a dystopian nightmare. I was optimistic when self checkouts appeared, and now I feel like I'm a criminal by default every time I get groceries. I was optimistic about more people being online, and now we all feel like we can't get off it. I've become increasingly cynical about tech because of these and a thousand other disappointments, as the utopia that was promised turned into not much more than an efficient means of extracting money from all of us.

I don't think it's ever a good idea to be totally optimistic or pessimistic about a new technology, because then we're liable to ignore anything that goes against our narrative. Really, we don't know, and usually we can't know what's best for ourselves until it's too late.

sam-cop-vimes · 1d ago
Well said - all of this resonates with me. Based on our experience of the past couple of decades, we have to try to ensure that these new tools do not create more dystopian nightmares.
saubeidl · 1d ago
I believe that'll be impossible. These new tools inherently shift power away from labor and towards capital, in a world that is already heavily skewed that way. The inevitable result will be even worse lives for most people, while the very rich will be able to live in even greater excess.
sam-cop-vimes · 1d ago
Maybe not impossible, but very hard indeed. Imagine this was the start of the search engine era. Everyone jumped on Google as it was hip and they promised not to be evil. We all know how that panned out.

What we need is open source implementations which governments can take and provide for their citizens. Yes, I know many will laugh at this idea, but in principle it is the right thing to do. If these services become a part of everyday life, it is the responsibility of government to make them available to citizens in a way that is not harmful.

saubeidl · 1d ago
Funnily enough, the closest to that is China with Deepseek, both of which are demonized by the VC class. Funny, that.
_petronius · 1d ago
I have been thinking a lot about this parable (introduced to me by Bluey) in relation to the current surge of GenAI https://btribble.com/well-see/

> A conviction that the future is bright isn't enough to actually make the future bright.

Well said.

We have to be curious, open, and ready to use the benefits of technological change, but also skeptical, concerned, and willing to defend social and political norms, too. Wild optimism is potentially very dangerous; hardline pessimism would leave us insufficiently ready to face the moment when we do need to act proactively.

thunky · 1d ago
Tailing off the pessimism part:

A lack of conviction that the future is bright is enough to make the future dark.

AnimalMuppet · 1d ago
Well, it's at least enough to make the future dark for us. If you believe that there isn't any future worth having, and act on that belief, then your future is likely to be fairly crummy.

And, if enough people in a society believe and act that way, the future is likely to be crummier for the society as a whole, too.

jordwest · 1d ago
I love that parable. It stands in such contrast to a world where we all feel a pressure to know what’s right and wrong, how things will go etc. The reality is none of us do.
Karrot_Kream · 1d ago
I'm curious where you think this pressure comes from. I don't necessarily feel it myself but a lot of message board content I consume seems to react to this pressure, a pressure to have conviction in right and wrong. Is there something in the zeitgeist you feel that pushes people to have a strong stance?
jordwest · 1d ago
I can only speak for myself, but what I discovered internally were a few mechanisms of my mind:

1. A fear of the unknown. The mind tries to construct a knowable future in order to feel safe and prepared. It doesn’t matter how accurate it is, it just chooses what’s comfortable.

2. A desire to fit in, be accepted or loved. If our beliefs are reflected by others, it means we are accepted. If the beliefs are attacked, it's not just a challenge to a belief system but an attack on me and my validity as a person.

I think both of these combine and play out online as we find groups of people whose beliefs are acceptable to us. For some people, total pessimism is actually the comfortable perspective (that was me for sure) and for others maybe a blind optimism makes them feel safe.

When both of these fell away I realised how distorting they had been to my world view and my ability to actually engage with changes.

the_mitsuhiko · 1d ago
> I don't think it's ever a good idea to be totally optimistic or pessimistic about a new technology, because then we're liable to ignore anything that goes against our narrative.

A healthy dose of skepticism is always necessary, that goes for anything in life. That is obviously also true for AI.

The backdrop of this post is what I believe to be a disproportionate amount of skepticism right now in my circles that is largely based on things you can already refute. There will for sure be knock-on effects from this that we cannot anticipate yet, and when we see inklings of them we will need to deal with them.

Facebook and social media are a good example of this, because we did miss the effects they had, or we dismissed them. However, if you go back to the original skepticism of Facebook (mostly privacy), it mostly missed the dramatic effects social media had on the psyche of children years later.

aikinai · 1d ago
How does self-checkout make you feel like a criminal? I didn’t follow that one.
sam-cop-vimes · 1d ago
Not OP, but the way self-checkouts are built does make one feel like they are being treated as a potential thief. There are cameras in each till recording you as you are scanning your items. The moment you scan your item, the thing keeps nagging you to put it in the basket because it wants to weigh it to make sure you are not placing anything extra. If for whatever reason you don't want to put it in the basket, it doesn't allow you to scan anything else. The list goes on.

Yes I know all about "shrinkage" but self-checkouts just feel like some smart arse built a machine which barely replaces the person at the till and sold it to supermarkets as a feature.

aikinai · 1d ago
Oh, I see. I live in Japan where self-checkouts are extremely common and have no particular security, so I’m not familiar with those systems.
jordwest · 1d ago
Ahh perhaps it’s because Japan is such a high trust society. One of the few places where you can typically expect your lost wallet returned with all the cash inside.

Here in Australia, several Coles supermarkets have installed exit gates that track you on camera as you walk through the self checkout area, and if the machine decides you didn’t pay, it will physically close the gate and flash red.

Woolworths also have cameras in front and above each self checkout, and when the machine inevitably decides that the weight of your reusable grocery bag is an attempt to steal something, it will replay the footage from above to the assistant.

I have actually noticed I'm hassled far less by the Aldi self checkouts; they seem to need a lot less operator intervention, so I tend to shop there more nowadays.

JohnFen · 1d ago
This is the main reason why I stopped using self-checkouts entirely. I'd rather wait in line than feel like I'm a suspect. Also, there is too much risk of a false theft accusation. Better to let a store employee do the work and not expose myself to that risk.
oulipo · 1d ago
Looool, your post says "let's be neutral", and then goes on to list how EVERYTHING has turned out not neutral but ACTIVELY BAD.

That's exactly why we need to have a pessimist view on these kinds of tech: we've monitored them for a while now, and they've shown that THEY MOSTLY BRING BAD CHANGES (except to a select few rich white westerners, who are the main population here, so I'm expecting the downvotes, and who lack the real empathy, like the OP, to understand what "keeping their privileges" is doing to other vulnerable populations)

jordwest · 1d ago
I could go on with a list of ways I think technology has been good too - the internet allowing people to connect across the world almost for free, less estrangement from other cultures, new artistic mediums and ways to share art with other people, enabling indie writers, artists, and engineers to sell their work to the whole world and make a living from it. Those are things I think most people would be happy to see more of.

I agree though that (particularly in the last decade) tech advancements have been more heavily weighted towards the exploitative than the helpful.

oulipo · 1d ago
Well, so again we agree; that's exactly why we need pessimists: as of late, most developments have been for the worse, not the better.
d4rkn0d3z · 1d ago
I think if you stop labelling it "AI" and instead use "meta-software" or "natural language programming tools", then all of the hidden and externalized costs resurface. And a better understanding of skepticism arises. These tools may cost $20/month now but once all of the tooling to write software manually vanishes or becomes rarified, will the price stay the same? And will not the purveyors of this meta-software have a new form of control? Will intelligence itself be somehow sequestered?

Probably not because what "AI" is giving you is not intelligence. Intelligence involves realizing that all human knowledge is fiction, careful, logical, predictive fiction. And "intelligence" involves remembering in exquisite detail what makes your knowledge fiction, so that you do not inadvertently become convinced that your careful logical predictive fiction is "truth".

Predictions about "AI" involve various super exponentials combined, and if you leave out just one that pulls in the wrong direction, then the loose renormalization performed by wishful thinking fails. So rather than an "AI" programming utopia, we end up in the same old efforts to manifest hegemony over computing, will to power. One algorithm to rule them all... the more things change, the more they stay the same.

Karrot_Kream · 1d ago
You can, today, use open source agents on open-weight models. You can point aider to locally running Gemma3 and no purveyor can stop you.
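
To make that concrete, here is a minimal sketch (illustrative, not the commenter's setup) of querying a locally running open-weight model through Ollama's HTTP API. It assumes `ollama serve` is running, a Gemma model has been pulled, and the `requests` package is installed:

```python
# Minimal sketch: talk to a locally running open-weight model via
# Ollama's HTTP API. Assumes `ollama serve` is running and a model was
# pulled beforehand (e.g. `ollama pull gemma3`). No cloud provider in
# the loop; tools like aider wrap the same kind of local endpoint.
import requests

def ask_local_model(prompt: str, model: str = "gemma3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    # With streaming disabled, Ollama returns the whole completion
    # in the "response" field of a single JSON object.
    return resp.json()["response"]

print(ask_local_model("Write a one-line docstring for a binary search."))
```
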
csomar · 1d ago
> Do I program any faster? Not really.

This really got my attention. I use Claude heavily and although I am sure it is saving me considerable time, I feel like my "delivery" speed is roughly the same. I think the realization that will come from this is that writing code is not that time-consuming after all, unless you are doing cryptography or math.

Most of your time is spent testing, deploying, figuring out some weird bug, writing docs somewhere, signing up to some service, etc...

Frieren · 1d ago
While writing code I think about what I am doing, improvements, etc. Writing easy boilerplate code helps me find improvements or even realize that there is some risk I overlooked. It is a zen state that helps with the deep thinking part.

Not writing code forces me to read and reread the code to get the same realization. That is why it is harder to work with code written by other people.

In the age of AI, all code is code written by someone else that I have to maintain. It looks like a nightmare.

fhd2 · 1d ago
Yeah, it's the same for me. In general, writing (and experimenting) is how I _think_. If I just read code, it takes me a lot longer to understand.

Sure, I save some time by not having to do any trial and error; I'm looking at a solution that already works according to some testing. But then I start to wonder about edge cases, leaky abstractions and such, and since I haven't done the work myself, that's where a lot of the effort saved by not writing it comes back, at a stage where the work is seemingly already done and "just" needs reviewing, which is somehow more frustrating.

Perhaps people are just different. I work great on a blank canvas. I know a lot of people struggle immensely with it. Hell, some people type so slow or have such low mastery of their tools, I really feel their pain watching them.

AlexeyBrin · 1d ago
One old trick that helps a bit with understanding is to not copy-paste what the AI suggested, but rather to manually retype the suggested code, the way you would with a physical book.
amelius · 1d ago
Yes, AI is taking all the fun jobs.
herbst · 1d ago
I am undecided about AI coding.

I've done some vibe coding with working, but code-wise horrible, results. But I have had working prototypes faster than ever.

I also recently started doing some redesign and architecture changes (manually) in an old code base, and I notice that I tab through whole files, auto-fixing code one by one with suggestions, which is amazing.

I think we need to find a way to work with it and keep it far away from making architectural decisions for now.

the_mitsuhiko · 1d ago
> Most of your time is spent testing, deploying

I became hyper attentive to iteration speeds as a result. This is not just something a human runs into; the agent runs into it even more, because it loves to run tests etc. The quicker that goes, the more efficient the agent is. Obviously the same should be true for humans, but because the agent doesn't take breaks quite like a human does, it becomes more noticeable.

csomar · 1d ago
It gets me frustrated. Here you are one second flying, in the zone, moving from concept to concept at faster than human speeds; and then boof, you have to pull up some random api doc, open postman, test the api, come back to the chat console, etc...
the_mitsuhiko · 1d ago
What's nice is that a codebase optimized for agent also becomes a codebase optimized for humans.
saubeidl · 1d ago
Disagree. A codebase optimized for agents is overly verbose and takes more conscious effort to parse.
the_mitsuhiko · 1d ago
Overly verbose codebases are not great for agents in my experience, because they confuse them greatly and make refactoring hard. However, the codebases that I find to work really well with agents are quite light on abstractions, and I always felt that this is the right way to build software for humans too.

I have a lot of functions, I pass a lot of data, I embrace a lot of thread local state and this works really well for both humans and AI to understand what is going on.
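
As a tiny illustration of that style (a sketch, not code from the author's projects): plain functions sharing request-scoped context through a `threading.local`, instead of threading it through every signature or hiding it behind abstractions:

```python
# Sketch of the "plain functions plus thread-local state" style
# described above (an illustration, not the author's code).
import threading

_ctx = threading.local()

def set_request_context(user_id: str, trace_id: str) -> None:
    # Stash ambient, request-scoped data once per thread.
    _ctx.user_id = user_id
    _ctx.trace_id = trace_id

def log(message: str) -> None:
    # Any plain function can reach the ambient context without every
    # caller having to pass it along explicitly.
    trace = getattr(_ctx, "trace_id", "no-trace")
    print(f"[{trace}] {message}")

def handle_request(user_id: str) -> None:
    set_request_context(user_id, trace_id=f"req-{user_id}")
    log("handling request")  # flat call chains are easy to follow

handle_request("42")
```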

I gave a talk on this recently (not the agentic view), but I found this to work exceptionally well for agents: https://www.youtube.com/watch?v=ej5RsTtVvQE

But here is where agents and humans both agree: quick and easy tooling. Make it easy to iterate on a test, make it easy to manage resources, make the iteration cycle as quick as possible. I'm writing this as my partner's job forces iteration cycles of minutes on her. No agent can do better in that space than a human.

meindnoch · 1d ago
>writing code is not that consuming after all; unless you are doing cryptography or math

Most cryptographic algorithms fit on a napkin. 99% of the work goes into proving the mathematical properties of the algorithm.

yencabulator · 50m ago
But that napkin may not be constant time.
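
For anyone unfamiliar with the term, a standard illustration: comparing secrets with `==` short-circuits at the first mismatch, so its running time leaks information, while Python's `hmac.compare_digest` takes the same time regardless of where the inputs differ:

```python
# Standard constant-time-comparison example (illustrative).
import hmac

def check_token_leaky(supplied: str, expected: str) -> bool:
    # Short-circuits on the first differing character: timing
    # reveals how much of the secret an attacker has guessed.
    return supplied == expected

def check_token_constant_time(supplied: str, expected: str) -> bool:
    # Examines all characters regardless of mismatches.
    return hmac.compare_digest(supplied, expected)

print(check_token_constant_time("s3cret", "s3cret"))  # True
```
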
amelius · 1d ago
Maybe this is different for 10x versus 1x engineers?
js8 · 1d ago
I see LLMs (AI if you will) as both regression and progression (and that's why I think people are so divided on it).

It's progression for obvious reasons - it automates natural language processing (human language), which even 5 years ago was an unsolved problem.

However, the regression part is less obvious. I think human progress, since the enlightenment, was marked by increasingly externalizing human thought and judgement. That is, instead of relying on the judgement of an authority (or intuition, lived and anecdotal experience, etc.), we increasingly demanded transparency and rigour in our thinking. This was facilitated by the printing press and the development of the scientific method, but is seen everywhere (to the extent that even philosophy split off an analytical branch).

However, with LLMs, we return to internalized thinking, based on judgement, this time of an algorithm. There is no way to extract a formal argument from an LLM, and so no way to ensure correctness in some sense.

Whether you're an adopter or a skeptic depends where you stand on this issue (in a way, it's similar to type checking debate in programming). Personally, I think the correct way to use LLMs is to formalize the problem and then solve that using formal methods.
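
As a toy version of that workflow (a sketch, assuming `pip install z3-solver`): let the LLM propose constraints in a formal language, then let a solver, not the LLM, pronounce on them:

```python
# Toy "formalize, then verify" workflow (illustrative): an LLM might
# claim these constraints are satisfiable; instead of trusting its
# judgement, we encode them for the Z3 solver and get a checkable
# answer. Requires `pip install z3-solver`.
from z3 import Ints, Solver, sat

x, y = Ints("x y")
s = Solver()
s.add(x + y == 12, x > 0, y == 2 * x)  # the formalized claim

if s.check() == sat:
    print("satisfiable, e.g.:", s.model())  # x = 4, y = 8
else:
    print("the claim does not hold")
```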

000ooo000 · 1d ago
I can't help but feel these hyper optimistic takes on AI are just pure naïveté. Does one really think that redirecting the earning potential of a good chunk of the white collar population into the pockets of the already-wealthy is going to end well? I can't see a soft-landing scenario.
matltc · 1d ago
There is a good chance this creates greater wealth disparity, and it seems certain that the white-collar job market will contract in the near term. I am not sure that earning potential will necessarily be limited by this, however; the bar to do things has been drastically lowered for anyone with a computer and $200 a year to spend on a subscription to Claude or the like.

Earning potential may increase, even for the displaced, in the long run. Still early.

squishington · 1d ago
Earning potential should be considered in the context of increased wealth disparity though. After all, the rich and the poor are competing for the same resources. I work for a well known chip design company as an embedded software developer, and wages in my country (Australia) have not kept up with house prices (like many countries). Part of the housing price equation is buying power of the wealthy, which prices out younger people and those without family wealth. There's no scenario where my wages get me out of the rent cycle, no matter how productive I am and how much unpaid overtime I do to further the company goals.
matltc · 1d ago
I agree that the purchasing power of wages ought to be considered in this context. It is certainly true that one's wages must increase at a faster rate than prices for it to be considered a net positive.

I would argue that housing prices are artificially inflated. Supply is constrained by regulations like zoning and rent controls, which causes scarcity and hence higher prices for existing housing.

I would argue that, like other technological advances, the advent of AI will lead to greater productivity, and thus more abundant resources, given that the supply of labor does not shrink.

Agree that housing is definitely an issue, but not one without a solution.

squishington · 1d ago
I agree regarding housing. What I'm sceptical about is whether these potential productivity gains benefit those who work to survive, or those who live (or at least derive a substantial income) from passive gains from assets. Have we not increased productivity since the year 2000? In those 25 years things have not got better for workers. My parents came up into the middle class from abject poverty in the 50s (think wards of the state and foster homes). Free education, a much less competitive job market, and low house prices and rents enabled them to move up in the economic hierarchy. If they had been born in the 90s like I was, there would be no chance of that.
DocTomoe · 1d ago
Ultimately, we will have to end capitalism as we know it and establish some form of UBI - for the mere reason that the bigwigs have no-one to sell things to when everyone is out of a job and there is no money. Which would give rise to a secondary economy, which is not ai-replaceable: actual artistic expression, pottery, woodworking, ... - what people like to do, not what people are forced to do out of economic necessities.

It's that, or French Revolution 2.0 and Ludditism, and everyone becomes a slave again. I hope that we have learned from the mistakes of the past.

pjio · 1d ago
After decades of trying to make code encapsulate as much meaning in as few words as possible, there now seems to be a new trend of "more code is better", and because AI writes and reads it, that's supposedly OK. I'm already sick of finding subtle bugs in AI-generated code, looking at fever-dream-like AI images, and reading text without the intentional style that indicates a human writer.
_pdp_ · 1d ago
> Do I program any faster? Not really. But it feels like I've gained 30% more time in my day because the machine is doing the work.

This makes no sense.

Besides, this is a sort of written confession / admission that AI in this instance is as good as the OP's skills, and the obvious outcome will follow soon.

Personally I don't see AI as means to cut work by 30% but to do more with less.

afro88 · 1d ago
He means that 30% of his time is spent waiting around for agents to reach a conclusion, at which point they need human review and a next task to tackle.
bachmeier · 1d ago
> Do I program any faster? Not really. But it feels like I've gained 30% more time in my day because the machine is doing the work. I alternate between giving it instructions, reading a book, and reviewing the changes. If you would have told me even just six months ago that I'd prefer being an engineering lead to a virtual programmer intern over hitting the keys myself, I would not have believed it. I can go make a coffee, and progress still happens. I can be at the playground with my youngest while work continues in the background.

By this description, the economic case for AI is quite limited. I use AI all the time and for some things it is incredible. It has real downsides (getting things wrong, wasting time due to communication, size of prompt, context limits, etc.) that mean everything needs a thorough review. It also has financial costs.

Benefits? Sure. A big shift for "everything"? Not remotely close to email or internet search. Let's keep some perspective. It's massive speculation to call AI a revolution based on current LLMs.

the_mitsuhiko · 1d ago
> By this description, the economic case for AI is quite limited.

If you do not assume concurrency that is absolutely correct. But even the Claude docs are pointing out that you can run more than one agent in parallel and that is really where the economic benefits would start to kick in.
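
Mechanically, "more than one agent" can be as simple as a driver script fanning tasks out. A rough sketch follows; the command is a placeholder (`echo`), not a real agent CLI, since the actual command and flags depend on your tooling:

```python
# Rough sketch of running several coding agents concurrently. The
# command is a placeholder (`echo`); substitute whatever agent CLI you
# use, ideally giving each task its own working copy so the agents
# don't trample each other's changes.
import subprocess
from concurrent.futures import ThreadPoolExecutor

TASKS = [
    "add input validation to the signup form",
    "write tests for the billing module",
    "fix the flaky integration test",
]

def run_agent(task: str) -> str:
    cmd = ["echo", f"agent working on: {task}"]  # placeholder command
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout.strip()

with ThreadPoolExecutor(max_workers=len(TASKS)) as pool:
    for output in pool.map(run_agent, TASKS):
        print(output)
```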

d4rkn0d3z · 1d ago
How much does it cost to run several agents?
the_mitsuhiko · 1d ago
You can run more than one even on the base Max plan. So 100 USD gets you started.
bdangubic · 1d ago
That is the wrong question. The right question is whether the cost is justified by the benefits, and the answer (in the hands of the right people) is yes.
d4rkn0d3z · 1d ago
You cannot ask this follow-up unless you first answer mine as to cost. And not the cost today, but the cost for the long haul.
bdangubic · 1d ago
the cost is $7bn - is that too much if it generates $10bn in revenue?
xrisk · 1d ago
> conviction that this future will be bright and worth embracing

This is a bold claim and one that needs backing up.

roxolotl · 1d ago
> We've built something beyond what we imagined, and instead of being curious, many are dismissive and denying its capabilities. What is that?

I’ll keep saying this but the problem is the hype. The AI as Normal Technology[0] piece is the first piece that made me even remotely excited. The hype destroys everything positive about the technology because it totally fills the discussion space with nonsense. For many it feels like fruit of the poisonous tree, both because the data was acquired using dubious means and because the people selling AI are only interested in world domination.

Of course it’s going to be transformative. Of course it’s a powerful tool. Of course it has limitations. Of course some people are going to make boat loads of money. Of course some people are going to lose a lot of money. Same as it was with many other technologies that changed everything.

0: https://knightcolumbia.org/content/ai-as-normal-technology

intended · 1d ago
You, the producer of an intellectual good, are searching for a buyer for your output and your labor.

You, the consumer of intellectual goods, have a machine that gives you average output.

Most conversations about AI that worry about its impact become easier when the producer and consumer positions are split and separated.

If the tool is as good as the hype claims (a high bar), then there is a smaller space for human-to-human price matching.

This gets further complicated when we bring in the impact over time.

r9295 · 1d ago
How is one supposed to context switch between reading a book and rigorously understanding and verifying potentially hundreds of lines of code generated by an AI?
afro88 · 1d ago
The same way you context switch between whatever your day to day is and reviewing peers code
mronetwo · 1d ago
I get being optimistic but there’s a lot of ethical considerations that we’re choosing to ignore. The result is techno feudalism.

Sure, AI can help me with small things, but it's weird to be the guy preaching the gospel. In the end this is a product, sold by people who have more power than a single person ever had. They can do the marketing and hype; my interest lies in staying skeptical, especially given the incoming storm of AI-generated misinformation and the wave of students getting through university by cheating with AI.

frankc · 1d ago
I don't think that is really true. There are many open-weights models you can run yourself, including state-of-the-art models like Deepseek. Right now it's still expensive to run them at a reasonable speed, but for instance a $9500 Mac Studio can run Deepseek at a reasonable, if not spectacular, speed.
nosianu · 1d ago
I would like to point to my comment in another AI thread not even a day ago:

https://news.ycombinator.com/item?id=44168022

You can, just like you can theoretically use something other than Windows or MS Office. Until it's an entire ecosystem after a decade or two, with a workforce trained for that ecosystem, instead of the easily substitutable AI providers we have right now.

The current huge investments into the AI providers are not made because the investors are looking forward to everlasting fierce competition far beyond the initial stage.

the_mitsuhiko · 1d ago
> You can, just like you can theoretically use something other than Windows or MS Office. Until it's an entire ecosystem after a decade or two, with a workforce trained for that ecosystem, instead of the easily substitutable AI providers we have right now.

Author here: I had that thought for a while, but I don't currently think that AI will unfold the same way (and I mentioned that in the post linked). I believe that at the speed this is going, and with innovation happening everywhere, this will be a market with many models and players.

nosianu · 1d ago
Yes, and so were the OS and the office suite.

Nobody says it is going to happen next year, or even within five years.

But eventually, as the technology matures and gets more and more integrated into businesses' IT, they will have ever more dependencies around the AI: glue code on several levels (from dev-ops to user-level code), third parties, training. There is no technical reason for React to dominate JavaScript GUI dev either, and there are plenty of alternatives - and that is a far milder amount of external dependencies than you have with fundamental tech like AI, since a GUI framework sits far down towards the leaf nodes of business dependencies.

If people add a lot of code around a solution, invest in training, and include third-party apps, libs, or tools, they have to choose one.

Unless you think that all basic AI interfaces and APIs will be standardized, and that you will be able to swap them while keeping everything you yourself added around that AI?

Or do you think those AIs will be used as-is, with no additional dependencies added on the business side? And that being trained for one - as an average person, not the leading-edge people interested enough to teach themselves as they do now - means you can use them all?

AnimalMuppet · 1d ago
Well, if you don't want that future a decade from now, then vote now by choosing which AI provider to use.

But if you say "we have to use the same AI provider that everyone else uses, because that's where the ecosystem will be generated", well, you're contributing to there being only a few that make it far enough to generate ecosystems.

oulipo · 1d ago
Exactly, it seems like for him all the controversy about AI is just "will AI get my very own (white privileged programmer) job"

and he doesn't even think about those:

- will it feed disinformation and disrupt democracies (like it has already proven to)

- will it be used to kill people (cf war in Gaza)

- will it require underpaid work from data labelers in Africa and Asia

- will it consume CO2 and energy resources that would be better allocated elsewhere (he doesn't care that he's now using much more energy for "not coding faster, but being able to read a book meanwhile" — well nice, one more privilege for the white western guy, and one more thing to suck up for the people living in climate-vulnerable locations)

etc

the fact that those guys are so naive and disconnected is really tiring

the_mitsuhiko · 1d ago
> and he doesn't even think about those:

Author here. I am absolutely thinking about those too. I just also happen to think that none of those issues are dramatically changed by AI. Disinformation was a problem prior to AI too, and social networks are a much bigger harm in comparison; income inequality and global warming were problems prior to AI too.

I absolutely see a trend of piling all the world's problems on top of AI, and I think that this is wrong. These problems need to be dealt with in isolation from AI, because they are largely non-AI issues.

oulipo · 1d ago
"I just also happen to think that all those issues are not dramatically changing because of AI"

-> this is your white westerner privilege right here, "I just happen to think it's not such a bad thing" (mainly because you're not confronted with the problems it already generates for people in vulnerable positions)

that's exactly the point I'm making

Yes, you "happen to think that all that is not too bad" BECAUSE you're privileged. And by lacking the empathy or will to put time into thinking about how it already dramatically affects some populations... you're mostly just proving my point.

pedrogpimenta · 1d ago
While I agree, I feel we can't dismiss a technology just because we don't agree with the values of the people doing it, you know? The tech is amazing, let's build it in a nice way.
intended · 1d ago
To the contrary, no one worried about its failure states is dismissing it. They believe that even if it doesn’t match its hype, it’s going to destabilize society.

It’s already trivializing being rewarded for art. You want to be paid while you learn how to make excellent artwork and CGI? Well, just how will that work?

Phishing scams are now profitable for victim types they weren't before.

Education is taking such a hammering that biblical is a fair adjective to apply. Most courses will have to resort to pen-and-paper exams, a reversal of digitization changes made since the 90s.

And this is without things even hitting the hype levels.

saubeidl · 1d ago
And the question to ask here is: for what?

Why are we doing all this to ourselves? So the rich get even richer while the rest are even worse off?

Why are smart minds enabling this? My university had mandatory computer science ethics classes. I used to think they were a waste of time. Clearly, I was wrong.

saubeidl · 1d ago
The tech inherently requires enormous capital investment and thus further entrenches the power of capital.

There's no nice way to do what is inherently a power grab, taking power from labor and giving it to capital.

piva00 · 1d ago
> The tech is amazing, let's build it in a nice way.

We tried that before. During the early 2000s there was huge optimism about tech and the democratisation of information: people would be better informed, with access to all the knowledge in the world.

In the end it wasn't built in a nice way, moneyed interests took over, social media exploded, fewer companies captured a lot of different markets after getting extremely well capitalised, buying competitors to stamp them out, or buying them to integrate into their own ecosystems and control new markets (e.g.: social media again).

The tech is amazing, the corporations behind it not so much, the capital investments required are absurdly large which gives even more power to already capitalised entities which, generally speaking, do not behave in moral and ethical ways.

There's no opportunity to build it in a nice way, that's not where the incentives are so inevitably that's not where it will go, hence the pessimism about it founded on historical facts.

ChrisArchitect · 1d ago
> For sure the job of programmers and artists will change, but they won't vanish.

But they will vanish for the vast majority. It's the scale of it. The scale of the vanishing is not minuscule.

> People will always value well crafted products.

But again, in this case what will be left is only a few crafters, the privileged few, because no one else can afford to delve into whatever space. It's the scale of the destruction.

These articles lose me when they take on this sort of privileged-engineer dismissiveness.

cubefox · 1d ago
In my frank opinion this optimism is simply, sorry, stupid.

These tools will not stop at where they are. They will get better. They will more and more outperform more and more humans. They will become both more general (as opposed to good at some narrow tasks like writing text) and more capable.

They will become "smarter than humans", first a little smarter in some respects, then much smarter in more respects. Eventually they will become superintelligent, in the same way in which we are superintelligent relative to a chimpanzee or an ant.

We will lose our jobs. Human labor will increasingly become worthless. We will be more hindrance than help, similar to how a monkey can't earn even small amounts of money. Its limited intelligence is not of limited use; instead, it has negative value in any real work environment.

But things will not stop at mass unemployment. People (e.g.) in Europe will not receive a significant UBI when the taxable AI companies are all US or Chinese companies. As wages tend to zero, people without investments in stocks or land will be increasingly impoverished compared to the rest.

There will be an arms race, likely between the US and China. Other countries without big AI companies will be left in the dust. Each country big in AI will be incentivized to push forward as fast and as recklessly as possible, security be damned. The Manhattan and Apollo programs will be a joke in comparison, because only the country that achieves true superintelligence first will get a chance of controlling the future.

But the question is not so much whether the US or China wins this arms race. The question is whether the resulting superintelligent system(s) will be controllable at all in the long term. They might escape human control relatively quickly once they significantly outperform us in all cognitive abilities. Similar to how animals can't control humans and children aren't in charge of their parents.

And even if we somehow solve the problem of controlling things that are vastly smarter than us: In the long term, gradual disempowerment awaits. We will voluntarily offload more and more tasks to AI, more and more decisions, because anything else will become increasingly inefficient. And at some point we will realize that we have lost control, silently and forever, quite a while ago. The point of no return would be invisible.

And eventually, the coming AI race may simply get rid of us, not because of maliciousness, but because we are in the way. In the way of projects too large for us to comprehend, similar to how no animal can begin to comprehend why its habitat is getting destroyed by human urbanization.

The only slim hope is that superintelligence will be created in such a way which ensures that they want to care for us, like some people genuinely care for their favorite pets. Then we will get our AI utopia, then our future will be bright. But I wouldn't bet on it.

Yet all these considerations are readily ignored. Most people can't or refuse to extrapolate the exponential AI growth of the last years. Or they think AI will "hit a wall" slightly before superhuman intelligence. Which is astonishingly stupid. There is no way to say it in any other way.

It's all so obvious.

senectus1 · 1d ago
I suspect it's somewhere in the middle. There will be an economic crash, and when that happens most of the AI companies will go under, because they have been devouring loads of the economy for very little improvement.

A lot of people will lose their jobs, and houses, and families. People won't have the money to buy the products the AI tools are supposed to make cheap. MORE companies will go under. This leads to civil unrest... almost globally.

A new form of government will arise. No idea what it's going to be, but the concept of plundering the earth for corporate greed and printing the economy into oblivion will be the backbone of what the new forms of government and economy WON'T do.

vdupras · 1d ago
All of a sudden, a nuclear war that destroys our global capability to produce advanced microchips doesn't sound so bad, does it?
DarkCrusader2 · 1d ago
> Where I used to spend most of my time in Cursor, I now mostly use Claude Code, almost entirely hands-off. Do I program any faster? Not really. But it feels like I've gained 30% more time in my day because the machine is doing the work. I alternate between giving it instructions, reading a book, and reviewing the changes.

This is very interesting to me and I did not know that our current LLMs can do this. Does anyone have a live coding video/stream of this workflow doing some non-trivial task successfully, like developing/merging a feature for some open-source project? There is so much crap around AI on YouTube that I have just stopped engaging with that stuff, and I'm not sure who is producing genuine content and who is just selling their latest grift.

saubeidl · 1d ago
AI changes everything... for the worse.

Of course it'll change things. But not a single thing will be better as a result, unless you're Mark Zuckerberg or one of his peers.

vdupras · 1d ago
Articles like this make this uneasy thought resurface: something is wrong with this path. It's not really about AI itself, but about the efficiency of the path.

AI proponents tell us that this tool enables them to be purely creative by expressing their ideas without any of the churn associated with the implementation.

But rather than telling us that we "need" AI, doesn't it tell us that modern tools are mis-designed? Wouldn't it be possible to imagine a set of tools that is better designed and allows us to express creative ideas more purely, with less churn and, most importantly, without the uneasy "fuzziness" associated with AI?

mdp2021 · 1d ago
> AI ... can accelerate innovation in medicine, science, and engineering. // Right now it's messy and raw

With all due respect to Armin, he sounds confused. That sentence above amounts to "[Software] can accelerate innovation in medicine, science, and engineering. // Right now it's messy and raw" - as I have to note again, after a piece hours ago from MIT¹. The immediate reply is "Dude, what are you talking about?!".

¹ https://news.ycombinator.com/item?id=44167355

incomingpain · 1d ago
I will happily lose more HN score to agree. AI is amazing.

PyCharm's AI I guess defaults to Claude Sonnet 3.5 even though 4 is available? Is that Claude Code? It has multiple options; I guess I'm happy with 3.5. It has Ollama and LM Studio options as well for those who need privacy. I got LM Studio working yesterday, but so far it's very bad; it desperately needs the ability to check the internet.

>Do I program any faster? Not really. But it feels like I've gained 30% more time in my day because the machine is doing the work.

I wonder if anyone has objectively tried to figure out if you code faster or not.

>While all this is happening, I've found myself reflecting a lot on what AI means to the world and I am becoming increasingly optimistic about our future.

I am so optimistic. The predicted worker shortage of 2030 assumes no significant boost to productivity. But AI is going to be a significant boost to productivity.

AI is likely going to enable us to automate multiple industries. Could we eliminate food scarcity by fully automating food production?

>That's what makes this moment feel so strange — half revolutionary, half prelude. And yet, oddly, there are so many technologists who are holdouts. How could techies reject this change?

I think it's a silent majority situation. The techies are all working on new projects at lightning speed with AI; too busy to post on social media. Whereas those without AI are complaining about AI.

brador · 1d ago
Using AI, the mental strain and toll of programming is reduced to what feels like 10% of usual. And that 10% is high level architecting and project planning.

The most valuable 10x-dollars-per-hour time for a software engineer right now is in the shower, on the toilet, or just before shut-eye. Not at the keyboard. Thinking time. Unfortunately, for now and for many, that time is phone time.

The value of retaking that time is huge and growing.

ReptileMan · 1d ago
One of the things I learned after starting to use AI (can anyone explain WTF coding agents are? Google is flooded with info presented by cryptobros, btw) is that I have really hated the whole typist part of development. To the point where I prefer to make 2 prompts to the AI instead of modifying something myself, even though it takes comparable time.

Anyway, the way it is going, I was convinced that the oldest profession would be the only one left standing when AI runs its course in a couple of decades. Now I am not sure even of its survival.

Edit: We will know we are in the endgame when AI can reverse engineer a piece of software just by observing its inputs and outputs. Like a game. Or crack Denuvo.

zihotki · 1d ago
> Or crack Denuvo

That would be a nice benchmark!

Zoethink · 1d ago
I genuinely feel like we’re living through another big shift — like what smartphones did in the 2000s, AI is doing now.

I’m not a developer, but I can already feel how the way I use the internet is changing. I used to search for things, now I just ask — and the AI replies like a person. It’s subtle, but powerful.

I really think in a few years, most of us will have some kind of personal AI. Not just a tool, but more like a life companion — helping us plan, decide, learn, even reflect. Kind of like how we can’t imagine life without Google now.

inferiorhuman · 1d ago
Yeah, it's definitely changed the internet. All of a sudden every site is behind anti-bot garbage (usually Cloudflare). Sites I use (like Sourcehut) are getting DDoSed by AI scrapers. It might be a wee bit reductive to say that AI is destroying the internet, but this certainly is the death of the open internet. AI has just handed a huge amount of power to companies like Cloudflare and made it that much more difficult to avoid being tracked.