AI is just a next evolution of the computer

25 points · vincirufus · 34 comments · 8/31/2025, 1:32:27 PM · vincirufus.com ↗

Comments (34)

Peritract · 7h ago
This is unsupported puffery. You can call any X the next Y if you cherry-pick previous evolutions to simplify the narrative and then theorise beyond your data.

I get that it's fashionable to do blue-sky AI evangelism with the breathless tone, but I also expect at least some depth.

davikr · 7h ago
I have AI fatigue.
jsheard · 7h ago
It's extremely reminiscent of the way that crypto boosters talked about Web3 being the obvious and inevitable future of the internet.
danielbln · 7h ago
Crypto had no use besides gambling (which includes speculation) and black market currency.

LLMs have massive problems with externalities, but they have concrete and undeniable usefulness. So at the very least from that angle it's decidedly not the same as with crypto.

The hypsters will hype, that will always be the case.

LtWorf · 4h ago
The only use I have for them at work is to generate verbose texts to appease management, who would not be equally happy with short, concise, to-the-point answers.

Basically a total waste of time.

danielbln · 4h ago
That says more about you than the technology though.
LtWorf · 3h ago
Not my fault it doesn't do anything more useful.
vincirufus · 6h ago
How do you think people will write code and generate images 6 months to a year from now? Claude Code, Nano Banana, Perplexity, copilots, and Comet will become the default tools.


A4ET8a8uTh0_v2 · 7h ago
Whenever I read stuff like this, I tend to look not at the message (because I already know what I think and have largely decided on what I think is reasonable), but at the argumentation and language used. I do it for several reasons. It makes it a lot easier to spot bandwagon posters, shills and other undesirable content worth flagging, because despite writers now having the ability to vary the exact wording, it is the exact same note being struck across the media landscape. But beyond that, it is also interesting to see shifts in words.

What I find increasingly interesting is that 'democratizing' is being used in a way that is sure to make this word become a pejorative and I can't help but wonder whether it is intentional.

edit: added missing sentence fragment

DanHulton · 7h ago
I've begun to hate the term "democratizing."

In fact, I now tend to see it as a strong shibboleth from people who don't actually value the thing being "democratized" - computing, art, music, and who think in terms of "barrier to entry" instead of terms of understanding and appreciating.

In the end, this bizarre drive just ends up cheapening our enjoyment and interactions. We get shallow music, soulless art, and miserable computer programs, because there's no active intelligence involved in their creation that truly understands what's being created.

GuinansEyebrows · 6h ago
Funny you say this. I’ve been thinking about it a lot lately. It really does seem like “democratization” is recuperative shorthand for “commodification”.
lewisjoe · 7h ago
This touches an important debate on user interfaces. Chat/human language based AI vs GUI augmented AI.

OpenAI started with chat-based AI but has since realized "text only" doesn't scale for serious business needs. We see OpenAI pivoting towards richer interfaces like "Canvas", which offers a richer editing interface (GUI) with AI as an embedded collaborator.

There’s even news floating around that OpenAI is building a Google Docs / Microsoft Word competitor.

Now, take Microsoft. Microsoft owns some of the most entrenched business products ever, like the Office suite. Microsoft, however, is working back towards a chatbot experience, to the extent that they've even renamed their entire online office suite to Microsoft 365 Copilot - which makes little sense as a name for an office suite :)

But the bigger question is which approach is right? Maybe there isn’t a single right approach. Maybe that’s why Google is being Google and travelling both ways.

They’re building Gemini, Gemini Canvas as well as already owning an office suite and working towards integrating AI capabilities into their office editors.

We are living in interesting times!

JKCalhoun · 7h ago
As a user interface, sure, another … branch?

Plenty of "creatives" I think are still going to be hands-on-mouse(stylus) in the coming decades.

I'm not sure that collaborative computing follows. When Dropbox famously debuted and some were touting "cloud computing", Steve Jobs called it a "feature", not a platform. (Or words to that effect.) He was deflating the concept a bit too much in my opinion, but perhaps the truth was somewhere in between.

agentultra · 7h ago
It shouldn’t take a hosted cloud service and an LLM to use a computer.

Or to even design a good, humane interface.

I can’t imagine why I would want a system that could happily delete my backups when I asked it to reschedule an appointment. Having to constantly review and confirm everything it is about to do is annoying; having it do the wrong thing anyway is worse.

hollerith · 7h ago
>AI Is Just the Next Evolution of the Computer

My preferred framing is that the computer is a lot more dangerous than we thought.

jqpabc123 · 7h ago
My preferred framing is that people are a lot more dangerous and callous than we have long thought them to be.

Allowing AI to make life or death decisions is just the latest example of their dangerous and uncaring nature.

https://gizmodo.com/trump-medicare-advantage-plan-artificial...

hollerith · 6h ago
I think AI is dangerous even if the creators and operators have only the best intentions. All that is needed is for them to be overconfident of their ability to stay in control as the AI increases in capability.
jqpabc123 · 5h ago
> All that is needed is for them to be overconfident of their ability to stay in control as the AI increases in capability.

Current AI is like a 5 year old with a good memory.

I'm not too worried about losing control to something that has trouble counting the "r's" in "strawberry".

I'm much more worried about people proposing to allow AI to make healthcare decisions.

hollerith · 5h ago
I agree that there is no need to worry about a lack of controllability in the current crop of AIs.
effed3 · 7h ago
On energy efficiency, correctness and logical safety, this seems a long way from being a step forward (for now). [Lemma, from Murphy's Laws of Computer Programming, wisdom from a more civilized era: Build a system that even a fool can use, and only a fool will want to use it.]
bgwalter · 7h ago
I don't think computation is democratized when most of the barely usable models are in the hands of mega corporations.

Computers were far more accessible in the DOS era, when travel agents could effortlessly handle console programs. In that era more people had some idea of the basics of computing, and the hardware was more diverse and open.

Nowadays people know how to click to upload a video to YouTube, which means they are just sharecroppers. If the upload sequence is replaced by another middleman like "Open""AI" who will store and monetize your data, you are a sharecropper of "Open""AI" and YouTube.

tiahura · 7h ago
It takes time. It took decades to go from mainframe to micro. It’s going to take a while for a model 3-4x ChatGPT 5 that you can run on your watch.
righthand · 7h ago
Idk, to me there is this weird fringe of people that want computers to be this friendly “machine turns on and smiles and says hello and you talk to it”. But that dream (as far as I’ve always understood it) stems from Steve Jobs quotes about selling computers and products.

I’ve always used computers as they’ve evolved to today, and I have never had the desire to seek a friendship where the other “being” was the computer. It’s always been a tool, and writing full sentences/paragraphs to get back correct information doesn’t feel like the next evolution of computers compared to when I was dropping keywords and filtering the results myself.

The examples in the article are designed to support the points made but are not remotely accurate.

> Consider the difference:

> GUI era: “Open Photoshop → Create new file → Set dimensions to 1200x628 → Select rectangle tool → Draw rectangle from coordinates (0,0) to (1200,628) → Fill with color #3B5998…”

> AI era: “Create a Facebook cover image with our company logo and a modern blue background.”

The LLM prompt in the article looks simpler, but really there is a lot of hidden prompting describing the output, which is probably the same info as in the GUI-era example. Which blue will my LLM pick, and how will it know without me telling it? How will it know how big to scale the logo, and to tilt it 3 degrees left? How does it even know what the “company logo” is?

LLMs might also collect the wrong data. Does my LLM use Facebook header image size specs from 2015 or 2022? Most “blogs” online might be how-to blogspam with outdated answers.

IMO LLMs are an attempt to filter blogspam from search and make knowledge gathering and scraping of walled gardens easier, rather than how everyone will be using the computer.

vincirufus · 6h ago
What are your thoughts on the latest Nano Banana?
vincirufus · 6h ago
This article was written a couple of months back. Look at the example of Photoshop and connect it to the Nano Banana release last week.
encomiast · 7h ago
Another totally arbitrary narrative you could build from history is one of reducing size and increasing efficiency. We’ve gone from room-filling monsters to the Raspberry Pi Zero.

It’s hard to make giant datacenters that require their own power plants fit into that narrative. But I don’t know why I should prefer one narrative over another.

I can’t help hearing Karl Popper raging against historicism when I see people create a narrative and project it into the future, as if we were moving towards some ideal state.

croes · 7h ago
> Understanding Human Intent

That won’t work with language alone. Natural languages are ambiguous.

Just take double negatives. Some use them to create a positive, some to emphasize the negative.

Skynet, don’t kill no people!

easybake · 7h ago
Not only ambiguous, but also recursive.
dwringer · 7h ago
I'd argue human intent itself often [always?] is ambiguous
morninglight · 6h ago
The SpaceX Falcon Heavy is just a next evolution of the Bicycle.
vincirufus · 6h ago
Yup, that’s a valid way to look at it.
Barrin92 · 7h ago
>The history of computing can be viewed as a steady progression toward more intuitive human-computer interaction.

You can probably make the opposite case, as Dijkstra did in his piece "On the foolishness of 'natural language programming'".

"A short look at the history of mathematics shows how justified this challenge is. Greek mathematics got stuck because it remained a verbal, pictorial activity, Moslem "algebra", after a timid attempt at symbolism, died when it returned to the rhetoric style, and the modern civilized world could only emerge —for better or for worse— when Western Europe could free itself from the fetters of medieval scholasticism —a vain attempt at verbal precision!— thanks to the carefully, or at least consciously designed formal symbolisms that we owe to people like Vieta, Descartes, Leibniz, and (later) Boole."

Computers, as machines, derive their power exactly from what they prohibit. They provide interfaces narrow enough, like modern mathematics, to make the expression of a whole lot of nonsense impossible, which is what enables the automation of tasks in a correct manner. Going back to some sort of alchemy where you have to beg the computer with incantations to do things that may or may not be correct is actually going backwards in history. The fact that people see expressing themselves in a programming language as a burden, when the limitations are exactly what give it its power, says more about modern programming as a practice than anything else. As he jokes in the piece, someone being glad they don't "need to" write SQL any more is like someone saying they avoided mathematical notation for the sake of clarity.

https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...

ath3nd · 7h ago
Pompous bs.

Computation is in the hands of a few megacorporations. How the f is it democratized to use their models?

If anything, this is acceleration of more clickbait articles, ads you can't block, compute and water waste so you can see a summary of a 5 line email, and a total dumbification of an already pretty dumb general population.

Trump got elected on propaganda 2.0; I can't even imagine what kind of dictator would get elected on brainrot 3.0, which is sure to happen when the megacorporations, starving for money, lend their computing power to the next fascist oligarch's election campaign.

bit1993 · 7h ago
I agree with the title, but for me the evolution is more high-level and based on data. No AI without search engines and social networks, no search engines without the WWW, no WWW without TCP/IP, and no TCP/IP without the computer.