Where do scientists think this is all going?

rolph · 5/3/2025, 3:46:56 PM · quantamagazine.org ↗

Comments (55)

biophysboy · 17h ago
I did a biophysics PhD, and I do think the main value of AI in academia will be rapid bespoke scripting. Most of my code in grad school was little one-off scripts to perform a new experiment or show a new result. The code is not really the goal; in fact, it is frequently annoying and in the way of your actual goal. I would've killed for a tool that could style my figure for a talk, or perform a rolling average on a time trace.
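For a concrete flavor of what I mean, the rolling average alone is a handful of lines; a minimal sketch in Python with pandas, where the file name, column names, and window size are just placeholders:

```python
import pandas as pd

# Load a time trace ("trace.csv" with a "signal" column is a
# hypothetical setup for illustration)
df = pd.read_csv("trace.csv")

# Rolling average over a 50-sample window, centered on each point
df["smoothed"] = df["signal"].rolling(window=50, center=True).mean()

df.to_csv("trace_smoothed.csv", index=False)
```

Nothing clever, but it's exactly the kind of one-off that used to eat an afternoon.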
HPsquared · 17h ago
This applies to a lot of professional jobs that involve programming.
biophysboy · 16h ago
Yes, exactly. AI will soon be treated like yet another technology, not a sentient alien, and will be used largely to accomplish mundane, well-trodden tasks.

It's analogous to how social media founders dreamily promised global democracy at first; in reality, we got an app to sell a used monitor, complain about a new thing, and look at some cat pictures.

Balgair · 10h ago
HN's bubble/biases are pretty obvious here, and we should expect them. But as someone who uses code but is not a coder, I'll confirm this.

Nearly none of my coworkers were hired as coders, yet we all code in some small way or another. As such, 100% of our code is really bad. No, it's okay, we know it; it really is bad.

To echo the GP: I had a friend in grad school who was trying to do some neuroscience experiments and analyze the data. He wanted my help with some MATLAB code, and I said sure, I'll sit down with you for a six-pack. After the 11th nested if-statement, I upped the price to a case.

Like, most of the people I work with do not care at all about the code itself. They care about the result. I know much of HN does care about the code, and I'm not calling you out on it. Your feelings are quite valid! But so are those of myself and my coworkers.

LLMs that can, and very much do, code for us? That is the thing I think HN is really missing out on, understandably so. The power of AI is not going to be in figuring out a big code base; from what I hear on here, it's bad at that. The power I see in my life is that suddenly, things are possible that we never thought we'd ever be able to do. And most of those things are under 200 lines of code, probably under 15 lines really.

I tend to think of AI as a wheelchair or other mobility aid. For a lot of people I know, AI/LLMs let us get moving at all. The dark ages where we just sat there knowing that we weren't smart enough to write the code we wanted, to get the results we need? Those days are over! It's so nice!

HPsquared · 55m ago
I find LLM coding reduces the "activation energy" required to get started. It's like a "knowledge catalyst" for these kinds of tasks. I've leveraged them to make a few scripts (talking 100 lines or less) that added a lot of value to our little data processing team.

Funnily enough, a lot of the time some huge expensive software tool is purchased or even built to order, but all you really need from it can be done with some small scripts. It completely changes the economics of small tasks.
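As a hedged sketch of what such a script might look like (the folder layout and the "value" column are hypothetical), something like this can replace a surprising amount of what those big tools get bought for:

```python
import glob

import pandas as pd

# Print a one-line summary per CSV export in a folder
# ("exports/" and the "value" column are illustrative assumptions)
for path in sorted(glob.glob("exports/*.csv")):
    df = pd.read_csv(path)
    print(f"{path}: {len(df)} rows, mean value = {df['value'].mean():.2f}")
```

A few lines like that, run on a schedule, is often the whole requirement.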

qoez · 17h ago
Not really seeing any answer to where they think it's going other than "we don't know" or the typical worries about education. I was hoping for a more sci-fi or more interesting answer.
pwndByDeath · 17h ago
My money is on "meh." Once we get over the hype, deep learning is a mix of the useful English-major intern who writes well but doesn't always understand what they write, and the asshat in the meeting who says the ultra-obvious, but with a confidence that appeals to the other pretenders.
nosianu · 17h ago
I think its strengths already lie where complete accuracy does not matter.

Campaigns, political or otherwise; dialog for dialog's sake; keeping people busy and "engaged"; and then there is the huge field of generating pictures, video, and audio.

Then there will be the applications in business and administration where accuracy would matter, except that a certain percentage of failure is allowed in the system: support, and decision making, e.g. in insurance. The lucky ones may get a second chance to speak to a real human after a decision against them. Governments will allow it as long as it saves costs and not too many people are impacted; a certain amount of dissatisfaction is already priced into such systems anyway. It will be worse for those without means.

pwndByDeath · 16h ago
I want to amend my comment. I think the honest conclusion is that the product of our intelligence can be duplicated by a slime mold with the right preconditions.
HarHarVeryFunny · 16h ago
I see a lot of people here replying on the assumption that AI=LLMs, which I don't think will last for very long. LLMs have unlocked a primitive level of AI faster than many people expected, but it is only that. Where AI is going is surely toward more complex/structured ANN-based architectures built for the job (i.e. cognitive architectures), not simplistic pass-thru transformers, which were never intended to be more than a seq-2-seq architecture.

I don't see any reason to suppose that we won't achieve human-level/human-like AGI, and do so fairly soon. Transformers may or may not be part of it, but I think we've now seen enough of what different ANN architectures can do, seen the "unexpected" power of prediction, and have sufficient compute, that the joke of AI/AGI always being 50(?) years away no longer applies.

I think achieving real human-level AGI is now within grasp as more of an engineering challenge (and not such a big one!) than an open-ended research problem. Of course (was it Chollet who said this?) LLMs have sucked all the oxygen/funding out of the room, so it may be a while until we see a radical "cognitive architecture" direction shift from any of the big players, although who knows what Sutskever or anyone else operating in stealth mode is working on?!

So, I think the interesting way to interpret the question of "where is this all going" is to assume that we do achieve this, and then ask what does that look like?

One consequence would seem to be that the vast majority of all white collar jobs (including lawyers, accountants, managers - not just tech jobs) will be done by computers, at least in countries where salaries are/were high enough to justify this, probably leading to the need for some type of universal basic income and a big reduction in income for this segment of society. One could dream of an idyllic future where we're all working far less and pursuing hobbies, etc., but it seems more likely that we're headed for a dystopian future where the masses live poorly and at the grace of the wealthy elite who profit from the AI labor, and who only vote for UBI to the extent of preventing the mass riots that would threaten their own existence.

While white collar and intellectual jobs disappear, and likely become devalued as the realm of computers rather than what makes humans special, it seems that (until that falls to AI too) manual and human-touch jobs/skills may become more valued and regarded as the new "what makes us special".

Over time even emotions and empathy will likely fall to AI, since these are easy to understand at a mechanical level in terms of how they operate in the brain, although it would take massive advances in robotics for machines to deliver the warm, soft touch of a human.

TheOtherHobbes · 17h ago
It's 2075. The stock markets are doing better than ever. Resource wars are a thing of the past. Climate change is no longer a problem.

The colonies on the Moon, Mars, and the major asteroids are thriving. Research suggests subquantum physics may make FTL possible by the end of the century.

The last human died five years ago.

dirtyhippiefree · 17h ago
Philipov missed the last line, “The last human died five years ago.”

Correction: The Holocene Extinction has been happening for a couple of centuries and we can’t bear to look, even indirectly.

All hail Earth’s coming overlords…


peterlk · 17h ago
I am surprised at the negativity and cynicism in this thread. I suppose the pendulum of hype has high amplitude.

Just because AI isn't going to be some kind of all-knowing sci-fi AGI doesn't make it all a sham. The recent research models are miracles of technology: we can now get, in ~15 minutes, an undergraduate-level report (one that would take an undergrad days or weeks) on almost any topic. That's incredible! The capability of an AI model is approximately junior level in the fields I've tested it in (programming, law, philosophy, neuroscience). If you don't see any possible uses for the technology, keep thinking about it.

lsy · 16h ago
While the technology is indeed incredible, the question is not whether someone, somewhere, will find it useful for something, but whether the sorts of things it's useful for will economically justify the massive expenditure in both financial and human capital this trend is currently soaking up.

E.g. if "undergraduate-level reports" were something there was a mass market for, the economics of university education would be pretty different. And the same goes for idle searches, sycophantic therapizing, blog article generation, and toy code development: there is a solid user base while costs are free or low, but that says little about whether there is an appetite to pay for these tools, especially if the prices are commensurate with the cost of operation.

peterlk · 13h ago
I think you're fixating on the specific example that I cited rather than imagining the possibilities of the technology. We now have: zero-shot classification capabilities for basically any task, performance that is consistent enough for independent LLMs to collaboratively produce robust long-form responses, and the ability to produce almost any web UI element on command with functional hookups to an API. And that's just LLMs. SAM2 performs well for realtime object segmentation in video.

Perhaps a more fruitful line of inquiry could be: what would the internet look like if every web application implemented support for A2A?

Kamq · 16h ago
> That's incredible! The capability of an AI model is approximately junior level in the fields I've tested it in (programming, law, philosophy, neuroscience). If you don't see any possible uses for the technology, keep thinking about it.

It is absolutely incredible from a technical perspective, but your next statement does not follow.

In a lot of (most?) fields, juniors are negative ROI, and their main value is that they will eventually be seniors. If AI isn't on the road to that, then the majority of the hype has been lies, and it's negative value for a lot of fields. That changes AI from a transformative technology to an email summarizer for a lot of people.

hybrid_study · 16h ago
So true. It's equivalent to someone saying "so you can search billions of internet pages - big deal" when Google first appeared; or someone saying to Gutenberg, "sure, you can mass-produce books, but you still gotta read them".

It's as if cynics are jealous, or so deeply troubled by the technology that their primary responses are mostly negative. Or they want a perfect solution (immediately, right this minute).

Totally irrational.

quantumHazer · 17h ago
How do you judge? Are you an expert in four different and complex fields? I'm not saying that LLMs are useless anyway; I'm just curious to know how you can judge a report as "undergraduate level".
horhay · 17h ago
My brain just bleeps out "x-level" as a description of LLM output. It doesn't really make sense, and it's mostly used by Silicon Valley folk who can't even begin to quantify such definitions with any consistency.
api · 16h ago
I’ve been on HN for a very long time as you can probably tell from my user name.

I lived through the incredible tech optimism of the late 90s into the early-to-mid 2000s. I miss that era, but then again many of its optimistic predictions were wrong. The failure of so much of that optimism to pan out has inspired a strong counter-reaction of techno-pessimism.

Here are a few of the optimistic ideas we had versus the reality.

Optimism: the Internet will be a massive engine for decentralization and democracy. Reality: this one is exactly wrong. Networks drive centralization, winner take all markets, and empower authoritarians by making “command and control” much easier at all levels of society and in technology.

Optimism: the Internet will be a huge engine for education and will place the collective knowledge of humanity into everyone's hands. Reality: it did this, but nobody cares. People would rather scroll and stare at unbelievably vacant crap. What happened was that, concurrent with the Internet delivering on this promise, Internet companies figured out how to build machines to hack the dopamine system to create ad platforms. We did think about this back then but vastly underestimated how successful it would be. I am just floored by the awesome brain-rotting power of engagement algorithms and how well they steer people toward absolute garbage.

Optimism: decentralized cryptocurrency. Reality: what was created to provide a decentralized democratized alternative to financial capitalism turned into an absurdist parody of financial capitalism, then into a pure casino, which is what it is today.

Optimism: people will have privacy and control over their own digital world. Reality: lol. This one is also possible, but only for the tech savvy. What we didn't understand was that making computers easy to use is so difficult that only very well-funded companies can do it. Computers are actually incredibly confusing to 95% of people and require years to master, and the UI/UX aspect of making a product is often orders of magnitude more time-consuming and costly than the more technical parts.

I could write a lot more on the causes for these failed predictions, but a lot of it boils down to not thinking through the economic reality behind these things. The net turned into an addictive chum machine and a casino because it was not built from the ground up with a billing and payment system built in, and because making and delivering media is expensive. So that role was taken up by ads and other less savory things like surveillance.

Now people think about the downsides first. When AI comes around, the first thought is "how will this ruin the world and fuck us?" That's an overcorrection. We were too blindly optimistic before and are being too blindly pessimistic now.

the_snooze · 16h ago
From the history you've summarized, I think we're at the right level of pessimism. All this tech is amazing, and the smart people who put in the work to make it happen should be proud of it.

But at the end of the day, economics and game theory will drive the values that get propagated through the tech. The past several decades of technological progress have shown that values like "resiliency," "reliability," and "user empowerment" aren't at the top of the list, so why should we believe otherwise with AI and give it the benefit of the doubt?

I like to put AI systems through their paces with low-stakes but easy-to-check sports trivia. They should absolutely ace that given the plethora of accurate data and text out there. Yet they fail again and again. "Reliability" is not a design priority for this technology.

api · 15h ago
I've toyed around with an ultimate heresy: that the Internet, or at least the way we built it, was a mistake, and that something more like the telecoms and their channel-based OSI network would have been socially superior.

The PC age was incredible. Jump in your time machine and go back and buy a decent PC in 1995, probably the height of the pre-Internet PC era. It'd be kind of a big clunky box, sure, but the striking thing you'd find was a machine brimming with features and software, almost all built primarily to serve the user of the machine. It was a product designed for you, the customer, full of attempts to deliver value to you.

It was a product of the good kind of capitalism, the kind where you try to create and trade value for value. (BTW think on this and you’ll understand why libertarian thought was popular then. Capitalism didn’t look so bad in this era. A 90s PC was an argument that Ayn Rand was right.)

You might use this PC to call BBSes, which were of course slow and very limited, but they too were either volunteer efforts aimed at building a community or services to serve, well, their users. Volunteer free or low cost BBSes were pubs, third spaces, while things like Compuserve were more like paid libraries, basically the pro version.

Ten years later in 2005 you can already see this world giving way to the dystopia of today where you are the product and the machine is there as a host for things to hack your dopamine system.

A telco OSI net would have been more expensive and limited. It would have been basically fast data calls. But it would have been point-to-point. Your PC would have called other PCs or PC services, like with modems, just faster. No NAT and, more importantly, no unpermissioned access to your machine, so no security armageddon driving the installation of firewalls that break end-to-end connectivity.

You probably would have gotten cloud eventually but its role might be different. You might not have gotten the www as we know it, and that might not be a bad thing. You might not have gotten Facebook or Instagram or TikTok, and that’s like saying we might never have had the AIDS epidemic. Social media has been a pretty strong net negative for humanity.

That network would probably have had a billing mechanism built in too. You’d be able to put up the equivalent of 1-900 numbers, services that automatically bill their callers. That would have allowed a profusion of small businesses serving and aggregating data with working business models that do not inevitably lead to enshittification.

I'm just speculating of course. You can't rerun history. But I do wonder if a more limited and managed network would have counterintuitively led to a more free, open, and decentralized computing landscape with an economic model centered around the user as the customer.

Instead we’ve gone down this terrible road where the net and computer tech is primarily about delivering the user to the real customer: advertisers, political parties, and ultimately authoritarian political regimes. It’s becoming increasingly obvious to me that this ends with a command and control architecture where a small number of despotic god-kings drive humanity by mass dopamine system hacking with the assistance of AI. That is a dark, ugly future.

indigodaddy · 17h ago
I don't see any article really, just some blurbs, wtf
macawfish · 17h ago
The blurbs are from a series of articles linked at the top.
indigodaddy · 17h ago
Ah, I actually thought that was just an ad for another article on the website; should have known that, I guess.