Using AI to perceive the universe in greater depth

50 points by diwank | 9/5/2025, 2:35:31 AM | deepmind.google

Comments (22)

hodgehog11 · 11h ago
It's good to see that at least one tech company is interested in using machine learning for scientific research. You know, research that plausibly benefits humanity rather than providing a tool for students to cheat with.

Several colleagues of mine have had to switch out of scientific machine learning as a discipline because the funding just isn't there anymore. All the money is in generic LLM research and generating slightly better pictures.

Jach · 11h ago
hodgehog11 · 6h ago
No, thank you for sharing. At first glance, I would argue this is still along the lines of what DeepMind has already done, and unlike DeepMind, they don't seem to care to engage with the communities that have been involved with this for a long time. But still, this suggests scientific machine learning is not abandoned by OpenAI, and maybe some others that I have missed. Hopefully there is a change in the winds over the next few years!
conartist6 · 9h ago
The headline is borderline offensive in what it wink-wink suggests. The content is just about normal boring stuff engineers deal with -- vibration damping.
macleginn · 9h ago
But the solution -- using reinforcement learning -- is arguably novel and AI-related. (And also less deterministic?)
conartist6 · 8h ago
Yeah, totally fine. Once I read past the headline I was fine with all of it. It's just egregious clickbait that is actively misleading until you click through.
jebarker · 8h ago
I don’t see what’s misleading. Is it that people read “perceive” to mean “understand”? The headline seems like a reasonable simplification of the actual work to me.
conartist6 · 8h ago
With none of the context, as is the case before you click a headline, it sounds like they're claiming ChatGPT is a philosopher.
jebarker · 8h ago
Thanks for explaining. I think it is the interpretation of “perceive” that does that. When I read "perceive" I think about sensors, and it's prior to any interpretation, but I guess that's not how everyone (or most people?) reads it.
therealpygon · 7h ago
None of those things quite fits the definition of perceiving, or “becoming aware of” something, unless you stretch the definition of “awareness”. However, by the technical definition, if AI helps us see deeper into space, then the title is accurate. I have to agree that it is a bit ambiguous for a title, but as they say, being technically right is still right.
yosito · 15h ago
I hate that this kind of machine learning applied to scientific research and consumer-focused LLMs are both called "AI", that neither is "intelligent", and that consumers don't know the difference.
molticrystal · 14h ago
Well, the term Artificial Intelligence came from the 1955 proposal for "The Dartmouth Summer Research Project on Artificial Intelligence".

To quote their purpose:

>The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.

While you may argue it is not intelligent, it is certainly AI, a label that for the last 70 years has covered anything using a machine to take incremental steps towards simulating intelligence and learning.

card_zero · 14h ago
... by people working on AI, and suckers.

This is "it's just an engineering problem, we just have to follow the roadmap", except the roadmap is illegible and the incremental steps noodle around and lead somewhere else.

ben_w · 13h ago
> This is "it's just an engineering problem, we just have to follow the roadmap",

No, this is "it's a science problem". All this:

> except the roadmap is illegible and the incremental steps noodle around and lead somewhere else.

is what makes it science rather than engineering.

card_zero · 12h ago
I mean, thinking about it a lot and trying stuff out is good, but you can't claim anything you tried was a step toward the eventual vital insight, except retrospectively. It's not incremental like a progress bar, it's more like a spinner. Maybe something meaningful is going on, maybe not.
auggierose · 11h ago
I'd say if you are doing proper science, all your steps are towards the eventual vital insight. Many of the steps may turn out to lead down the wrong path, but you cannot know that in advance. A simplified way to view this: if you are searching for a certain node in a graph, visiting wrong nodes in the process cannot be avoided, and is of course part of finding the right node.
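
A minimal sketch of that analogy in Python (just a plain breadth-first search over a made-up toy graph; the node names are only illustrative):

    from collections import deque

    def bfs_find(graph, start, target):
        """Breadth-first search; returns (found, visited).

        Visiting "wrong" nodes is not wasted work: it is how the
        search rules territory out on the way to the target."""
        visited = set()
        queue = deque([start])
        while queue:
            node = queue.popleft()
            if node in visited:
                continue
            visited.add(node)
            if node == target:
                return True, visited
            queue.extend(graph.get(node, []))
        return False, visited

    # Toy graph: only one branch leads to the "insight" node, but the
    # search still has to look at the dead ends to know that.
    toy_graph = {
        "start": ["dead_end_1", "promising_lead"],
        "promising_lead": ["dead_end_2", "insight"],
    }
    found, visited = bfs_find(toy_graph, "start", "insight")
    print(found)    # True
    print(visited)  # includes the dead ends as well as "insight"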

From the outside though, it is tough to decide if somebody is doing proper science. Maybe they are just doing nonsense. Following a hunch or an intuition may look like nonsense from the outside, though.

card_zero · 11h ago
But (connecting this back to the start of the thread) then you can say things like "controlled nuclear fusion can in principle be achieved, therefore my experiments in cold fusion in a test tube are an incremental step toward it, therefore I am actually doing fusion, gib money".
auggierose · 8h ago
First, nobody is obliged to give you money. You'll need to convince them first.

Second, I'm not sure what you are saying exactly: do you think "experiments in cold fusion in a test tube" are a step forward for science? Do you think a serious scientist would believe that?

As I said, playing science and doing proper science are two entirely different things, but they are hard to distinguish from the outside.

dumpsterdiver · 11h ago
Now say what you just said in a really excited TV announcer voice, as if you’re really excited to find out, and boom - science.
merelysounds · 14h ago
If it helps, it is not a new thing - we've experienced this with e.g. “cloud” before (and “ajax”, “blockchain”, “metaverse”, etc). Eventually buzzwords fall out of fashion, although they do get replaced by new ones.
magicmicah85 · 9h ago
AI is just a broader term. It's like saying "we used computers". Consumers also don't need to know the difference, but a compsci major should.