It's good to see that at least one tech company is interested in using machine learning for scientific research. You know, research that plausibly benefits humanity rather than providing a tool for students to cheat with.
Several colleagues of mine have had to switch out of scientific machine learning as a discipline because the funding just isn't there anymore. All the money is in generic LLM research and generating pictures slightly better.
You think that overall LLMs are net negative for humanity?
In a past comment you admitted to using LLMs:
> I should know, I've been using LLM thinking models to help brainstorm ideas for stickier proofs.
Maybe you should stop, you are encouraging LLM development by using them.
yosito · 5h ago
I hate that this kind of machine learning applied to scientific research and consumer-focused LLMs are both called "AI", that neither is "intelligent", and that consumers don't know the difference.
merelysounds · 4h ago
If it helps, it is not a new thing - we’ve experienced this with e.g. “cloud” before (and “ajax”, “blockchain”, “metaverse”, etc). Eventually buzzwords fall out of fashion, although they do get replaced by new ones.
molticrystal · 4h ago
Well, the term Artificial Intelligence came from the 1955 proposal for a conference entitled "The Dartmouth Summer Research Project on Artificial Intelligence".
To quote their purpose:
>The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.
While you may argue it is not intelligent, it is certainly AI, which covers anything in the last 70 years that uses a machine and could be considered an incremental step towards simulating intelligence and learning.
card_zero · 4h ago
... by people working on AI, and suckers.
This is "it's just an engineering problem, we just have to follow the roadmap", except the roadmap is illegible and the incremental steps noodle around and lead somewhere else.
ben_w · 3h ago
> This is "it's just an engineering problem, we just have to follow the roadmap",
No, this is "it's a science problem". All this:
> except the roadmap is illegible and the incremental steps noodle around and lead somewhere else.
is what makes it science rather than engineering.
card_zero · 1h ago
I mean, thinking about it a lot and trying stuff out is good, but you can't claim anything you tried was a step toward the eventual vital insight, except retrospectively. It's not incremental like a progress bar, it's more like a spinner. Maybe something meaningful is going on, maybe not.
auggierose · 1h ago
I'd say if you are doing proper science, all your steps are towards the eventual vital insight. Many of the steps may turn out to lead down the wrong lane, but you cannot know that in advance. A simplified way to view this: If you are searching for a certain node in a graph, visiting wrong nodes in the process cannot be avoided, and of course is part of finding the right node.
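The graph-search analogy can be sketched in code (a minimal breadth-first search, purely illustrative: the "wrong" nodes visited along the way are not wasted work, they are how the search reaches the right one):

```python
from collections import deque

def bfs_find(graph, start, target):
    """Breadth-first search; returns the target (or None) and how many
    non-target nodes were visited before finding it."""
    seen = {start}
    queue = deque([start])
    wrong_visits = 0
    while queue:
        node = queue.popleft()
        if node == target:
            return node, wrong_visits
        wrong_visits += 1  # not the goal, but an unavoidable part of the search
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return None, wrong_visits

# Toy graph: "c" is a dead end, yet visiting it is part of finding "d".
g = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
print(bfs_find(g, "a", "d"))  # → ('d', 3)
```

The point of the analogy: you cannot tell in advance which visits will turn out to be "wrong", only retrospectively, once the target is found.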
From the outside though, it is tough to decide if somebody is doing proper science. Maybe they are just doing nonsense. Following a hunch or an intuition may look like nonsense from the outside, though.
card_zero · 1h ago
But (connecting this back to the start of the thread) then you can say things like "controlled nuclear fusion can in principle be achieved, therefore my experiments in cold fusion in a test tube are an incremental step toward it, therefore I am actually doing fusion, gib money".
dumpsterdiver · 1h ago
Now say what you just said in a really excited TV announcer voice, as if you’re really excited to find out, and boom - science.