AI enters the grant game, picking winners
76 points by JeanKage 9/1/2025, 2:21:12 PM 57 comments science.org ↗
- Many PIs are writing grant proposals with the help of AI
- Most grants are written by AI
- Grants are reviewed by AI
- Adversarial attacks on grant review AI
- Arms race between writing and reviewing AI
- Realization that none of this is science
Where it goes from there is anyone's guess: it could be the collapse of publicly funded science, an evolution toward increasingly elitist requirements (which could lead to the former), or maybe some creative streamlining of the whole grant process. But without intervention it seems like we're liable to end up in a situation worse than the one we started in.
See how much of a waste of time it is?
LLMs are a godsend for writing these formulaic things and since the starting point is a situation with useless processes where everyone wastes inordinate amounts of time, I can't imagine them being harmful overall for the grant process. The bar is just very low.
In my experience since 1982, reviewers will pick you to death. Peers know the real complexity. Sure, you will occasionally get reviewers who completely miss the point, but it is usually more common that your negative reviewers know way more than you do, or they have a different ax to grind than the one you want to grind.
Fortunately the interesting part is where LLMs don't help (at least for now) and the pointless parts are where they help immensely.
This is preposterous. You are completely forgetting that there are actual groups of humans, people who know and respect each other, coming together and discussing the proposals.
Being involved in ranking grant applications is already a thankless job, and many scientists still do it, for the greater good if you want. They will therefore also critically look at any AI reviews, and eliminate them as soon as they are not convincing.
(I am, of course, not saying that the grant review process is without flaws - that is a different discussion.)
Yes, and in this case, they've complained that they have more papers than they can possibly pay attention to, and thus are wholesale filtering out large swaths of papers using a computer program that cannot be debugged or tested in any reliable fashion.
> They will therefore also critically look at any AI reviews, and eliminate them as soon as they are not convincing.
The article implies the opposite is happening. Which means that to review the AI's filtering decisions they have to go through the removed proposals and not just the remaining ones. Which puts them back at square one as far as the (work : human) ratio is concerned.
Also, for a very high stakes proposal, I doubt people are just going to ask ChatGPT to do it, which would basically guarantee that their proposal is indistinguishable from some equally lazy competitors in their field.
If the NIH responds by globally lowering the limit to two or three proposals per year, they hurt 1%er mega labs that expect to have several active grants and now need to bat well above .500 to stay afloat (see the sketch below). So I think it's likely that we see elitist criteria as you said, maybe a sliding scale for the proposal limit where labs that currently draw large amounts of funding are allowed to submit more proposals than smaller labs.
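A back-of-the-envelope sketch of that math. All numbers here are illustrative assumptions, not NIH statistics:

    # How hard must a mega lab "bat" under a hypothetical proposal cap?
    # Every number below is an illustrative assumption.
    grant_duration_years = 4     # assumed typical award length
    proposal_cap_per_year = 2    # hypothetical lowered NIH limit
    target_active_grants = 6     # a "1%er mega lab" (assumed)

    # Keeping N grants active means winning N / duration new awards each year.
    wins_needed_per_year = target_active_grants / grant_duration_years
    required_hit_rate = wins_needed_per_year / proposal_cap_per_year

    print(f"wins needed per year: {wins_needed_per_year}")    # 1.5
    print(f"required success rate: {required_hit_rate:.0%}")  # 75%

Even at a cap of three proposals the required rate is 50%, well above typical paylines.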
One place this may end up is with grant proposals requiring a live presentation component. You can use AI to crank out six proposals in a day, but rehearsing and practicing six presentations will still take quite some time and effort.
Job applications, story pitches, now grant applications, everyone is overwhelmed.
This is quite different from screening the numerous candidates who present themselves. Perhaps more similar to "talent scouts"?
The Hollywood system has serious flaws but at least it’s manageable.
Bringing back in-person pitches, applications and presentations would go a long way though.
"AI" on the other hand makes binary decisions and completely hides it's internal confidence rating.
I often ask AI systems for pros and cons and they do as good a job as I would in many situations in which I am knowledgeable.
So I think it's clear from that, and from the context of the article, that this is absolutely _not_ how it's being used here.
Even where AI is widely lauded (such as in programming), it needs a lot of "hand holding".
The biggest risk is that an even greater amount of time would be wasted by those who would have to screen grant applications.
> Arms race between writing and reviewing AI
As if there weren't numerous grant requests for dead end research before LLMs. Not saying this to discredit past research but when AI is used on both sides this changes none of the fundamental issues or incentives.
Using AI on both sides likely results in lower-risk lower-reward science which provides society fewer benefits per dollar spent.
AI can be used as an editorial assistant, research assistant, and copy editor. This is a huge benefit already.
AI systems are also almost brilliant in helping to tighten text. Try compressing a 1500 word discussion down to 750 words for an article in Nature.
I agree that using AI to write text from scratch is lazy ghost writing, but not the end of the world. It may be a huge problem in undergraduate teaching, but there are also some hidden upsides.
I definitely think AI should be used to pre-review NIH grant applications to help the applicant, NIH, and reviewers alike. My institution's "pre-award" team checks the budget, checks compliance with admin policies, and a few other details. They have no capacity to help with the science.
AI systems can be highly effective as surrogate pre-review reviewers. They perform at least as well as the hard-pressed unpaid human reviewers who spend 6 hours with your application if you are very lucky, and who will definitely not be perusing your latest 10 papers. AI can do both in minutes.
I give my applications to Claude with a prompt of this general format:
“You are a grumpy and highly stressed scientist who should be working on your own research but you have nobly decided to support NIH by reviewing 6 applications, each of which will take you at least 4 hours to review and another 2 hours to write up half-fairly. Your job as reviewer today is to enumerate what you see as theoretical and experimental deficiencies of each of the three aims and sub-aims. List at least three problems per aim. Having generated this list of deficiencies, then go through the application carefully and see if the applicants actually already addressed your concerns. Did you miss their pitfalls section or “alternative strategies”?”
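For what it's worth, here is a minimal sketch of how that workflow could be scripted, assuming the anthropic Python SDK; the model name, file name, and condensed prompt wording are illustrative assumptions, not anything from the comment above:

    # Minimal sketch of the pre-review workflow described above,
    # assuming the anthropic Python SDK (pip install anthropic).
    # Model name, file name, and condensed prompt are illustrative.
    import anthropic

    REVIEWER_PROMPT = (
        "You are a grumpy and highly stressed scientist reviewing NIH "
        "applications. For each of the three aims and sub-aims, list at "
        "least three theoretical and experimental deficiencies. Then go "
        "back through the application and check whether the applicants "
        "already addressed each concern in their pitfalls or "
        "'alternative strategies' sections.\n\nAPPLICATION:\n"
    )

    def pre_review(application_text: str) -> str:
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
        message = client.messages.create(
            model="claude-sonnet-4-20250514",  # assumption: any recent Claude model works
            max_tokens=4000,
            messages=[{"role": "user", "content": REVIEWER_PROMPT + application_text}],
        )
        return message.content[0].text

    if __name__ == "__main__":
        with open("application.txt") as f:  # plain-text export of the application
            print(pre_review(f.read()))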
Writing a grant application is just another form of REAL science. I learn a ton every time I write (or help write) a grant application. It is more thoughtful in some important ways than DOING science and even than WRITING UP science.
Isn't this the fundamental problem with the grant review process? That is, that the reviews are often rather cursory, despite the lauded credentials of the reviewers and panels? How could current LLMs conceivably improve this?
It can even more easily lead to a situation where only bad actors get ahead, without any chance or uncertainty.
I wish they elaborated on how they measure commercial promise. I've seen papers that attempt to link grants to value via a 4 step chain: grants fund projects, projects make papers, papers make patents, patents create jumps in stock for US firms. Of course, this is a reductive way to measure progress, but if you want to use AI you'll need a reductive metric.
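A toy sketch of that reductive four-step chain as a metric; every identifier and dollar value below is made up for illustration:

    # Toy version of the chain: grants -> projects -> papers -> patents
    # -> stock jumps. All data here is hypothetical.
    grant_to_papers = {"G1": ["P1", "P2"], "G2": ["P3"]}
    paper_to_patents = {"P1": ["PAT1"], "P2": [], "P3": ["PAT2"]}
    patent_value_musd = {"PAT1": 3.2, "PAT2": 0.4}  # estimated stock jump per patent

    def grant_value(grant: str) -> float:
        """Sum patent-linked value over every paper a grant produced."""
        return sum(
            patent_value_musd[patent]
            for paper in grant_to_papers.get(grant, [])
            for patent in paper_to_patents.get(paper, [])
        )

    for g in grant_to_papers:
        print(g, grant_value(g))  # G1 3.2, G2 0.4

Anything an AI ranks proposals by would have to collapse to something roughly this crude.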
> And so far, public funders are being cautious. In 2023, the U.S. National Institutes of Health banned the use of AI tools in the grant-review process, partly out of fears that the confidentiality of research proposals would be jeopardized.
It sort of annoys me that this is framed as "fear" about a single issue. The NIH is increasingly criticized for funding low-risk, low-reward, inefficient science. People are suggesting that they instead fund high-variance work, stuff that goes against the grain or lets the researcher chart a new path. Using AI would prevent this, because it tends to be a conventional wisdom machine. It's trained on our body of knowledge; how could it do otherwise?
Why (the fuck, I may add) would they focus on signs of commercial promise in the first place?
> AI ... tends to be a conventional wisdom machine
And it would therefore confidently pick submissions that look like older successes. Sending in a copy of something that was patented just before the model's cutoff date would be a good strategy.
There’s certainly a case to be made about using LLMs to find needles in haystacks, since most grants tend to be awarded to “repeat offenders” rather than newcomers and outsiders with different methodologies.
Meta-analysis is one of the areas where I would expect machine learning to become competent.
Pro-actively approaching researchers, rather than hoping they will submit a grant application to your own organization, is also a very innovative approach that I would like to see happen more often.
Submitted proposals can involve a year or more of "pilot data": experiments showing that the proposed approach is feasible. In addition, 3 or 4 months of writing and admin (budgets, legal requirements, etc.) are needed to complete the application.
So an application can easily be 1.5 to 2 man-years of work to prepare.
A screening process then takes place. This involves the awarding organization vetting the credentials of the applicants, as well as recruiting specialists to give their expert opinion. This takes at least 6 months.
Then there will be a panel of experts who review all of the above and vote. The vote is "taken into consideration" by the executive of the awarding body, who make the final decisions. They might award funding for three years of work to maybe 10-15% of applicants.
As you can see, it is a massively burdensome process, typically with a very poor return on investment for the researchers.
And no, I don't think current AI is anywhere near competent enough to replace most of this process, apart from maybe the admin and legal sides.
Three to four months to design, write, and refine a 13-page document is working at a very leisurely pace, in my opinion (43 years of writing more than 100 applications).
And yes, I do think AI systems can improve much of this process and result in much stronger science.
Essentially, the overall return on research investment for academic biomedical researchers as a whole is negative, which is why a substantial subsidy in the form of underpaid students, etc., is needed.
There are a few top performers who achieve a positive return from grant applications. A more streamlined and less fickle review process would indeed improve matters, though I very much doubt LLMs in their current state of development will help this process.
It's actually one of my favorite parts of the job, getting to read new, promising ideas that I think might genuinely help humanity.
Surely if they're going to use AI there are better ways than this?
I was expecting something more than what sounds like a prompt