The IP point: this hasn't yet been fully resolved, and I expect companies to continue to fudge it by sticking "if disney return false" in their image generators. My own employer has a stance of "no AI generated code in production use" (but allowed for testing and infrastructure which will not be distributed).
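As a caricature of that kind of guardrail, a minimal sketch (the blocklist and function name are invented for illustration, not any vendor's actual filter):

    # Hypothetical keyword guardrail, the "if disney return false" approach:
    # refuse prompts that name well-known rightsholders instead of actually
    # resolving the underlying IP question.
    BLOCKED_TERMS = {"disney", "mickey mouse", "pixar", "marvel"}

    def allow_prompt(prompt: str) -> bool:
        lowered = prompt.lower()
        return not any(term in lowered for term in BLOCKED_TERMS)

    assert allow_prompt("a watercolor fox in a forest")
    assert not allow_prompt("Mickey Mouse in the style of Disney")

It dodges the lawsuit-bait cases without answering whether the training data itself was licensed.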
I'm surprised they didn't make a point which is especially painful for Open Source projects: AI might reduce coding effort, but it increases review effort, and maintainers are generally spending the majority of their time on review anyway. Making it easier to generate bad pull requests is seen as well-poisoning.
captn3m0 · 7h ago
The submission title should be “Asahi Linux Generative AI Policy”
aleksjess · 6h ago
I mean, I used the page title, and an excerpt from the text which mostly summarizes it. If you're right, I'll change the title.
mpalmer · 6h ago
No it shouldn't.
tim333 · 15m ago
Yeah, I think it's fair for the HN title to say what the piece is about.
am17an · 6h ago
In the future there will be safe havens where LLM-generated code has not been merged. It will be marketed as "hand-crafted" by Romanian programmers or something like that, akin to Swiss watches. It will be extremely high quality, but too expensive to mass-produce.
TrackerFF · 6h ago
I am by no means an AI/ML evangelist, but these types of posts just come off as something written by modern-day Luddites.
tim333 · 9m ago
The issue
>FOSS projects like Asahi Linux cannot afford costly intellectual property lawsuits in US courts
seems quite practical and non Ludditeish.
terminalbraid · 6h ago
Luddites in what sense? Because there's the lazy, gross stereotype that's been perpetuated with a negative connotation, and then there's the movement, which was not on its face anti-technology, but against how that technology was used to suppress labor.
https://en.wikipedia.org/wiki/Luddite
The historical luddites were not heroes of the working class. They were a special interest that used violence to keep less skilled people out of their industry, so they could keep making more money than the hoi polloi.
Dumb or evil, either way they are not people to be celebrated.
krapp · 6h ago
>They were a special interest that used violence to keep less skilled people out of their industry, so they could keep making more money than the hoi polloi.
So apart from the use of violence, the Luddites were essentially equivalent to modern tech culture. It's weird that they don't get more sympathy on HN.
Regardless, the complaints of the Luddites resonate because they would eventually apply to everyone, including the working class. It just happens that they came for the textile workers first.
salawat · 4h ago
>The historical luddites were not heroes of the working class. They were a special interest that used violence to keep less skilled people out of their industry, so they could keep making more money than the hoi polloi.
You mean like doctors and lawyers? Make no mistake: laws are threats. You can tell yourself it's about some imagined quality concern, but the extreme resource burden to enter the field creates a stark class boundary that is in general very hard to get past.
If anything, the Luddites actually understood what society was doing even if that same society said they weren't.
The only difference between a craftsman destroying a machine, and a lawyer or doctor erecting entry barriers through political means is the tools being employed at the time.
>Dumb or evil, either way they are not people to be celebrated.
I see them as neither dumb nor evil. I see them as wise, intelligent, motivated, and thoroughly realistic about the direction society was headed, recognizing that no one wanted to be on the hook for answering the question of "what about me?"
It's a natural response for a class suddenly evicted from a metastable position in the social order, and if anything should be vilified, it's the kind of rapid progress at all costs, bundled with a lack of care for who gets left behind, that so characterizes the modern incarnation of tech-capitalism.
lucumo · 4h ago
> it's the kind of rapid progress at all costs, bundled with a lack of care for who gets left behind
As opposed to the status quo at all costs, bundled with a lack of care for who gets to stay in the gutter?
They were just another special interest group that wanted to keep more of the pie for themselves, at the expense of people poorer than them. And they used violence to try to achieve that.
salawat · 2h ago
>As opposed to the status quo at all costs, bundled with a lack of care for who gets to stay in the gutter?
How about slower, incremental change with actual effort put into assessing the costs and downstream effects/externalities? I know, who has time for that though. Someone else's problem, right?
>They were just another special interest group that wanted to keep more of the pie for themselves, at the expense of people poorer than them.
Ever heard the phrase "every accusation an admission"? Often we end up projecting what we can't consciously think through onto the intents of others. It's too destructive to the conscious narrative, so it ends up externalized onto the "other", in spite of blossoming just as fruitfully in the self. You've demonstrated to me that, at the least, you see the game, you recognize how it's played, and you understand the consequences of not playing it.
>And they used violence to try to achieve that.
They broke machines. And they were beaten, strikebroken, vilified, and abused by capital wielders who were, in point of fact, not much poorer than them, as they could afford the machines in question, and who, when faced with the opportunity to divest themselves of the burden of doing business with those pesky tradesmen, were only too happy to do so.
Statistical voyeurism is a capitalist's favorite pastime. All the sexy of "number go up", none of the burden of the other side of the balance sheet.
kcplate · 6h ago
Pretty much all technology suppresses human labor to a degree; that's the whole point.
satisfice · 21m ago
To you.
simianwords · 7h ago
The environmental aspect is listed as one of the reasons, which makes me think this is more ideological than practical.
ezst · 3h ago
How are environmental aspects of LLMs "non-practical"?
mathiaspoint · 5h ago
I'm generally against the environmental movements and I think part of the problem is that the utility markets aren't elastic enough (or at all) to appropriately charge these companies for what they're doing.
But the huge amount of fresh water going to waste for cooling makes me very uncomfortable. In an ideal world it would really go the other way, with the waste heat from the data center used to desalinate salt water.
dkdbejwi383 · 5h ago
> I'm generally against the environmental movements
why is that, may I ask? Always interested to learn about alternative viewpoints.
mathiaspoint · 3h ago
Often it's either overblown or outright wrong and just used by wealthy people to manipulate the public. I think protecting the environment is very important but most political environmental movements are doing something else.
dkdbejwi383 · 22m ago
Yeah, fair enough. There is a lot of bullshit greenwashing and attempts to blame the public for corporations' use of plastic, burning of fuel, etc.
Green movements can sometimes be their own worst enemy too, letting perfect be the enemy of good and blocking small improvements because they don’t go far enough.
mschuster91 · 7h ago
First of all, it isn't wrong. The power consumption of AI training and inference is massive.
Second, that page is obviously meant to shed light on AI issues from a lot of different viewpoints; it would have been a serious omission not to mention environmental concerns.
simianwords · 7h ago
>First of all, it isn't wrong. The power consumption of AI training and inference is massive.
This is massively overstated. We ought to be more careful in performing such calculations.
vrighter · 6h ago
No, it is not, honestly. The argument that inference is cheaper than training does not hold up when you keep in mind that while the current model is being used for inference, another one at least as big is being trained simultaneously. Current models can't adapt to new information, so training is an ongoing thing, and inference comes in addition to the training costs, not instead of them.
mpalmer · 6h ago
This doesn't refute the (unmade) argument that inference is cheaper than training. Inference is cheaper than training, regardless of how much of each is taking place.
Not only that, both keep getting less expensive in various ways. Small models, though still not cheap to train, are making inference vanishingly cheap.
mschuster91 · 6h ago
To quote [1]:
> Alex de Vries is a Ph.D. candidate at VU Amsterdam and founder of the digital-sustainability blog Digiconomist. In a report published earlier this month in Joule, de Vries has analyzed trends in AI energy use. He predicted that current AI technology could be on track to annually consume as much electricity as the entire country of Ireland (29.3 terawatt-hours per year).
That's a lot of power, and the availability of electricity is already seen as a bottleneck in the development of AI models [2].
[1] https://spectrum.ieee.org/ai-energy-consumption
[2] https://www.tomshardware.com/tech-industry/artificial-intell...
> He predicted that current AI technology could be on track to annually consume as much electricity as the entire country of Ireland (29.3 terawatt-hours per year).
He didn’t though, did he? If you click through and read the report, it is quite clear in saying that this is a worst case scenario if you ignore things like it being unrealistic for Nvidia to even produce the requisite number of chips. It’s also based on the efficiency of 2023 models; he assumes 3 Wh per prompt when today it is 0.34 Wh.
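For scale, a rough back-of-envelope on those per-prompt figures (a sketch using only the numbers quoted above, not a measurement):

    # How many prompts the Ireland-sized budget (29.3 TWh/yr) buys at each
    # per-prompt efficiency quoted above. Rough arithmetic, not data.
    WH_PER_YEAR = 29.3e12  # 29.3 TWh expressed in Wh

    for wh_per_prompt in (3.0, 0.34):  # 2023 assumption vs. today's figure
        prompts = WH_PER_YEAR / wh_per_prompt
        print(f"{wh_per_prompt} Wh/prompt -> {prompts:.1e} prompts/yr")
    # 3.0 Wh/prompt  -> ~9.8e12 prompts/yr
    # 0.34 Wh/prompt -> ~8.6e13 prompts/yr, roughly 9x more for the same energy

So the projection is roughly 9x too pessimistic on efficiency alone, before you even get to the chip-supply caveat.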
It also completely ignores the fact that AI use can displace greater energy use:
The carbon emissions of writing and illustrating are lower for AI than for humans
> Our findings reveal that AI systems emit between 130 and 1500 times less CO2e per page of text generated compared to human writers, while AI illustration systems emit between 310 and 2900 times less CO2e per image than their human counterparts. […] the use of AI holds the potential to carry out several major activities at much lower emission levels than can humans.
— https://www.nature.com/articles/s41598-024-54271-x
It isn’t a bad thing if AI uses x Wh if not using AI uses 1500x Wh. Looking at the absolute usage without considering displacement only gives you half the picture.
simianwords · 6h ago
What is it as a percentage of all electricity used in the world? As a percentage of air travel? Of using Netflix? Of eating meat?
The population of Ireland is 0.06% of the world's population, btw.
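A quick sanity check on those shares (the world totals here are my own rough circa-2023 assumptions):

    # Rough comparison; world electricity and population are assumed values.
    WORLD_ELECTRICITY_TWH = 29_000  # approx. annual global generation (assumed)
    AI_PROJECTION_TWH = 29.3        # the Ireland-sized projection above
    WORLD_POPULATION = 8.1e9
    IRELAND_POPULATION = 5.2e6

    print(f"AI share of world electricity: "
          f"{AI_PROJECTION_TWH / WORLD_ELECTRICITY_TWH:.2%}")  # ~0.10%
    print(f"Ireland share of world population: "
          f"{IRELAND_POPULATION / WORLD_POPULATION:.2%}")      # ~0.06%

Even the worst-case projection works out to about a tenth of a percent of global electricity.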
rootnod3 · 4h ago
AI is on track to use more energy than Bitcoin this year. And Netflix and others are actively trying to reduce their usage, whereas all I see on the AI side right now is people scrambling to use more for bigger models in the hope that at some point something usable might come out of it.
aleksjess · 6h ago
Objectively, using as much power as a country with a population of a few million is a lot of energy.
rootnod3 · 6h ago
Not to mention that the amount of energy compared to the debatable benefits is... worrying.
serf · 24m ago
Have whatever opinion you want; we're going to rapidly run into issues where certain organizations are outrun by others that have more liberal policies about this kind of work.
Call it slop all you want; it doesn't change the unreasonable effectiveness that some individuals seem to have with such systems.
satisfice · 17m ago
How do you know it is effective? Many of us think it isn’t. Prove us wrong, if you can.
A problem with the slop coding movement is that they are happy living with wishful thinking.
meindnoch · 6h ago
So far I was averse to Asahi Linux, mostly due to the furry anime girl roleplay thing, but this is something I can stand by!
openmarkand · 6h ago
Am I missing something? I've been following the project since the beginning and just checked the wiki, the website, and all over the documentation, and haven't found anything relevant to anime.
Using that term throughout the article makes it hard to take seriously. I know nothing of this project, but right off the bat it seems like a project that has little credibility just because of the tone used throughout.
There's no need to turn it into a full-on tirade against this set of technologies either. Is this an appropriate place to complain about Reddit comments?
Ironically, the author could well benefit from running this slop through an llm to make it more professional.
doesnt_know · 7h ago
Interesting that you could have posted about any of the points being made in the link but you choose this one.
Personally I think the term is well deserved and am glad it continues to be popularised.
Fade_Dance · 6h ago
To be clear, I don't mind it as a casual term; I'm simply saying that, to me, it comes off as puerile in this context. It's akin to putting out a press release for a project with "Suckerberg" written every time Meta comes up, or, for an older reference, Micro$oft. It personally made the article hard to take seriously, and cast a bad first impression on the project. It may not come across that way to all; I've simply never been a fan of that highly editorialized and charged communication style when it comes to community management. It almost has a combative tone, sort of like when CMs argue with users that have opinions they don't like. Take it or leave it.
_____________
As for the individual points:
The initial concerns about copyright are convincing.
The point about resource impact ending with "these resources would literally be better spent anywhere else" devolved into meaningless grandstanding. I wouldn't mind seeing a project take a stand because of environmental impact, but again it just ends up sounding like the author has a bone to pick rather than a genuine concern about the environment. If that's not the case, then that's a prime example of why tone matters in communication.
The Reddit comment paragraph where the author berates users for using LLMs on social media is just odd and out of place. Maybe better suited to the off-topic section of their community forum/discord.
And the last point I simply disagree with. Highly knowledgeable people in a field that requires precision use LLMs every day. It's a tool like any other. I use it in financial trading (ex: it's great for scanning reams of SEC filings and earnings report transcripts), I know others who use it successfully in trading, and I know firms like Jane Street have it deeply integrated in their process.
tonypapousek · 7h ago
> I know nothing of this project
It's an impressive one, to say the least. It's worth taking a closer look and weighing the excellence created by the human mind before completely dismissing the article's arguments.
> Ironically, the author could well benefit from running this slop through an llm to make it more professional.
True, that would effectively strip out all the heart and soul from the prose.
FeteCommuniste · 6h ago
It would read so much nicer with a dozen em dashes and a bunch of “not only _, but _” constructions.
Fade_Dance · 6h ago
It's not a high bar that the author failed to cross, correct.
washadjeffmad · 5h ago
That mindset is itself part of the problem. A healthy sign of free speech is poor taste.
skerit · 7h ago
I agree. They have some good reasons why AI-generated code should not be used in this project, but the page just devolves into "all AI is bad", and the constant use of "Slop Generator" just makes me think of all the people that used to write "Micro$oft" (well, I did too, when I was twelve).
RUnconcerned · 7h ago
> Ironically, the author could well benefit from running this slop through an llm to make it more professional.
Have you considered that it is not the intent of the author to appear professional? That running it through the Slop Generator would obfuscate their intent to be snarky towards those who outsource all their thinking to Slop Generators?
JimDabell · 7h ago
> Have you considered that it is not the intent of the author to appear professional?
They have no obligation to sound professional but if they intended to appear incredibly childish then they succeeded, and people will judge them accordingly.
rimbo789 · 6h ago
But anything made by an AI is, by definition, slop
TrackerFF · 6h ago
So what happens when AI slop is nicer looking, and more robust than human slop?
rimbo789 · 4h ago
If a human makes it, it is inherently more valuable than what ai produces.
nurettin · 6h ago
Here's some context into what they mean by "AI slop": https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...
Reading the first two vulnerability reports makes it very clear.
andybak · 6h ago
Except they don't mean "specific examples of AI content are slop". They unambiguously say that all uses of generative AI are slop.
nurettin · 4h ago
That is indeed controversial, I'm just saying that after seeing enough examples like the above one might be persuaded to think that way.
electricboots · 7h ago
I think for a project like Asahi, and frankly any project with a reasonable blast radius, software or otherwise, the article is on point.
The author's lack of professionalism is a reasonable counter to the completely unhinged mainstream takes on AI/LLMs that we hear daily.
I think the Reddit example provides useful, generally relatable context, otherwise missing to the average reader.
My opinions are not to detract from the use of the tech or engineers working in the space, but motivated by a disgust for the hype.
Fade_Dance · 4h ago
The mainstream seems sort of anti-AI from what I get in the non-tech sphere. Ex: artists generally hate it. Writers generally hate it. Many people think it's damaging to society and are very pessimistic.
Marketing is completely ridiculous when it comes to the topic, but when isn't that the case with the next shiny thing? They even extolled the life-changing virtues of 3D TVs for one of the cycles.
I honestly hear far more unhinged AI-doomer stuff and constant pessimism, which makes me sort of sad (after all, it is cool tech that will do a lot of new things), than AI sycophancy. Do you not? If so, where? Granted, this is a US perspective, where there is currently a deep-seated pessimistic undercurrent about just about everything right now.
rootnod3 · 2h ago
It seems apparent in this thread that any remote criticism of AI results in downvotes.
Yes, the article might have some wording issues, but for an operating system project to choose to not allow AI written code for a product that is inherently in need of good security, and rather opt for “think before you write and fully understand what you are doing”, I don’t think that their choice is invalid.
I wouldn't want to get on a plane where half the core systems were written by a hallucinating AI.
cadamsdotcom · 5h ago
Agreed on the IP point. Strongly agreed on not wanting to go to court in the current US political climate.
On the rest, especially the confidently incorrect argument.. Not. so. much.
Firstly, models are stochastic parrots, but that truth is irrelevant because they're useful stochastic parrots.
Second, hallucinations and confidently incorrect outputs may yet be a thing of the past, so we should keep an open mind. It's possible that mechanistic interpretability research (a fancy term for "understanding what the model is thinking as it produces your response") will allow the UI to warn the user that uncertainty was detected in the model's response.
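For a toy illustration of the shape such a warning could take (invented numbers and threshold, and no claim that any product works this way), you could flag responses whose token distributions are high-entropy:

    import math

    # Toy sketch: flag a response as "uncertain" when any of the model's
    # per-token probability distributions is high-entropy (close to uniform).
    # The threshold and example distributions are made up for illustration.
    def token_entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    def flag_uncertain(per_token_probs, threshold_bits=1.5):
        entropies = [token_entropy(p) for p in per_token_probs]
        return max(entropies) > threshold_bits

    print(flag_uncertain([[0.9, 0.05, 0.05]]))       # False: peaked, ~0.57 bits
    print(flag_uncertain([[0.3, 0.25, 0.25, 0.2]]))  # True: near-uniform, ~1.99 bits

Real interpretability work goes far deeper than output entropy, but the UI hook could look something like this.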
Unfortunately none of that matters because the IP point is a blocker. Bummer.
WithinReason · 5h ago
> This is fundamentally the same mathematics that is used to predict the weather.
And yet, weather prediction works. Therefore, LLMs work?
harringdev · 6h ago
This is very hard to take seriously; it feels ideologically driven.
The introduction where they claim LLMs are useless for software engineering is just incorrect. They are useful for many software engineering tasks. I do think that vibe coding is rubbish however, and more junior SWEs very regularly misuse LLMs to produce nonsense code.
The only substantive point is that the LLM may regurgitate pieces of proprietary training data, although it seems unlikely that it would be incorporated wholesale into the codebase in such a way that it matters or opens them up to liability.
I do question whether LLMs would even be useful for such a niche project -- but I think this should be left up to developers to figure out how they complement their workflows rather than ruling out all uses of LLMs.
EDIT: I want to point out that I think the Asahi Linux project is a jewel of engineering and is extremely impressive.
roxolotl · 6h ago
Why is something ideologically driven inherently not worth taking seriously? I appreciate that their position may be more challenging to change, but people don't necessarily arrive at ideologies for unserious reasons, nor do ideologies have unserious results.
harringdev · 6h ago
You make a good point, and I will clarify what I mean.
A small disclaimer: I am not an AI booster. I think LLMs do have issues, and one should be careful with them.
I've found that there's a large group of people who dislike LLMs strongly and claim they are totally useless or even pernicious. I think this is grounded in truth -- they are trained on copyrighted work, they are used for spreading misinformation, they can produce sh^t code. But some folks take a radical/extremist approach and totally dismiss them as useless -- often without actually using LLM-powered tools in any meaningful way.
They are useless for many applications, but programming is not one of them. I think a blanket ban on LLMs has to be somewhat unfounded/radical because they do have practical applications in writing code. The tab-completion models are extremely useful for example.
For this more niche project I would think LLMs might not be as useful as they are for other projects. This said, I still think there would be a variety of use-cases where they could be helpful.
nsksl · 3h ago
Because decisions like these have to be taken purely on the basis of technical merits, not political ideology.
000ooo000 · 3h ago
Says who?
josefritzishere · 2h ago
Two large-scale studies so far. It's worth saying that thus far there have only been two studies on the efficiency gains AI produces, and they both report that it's a negative number. Perhaps later studies will rebut those findings, but SCIENCE! At this time we lack findings that support the contention that AI is a benefit, at least when measured in work hours.
pjc50 · 5h ago
Linux is, in and of itself, ideological.
mpalmer · 6h ago
I think the better word is extremist, because choosing to brand all LLMs as "Slop Generators" is just that, an extremist position with a rather self-satisfied air about it.
> "These resources are better used on quite literally anything else."
What shocks me most is that we have found something less useful than bitcoin mining. Remember all the articles about the environmental impact of bitcoin? That is peanuts compared to what the world's largest companies are building to power the next LLM.
RUnconcerned · 6h ago
While I agree that the value of LLMs is wildly overstated, I disagree that it is less useful than bitcoin mining, which is entirely useless. At least LLMs can produce usable output.
pfoof · 7h ago
I can agree with all but the last point.
There's no proof, nor counter-proof, that the human brain works like that.
krige · 7h ago
Can't prove the negative. Prove that it does instead.
pfoof · 7h ago
I'll come back to this comment in 15 years and admit you were right.