Reading through the github issues, the author of this article comes off as very rude and entitled.
I'm on the side of the CoC committee who told the author they engaged without enough consideration or kindness.
Reporting bugs is nice. It's less nice if, when a maintainer asks for a clearer reproduction, you respond with "I already gave you a reproduction, even if you have to edit it a little. I'm not a programmer, all I can give you is some AI spam. I'll leave it up to you to do your jobs" (edited only lightly from what the author really wrote).
kordlessagain · 1h ago
Seyfarth admits he might’ve crossed a line, but he always wraps it in some version of “yeah, but look what they did first.” That’s textbook rationalization. He can’t be wrong if he was provoked. Reading more of his stuff, it’s clear the guy has a serious fixation on procedural control. As long as the system works in his favor, he’s its biggest fan. But the second it doesn’t validate him? He flips the table, blames everyone else, and rebrands it as a systemic failure. What starts as a disagreement turns into a legal crusade every time.
hyperhello · 1h ago
> He derided my attempt to use an AI summary to bridge a communication gap (I explicitly stated I'm not a programmer) as a "...stochastic parrot designed to produce lies instead of actionable information...".
I don't really have a dog in the race, but I think people should react this way to AI communication. They should be shunned and informed in no uncertain terms that they are not welcome to communicate any more.
TheDong · 1h ago
Even that phrasing is kinda rude. "bridge a communication gap", presumably the gap between programmer and non-programmer, right?
He used AI as in he gave the AI the patch that regressed, asked the AI to "find the bug", and pasted that output.
This would be akin to walking up to the architect for a house, and saying "it's not my job to build buildings, but I tried to use this lego design to show you how to do your job. Look, the legos snap together, can't you do that with the house?"
Using AI to try and explain things to a subject matter expert, which you yourself do not understand, will come off like that, like you're second-guessing, belittling, and underestimating their expertise all at once.
carols10cents · 39m ago
And the architect is a volunteer for Habitat for Humanity.
welferkj · 1h ago
Agreed, but this needs to be codified in the CoC, otherwise people will use it to rules-lawyer and treat morally correct anti-AI bias as a character flaw.
KingOfCoders · 1h ago
The accusations and bad CoC behaviour came before any usage or mention of AI. This mixes up cause and effect.
zzrrt · 1h ago
I have a nitpick about how AI is supposedly "designed to produce lies." That's pretty clearly false, unless you really believe the creators of AI set out from the start to spread lies through their products. Call them careless, or call the technology inherently flawed, but neither amounts to "design[ing]" liar technology.
This might have been too petty to comment on, if it weren't for the irony that an arrogant human asserting that AIs are fallible made his own logical error or exaggeration in the same sentence. Was he designed to produce lies too?
Edit: I'm not really defending a layman using AI to produce patches, just criticizing the idea that everything AI produces is a non-actionable lie by design.
tomovo · 1h ago
> Maybe the comment that voiced my anger crossed a line, too. I take full responsibility for that. But I think that this provoked reaction is understandable after all the time and effort spent to solve this issue constructively by a non-technical person.
> it further demonstrated my good intentions
> "you are arguing with a law professional"
> "AI summary," ... shows the effort I am willing to invest ...
Wow.
QuadmasterXLII · 1h ago
The author of this blog post does not come off as well as he thinks he does.
geocar · 1h ago
Agreed.
I feel like it's probably a Google VP who works on Gemini astroturfing.
yeputons · 1h ago
> Open source thrives on collaboration. ... Central to this ecosystem are Codes of Conduct (CoCs), designed to ...
Open source thrived back in the early 2000s too. I don't remember anything even remotely resembling a Code of Conduct back then, although I wasn't paying much attention. Was it a thing?
I found that Drupal adopted a CoC in 2010, and Ubuntu already had one no later than 2005 (the "Ubuntu Management Philosophy" book from 2005 mentions it).
Reading the original issue (instead of the article):
"But I am not blaming you for not having a degree in software engineering."
But then
"And you admitted that the large scripts in question contain hardcoded information such as your personal computer's login username, clearly those scripts won't work out of the box on someone else's machine".
while the other committer ends with
"You're free to propose a patch yourself instead. "
So the committer acknowledges that the user is not a software developer, but then the two of them demand that the user do things the user might not be able to do.
That's not going to work.
whstl · 58m ago
I don't think anyone involved ends up looking good.
KingOfCoders · 18m ago
I'd agree.
watusername · 1h ago
Looking at the whole interaction as well as the AI patch (https://github.com/llvm/llvm-project/pull/125602#issuecommen...) the author submitted, I have to disagree. It removes the flag setting altogether and adds useless code. It demonstrates that the author really has no understanding of the code, which may be okay for your weekend SaaS but definitely not for build system code in critical compiler infrastructure. To put it bluntly: This _is_ AI slop.
There's no denying that AI is helpful, _when_ the human has some baseline knowledge to check the output and steer the model in the correct direction. In this case, it's just wasting the maintainers' time.
I've seen many instances of this happening in the support channels for Nix/NixOS, which has been around long enough for the models to give plausible responses, yet too niche for them to emit usable output without sufficient prompting:
"No, this won't work because [...]"
"What about [another AI response that is incorrect]?"
(multiple back-and-forths, everyone is tired)
> He derided my attempt to use an AI summary to bridge a communication gap (I explicitly stated I'm not a programmer)
LLVM has already found that AI summaries tend to provide negative utility when it comes to bug reports, and it has a policy of not using them. The moment you admit "an AI told me that...", you've told every developer in the room that you don't know what you're doing, and very likely, trying to get any useful information out of you to resolve the bug report is going to be at best painful. (cf. https://discourse.llvm.org/t/rfc-define-policy-on-ai-tool-us...)
Looking over the bug report in question... I disagree with the author here. The original bug report is "hi, you have lots of misnamed compiler option warnings when I build it with my toolchain," which is not very actionable. The scripts provided don't really give much insight into what the problem might be, and having loads and loads of options in a configure command increases the probability that it breaks for no good reason. Also, for extra good measure, the link provided is to the latest version of a script, which means it can change and no longer reproduce the issue in question.
Quite frankly, the LLVM developer basically responded with "hey, can you provide better, simpler steps to reproduce?" To which the author responds [1] "no, you should be able to figure it out from what I've given already." Which, if I were in the developer's shoes, would cause me to silently peace out at that moment.
At the end of the day, what seems to me to have happened is that the author didn't provide sufficient detail in their initial bug report and bristled quite thoroughly at being asked to provide more detail. Eli Schwartz might have crossed the line in response, but the author here was (to me) quite clearly the first person to have thoroughly crossed the line.
> Do your job
Which is a volunteer-based job lol. Even if it was said in a heated argument, the bug reporter never really apologizes, from what I read.
Maybe that's a "strawman" though
KingOfCoders · 1h ago
Developers feel the end times coming - out the pitchforks for any mention of AIs. Reading the article it does not seem to be an auto-generated AI bug report - nevertheless the pitchforks are out and the mob is even infiltrating unrelated bug threads on a different project to burn the heretic. The end times are coming.
high_na_euv · 1h ago
Why are links going through Google.com? It's shady as hell.
zzrrt · 5m ago
Maybe Blogspot wraps all links like that, to fight malware and SEO.
ajb · 1h ago
Probably just copypasting them out of google search, it's an easy mistake to make if you're not that technical.
faresahmed · 1h ago
I'm not quite sure why a non-technical person would be engaging in a technical matter such as compiling LLVM. They say they're involved with some Arch Linux derivative, but the question still stands.
high_na_euv · 1h ago
For the PR link I'd agree, but a specific comment link?!
malcolmgreaves · 1h ago
So, is this a lawyer using his normal legal tactics of intimidation against LLVM devs who are donating their time to provide open source software? And is his aim, after incompetently messing up multiple parts of the process and getting his own feelings hurt, to have… other people coddle his feelings?
EDIT -
> Once again these two Gentoo developers showed a lack of good manners.
…
> hold a personal grudge against me
Yes, indeed, this non-technical person seems to have found that while they don't have a mind sharp enough for software, nor the understanding that they can't talk to people the way they do as a lawyer, they're well on their way into the subculture of posting emotional rants onto the internet. (haha!)
itsanaccount · 1h ago
"actually do your job"
What an amazingly effective phrase to get open source developers to do what you want. /s
rho4 · 1h ago
I feel the author. Very often when I report issues to open source projects, the first response is "why don't you submit a patch?", followed by subtle hints that I am a leech profiting off the backs of volunteers. I am also at the point where I seriously ask myself whether I should invest the time to report and provide minimal examples for reproduction.
https://en.wikipedia.org/wiki/Etiquette_in_technology
[1] Direct link, so you can judge for yourself if my interpretation is correct: https://github.com/llvm/llvm-project/issues/72413#issuecomme...