With the benefit of hindsight, the issue should have been dismissed outright, and the account reported as invalid, right at the third message (November 30, 2024, 8:58pm UTC); the fact that the curl maintainers allowed the "dialog" to continue for six more messages was a mistake and a waste of effort.
I would even encourage the curl maintainers to reject upfront any report that fails to mention a line number in the source code, or a specific piece of input that triggers the issue (a rough sketch of such a gate follows below).
It's unfortunate that AI is being used to worsen the signal/noise ratio [1] of sensitive topics such as security.
[1] http://www.meatballwiki.org/wiki/SignalToNoiseRatio
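A minimal sketch of that kind of gate, in Python. The patterns and names here are my own assumptions for illustration, not anything the curl project actually runs:

    import re

    # Hypothetical pre-screening gate: bounce any report that neither cites
    # a source location (file.c:line) nor includes anything that looks like
    # a reproduction recipe. Pure heuristics; the patterns are made up.
    LINE_REF = re.compile(r"[\w./-]+\.[ch]:\d+")   # e.g. lib/http.c:1432
    REPRO = re.compile(r"(?i)curl -|\bpoc\b|proof of concept|reproduc")

    def worth_a_human_look(report: str) -> bool:
        """True only if the report names a line or gives concrete trigger input."""
        return bool(LINE_REF.search(report) or REPRO.search(report))

    if __name__ == "__main__":
        vague = "There is a critical use-after-free somewhere in the HTTP code."
        concrete = "Overflow in lib/http.c:1432, trigger: curl -H 'X: ...' https://x"
        print(worth_a_human_look(vague))     # False -> auto-close, ask for specifics
        print(worth_a_human_look(concrete))  # True  -> route to a maintainer

It wouldn't stop a determined spammer, but it would make the laziest copy-paste reports fail fast.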
It's pretty clear that in about half of these the "researcher" is just copy-pasting the follow-up questions back into whatever LLM they used originally. What a colossal waste of everyone's time.
I think the only saving grace right this second is that the hallucinations are obvious and the text generation is just awkward enough, with its overly eager phrasing, to recognize. But if you're seeing it for the first time, it can be surprisingly convincing.
bluGill · 5h ago
As time goes on they are getting faster at closing such reports. However, they started off with an assumption of honesty, and only after being burned repeatedly did they give up on it.
This is bad for the honest person who can't describe a real issue well, though.
raverbashing · 9h ago
Honestly? It might be wiser to block submissions from certain parts of the world that are known for spamming things like that.
Or have an infosec captcha, but that's harder to come by
bgwalter · 7h ago
49 points, 4 hours, but only on page three.
This is a highly relevant log of the destructive nature of "AI", which consumes human time and has no clue what is going on in the code base. "AI" is like a five-year-old who has picked up some words and wants to sound smart.
I suppose the era of bug bounties is over.
AlSweigart · 5h ago
The primary use case of LLMs is producing undetectable spam.
heybrendan · 9h ago
I worked my way through about half the examples. What appalling behavior by several of the "submitters".
This comment [1] by icing (curl staff) sums up the risk:
> "This report and your other one seem like an attack on our resources to handle security issues."
Maintainers of widely deployed, popular software, including those who have openly made a commitment to engineering excellence [2] and responsiveness [like the curl project, AFAICT], cannot afford /not/ to treat each submission with some level of preliminary attention and seriousness.
Submitting low-quality, bogus reports generated by a hallucinating LLM, and then doubling down by being deliberately opaque and obtuse during the investigation and discussion, is disgraceful.
[1] https://hackerone.com/reports/3125832#activity-34389935
[2] https://curl.se/docs/bugs.html (Heading: "Who fixes the problems")
The consequence of having an issue report system is that people submit random shit just to report something. The fact that they use AI to autogenerate reports allows them to do that at an unprecedented scale. The obvious solution to this problem is to use AI to filter out reports that aren't valuable. Have AI talk to AI.
This might sound silly, but it's not. It's just an advanced version of automatic vulnerability scans (rough sketch below).
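A minimal sketch of that idea, assuming an `ask` callable wired to whatever model you run. The prompt and verdict format are invented for illustration, not any real HackerOne or curl integration:

    from typing import Callable

    # Hypothetical "AI talks to AI" triage: a cheap model screens incoming
    # reports before a human sees them. Anything flagged VALID still goes
    # to a maintainer; only obvious slop gets bounced automatically.
    TRIAGE_PROMPT = """You are triaging a security report for the curl project.
    Answer VALID only if the report cites a concrete function or source line
    and describes input that plausibly reaches it. Otherwise answer INVALID
    followed by a one-line reason.

    Report:
    {report}
    """

    def pre_screen(report: str, ask: Callable[[str], str]) -> tuple[bool, str]:
        """Return (worth_human_time, verdict_text)."""
        verdict = ask(TRIAGE_PROMPT.format(report=report)).strip()
        return verdict.upper().startswith("VALID"), verdict

    # Usage (hypothetical client): ok, why = pre_screen(body, ask=my_client.complete)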
bfrog · 6h ago
AI slop is coming in all forms. I see people using AI for code reviews on GitHub now, and they are a net negative, leading people to do the wrong things.