Define policy forbidding use of AI code generators

378 points by todsacerdoti on 6/25/2025, 11:26:55 PM (github.com) | 234 comments


benlivengood · 8h ago
Open source and libre/free software are particularly vulnerable to a future where AI-generated code is ruled to be either infringing or public domain.

In the former case, disentangling AI edits from human edits could tie a project up in legal proceedings for years, and projects don't have any funding to fight a copyright suit. Specifically, code that is AI-generated and subsequently modified or incorporated into the rest of the code would raise the question of whether subsequent human edits were non-fair-use derivative works.

In the latter case, the license restrictions no longer apply to portions of the codebase, raising similar issues for derived code; a project that is only 98% OSS/FS licensed suddenly has much less leverage in takedowns against companies abusing the license terms, since it would have to prove that infringers are definitely using the human-generated and licensed code.

Proprietary software is only mildly harmed in either case; it would require speculative copyright owners to disassemble their binaries and try to make the case that AI-generated code infringed without being able to see the codebase itself. And plenty of proprietary software has public domain code in it already.

graemep · 1h ago
Proprietary source code would not usually end up training LLMs. Unless it's leaked, how would an LLM have access to it?

> it would require speculative copyright owners to disassemble their binaries

I wonder whether AI might be a useful tool for making that easier.

If you have evidence then you can get courts to order disclosure or examination of code.

> And plenty of proprietary software has public domain code in it already.

I am pretty sure there is a significant amount of proprietary code that has FOSS code in it, against license terms (especially GPL and similar).

A lot of proprietary code is now being written using AIs trained on FOSS code, and companies are open about this. It might open an interesting can of worms.

physicsguy · 1h ago
> Unless its leaked

Given the number of people on HN who say they're using e.g. Cursor, OpenAI, etc. through work, and my experience with workplaces saying 'absolutely you can't use it', I suspect a large amount is being leaked.

Thorrez · 1h ago
Is there any likelihood that the output of the model would be public domain? Even if the model itself is public domain, the prompt was created by a human and impacted the output, so I don't see how the output could be public domain. And then after that, the output was hopefully reviewed by the original prompting human and likely reviewed by another human during code review, leading to more human impact on the final code.
AndrewDucker · 32m ago
There is no copyright in AI art. Presumably the same reasoning would apply to AI code: https://iclg.com/news/22400-us-court-confirms-ai-generated-a...
AJ007 · 8h ago
I understand why experienced developers don't want random AI contributions from no-knowledge "developers" contributing to a project. In any situation, if a human had to review AI code line by line, that would tie up humans for years, even ignoring any legal issues.

#1 There will be no verifiable way to prove something was AI generated beyond early models.

#2 Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects. The only room for debate on that is an apocalypse level scenario where humans fail to continue producing semiconductors or electricity.

#3 If a project successfully excludes AI contributions (not clear how other than controlling contributions to a tight group of anti-AI fanatics), it's just going to be cloned, and the clones will leave it in the dust. If the license permits forking then it could be forked too, but cloning and purging any potential legal issues might be preferred.

There still is a path for open source projects. It will be different. There's going to be much, much more software in the future and it's not going to be all junk (although 99% might.)

amake · 8h ago
> #2 Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects

Still waiting to see evidence of AI-driven projects eating the lunch of "traditional" projects.

viraptor · 7h ago
It's happening slowly all around. It's not obvious because people producing high quality stuff have no incentive at all to mark their changes as AI-generated. But there are also local tools generated faster than you could adjust existing tools to do what you want. I'm running 3 things now just for myself that I generated from scratch instead of trying to send feature requests to existing apps I can buy.

It's only going to get more pervasive from now on.

amake · 3h ago
> It's not obvious because people producing high quality stuff have no incentive at all to mark their changes as AI-generated

I feel like we'd be hearing from businesses that crushed their competition by delivering faster or with fewer people. Where are those businesses?

> But there are also local tools generated

This is really not the same thing as the original claim ("Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects").

TeMPOraL · 2h ago
> I feel like we'd be hearing from businesses that crushed their competition by delivering faster or with fewer people. Where are those businesses?

As if the tech part were the major part of getting the product to market.

Those businesses are probably everywhere. They just aren't open about admitting they're using AI to speed up their marketing/product design/programming/project management/graphics design, because a) it's not normal outside some tech startup sphere to brag about how you're improving your internal process, b) almost everyone else is doing it too, so it partially cancels out - that is what competition on the market means, and c) admitting to use of AI in the current climate is kind of a questionable PR move.

As for those who fail to leverage the new tools and are destined to be outcompeted: that process takes time, because companies have inertia.

>> But there are also local tools generated

> This is really not the same thing as the original claim

The point is that such wins compound. You get yak shaving done faster by fashioning your own tools on the fly, and it also cuts cost and a huge burden of maintaining relationships with third parties.[0]

--

[0] - Because each account you create, each subscription you take, even each online tool you kinda track and hope hope hope won't disappear on you - each such case comes with a cognitive tax of a business relationship you probably didn't want, that often costs you money directly, and that you need to keep track of.

amake · 59m ago
> Those businesses are probably everywhere. They just aren't open about admitting

"Where's the evidence?" "Probably everywhere."

OK, good luck, have fun

TeMPOraL · 5m ago
Yup. Or, "Just look around!".
bredren · 2h ago
This is happening right now and it won’t be obvious until the liquidity events provide enough cover for victory-lap storytelling.

The very knowledge that an organization is experiencing hyper acceleration due to its successful adoption of AI across the enterprise is proprietary.

There are no HBS case studies about businesses that successfully established and implemented strategic pillars for AI because the pillars were likely written in the past four months.

amake · 1h ago
> This is happening right now and it won’t be obvious until

I asked for evidence and, as always, lots of people are popping out of the woodwork to swear that it's true but I can't see the evidence yet.

OK, then. Good luck with that.

alganet · 7h ago
Can you show these 3 things to us?
WD-42 · 6h ago
For some reason these fully functional ai generated projects that the authors vibe out while playing guitar and clipping their toenails are never open source.
TeMPOraL · 1h ago
Going by the standard of "But there are also local tools generated faster than you could adjust existing tools to do what you want", here's a random one of mine that's in regular use by my wife:

https://github.com/TeMPOraL/qr-code-generator

Built with Aider and either Sonnet 3.5 or Gemini 2.5 Pro (I forgot to note that down in this project), and recently modified with Claude Code because I had to test it on something.

Getting the first version of this up was literally both faster and easier than finding a QR code generator that I'm sure is not bloated, not bullshit, not loaded with trackers, that's not using shorteners or its own URL (it's always a stupid idea to use URL shorteners you don't control), not showing ads, mining bitcoin and shit, one that my wife can use in her workflow without being distracted too much. Static page, domain I own, a bit of fiddling with LLMs.

What I can't link to is half a dozen single-use tools or faux tools created on the fly as part of working on something. But this happens to me a couple times a month.

To anchor another vertex in this parameter space, I found it easier and faster to ask an LLM to build me a "breathing timer" (one that counts down N seconds and resets, repeatedly) with an analog indicator, because a search query to Google/Kagi would be of comparable length, and then I'd have to click on results!
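
For a sense of scale, a minimal terminal-only sketch of such a timer (an illustration only, not the actual tool described above) fits in a dozen lines of Python:

```python
# Minimal terminal "breathing timer": counts down N seconds with a bar, then repeats.
import sys
import time

PERIOD = 4   # seconds per breath phase; adjust to taste
WIDTH = 30   # width of the "analog" bar

try:
    while True:
        for remaining in range(PERIOD, 0, -1):
            filled = int(WIDTH * remaining / PERIOD)
            bar = "#" * filled + "-" * (WIDTH - filled)
            sys.stdout.write(f"\r[{bar}] {remaining:2d}s ")
            sys.stdout.flush()
            time.sleep(1)
except KeyboardInterrupt:
    print()
```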

EDIT: Okay, another example:

https://github.com/TeMPOraL/tampermonkey-scripts/blob/master...

It overlays a trivial UI to set up looping over a segment of any YouTube video, and automatically persists the setting by video ID. It solves the trivial annoyance of channel jingles and other bullshit at start/end of videos that I use repeatedly as background music.

This was mostly done zero-shot by Claude, with maybe two or three requests for corrections/extra features; total development time was maybe 15 minutes. I've used it every day ever since.

You could say, "but SponsorBlock" or whatever, but per what GP wrote, I just needed a small fraction of functionality of the tools I know exist, and it was trivial to generate that with AI.

bredren · 2h ago
Mine is. And it is awesome: https://github.com/banagale/FileKitty

The most recent release includes a MacOS build in a dmg signed by Apple: https://github.com/banagale/FileKitty/releases/tag/v0.2.3

I vibed that workflow just so more people could have access to this tool. It was a pain and it actually took time away from toenail clipping.

And while I didn't lay hands on a guitar much during this period, I did manage to build this while bouncing between playing Civil War tunes on a 3D-printed violin and generating music in Suno for a soundtrack to “Back on That Crust,” the missing and one true spiritual successor to ToeJam & Earl: https://suno.com/song/e5b6dc04-ffab-4310-b9ef-815bdf742ecb

fc417fc802 · 5h ago
> the authors vibe out while playing guitar and clipping their toenails

I don't think anyone is claiming that. If you submit changes to a FOSS project and an LLM assisted you in writing them how would anyone know? Assuming at least that you are an otherwise competent developer and that you carefully review all code before you commit it.

The (admittedly still controversial) claim being made is that developers with LLM assistance are more productive than those without. Further, that there is little incentive for such developers to advertise this assistance. Less trouble for all involved to represent it as 100% your own unassisted work.

EGreg · 4h ago
Why would you need to carefully review code? That is so 2024. You’re bottlenecking the process and are at a disadvantage when the AI could be working 24/7. We have AI agents that have been trained to review thousands of PRs that are produced by other, generative agents, and together they have already churned out much more software than human teams can write in a year.

AI “assistance” is a short intermediate phase, like the “centaurs” that Garry Kasparov was very fond of (human + computer beat both a human and a computer by itself… until the computer-only became better).

https://en.wikipedia.org/wiki/Advanced_chess

amake · 3h ago
> We have AI agents that have been trained to review thousands of PRs that are produced by other, generative agents, and together they have already churned out much more software than human teams can write in a year.

Was your comment tongue-in-cheek? If not, where is this huge mass of AI-generated software?

rvnx · 2h ago
All around you, just that it doesn’t make sense for developers to reveal that a lot of their work is now about chunking and refining the specifications written by the product owner.

Admitting such is like admitting you are overpaid for your job, and that a 20 USD AI-agent can do better and faster than you for 75% of the work.

Is it easy to admit that the skills you have learnt over 10+ years are already progressively being replaced by a machine (like thousands of jobs in the past)?

More and more, being a developer is going to be a monkey job where your only task is to make sure there is enough coal in the steam engine.

Compilers destroyed the jobs of developers writing assembler code; they had to adapt. They insisted that hand-written assembler was better.

Here it is the same, except you write code in natural language. It may not be optimal in all situations, but it often gets the job done.

alganet · 11m ago
I have a complete proof that P=NP but it doesn't make sense to reveal to the world that now I'm god. It would crush their little hearts.
amake · 1h ago
> All around you, just that it doesn’t make sense for developers to reveal that

OK, but I asked for evidence and people just keep not providing any.

"God is all around you; he just works in mysterious ways"

OK, good luck with that.

bonzini · 1h ago
Good luck debugging
dcow · 5h ago
Except this one is (see your sibling).
viraptor · 6h ago
Only the simplest one is open (and before you discount it as too trivial, somehow none of the other ones did what I wanted) https://github.com/viraptor/pomodoro

The others are just too specific to be useful for anyone else: an Android app for automatic processing of some text messages, and a work scheduling/prioritising thing. The time to make them generic enough to share would be much longer than creating my specific version in the first place.

a57721 · 1h ago
> and before you discount it as too trivial, somehow none of the other ones did what I wanted

No offense, it's really great that you are able to make apps that do exactly what you want, but your examples don't do much to show that "software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects" (as someone else suggested above). Complex real-world software is different from pomodoro timers and TODO lists.

linsomniac · 5h ago
Not OP, but:

I'm getting towards the end of a vibe-coded ZFS storage backend for ganeti that includes the ability to live-migrate VMs to another host by: taking a snapshot and replicating it to the target, pausing the VM, taking another incremental snapshot and replicating it, and then unpausing the VM on the new destination machine. https://github.com/linsomniac/ganeti/tree/newzfs
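
For illustration, that migration sequence looks roughly like this (a sketch only, with placeholder dataset/host names and pause/resume helpers; not the actual ganeti backend code):

```python
# Sketch of the described live-migration flow; dataset names, hosts, and the
# pause/resume helpers are placeholders, not the real ganeti integration.
import subprocess

def run(cmd: str) -> None:
    print(f"+ {cmd}")
    subprocess.run(cmd, shell=True, check=True)

def pause_vm(vm: str) -> None:
    print(f"(placeholder) pause {vm} via the hypervisor")

def resume_vm(vm: str, host: str) -> None:
    print(f"(placeholder) resume {vm} on {host}")

def migrate(vm: str, dataset: str, target: str) -> None:
    # 1. Snapshot and replicate the bulk of the disk while the VM keeps running.
    run(f"zfs snapshot {dataset}@base")
    run(f"zfs send {dataset}@base | ssh {target} zfs receive -F {dataset}")
    # 2. Pause the VM so the disk stops changing.
    pause_vm(vm)
    # 3. Snapshot again and send only the small delta accumulated since step 1.
    run(f"zfs snapshot {dataset}@final")
    run(f"zfs send -i @base {dataset}@final | ssh {target} zfs receive {dataset}")
    # 4. Bring the VM back up on the destination host.
    resume_vm(vm, target)

# Example with hypothetical names: migrate("guest1", "tank/vms/guest1", "node2")
```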

Other LLM tools I've built this week:

This afternoon I built a web-based SQL query editor/runner with results display, for dev/ops people to run read-only queries against our production database. It replaces an existing super-simple one and adds query syntax highlighting, a snippet library, and other modern features. I can probably release this, though I'd need to verify that it won't leak anything. Targets SQL Server.

A couple of CLI Jira tools: one to pull a list of tickets I'm working on (with a cache so I get an immediate response, then updates after the Jira response comes back), and one for tickets with tags that indicate I have to handle them specially.

An icinga CLI that downtimes hosts, for when we do sweeping machine maintenances like rebooting a VM host with dozens of monitored children.

An Ansible module that is a "swiss army knife" for filesystem manipulation, merging the functions of copy, template, and file, so you can loop over a list and: create a directory, template a couple of files into it (with a notify on one and a when on another), and ensure a file exists if it doesn't already, to reduce duplication of boilerplate when doing a bunch of file deploys. This I will release as an Ansible Galaxy module once I have it tested a little more.

amake · 3h ago
None of this seems relevant to the original claim: "Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects"

I don't feel like it's meaningful to discuss the "competitiveness" of a handful of bespoke local or internal tools.

cess11 · 3h ago
linsomniac · 3h ago
Thanks, I hadn't pushed from my test cluster, check again. "This branch is 12 commits ahead of, 4 commits behind ganeti/ganeti:master"
EGreg · 4h ago
I vibe-coded my own MySQL-compatible database that performs better than MariaDB, after my agent optimized it for 12 hours. It is also a time-traveling DB and performs better on all benchmarks and the AI says it is completely byzantine-fault-tolerant. Programmers, you had a nice run. /s
nijave · 6h ago
Not sure about parent, but you could argue JetBrains' fancy autocomplete is AI and generates a substantial portion of code. It runs using a local model and, in my experience, does a pretty good job of guessing the rest of the line with minimal input (so you could argue 80% of each line was AI generated).
mcoliver · 6h ago
80-90% of Claude is now written by Claude
amake · 57m ago
Using AI tools to make AI tools is not the impact outside of the AI bubble that people are looking for.
0x457 · 5h ago
And whose lunch is it eating?
rvnx · 2h ago
Your lunch. The developers behind Claude are very rich and do not need their developer careers, since they have enough to retire.
luqtas · 7h ago
that's like driving big personal vehicles, having a bunch of children, eating a bunch of meat, and doing nothing about it because marine and terrestrial ecosystems haven't been fully destroyed by global warming yet
lynx97 · 4h ago
Ahh, there you go, environmental activists outright saying having children is considered a crime against nature. Wonderful, you've hit a rather bad stereotype right on the head. What is next? Earth would be better off if humanity was eradicated?
basilgohar · 7h ago
I feel like this is mostly a proofless assertion. I'm aware that what you hint at is happening, but the conclusions you arrive at are far from proven or even reasonable at this stage.

For what it's worth, I think AI for code will arrive at a place like how other coding tools sit – hinting, intellisense, linting, maybe even static or dynamic analysis, but I doubt NOT using AI will be a critical asset to productivity.

Someone else in the thread already mentioned it's a bit of an amplifier. If you're good, it can make you better, but if you're bad it just spreads your poor skills like a robot vacuum spreads animal waste.

galangalalgol · 7h ago
I think that was his point: the project full of bad developers isn't the competition. It is a peer whose skill matches yours and who uses agents on top of that. By myself I am no match for myself + Cline.
Retric · 5h ago
That’s true in the short term. Longer term it’s questionable, as using AI tools heavily means you don't remember all the details, creating a new form of technical debt.
linsomniac · 4h ago
Dude, have you ever looked at code you wrote 6 months ago and gone "What was the developer thinking?" ;-)
ringeryless · 4h ago
yes, constantly. I also don't remember much contextual domain info of a given section of code about 2 weeks into delving into some other part of the same app.

So-called AI makes this worse.

Let me remind you of gyms, now that humans have been saved from much manual activity...

linsomniac · 4h ago
> So-called AI makes this worse.

The AI tooling is also really, really good at piecing together the code, the contextual domain, the documentation, the tests, and the related issues/tickets; it could even take the change history into account, and help refresh your memory of unfamiliar code in the context of bugs or new changes you are looking at making.

Whether or not you go to the gym, you are probably going to want to use an excavator if you are going to dig a basement.

CamperBob2 · 4h ago
I don't need to remember much, really. I have tools for that.

Really, really good tools.

otabdeveloper4 · 3h ago
IMO LLMs are best when used as locally-run offline search engines. This is a clear and obvious disruptive technology.

But we will need to get a lot better at finetuning first. People don't want generalist LLMs, they want "expert systems".

danielbln · 41m ago
Speak for yourself, I prefer generalist LLMs. Also, the bitter lesson of ML applies.
blibble · 8h ago
> #2 Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects

"competitive", meaning: "most features/lines of code emitted" might matter to a PHB or Microsoft

but has never mattered to open source

A4ET8a8uTh0_v2 · 7h ago
I am of two minds about it, having now seen both good coders augmented by AI and bad coders further diminished by it (I would even argue it's worse than Stack Overflow, because back then they would at least have had to adjust the code a little bit).

I am personally somewhere in the middle, just good enough to know I am really bad at this, so I make sure that I don't contribute to anything that is actually important (like QEMU).

But how many people recognize their own strengths and weaknesses? That is part of the problem and now we are proposing that even that modicum of self-regulation ( as flawed as it is ) be removed.

FWIW, I hear you. I also don't have an answer. Just thinking out loud.

heavyset_go · 6h ago
Regarding #1, at least in the mainframe/cloud model of hosted LLMs, the operators have a history of model prompts and outputs.

For example, if using Copilot, Microsoft also has every commit ever made if the project is on GitHub.

They could, theoretically, determine what did or didn't come out of their models and was integrated into source trees.

Regarding #2 and #3, with relatively novel software like QEMU that models platforms that other open source software doesn't, LLMs might not be a good fit for contributions. Especially where emulation and hardware accuracy, timing, quirks, errata etc matter.

For example, modeling a new architecture or emulating new hardware might have LLMs generating convincing looking nonsense. Similarly, integrating them with newly added and changing APIs like in kvm might be a poor choice for LLM use.

gadders · 1h ago
I am guessing they don't need people to prove that contributions didn't contain AI code; they just need the contributor to say they didn't use any AI code. That way, if any AI code is found in their contribution, the liability lies with the contributor (but IANAL).
graemep · 1h ago
AFAIK in most places it might help with the amount of damages, but does not let you off the hook.
safety1st · 4h ago
It seems to me that the point in your first paragraph argues against your points #2 and #3.

If a project allows AI generated contributions, there's a risk that they'll be flooded with low quality contributions that consume human time and resources to review, thus paralyzing the project - it'd be like if you tried to read and reply to every spam email you receive.

So the argument goes that #2 and #3 will not materialize, blanket acceptance of AI contributions will not help projects become more competitive, it will actually slow them down.

Personally I happen to believe that reality will converge somewhere in the middle, you can have a policy which says among other things "be measured in your usage of AI," you can put the emphasis on having contributors do other things like pass unit tests, and if someone gets spammy you can ban them. So I don't think AI is going to paralyze projects but I also think its role in effective software development is a bit narrower than a lot of people currently believe...

alganet · 7h ago
Quoting them:

> The policy we set now must be for today, and be open to revision. It's best to start strict and safe, then relax.

So, no need for the drama.

devmor · 3h ago
None of your claims here are based in factual assertion. These are unproven, wishful fantasies that may or may not be eventually true.

No one should be evaluating or writing policy based on fantasy.

brabel · 2h ago
Are you familiar with the futures market? It's all about what you call fantasy! Similarly, if you are determining the strategy of your organization, all you have to help you is "fantasy". By the time evidence exists in sufficient quantity, your lunch has already been eaten long ago. A good CEO is one that can see where the market is going before anyone else. You may be right that AI is just a fad, but given how much the big companies and all the major startups of the last few years are investing in it, it's overwhelmingly a fringe position to have at this point.
XorNot · 6h ago
A reasonable conclusion about this would simply be that the developers are saying "we're not merging anything which you can't explain".

Which is entirely reasonable. The trend of people, say on HN, saying "I asked an LLM and this is what it said..." is infuriating.

It's just an upfront declaration that if your answer to something is "it's what Claude thinks" then it's not getting merged.

Filligree · 6h ago
That’s not what the policy says, however. You could be the world’s most honest person, using Claude only to generate code you described to it in detail and fully understand, and would still be forbidden.
Eisenstein · 8h ago
If AI can so easily generate software that performs the expected functions, why do we even need to know that it did so? Isn't the future really just asking an AI for a result and getting that result? The AI would be writing all sorts of bespoke code to do the thing we ask, and then discarding it immediately after. That is what seems more likely, and not 'so much software we have to figure out rights to'.
rapind · 7h ago
> If a project successfully excludes AI contributions (not clear how other than controlling contributions to a tight group of anti-AI fanatics), it's just going to be cloned, and the clones will leave it in the dust.

Yeah I don’t think so. But if it does then who cares? AI can just make a better QEMU at that point I guess.

They aren’t hurting anyone with this stance (except the AI hype lords), which I’m pretty sure isn’t actually an anti-AI stance, but a pragmatic response to AI slop in its current state.

otabdeveloper4 · 3h ago
> Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects.

There is zero evidence so far that AI improves software developer efficiency.

No, just because you had fun vibing with a chatbot doesn't mean you delivered the end product faster. All of the supposed AI software development gains are entirely self-reported based on "vibes". (Remember these are the same people who claimed massive developer efficiency gains from programming in Haskell or Lisp a few years back.)

Note I'm not even touching on the tech debt issue here, but it is also important.

P.S. The hallucination and counting to five problems will never go away. They are intrinsic to the LLM approach.

koolala · 4h ago
This is a win for the MIT license though.
graemep · 57m ago
From what point of view?

For someone using MIT-licensed code for training, it still requires a copy of the license and the copyright notice in "copies or substantial portions of the software". So I guess it's fine for a snippet, but if the AI reproduces too much of it, then it's in breach.

From the point of view of someone who does not want their code used by an LLM then using GPL code is more likely to be a breach.

zer00eyz · 8h ago
raincole · 6h ago
It's sailed, but in the other direction: https://www.bbc.com/news/articles/cg5vjqdm1ypo
fc417fc802 · 5h ago
That's a brand new ongoing lawsuit. The ship hasn't sailed in either direction yet. It hasn't even been clearly established if Midjourney has liability let alone where the bounds for such liability might lie.

Remember, anyone can attempt to sue anyone for anything at any time in a functional system. How far the suit makes it is a different matter.

zer00eyz · 4h ago
https://www.wired.com/story/ai-art-copyright-matthew-allen/

https://www.cnbc.com/2025/03/19/ai-art-cannot-be-copyrighted...

Here are cases where the products of AI/ML are not the products of people and are not capable of being copyrighted. These are about the OUTPUT being unable to be copyrighted.

jssjsnj · 6h ago
QEMU: Define policy forbidding use of AI code generators
deadbabe · 8h ago
If a software is truly wide open source in the sense of “do whatever the fuck you want with this code, we don’t care”, then it has nothing to fear from AI.
candiddevmike · 8h ago
That won't apply to closed-source, non-public code, which the GPL (which QEMU uses) is quite good at ensuring becomes open source...
kgwxd · 8h ago
Can't release someone else's proprietary source under a "do whatever the fuck you want" license and actually do whatever the fuck you want, without getting sued.
rzzzt · 2h ago
The license does exist so you can release your own software under it, however: https://en.wikipedia.org/wiki/WTFPL
TeMPOraL · 2h ago
Only more reason for OSS to embrace AI generation - once it leaks into enough widely used or critical (think cURL) dependencies and exceeds certain critical mass, any judgement on the IP aspects other than "public domain" (in the broader sense) will become infeasible, as enforcing a different judgement would be like doing open heart surgery on the global economy.
iechoz6H · 2h ago
You can do that but the fact you don't get sued is more luck than judgement.
deadbabe · 8h ago
It’d be like trying to squeeze blood from a stone
CursedSilicon · 6h ago
It's incredible watching someone who has no idea what they're talking about boast so confidently about what people "can" or "can't" do
clipsy · 6h ago
It'd be like trying to squeeze blood from every single entity using the offending code, actually.
behringer · 4h ago
Open source is about sharing the source code. You generally need to force companies to share their source code derived from your project, or else companies will simply take it, modify it, and never release their changes, and charge for it too.
TeMPOraL · 2h ago
Sharing is caring, being forced to share does not foster care.

Companies don't care, so if you release something as open source that's relevant to them, "companies will simply take it, modify it, and never release their changes, and charge for it too" - but that is what companies do, that is their very nature, and you knew that when you first opened the source.

You also knew that when you picked a license, and it's a major reason for the particular choice you made. Want to force companies to share? Pick GPL.

If you decide to yoke a dragon, and it instead snatches your shiny lure and flies away to its cave, you don't get to complain that the dragon isn't playing nice and doesn't want to become your beast of burden. If you picked MIT as your license, that's on you.

bgwalter · 11m ago
It is interesting to read the pro-AI rant in the comments on the linked commit. The person who is threatening to use "AI" anyway has almost no contributions either in qemu or on GitHub in general.

This is the target group for code generators. All talk but no projects.

JonChesterfield · 9h ago
Interesting. Harder line than the LLVM one found at https://llvm.org/docs/DeveloperPolicy.html#ai-generated-cont...

I'm very much the old man shouting at clouds about this stuff. I don't want to review code the author doesn't understand and I don't want to merge code neither of us understands.

compton93 · 8h ago
> I don't want to review code the author doesn't understand

This really bothers me. I've had people ask me to do some task except they get AI to provide instructions on how to do the task and send me the instructions, rather than saying "Hey can you please do X". It's insulting.

andy99 · 8h ago
Had someone higher up ask about something in my area of expertise. I said I didn't think it was possible; he followed up with a ChatGPT conversation he had where it "gave him some ideas that we could use as an approach", as if that was some useful insight.

These are the same people who think that "learning to code" is a translation issue they don't have time for, as opposed to experience they don't have.

a4isms · 8h ago
> These are the same people who think that "learning to code" is a translation issue they don't have time for, as opposed to experience they don't have.

This is very, very germane and a very quotable line. And these people have been around from long before LLMs appeared. These are the people who dash off an incomplete idea on Friday afternoon and expect to see a finished product in production by next Tuesday, latest. They have no self-awareness of how much context and disambiguation is needed to go from "idea in my head" to working, deterministic software that drives something like a process change in a business.

1dom · 45m ago
The unfortunate truth is that approach does work, sometimes. It's really easy and common for capable engineers to think their way out of doing something because of all the different things they can think about it.

Sometimes, an unreasonable dumbass whose only authority comes from corporate hierarchy is needed to mandate that the engineers start chipping away at the tasks. If they weren't a dumbass, they'd know how unreasonable the thing they're mandating is, and if they weren't unreasonable, they wouldn't mandate that someone does it.

I am an an engineer. "Sometimes" could be swapped for "rarely" above, but the point still stands: as much frustration as I have towards those people, they do occasionally lead to the impossible being delivered. But then again, a stopped clock -> twice a day etc.

bobjordan · 7h ago
You can change "software" to "hardware" and this is still an all too common viewpoint, even for engineers that should know better.
candiddevmike · 8h ago
Imagine a boring dystopia where everyone is given hallucinated tasks from LLMs that may in some crazy way be feasible but aren't, and you can't argue that they're impossible without being fired since leadership lacks critical thinking.
tines · 8h ago
Reminds me of the wonderful skit, The Expert: https://www.youtube.com/watch?v=BKorP55Aqvg
stirfish · 7h ago
dotancohen · 6h ago
That is incredibly accurate - I used to be at meetings like that monthly. Please submit this as an HN discussion.
turol · 2h ago
That is a very good description of the Paranoia RPG.
whoisthemachine · 7h ago
Unfortunately this is the most likely outcome.
joshstrange · 5h ago
I’ve started to experience/see this and it makes me want to scream.

You can’t dismiss it out of hand (especially with it coming from up the chain) but it takes no time at all to generate by someone who knows nothing about the problem space (or worse, just enough to be dangerous) and it could take hours or more to debunk/disprove the suggestion.

I don’t know what to call this. Cognitive DDoS? Amplified Plausibility Attack? There should be a name for it, and it should be ridiculed.

whatevertrevor · 52m ago
It's simply the Bullshit Asymmetry Principle/Brandolini's Law. It's just that bullshit generation speedrunners have recently discovered tool-assists.
alluro2 · 8h ago
A friend experienced a similar thing at work - he gave a well-informed assessment of why something is difficult to implement and it would take a couple of weeks, based on the knowledge of the system and experience with it - only for the manager to reply within 5 min with a screenshot of an (even surprisingly) idiotic ChatGPT reply, and a message along the lines of "here's how you can do it, I guess by the end of the day".

I know several people like this, and it seems they feel like they have god powers now - and that they alone can communicate with "the AI" in this way that is simply unreachable by the rest of the peasants.

OptionOfT · 7h ago
Same here. You throw a question in a channel. Someone responds in 1 minute with a code example that either you had laying around, or would take > 5 minutes to write.

The code example was AI generated. I couldn't find a single line of code anywhere in any codebase. 0 examples on GitHub.

And of course it didn't work.

But it sent me on a wild goose chase because I trusted this person to give me a valuable insight. It pisses me off so much.

AdieuToLogic · 6h ago
> I know several people like this, and it seems they feel like they have god powers now - and that they alone can communicate with "the AI" in this way that is simply unreachable by the rest of the peasants.

A far too common trap people fall into is the fallacy of "your job is easy as all you have to do is <insert trivialization here>, but my job is hard because ..."

Statistically generated text (token) responses constructed by LLMs to simplistic queries are an accelerant to the self-aggrandizing problem.

colechristensen · 7h ago
People keep asking me if AI is going to take my job and recent experience shows that it very much is not. AI is great for being mostly correct and then giving someone without enough context a mostly correct way to shoot themselves in the foot.

AI further encourages the problem in DevOps/Systems Engineering/SRE where someone comes to you and says "hey can you do this for me", having come up with the solution, instead of giving you the problem: "hey can you help me accomplish this"... AI gives them solutions, which are more steps away from being untangled into what really needs to be done.

AI has knowledge, but it doesn't have taste. Especially when it doesn't have all of the context a person with experience has, it just has bad taste in solutions, or the absence of taste, but with the additional problem that it makes it much easier for people to do things.

Permissions on what people have access to read and change are now going to have to be more restricted, because not only are we dealing with folks who have limited experience with permissions, now we have them empowered by AI to do more things which are less advisable.

alganet · 7h ago
In corporate, you are _forced_ to trust your coworker somehow and swallow it. Especially higher-ups.

In free software though, these kinds of nonsense suggestions always happened, way before AI. Just look at any project mailing list.

It is expected that any new suggestion will encounter some resistance; the new contributor themselves should be aware of that. For serious projects specifically, the levels of skepticism are usually way higher than in corporations, and that's healthy and desirable.

petesergeant · 5h ago
> Had someone higher up ask about something in my area of expertise. I said I didn't think it was possible; he followed up with a ChatGPT conversation he had where it "gave him some ideas that we could use as an approach", as if that was some useful insight.

I would find it very insulting if someone did this to me, for sure, as well as a huge waste of my time.

On the other hand I've also worked with some very intransigent developers who've actively fought against things they simply didn't want to do on flimsy technical grounds, knowing it couldn't be properly challenged by the requester.

On yet another hand, I've also been subordinate to people with a small amount of technical knowledge -- or a small amount of knowledge about a specific problem -- who'll do the exact same thing without ChatGPT: fire a bunch of mid-wit ideas downstream that you have already thought about, but you then need to spend a bunch of time explaining why their hot-takes aren't good. Or the CEO of a small digital agency I worked at circa 2004 asking us if we'd ever considered using CSS for our projects (which were of course CSS heavy).

nijave · 6h ago
Especially when you try to correct them and they insist the AI is correct

Sometimes it's fun reverse engineering the directions back into various forum, Stack Overflow, and documentation fragments and pointing out how AI assembled similar things into something incorrect

halostatue · 8h ago
I have just started adding DCO to _all_ of the open source code that I maintain and will be adding text like this to `CONTRIBUTING.md`:

---

LLM-Generated Contribution Policy

Color is a library full of complex math and subtle decisions (some of them possibly even wrong). It is extremely important that any issues or pull requests be well understood by the submitter and that, especially for pull requests, the developer can attest to the Developer Certificate of Origin for each pull request (see LICENCE).

If LLM assistance is used in writing pull requests, this must be documented in the commit message and pull request. If there is evidence of LLM assistance without such declaration, the pull request will be declined.

Any contribution (bug, feature request, or pull request) that uses unreviewed LLM output will be rejected.

---

I am also adding this to my `SECURITY.md` entries:

---

LLM-Generated Security Report Policy

Absolutely no security reports will be accepted that have been generated by LLM agents.

---

As it's mostly just me, I'm trying to strike a balance, but my preference is against LLM generated contributions.

phire · 7h ago
I do use GitHub copilot on my personal projects.

But I refuse to use it as anything more than a fancy autocomplete. If it suggests code that's pretty close to what I was about to type anyway, I accept it.

This ensures that I still understand my code, that there shouldn't be any hallucination derived bugs, [1] and there really shouldn't be any questions about copyright if I was about to type it.

I find using copilot this way speeds me up. Not really because my typing is slow, it's more that I have a habit of getting bored and distracted while typing. Copilot helps me get to the next thinking/debugging part sooner.

My brain really can't comprehend the idea that anyone would not want to understand their own code. Especially if they are going to submit it as a PR.

And I'm a little annoyed that the existence of such people is resulting in policies that will stop me from using LLMs as autocomplete when submitting to open source projects.

I have tried using copilot in other ways. I'd love for it to be able to do menial refactoring tasks for me. But every time I experiment, it seems to fall off the rails so fast. Or it just ends up slower than what I could do manually, because it has to re-generate all my code instead of just editing it.

[1] Though I find it really interesting that if I'm in the middle of typing a bug, copilot is very happy to autocomplete it in its buggy form. Even when the bug is obvious from local context, like I've typoed a variable name.

dawnerd · 3h ago
That’s how I use it too. I’ve tried to make agent mode work, but it ends up taking just as long, if not longer, than just making the edits myself. And unless you're very narrowly specific, models like Sonnet will go off track, making changes you never asked for. At least GPT-4.1 is pretty lazy, I guess.
jitl · 8h ago
When I use an LLM for coding tasks, it's like "hey please translate this YAML to structs and extract any repeated patterns to re-used variables". It's possible to do this transform with deterministic tools, but AI will do a fine job in 30s, and it's trivial to test that the new output is identical to the prompt input.
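
That check can be as simple as comparing parsed data rather than text; a minimal sketch, assuming the refactored definitions get rendered back out to YAML and using hypothetical file names:

```python
# Compare the original YAML with the YAML regenerated from the refactored
# structs/variables; comparing parsed data ignores formatting differences.
import yaml  # PyYAML

with open("original.yaml") as f:
    before = yaml.safe_load(f)
with open("regenerated.yaml") as f:
    after = yaml.safe_load(f)

assert before == after, "refactored config no longer produces the same data"
```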

My high-level work is absolutely impossible to delegate to AI, but AI really helps with tedious or low-stakes incidental tasks. The other day I asked Claude Code to wire up some graphs and outlier analysis for some database benchmark result CSVs. Something conceptually easy, but takes a fair bit of time to figure out libraries and get everything hooked up unless you're already an expert at csv processing.

mattmanser · 2h ago
In my experience, AI will not do a fine job of things like this.

If the definition is past any sort of length, it will hallucinate new properties, change the names, etc. It also has a propensity to start skipping bits of the definitions by adding in comments like "/** more like this here **/"

It may work for you for small YAML files, but beware doing this for larger ones.

Worst part about all that is that it looks right to begin with because the start of the definitions will be correct, but there will be mistakes and stuff missing.

I've got a PoC hanging around where I did something similar by throwing an OpenAPI spec at an AI and telling it to generate some typescript classes because I was being lazy and couldn't be bothered to run it through a formal tool.

Took me a while to notice a lot of the definitions had subtle bugs, properties were missing and it had made a bunch of stuff up.

danielbln · 33m ago
What does "AI" mean? GPT3.5 on a website, or Claude 4 Opus plugged into function calling and a harness of LSP, type checker and tool use? These are not the same, neither in terms of output quality nor in capability space. We need to be more specific about the tools we use when we discuss them. "IDEs are slow to load" wouldn't be a useful statement either.
mistrial9 · 7h ago
oh agree and amplify this -- graphs are worlds unto themselves. some of the high end published research papers have astounding contents, for example..
linsomniac · 4h ago
> I don't want to review code the author doesn't understand

I get that. But the AI tooling, when guided by a competent human, can generate some pretty competent code, and a lot of it can be driven entirely through natural language instructions. And every few months, the tooling is getting significantly more capable.

I'm contemplating what exactly it means to "understand" the code though. In the case of one project I'm working on, it's an (almost) entirely vibe-coded new storage backend to an existing VM orchestration system. I don't know the existing code base. I don't really have the time to have implemented it by hand (or I would have done it a couple years ago).

But, I've set up a test cluster and am running a variety of testing scenarios on the new storage backend. So I understand it from a high level design, and from the testing of it.

As an open source maintainer myself, I can imagine (thankfully I haven't been hit with it myself) how frustrating getting all sorts of low quality LLM "slop" submissions could be. I also understand that I'm going to have to review the code coming in whether or not the author of the submission understands it.

So how, as developers, do we leverage these tools as appropriate, and signal to other developers the level of quality in code. As someone who spent months tracking down subtle bugs in early Linux ZFS ports, I deeply understand that significant testing can trump human authorship and review of every line of code. ;-)

hsbauauvhabzb · 7h ago
You’re the exact kind of person I want to work with. Self reflective and in opposition of lazy behaviours.
rodgerd · 7h ago
This to me is interesting when it comes to free software projects; sure there are a lot of people contributing as their day job. But if you contribute or manage a project for the pleasure of it, things which undermine your enjoyment - cleaning up AI slop - are absolutely a thing to say "fuck off" over.
dheera · 8h ago
> I don't want to review code the author doesn't understand

The author is me and my silicon buddy. We understand this stuff.

recursive · 7h ago
Of course we understand it. Just ask us!
acedTrex · 8h ago
Oh hey, the thing I predicted in my blog titled "yes i will judge you for using AI" happened lol

Basically I think open source has traditionally HEAVILY relied on hidden competency markers to judge the quality of incoming contributions. LLMs turn that entire concept on its head by presenting code that has the markers of competence but none of the backing experience. It is a very very jarring experience for experienced individuals.

I suspect that virtual or in person meetings and other forms of social proof independent of the actual PR will become far more crucial for making inroads in large projects in the future.

SchemaLoad · 8h ago
I've started seeing this at work with coworkers using LLMs to generate code reviews. They submit comments which are way above their skill level, which almost trick you into thinking they are correct, since only a very skilled developer would make these suggestions. And then ultimately you end up wasting tons of time proving how these suggestions are wrong. Spending far more time than the person pasting the suggestions spent to generate them.
Groxx · 6h ago
By far the largest review-effort PRs of my career have been in the past year, due to mid-sized LLM-built features. Multiple rounds of other signoffs saying "lgtm" with only minor style comments, only for me to finally read it and see that no, it is not even remotely acceptable, and we have several uses built by the same team that would fail immediately if it were merged, to say nothing of the thousands of other users that might also be affected. Stuff the reviewers have experience with and didn't think about, because they got stuck in the "looks plausible" rut rather than "is correct".

So it goes back for changes. It returns the next day with complete rewrites of large chunks. More "lgtm" from others. More incredibly obvious flaws, race conditions, the works.

And then round three repeats mistakes that came up in round one, because LLMs don't learn.

This is not a future style of work that I look forward to participating in.

tobyhinloopen · 2h ago
I think a future with LLM coding requires many more tests, covering both happy and bad flows.
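
For example, covering both flows for a small helper might look like this (a toy sketch with a hypothetical parse_port function, nothing from QEMU):

```python
# Happy-flow and bad-flow tests for a hypothetical input-validation helper.
import pytest

def parse_port(value: str) -> int:
    port = int(value)  # bad flow: raises ValueError on non-numeric input
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

def test_happy_flow():
    assert parse_port("8080") == 8080

def test_bad_flow_rejects_garbage():
    with pytest.raises(ValueError):
        parse_port("not-a-port")

def test_bad_flow_rejects_out_of_range():
    with pytest.raises(ValueError):
        parse_port("99999")
```
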
danielbln · 28m ago
It also needs proper guideline enforcement. If an engineer produces poorly tested and unreviewed code, then the buck stops with them. This is a human problem more than it is a tool problem.
diabllicseagull · 7h ago
Funny enough, I had coworkers who similarly had a hold of the jargon but without any substance. They would always turn out to be time sinks for the others doing the useful work. AI imitating that type of drag on the workplace is kinda funny, ngl.
heisenbit · 2h ago
Probabilistic patterns strung together are something different from an end-to-end, intention-driven, solidly linked chain of thought that is grounded with pylons in relevant context at critical points.
beej71 · 6h ago
I'm not really in the field any longer, but one of my favorite things to do with LLMs is ask for code reviews. I usually end up learning something new. And a good 30-50% of the suggestions are useful. Which actually isn't skillful enough to give it a title of "code reviewer", so I certainly wouldn't foist the suggestions on someone else.
acedTrex · 7h ago
Yep, 100%, it is something I have also observed. Frankly it has been frustrating to the point that I spun up a quick one-off HTML site to rant/get my thoughts out. https://jaysthoughts.com/aithoughts1
whatevertrevor · 38m ago
Just some feedback: your site is hard to read on mobile devices because of the sidebar.
mrheosuper · 5h ago
People keep telling us LLMs will improve efficiency, but your comment has proved otherwise.

It looks like LLMs are not good for cooperation, because the nature of an LLM is randomness.

stevage · 2h ago
> Basically I think open source has traditionally HEAVILY relied on hidden competency markers to judge the quality of incoming contributions.

Yep, and it's not just code. Student essays, funding applications, internal reports, fiction, art...everything that AI touches has this problem that AI outputs look superficially similar to the work of experts.

whatevertrevor · 31m ago
I have learned over time that the actually smart people worth listening to avoid jargon beyond what is strictly necessary, and talk in simple terms with specific goals/improvements/changes in mind.

If I'm having to reread something over and over to understand what they're even trying to accomplish, odds are it's either AI generated or an attempt at sounding smart instead of being constructive.

danielbln · 30m ago
The trajectory so far has been that AI outputs are increasingly converging with expert output, not just in superficial similarity but also in quality. We are obviously not there yet, and some might say we never will be. But if we do get there, there is a whole new conversation to be had.
itsmekali321 · 6h ago
send your blog link please
acedTrex · 4h ago
https://jaysthoughts.com/aithoughts1 Bit of a rambly rant, but the prediction stuff I was tongue in cheek referring to above is at the bottom.
mattmanser · 1h ago
Looks like your blog post got submitted here and then I assume triggered the flame war flag. A lot of people just reading the title and knee jerking in the comments:

https://news.ycombinator.com/item?id=44384610

Funny, as the entire thing starts off with "Now, full disclosure, the title is a bit tongue-in-cheek.".

ants_everywhere · 8h ago
This is signed off primarily by Red Hat, and they tend to be pretty serious/corporate.

I suspect their concern is not so much whether users own the copyright to AI output, but rather the risk that AI will spit out code from its training set that belongs to another project.

Most hypervisors are closed source and some are developed by litigious companies.

blibble · 8h ago
> but rather the risk that AI will spit out code from its training set that belongs to another project.

this is everything that it spits out

ants_everywhere · 6h ago
This is an uninformed take
Groxx · 6h ago
It is a legally untested take
otabdeveloper4 · 3h ago
No, this is an uninformed take.
duskwuff · 8h ago
I'd also worry that a language model is much more likely to introduce subtle logical errors, potentially ones which violate the hypervisor's security boundaries - and a user relying heavily on that model to write code for them will be much less prepared to detect those errors.
ants_everywhere · 6h ago
Generally speaking AI will make it easier to write more secure code. Tooling and automation help a lot with security and AI makes it easier to write good tooling.

I would wager good money that in a few years the most security-focused companies will be relying heavily on AI somewhere in their software supply chain.

So I don't think this policy is about security posture. No doubt human experts are reviewing the security-relevant patches anyway.

tho23i4234324 · 4h ago
I'd doubt this very much - LLMs hallucinate API calls and commit all sorts of subtle errors that you need to catch (esp. if you're on proprietary problems which it's not trained on).

It's a good replacement for Google, but probably nothing close to what it's being hyped up to be by the capital allocators.

ludicrousdispla · 6m ago
>> The tools will mature, and we can expect some to become safely usable in free software projects.

It should be possible to build a useful AI code generator for a given programming language solely from the source code for the language itself. Doing so however would require some maturity.

Havoc · 9h ago
I wonder whether the motivation is really legal? I get the sense that some projects are just sick of reviewing crap AI submissions
esjeon · 8h ago
Possibly, but QEMU is such a critical piece of software in our industry. Its application stretches from one end to the other - desktop VM, cloud/remote instance, build server, security sandbox, cross-platform environment, etc. Even a small legal risk can hurt the industry pretty badly.
bobmcnamara · 8h ago
Have you seen how Monsanto enforces their seed rights?
gerdesj · 8h ago
The policy is concise and well bounded. It seems to me to assert that you cannot safely assign attribution of authorship of software code that you think was generated algorithmically.

I use the term algorithmic because I think it is stronger than "AI lol". I note they use terms like AI code generator in the policy, which might be just as strong but looks to me unlikely to become a useful legal term (it's hardly "a man on the Clapham omnibus").

They finish with this, rather reasonable flourish:

"The policy we set now must be for today, and be open to revision. It's best to start strict and safe, then relax."

No doubt they do get a load of slop, but they seem to want to close the legal angles down first, and attribution seems a fair place to start. This playbook looks way better than curl's.

SchemaLoad · 9h ago
This could honestly break open source, with how quickly you can generate bullshit, and how long it takes to review and reject it. I can imagine more projects going the way of Android where you can download the source, but realistically you can't contribute as a random outsider.
b00ty4breakfast · 8h ago
I have an online acquaintance that maintains a very small and not widely used open-source project and the amount of (what we assume to be) automated AI submissions* they have to wade through is kinda wild given the very small number of contributors and users the thing has. It's gotta be clogging up these big projects like a DDoS attack.

*"Automated" as in bots and "AI submissions" as in ai-generated code

hollerith · 9h ago
I've always thought that the possibility of forking the project is the main benefit to open-source licensing, and we know Android can be forked.
ants_everywhere · 9h ago
the primary benefit of open source is freedom
javawizard · 8h ago
This is so tautological that I can't really tell what point you're trying to make.
ants_everywhere · 8h ago
how can it possibly be tautological? The comment just above me said something entirely different: that the primary benefit of open source is forking
zahlman · 7h ago
For many projects you realistically can't contribute as a random outsider anyway, simply because of the effort involved in grokking enough of the existing architecture to figure out where to make changes.
graemep · 55m ago
I think it is yet another reason (potentially malicious contributors are another) that open source projects are going to have to verify contributors.
api · 9h ago
Quality contributions to OSS are rare unless the project is huge.
loeg · 9h ago
Historically the opposite of quality contributions has been no contributions, not net-negative contributions (random slop that costs more in review than it provides benefit).
lmm · 8h ago
No it hasn't? Net-negative contributions to open source have been extremely common for years, it's not like you need an LLM to make them.
loeg · 7h ago
I guess we've had very different experiences!
disconcision · 9h ago
i mean they say the policy is open for revision and it's also possible to make exceptions; if it's an excuse, they are going out of their way to let people down easy
Lerc · 8h ago
I'm not sure which way AI would move the dial when it comes to the median submission. Humans can, and do, make some crap code.

If the problem is too many submissions, that would suggest there needs to be structures in place to manage that.

Perhaps projects receiving large quantities of updates need triage teams. I suspect most of the submissions are done in good faith.

I can see some people choosing to avoid AI due to the possibility of legal issues. I'm doubtful of the likelihood of such problems, but some people favour eliminating all possibility over minimizing likelihood. The philosopher in me feels like people who think they have eliminated the possibility of something just haven't thought about it enough.

ehnto · 8h ago
Barrier to entry and automated submissions are two aspects I see changing with AI. Previously you at least had to be able to code before you could submit bad code.

With AI you're going to get job hunters automating PRs for big name projects so they can stick the contributions in their resume.

catlifeonmars · 8h ago
> If the problem is too many submissions, that would suggest there needs to be structures in place to manage that.

> Perhaps projects receiving large quantities of updates need triage teams. I suspect most of the submissions are done in good faith.

This ignores the fact that many open source projects do not have the resources to dedicate to a large number of contributions. A side effect of LLM generated code is probably going to be a lot of code. I think this is going to be an issue that is not dependent on the overall quality of the code.

Lerc · 4h ago
I thought that this could be an opportunity for volunteers who can't dedicate the time to learn a codebase thoroughly enough to be a regular committer. They just have to evaluate a patch to see if it meets a threshold of quality where they can pass it on to someone who does know the codebase well.

The barrier to making a first commit on any project is usually quite high; there are plenty of people who would like to contribute to projects but cannot dedicate the time and effort to pass that initial threshold. This might allow people to contribute at a lower level while gently introducing them to the codebase, where perhaps they might become a regular contributor in the future.

hughw · 8h ago
I'd hope there could be some distinction between using LLM as a super autocomplete in your IDE, vs giving it high-level guidelines and making it generate substantive code. It's a gray area, sure, but if I made a contribution I'd want to be able to use the labor-saving feature of Copilot, say, without danger of it copying an algorithm from open source code. For example, today I generated a series of case statements and Copilot detected the pattern and saved me tons of typing.
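To make the distinction concrete, here is a hedged, made-up illustration (in C, not from any real codebase) of the kind of repetitive case-statement table I mean: after the first entry or two, the completion can fill in the rest of the pattern.

    /* Hypothetical example: a lookup table an autocomplete is good at
     * continuing once it has seen the first case or two.  The register
     * names are invented for illustration. */
    static const char *reg_name(int reg)
    {
        switch (reg) {
        case 0:  return "r0";
        case 1:  return "r1";
        case 2:  return "r2";
        case 3:  return "r3";
        case 4:  return "r4";
        /* ...the remaining cases follow the same shape... */
        default: return "unknown";
        }
    }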
dheera · 8h ago
That and also just AI glasses that become an extension of my mind and body, just giving me clues and guidance on everything I do including what's on my screen.

I see those glasses as becoming just a part of me, just like my current dumb glasses are a part of me that enables me to see better, the smart glasses will help me to see AND think better.

My brain was trained on a lot of proprietary code as well, the copyright issues around AI models are pointless western NIMBY thinking and will lead to the downfall of western civilization if they keep pursuing legal what-ifs as an excuse to reject awesome technology.

Aeolun · 7h ago
This seems absolutely impossible to enforce. All my editors give me AI assisted code hints. Zed, cursor, VS code. All of them now show me autocomplete that comes from an LLM. There's absolutely no distinction between that code, and code that I've typed out myself.

It's like complaining that I may have no legal right to submit my stick figure because I potentially copied it from the drawing of another stick figure.

I'm firmly convinced that these policies are only written to have plausible deniability when stuff with generated code gets inevitably submitted anyway. There's no way the people that write these things aren't aware they're completely unenforceable.

luispauloml · 7h ago
> I'm firmly convinced that these policies are only written to have plausible deniability when stuff with generated code gets inevitably submitted anyway.

Of course it is. And nobody said otherwise, because that is explicitly stated in the commit message:

    [...] More broadly there is,
    as yet, no broad consensus on the licensing implications of code
    generators trained on inputs under a wide variety of licenses
And in the patch itself:

    [...] With AI
    content generators, the copyright and license status of the output is
    ill-defined with no generally accepted, settled legal foundation.
What other commenters pointed out is that, beyond the legal issue, other problems also arise from the use of AI-generated code.
teeray · 6h ago
It’s like the seemingly confusing “nothing to declare” gates you walk through at customs even though you’ve already made your declarations. Walking through that gate is a conscious act that places culpability on you, so you can’t simply say “oh, I forgot” or something.

The thinking here is probably similar: if AI-generated code becomes poisonous and is detected in a project, the DCO could allow shedding liability onto the contributor that said it wasn’t AI-generated.

Filligree · 6h ago
> Of course it is. And nobody said otherwise, because that is explicitly stated on the commit message

Don’t be ridiculous. The majority of people are in fact honest, and won’t submit such code; the major effect of the policy is to prevent those contributions.

Then you get plausible deniability for code submitted by villains, sure, but I’d like to hope that’s rare.

raincole · 6h ago
I think most people don't make money by submitting code to QEMU, so there isn't that much incentive to cheat.
shmerl · 7h ago
Neovim doesn't force you to use AI, unless you configure it yourself. If your editor doesn't allow you to switch it off, there must be a big problem with it.
wyldfire · 9h ago
I understand where this comes from but I think it's a mistake. I agree it would be nice if there were "well settled law" regarding AI and copyright, but there are relatively few rulings and next to zero legislation on which to base their position.

In addition to a policy to reject contributions from AI, I think it may make sense to point out places where AI generated content can be used. For example - how much of the QEMU project's (copious) CI setup is really critical content to protect? What about ever-more interesting test cases or environments that could be enabled? Something like "contribute those things here instead, and make judicious use of AI there, with these kinds of guard rails..."

dclowd9901 · 9h ago
What's the risk of not doing this? Better code but slower velocity for an open source project?

I think that particular brand of risk makes sense for this particular project, and the authors don't seem particularly negative toward GenAI as a concept, just going through a "one way door" with it.

mrheosuper · 5h ago
>Better code but slower velocity for an open source project

Better code and "AI assist coding" are not exclusive of each other.

dijksterhuis · 8h ago
A simpler solution is just to wait until the legal situation is clearer.

QEMU is (mostly) GPL 2.0 licensed, meaning (most) code contributions need to be GPL 2.0 compatible [0]. Let's say, hypothetically, there's a code contribution added by some patch involving gen AI code which is derived/memorised/copied from non-GPL compatible code [1]. Then, hypothetically, a legal case sets precedent that gen AI FOSS code must re-apply the license of the original derived/memorised/copied code. QEMU maintainers would probably need to roll back all those incompatible code contributions. After some time, those code contributions could have ended up with downstream callers which also need to be rewritten (even in CI code).

It might be possible to first say "only CI code which is clearly labelled as 'DO NOT RE-USE: AI' or some such". But the maintainers would still need to go through and rewrite those parts of the CI code if this hypothetical plays out. Plus it adds extra work to reviews and merge processes etc.

it's just less work and less drama for everyone involved to say "no thank you (for now)".

----

caveat: IANAL, and licensing is not my specific expertise (but i would quite like it to be one day)

[0]: https://github.com/qemu/qemu/blob/master/LICENSE

[1]: e.g. No license / MPL / Apache / Artistic / Creative Commons https://www.gnu.org/licenses/license-list.html#NonFreeSoftwa...

pavon · 8h ago
This isn't like some other legal questions that go decades before being answered in court. There are dozens of cases working through the courts today that will shed light on some aspects of the copyright questions within a few years. QEMU has made great progress over the last 22 years without the aid of AI, waiting a few more years isn't going to hurt them.
hinterlands · 7h ago
I think you need to read between the lines here. Anything you do is a legal risk, but this particular risk seems acceptable to many of the world's largest and richest companies. QEMU isn't special, so if they're taking this position, it's most likely simply because they don't want to deal with LLM-generated code for some other reason, and are eager to use legal risk as a cover to avoid endless arguments on mailing lists.

We do that in corporate environments too. "I don't like this" -> "let me see what lawyers say" -> "a-ha, you can't do it because legal says it's a risk".

kazinator · 9h ago
There is a well settled practice in computing that you just don't plagiarize code. Even a small snippet. Even if copyright law would consider such a small thing "fair use".
bfLives · 7h ago
> There is a well settled practice in computing that you just don't plagiarize code. Even a small snippet.

I think the way many developers use StackOverflow suggests otherwise.

kazinator · 7h ago
In the first place, in order to post to StackOverflow, you are required to have the copyright over the code, and be able to grant them a perpetual license.

They redistribute the material under the CC BY-SA 4.0 license. https://creativecommons.org/licenses/by-sa/4.0/

This allows visitors to use the material, with attribution. One can, of course, use the ideas in a SO answer to develop one's own solution.

graemep · 46m ago
> you are required to have the copyright over the code, and be able to grant them a perpetual license.

Which Stack Overflow cannot verify. It might be pulled from a code base, or generated by AI (I would bet a lot is now).

behringer · 4h ago
Show me the professional code base with the attribution to stack overflow and I'll eat my hat.
_flux · 1h ago
Obviously I cannot show the code base, but when I pick a pre-existing solution from Stack Overflow or elsewhere (though it is quite rare), I do add a comment linking to the source: after all, in the case of SO the discussion there might be interesting for future maintainers of the function.

I just checked, though, and the code base I'm now working with has eight Stack Overflow links. Not all of them were even added by me, according to a quick check with git blame and git log -S.
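For what it's worth, a sketch of what such an attribution comment can look like (the function is hypothetical and the link is a placeholder for the real answer URL):

    #include <string.h>

    /*
     * Natural-order string comparison, adapted from a Stack Overflow
     * answer (CC BY-SA 4.0) -- see <link to the answer> for the
     * discussion and the edge cases considered there.
     */
    static int natural_compare(const char *a, const char *b)
    {
        /* ...adapted implementation would go here... */
        return strcmp(a, b);  /* placeholder body for this illustration */
    }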

graemep · 1h ago
I always do too, for exactly the same reason.
9283409232 · 8h ago
This isn't 100% true, meaning it isn't well settled. Have people already forgotten Google vs Oracle? Google ended up winning that after years and years, but the judgements went back and forth, and there are four factors used to determine whether something is or isn't fair use; generative AI would fail a few of those.
kazinator · 7h ago
Google vs. Oracle was about whether APIs are copyrightable, which is an important issue that speaks to antitrust. Oracle wanted the interface itself to be copyrighted so that even if someone reproduced the API from a description of it, it would infringe. The implication being that components which clone an API would be infringing, even though their implementation is original, discouraging competitors from making API-compatible components.

My comment didn't say anything about the output of AI being fair use or not, rather that fair use (no matter where you are getting the material from) doesn't ipso facto mean that copy-paste is considered okay.

Every employer I ever had discouraged copy and paste from anywhere as a blanket rule.

At least, that had been the norm, before the LLM takeover. Obviously, organizations that use AI now for writing code are plagiarizing left and right.

overfeed · 6h ago
> Google vs. Oracle was about whether APIs are copyrightable, which is an important issue that speaks to antitrust.

In addition to the Structure, Sequence and Organization claims, the original filing included a claim for copyright violation on 9 identical lines of code in rangeCheck(). This claim was dropped after the judge asked Oracle to reduce the number of claims, which forced Oracle to pare down to their strongest claims.
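For scale: rangeCheck() is a tiny argument-validation helper. An illustrative equivalent in C (not the litigated Java code; names are made up) shows how little room a function like this leaves for two independent implementations to differ:

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustration only: validate that [from, to) is a sane slice of an
     * array of the given length.  There are only so many ways to write it. */
    static void range_check(size_t length, size_t from, size_t to)
    {
        if (from > to) {
            fprintf(stderr, "range_check: from (%zu) > to (%zu)\n", from, to);
            abort();
        }
        if (to > length) {
            fprintf(stderr, "range_check: to (%zu) > length (%zu)\n", to, length);
            abort();
        }
    }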

abhisek · 5h ago
> It's best to start strict and safe, then relax.

Makes total sense.

I am just wondering how we differentiate between AI-generated code and human-written code that is influenced by or copied from some unknown source. The same licensing problem may happen with human code as well, especially for OSS where anyone can contribute.

Given the current usage, I am not sure if AI generated code has an identity of its own. It’s really a tool in the hand of a human.

catlifeonmars · 4h ago
> Given the current usage, I am not sure if AI generated code has an identity of its own. It’s really a tool in the hand of a human.

It’s a power saw. A really powerful tool that can be dangerous if used improperly. In that sense the code generator can have more or less of a mind of its own depending on the wielder.

Ok I think I’ve stretched the analogy to the breaking point…

daeken · 9h ago
I've been trying out Claude Code (the tool I've found most effective in terms of agentic code gen/manipulation) for an emulator project of mine for the last few days. Part of it is a compiler from an architecture definition to disassembler/interpreter/recompiler. I hit a fairly minor compiler bug and decided to ask Claude to debug and fix it. Some things I noted:

1. My C# code compiled just fine and ran even, but it was convinced that I was missing a closing brace on a lambda near where the exception was occurring. The diff was ... Putting the existing brace on a new line. Confidently stated that was the problem and declared it fixed.

2. It did figure out that an unexpected type was being seen, and implemented a pathway that allowed for it to get to the next error, but didn't look into why that type had gotten there; that was the actual bug, not the unhandled type. So it "fixed" it, but just kicked the can down the road.

3. When figuring out the issue, it just looked at the stack trace. That was it. It was running the compiler itself; it could've just embedded some debug code (like I did) and worked out what the actual issue was, but it didn't even try. The exception was just a NotSupportedException with no extra details to work off of, so adding just a crumb of context would let you solve the issue.

Now, is this the simplest emulator you could throw AI at? No, not at all. But neither is qemu. I'm thoroughly unconvinced that current tools could provide real value on codebases like these. I'm bullish on them for the future, and I use GenAI constantly, but this ain't a viable use case today.
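On point 3 specifically: the project is C#, but the "crumb of context" idea is language-agnostic. A hedged C-flavoured sketch (every name here is invented for illustration) of the difference between a bare "not supported" failure and one that tells you what it actually saw:

    #include <stdio.h>
    #include <stdlib.h>

    /* Invented IR node kinds, purely for illustration. */
    typedef enum { NODE_ADD, NODE_MUL, NODE_SHUFFLE } node_kind;

    static const char *kind_name(node_kind k)
    {
        switch (k) {
        case NODE_ADD:     return "add";
        case NODE_MUL:     return "mul";
        case NODE_SHUFFLE: return "shuffle";
        default:           return "unknown";
        }
    }

    static void lower(node_kind k, const char *src_file, int src_line)
    {
        if (k == NODE_SHUFFLE) {
            /* The extra crumb: name the offending kind and where it came
             * from, instead of failing with a bare "not supported". */
            fprintf(stderr, "lowering: unsupported node kind '%s' (from %s:%d)\n",
                    kind_name(k), src_file, src_line);
            abort();
        }
        /* ...supported kinds handled here... */
    }

    int main(void)
    {
        lower(NODE_ADD, "example.src", 1);  /* supported: proceeds silently */
        return 0;
    }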

caleblloyd · 3h ago
Signed mostly by people at Red Hat, which is owned by IBM, which makes Watson, which beat humans at Jeopardy in 2011.

> These are early days of AI-assisted software development.

Are they? Or is this just IBM slowly destroying another acquisition?

Meanwhile the Dotnet Runtime is fully embracing AI. Which people on the outside may laugh at but you have extremely talented engineers like Stephen Toub and David Fowler advocating for it.

So enterprises: next time you have an IBM rep trying to sell you AI services, do yourself a favor and go to any other number of companies out there who are actually serious about helping you build for the future.

And since I am a North Carolina native, here’s to hoping IBM and RedHat get their stuff together.

b0a04gl · 5h ago
there's no audit trail for how most code gets shaped anyway: a teammate's intuition from a past outage, a one-liner from some old jira ticket, even the shape of a func pulled from habit. none of that is reviewable, but it still gets trusted lol

ai moves faster than group consensus. this ban won't slow down the tech; it may just make projects like qemu harder to enter, harder to scale, harder to test properly

so if we maintain code like this we gotta know the trade we're making: we're preserving trust but limiting throughput. maybe that's fine, idk, but don't confuse it with future proofing

i kinda feel it exposes that trust in oss is social, not epistemic. we accept complex things if we know who dropped them, and we reject clean things if they smell synthetic

so the real question isn't "did we use ai?" it's "can we even maintain this in 6 months?" and if the answer's yes, it doesn't really matter who produced the code

BurningFrog · 8h ago
Would it make sense to include the complete prompt that generated the code with the code?
catlifeonmars · 4h ago
You’d need to hash the model weights and save the seeds for the temperature prng as well, in order to verify the provenance. Ideally it would be reproducible, right?
danielbln · 21m ago
Maybe 2 years ago. Nowadays LLMs call functions and use tools; good luck capturing that in a way that's reproducible.
astrobiased · 7h ago
It would need to be more than that. A prompt for one model can produce different results than for another. Even when the same model gets different treatment at inference time, e.g. quantization, the same prompt could produce different output for the quantized and unquantized versions.
verdverm · 7h ago
Even more so: when you come back in a few years to understand the code, the model will no longer be available.
galangalalgol · 6h ago
One of several reasons to use an open model even if it isn't quite as good. Version control the models and commit the prompts with the model name and a hash of the parameters. I'm not really sure what value that reproducibility adds though.
N1H1L · 6h ago
I use LLMs for generating documentation: I write my code and ask Claude to write my documentation.
auggierose · 4h ago
I think you are doing it the wrong way around.
insane_dreamer · 3h ago
Maybe not. I trust Claude to write docs. I don’t trust it to write my code the way I want.
naveed125 · 6h ago
Coolest thing I've seen today.
mattl · 8h ago
I'm interested to see how this plays out. I'd like a similar policy for my projects, but also a similar policy/T&C that prohibits the crawling of the content too.
candiddevmike · 8h ago
Only way to prohibit crawling is to go back to invite only, probably self-hosted repositories. These companies have no shame, your T&Cs won't mean anything to them and you have no way of proving they violated them without some kind of discovery into their training data.

curious_cat_163 · 9h ago
That’s very conservative.
jssjsnj · 6h ago
Oi
jekwoooooe · 9h ago
When will people give up this archaic practice of sending patches over emails?
gerdesj · 8h ago
When enough people don't want to do it anymore. Feel free to step up, live with email patches, and add to the numbers of those who don't like it and say so.

Why is it archaic if it works? I get there might be other ways to do patch sharing and discussion but what exactly is your problem with email as a transport?

You might as well describe voice and ears as archaic!

MobiusHorizons · 8h ago
likely when it stops being a useful way to cut out noise
SchemaLoad · 9h ago
Sending patches over email is basically a filter for slop. Stops the low effort drive by PRs and anyone who actually wants to invest some time in to contributing won't have a problem working out the workflow.
jnwatson · 9h ago
AI can figure out how to send a patch via email a lot faster than a human.
Art9681 · 8h ago
This is a "BlockBuster laughs Netflix out of the room" moment. I am a huge fan of QEMU and used it throughout my career. The maintainers have every right to govern their project as they see fit. But this is a lot of mental gymnastics to justify clinging to punchcards in a world where we now have magnetic tape and keyboards to do things faster. This tech didn't spawn weeks ago. Every major project has had at least two years to prepare for this moment.

Pull your pants up.

catlifeonmars · 4h ago
2 years isn’t that long. It took the Linux kernel 10 years to start accepting code written in Rust. This isn’t quite the same as the typical frontend flavor-of-the week JavaScript library.
add-sub-mul-div · 8h ago
> This is a "BlockBuster laughs Netflix out of the room" moment

I'm not sure that's the dunk you think it is. Good for Netflix for making money, but we're drowning in their empty slop content now and worse off for it.

danielbln · 19m ago
Who is forcing you to watch slop? And mind you, there was a TON of garbage at any local Blockbuster back in the day, with the added joy of having to go somewhere to rent it, being slapped with late and rewind fees, or finding that what you want to watch isn't available at all.

Choice is good. It means more slop, but also more gold. Figure out how to find the gold.

9283409232 · 8h ago
You're so dramatic. Like they said in the declaration, these are the early days of AI development and all the problems they mention will eventually be resolved, so they have no problem taking a back seat while things sort themselves out. I respect that choice.
teruakohatu · 9h ago
So essentially it’s “let us cover ourselves by saying it’s not allowed”, and in practice that means not allowing code that a human reviewer thinks is AI generated.

Universities have this issue too, despite many offering students and staff Grammarly (Gen AI) while also trying to ban Gen AI.

SchemaLoad · 9h ago
Sounds like a good idea to ensure developers are owning the code they submit rather than hiding behind "I don't know why it does that, ChatGPT wrote it".

Use AI if you want to, but if the person on the other side can tell, and you can't defend the submission as your own, that's a problem.

JoshTriplett · 9h ago
> Use AI if you want to, but if the person on the other side can tell, and you can't defend the submission as your own, that's a problem.

The actual policy is "don't use AI code generators"; don't try to weasel that into "use it if you want to, but if the person on the other side can tell". That's effectively "it's only cheating if you get caught".

By way of analogy, Open Source projects also typically have policies (whether written or unwritten) that you only submit code you are legally allowed to submit. In theory, you could take a pile of proprietary reverse-engineered code that you have no license to, or a pile of code from another project that you aren't respecting the license of, and submit it anyway, and slap a `Signed-off-by` on it. Nothing will physically stop you, and people might not be able to tell. That doesn't make it OK.

SchemaLoad · 9h ago
The way I interpret it is that if you brainstorm using ChatGPT but write your own code using the ideas created in this step that would be fine, the reviewer wouldn't suspect the code of being AI generated because you've made sure it fits in with the project and actually works. The exact wording here is that they will reject changes they suspect of being AI generated, not that you can't have read anything AI generated in the process.

Getting AI to remind you of a library's API is a fair bit different to having it generate 1000 lines of code you have hardly read before submitting.

Art9681 · 8h ago
What if the code is AI generated and the developer that drove it also understands the code and can explain it?
Filligree · 6h ago
Well, then you’re not allowed to submit it. This isn’t hard.
_fat_santa · 9h ago
Well I guess the key difference is that code is deterministic; whether a paper accomplishes its goals is somewhat subjective, but with code it's an absolute certainty.

I'm sure that if a contributor working on a feature used Cursor to initially generate the code but then went over it to ensure it's working as expected, that would be allowed. This is more for those folks who just want to jam in a quick vibe-coded PR so they can add "contributed to the QEMU project" to their resumes.

hananova · 9h ago
You'd be wrong, the linked commit clearly says that anything written by, or derived from, AI code generation is not allowed.
GuB-42 · 9h ago
It's more like a clarification.

The rules regarding the origin of code contributions are rather strict; that is, you can't contribute other people's code unless you can make sure that the licence is appropriate. An LLM may output a copy of someone else's code, sometimes verbatim, without giving you its origin, so you can't contribute code written by an LLM.

pretoriusdre · 6h ago
AI generated code is generally pretty good and incredibly fast.

Seeing this new phenomenon must be difficult for those people who have spent a long time perfecting their craft. Essentially, they might feel that their skillsets are being undermined. It would be especially hard for people who associate a lot of their self-identity with their job.

Being a purist is noble, but I think that this stance is foolish. Essentially, people who chose not to use AI code tools will be overtaken by the people who do. That's the unfortunate reality.

loktarogar · 6h ago
It's not a stance about the merits of AI generated code but about the legal status of it, in terms of who owns it and related concepts.
pretoriusdre · 5h ago
Yes the reasoning behind the decision is clear and as you described. But I would also make the point that the decision also comes with certain consequences, to which a discussion about merits is directly relevant.
loktarogar · 1h ago
> Essentially, people who chose not to use AI code tools will be overtaken by the people who do. That's the unfortunate reality.

Who is going to "overtake" QEMU, what exactly does that mean, and what will it matter if they are?

danielbln · 17m ago
OP said people. QEMU is not people.
sysmax · 7h ago
I wish people would make a distinction regarding the size/scope of the AI-generated parts. Like with video copyright law, where a 5-second clip from a copyrighted movie is usually considered fair use and not frowned upon.

Because for projects like QEMU, current AI models can actually do mind-boggling stuff. You can give one a PDF describing an instruction set, and it will generate wrapper classes for emulating particular instructions. Then you can give it one such class and a few paragraphs from the datasheet, and it will spit out unit tests checking that your class works as the CPU vendor describes.

Like, you can get from 0% to 100% test coverage several orders of magnitude faster than doing it by hand. Or refactoring, where you want to add support for a particular memory virtualization trick and need to update 100 instruction classes based on a straightforward, but not 100% formal, rule. A human developer would be pulling their hair out, while an LLM will do it faster than you can get a coffee.
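To make that concrete, a hedged sketch in C of the kind of per-instruction wrapper plus datasheet-derived unit test I mean; the register file, names and ADDI semantics are invented (loosely RISC-V-like), and none of this is QEMU's actual API:

    #include <assert.h>
    #include <stdint.h>

    /* Illustrative register file: 32 general-purpose registers plus a pc. */
    typedef struct {
        uint32_t regs[32];
        uint32_t pc;
    } toy_cpu;

    /* ADDI rd, rs1, imm: rd = rs1 + sign-extended immediate; reg 0 stays zero. */
    static void exec_addi(toy_cpu *cpu, unsigned rd, unsigned rs1, int32_t imm)
    {
        if (rd != 0) {
            cpu->regs[rd] = cpu->regs[rs1] + (uint32_t)imm;
        }
        cpu->pc += 4;
    }

    /* The kind of unit test generated from a few datasheet paragraphs. */
    static void test_addi(void)
    {
        toy_cpu cpu = {0};
        cpu.regs[1] = 40;
        exec_addi(&cpu, 2, 1, 2);
        assert(cpu.regs[2] == 42);  /* result lands in rd */
        assert(cpu.regs[0] == 0);   /* register 0 is never written */
        assert(cpu.pc == 4);        /* pc advances by one instruction */
    }

    int main(void)
    {
        test_addi();
        return 0;
    }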

halostatue · 5h ago
Not all jurisdictions are the US; some do not allow fair use but instead have specific fair dealing laws, and some have neither, meaning that every use has to be cleared.

There are simple algorithms that everyone will implement the same way down to the variable names, but aside from those fairly rare exceptions, there's no "maximum number of lines" metric to describe how much code is "fair use" regardless of the licence of the code "fair use"d in your scenario.

Depending on the context, even in the US that 5-second clip would not pass fair use doctrine muster. If I made a new film cut entirely from five second clips of different movies and tried a fair use doctrine defence, I would likely never see the outside of a courtroom for the rest of my life. If I tried to do so with licensing, I would probably pay more than it cost to make all those movies.

Look up the decisions over the last two decades over sampling (there are albums from the late 80s and 90s — when sampling was relatively new — which will never see another pressing or release because of these decisions). The musicians and producers who chose the samples thought they would be covered by fair use.

762236 · 7h ago
It sounds like you're saying someone could rewrite Qemu on their own, with the help of AI. That would be pretty funny.
mrheosuper · 5h ago
Given enough time, a monkey randomly typing on a typewriter could rewrite QEMU.
echelon · 7h ago
Qemu can make the choice to stay in the "stone age" if they want. Contributors who prefer AI assistance can spend their time elsewhere.

It might actually be prudent for some (perhaps many foundational) OSS projects to reject AI until the full legal case law precedent has been established. If they begin taking contributions and we find out later that courts find this is in violation of some third party's copyright (as shocking as that outcome may seem), that puts these projects in jeopardy. And they certainly do not have the funding or bandwidth to avoid litigation. Or to handle a complete rollback to pre-AI background states.