AI tooling must be disclosed for contributions
423 points by freetonik 8/21/2025, 6:49:57 PM 209 comments github.com ↗
But I also think that if a maintainer asks you to jump before submitting a PR, you politely ask, “how high?”
If trust didn't matter, there wouldn't have been a need for the Linux Kernel team to ban the University of Minnesota for attempting to intentionally smuggle bugs through the PR process as part of an unauthorized social experiment. As it stands, if you / your PRs can't be trusted, they should not even be admitted to the review process.
Slop generators being available to everyone makes everyone less trustworthy, from a maintainer's POV. Thus, the circle of trust, for any given maintainer, shrinks starkly.
People do not become maintainers because they want to battle malicious, or even criminally negligent, crap. They expect benign and knowledgeable contributors, or at least benign ones who are willing to do their homework.
Being a maintainer is already hugely thankless. It's hard work (harder than writing code), and it comes with a lot less recognition. Not to mention all the newcomers that (a) maintainers usually eagerly educate, but then (b) disappear.
Screw up the social contract for maintainers even more, and they'll go extinct. (Edit: if a maintainer gets a whiff of some contributor working against them, rather than with them, they'll either ban the contributor forever, or just quit the project.)
Any sane project should categorically ban AI-assisted contributions, and extend their Signed-off-by definition, after a cut-off-date, to carry an explicit statement by the contributor that the code is free of AI-output. If this rules out "agentic IDE"s, that's a win.
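To make that concrete, here's a minimal sketch of how such an extended sign-off could be enforced in CI. The trailer name "AI-Free" and the ref range are hypothetical, not an established convention, and a real check would additionally only apply after the project's cut-off date:

```python
# Hypothetical CI gate: every commit in the PR must carry an explicit
# trailer attesting it contains no AI output. Trailer name and ref range
# are made up for illustration.
import subprocess
import sys

TRAILER = "AI-Free: yes"
RANGE = "origin/main..HEAD"  # commits proposed in the PR

def commits_missing_trailer() -> list[str]:
    """Return SHAs in RANGE whose commit message lacks the attestation."""
    shas = subprocess.run(
        ["git", "rev-list", RANGE],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    missing = []
    for sha in shas:
        body = subprocess.run(
            ["git", "log", "-1", "--format=%B", sha],
            capture_output=True, text=True, check=True,
        ).stdout
        if TRAILER not in body:
            missing.append(sha)
    return missing

if __name__ == "__main__":
    bad = commits_missing_trailer()
    if bad:
        print("Commits missing the attestation trailer:")
        print("\n".join(bad))
        sys.exit(1)
```

Like Signed-off-by itself, this only verifies that the promise was made, not that it is true; the point is the explicit statement.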
Otherwise, what’s the harm in saying AI guides you to the solution if you can attest to it being a good solution?
If I just vibe-coded something and haven't looked at the code myself, that seems like a necessary thing to disclose. But beyond that, if the code is well understood and solid, I feel that I'd be clouding the conversation by unnecessarily bringing the tools I used into it. If I understand the code and feel confident in it, whether I used AI or not seems irrelevant and distracting.
This policy is just shoving the real problem under the rug. Generative AI is going to require us to come up with better curation/filtering/selection tooling, in general. This heuristic of "whether or not someone self-disclosed using LLMs" just doesn't seem very useful in the long run. Maybe it's a piece of the puzzle but I'm pretty sure there are more useful ways to sift through PRs than that. Line count differences, for example. Whether it was a person with an LLM or a 10x coder without one, a PR that adds 15000 lines is just not likely to be it.
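To illustrate that kind of heuristic, here's a rough triage sketch (the threshold and ref names are arbitrary assumptions) that flags oversized diffs before a human spends review time on them:

```python
# Rough triage sketch: flag PRs whose diffs exceed a size threshold so they
# can be deprioritized or sent back for splitting. Threshold is arbitrary.
import subprocess

MAX_CHANGED_LINES = 1500  # made-up cutoff for "needs extra scrutiny"

def changed_lines(base: str, head: str) -> int:
    """Count added + deleted lines between two git refs."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...{head}"],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added != "-":      # binary files report "-" for line counts
            total += int(added)
        if deleted != "-":
            total += int(deleted)
    return total

if __name__ == "__main__":
    n = changed_lines("origin/main", "HEAD")
    if n > MAX_CHANGED_LINES:
        print(f"Diff touches {n} lines: deprioritize or ask for a split.")
    else:
        print(f"Diff touches {n} lines: normal review queue.")
```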
This is the core problem with AI that makes so many people upset. In the old days, if you got a substantial submission, you knew a substantial amount of effort went into it. You knew that someone at some point had a mental model of what the submission was. Even if they didn't translate that perfectly, you could still try to figure out what they meant and were thinking. You knew the submitter put forth significant effort. That is a real signal that they are both willing and able to address the issues you raise going forward.
The existence of AI slop fundamentally breaks these assumptions. That is why we need enforced social norms around disclosure.
10x engineers create so many bugs without AI, and vibe coding could multiply that to 100x. But let's not distract from the source of that, which is rewarding the false confidence it takes to pretend we understand stuff that we actually don't.
The only reason one may not want disclosure is if one can’t write anything by themselves, thus they will have to label all code as AI generated and everyone will see their real skill level.
If they had used AI, their PRs might have been more understandable / less buggy, and ultimately I would have preferred that.
My little essay up there is more so a response to the heated "LLM people vs pure people" comments I'm reading all over this discussion. Some of this stuff just seems entirely misguided and fear driven.
If you’re unwilling to stop using slop tools, then you don’t get to contribute to some projects, and you need to accept that.
I don’t get it at all. Feels like modernity is often times just inventing pale shadows of things with more addictive hooks to induce needlessly dependent behavior.
No you don’t. You can’t outsource trust determinations. Especially to the people you claim not to trust!
You make the judgement call by looking at the code and your known history of the contributor.
Nobody cares if contributors use an LLM or a magnetic needle to generate code. They care if bad code gets introduced or bad patches waste reviewers’ time.
Stop trying to equate LLM-generated code with indexing-based autocomplete. They’re not the same thing at all: LLM-generated code is equivalent to code copied off Stack Overflow, which is also something you’d better not be attempting to fraudulently pass off as your own work.
That’s exactly the opposite of what the author is saying. He mentions that [if the code is not good, or you are a beginner] he will help you get to the finish line, but if it’s LLM code, he shouldn’t have to put in that effort because there’s no human on the other side.
It makes sense to me.
That's the false equivalence right there
I think you just haven't gotten the hang of it yet, which is fine... the tooling is very immature and hard to get consistent results with. But this isn't a given. Some people do get good, steerable LLM coding setups.
I can generate 1,000 PRs today against an open source project using AI. I think you do care, you are only thinking about the happy path where someone uses a little AI to draft a well constructed PR.
There are a lot of ways AI can be used to quickly overwhelm a project maintainer.
Then perhaps the way you contribute, review, and accept code is fundamentally wrong and needs to change with the times.
It may be that technologies like Github PRs and other VCS patterns are literally obsolete. We've done this before throughout many cycles of technology, and these are the questions we need to ask ourselves as engineers, not stick our heads in the sand and pretend it's 2019.
--
[1] https://www.copyright.gov/ai/
[2] https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...
> • Copyright protects the original expression in a work created by a human author, even if the work also includes AI-generated material
> • Human authors are entitled to copyright in their works of authorship that are perceptible in AI-generated outputs, as well as the creative selection, coordination, or arrangement of material in the outputs, or creative modifications of the outputs.
If you don't have to disclose that you consulted:
- books
- search engines
- stack overflow
- talking to a coworker
then it's not clear why you would have to disclose talking to an AI.
Generally speaking, when someone uses the word "slop" when talking about AI it's a signal to me that they've been sucked into a culture war and to discount what they say about AI.
It's of course the maintainer's right to take part in a culture war, but it's a useful way to filter out who's paying attention vs who's playing for a team. Like when you meet someone at a party and they bring up some politician you've barely heard of but who their team has vilified.
Whether it's prose or code, when informed something is entirely or partially AI generated, it completely changes the way I read it. I have to question every part of it now, no matter how intuitive or "no one could get this wrong"ish it might seem. And when I do, I usually find a multitude of minor or major problems. Doesn't matter how "state of the art" the LLM that shat it out was. They're still there. The only thing that ever changed in my experience is that problems become trickier to spot. Because these things are bullshit generators. All they're getting better at is disguising the bullshit.
I'm sure I'll get lots of responses trying to nitpick my comment apart. "You're holding it wrong", bla bla bla. I really don't care anymore. Don't waste your time. I won't engage with any of it.
I used to think it was undeserved that we programmers called ourselves "engineers" and "architects" even before LLMs. At this point, it's completely farcical.
"Gee, why would I volunteer that my work came from a bullshit generator? How is that relevant to anything?" What a world.
If you want me to put in the effort- you have to put it in first.
Especially considering in 99% of cases even the one who generated it didn’t fully read/understand it.
- People use AI to write cover letters. If companies don't filter them out automatically, they're screwed.
- Companies use AI to interview candidates. No one wants to spend their personal time talking to a robot. So the candidates start using AI to take interviews for them.
etc.
If you don't at least tell yourself that you don't allow AI PRs (even just as a white lie) you'll one day use AI to review PRs.
Imagine living before the invention of the printing press, and then lamenting that we should ban them because it makes it "too easy" to distribute information and will enable "low quality" publications to have more reach. Actually, this exact thing happened, but the end result was it massively disrupted the world and economy in extremely positive ways.
Citation needed, I don’t think the printing press and gpt are in any way comparable.
In some cases sure but it can also create the situation where people just waste time for nothing (think AI interviewing other AIs - this might generate GDP by people purchasing those services but I think we can all agree that this scenario is just wasting time and resource without improving society).
Imagine seeing “`rm -rf /` is a function that returns ‘Hello World!’” and thinking “this is the same thing as the printing press”
https://bsky.app/profile/lookitup.baby/post/3lu2bpbupqc2f
That said, requiring adequate disclosure of AI is just fair. It also suggests that the other side is willing to accept AI-supported contributions (without being willing to review endless AI slop that they could have generated themselves if they had the time to read it).
I would expect such a maintainer to respond fairly to "I first vibecoded it. I then made manual changes, vibecoded a test, cursorily reviewed the code, checked that the tests provide good coverage, ran both existing and new tests, and manually tested the code."
That fair response might be a thorough review, or a request that I do the thorough review before they put in the time, but I'd expect it to be more than a blatant "nope, AI touched this, go away".
Programming languages were a nice abstraction to accommodate our inability to comprehend complexity - current day LLMs do not have the same limitations as us.
The uncomfortable part will be what happens to PRs and other human-in-the-loop checks. It’s worthwhile to consider that not too far into the future, we might not be debugging code anymore - we’ll be debugging the AI itself. That’s a whole different problem space that will need an entirely new class of solutions and tools.
Natural language can be specific, but it requires far too many words. `map (+ 1) xs` is far shorter to write than "return a list of elements by applying a function that adds one to its argument to each element of xs and collecting the results in a separate list", or similar.
If, in the dystopian future, a justice court you're subjected to decides that Claude was trained on Oracle's code, and all Claude users are possibly in breach of copyright, it's easier to nuke from orbit all disclosed AI contributions.
Unreviewed generated PRs can still be helpful starting points for further LLM work if they achieve desired results. But close reading with consideration of authorial intent, giving detailed comments, and asking questions from someone who didn't write or read the code is a waste of your time.
That's why we need to know if a contribution was generated or not.
Any contributor who was shown to post provably untested patches used to lose credibility. And now we're talking about accommodating people who don't even understand how the patch is supposed to work?
Example where this kind of contribution was accepted and valuable, inside this ghostty project https://x.com/mitchellh/status/1957930725996654718
It would be nice if they did, in fact, say they didn't know. But more often they just waste your time making their chatbot argue with you. And the chatbots are outrageous gaslighters.
All big OSS projects have had the occasional bullshitter/gaslighter show up. But LLMs have increased the incidence level of these sorts of contributors by many orders of magnitude-- I consider it an open question if open-public-contribution opensource is viable in the world post LLM.
On the one hand, it's lowered the barrier to entry for certain types of contributions. But on the other hand getting a vibe-coded 1k LOC diff from someone that has absolutely no idea how the project even works is a serious problem because the iteration cycle of getting feedback + correctly implementing it is far worse in this case.
Also, the types of errors introduced tend to be quite different between humans and AI tools.
It's a small ask but a useful one to disclose how AI was used.
or say "fork you."
You might argue that by making rules, even futile ones, you at least establish expectations and take a moral stance. Well, you can make a statement without dressing it up as a rule. But you don't get to be sanctimonious that way I guess.
Not every time, but sometimes. The threat of being caught isn't meaningless. You can decide not to play in someone else's walled garden if you want but the least you can do is respect their rules, bare minimum of human decency.
You get someone that didn't use AI getting accused of using AI and eventually telling people to screw off and contributing nothing.
The only legitimate reason to make a rule is to produce some outcome. If your rule does not result in that outcome, of what use is the rule?
Will this rule result in people disclosing "AI" (whatever that means) contributions? Will it mitigate some kind of risk to the project? Will it lighten maintainer load?
No. It can't. People are going to use the tools anyway. You can't tell. You can't stop them. The only outcome you'll get out of a rule like this is making people incrementally less honest.
If someone really wants to commit fraud they’re going to commit fraud. (For example, by not disclosing AI use when a repository requires it.) But if their fraud is discovered, they can still be punished for it, and mitigating actions taken. That’s not nothing, and does actually do a lot to prevent people from engaging in such fraud in the first place.
Yes that is the stated purpose, did you read the linked GitHub comment? The author lays out their points pretty well, you sound unreasonably upset about this. Are you submitting a lot of AI slop PRs or something?
P.S. Talking. Like. This. Is. Really. Ineffective. It. Makes. Me. Just. Want. To. Disregard. Your. Point. Out. Of. Hand.
If this rule discourages low quality PRs or allows reviewers to save time by prioritizing some non-AI-generated PRs, then it certainly seems useful in my opinion.
Total bullshit. It's totally fine to declare intent.
You are already incapable of verifying / enforcing that a contributor is legally permitted to submit a piece of code as their own creation (Signed-off-by), and do so under the project's license. You won't embark on looking for prior art, for the "actual origin" of the code, whatever. You just make them promise, and then take their word for it.
If someone came to you and said "good news: I memorized the code of all the open source projects in this space, and can regurgitate it on command", you would be smart to ban them from working on code at your company.
But with "AI", we make up a bunch of rationalizations. ("I'm doing AI agentic generative AI workflow boilerplate 10x gettin it done AI did I say AI yet!")
And we pretend the person never said that they're just loosely laundering GPL and other code in a way that rightly would be existentially toxic to an IP-based company.
Sure it’s a big hill to climb in rethinking IP laws to align with a societal desire that generating IP continue to be a viable economic work product, but that is what’s necessary.
If you have code that happens to be identical to someone else's code or implements someone's proprietary algorithm, you're going to lose in court even if you claim an "AI" gave it to you.
AI is training on private Github repos and coughing them up. I've had it regurgitate a very well written piece of code to do a particular computational geometry algorithm. It presented perfect, idiomatic Python with perfect tests that caught all the degenerate cases. That was obviously proprietary code--no amount of searching came up with anything even remotely close (it's why I asked the AI, after all).
Not for a dozen lines here or there, even if it could be found and identified in a massive code base. That’s like quoting a paragraph of a book in another book: non-infringing.
For the second half of your comment, it sounds like you’re saying you got results that were too good to be AI; that’s a bit “no true Scotsman”, at least without more detail. But implementing an algorithm, even a complex one, is very much something an LLM can do. Algorithms are much better defined and scoped than general natural language, and LLMs do a reasonable job of translating natural language to programming languages. An algorithm is a narrow subset of that task type, with better defined context and syntax.
This is far from settled law. Let's not mischaracterize it.
Even so, an AI regurgitating proprietary code that's licensed in some other way is a very real risk.
So. Yes, technically possible. But impossible by accident. Furthermore when you make this argument you reveal that you don't understand how these models work. They do not simply compress all the data they were trained on into a tiny storable version. They are effectively multiplication matrices that allow math to be done to predict the most likely next token (read: 2-3 Unicode characters) given some input.
So the model does not "contain" code. It "contains" a way of doing calculations for predicting what text comes next.
Finally, let's say that it is possible that the model does spit out not entire works, but a handful of lines of code that appear in some codebase.
This does not constitute copyright infringement, as the lines in question (a) represent a tiny portion of the whole work (and copyright only protects against the reduplication of whole works or significant portions of a work), and (b) there are a limited number of ways to accomplish a certain function, so it is not only possible but inevitable that two devs working independently could arrive at the same implementation. Therefore using an identical implementation of a part of a work (which is what this case would be) is no more illegal than the use of a certain chord progression or melodic phrasing or drum rhythm. Courts have ruled about this thoroughly.
Judge Alsup, in his ruling, specifically likened the process to reading text and then using the knowledge to write something else. That’s training and use.
The reality is that programmers are going to see other programmers code.
You're certainly correct. It's also true that companies are going to sue over it. There's no reason to make yourself an easy lawsuit target, if it's trivial to avoid it.
Content on StackOverflow is under CC-by-sa, version depends on the date it was submitted: https://stackoverflow.com/help/licensing . (It's really unfortunate that they didn't pick license compatible with code; at one point they started to move to the MIT license for code, but then didn't follow through on it.)
I don't think anyone who's not monetarily incentivized to pretend there are IP/copyright issues actually thinks there are. Luckily everyone is for the most part just ignoring them, and the legal system is working well and not allowing them an inch to stop progress.
Why do you think that about people who disagree with you? You're responding directly to someone who's said they think there's issues, and not pretending. Do you think they're lying? Did you not read what they said?
And AFAICT a lot of other people think similarly to me.
The perverse incentives to rationalize are on the side of the people looking to exploit the confusion, not the people who are saying "wait a minute, what you're actually doing is..."
So a gold rush person claiming opponents must be pretending because of incentives... seems like the category of "every accusation is a confession".
They can have a moral view that AI is "stealing" but they are claiming there is actually a legal issue at play.
I really appreciate this point from mitchellh. Giving thoughtful constructive feedback to help a junior developer improve is a gift. Yet it would be a waste of time if the PR submitter is just going to pass it to an AI without learning from it.
I’ve completely turned off AI assist on my personal computer and only use AI assist sparingly on my work computer. It is so bad at compound work. AI assist is great at atomic work. The rest should be handled by humans, using AI wisely. It all boils back down to human intelligence. AI is only as smart as the human handling it. That’s the bottom line.
I think I'm slowly coming around to this viewpoint too. I really just couldn't understand how so many people were having widely different experiences. AI isn't magic; how could I have expected all the people I've worked with who struggle to explain stuff to team members, who have near perfect context, to manage to get anything valuable across to an AI?
I was originally pretty optimistic that AI would allow most engineers to operate at a higher level, but it really seems like instead it's going to massively exacerbate the difference between an OK engineer and a great engineer. Not really sure how I feel about that yet, but at least I understand now why some people think the stuff is useless.
Now, an "effective engineer" can be a less battle-tested software developer, but they must be good at system design.
(And by system design, I don't just mean architecture diagrams: it's a personal culture of constantly questioning and innovating around "let's think critically to see what might go wrong when all these assumptions collide, and if one of them ends up being incorrect." Because AI will only suggest those things for cut-and-dry situations where a bug is apparent from a few files' context, and no ambitious idea is fully that cut-and-dry.)
The set of effective engineers is thus shifting - and it's not at all a valid assumption that every formerly good developer will see their productivity skyrocket.
I don't think that it lowers the bar there, if anything the bar is far harsher.
If I'm doing normal coding I make X choices per time period, with Y impacts.
With AI X will go up and the Y / X ratio may ALSO go up, so making more decisions of higher leverage!
Great Engineer + AI = Great Engineer++ (Where a great engineer isn't just someone who is a great coder, they also are a great communicator & collaborator, and love to learn)
Good Engineer + AI = Good Engineer
OK Engineer + AI = Mediocre Engineer
He took a couple days doing this, which was shocking to me. Such a waste of time that would have been better spent reading the code and improving any missing documentation - and most importantly asking teammates about necessary context that couldn't just be inferred from the code.
If an OK engineer is still actively trying to learn, making mistakes, memorizing essentials, etc. then there is no issue.
On the other hand, if they're surrendering 100% of their judgment to AI, then they will be mediocre.
Thats the reason for high valuation of AI companies.
The people deciding how much OpenAI is worth would probably struggle to run first-time setup on an iPad.
Using search engines is a skill
Nassim Taleb is the prophet of our times and he doesn't get enough credit.
But then my wife sort of handed me a project that previously I would have just said no to, a particular Android app for the family. I have instances of all the various Android technologies under my belt, that is, I've used GUI toolkits, I've used general purpose programming languages, I've used databases, etc, but with the possible exception of SQLite (which even that is accessed through an ORM), I don't know any of the specific technologies involved with Android now. I have never used Kotlin; I've got enough experience that I can pretty much piece it together when I'm reading it but I can't write it. Never used the Android UI toolkit, services, permissions, media APIs, ORMs, build system, etc.
I know from many previous experiences that A: I could definitely learn how to do this but B: it would be a many-week project and in the end I wouldn't really be able to leverage any of the Android knowledge I would get for much else.
So I figured this was a good chance to take this stuff for a spin in a really hard way.
I'm about eight hours in and nearly done enough for the family; I need about another 2 hours to hit that mark, maybe 4 to really polish it. Probably another 8-12 hours and I'd have it brushed up to a rough commercial product level for a simple, single-purpose app. It's really impressive.
And I'm now convinced it's not just that I'm too old a fogey to pick it up, which is, you know, a bit of a relief.
It's just that it works really well in some domains, and not so much in others. My current work project is working through decades of organically-grown cruft owned by 5 different teams, most of which don't even have a person on them that understands the cruft in question, and trying to pull it all together into one system where it belongs. I've been able to use AI here and there for some stuff that is still pretty impressive, like translating some stuff into pseudocode for my reference, and AI-powered autocomplete is definitely impressive when it correctly guesses the next 10 lines I was going to type effectively letter-for-letter. But I haven't gotten that large-scale win where I just type a tiny prompt in and see the outsized results from it.
I think that's because I'm working in a domain where the code I'm writing is already roughly the size of the prompt I'd have to give, at least in terms of the "payload" of the work I'm trying to do, because of the level of detail and maturity of the code base. There's no single sentence I can type that an AI can essentially decompress into 250 lines of code, pulling in the correct 4 new libraries, and adding it all to the build system the way that Gemini in Android Studio could decompress "I would like to store user settings with a UI to set the user's name, and then display it on the home page".
I recommend this to anyone who wants to give the approach a fair shake: try it in a language and environment you know nothing about, so you aren't tempted to keep taking the wheel. The AI is almost the only tool I have in that environment, certainly the only one for writing code, so I'm forced to really exercise the AI.
The gist being: language (text input) is actually the vehicle you have to transfer neural state to the engine. When you are working in a greenfield project or pure-vibe project, you can get away with most of that neural state being in the "default" probability mode. But in a legacy project, you need significantly more context to constrain the probability distributions a lot closer to the decisions which were made historically.
That's a good insight. It's almost like, to use AI tools effectively, one needs to stop caring about the little things you'd get caught up in if you were already familiar and proficient in a stack: style guidelines, a certain idiomatic way of doing things, naming conventions, etc.
A lot like how I've stopped organizing digital files into folders, subfolders, etc. (along with other content) and now just rely on search. Everything is a flat structure; I don't care where it's stored or how it's organized as long as I can just search for it. That's what the computer is for: to keep track for me so I don't have to waste time organizing it myself.
Likewise for the code generative AI produces. I don't need to care about the code itself. As long as it's correct, not insecure, and performant, it's fine.
It's not 100% there yet, I still do have to go in and touch the code, but ideally I shouldn't have to, nor should I have to care what the actual code looks like, just the result of it. Let the computer manage that, not me. My role should be the system design and specification, not writing the code.
I suspect that well-engineered projects with plenty of test coverage and high-quality documentation will be easier to use AI on, just like they're easier for humans to comprehend. But you need to have somebody with the big picture still who can make sure that you don't just turn things into a giant mess once less disciplined people start using AI on a project.
The reason being that the boilerplate Android stuff is effectively given for free and not part of the context as it is so heavily represented in the training set, whereas the unique details of your work project is not. But finding a way to provide that context, or better yet fine-tune the model on your codebase, would put you in the same situation and there's no reason for it to not deliver the same results.
That it is not working for you now at your complex work projects is a limitation of tooling, not something fundamental about how AI works.
Aside: Your recommendation is right on. It clicked for me when I took a project that I had spent months of full-time work creating in C++, and rewrote it in idiomatic Go, a language I had never used and knew nothing about. It took only a weekend, and at the end of the project I had reviewed and understood every line of generated code & was now competent enough to write my own simple Go projects without AI help. I went from skeptic to convert right then and there.
However, the information-theoretic limitation of expressing what you want and how anyone, AI or otherwise, could turn that into commits, is going to be quite the barrier, because that's fundamental to communication itself. I don't think the skill of "having a very, very precise and detailed understanding of the actual problem" is going anywhere any time soon.
(1) The process of creating "a very, very precise and detailed understanding of the actual problem" is something AI is really good at, when partnered with a human. My use of AI tools got immensely better when I figured out that I should be prompting the AI to turn my vague short request into a detailed prompt, and then I spend a few iteration cycles fixing up before asking the agent to do it.
(2) The other problem of managing context is a search and indexing problem, which we are really, really good at and have lots of tools for, but AI is just so new that these tools haven't been adapted or seen wide use yet. If the limitation of the AI was its internal reasoning or training or something, I would be more skeptical. But the limitation seems to be managing, indexing, compressing, searching, and distilling appropriate context. Which is firmly in the domain of solvable, albeit nontrivial problems.
I don't see the information theoretic barrier you refer to. The amount of information an AI can keep in its context window far exceeds what I have easily accessible to my working memory.
> Net on Bullets - Position Unchanged
> So we come back to fundamentals. Complexity is the business we are in, and complexity is what limits us. R. L. Glass, writing in 1988, accurately summarizes my 1995 views:
>> So what, in retrospect, have Parnas and Brooks said to us? That software development is a conceptually tough business. That magic solutions are not just around the corner. That it is time for the practitioner to examine evolutionary improvements rather than to wait—or hope—for revolutionary ones.
>> Some in the software field find this to be a discouraging picture. They are the ones who still thought breakthroughs were near at hand.
>> But some of us—those of us crusty enough to think that we are realists—see this as a breath of fresh air. At last, we can focus on something a little more viable than pie in the sky. Now, perhaps, we can get on with the incremental improvements to software productivity that are possible, rather than waiting for the breakthroughs that are not likely to ever come.[1]
[0]: Brooks, Frederick P., Jr., The Mythical Man-Month: Essays on Software Engineering (1995), p. 226
[1]: Glass, R. L., "Glass" (column), System Development (January 1988), pp. 4-5.
An interesting stance.
Plenty of posts in the style of "I wrote this cool library with AI in a day" were written by really smart devs who are known for shipping good quality libraries very quickly.
It might just be my point of view, but I feel like there's been a sudden paradigm shift back to solid ML from the deluge of chatbot hype nonsense.
What's a key decision and what's a dot to connect varies by app and by domain, but the upside is that generally most code by volume is dot connecting (and in some cases it's like 80-90% of the code), so if you draw the lines correctly, huge productivity boosts can be found with little downside.
But if you draw the lines wrong, such that AI is making key decisions, you will have a bad time. In that case, you are usually better off deleting everything it produced and starting again rather than spending time to understand and fix its mistakes.
Things that are typically key decisions:
- database table layout and indexes
- core types
- important dependencies (don't let the AI choose dependencies unless it's low consequence)
- system design—caches, queues, etc.
- infrastructure design—VPC layout, networking permissions, secrets management
- what all the UI screens are and what they contain, user flows, etc.
- color scheme, typography, visual hierarchy
- what to test and not to test (AI will overdo it with unnecessary tests and test complexity if you let it)
- code organization: directory layout, component boundaries, when to DRY
Things that are typically dot connecting:
- database access methods for crud
- API handlers
- client-side code to make API requests
- helpers that restructure data, translate between types, etc.
- deploy scripts/CI and CD
- dev environment setup
- test harness
- test implementation (vs. deciding what to test)
- UI component implementation (once client-side types and data model are in place)
- styling code
- one-off scripts for data cleanup, analytics, etc.
That's not exhaustive on either side, but you get the idea.
AI can be helpful for making the key decisions too, in terms of research, ideation, exploring alternatives, poking holes, etc., but imo the human needs to make the final choices and write the code that corresponds to these decisions either manually or with very close supervision.
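To make the split concrete, this is the flavor of "dot connecting" code I'd happily hand off: a CRUD accessor whose shape is fully determined by decisions a human already made (the table layout and the core type). All names here are hypothetical; it's just a sketch.

```python
# Sketch of the "dot connecting" tier: trivial glue whose shape is dictated
# by human-made key decisions (the users table layout and the User type).
from dataclasses import dataclass
import sqlite3

@dataclass
class User:  # "core type": a key decision a human made up front
    id: int
    name: str
    email: str

def get_user(conn: sqlite3.Connection, user_id: int) -> User | None:
    """Fetch one user by primary key."""
    row = conn.execute(
        "SELECT id, name, email FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    return User(*row) if row else None

def create_user(conn: sqlite3.Connection, name: str, email: str) -> User:
    """Insert a user and return the stored record."""
    cur = conn.execute(
        "INSERT INTO users (name, email) VALUES (?, ?)", (name, email)
    )
    conn.commit()
    return User(cur.lastrowid, name, email)
```

The schema and the User type themselves, by contrast, stay on the key-decision side of the line.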
Make a knowledgeable reply and give no reference to the AI you used, and the comment is celebrated.
We are already barreling full speed down the "hide your AI use" path.
If the PR has issues and requires more than superficial re-work to be acceptable, the authors don't want to spend time debugging code spit out by an AI tool. They're more willing to spend a cycle or two if the benefit is you learning (either generally as a dev or becoming more familiar with the project). If you can make clear that you created or understand the code end to end, then they're more likely to be willing to take these extra steps.
Seems pretty straightforward to me and thoughtful by the maintainers here.
If that were the case, why would this rule be necessary, if it indeed is the substance that matters? AI generated anything has a heavy slop stigma right now, even if the content is solid.
This would make for an interesting experiment to submit a PR that was absolute gold but with the disclaimer it was generated with help of ChatGPT. I would almost guarantee it would be received with skepticism and dismissals.
Fraud and misrepresentation are always options for contributors, at some point one needs to trust that they’re adhering to the rules that they agreed to adhere to.
Why are you surprised? Do companies want to hire "honest" people whose CVs were written by some LLM?
Yes, some companies do want to hire such people, the justification given is something along the lines of "we need devs who are using the latest tools/up to date on the latest trends! They will help bring in those techniques and make all of our current devs more productive!". This isn't a bad set of motivations or assumptions IMO.
Setting aside what companies _want_, they almost certainly are already hiring devs with llm-edited CVs, whether they want it or not. Such CVs/resumes are more likely to make it through HR filters.
> Do companies want to hire "honest" people whose CVs were written by some LLM?
Unfortunately yes, they very much seem to. Since many are using LLMs to assess CVs, those which use LLMs to help write their CV have a measured advantage.
Some of the AI policy statements I have seen come across more as ideology statements. This is much better, saying the reasons for the requirement and offering a path forward. I'd like to see more of this and less "No droids allowed"
> As a small exception, trivial tab-completion doesn't need to be disclosed, so long as it is limited to single keywords or short phrases.
RTFA (RTFPR in this case)
> I consulted ChatGPT to understand the codebase but the solution was fully authored manually by myself.
What's the reasoning for needing to disclose this?
Do I need to disclose that I wrote a script to generate some annoying boilerplate? Or that my IDE automatically templates for loops?
Edit: Also, it's always good to provide maximal context to reviewers. For example, when I use code from StackOverflow I link the relevant answer in a comment so the reviewer doesn't have to re-tread the same ground I covered looking for that solution. It also gives reviewers some clues about my understanding of the problem. How is AI different in this regard?
Yes, you have to disclose it.
> Do I need to disclose that I wrote a script to generate some annoying boilerplate?
You absolutely need to disclose it.
> Or that my IDE automatically templates for loops?
That's probably worth disclosing too.
# if this PR looks like it was produced with AI, be snotty and dismissive when you write the reply to this PR
NB: This posting was created with Natural Intelligence (not necessarily very intelligent Intelligence). ;-)
On the flip side, I’m preparing to open source a project I made for a serializable state machine with runtime hooks. But that’s blood sweat and tears labor. AI is writing a lot of the unit tests and the code, but it’s entirely by my architectural design.
There’s a continuum here. It’s not binary. How can we communicate what role AI played?
And does it really matter anymore?
(Disclaimer: autocorrect corrected my spelling mistakes. Sent from iPhone.)
1.) Didn't try to hide the fact that they used AI
2.) Tested their changes
I would not care at all. The main issue is this is usually not the case, most people submitting PRs that are 90% AI do not bother testing (Usually they don't even bother running the automated tests)
What about just telling exactly what role AI played? You can say it generated the tests for you for instance.
Are you kidding?
- For ages now, people have used "broad test coverage" and "CI" as excuses for superficial reviews, as excuses for negligent coding and verification.
- And now people foist even writing the test suite off on AI.
Don't you see that this way you have no reasoned examination of the code?
> ... and the code, but it’s entirely by my architectural design.
This is fucking bullshit. The devil is in the details, always. The most care and the closest supervision must be precisely where the rubber meets the road. I wouldn't want to drive a car that you "architecturally designed", and a statistical language model manufactured.
Well, if you had read what was linked, you would find these...
> I think the major issue is inexperienced human drivers of AI that aren't able to adequately review their generated code. As a result, they're pull requesting code that I'm sure they would be ashamed of if they knew how bad it was.
> The disclosure is to help maintainers assess how much attention to give a PR. While we aren't obligated to in any way, I try to assist inexperienced contributors and coach them to the finish line, because getting a PR accepted is an achievement to be proud of. But if it's just an AI on the other side, I don't need to put in this effort, and it's rude to trick me into doing so.
> I'm a fan of AI assistance and use AI tooling myself. But, we need to be responsible about what we're using it for and respectful to the humans on the other side that may have to review or maintain this code.
I don't know specifically what PR's this person is seeing. I do know it's been a rumble around the open source community that inexperienced devs are trying to get accepted PRs for open source projects because they look good on a resume. This predated AI in fact, with it being a commonly cited method to get attention in a competitive recruiting market.
As always, folks trying to get work have my sympathies. However ultimately these folks are demanding time and work from others, for free, to improve their career prospects while putting in the absolute bare minimum of effort one could conceivably put in (having Copilot rewrite whatever part of an open source project and shove it into a PR with an explanation of what it did) and I don't blame them for being annoyed at the number of low-quality submissions.
I have never once criticized a developer for being inexperienced. It is what it is, we all started somewhere. However if a dev generated shit code and shoved it into my project and demanded a headpat for it so he could get work elsewhere, I'd tell him to get bent too.
An angle not mentioned in the OP is copyright - depending on your jurisdiction, AI-generated text can't be copyrighted, which could call into question whether you can enforce your open source license anymore if the majority of the codebase was AI-generated with little human intervention.
That being said, I feel like this is an intermediate step. It's really hard to review PRs that are AI slop, because it's so easy for those who don't know how to use AI to create a multi-hundred/thousand line diff. But when AI is used well, it really saves time and often creates high quality work.
Because of the perception that anything touched by AI must be uncreative slop made without effort. In the case of this article, why else are they asking for disclosure if not to filter and dismiss such contributions?
>I try to assist inexperienced contributors and coach them to the finish line, because getting a PR accepted is an achievement to be proud of. But if it's just an AI on the other side, I don't need to put in this effort, and it's rude to trick me into doing so.
1: https://mitchellh.com/writing
Blaming it on the tool, and not the person misusing it to get his name on a big OS project, is like blaming the new automatic oven in the kitchen, and not the chef, for getting a raw pizza on the table.
So if the code is integrated, the license of the project lies about parts of the code.
Your question makes sense. See U.S. Copyright Office publication:
> If a work's traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it.
> For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the “traditional elements of authorship” are determined and executed by the technology—not the human user...
> For example, if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare's style. But the technology will decide the rhyming pattern, the words in each line, and the structure of the text.
> When an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship. As a result, that material is not protected by copyright and must be disclaimed in a registration application.
> In other cases, however, a work containing AI-generated material will also contain sufficient human authorship to support a copyright claim. For example, a human may select or arrange AI-generated material in a sufficiently creative way that “the resulting work as a whole constitutes an original work of authorship.”
> Or an artist may modify material originally generated by AI technology to such a degree that the modifications meet the standard for copyright protection. In these cases, copyright will only protect the human-authored aspects of the work, which are “independent of” and do “not affect” the copyright status of the AI-generated material itself.
> This policy does not mean that technological tools cannot be part of the creative process. Authors have long used such tools to create their works or to recast, transform, or adapt their expressive authorship. For example, a visual artist who uses Adobe Photoshop to edit an image remains the author of the modified image, and a musical artist may use effects such as guitar pedals when creating a sound recording. In each case, what matters is the extent to which the human had creative control over the work's expression and “actually formed” the traditional elements of authorship.
> https://www.federalregister.gov/documents/2023/03/16/2023-05...
In any but a pathological case, a real code contribution to a real project has sufficient human authorship to be copyrightable.
> the license of the project lies about parts of the code
That was a concern pre-AI too! E.g. copy-paste from StackOverflow. Projects require contributors to sign CLAs, which doesn't guarantee compliance, but strengthens the legal position. Usually something like:
"You represent that your contribution is either your original creation or you have sufficient rights to submit it."
I would guess that many (if not most) of the people attempting to contribute AI generated code are legitimately trying to help.
People who are genuinely trying to be helpful can often become deeply offended if you reject their help, especially if you admonish them. They will feel like the reprimand is unwarranted, considering the public shaming to be an injury to their reputation and pride. This is most especially the case when they feel they have followed the rules.
For this reason, if one is to accept help, the rules must be clearly laid out from the beginning. If the ghostty team wants to call out "slop", then it must make it clear that contributing "slop" may result in a reprimand. Then the bothersome want-to-be helpful contributors cannot claim injury.
This appears to me to be good governance.
This seems very noisy/unhelpful.
The irony of this, when talking about AI.
The extent here is very important. There's a massive difference between vibe-coding a PR, using LLMs selectively to generate code in files in-editor, and edit prediction like copilot.
It says actually later that tab-completion needn't be disclosed.
or they want a reason to summarily dismiss code, which also means you can’t trust the reviewer to scrutinize code on its own merits
seems well intentioned and out of touch, to me
is there more context about what they are reacting to?
It would be a lie to sign those papers for something you vibe coded.
It's not just courtesy; you are committing fraud if you put your copyright notice on something you didn't create and publishing that to the world.
I don't just want that disclosed; I cannot merge it if it is disclosed, period.
If I use iOS's spellchecker which "learns" from one's habit (i.e.: AI, the really polished kind), I don't lose copyright over the text which I've written.