Off-topic, but I wish Linux had a stable ABI for loadable kernel modules. Obviously the kernel would have to provide shims for internal changes, because the internal ABI constantly evolves, so it would be costly and the drivers would probably run slower over time. Yet having the ability to use a driver from 15 years ago can be a huge win at times. That kind of compatibility is one of the things I love about Windows.
theptip · 7h ago
A good case study. I have found these two to be good categories of win:
> Use these tools as a massive force multiplier of your own skills.
Claude definitely makes me more productive in frameworks I know well, where I can scan and pattern-match quickly on the boilerplate parts.
> Use these tools for rapid onboarding onto new frameworks.
I’m also more productive here, this is an enabler to explore new areas, and is also a boon at big tech companies where there are just lots of tech stacks and frameworks in use.
I feel there is an interesting split forming in ability to gauge AI capabilities - it kinda requires you to be on top of a rapidly-changing firehose of techniques and frameworks. If you haven’t spent 100 hours with Claude Code / Claude 4.0 you likely don’t have an accurate picture of its capabilities.
“Enables non-coders to vibe code their way into trouble” might be the median scenario on X, but it’s not so relevant to what expert coders will experience if they put the time in.
bicx · 7h ago
This is a good takeaway. I use Claude Code as my main approach for making changes to a codebase, and I’ve been doing so every day for months. I have a solid system that I’ve developed through trial and error, and overall it’s been a massive boon to my productivity and willingness to attempt larger experiments.
One thing I love doing is developing a strong underlying data structure, schema, and internal API, then having CC one-shot a great UI for internal tools, which it often manages.
Being able to think at a higher level beyond grunt work and framework nuances is a game-changer for my career of 16 years.
kccqzy · 6h ago
This is more of a reflection of how our profession has not meaningfully advanced. OP talks about boilerplate. You talk about grunt work. We now have AI to do these things for us. But why do such things need to exist in the first place? Why hasn't there been a minimal-boilerplate language and framework and programming environment? Why haven't we collectively emphasized the creation of new tools to reduce boilerplate and grunt work?
abathologist · 6h ago
This is the glaring fallacy! We are turning to unreliable stochastic agents to churn out boilerplate and do toil that should just be abstracted or automated away by fully deterministic, reliably correct programs. This is, prima facie, a degenerative and wasteful way to develop software.
jama211 · 3h ago
Saying boilerplate shouldn’t exist is like saying we shouldn’t need nails or screws if we just designed furniture to be cut perfectly as one piece from the tree. The response is “I mean, sure, that’d be great, not sure how you’ll actually accomplish that though”.
okr · 11m ago
Love this analogy.
jazzyjackson · 2h ago
Yes, and it's why AI fills me with impending doom: handing over the reins to an AI that can deal with the bullshit for us means we will get stuck in a groundhog-day scenario of waking up with the same shitty architecture for the foreseeable future. Automation is the opposite of plasticity.
ako · 1h ago
I don’t think that will happen. It’s more like a 3D printer where you can feed in a new architecture and a new design every day and it will create it. More flexibility instead of less.
mquander · 12m ago
I guess this is probably what Lucifer said to God about why it was stupid to give humans free will.
jclarkcom · 4h ago
When humans are in the loop everything pretty much becomes stochastic as well. What matters more is the error rate and result correctness. I think this shifts the focus towards test cases, measurement, and outcome.
elzbardico · 2h ago
No. This is a fundamentally erroneous analogy. We don't generate code by a stochastic process.
jxf · 4m ago
Everything we do is a stochastic process. If you throw a dart 100 times at a target, it's not going to land at the same spot every time. There is a great deal of uncertainty and non-deterministic behavior in our everyday actions.
aargh_aargh · 1h ago
You don't? I do.
A few days ago I lost some data including recent code changes. Today I'm trying to recreate the same code changes - i.e. work I've just recently worked through - and for the life of me I can't get it to work the same way again. Even though "just" that is what I set out to do in the first place - no improvements, just to do the same thing over again.
MostlyStable · 1h ago
We don't understand how human minds work anywhere close to well enough to say this.
tankenmate · 1h ago
I have a strong suspicion that the world is not as deterministic as you'd like it to be.
baq · 1h ago
nothing prevents stochastic agents from producing reliable, deterministic and correct programs. it's literally what the agents are designed for. it's much less wasteful than me doing the same work and much, much less wasteful than trying to find a framework for all frameworks.
zer00eyz · 3h ago
> This is the glaring fallacy!
It feels like toil because it's not the interesting or engaging part of the work.
If you're going to build a piece of furniture, the cutting, nailing, and gluing are the "boilerplate" that you have to do around the act of creation.
LLMs are just nail guns.
baq · 1h ago
and sanding. don't forget sanding. 90% of building furniture is sanding.
nurettin · 3h ago
Great point, but there is absolutely no way of doing this for every framework and then maintaining it for ages. It is logistically impossible.
jalk · 2h ago
We have been emphasizing the creation of abstractions since forever.
We now have several different hardware platforms, programming languages, OS's, a gazillion web frameworks, tons of databases, build tools, clustering frameworks and on and on and on.
We haven't done so entirely collectively, but I don't think the amount of choice here reflects that we are stupid, but rather that "one size doesn't fit all". Think about the endless debates and flame wars about the "best" of those abstractions.
I'm sure that Skynet will end that discussion and come up with the one true and only abstraction needed ;)
IanCal · 49m ago
Because the set of problems we want to solve with code is huge and the world is messy. Many of these things really are at a very high level of abstraction, and the boilerplate feels boilerplatey but is actually slightly different each time in a way that isn't automatable. Or it is, but the configuration for that automation becomes the new bit you look at and see as grunt work.
Now we have a way we can get computers to do it!
mikepurvis · 5h ago
I feel this some days, but honestly I’m not sure it’s the whole answer. Every piece of code has some purpose or expresses a decision point in a design, and when you “abstract” away those decisions, they don’t usually go away — often they’re just hidden in a library or base class, or become a matter of convention.
Python’s subprocess, for example, has a lot of args, and that reflects the reality that creating processes is finicky and there are a lot of subtly different ways to do it. Getting an LLM to understand your use case and create a subprocess call for you is much more realistic than imagining some future version of subprocess where the options are just magically gone and it knows what to do, or where we’ve standardized on only one way to do it and one thing that happens with the pipes and one thing for the return code and all the rest of it.
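(Illustrative aside, not from the comment above: at the POSIX level, spawning one child process with captured output in C looks roughly like the sketch below. Each of subprocess's many arguments corresponds to one of these explicit decisions.)

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int pipefd[2];
        if (pipe(pipefd) == -1) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == -1) { perror("fork"); return 1; }

        if (pid == 0) {                      /* child: wire stdout to the pipe */
            close(pipefd[0]);
            dup2(pipefd[1], STDOUT_FILENO);
            close(pipefd[1]);
            execlp("uname", "uname", "-r", (char *)NULL);
            perror("execlp");                /* reached only if exec failed */
            _exit(127);
        }

        close(pipefd[1]);                    /* parent: read the captured output */
        char buf[256];
        ssize_t n;
        while ((n = read(pipefd[0], buf, sizeof buf - 1)) > 0) {
            buf[n] = '\0';
            fputs(buf, stdout);
        }
        close(pipefd[0]);

        int status;                          /* the "return code" decision */
        waitpid(pid, &status, 0);
        return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
    }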
travisgriggs · 5h ago
My take: money. Years ago, when I was cutting my teeth in software, efficiency was a real concern. Not just efficiency for limited CPU, memory, and storage. But also how you could maximize the output of smaller head count of developers. There was a lot of debate over which methodologies, languages, etc, gave the biggest bang for buck.
And then… that just kind of dropped out of the discussion. Throw things at the wall as fast as possible and see what sticks, deal with the consequences later. And to be fair, there were studies showing that choice of language didn’t actually make as big of a difference as the emotions behind the debates suggested. And then the web… committee-designed over years and years, with never the ability to start over. And lots of money meant that we needed lots of manager roles too. And managers elevate their status by having more people. And more people means more opportunity for specializations. It all becomes an unabated positive feedback loop.
I love that it’s meant my salary has steadily climbed over the years, but I’ve actually secretly thought it would be nice if there was a bit of a collapse in the field, just so we could get back to solid basics again. But… not if I have to take a big pay cut. :)
ZYbCRq22HbJ2y7 · 6h ago
> Why hasn't there been a minimal-boilerplate language and framework and programming environment?
There are? For example, Rails has had boilerplate generation commands for a couple of decades.
mhluongo · 5h ago
There's boilerplate in Rails too. We move the goal posts for what we define as boilerplate as we better explore and solve a class of problems.
dymk · 2h ago
What boilerplate is there in Rails?
TheDong · 1h ago
html is like 90% boilerplate, and so .html.erb in rails is mostly boilerplate.
kwanbix · 4h ago
It used to be. When I learned to program for Windows, I would basically learn Delphi or Visual Basic at the time. Maybe some database like Paradox. But I was reading a website that lists the skills needed to write backend code, and it was like 30 different things to learn.
anyfoo · 6h ago
Because people think learning Haskell is too hard.
I find that, of all languages, Haskell often allows me to get by with the least boilerplate. Packages like lenses/optics (and yes, Scrap Your Boilerplate/Generics) help. Funny package, though!
wyager · 4h ago
It's very minimal-boilerplate. It's done an exceptional job of eliminating procedural, tedious work, and it's done it in a way that doesn't even require macros! "Template Haskell" is Haskell's macro system and it's rarely used anymore.
These days, people mostly use things like GHC.Generics (generic programming for stuff like serialization that typically ends up being free performance-wise), newtypes and DerivingVia, the powerful and very generalized type system, and so on.
If you've ever run into a problem and thought "this seems tedious and repetitive", the probability that you could straightforwardly fix that is probably higher in Haskell than in any other language except maybe a Lisp.
zipzapzip · 1h ago
Because of the obsession with backwards compatibility and not breaking code. The web development industry is the prime example. HTML, JavaScript, CSS, a backend/frontend architecture - an absolutely terrible stack.
baq · 1h ago
if the simplest web page pulls in react in an attempt to be a small OS unto itself, that's what you get.
wyager · 4h ago
> Why hasn't there been a minimal-boilerplate language and framework and programming environment?
Haskell mostly solves boilerplate in a typed way and Lisp mostly solves it in an untyped way (I know, I know, roughly speaking).
To put it bluntly, there's an intellectual difficulty barrier associated with understanding problems well enough to systematize away boilerplate and use these languages effectively.
The difficulty gap between writing a ton of boilerplate in Java and completely eliminating that boilerplate in Haskell is roughly analogous to the difficulty gap between bolting on the wheels at a car factory and programming a robot to bolt on the wheels for you. (The GHC compiler devs might be the robot manufacturers in this analogy.) The latter is obviously harder, and despite the labor savings, sometimes the economics of hiring a guy to sit there bolting on wheels still works out.
not_that_d · 13m ago
For me it is not so. It makes me way faster in languages that I don't know, but slower in the ones I know, because a lot of the time it creates code that will eventually fail.
Then I need to expend extra time following everything it did so I can "fix" the problem.
nine_k · 6h ago
Yes. The author essentially asked Claude to port a driver from Linux 2.4 to Linux 6.8. Almost certainly there are sufficient amounts of training material, and web-searchable material, describing such tasks. The author provided his own expertise where Claude could not find a good analogue in the training corpus, that is, for the few actually non-trivial bits of porting.
"Use these tools as a massive force multiplier of your own skills" is a great way to formulate it. If your own skills in the area are near-zero, multiplying them by a large factor may still yield a near-zero result. (And negative productivity.)
rmoriz · 2h ago
You can still ask, generate a list of things to learn, etc. - basically generate a streamlined course based on all the tutorials, readmes, and source code available when the model was trained. You can call your tutor 24/7 as long as you've got tokens.
theshrike79 · 1h ago
ChatGPT even has a specific "Study mode" where it refrains from telling you the answer directly and kinda guides you to figure it out yourself.
ZYbCRq22HbJ2y7 · 6h ago
We have members on my team that definitely feel empowered to wade into new territory, but they make so much misdirected code with LLMs, even when we make everyone use Claude 4 thinking agents.
It seems to me that if you have been pattern matching for the majority of your coding career, and then you have an LLM agent pattern match on top of that, it results in a lot of headaches for the people on the team who haven't been doing that.
I think LLM agents are far faster at pattern matching than humans, but are not as good at it in general.
baq · 1h ago
> they make so much misdirected code with LLMs
just points to the fact that they've no idea what they're doing and would produce different, pointless code by hand, though much slower. this is the paradigm shift - you need a much bigger sieve to filter out the many more orders of magnitude of crap that inexperienced operators of LLMs create.
marcus_holmes · 5h ago
>> Use these tools for rapid onboarding onto new frameworks.
Also new languages - our team uses Ruby, and Ruby is easy to read, so I can skip learning the syntax and get the LLM to write the code. I have to make all the decisions, and guide it, but I don't need to learn Ruby to write acceptable-level code [0]. I get to be immediately productive in an unfamiliar environment, which is great.
[0] acceptable-level as defined by the rest of the team - they're checking my PRs.
AdieuToLogic · 4h ago
>>> Use these tools for rapid onboarding onto new frameworks.
> Also new languages - our team uses Ruby, and Ruby is easy to read, so I can skip learning the syntax and get the LLM to write the code.
If Ruby is "easy to read" and assuming you know a similar programming language (such as Perl or Python), how difficult is it to learn Ruby and be able to write the code yourself?
> ... but I don't need to learn Ruby to write acceptable-level code [0].
Since the team you work with uses Ruby, why do you not need to learn it?
> [0] acceptable-level as defined by the rest of the team - they're checking my PRs.
Ah. Now I get it.
Instead of learning the lingua franca and being able to verify your own work, "the rest of the team" has to make sure your PR's will not obviously fail.
Here's a thought - has it crossed your mind that team members needing to determine if your PR's are acceptable is "a bad thing", in that it may indicate a lack of trust of the changes you have been introducing?
Furthermore, does this situation qualify as "immediately productive" for the team or only yourself?
EDIT:
If you are not a software engineer by trade and instead a stakeholder wanting to formally specify desired system changes to the engineering team, an approach to consider is authoring RSpec[0] specs to define feature/integration specifications instead of PR's.
This would enable you to codify functional requirements such that their satisfaction is provable, assist the engineering team's understanding of what must be done in the context of existing behavior, identify conflicting system requirements (if any) before engineering effort is expended, provide a suite of functional regression tests, and serve as executable documentation for team members.
0 - https://rspec.info/features/6-1/rspec-rails/feature-specs/fe...
are you advocating for not having code reviews...? Just straight force push to main?
AdieuToLogic · 2h ago
> are you advocating for not having code reviews...? Just straight force push to main?
No, not at all.
What I was speaking about was if the person to whom I replied is not a s/w engineer, then perhaps a better contribution to their project would be to define requirements in the form of RSpec specifications (since Ruby is in use) and allow the engineering team to satisfy them as they determine appropriate.
I have seen product/project managers attempt to "contribute" to a development effort much like what was described. Usually there is a power dynamic such that engineers cannot overtly tell the manager(s), "you define the 'what' and we will define the 'how'." Instead, something like the PR flow described is grudgingly accepted and then worked around.
meesles · 6h ago
> Use these tools as a massive force multiplier of your own skills.
I've felt this learning just this week - it's taken me having to create a small project with 10 clear repetitions, messily made from AI input. But then the magic is making 'consolidation' tasks where you can just guide it into unifying markup, styles/JS, whatever you may have on your hands.
I think it was less obvious to me in my day job because in a startup with a lack of strong coding conventions, it's harder to apply these pattern-matching requests since there are fewer patterns. I can imagine in a strict, mature codebase this would be way more effective.
rmoriz · 4h ago
In times of Rust and TypeScript (just examples), coding standards are explicit. It's not optional anymore. All my vibe-coding projects use CI with tests, including style and type checks. The agent makes mistakes, but it sees the failing tests and fixes them. If you vibe code like we did Perl and PHP in 1999, you're gonna have a bad time.
baq · 1h ago
In a startup, especially early, the only thing that isn't optional is shipping. You can fix any and all issues later, first you have to ensure there is a 'later'. (That's not to say you shouldn't do it at all, but definitely focus on the business before the tech.)
tonkinai · 1h ago
It’s less about AI vs boilerplate and more about having good tests. If the code works and you can move fast, who cares who typed it.
mattfrommars · 5h ago
Do you get to use Claude Code through your employer to have the opportunity to spend 100 hours with it? Or do you do this on your own personal projects?
meander_water · 4h ago
> Be as specific as possible, making sure to use the domain-specific keywords for the task.
If you don't have the technical understanding of a language or framework, there is going to be a lot of ambiguity in your prompts.
This specificity gap leads the LLM to fill in those gaps for you, which may not be what you intended. And that's usually where bugs hide.
I think this is the flip side to being a "force multiplier"
SV_BubbleTime · 4h ago
“I need a C class that has a constructor for a tuple”
I would hope Claude would be like “Hold on man…”
qayxc · 2h ago
Hm. That was interesting, so I ran it as a prompt to Qwen3-Coder (which I run locally). It spat out a bunch of C functions together with a C struct and commented:
This implementation provides:
Generic Tuple Structure: A flexible tuple that can hold different types of elements (int, double, string, float)
Constructor Function: tuple_create() initializes a new tuple with specified size
Type-Safe Setters:
tuple_set_int(), tuple_set_double(), tuple_set_string(), tuple_set_float()
Type-Safe Getters:
tuple_get_int(), tuple_get_double(), tuple_get_string(), tuple_get_float()
Memory Management: Automatic memory allocation and deallocation
Utility Functions:
tuple_print() for displaying tuple contents
tuple_destroy() for cleanup
Seems to be a common enough request to be present in the model's training data :D Even the code looked fine, apart from being a very naive and thus slow implementation.
petesergeant · 3h ago
> I need a C class that has a constructor for a tuple
| I'll create a simple C++ class with a constructor that takes a tuple.
Here's an example...
> nah C
| C doesn't have classes or constructors - those are C++ features. In C, you
would use a struct and create an initialization function. Here's how
you'd handle a tuple-like structure:
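(The snippet itself isn't included in the comment; a plausible reconstruction of what such an answer might contain, with hypothetical names, is:)

    #include <stdio.h>

    /* A fixed-shape "tuple" as a plain struct, plus an init function
     * standing in for a constructor. */
    struct pair {
        int    first;
        double second;
    };

    static void pair_init(struct pair *p, int first, double second)
    {
        p->first  = first;
        p->second = second;
    }

    int main(void)
    {
        struct pair p;
        pair_init(&p, 42, 3.14);
        printf("(%d, %g)\n", p.first, p.second);
        return 0;
    }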
Brendinooo · 7h ago
When I read an article like this it makes me think about how the demand for work to be done was nowhere close to being fully supplied by the pre-LLM status quo.
theshrike79 · 1h ago
LLM-assisted coding can get you from an idea to an MVP in an evening (within maybe one or two of Claude's 5-hour quota windows).
I've done _so_ many of these where I go "hmm, this might be useful", planned the project with the free versions of Gemini/ChatGPT into a markdown project file, and then sicced Claude on it while I catch up on my shows.
Within a few prompts I've got something workable and I can determine if it was a good idea or not.
Without an LLM I never would've even tried it; I have better and more urgent things to do than code a price-watcher for a very niche Blu-ray seller =)
measurablefunc · 5h ago
It's never about lack of work but lack of people who have the prerequisite expertise to do it. If you don't have experience w/ kernel development then no amount of prompting will get you the type of results that the author was able to achieve. More specifically, in theory it should be possible to take all the old drivers & "modernize" them to carry them forward into each new version of the kernel but the problem is that none of the LLMs are capable of doing this work w/o human supervision & the number of people who can actually supervise the LLMs is very small compared to the amount of unmaintained drivers that could be ported into newer kernels.
There is a good discussion/interview¹ between Alan Kay & Joe Armstrong about how most code is developed backwards b/c none of the code has a formal specification that can be "compiled" into different targets. If there was a specification other than the old driver code then the process of porting over the driver would be a matter of recompiling the specification for a new kernel target. In absence of such a specification you have to substitute human expertise to make sure the invariants in the old code are maintained in the new one b/c the LLMs have no understanding of any of it other than pattern matching to other drivers w/ similar code.
¹https://www.youtube.com/watch?v=axBVG_VkrHI
There is usually a specification for how hardware works. But:
1. The original hardware spec is usually proprietary, and
2. The spec is often what the hardware was supposed to do. But hardware prototype revisions are expensive. So at some point, the company accepts a bunch of hardware bugs, patches around them in software, ships the hardware, and reassigns the teams to a newer product. The hardware documentation won't always be updated.
This is obviously an awful process, but I've seen and heard of versions of it for over 20 years. The underlying factors driving this can be fixed, if you really want to, but it will make your product totally uncompetitive.
DrewADesign · 3h ago
AI doesn’t need to replace a specialist in their entirety for it to tank demand for a skill. If the people that currently do the work are significantly more productive, fewer people will be necessary to do the same amount of work. Then people trying to escape obsolescence in different, more popular specialties move into the niche ones. You could easily pass the threshold of having less work than people without having replaced a single specialist.
bandrami · 2h ago
IDK, the bottleneck really still seems to be "marketable ideas" rather than their implementation. There's only so much stuff people are willing to actually pay for.
pluto_modadic · 1h ago
things were on the backlog, but more important things absolutely needed to be done.
jabl · 1h ago
Blast from the past! When I was a kid we had such a floppy tape device connected to a 386 or 486 computer my parents had. I think it was a Colorado Jumbo 250. I think the actual capacity was 125MB, but the drive or the backup software had some built-in compression, hence why it was marketed as a 250MB drive. Never tried to use it with the Linux ftape driver, though.
It wouldn't surprise me if the drive and the tapes are still somewhere in my parents' storage. Could be a fun weekend project to try it out, though I'm not sure I have any computer with a floppy interface anymore. And I don't think there's anything particularly interesting on those tapes either.
In any case, cool project! Kudos to the author!
0xbadcafebee · 7h ago
I had a suspicion AI would lower the barrier to entry for kernel hacking. Glad to see it's true. We could soon see much wider support for embedded/ARM hardware. Perhaps even completely new stripped-down OSes for smart devices.
mrheosuper · 18m ago
>new stripped-down OSes for smart devices.
What's wrong with the existing ones?
eviks · 1h ago
Nothing was lowered because there was no barrier:
> As a giant caveat, I should note that I have a small bit of prior experience working with kernel modules, and a good amount of experience with C in general
But yeah, the dream of new OSes is sweet...
baq · 1h ago
I'd bet a couple dollars that it'd take a week for someone who hasn't hacked on the kernel at all but knows some C, and two weeks for someone who doesn't even know C but is a proficient programmer. This would previously take months.
We're talking about an order of magnitude quicker onboarding. This is absolutely massive.
eviks · 59m ago
It's so massive that your own fantasy bet is just a couple of dollars...
giancarlostoro · 3h ago
If used correctly it can help you get up to speed quicker, sadly most people just want it to build the house instead of using it to help them hammer nails.
csmantle · 6h ago
It's a good example of a developer who knows what to do with and what to expect from AI. And a healthy sprinkle of skepticism, because of which he chose to make the driver a separate module.
athrowaway3z · 2h ago
> so I loaded the module myself, and iteratively pasted the output of dmesg into Claude manually,
One of the things that has Claude as my goto option is its ability to start long-running processes, which it can read the output of to debug things.
There are a bunch of hacks you could have used here to skip the manual part, like piping dmesg to a local UDP port and having Claude start a listener.
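(One hedged sketch of such a setup, assuming standard tooling and made-up port numbers: stream the kernel log over loopback UDP with "dmesg --follow | nc -u 127.0.0.1 9999", and have the agent run a small listener like the one below, reading its stdout.)

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port        = htons(9999);   /* hypothetical port */

        if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("bind");
            return 1;
        }

        char buf[4096];
        for (;;) {
            ssize_t n = recvfrom(fd, buf, sizeof buf - 1, 0, NULL, NULL);
            if (n <= 0) break;
            buf[n] = '\0';
            fputs(buf, stdout);   /* echo kernel log lines as they arrive */
            fflush(stdout);
        }
        close(fd);
        return 0;
    }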
mattmanser · 12m ago
I think that's the thing holding a lot of coders back on agentic coding: these little tricks are still hard to get working. And that feedback loop is so important.
Even something simple like getting it to run a dev server in React can have it opening multiple servers and getting confused. I've watched streams where the programmer is constantly telling it to use an already-running server.
MagicMoonlight · 27m ago
Is Claude Code better than ChatGPT?
Amadiro · 14m ago
In my experiments Claude 4 Opus generated by far the best code (for my taste and purposes), but it's also a pretty expensive model. I think I used up $40 in one evening of frantic vibe-coding.
brainless · 4h ago
Empowering people is a lovely thing.
Here the author has a passion/side project they have been on for a while. Upgrading the tooling is a great thing. Community may not support this since the niche is too narrow. LLM comes in and helps in the upgrade. This is exactly what we want - software to be custom - for people to solve their unique edge cases.
Yes, the author is technical, but we are lowering the barrier, and it will be lowered even more. Semi-technical people will be able to solve some simpler edge cases, and so on. More power to everyone.
tedk-42 · 6h ago
Really is an exciting future ahead. So many lost arts no longer need a dedicated human to relearn the deep knowledge required to make an update.
A reminder, though: these LLM calls cost energy, and we need reliable power generation to iterate through this next tech cycle.
Hopefully all that useless crypto wasted clock cycle burn is going to LLM clock cycle burn :)
konfusinomicon · 6h ago
yes they do! those are the humans that pass down those lost arts even if the audience is a handful. to trust an amalgamation of neurally organized binary carved intricately into metal with deep and often arcane knowledge and the lineage of lessons that produced it is so absurd that if a catastrophe that destroyed life as we know it did occur, we deserve our fate of devolution back to stone tools and such.
rvz · 6h ago
> Really is an exciting future ahead. So many lost arts that don't need a dedicated human to relearn deep knowledge required to make an update.
You would certainly need an expert to make sure your air traffic control software is working correctly and not 'vibe coded' the next time you decide to travel abroad safely.
We don't need a new generation who can't read code and are heavily reliant on whatever a chat bot said because: "you're absolutely right!".
> Hopefully all that useless crypto wasted clock cycle burn is going to LLM clock cycle burn :)
Useful enough for Stripe to build their own blockchain, and even that and the rest of them are more energy-efficient than a typical LLM cycle.
But the LLM grift (or even the AGI grift) will not only cost even more than crypto, but the whole purpose of its 'usefulness' is the mass displacement of jobs with no realistic economic alternative other than achieving >10% global unemployment by 2030.
That's a hundred times more disastrous than crypto.
vkaku · 1h ago
Excellent. This is the kind of W that needs more people to jump into.
anonymousiam · 6h ago
I hope Dmitry did a good job. I've got a box of 2120 tapes with old backups from > 20 years ago, and I'm in the process of resurrecting the old (486) computer with both of my tape drives (floppy T-1000 and SCSI DDS-4). It would be nice to run a modern kernel on it.
rmoriz · 5h ago
I was banned from an open-source project [1] recently because I suggested a bug fix. Their "code of conduct" not only forbids PRs but also comments on issues containing information retrieved by any AI tool or resource.
Thinking about asking Claude to reimplement it from scratch in Rust…
[1] https://codeberg.org/superseriousbusiness/gotosocial/src/bra...
> 2. We will not accept changes (code or otherwise) created with the aid of "AI" tooling. "AI" models are trained at the expense of underpaid workers filtering inputs of abhorrent content, and does not respect the owners of input content. Ethically, it sucks.
Do you disagree with some part of the statement regarding "AI" in their CoC? Do you think there's a fault in their logic, or do you yourself personally just not care about the ethics at play here?
I find it refreshing personally to see a project taking a clear stance. Kudos to them.
Recently enjoyed reading the Dynamicland project's opinion on the subject very much too[0], which I think is quite a bit deeper of an argument than the one above.
Ethics seems to be, unfortunately, quite low down on the list of considerations of many developers, if it factors in at all to their decisions.
[0] https://dynamicland.org/2024/FAQ/#What_is_Realtalks_relation...
I disagree with their CoC on AI. There are so many important projects that don’t let you contribute, or make the barrier to entry so high that you put in a best effort to raise a detailed bug description only for it to sit there for 14 years, or for them to tell you to get fucked. So anyone who complains about AI isn’t worth the time of day, and I support them not getting paid as much, if at all.
DrewADesign · 3h ago
In today’s tech world, ethics that don’t support a profit motive are commie BS.
incr_me · 3h ago
> "AI" models are trained at the expense of underpaid workers filtering inputs of abhorrent content, and does not respect the owners of input content. Ethically, it sucks.
These ethics are definitely derived from a profit motive, however petty it may be.
pluto_modadic · 1h ago
you disobeyed a code of conduct? that's not a good look.
QuadmasterXLII · 4h ago
That must be so hard for you.
rmoriz · 3h ago
The bugs are on them. I've fixed them in my fork but of course I'll migrate to a non-discriminating alternative.
sreekanth850 · 4h ago
Suddenly I saw this: "Update regarding corporate sponsors: we are open to sponsorship arrangements with organizations that align with our values; see the conditions below."
They should know that beggars can't be choosers.
3836293648 · 2h ago
That's not begging. That's a preemptive rejection of people who think they can take control of the project through money.
fourthark · 7h ago
Upgrades and “collateral evolution” are very strong use cases for Claude.
I think the training data is especially good, and ideally no logic needs to change.
AdieuToLogic · 5h ago
Something not yet mentioned by other commenters is the "giant caveat":
As a giant caveat, I should note that I have a small bit of
prior experience working with kernel modules, and a good
amount of experience with C in general, so I don’t want to
overstate Claude’s success in this scenario. As in, it
wasn’t literally three prompts to get Claude to poop out a
working kernel module, but rather several back-and-forth
conversations and, yes, several manual fixups of the code.
It would absolutely not be possible to perform this
modernization without a baseline knowledge of the internals
of a kernel module.
Of note is the last sentence:
It would absolutely not be possible to perform this
modernization without a baseline knowledge of the internals
of a kernel module.
This is critical context when using a code generation tool, no matter which one chosen.
Then the author states in the next section:
Interacting with Claude Code felt like an actual
collaboration with a fellow engineer. People like to
compare it to working with a “junior” engineer, and I think
that’s broadly accurate: it will do whatever you tell it to
do, it’s eager to please, it’s overconfident, it’s quick to
apologize and praise you for being “absolutely right” when
you point out a mistake it made, and so on.
I don't know what "fellow engineers" the author is accustomed to collaborating with, junior or otherwise, but the attributes enumerated above are those of a sycophant and not any engineer I have worked with.
Finally, the author asserts:
I’m sure that if I really wanted to, I could have done this
modernization effort on my own. But that would have
required me to learn kernel development as it was done 25
years ago.
This could also be described as "understanding the legacy solution and what needs to be done" when the expressed goal identified in the article title is:
... modernize a 25-year-old kernel driver
Another key activity identified as a benefit to avoid in the above quote is:
... required me to learn ...
rgoulter · 51m ago
> I don't know what "fellow engineers" the author is accustomed to collaborating with, junior or otherwise, but the attributes enumerated above are those of a sycophant and not any engineer I have worked with.
The point being, there's nuance to "it felt like a collaboration with another developer (some caveats apply)". -- It's not a straightforward hype of "LLM is perfect for everything", nor is it so simple as "LLM has imperfections, it's not worth using".
> Another key activity identified as a benefit to avoid in the above quote is:
> > ... required me to learn ...
It would be bad to avoid learning fundamentals, or things which will be useful later.
But, it's not bad to say "there are things I didn't need to know to solve a problem".
badsectoracula · 1h ago
> Another key activity identified as a benefit to avoid in the above quote is: ... required me to learn ...
"...kernel development as it was done 25 years ago."
Not "...kernel development as it is done today".
That "25 years ago" is important and one might be interested in the latter but not in the former.
rmoriz · 5h ago
Gatekeeping is toxic. I love agents explaining projects I don't know to me. Recently I cloned the Firefox sources and asked qwen-code (the tool isn't significant) about the AI features of Firefox and how they're implemented. Learning has become awesome.
AdieuToLogic · 4h ago
> Gatekeeping is toxic.
Learning what must be done to implement a device driver in order for it to operate properly is not "gatekeeping." It is a prerequisite.
> I love agents explaining me projects I don‘t know.
Awesome. This is one way to learn about implementations and I applaud you for benefiting from same.
> Recently I cloned sources of Firefox and asked qwen-code (tool not significant) about the AI features of Firefox and how it‘s implemented. Learning has become awesome.
Again, this is not the same as implementing an OS device driver. Even though one could justify saying Firefox is way more complicated than a Linux device driver (and I would agree), the fact is that a defective device driver can lock up the machine[0], corrupt internal data structures resulting in arbitrary data corruption, and/or cause damage to peripheral devices.
Since it's still the same driver addressing the same hardware, it should be identical.
yieldcrv · 2h ago
I’ve been doing assembly subroutines in Solidity for years with LLMs; I wouldn't even have tried beforehand.
aussieguy1234 · 5h ago
AI works better when it has an example. In this case, all the code needed for the driver to work was already there as the example. It just had to update the code to reflect modern kernel development practices.
The same approach can be used to modernise other legacy codebases.
I'm thinking of doing this with a 15-year-old PHP repo, bringing it up to date with modern PHP (which is actually good).
unethical_ban · 7h ago
Neat stuff. I just got Claude Code and am training myself on Rails; I'm excited to have assistance working through some ideas I have, and seeing it handle this kind of iterative testing is great.
One note: I think the author could have modified the sudoers file to allow loading and unloading the module* without a password prompt.
nico · 7h ago
Claude is really good with frameworks like Rails. Both because it’s probably seen a lot of code in its training set, and because it works way better when there is a very well defined structure
anyfoo · 7h ago
... which would allow you to load arbitrary code into the kernel, pretty much bypassing any and all security. You might as well not have a password at all. Which, incidentally, can be a valid strategy for isolated external dev boards, or QEMU VMs. But on a machine with stuff you care about? You're basically ripping it open.
unethical_ban · 7h ago
He was already loading "arbitrary" Claude code, no? I'm suggesting there was a way to skip password entry by narrowly tailoring an exception.
Another thought: IIRC, in the plugins for Claude Code in my IDE, you can "authorize" actions and have manual intervention without having to leave the tool.
My point is there were ways I think they could have avoided copy/paste.
anyfoo · 7h ago
While I personally would have used a dedicated development target, the workflow he had at least allowed him to have a good look at any and all code changes, before approving with the root password.
That is a bit different than allowing unconfirmed loading of arbitrary kernel code without proper authentication.
frumplestlatz · 6h ago
> One note: I think the author could have modified sudoers file to allow loading and unloading the module* without password prompt.
Even a minor typo in kernel code can cause a panic; that’s not a reasonable level of power to hand directly to Claude Code unless you’re targeting a separate development system where you can afford repeated crashes.
Keyframe · 5h ago
pipe dream - now automate Asahi development to M3, M4, and onwards.
mschuster91 · 7m ago
the problem here is that Apple, while at least not standing actively in the way (like console manufacturers), provides zero documentation on how stuff works internally. You gotta reverse-engineer everything, and that either takes at least a dozen highly qualified and thus rare and expensive-to-hire people or someone hard on the autism-hyperfixation spectrum with lots of free time to spare and/or the ability to turn it into an academic project. AI can't help at all here because even if it were able to decompile Apple's driver code, it would not be able to draft a coherent mental model on how things work.
M3 onwards (to answer the second part, why AI won't be of much help) uses a massively different GPU architecture that needs to be worked out, again, from scratch. And all of that while there is a substantial number of subsystems remaining on M1, M2, and their variants that aren't supported at all, are only partially supported or need serious workarounds, or where the code quality needs massive work to get upstreamed into Linux.
And on top of that, a number of contributors burned out along the way, some from dealing with the ultra-neckbeard faction amongst Linux kernel developers, some from other mental health issues, and Alyssa departed for Intel recently.
flykespice · 1h ago
How long would the prompt be? Longer than the C++ standard specification?
rvz · 7h ago
No tests whatsoever. This isn't getting close to being merged into mainline and it will stay out-of-tree for a long time.
That's even before taking on the brutal Linux kernel mailing lists for code review, explaining what that C code does, which could be riddled with bugs that Claude generated.
No thanks and no deal.
geor9e · 6h ago
"The intention is to compile this driver as an out-of-tree kernel module, without needing to copy it into the kernel source tree. That's why there's just a simple Makefile, and no other affordances for kernel inclusion. I can't really imagine any further need to build this driver into the kernel itself.
The last version of the driver that was included in the kernel, right up until it was removed, was version 3.04.
BUT, the author continued to develop the driver independently of kernel releases. In fact, the last known version of the driver was 4.04a, in 2000.
My goal is to continue maintaining this driver for modern kernel versions, 25 years after the last official release." - https://github.com/dbrant/ftape
Even without that other article, this really reads like the author tried it for menial tasks on a neat passion project, and reports his success on it. (I'm a kernel developer, so I can empathize.)
squeakywhite · 6h ago
The post was created to show how AI helped this person solve their particular problem - which it appeared to do successfully.
Other people commenting about AI hype on the post isn't an indication that the post itself was created to hype AI, or that the post itself is "bad".
yeasku · 6h ago
First, I did not say the post is bad.
I said nobody will use the driver. But I am terribly wrong because one person will?
Second, any post on Hacker News is made to generate hype.
anyfoo · 6h ago
> I said nobody will use the driver. But I am terrible wrong because one person will?
Yes? The person who needs it is using it. Other people who need it (anyone who wants to archive tapes of that kind) now can, too.
> Second, another post on hackernews about how AI helps you code is not AI hype?
Do you think it was written with the intent to specifically hype AI, rather than to report on a passion project?
yeasku · 5h ago
I thought people around here would understand that "nobody will use it" excludes the developers of the software.
If I post a recipe for baked shit and I get a reply "nobody will eat that shit", are they wrong?
Too much hope.
squeakywhite · 5h ago
If you are eating the baked shit and enjoying it, and a subset of shit-eaters might also like your baked shit recipe, then yes - it would be wrong to say "nobody will eat that shit".
I suspect HN readers won't see enough value in your baked shit recipe for it to reach the front page - sorry. But bake away!
yeasku · 5h ago
It is not wrong, it is called hyperbole, it is a figure of speech.
I doubt you didn't know that, so that leaves me with two options:
You comment in bad faith, or you are autistic and don't get hyperbole.
In either case your comments feel annoying.
anyfoo · 5h ago
Why do you call what the author did shit? It is resurrecting an old tape driver for archival purposes. It may not have much commercial use, but anyone having those old tapes will appreciate it.
yeasku · 5h ago
I did not. Learn to read.
ambicapter · 4h ago
This isn't really hype, since in this case it actually built something. They're talking about reasonable uses of "AI", which this is one example of.
pmarreck · 3h ago
What the hell, dude? Did you even read the linked article? He sounds like he has a healthy skepticism... which is the right attitude to have (what I call "optimistic skepticism").
Whatever you have sounds more like "blanket knee-jerk unfounded pessimism"
grim_io · 5h ago
One man's AI hype is another man's tangible productivity boost and/or UX improvement.
firesteelrain · 6h ago
Upvoted you; however, it did solve the author's problem, and if he decides to post it to GitHub then it could help someone later. Plenty of people working on retro architectures wish they had things like this.
> Use these tools as a massive force multiplier of your own skills.
Claude definitely makes me more productive in frameworks I know well, where I can scan and pattern-match quickly on the boilerplate parts.
> Use these tools for rapid onboarding onto new frameworks.
I’m also more productive here, this is an enabler to explore new areas, and is also a boon at big tech companies where there are just lots of tech stacks and frameworks in use.
I feel there is an interesting split forming in ability to gauge AI capabilities - it kinda requires you to be on top of a rapidly-changing firehose of techniques and frameworks. If you haven’t spent 100 hours with Claude Code / Claude 4.0 you likely don’t have an accurate picture of its capabilities.
“Enables non-coders to vibe code their way into trouble” might be the median scenario on X, but it’s not so relevant to what expert coders will experience if they put the time in.
One thing I love doing is developing a strong underlying data structure, schema, and internal API, then essentially having CC often one-shot a great UI for internal tools.
Being able to think at a higher level beyond grunt work and framework nuances is a game-changer for my career of 16 years.
A few days ago I lost some data including recent code changes. Today I'm trying to recreate the same code changes - i.e. work I've just recently worked through - and for the life of me I can't get it to work the same way again. Even though "just" that is what I set out to do in the first place - no improvements, just to do the same thing over again.
It feels like toil because it's not the interesting or engaging part of the work.
If you're going to build a piece of furniture. The cutting, nailing, gluing are the "boiler plate" that you have to do around the act of creation.
LLM's are just nail guns.
Now we have a way we can get computers to do it!
Python’s subprocess for example has a lot of args and that reflects the reality that creating processes is finicky and there a lot of subtly different ways to do it. Getting an llm to understand your use case and create a subprocess call for you is much more realistic than imagining some future version of subprocess where the options are just magically gone and it knows what to do or we’ve standardized on only one way to do it and one thing that happens with the pipes and one thing for the return code and all the rest of it.
And then… that just kind of dropped out of the discussion. Throw things at the wall as fast as possible and see what stuck, deal with the consequences later. And to be fair, there were studies showing that choice of language didn’t actually make as big of difference as found in the emotions behind the debates. And then the web… committee designed over years and years, with the neve the ability to start over. And lots of money meant that we needed lots of manager roles too. And managers elevate their status by having more people. And more people means more opportunity for specializations. It all becomes an unabated positive feedback loop.
I love that it’s meant my salary has steadily climbed over the years, but I’ve actually secretly thought it would be nice if there was bit of a collapse in the field, just so we could get back to solid basics again. But… not if I have to take a big pay cut. :)
There are? For example, rails has had boilerplate generation commands for a couple of decades.
These days, people mostly use things like GHC.Generics (generic programming for stuff like serialization that typically ends up being free performance-wise), newtypes and DerivingVia, the powerful and very generalized type system, and so on.
If you've ever run into a problem and thought "this seems tedious and repetitive", the probability that you could straightforwardly fix that is probably higher in Haskell than in any other language except maybe a Lisp.
Haskell mostly solves boilerplate in a typed way and Lisp mostly solves it in an untyped way (I know, I know, roughly speaking).
To put it bluntly, there's an intellectual difficulty barrier associated with understanding problems well enough to systematize away boilerplate and use these languages effectively.
The difficulty gap between writing a ton of boilerplate in Java and completely eliminating that boilerplate in Haskell is roughly analogous to the difficulty gap between bolting on the wheels at a car factory and programming a robot to bolt on the wheels for you. (The GHC compiler devs might be the robot manufacturers in this analogy.) The latter is obviously harder, and despite the labor savings, sometimes the economics of hiring a guy to sit there bolting on wheels still works out.
Then I need to expend extra time following everything it did so I can "fix" the problem.
"Use these tools as a massive force multiplier of your own skills" is a great way to formulate it. If your own skills in the area are near-zero, multiplying them by a large factor may still yield a near-zero result. (And negative productivity.)
It seems to me that if you have been pattern matching the majority of your coding career, then you have a LLM agent pattern match on top of that, it results in a lot of headaches for people who haven't been doing that on a team.
I think LLM agents are supremely faster at pattern matching than humans, but are not as good at it in general.
just points to the fact that they've no idea what they're doing and would produce different, pointless code by hand, though much slower. this is the paradigm shift - you need a much bigger sieve to filter out the many more orders of magnitude of crap that inexperienced operators of LLMs create.
Also new languages - our team uses Ruby, and Ruby is easy to read, so I can skip learning the syntax and get the LLM to write the code. I have to make all the decisions, and guide it, but I don't need to learn Ruby to write acceptable-level code [0]. I get to be immediately productive in an unfamiliar environment, which is great.
[0] acceptable-level as defined by the rest of the team - they're checking my PRs.
> Also new languages - our team uses Ruby, and Ruby is easy to read, so I can skip learning the syntax and get the LLM to write the code.
If Ruby is "easy to read" and assuming you know a similar programming language (such as Perl or Python), how difficult is it to learn Ruby and be able to write the code yourself?
> ... but I don't need to learn Ruby to write acceptable-level code [0].
Since the team you work with uses Ruby, why do you not need to learn it?
> [0] acceptable-level as defined by the rest of the team - they're checking my PRs.
Ah. Now I get it.
Instead of learning the lingua franca and being able to verify your own work, "the rest of the team" has to make sure your PR's will not obviously fail.
Here's a thought - has it crossed your mind that team members needing to determine if your PR's are acceptable is "a bad thing", in that it may indicate a lack of trust of the changes you have been introducing?
Furthermore, does this situation qualify as "immediately productive" for the team or only yourself?
EDIT:
If you are not a software engineer by trade and instead a stakeholder wanting to formally specify desired system changes to the engineering team, an approach to consider is authoring RSpec[0] specs to define feature/integration specifications instead of PR's.
This would enable you to codify functional requirements such that their satisfaction is provable, assist the engineering team's understanding of what must be done in the context of existing behavior, identify conflicting system requirements (if any) before engineering effort is expended, provide a suite of functional regression tests, and serve as executable documentation for team members.
0 - https://rspec.info/features/6-1/rspec-rails/feature-specs/fe...
No, not at all.
What I was speaking about was if the person to whom I replied is not a s/w engineer, then perhaps a better contribution to their project would be to define requirements in the form of RSpec specifications (since Ruby is in use) and allow the engineering team to satisfy them as they determine appropriate.
I have seen product/project managers attempt to "contribute" to a development effort much like what was described. Usually there is a power dynamic such that engineers cannot overtly tell the manager(s), "you define the 'what' and we will define the 'how'." Instead, something like the PR flow described is grudgingly accepted and then worked around.
I've felt this learning just this week - it's taken me having to create a small project with 10 clear repetitions, messily made from AI input. But then the magic is making 'consolidation' tasks where you can just guide it into unifying markup, styles/JS, whatever you may have on your hands.
I think it was less obvious to me in my day job because in a startup with a lack of strong coding conventions, it's harder to apply these pattern-matching requests since there are fewer patterns. I can imagine in a strict, mature codebase this would be way more effective.
If you don't have the technical understanding of a language or framework, there is going to be a lot of ambiguity in your prompts.
This specificity gap leads the LLM to fill in those gaps for you, which may not be what you intended. And that's usually where bugs hide.
I think this is the flip side to being a "force multiplier"
I would hope Claude would be like “Hold on man…”
| I'll create a simple C++ class with a constructor that takes a tuple. Here's an example...
> nah C
| C doesn't have classes or constructors - those are C++ features. In C, you would use a struct and create an initialization function. Here's how you'd handle a tuple-like structure:
I've done _so_ many of these where I go "hmm, this might be useful", planned the project with gemini/chatgpt free versions to a markdown project file and then sic Claude on it while I catch up on my shows.
Within a few prompts I've got something workable and I can determine if it was a good idea or not.
Without an LLM I never would've even tried it, I have better and more urgent things to do than code a price-watcher for very niche Blu-ray seller =)
There is a good discussion/interview¹ between Alan Kay & Joe Armstrong about how most code is developed backwards b/c none of the code has a formal specification that can be "compiled" into different targets. If there was a specification other than the old driver code then the process of porting over the driver would be a matter of recompiling the specification for a new kernel target. In absence of such specification you have to substitute human expertise to make sure the invariants in the old code are maintained in the new one b/c the LLMs has no understanding of any of it other than pattern matching to other drivers w/ similar code.
¹https://www.youtube.com/watch?v=axBVG_VkrHI
1. The original hardware spec is usually proprietary, and
2. The spec is often what the hardware was supposed to do. But hardware prototype revisions are expensive. So at some point, the company accepts a bunch of hardware bugs, patches around them in software, ships the hardware, and reassigns the teams to a newer product. The hardware documentation won't always be updated.
This is obviously an awful process, but I've seen and heard of versions of it for over 20 years. The underlying factors driving this can be fixed, if you really want to, but it will make your product totally uncompetitive.
It wouldn't surprise me if the drive and the tapes are still somewhere in my parents' storage. Could be a fun weekend project to try it out, though I'm not sure I have any computer with a floppy interface anymore. And I don't think there's anything particularly interesting on those tapes either.
In any case, cool project! Kudos to the author!
What's wrong with the existing one?
> As a giant caveat, I should note that I have a small bit of prior experience working with kernel modules, and a good amount of experience with C in general
But yeah, the dream of new OSes is sweet...
We're talking about an order of magnitude quicker onboarding. This is absolutely massive.
One of the things that makes Claude my go-to option is its ability to start long-running processes, which it can read the output of to debug things.
There are a bunch of hacks you could have used here to skip the manual part, like piping dmesg to a local udp port and having Claude start a listener.
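A rough sketch of that dmesg hack (untested; the port number is arbitrary, and netcat listener flags differ between the BSD and traditional variants):

    # on the test machine: stream kernel messages to a local UDP port
    dmesg --follow | nc -u 127.0.0.1 9999 &

    # Claude starts the listener and reads the output to debug
    nc -u -l 9999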
Even something simple like getting it to run a dev server in React can have it opening multiple servers and getting confused. I've watched streams where the programmer is constantly telling it to use an already-running server.
Here the author has a passion/side project they have been on for a while. Upgrading the tooling is a great thing. The community may not support this since the niche is too narrow; the LLM comes in and helps with the upgrade. This is exactly what we want - software to be custom - for people to solve their unique edge cases.
Yes, the author is technical, but we are lowering the barrier and it will be lowered even more. Semi-technical people will be able to solve some simpler edge cases, and so on. More power to everyone.
A reminder, though: these LLM calls cost energy, and we need reliable power generation to iterate through this next tech cycle.
Hopefully all that useless crypto wasted clock cycle burn is going to LLM clock cycle burn :)
You would certainly need an expert to make sure your air traffic control software works correctly and isn't 'vibe coded' the next time you decide to travel abroad safely.
We don't need a new generation who can't read code and are heavily reliant on whatever a chat bot said because: "you're absolutely right!".
> Hopefully all that useless crypto wasted clock cycle burn is going to LLM clock cycle burn :)
Useful enough for Stripe to build their own blockchain, and even that (and the rest of them) is more energy efficient than a typical LLM cycle.
But the LLM grift (or even the AGI grift) will not only cost even more than crypto; the whole purpose of its 'usefulness' is the mass displacement of jobs, with no realistic economic alternative other than achieving >10% global unemployment by 2030.
That's a hundred times more disastrous than crypto.
Thinking about asking Claude to reimplement it from scratch in Rust…
[1] https://codeberg.org/superseriousbusiness/gotosocial/src/bra...
Do you disagree with some part of the statement regarding "AI" in their CoC? Do you think there's a fault in their logic, or do you yourself personally just not care about the ethics at play here?
I find it refreshing personally to see a project taking a clear stance. Kudos to them.
I also recently very much enjoyed reading the Dynamicland project's opinion on the subject[0], which I think makes quite a bit deeper an argument than the one above.
Ethics seems to be, unfortunately, quite low down on the list of considerations of many developers, if it factors in at all to their decisions.
[0] https://dynamicland.org/2024/FAQ/#What_is_Realtalks_relation...
These ethics are definitely derived from a profit motive, however petty it may be.
I think the training data is especially good, and ideally no logic needs to change.
Then the author states in the next section:
I don't know what "fellow engineers" the author is accustomed to collaborating with, junior or otherwise, but the attributes enumerated above are those of a sycophant and not any engineer I have worked with.

Finally, the author asserts:

This could also be described as "understanding the legacy solution and what needs to be done" when the expressed goal identified in the article title is:

Another key activity identified as a benefit to avoid in the above quote is:

I read "junior" as 'subordinate' and 'lacking in discernment'.

Sycophancy is a good description. I also like "bullshit" (as in 'for the purpose of convincing'). https://en.wikipedia.org/wiki/Bullshit#In_the_philosophy_of_...
The point being, there's nuance to "it felt like a collaboration with another developer (some caveats apply)". -- It's not a straightforward hype of "LLM is perfect for everything", nor is it so simple as "LLM has imperfections, it's not worth using".
> Another key activity identified as a benefit to avoid in the above quote is:
>
> ... required me to learn ...
It would be bad to avoid learning fundamentals, or things which will be useful later.
But, it's not bad to say "there are things I didn't need to know to solve a problem".
"...kernel development as it was done 25 years ago."
Not "...kernel development as it is done today".
That "25 years ago" is important and one might be interested in the latter but not in the former.
Learning what must be done to implement a device driver in order for it to operate properly is not "gatekeeping." It is a prerequisite.
> I love agents explaining me projects I don't know.
Awesome. This is one way to learn about implementations and I applaud you for benefiting from same.
> Recently I cloned sources of Firefox and asked qwen-code (tool not significant) about the AI features of Firefox and how it's implemented. Learning has become awesome.
Again, this is not the same as implementing an OS device driver. Even though one could justify saying Firefox is way more complicated than a Linux device driver (and I would agree), the fact is that a defective device driver can lock up the machine[0], corrupt internal data structures resulting in arbitrary data corruption, and/or cause damage to peripheral devices.
0 - https://en.wikipedia.org/wiki/Kernel_panic
The same approach can be used to modernise other legacy codebases.
I'm thinking of doing this with a 15 year old PHP repo, bringing it up to date with Modern PHP (which is actually good).
One note: I think the author could have modified the sudoers file to allow loading and unloading the module without a password prompt.
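Something like this sudoers.d drop-in, for example (a sketch; the "dev" user and the module paths are made up, and you'd want to scope it exactly this tightly):

    # /etc/sudoers.d/ftape-dev
    # allow passwordless load/unload of this one module only
    dev ALL=(root) NOPASSWD: /usr/sbin/insmod /home/dev/ftape/ftape.ko, /usr/sbin/rmmod ftape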
Another thought: IIRC, the Claude Code plugins for my IDE let you "authorize" actions and intervene manually without having to leave the tool.
My point is there were ways I think they could have avoided copy/paste.
That is a bit different than allowing unconfirmed loading of arbitrary kernel code without proper authentication.
Even a minor typo in kernel code can cause a panic; that’s not a reasonable level of power to hand directly to Claude Code unless you’re targeting a separate development system where you can afford repeated crashes.
To answer the second part (why AI won't be of much help): M3 and onwards use a massively different GPU architecture that needs to be worked out, again, from scratch. And all of that while a substantial number of subsystems on M1, M2, and their variants aren't supported at all, are only partially supported or need serious workarounds, or require massive code-quality work to get upstreamed into Linux.
And on top of that, a number of contributors burned out along the way, some from dealing with the ultra-neckbeard faction amongst Linux kernel developers, some from other mental health issues, and Alyssa departed for Intel recently.
That's even before taking on the brutal Linux kernel mailing lists for code review, explaining what that C code does when it could be riddled with bugs that Claude generated.
No thanks and no deal.
The last version of the driver that was included in the kernel, right up until it was removed, was version 3.04.
BUT, the author continued to develop the driver independently of kernel releases. In fact, the last known version of the driver was 4.04a, in 2000.
"My goal is to continue maintaining this driver for modern kernel versions, 25 years after the last official release." - https://github.com/dbrant/ftape
AI hype in a nutshell.
Literally all comments on this post are about AI hype, all of them.
Even without that other article, this really reads like the author tried it for menial tasks on a neat passion project, and reports his success on it. (I'm a kernel developer, so I can empathize.)
Other people commenting about AI hype on the post isn't an indication that the post itself was created to hype AI, or that the post itself is "bad".
I said nobody will use the driver. But I am terribly wrong because one person will?
Second, any post on hackernews is made to generate hype.
Yes? The person who needs it is using it. Other people who need it (anyone who wants to archive tapes of that kind) now can, too.
> Second, another post on hackernews about how AI helps you code is not AI hype?
Do you think it was written with the intent to specifically hype AI, rather than to report on a passion project?
If I post a recipe for baked shit and I get a reply "nobody will eat that shit", are they wrong?
Too much hope.
I suspect HN readers won't see enough value in your baked shit recipe for it to reach the front page - sorry. But bake away!
I doubt you didn't know that, so that leaves me with two options:
You comment in bad faith, or you are autistic and don't get hyperbole.
In either case your comments feel annoying.
Whatever you have sounds more like "blanket knee-jerk unfounded pessimism"