Re: My AI skeptic friends are all nuts
75 points by skarlso on 6/8/2025, 6:05:03 AM | 69 comments | skarlso.github.io ↗
If that is true, it is indeed a serious problem.
Imagine getting downgraded because the substrings "Honestly?" and "—" didn't constitute 3.2% of your submission.
I just read this piece the other day with countless accounts from teachers in the US:
https://www.404media.co/teachers-are-not-ok-ai-chatgpt/
One teacher, when I asked him what I can do about my failing grade, told me "you can go hang yourself". Why was I getting a failing grade in the first place? Well, it was normal that three quarters of the class would be failing his tests. That's just the kind of teacher he was.
The PE teacher thought that his role was to teach us discipline based on fear. Later I heard a fun story about him getting a new class and thinking that one of the girls was a student, when she was actually the mother of one of the students. She saw the students being yelled at for 45 minutes straight, got yelled at personally herself, and was called retarded. Of course nothing happened to him.
The literature teacher yelled at us so hard that we were literally afraid of talking to her. She hated us, and at some point made that openly clear by being mean on purpose. She never gave me more than "barely passing", even though I got a near-perfect score on the standardized test.
Once she gave a test, threw the papers away, and assigned us grades by how much she liked each student. I brought this story up at a reunion and was told "she actually prepared us for how we'd be treated in college and adult life".
And that was one of the best schools that always took the most talented students from the region. In this context, having two LLMs talk to each other really isn't a bad thing.
Used to be, it was considered critically important that students learn to write in cursive and to multiply 3 digit numbers in their head. I can't do either, and I suspect many folks these days can't either. The world has not ended. I also can't tie a square knot, lasso a steed, or mend a fence.
School assignments have always been a waste of time. Essay-writing is not a critical skill, and I'm not sure much is lost if LLMs do it for us.
Sure, multiplying 3-digit numbers is not really useful in everyday life, but the important part is not the knowledge itself, it's the capacity to think and solve problems.
Thing is, LLMs beat humans on the written content of such tests (if not the physical pencil part), and indeed on basically all standardised tests at every level.
If not that then what? Prompt engineering?
IMO it's best to focus on objective things that the systems can do today, with maybe a look forward so we can prepare for things they'll be able to do "tomorrow". The rest is too noisy for me. I'm OK with some skepticism, but not outright denial. You can't take an unbiased look at what these things can do today and say "well, yes, but can they do x y z?". That's literally moving the goalposts, and I find it extremely counterproductive.
In a way I see a parallel to the self-driving car discussions of 5 years ago. Lots of very smart people were focusing on silly things like "they have to solve the trolley problem in 0.0001 ms before we can allow them on our roads", instead of "let's see if they can drive from point A to point B first". We are now at the point where they can do that, somewhat reliably, with some degree of variability between solutions (Waymo, Tesla, Mercedes, etc.). All that talk 5 years ago was useless, IMO.
No, we really aren’t. Let me know when any of those systems can get me from Sioux Falls, South Dakota to Thunder Bay, Ontario without multiple disengagements and we can talk.
Based on what I’ve seen we’re still about 10 years away best case scenario and more likely 20+ assuming society doesn’t collapse first.
I think people in the Bay Area commenting on how well self driving works need to visit middle America and find out just how bad and dangerous it still is…
As with a lot of issues in today's world, each side is talking past the other. It can simultaneously be true that LLMs make writing code less enjoyable / gratifying, and that LLMs can speed up our work.
> I feel (hah!) that we've reached a point where instead of focusing on objective things that the LLM-based systems can do, we are wasting energy and "ink" on how they make us feel.
> best to focus on objective things that the systems can do today, with maybe a look forward so we can prepare for things they'll be able to do "tomorrow".
These two together... remind me of a sentiment that seems somewhat common here: 'I feel that AI is growing exponentially, therefore we should stop learning to code, because AI will start writing all code soon!'
I think this points to where a lot of skepticism comes from. From the skeptic's perspective: the AI does barely a fraction of what is claimed, and its growth is an even smaller fraction of what is claimed, yet these 'feelings' that AI will change everything are driving tons of false predictions. IMO, those feelings are driving VC money to damn MBAs who are plastering AI on everything because they are chasing money.
There is an irony here too: skepticism is simply withholding belief in the absence of evidence. Belief without evidence is irrational. The skeptics are the ones simply asking for evidence, not feelings.
Instead, your average Joe will be writing some Java code with Spring Boot (or .NET with ASP.NET, or Ruby with Ruby on Rails, or PHP with Laravel, or Python with Django) where how to do things is already largely established and they only need to answer the "what?" question - to codify a bunch of boring business rules into a technical solution that runs.
That work isn't entirely different from tasks where codegen shines - like writing DB migrations from entities, or entity mappings from some existing DB schema, or writing client code based on an OpenAPI spec, or writing an OpenAPI spec based on your endpoint definitions. The less people have to do that manually, the better the results seem to be (e.g. a bunch of deterministic code that will do everything correctly based on the inputs, given how well known the problem space is).
You could also say the same for having something like autocomplete, or automated code suggestions, or even all of those higher abstraction level frameworks there in the first place - where you don't start solving a problem with "First, I'll write something that binds to a port and takes incoming HTTP requests..." but rather "I'll look up how you add a required constraint for a business object that's mapped to a DB table so validations fail before I even try to save something bad. If I'm really lucky, I'll be able to return a user friendly error message too!"
There, it doesn't really matter whether the official docs will tell you that, a colleague, StackOverflow, ChatGPT, or anything else... you're not doing that much creative work, but something a bit more blue collar. The end result of all this is programming becoming something closer to data entry with ample amounts of code to just glue other solutions together. There, the risks posed by LLMs are far lower and the utility far greater - because a lot of the code is similarly structured and quite boring.
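To make that "blue collar", glue-code flavour concrete, here is a minimal sketch in Go of the kind of boring business-rule codification being described; the Order type, its fields, and the specific rules are made up purely for illustration:

    package main

    import (
        "errors"
        "fmt"
    )

    // Order is a hypothetical business object that a framework or ORM
    // would normally map to a DB table.
    type Order struct {
        CustomerID string
        Quantity   int
    }

    // Validate codifies the boring business rules: reject obviously bad
    // data before we ever try to save it, with user-friendly messages.
    func (o Order) Validate() error {
        if o.CustomerID == "" {
            return errors.New("customer ID is required")
        }
        if o.Quantity <= 0 {
            return errors.New("quantity must be at least 1")
        }
        return nil
    }

    func main() {
        bad := Order{CustomerID: "", Quantity: 0}
        if err := bad.Validate(); err != nil {
            // In a real service this would become a 4xx response instead of a save.
            fmt.Println("validation failed:", err)
        }
    }

Nothing here is creative; it's the "what?" question answered in code, which is exactly the sort of thing codegen and LLMs tend to handle well.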
That is the aspiration of AI software tools, too. They are not coming to make us 30% more efficient, they are coming to completely change how the software engineering production line operates. If they are successful, we will write fewer tests, we will understand less about our stack, and we will develop tools and workflows to manage that complexity and risk.
Maybe AI succeeds at this stated objective, and maybe it does not. But let's at least not kid ourselves: this is how it has always been. We are in the business of abstracting things away so that we have to understand less to get things done. We grumbled when we switched from assembly to high-level languages. We grumbled when we switched from high-level languages to managed languages. We grumbled when we started programming enormous piles of JavaScript and traveled farther from the OS and the hardware. Now we're grumbling about AI, and you can be sure that we're going to grumble about whatever is next, too.
I understand this is going to ruffle a lot of feathers, but I don't think Thomas and the Fly team actually have missed any of the points discussed in this article. I think they fully understand that software production is going to change and expect that we will build systems to cope with abstracting more, and understanding less. And, honestly, I think they are probably right.
Moreover, if it turns out to be as bad as the author thinks, we can just spend something like 15 minutes per day training the abilities that go unused because of LLMs. And LLMs could help us do that, so it would be more like 15 min/day, not 2 hours/day.
> My son is in school. And the teacher literally said they need to use an LLM and the teacher is using an LLM to grade/check it. This is ridiculous.
I don't know if it is ridiculous. I like LLMs; they really help with learning. I mean, yes, LLMs can be detrimental to learning if, instead of doing the task at hand, the student asks the LLM to do it. But they can be a real help, because with an LLM you can do tasks you couldn't do without it, do them faster, and do more of them. An LLM can be bad for learning or good for it; it depends on how it is used.
The teacher using an LLM is very interesting. When I was at school, I thought I was smarter than my teachers, and I didn't like them; some of them I just hated. If they had been using LLMs, I'm sure I would have figured out ways to do tasks so that their LLMs would hallucinate and grade my work wrongly. You can always complain and make the teacher check it without the LLM. Let them spend an extra 10 minutes grading, and do it in front of the other pupils, so they would have to admit their mistakes publicly. I would definitely have done it. I did something like that with math: I would go to any length to find some non-standard solution to a problem, one that couldn't be checked by looking at numbers in specific places of the solution. It made the teacher make mistakes while grading, and I just loved it. But if I had had an LLM... I would definitely have done the same with literature and history. My math teacher was not so bad after all, but I hated the literature and history ones.
Yeah. And that's OK. Because nobody will need to shoe horses any more!
If I forget how to write tests, what's the problem? It means that I never need to write tests any more. If I do need to write tests, for some reason (maybe because the LLM is bad at it) then I won't forget how to!
That's how atrophy works: the skills that atrophy are, by definition, the ones you no longer need at all, and this argument does this sleight of hand where it goes "don't let the skills you don't need atrophy, because you need them!".
Well, do I need them, or not?
I sympathize with this viewpoint, but I do think it’s important to recognize the differences here. One thing I’ve noticed from the vibe-code coalition is a push towards offloading _cognition_. I think this is a novel distinction from industrial innovations that more or less optimize manual labor.
You could argue that moving from assembly to python is a form of cognition offloading, but it’s not quite the same in my eyes. You are still actively engaged in thinking for extended periods of time.
With agentic code bots, active thinking just isn’t the vibe. I’m not so sure that style of half-engaged work has positive outcomes for mental health and personal development (if that’s one of your goals).
We basically are training prompt engineers without post validation now.
No, this would be more akin to saying, "if we don't know how to change our car's oil anymore because we have a robot that does it, soon nobody will know, while still being reliant on our cars."
For your analogy to work, we would have to be moving away from code entirely, as we moved away from horses.
> It means that I never need to write tests any more. If I do need to write tests, for some reason (maybe because the LLM is bad at it) then I won't forget how to!
Except that once you forget, you now would have to re-learn it, and that includes potentially re-learning all the pitfalls and edge cases that aren't part of standard training manuals. And you won't be able to ask someone else, because now they all don't know either.
tl;dr coding is a key job function of software developers. Not knowing how to do any key part of your job without relying on an intermediary tool is a very bad thing. This already happens too much, and AI is just firing the trend into the stratosphere.
OK, are we worried that all the robots will somehow disappear? Why would I have to change my own oil, ever, if the robot did it as well as I did? If it doesn't do it as well as I did, I'm still doing it myself.
No, you should be worried that you (or devs who come later, who never had to 'change the oil'/ write tests themselves) won't know if the robot has done it right or not, because if you can't do the work, you can't validate the work.
And the robot isn't an employee, it's just a tool you the employee use, so when you are asked whether the test was coded correctly, all you'd be able to say with your 'atrophied' test-writing skills is, "I think so, the AI did it". See the issue now?
> If it doesn't do it as well as I did, I'm still doing it myself.
I thought your unnecessary skill atrophied and was forgotten? How are you going to do it yourself? How do you know you're still as good at it as you once were?
So some people do develop that deeper understanding, when it's helpful, and they go on to build great things. I don't see why this will be different with AI. Some will rely too much on AI and possibly create slop, and others will learn more deeply and get better results. This is not a new phenomenon.
Critical thinking skills are incredibly useful, and if they're never taught, or are allowed to atrophy, you get, well, this... <gesturing dejectedly around with my cane>
People not understanding basic science, arithmetic, and reading comprehension at what was once an 8th grade level, can be easily tricked in everyday situations on matters that harm themselves, their family, and their community. If we dig deeply enough, I bet we could find some examples on this theme in modern times.
Maybe we get UBI. But if Jeff Bezos and some friends own all of the production, what would they do with your UBI dollars? Where could they spend them? They can make their own yachts and soldiers.
Who is this person who's paying for useless skills, and doesn't that go against the definition of "need"?
If that’s your thing that’s perfectly fine and I can understand how LLMs take away this fun challenge from you.
But for many - myself included - it’s not necessarily the process itself that’s rewarding - it’s the outcome. I love creating things that work and are useful. The particular process that was used to create the thing is secondary to me, and if I discover a tool that will enable me to skip some steps to achieve the outcome I will pick it any time of the day over doing things manually.
Universities as we know them are obsolete.
Glad to see a lot of my own criticisms of that article echoed by this author. AI is a tool, not an employee.
A bit tangential, but I've recently been baffled as to why this belief is held by so many:
"since LLMs are trained on online data, they are re-trained with their own AI slop, further damaging them. Soon bad code will get even worse, basically."
together with:
"I’m sure it will get better with time, but it’s not there yet."
What improvements do people think will overcome the poisoning of the well? I'm not an expert at all, but to me it feels like we'd need a new breakthrough to get good output from garbage input.
As a counterpoint: isn't the popularity of a library more a metric of API convenience than actual code quality?
And isn't popularity of an essay more about how it conforms to existing beliefs than the quality of the thinking?
https://news.ycombinator.com/item?id=44163063
Similarly, the state needs people to do the work of state power. People to investigate the citizenry, people to judge and prosecute and process appeals. AI takes away the human from that, allowing massive injustices to be rubber-stamped. A small example: RFK Jr. couldn't get a human to write his garbage MAHA report, so he used AI (https://www.msn.com/en-us/politics/government/rfk-jr-may-hav...). Small examples become bigger examples.
AI is accelerating us towards a techno-fascist oligarchy. That’s why i am skeptical of it. Not because it’s mid at writing unit tests or whatever.
Which ARE being attacked right now
I'm an ML[0] researcher. Of course I love these machines! And you know what? I really fucking want to build some god damn AGI.
Maybe all you hype people are all right and LLMs will get us there. If you're right, you got all the money you could ask for and then several times more. You got some of the biggest companies in the world dumping billions in, competing, with huge infrastructures, and dumping money into a few dozen startups who are trying to do the same. If scale is all you need, then you're going to get there even without another 7 trillion dollars. I'm not saying "Don't fund LLM research" nor am I saying "Don't fund scaling research". Please do fund those things.
But be honest, money isn't your bottleneck.
Hell, what if you're right but other ways are faster?! We miss those opportunities and lose the race? For what?! If more efficient approaches are out there (and I really do believe there are), who do you think is going to find them? It's literally betting on ITER[1] and giving essentially nothing to other fusion attempts. Except ITER is an international effort and we're a hell of a lot more confident it'll work.
But what if you're wrong? Why stop people like me from looking at other avenues? Why put all your eggs in one basket? Why dismiss us when we point out issues? We're engineers here, we fucking love to point out issues! IT IS OUR JOB! Because you must find issues if you're going to fix them. We could have hedged our bets a little and pivoted fast, hopefully seamlessly enough that it doesn't even look like a hiccup. Nobody on this side of this is asking for half as much as the next hip new LLM startup. I don't know if it is even a penny on the dollar!
Are you really gonna bet that the golden goose is always going to be laying golden eggs? Are you really going to make a situation where if it stops you just go broke?
IDK, it just all seems needlessly risky to me. Maybe I just don't get it because I'm not a high stakes gambler.
[0] Funny... only a few years ago the litmus test was "If someone says 'AI' they're selling you something. If they say 'ML' they're probably not"
[1] https://en.wikipedia.org/wiki/ITER
What happens to the majority of the planet's biomass when we get there? If it's a new paradigm that does the trick, it might sneak up on us like the GPT-2 capabilities did.
Back then we didn't have this particular modern meme of "extremely dangerous" for something like a syntactically inaccurate file so I suppose no one said it was "extremely dangerous" for people to copy-paste code from searches. SQL injection was fairly novel at the time but as that came up people would mock copy-paste scripters rather than act as if man's extinction was at hand from a vibe-coded single-page app that tells you which Pokemon you are.
Yes, yes, it's different now. But it was different then too. Search engines changed the stuff you could find from carefully curated Usenet channels to any rando with a blog. I suppose if we had modern sensibilities we'd say that was "extremely dangerous" too.
I've been writing code for decades, and it's true that I've seen nothing like this technology. It's true that it's a game changer and that, with slightly different rates of change, it could have already led to human extinction. But that doesn't mean that I have to lend any more credence to the guys who have, since time immemorial, said "this new thing makes something easier; it will make us worse because we do not slog as much".
I don't use LLMs myself, but the anecdotes about skill atrophy seem credible. From the OP: "Even for me it shows. I tried to write some test code recently and I absolutely forgot how to write table tests because I generate all that. And it’s frightening."
An article about the topic that tries to stay positive: https://addyo.substack.com/p/avoiding-skill-atrophy-in-the-a...
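For anyone who, like the commenter quoted above, hasn't written one by hand in a while: a table test is just a slice of named cases iterated in a loop. A minimal Go sketch, in a _test.go file, with a made-up add function and cases purely for illustration:

    package main

    import "testing"

    // add is a trivial stand-in so the table test has something to exercise.
    func add(a, b int) int { return a + b }

    // TestAdd uses the classic table-driven layout: a slice of named cases,
    // looped over with t.Run so each case reports as its own subtest.
    func TestAdd(t *testing.T) {
        cases := []struct {
            name string
            a, b int
            want int
        }{
            {"both positive", 1, 2, 3},
            {"with zero", 0, 5, 5},
            {"negatives cancel", -2, 2, 0},
        }

        for _, tc := range cases {
            t.Run(tc.name, func(t *testing.T) {
                if got := add(tc.a, tc.b); got != tc.want {
                    t.Errorf("add(%d, %d) = %d, want %d", tc.a, tc.b, got, tc.want)
                }
            })
        }
    }

Running "go test -v" reports each case as its own subtest, which is most of what there is to remember about the pattern.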
Though, it does seem ironic. A skeptic is someone who demands convincing evidence before believing an assertion. "AGI is around the corner", "self driving will replace human drivers", "don't learn how to code", "AI will replace developers", "AI cars will eliminate car accidents." The skeptics are the ones simply asking for evidence that any of that is true. A system that can barely count the 'R's in strawberry is supposed to be AGI 3 years from now?? Let alone deliver on any of those other promises? Incredulity aside, let us see the evidence for any of it.
Though, the AI nay-sayers, those who say AI will be a bad thing, are not skeptics. They are making their own assertion: namely, that AI will be used as a crutch, that the lack of a slog will be a detriment. That is a claim, not skepticism. So it's just ironic to be skeptical of the 'skeptics', and to use the lack of evidence from the 'skeptics' to dismiss the 'skeptics'.
My wife is pregnant and the education stuff terrifies me. I may have to make huge sacrifices to avoid schools that use LLMs. Hopefully by the time he’s school age people will have realised that relying entirely on LLMs will degrade your thinking, reading, and writing, but I’m not confident
There are some real technical debates to be had about the capabilities of LLMs and agents but that's not what's driving this "AI skepticism" at all. The "AI skeptics" decided AI is bad in advance and then came up with reasons, the weakest of which is the claim that it's all or mostly hype.
I really don’t think there is this “elephant” at all. If you reduce critique, skepticism and fear people might have to political slogans of mainly fringe online activists, I wonder how objective your assessment of those attitudes can actually be.
I recommend you take a look at this research published by Pew. You can see that concern about AI is much higher than 5% and is shared across the political spectrum: https://www.pewresearch.org/internet/2025/04/03/how-the-us-p...
The original article didn't talk about any of these things....
> My son is in school. And the teacher literally said they need to use an LLM otherwise they won’t score high enough because the other pupils are using LLMs and the teacher is using an LLM to grade/check it. This is ridiculous.
Ah. The article's author heard a bad take on LLMs and accidentally attributed it to Ptacek's article.