> And as a last point, I fear very much what’s going on in schools. My son is in school. And the teacher literally said they need to use an LLM otherwise they won’t score high enough because the other pupils are using LLMs and the teacher is using an LLM to grade/check it. This is ridiculous. And if you think that’s okay or you don’t see a serious problem with this, then that’s an even greater problem.
If that is true, it is indeed a serious problem.
at_compile_time · 6h ago
Any examination that isn't done in person on general course material (nothing an LLM could prepare for you) is just stubborn refusal to protect students from themselves in an age where anything else can be faked. Graded homework and take-home assignments are dead as useful pedagogical tools.
xnx · 3h ago
The death of homework could be a great thing. By teaching that schoolwork must be done at home, the education system is conditioning future workers to accept working late and taking their work home.
energy123 · 7h ago
Students will need to use the same LLM as their teacher, since LLMs are biased to grade their own outputs higher.
Imagine getting downgraded because the substrings "Honestly?" and "—" didn't constitute 3.2% of your submission.
zveyaeyv3sfye · 7h ago
The school situation is indeed a serious problem.
I just read this piece the other day with countless accounts from teachers in the US:
https://www.404media.co/teachers-are-not-ok-ai-chatgpt/
This sounds like an assignment to learn to use LLMs, which as an isolated assignment sounds reasonable. Students should learn how to use tools of all kinds to maximize their effectiveness. It might be a bigger problem if all assignments are done like this but I doubt that's the case.
anal_reactor · 6h ago
I attended one of the best high schools in the area.
One teacher, when I asked him what I can do about my failing grade, told me "you can go hang yourself". Why was I getting a failing grade in the first place? Well, it was normal that three quarters of the class would be failing his tests. That's just the kind of teacher he was.
The PE teacher thought that his role was to teach us discipline based on fear. Later I heard a fun story about him getting a new class and thinking that one of the girls was a student, while she was actually the mother of one of the students. She saw the students being yelled at for 45 minutes straight, got personally yelled at herself, and was called retarded. Of course nothing happened to him.
The literature teacher yelled at us so hard that we were literally afraid of talking to her. She hated us, and at some point made that openly clear by being mean on purpose. She never gave me more than "barely passing", even though on the standardized test I got a near-perfect score.
Once she did a test, threw the paper away, and assigned us grades by how much she liked each student. I brought up this story during reunion, and was told "she actually prepared us for how we'd be treated in college and adult life".
And that was one of the best schools that always took the most talented students from the region. In this context, having two LLMs talk to each other really isn't a bad thing.
yubblegum · 2h ago
This is so over the top that you might as well name and shame here to lend credibility to your story. What school in what "area" are you talking about?
raverbashing · 6h ago
Hence why my sympathy to teachers is limited. Not zero, for sure. But there's a limit
hackyhacky · 7h ago
I hear you, but we also need to ask: why is it a problem?
Used to be, it was considered critically important that students learn to write in cursive and to multiply 3 digit numbers in their head. I can't do either, and I suspect many folks these days can't either. The world has not ended. I also can't tie a square knot, lasso a steed, or mend a fence.
School assignments have always been a waste of time. Essay-writing is not a critical skill, and I'm not sure much is lost if LLMs do it for us.
SomeoneOnTheWeb · 7h ago
Actually, the world is more or less ending because of that. People lack more and more critical thinking skills because they aren't taught them in school.
Sure, multiplying 3-digit numbers is not really useful in everyday life, but the important part is not the knowledge itself, it's the capacity to think and solve problems.
aianus · 7h ago
You can easily measure the capacity to think and solve problems with paper-and-pencil exams every week or two, not with hours of daily busywork.
ben_w · 6h ago
If such tests actually measured the capacity to think and solve problems.
Thing is, LLMs beat humans on the written part of such tests (if not the pencil part), and indeed on basically all standardised tests at every level.
daledavies · 7h ago
This is the digital divide, where students from more wealthy backgrounds can afford access to better LLM subscriptions, and are able to achieve more "academically".
thatcat · 7h ago
Pretty sure forming a logical argument is a critical skill, just add it to the list tho.
xnx · 3h ago
> Essay-writing is not a critical skill,
Essay writing is mainly about organizing thoughts logically. That's pretty important.
drewcoo · 4h ago
> Essay-writing is not a critical skill
If not that then what? Prompt engineering?
skarlso · 1h ago
WOW, this blew up. I didn't expect it at all... Thank you so much to everyone engaging with it. I know it's a difficult topic. And I might have used the term "skeptics" incorrectly? :)
The school LLM thing is absolutely real. And it's not even a cheap school; it's a school in Denmark. I was very disappointed to learn this and was not expecting it at all. And sadly, it's not an assignment to learn LLMs either. I wish it were something like that.
To the ones saying we've seen this before with Google search, IDEs, etc.: not on this scale. And even then, you didn't _completely_ outsource your ability to think. I _think_ that is super dangerous, especially for young people, who get into the habit a lot faster. And suddenly we have people unable to think without asking an LLM for advice. Other disruptive technologies didn't affect thinking on this massive scale.
I'm not saying "stop AI, booo"; that's obviously not going to happen. And I don't want to; maybe the singularity is just an AI away. Who knows? However, I'm asking that we absolutely put this thing behind some kind of oversight, especially in schools, for young people who haven't yet developed critical thinking skills. After all, you learned to count on paper before you started using a calculator for a reason.
Again, thank you for this discussion. I'm really grateful for the engagement.
NitpickLawyer · 7h ago
I feel (hah!) that we've reached a point where instead of focusing on objective things that the LLM-based systems can do, we are wasting energy and "ink" on how they make us feel. And how others using them make us feel. And how Hollywood-style stories about "AI" make us feel. And how people commenting on these things make us feel. And so on.
IMO it's best to focus on objective things that the systems can do today, with maybe a look forward so we can prepare for things they'll be able to do "tomorrow". The rest is too noisy for me. I'm OK with some skepticism, but not outright denial. You can't take an unbiased look at what these things can do today and say "well, yes, but can they do x y z"? That's literally moving the goalposts, and I find it extremely counter productive.
In a way I see a parallel to the self driving cars discussions of 5 years ago. Lots of very smart people were focusing on silly things like "they have to solve the trolley problem in 0.0001 ms before we can allow them on our roads", instead of "let's see if they can drive from point A to point B first". We are now at the point where they can do that, somewhat reliably, with some degree of variability between solutions (waymos, teslas, mercedes, etc). All that talk 5 years ago was useless, IMO.
tw04 · 7h ago
> instead of "let's see if they can drive from point A to point B first". We are now at the point where they can do that, somewhat reliably,
No, we really aren’t. Let me know when any of those systems can get me from Sioux Falls, South Dakota, to Thunder Bay, Ontario, without multiple disengagements and we can talk.
Based on what I’ve seen we’re still about 10 years away best case scenario and more likely 20+ assuming society doesn’t collapse first.
I think people in the Bay Area commenting on how well self driving works need to visit middle America and find out just how bad and dangerous it still is…
JustinCS · 6h ago
When you put it like that, it makes me wonder if we can just stick to using the self-driving cars in the Bay Area and not go to these bad and dangerous places.
krysp · 5h ago
I agree that a lot of the noise at the moment is an emotional reaction to LLMs, rather than a dispassionate assessment of how useful they are. It's understandable - they are changing the way we work, and for lots of us (software developers), the reason we chose this career was because we _enjoy_ writing code and solving problems.
As with a lot of issues in today's world, each side is talking past the other. It can simultaneously be true that LLMs make writing code less enjoyable / gratifying, and that LLMs can speed up our work.
seadan83 · 6h ago
IDK, my impression of the self-driving car discussion 5 years ago was more akin to: "let us start designing AI-only roads, get ready for no human drivers - they won't exist in 5 years! AI-only cars will be so great, they will solve traffic congestion, pollution, noise, traffic deaths, and think of all the free time while you are lounging around on your commute!" It seemed like a conversation dominated by people gearing up for that VC money. Meanwhile, actual solutions for any of those problems seem to be languishing. My perspective: it was a lot of distraction away from real solutions, led by a tech-maximalist group that had a LOT to gain from hype.
> I feel (hah!) that we've reached a point where instead of focusing on objective things that the LLM-based systems can do, we are wasting energy and "ink" on how they make us feel.
> best to focus on objective things that the systems can do today, with maybe a look forward so we can prepare for things they'll be able to do "tomorrow".
These two together remind me of a type of sentiment that seems somewhat common here: 'I feel that AI is growing exponentially, therefore we should stop learning to code, because AI will start writing all code soon!'
I think this points to where a lot of skepticism comes from. From this perspective: the AI does barely a fraction of what is claimed, and has improved by even less than claimed, yet these 'feelings' that AI will change everything are driving tons of false predictions. IMO, those feelings are driving VC money to damn MBAs who are plastering AI on everything because they are chasing money.
There is an irony here too: skepticism is simply withholding belief in the absence of evidence. Belief without evidence is irrational. The skeptics are the ones simply asking for evidence, not feelings.
thefz · 3h ago
I can't shake the belief that the more one finds LLMs useful, the less valuable their work already is.
Something that offers no guarantee, not even a reassurance, of being correct should not be trusted with any meaningful work.
mexicocitinluez · 3h ago
I need you to understand that software development with LLMs isn't about writing a prompt to spit out your entire app.
TOMDM · 7h ago
I don't think the original post took issue with what people enjoyed doing, I think it took issue with people's understanding of what is even possible with the current tech.
laserbeam · 4h ago
Agreed. The previous article made me think “this tool is probably more potent than I thought and I should give it a try”. It did not make me drop my concerns about AI in general.
NitpickLawyer · 7h ago
I agree! I see a lot of people who "have tried them and they suck". But when you dig deep, they barely tried a web interface for programming, or the OG chatgpt for writing, and that's basically it. They aren't willing to try again, and keep up to date. Things are moving incredibly fast, and the skeptics are adding noise without even being informed about current SotA capabilities.
stavros · 7h ago
I feel like this article makes the same tired point I see every time a new technology comes along: "but if we don't know how to shoe our own horses any more because we got cars, soon nobody will know how to shoe horses!"
Yeah. And that's OK. Because nobody will need to shoe horses any more!
If I forget how to write tests, what's the problem? It means that I never need to write tests any more. If I do need to write tests, for some reason (maybe because the LLM is bad at it) then I won't forget how to!
That's how atrophy works: the skills that atrophy are, by definition, the ones you no longer need at all, and this argument does this sleight of hand where it goes "don't let the skills you don't need atrophy, because you need them!".
Well, do I need them, or not?
ianks · 6h ago
> I feel like this article makes the same tired point I see every time a new technology comes
I sympathize with this viewpoint, but I do think it’s important to recognize the differences here. One thing I’ve noticed from the vibe-code coalition is a push towards offloading _cognition_. I think this is a novel distinction from industrial innovations that more or less optimize manual labor.
You could argue that moving from assembly to python is a form of cognition offloading, but it’s not quite the same in my eyes. You are still actively engaged in thinking for extended periods of time.
With agentic code bots, active thinking just isn’t the vibe. I’m not so sure that style of half-engaged work has positive outcomes for mental health and personal development (if that’s one of your goals).
mixermachine · 6h ago
The big problem is that LLMs don't just replace "shoeing your horse" or some other singular task.
If you let them they can replace every critical thought or every mental effort you throw at them.
Often in a "good enough" (or convincing enough) way
Especially for learners this is very bad, because they will never learn to develop a proper thought process on their own.
How should they be able to check the output?
We basically are training prompt engineers without post validation now.
ang_cire · 6h ago
> "but if we don't know how to shoe our own horses any more because we got cars, soon nobody will know how to shoe horses!"
No, this would be more akin to saying, "if we don't know how to change our car's oil anymore because we have a robot that does it, soon nobody will know, while still being reliant on our cars."
For your analogy to work, we would have to be moving away from code entirely, as we moved away from horses.
> It means that I never need to write tests any more. If I do need to write tests, for some reason (maybe because the LLM is bad at it) then I won't forget how to!
Except that once you forget, you now would have to re-learn it, and that includes potentially re-learning all the pitfalls and edge cases that aren't part of standard training manuals. And you won't be able to ask someone else, because now they all don't know either.
tl;dr coding is a key job function of software developers. Not knowing how to do any key part of your job without relying on an intermediary tool, is a very bad thing. This already happens too much, and AI is just firing the trend into the stratosphere.
stavros · 6h ago
> we don't know how to change our car's oil anymore because we have a robot that does it
OK, are we worried that all the robots will somehow disappear? Why would I have to change my own oil, ever, if the robot did it as well as I did? If it doesn't do it as well as I did, I'm still doing it myself.
ang_cire · 6h ago
> OK, are we worried that all the robots will somehow disappear?
No, you should be worried that you (or devs who come later, who never had to 'change the oil'/ write tests themselves) won't know if the robot has done it right or not, because if you can't do the work, you can't validate the work.
And the robot isn't an employee, it's just a tool you the employee use, so when you are asked whether the test was coded correctly, all you'd be able to say with your 'atrophied' test-writing skills is, "I think so, the AI did it". See the issue now?
> If it doesn't do it as well as I did, I'm still doing it myself.
I thought your unnecessary skill atrophied and was forgotten ? How are you going to do it yourself? How do you know you're still as good at it as you once were?
stavros · 4h ago
> See the issue now?
No. I use a calculator, and I don't have to second-guess it. It just works. If it didn't work reliably, I wouldn't use it.
> I thought your unnecessary skill atrophied and was forgotten ?
Again, either I didn't need to do it because the AI did it well, and I forgot the skill, or it never did it well, I always did it myself, and I never forgot it. I don't understand why you're assuming that the AI will do it well enough at first that I'll forget how it's done, and that it will then somehow get bad at it so I'll have to start doing it myself again.
thefz · 3h ago
> I forget how to write tests, what's the problem?
The day you need to troubleshoot one, or simply understand it, is the problem.
yubblegum · 2h ago
Critical thinking ability is not remotely akin to a mode of transport from a menu of alternatives.
> do I need them, or not?
Based on this comment of yours, the answer is a resounding yes, you do.
stavros · 2h ago
Then they will never atrophy and we have no problem.
yubblegum · 1h ago
Critical thinking is something that you can develop. Don't give up so easily.
stavros · 1h ago
You can either be snarky or wrong, don't be both.
JustinCS · 6h ago
I agree with this, it reminds me of how most people don't need to write assembly anymore, but it still helps with certain projects to have that understanding of what's going on.
So some people do develop that deeper understanding, when it's helpful, and they go on to build great things. I don't see why this will be different with AI. Some will rely too much on AI and possibly create slop, and others will learn more deeply and get better results. This is not a new phenomenon.
stavros · 6h ago
Indeed it's not a new phenomenon, so why are we fretting about it? The people who were going to understand (assembly|any code) will understand it, and go on to build great things, and everyone else will do what we've always done.
ImHereToVote · 6h ago
This stops making sense as soon as the prompter can be automated. Who is going to pay for your artisanal software? Who will be able to afford it?
sokoloff · 6h ago
How many times per year do you hear people saying something so obviously flawed that you first wonder how they feed themselves, only to realize with horror that no one else seems to notice, and instead everyone is transmitting and discussing the idea as if it were some grand insight they'd just never considered?
Critical thinking skills are incredibly useful and, if never taught or allowed to atrophy, you get, well, this... <gestures dejectedly around with my cane>
People who don't understand basic science, arithmetic, and reading comprehension at what was once an 8th-grade level can be easily tricked in everyday situations, on matters that harm themselves, their families, and their communities. If we dig deeply enough, I bet we could find some examples on this theme in modern times.
stavros · 6h ago
If they're useful, they won't atrophy. It's as simple as that.
ImHereToVote · 6h ago
You are both right. You won't need the skills. The problem is that you will want to eat and have housing.
Maybe we get UBI. But if Jeff Bezos and some friends own all of the production, what would they do with your UBI dollars? Where can he spend them? He can make his own yachts and soldiers.
stavros · 6h ago
You're making a logical leap in "you don't need the skills, but there's someone willing to pay you for them".
Who is this person who's paying for useless skills, and doesn't that go against the definition of "need"?
ImHereToVote · 6h ago
Oh, I agree with you. Perhaps I phrased it poorly.
stavros · 6h ago
Oh, if you mean that an entire job sector will get automated away, and shareholders will keep yet more value for themselves, I definitely agree there.
antics · 6h ago
The legacy of the electric motor is not textile factories that are 30% more efficient because we point-replaced steam engines. It's the assembly line. The workforce "lost" the skills to operate the textile factories, but in turn, the assembly line made the workflow of goods production vastly more efficient. Industrial abstraction has been so successful that today, a small number of factories (e.g., TSMC) have become nearly-existential bottlenecks.
That is the aspiration of AI software tools, too. They are not coming to make us 30% more efficient, they are coming to completely change how the software engineering production line operates. If they are successful, we will write fewer tests, we will understand less about our stack, and we will develop tools and workflows to manage that complexity and risk.
Maybe AI succeeds at this stated objective, and maybe it does not. But let's at least not kid ourselves: this is how it has always been. We are in the business of abstracting things away so that we have to understand less to get things done. We grumbled when we switched from assembly to high-level languages. We grumbled when we switched from high-level languages to managed languages. We grumbled when we started programming enormous piles of JavaScript and traveled farther from the OS and the hardware. Now we're grumbling about AI, and you can be sure that we're going to grumble about whatever is next, too.
I understand this is going to ruffle a lot of feathers, but I don't think Thomas and the Fly team actually have missed any of the points discussed in this article. I think they fully understand that software production is going to change and expect that we will build systems to cope with abstracting more, and understanding less. And, honestly, I think they are probably right.
KronisLV · 5h ago
The cynic in me wants to point out that many (most?) of the programmers out there aren't working on novel problems that need a lot of involved thinking, or using that gray matter to come up with something innovative.
Instead, your average Joe will be writing some Java code with Spring Boot (or .NET with ASP.NET, or Ruby with Ruby on Rails, or PHP with Laravel, or Python with Django) where how to do things is already largely established and they only need to answer the "what?" question - to codify a bunch of boring business rules into a technical solution that runs.
That work isn't entirely different from tasks where codegen shines - like writing DB migrations from entities, or entity mappings from some existing DB schema, or writing client code based on an OpenAPI spec, or writing an OpenAPI spec based on your endpoint definitions. The less people have to do that manually, the better the results seem to be (e.g. a bunch of deterministic code that will do everything correctly based on the inputs, given how well known the problem space is).
You could also say the same for having something like autocomplete, or automated code suggestions, or even all of those higher abstraction level frameworks there in the first place - where you don't start solving a problem with "First, I'll write something that binds to a port and takes incoming HTTP requests..." but rather "I'll look up how you add a required constraint for a business object that's mapped to a DB table so validations fail before I even try to save something bad. If I'm really lucky, I'll be able to return a user friendly error message too!"
There, it doesn't really matter whether the official docs will tell you that, a colleague, StackOverflow, ChatGPT, or anything else... you're not doing that much creative work, but something a bit more blue collar. The end result of all this is programming becoming something closer to data entry with ample amounts of code to just glue other solutions together. There, the risks posed by LLMs are far lower and the utility far greater - because a lot of the code is similarly structured and quite boring.
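(To make that concrete: below is a minimal, hedged sketch of the kind of "required constraint so validations fail before saving anything bad" glue code described above. The comment talks about Spring Boot, Rails and friends; this sketch just uses plain Go, and the Order type and SaveOrder function are made up for illustration.)

    package demo

    import (
        "errors"
        "fmt"
    )

    // Order is a hypothetical business object that would map to a DB table.
    type Order struct {
        CustomerEmail string
        TotalCents    int
    }

    // ErrValidation lets callers turn validation failures into friendly messages.
    var ErrValidation = errors.New("validation failed")

    // Validate enforces the "required" constraints before any save is attempted.
    func (o Order) Validate() error {
        if o.CustomerEmail == "" {
            return fmt.Errorf("%w: customer email is required", ErrValidation)
        }
        if o.TotalCents <= 0 {
            return fmt.Errorf("%w: total must be positive", ErrValidation)
        }
        return nil
    }

    // SaveOrder rejects invalid input before it ever reaches the database.
    func SaveOrder(o Order) error {
        if err := o.Validate(); err != nil {
            return err // user-friendly error, nothing hits the database
        }
        // ...persist the order here...
        return nil
    }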
ordu · 6h ago
It is just speculation. If we rely on LLMs, then something bad happens. Truly, something will happen, but will it be bad? A calculator ruins people's ability to do mental arithmetic; is that bad? I'm not sure why I'd need to do mental arithmetic. Writing ruins memory. Is that bad? Probably, but I know no people who would reject writing to exercise their memory.
Moreover, if it turns out to be as bad as the author thinks, we can just spend something like 15 minutes per day training the abilities that go unused with LLMs. And LLMs could help us do it, so it will be more like 15 min/day, not 2 hours/day.
> My son is in school. And the teacher literally said they need to use an LLM and the teacher is using an LLM to grade/check it. This is ridiculous.
I don't know if it is ridiculous. I like LLMs; they really help to learn things. I mean, yes, LLMs can be detrimental to learning if, instead of doing the task at hand, the student asks the LLM to do the task. But they can be a real help, because with the help of an LLM you can do tasks you cannot do without it. You can do tasks faster, and do more of them. An LLM can be bad for learning or it can be good; it depends on how it is used.
The teacher using an LLM is very interesting. When I was at school, I thought I was smarter than my teachers, and I didn't like them; some of them I just hated. If they had been using LLMs, I'm sure I would have figured out ways to do tasks so that their LLMs would hallucinate and grade my work wrongly. You can always complain and make the teacher check it without the LLM. Let them spend an additional 10 minutes grading, and do it in front of the other pupils, so teachers would need to admit their mistakes publicly. I would definitely do it. I did something like that with math: I would go to any length to find some non-standard solution to a problem, one that couldn't be checked by looking for numbers in specific places of the solution. It made the teacher make mistakes while grading, and I just loved it. But if I had had an LLM... I would definitely have done it with literature and history too. My math teacher was not so bad after all, but I hated the literature and history ones.
M4v3R · 6h ago
> My dopamine comes not from tweaking someone (something) else’s code to make something barely work. My dopamine comes from the fact that I solved some intricate, non-trivial problem.
If that’s your thing that’s perfectly fine and I can understand how LLMs take away this fun challenge from you.
But for many - myself included - it’s not necessarily the process itself that’s rewarding - it’s the outcome. I love creating things that work and are useful. The particular process that was used to create the thing is secondary to me, and if I discover a tool that will enable me to skip some steps to achieve the outcome I will pick it any time of the day over doing things manually.
CSDude · 7h ago
I agree that education needs an overhaul, that it's scary for newcomers, and that AI can make mistakes you need to be careful about (so did old StackOverflow answers), but let's be honest: most employers aren't paying for your art or your dopamine.
throw93484i4 · 7h ago
Most employers aren’t paying for your degrees!
Universities as we know them are obsolete.
matt3210 · 7h ago
Just like nobody can make chips in the USA any more, software dev will be a lost art.
ang_cire · 7h ago
Hear, hear!
Glad to see a lot of my own criticisms of that article echoed by this author. AI is a tool, not an employee.
SomeoneOnTheWeb · 7h ago
That was a very interesting read, thanks!
danr4 · 7h ago
replace "AI" with "Internet" in this article and this reads like a dumb post from 1994
meindnoch · 6h ago
Replace "AI" with "NFTs" in this article and this reads like a dumb post from 2021.
ttiurani · 7h ago
Solid point.
A bit tangential, but I've recently been baffled as to why this belief is held by so many:
"since LLMs are trained on online data, they are re-trained with their own AI slop, further damaging them. Soon bad code will get even worse, basically."
together with:
"I’m sure it will get better with time, but it’s not there yet."
What improvements do people think will overcome the poisoning of the well? I'm not an expert at all, but to me it feels like we'd need a new breakthrough to get good output from garbage input.
JustinCS · 7h ago
Even as AI generates more writing and code, we still have a way of ranking quality: Good writing and successful projects tend to get more popular and prominent. This selection can allow LLMs to continue to improve. They get a huge flow of slop, but they generate based on the patterns correlated with better quality. The model developers can also develop better ways to curate the input data themselves and keep the slop at bay.
It's not a guaranteed or trivial mechanism, but I don't think we need a new breakthrough either.
ttiurani · 7h ago
Maybe.
As a counterpoint: isn't the popularity of a library more a metric of API convenience than actual code quality?
And isn't popularity of an essay more about how it conforms to existing beliefs than the quality of the thinking?
JustinCS · 6h ago
Those are good points and that's why progress is not guaranteed or trivial, just plausible.
ChrisArchitect · 7h ago
All of the discussion on the original blog post in question:
https://news.ycombinator.com/item?id=44163063
Agree with a lot said here - but it doesn’t touch on the socio-political consequences of ai. In a capitalist system, wealth concentrates in the hands of the already rich. But ultimately, the rich need everyone else. They need them to do their laundry, clean their mansions, fly their planes. That gives the not rich some small token of power over the rich. With each new job ai is capable of doing, it drains away that power.
Similarly, the state needs people to do the work of state power. People to investigate the citizenry, people to judge and prosecute and process appeals. AI takes away the human from that, allowing massive injustices to be rubber stamped. A small example - RFK Jr. couldn't get a human to write his garbage MAHA report, so he used AI (https://www.msn.com/en-us/politics/government/rfk-jr-may-hav...). Small examples become bigger examples.
AI is accelerating us towards a techno-fascist oligarchy. That's why I am skeptical of it. Not because it's mid at writing unit tests or whatever.
oulipo · 6h ago
I think the issue with the original article (and this one) is that it COMPLETELY glosses over the major threats of AI to climate, trust in democracies, and the social fabric of our societies.
Which ARE being attacked right now
skarlso · 8h ago
Hello. Just wanted to share my 2¢ on this post, which I believe entirely misses the point of the skepticism. Take it with a grain of salt. Thank you.
godelski · 6h ago
To all the people that are confused by "skeptics":
Yes, LLMs are powerful tools
Yes, LLMs can be useful
I think "skeptic" is the wrong word for most of us. There are certainly people who will just deny that LLMs and ML can do anything useful. But I don't think that's the majority of the "skeptic" crowd.
I'm an ML[0] researcher. Of course I love these machines! And you know what? I really fucking want to build some god damn AGI.
Maybe all you hype people are all right and LLMs will get us there. If you're right, you got all the money you could ask for and then several times more. You got some of the biggest companies in the world dumping billions in, competing, with huge infrastructures, and dumping money into a few dozen startups who are trying to do the same. If scale is all you need, then you're going to get there even without another 7 trillion dollars. I'm not saying "Don't fund LLM research" nor am I saying "Don't fund scaling research". Please do fund those things.
But be honest, money isn't your bottleneck.
Hell, what if you're right but other ways are faster?! We miss those opportunities and lose the race? For what?! If more efficient opportunities are out there (and I really do believe there are), who do you think is going to find them? It's literally betting on ITER[1] and giving essentially nothing to other fusion attempts. Except ITER is an international effort and we're a hell of a lot more confident it'll work.
But what if you're wrong? Why stop people like me from looking at other avenues? Why put all your eggs in one basket? Why dismiss us when we point out issues? We're engineers here, we fucking love to point out issues! IT IS OUR JOB! Because you must find issues if you're going to fix them.
We could have hedged our bets a little and pivot fast, hopefully seamlessly enough that it doesn't even look like a hiccup.
Nobody on this side of this is asking for half as much as the next hip new LLM startup is. I don't know if it is even a penny on the dollar!
Are you really gonna bet that the golden goose is always going to be laying golden eggs? Are you really going to make a situation where if it stops you just go broke?
IDK, it just all seems needlessly risky to me. Maybe I just don't get it because I'm not a high stakes gambler.
[0] Funny... only a few years ago the litmus test was "If someone says 'AI' they're selling you something. If they say 'ML' they're probably not"
I don't think it matters whether an LLM will get us there or some sort of causal transformer that we don't have the training data for yet.
What happens to the majority of the planet's biomass when we get there? If it's a new paradigm that does the trick, it might sneak up on us like the GPT-2 capabilities did.
renewiltord · 6h ago
It's hard for me to take all of this seriously because I sat and listened to how Google Search was going to ruin everyone's mind and without the skill of manual research we'd all be turned into imbeciles.
Back then we didn't have this particular modern meme of "extremely dangerous" for something like a syntactically inaccurate file so I suppose no one said it was "extremely dangerous" for people to copy-paste code from searches. SQL injection was fairly novel at the time but as that came up people would mock copy-paste scripters rather than act as if man's extinction was at hand from a vibe-coded single-page app that tells you which Pokemon you are.
Yes, yes, it's different now. But it was different then too. Search engines changed the stuff you could find from carefully curated Usenet channels to any rando with a blog. I suppose if we had modern sensibilities we'd say that was "extremely dangerous" too.
I've been writing code for decades, and it's true that I've seen nothing like this technology. It's true that it's a game changer and that with slightly different rates of change it could have already led to human extinction. But that doesn't mean that I have to lay any more credence to the guys who have, since time immemorial, said "this new thing makes something easier; it will make us worse because we do not slog as much".
buovjaga · 6h ago
> But that doesn't mean that I have to lay any more credence to the guys who have, since time immemorial, said "this new thing makes something easier; it will make us worse because we do not slog as much".
I don't use LLMs myself, but the anecdotes about skill atrophy seem credible. From the OP: "Even for me it shows. I tried to write some test code recently and I absolutely forgot how to write table tests because I generate all that. And it’s frightening."
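(For readers unfamiliar with the idiom: "table tests" usually means table-driven tests. A minimal sketch, assuming Go, where the pattern is most common; the Add function is made up purely as the thing under test.)

    package demo

    import "testing"

    // Add is a made-up function under test.
    func Add(a, b int) int { return a + b }

    // TestAdd is a table-driven test: the cases live in a slice (the "table")
    // and one loop runs each case as a named subtest.
    func TestAdd(t *testing.T) {
        cases := []struct {
            name string
            a, b int
            want int
        }{
            {"zeros", 0, 0, 0},
            {"positive", 2, 3, 5},
            {"negative", -1, -2, -3},
        }
        for _, tc := range cases {
            t.Run(tc.name, func(t *testing.T) {
                if got := Add(tc.a, tc.b); got != tc.want {
                    t.Errorf("Add(%d, %d) = %d, want %d", tc.a, tc.b, got, tc.want)
                }
            })
        }
    }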
I would quibble a bit with some of your examples, but I think the points are conveyed well.
Though, it does seem ironic. A skeptic is someone who demands convincing evidence before believing an assertion. "AGI is around the corner", "self driving will replace human drivers", "don't learn how to code", "AI will replace developers", "AI cars will eliminate car accidents." The skeptics are the ones simply asking for evidence that any of that is true. A system that can barely count the 'R's in strawberry is supposed to be AGI 3 years from now?? Let alone deliver on any of those other promises? Incredulity aside, let us see the evidence for any of it.
Though, those who are AI nay-sayers, who say AI will be a bad thing - they are not skeptics. They are making their own assertion: namely, that AI will be used as a crutch, that the lack of a slog will be a detriment. That is a claim, not skepticism. So it's ironic to be skeptical of the 'skeptics' while using the lack of evidence from the 'skeptics' to dismiss the 'skeptics'.
ImHereToVote · 6h ago
I would posit that social media has made us worse. Hell, TV made our minds worse; I know it rotted mine to an extent. Maybe there are externalities to search too. Maybe it degraded critical thinking skills. Having a source at our fingertips made us source-obsessed: your cognitive dissonance just needs a good source to be placated.
bowsamic · 7h ago
Yes that was my thought exactly. The original article did not address by far my biggest concern which is skill degradation. It happens rapidly and drastically: I know developers with 10+ years experience who now feel useless without the LLM to write their emails and code for them.
My wife is pregnant and the education stuff terrifies me. I may have to make huge sacrifices to avoid schools that use LLMs. Hopefully by the time he’s school age people will have realised that relying entirely on LLMs will degrade your thinking, reading, and writing, but I’m not confident
The elephant in the room here is that, like so many issues, AI has become political for some people. In this case your "AI skeptic friends" are your "defund the police" and "abolish ICE" friends. For 95% of people AI is just a new technology to be loved and feared.
There are some real technical debates to be had about the capabilities of LLMs and agents but that's not what's driving this "AI skepticism" at all. The "AI skeptics" decided AI is bad in advance and then came up with reasons, the weakest of which is the claim that it's all or mostly hype.
pell · 6h ago
> The elephant in the room here is that, like so many issues, AI has become political for some people. In this case your "AI skeptic friends" are your "defund the police" and "abolish ICE" friends. For 95% of people AI is just a new technology to be loved and feared.
I really don’t think there is this “elephant” at all. If you reduce critique, skepticism and fear people might have to political slogans of mainly fringe online activists, I wonder how objective your assessment of those attitudes can actually be.
Maybe you could consider that some of us are not AI skeptics "in advance" and have already witnessed its downsides, which in some particular conditions outweigh its upsides.
alexjurkiewicz · 7h ago
> So am I saying don’t use LLMs? No, I’m not. I’m saying it needs serious guardrails, documentation, oversight, usage data, research, and it should definitely be banned from schools. Because it’s doing serious damage to young people’s ability to think, figure out, read, understand, and research.
The original article didn't talk about any of these things....
> My son is in school. And the teacher literally said they need to use an LLM otherwise they won’t score high enough because the other pupils are using LLMs and the teacher is using an LLM to grade/check it. This is ridiculous.
Ah. The article's author heard a bad take on LLMs and accidentally attributed it to Ptacek's article.
skarlso · 2h ago
>The original article didn't talk about any of these things....
Indeed it didn't. That's why I did. :)
>Ah. The article's author heard a bad take on LLMs and accidentally attributed it to Ptacek's article.
Not at all. And if that's everything you took from this article, then I'm sorry for the loss of your time.
If that is true, it is indeed a serious problem.
Imagine getting downgraded because the substrings "Honestly?" and "—" didn't constitute 3.2% of your submission.
I just read this piece the other day with countless witnesses from teachers in the US:
https://www.404media.co/teachers-are-not-ok-ai-chatgpt/
One teacher, when I asked him what I can do about my failing grade, told me "you can go hang yourself". Why was I getting a failing grade in the first place? Well, it was normal that three quarters of the class would be failing his tests. That's just the kind of teacher he was.
The PE teacher thought that his role was to teach us discipline based on fear. Later I heard a fun story about him getting a new class and thinking that one of the girls was a student, while she was actually the mother of one of the students. She saw the students being yelled at for 45 minutes straight, she got yelled personally at and called retarded. Of course nothing happened to him.
The literature teacher yelled at us so hard that we were literally afraid of talking to her. She hated us, and at some point made that openly clear, by being mean on purpose. She never gave me more than "barely passing", even though at the standardized test I got a near-perfect score.
Once she did a test, threw the paper away, and assigned us grades by how much she liked each student. I brought up this story during reunion, and was told "she actually prepared us for how we'd be treated in college and adult life".
And that was one of the best schools that always took the most talented students from the region. In this context, having two LLMs talk to each other really isn't a bad thing.
Used to be, it was considered critically important that students learn to write in cursive and to multiply 3 digit numbers in their head. I can't do either, and I suspect many folks these days can't either. The world has not ended. I also can't tie a square knot, lasso a steed, or mend a fence.
School assignments have always been a waste of time. Essay-writing is not a critical skill, and I'm not sure much is lost if LLMs do it for us.
Sure multiplying 3-digit numbers is not really useful in everyday's life, but the important part is not the knowledge itself, it's the capacity to think and solve problems.
Thing is, LLMs beat humans on the words of such tests (if not the pencil part), and indeed basically all stabdardised tests at every level.
Essay writing is mainly about organizing thoughts logically. That's pretty important.
If not that then what? Prompt engineering?
The school LLM thing is absolutely real. And it's not even a cheap school. It's a school in Denmark. I was very disappointed to know this and was not expecting it at all. And it's sadly, not an assignment to learn llms either. I'd wish it was something like that.
To the ones saying we've seen this before with google search, ides, etc. Not on this scale. And even then, you didn't _completely_ outsource you ability to think. I _think_ that is super dangerous. Especially for young people who get into the habit a lot faster. And suddenly, we have people unable to think without asking an LLM for advice. Other disruptive technologies didn't affect thinking on this massive scale.
I'm not saying stop AI booo, that's obviously not going to happen. And I don't want to, maybe the singularity is just an AI away. Who knows? However, I'm asking that we absolutely put that thing behind some kind of oversight especially in schools for young people before they develop critical thinking skills. After all you started to count on paper before you start using a calculator for a reason.
Again, thank you for this discussion. I'm really grateful for the engagement.
IMO it's best to focus on objective things that the systems can do today, with maybe a look forward so we can prepare for things they'll be able to do "tomorrow". The rest is too noisy for me. I'm OK with some skepticism, but not outright denial. You can't take an unbiased look at what these things can do today and say "well, yes, but can they do x y z"? That's literally moving the goalposts, and I find it extremely counter productive.
In a way I see a parallel to the self driving cars discussions of 5 years ago. Lots of very smart people were focusing on silly things like "they have to solve the trolley problem in 0.0001 ms before we can allow them on our roads", instead of "let's see if they can drive from point A to point B first". We are now at the point where they can do that, somewhat reliably, with some degree of variability between solutions (waymos, teslas, mercedes, etc). All that talk 5 years ago was useless, IMO.
No, we really aren’t. Let me know when any of those systems can get me from Souix falls South Dakota to Thunder Bay Ontario without multiple disengagements and we can talk.
Based on what I’ve seen we’re still about 10 years away best case scenario and more likely 20+ assuming society doesn’t collapse first.
I think people in the Bay Area commenting on how well self driving works need to visit middle America and find out just how bad and dangerous it still is…
As with a lot of issues in today's world, each side is talking past the other. It can simultaneously be true that LLMs make writing code less enjoyable / gratifying, and that LLMs can speed up our work.
> I feel (hah!) that we've reached a point where instead of focusing on objective things that the LLM-based systems can do, we are wasting energy and "ink" on how they make us feel.
> best to focus on objective things that the systems can do today, with maybe a look forward so we can prepare for things they'll be able to do "tomorrow".
These two together.. Reminds me of this type of sentiment that seems somewhat common here: 'I feel that AI is growing exponentially, therefore we should stop learning to code - because AI will start writing all code soon!'
I think this points to where a lot of skepticism comes from. From the perspective of: the AI barely does a fraction of what is claimed, has grown even less a fraction of what is claimed, yet these 'feelings' that AI will change everything is driving tons of false predictions. IMO, those feelings are driving VC money to damn MBAs that are plastering AI on everything because they are chasing money.
There is an irony here though too, skepticism simply is a lack of belief without evidence. Belief without evidence is irrational. The skeptics are the ones simply asking for evidence, not feelings.
Something that has no guarantee,not even a reassurance of being correct should not be trusted with any meaningful work
Yeah. And that's OK. Because nobody will need to shoe horses any more!
If I forget how to write tests, what's the problem? It means that I never need to write tests any more. If I do need to write tests, for some reason (maybe because the LLM is bad at it) then I won't forget how to!
That's how atrophy works: the skills that atrophy are, by definition, the ones you no longer need at all, and this argument does this sleight of hand where it goes "don't let the skills you don't need atrophy, because you need them!".
Well, do I need them, or not?
I sympathize with this viewpoint, but I do think it’s important to recognize the differences here. One thing I’ve noticed from the vibe-code coalition is a push towards offloading _cognition_. I think this is a novel distinction from industrial innovations that more or less optimize manual labor.
You could argue that moving from assembly to python is a form of cognition offloading, but it’s not quite the same in my eyes. You are still actively engaged in thinking for extended periods of time.
With agentic code bots, active thinking just isn’t the vibe. I’m not so sure that style of half-engaged work has positive outcomes for mental health and personal development (if that’s one of your goals).
We basically are training prompt engineers without post validation now.
No, this would be more akin to saying, "if we don't know how to change our car's oil anymore because we have a robot that does it, soon nobody will know, while still being reliant on our cars."
For your analogy to work, we would have to be moving away from code entirely, as we moved away from horses.
> It means that I never need to write tests any more. If I do need to write tests, for some reason (maybe because the LLM is bad at it) then I won't forget how to!
Except that once you forget, you now would have to re-learn it, and that includes potentially re-learning all the pitfalls and edge cases that aren't part of standard training manuals. And you won't be able to ask someone else, because now they all don't know either.
tl;dr coding is a key job function of software developers. Not knowing how to do any key part of your job without relying on an intermediary tool, is a very bad thing. This already happens too much, and AI is just firing the trend into the stratosphere.
OK, are we worried that all the robots will somehow disappear? Why would I have to change my own oil, ever, if the robot did it as well as I did? If it doesn't do it as well as I did, I'm still doing it myself.
No, you should be worried that you (or devs who come later, who never had to 'change the oil'/ write tests themselves) won't know if the robot has done it right or not, because if you can't do the work, you can't validate the work.
And the robot isn't an employee, it's just a tool you the employee use, so when you are asked whether the test was coded correctly, all you'd be able to say with your 'atrophied' test-writing skills is, "I think so, the AI did it". See the issue now?
> If it doesn't do it as well as I did, I'm still doing it myself.
I thought your unnecessary skill atrophied and was forgotten ? How are you going to do it yourself? How do you know you're still as good at it as you once were?
No. I use a calculator, and I don't have to second-guess it. It just works. If it didn't work reliably, I would use it.
> I thought your unnecessary skill atrophied and was forgotten ?
Again, either I didn't need to do it because the AI did it well, and I forgot the skill, or it never did it well, I always did it myself, and I never forgot it. I don't understand why you're assuming that the AI will do it well enough at first that I'll forget how it's done, and that it will then somehow get bad at it so I'll have to start doing it myself again.
The day you will need troubleshoot one or simply understand it is the problem.
> do I need them, or not?
Based on this comment of yours, the answer is a resounding yes, you do.
So some people do develop that deeper understanding, when it's helpful, and they go on to build great things. I don't see why this will be different with AI. Some will rely too much on AI and possibly create slop, and others will learn more deeply and get better results. This is not a new phenomenon.
Critical thinking skills are skills that I think are incredibly useful and, if never taught or allowed to atrophy you can get, well, this...<gesturing dejectedly around with my cane>
People not understanding basic science, arithmetic, and reading comprehension at what was once an 8th grade level, can be easily tricked in everyday situations on matters that harm themselves, their family, and their community. If we dig deeply enough, I bet we could find some examples on this theme in modern times.
Maybe we get UBI. But if Jeff Bezos and some friends own all of the production. What would they do with your UBI dollars? Where can he spend them? He can make his own yachts and soldiers.
Who is this person who's paying for useless skills, and doesn't that go against the definition of "need"?
That is the aspiration of AI software tools, too. They are not coming to make us 30% more efficient, they are coming to completely change how the software engineering production line operates. If they are successful, we will write fewer tests, we will understand less about our stack, and we will develop tools and workflows to manage that complexity and risk.
Maybe AI succeeds at this stated objective, and maybe it does not. But let's at least not kid ourselves: this is how it has always been. We are in the business of abstracting things away so that we have to understand less to get things done. We grumbled when we switched from assembly to high-level languages. We grumbled when we switched from high-level languages to managed languages. We grumbled when we started programming enormous piles of JavaScript and traveled farther from the OS and the hardware. Now we're grumbling about AI, and you can be sure that we're going to grumble about whatever is next, too.
I understand this is going to ruffle a lot of feathers, but I don't think Thomas and the Fly team actually have missed any of the points discussed in this article. I think they fully understand that software production is going to change and expect that we will build systems to cope with abstracting more, and understanding less. And, honestly, I think they are probably right.
Instead, your average Joe will be writing some Java code with Spring Boot (or .NET with ASP.NET, or Ruby with Ruby on Rails, or PHP with Laravel, or Python with Django) where how to do things is already largely established and they only need to answer the "what?" question - to codify a bunch of boring business rules into a technical solution that runs.
That work isn't entirely different from tasks where codegen shines - like writing DB migrations from entities, or entity mappings from some existing DB schema, or writing client code based on an OpenAPI spec, or writing an OpenAPI spec based on your endpoint definitions. The less people have to do that manually, the better the results seem to be (e.g. a bunch of deterministic code that will do everything correctly based on the inputs, given how well known the problem space is).
You could also say the same for having something like autocomplete, or automated code suggestions, or even all of those higher abstraction level frameworks there in the first place - where you don't start solving a problem with "First, I'll write something that binds to a port and takes incoming HTTP requests..." but rather "I'll look up how you add a required constraint for a business object that's mapped to a DB table so validations fail before I even try to save something bad. If I'm really lucky, I'll be able to return a user friendly error message too!"
There, it doesn't really matter whether the official docs will tell you that, a colleague, StackOverflow, ChatGPT, or anything else... you're not doing that much creative work, but something a bit more blue collar. The end result of all this is programming becoming something closer to data entry with ample amounts of code to just glue other solutions together. There, the risks posed by LLMs are far lower and the utility far greater - because a lot of the code is similarly structured and quite boring.
Moreover, if it will be so bad, as the author think, we can just spend like 15 minutes per day, to train the abilities that become unused with LLMs. And LLMs could help us to do it, so it will be like 15 min/day, not 2 hours/day.
> My son is in school. And the teacher literally said they need to use an LLM and the teacher is using an LLM to grade/check it. This is ridiculous.
I don't know, if it is ridiculous. I like LLMs, they really help to learn things. I mean, yes, LLMs can be detrimental to learning, if instead of doing a task at hand, the student will be asking LLM to do the task. But it can be a real help, because with the help of LLM you can do tasks, you cannot do without it. You can do tasks faster, and to do more of them. LLM can be bad for learning or it can be good, it depends on how it is used.
The teacher using LLM is very interesting. When I was at school, I thought I was smarter then my teachers at school, and I didn't like them, some of them I just hated. If they were using LLMs, I'm sure I would figure out ways to do tasks in a way, so their LLMs would hallucinate and grade my work wrongly. You can always complain and make the teacher to check it without LLM. Let them spend additional 10 minutes grading, and do it in front of other pupils, so teachers would need to admit their mistakes publicly. I would definitely do it. I did something like that with math, I would go for any length to find some non-standard solution for a problem, that cannot be checked by checking numbers in specific places of the solution. It made the teacher to make mistakes while grading, I just loved it. But if I had LLM... I would definitely do it with literature and history also. My math teacher was not so bad after all, but I hated the literature and history ones.
If that’s your thing that’s perfectly fine and I can understand how LLMs take away this fun challenge from you.
But for many - myself included - it’s not necessarily the process itself that’s rewarding - it’s the outcome. I love creating things that work and are useful. The particular process that was used to create the thing is secondary to me, and if I discover a tool that will enable me to skip some steps to achieve the outcome I will pick it any time of the day over doing things manually.
Universities as we know them are obsolete.
Glad to see a lot of my own criticisms of that article echoed by this author. AI is a tool, not an employee.
A bit tangential, but I've recently been baffled as to why this belief is held by so many:
"since LLMs are trained on online data, they are re-trained with their own AI slop, further damaging them. Soon bad code will get even worse, basically."
together with:
"I’m sure it will get better with time, but it’s not there yet."
What improvements do people think will overcome the poisoning of the well? I'm not an expert at all, but to me it feels like we'd need a new breakthrough to get good output from garbage input.
As a counterpoint: isn't the popularity of a library more a measure of API convenience than of actual code quality?
And isn't the popularity of an essay more about how well it conforms to existing beliefs than about the quality of the thinking?
https://news.ycombinator.com/item?id=44163063
Similarly, the state needs people to do the work of state power. People to investigate the citizenry, people to judge and prosecute and process appeals. AI takes the human out of that, allowing massive injustices to be rubber-stamped. A small example: RFK Jr. couldn’t get a human to write his garbage MAHA report, so he used AI (https://www.msn.com/en-us/politics/government/rfk-jr-may-hav...). Small examples become bigger examples.
AI is accelerating us towards a techno-fascist oligarchy. That’s why I am skeptical of it. Not because it’s mid at writing unit tests or whatever.
Which ARE being attacked right now
I'm an ML[0] researcher. Of course I love these machines! And you know what? I really fucking want to build some god damn AGI.
Maybe all you hype people are right and LLMs will get us there. If you're right, you've got all the money you could ask for and then several times more. You've got some of the biggest companies in the world dumping in billions, competing, with huge infrastructure, and pouring money into a few dozen startups trying to do the same. If scale is all you need, then you're going to get there even without another 7 trillion dollars. I'm not saying "Don't fund LLM research", nor am I saying "Don't fund scaling research". Please do fund those things.
But be honest, money isn't your bottleneck.
Hell, what if you're right but other ways are faster?! We miss those opportunities and lose the race? For what?! If more efficient approaches are out there (and I really do believe there are), who do you think is going to find them? It's literally like betting on ITER[1] and giving essentially nothing to other fusion attempts. Except ITER is an international effort and we're a hell of a lot more confident it'll work.
But what if you're wrong? Why stop people like me from looking at other avenues? Why put all your eggs in one basket? Why dismiss us when we point out issues? We're engineers here, we fucking love to point out issues! IT IS OUR JOB! Because you must find issues if you're going to fix them. We could have hedged our bets a little and pivoted fast, hopefully seamlessly enough that it doesn't even look like a hiccup. Nobody on this side of it is asking for half as much as the next hip new LLM startup is. I don't know if it's even a penny on the dollar!
Are you really gonna bet that the golden goose is always going to be laying golden eggs? Are you really going to set up a situation where, if it stops, you just go broke?
IDK, it just all seems needlessly risky to me. Maybe I just don't get it because I'm not a high stakes gambler.
[0] Funny... only a few years ago the litmus test was "If someone says 'AI' they're selling you something. If they say 'ML' they're probably not"
[1] https://en.wikipedia.org/wiki/ITER
What happens to the majority of the planet's biomass when we get there? If it's a new paradigm that does the trick, it might sneak up on us like the GPT-2 capabilities did.
Back then we didn't have this particular modern meme of "extremely dangerous" for something like a syntactically inaccurate file, so I suppose no one said it was "extremely dangerous" for people to copy-paste code from searches. SQL injection was fairly novel at the time, but as it came up, people would mock copy-paste scripters rather than act as if man's extinction was at hand from a vibe-coded single-page app that tells you which Pokemon you are.
Yes, yes, it's different now. But it was different then too. Search engines changed the stuff you could find from carefully curated Usenet channels to any rando with a blog. I suppose if we had modern sensibilities we'd say that was "extremely dangerous" too.
I've been writing code for decades, and it's true that I've seen nothing like this technology. It's true that it's a game changer and that, with slightly different rates of change, it could already have led to human extinction. But that doesn't mean I have to lend any more credence to the guys who have, since time immemorial, said "this new thing makes something easier; it will make us worse because we do not slog as much".
I don't use LLMs myself, but the anecdotes about skill atrophy seem credible. From the OP: "Even for me it shows. I tried to write some test code recently and I absolutely forgot how to write table tests because I generate all that. And it’s frightening."
An article about the topic that tries to stay positive: https://addyo.substack.com/p/avoiding-skill-atrophy-in-the-a...
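For anyone similarly rusty on them: a table test is just a slice of cases plus one loop that runs each row as a named subtest. A minimal Go sketch - the Abs function and the cases are made up purely for illustration:

    package mathutil

    import "testing"

    // Abs returns the absolute value of x.
    func Abs(x int) int {
        if x < 0 {
            return -x
        }
        return x
    }

    // TestAbs is a table test: each case is one row in the table,
    // and a single loop runs every row as a named subtest.
    func TestAbs(t *testing.T) {
        cases := []struct {
            name string
            in   int
            want int
        }{
            {"positive", 3, 3},
            {"negative", -3, 3},
            {"zero", 0, 0},
        }
        for _, tc := range cases {
            t.Run(tc.name, func(t *testing.T) {
                if got := Abs(tc.in); got != tc.want {
                    t.Errorf("Abs(%d) = %d, want %d", tc.in, got, tc.want)
                }
            })
        }
    }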
Though, it does seem ironic. A skeptic is someone who demands convincing evidence before believing an assertion. "AGI is around the corner", "self-driving will replace human drivers", "don't learn how to code", "AI will replace developers", "AI cars will eliminate car accidents." The skeptics are the ones simply asking for evidence that any of that is true. A system that can barely count the 'R's in strawberry is supposed to be AGI 3 years from now?? Let alone deliver on any of those other promises? Incredulity aside, let us see the evidence for any of it.
Though the AI naysayers - those who say AI will be a bad thing - are not skeptics either. They are making their own assertion: namely, that AI will be used as a crutch, that the lack of a slog will be a detriment. That is a claim, not skepticism. So it's just ironic to be skeptical of the 'skeptics', while the lack of evidence from the 'skeptics' is used to dismiss them.
My wife is pregnant and the education stuff terrifies me. I may have to make huge sacrifices to avoid schools that use LLMs. Hopefully by the time he’s school age people will have realised that relying entirely on LLMs degrades your thinking, reading, and writing, but I’m not confident.
There are some real technical debates to be had about the capabilities of LLMs and agents, but that's not what's driving this "AI skepticism" at all. The "AI skeptics" decided AI is bad in advance and then came up with reasons, the weakest of which is the claim that it's all or mostly hype.
I really don’t think there is this “elephant” at all. If you reduce the critique, skepticism, and fear people might have to the political slogans of mainly fringe online activists, I wonder how objective your assessment of those attitudes can actually be.
I recommend you take a look at this research published by Pew. You can see that concern about AI is much higher than 5% and is shared across the political spectrum: https://www.pewresearch.org/internet/2025/04/03/how-the-us-p...
The original article didn't talk about any of these things....
> My son is in school. And the teacher literally said they need to use an LLM otherwise they won’t score high enough because the other pupils are using LLMs and the teacher is using an LLM to grade/check it. This is ridiculous.
Ah. The article's author heard a bad take on LLMs and accidentally attributed it to Ptacek's article.
Indeed it didn't. That's why I did. :)
>Ah. The article's author heard a bad take on LLMs and accidentally attributed it to Ptacek's article.
Not at all. And if that's everything you took from this article, then I'm sorry for the loss of your time.