AI Angst

137 points | by AndrewDucker | 6/9/2025, 10:10:18 AM | tbray.org

Comments (135)

stevage · 3h ago
I guess we're all trying to figure out where we sit along the continuum from anti-AI Luddite to all-in.

My main issue with vibe coding etc is I simply don't enjoy it. Having a conversation with a computer to generate code that I don't entirely understand and then have to try to review is just not fun. It doesn't give me any of the same kind of intellectual satisfaction that I get out of actually writing code.

I'm happy to use Copilot to auto-complete, and ask a few questions of ChatGPT to solve a pointy TypeScript issue or debug something, but stepping back and letting Claude or something write whole modules for me just feels sloppy and unpleasant.

tobr · 1h ago
I tried Cursor again recently, starting with an empty folder and asking it to use very popular technologies that it surely must know a lot about (TypeScript, Vite, Vue, and Tailwind). Should be a home run.

It went south immediately. It was confused about the differences between Tailwind 3 and 4, leading to a broken setup. It wasn’t able to diagnose the problem, but just got more confused even with patient help from me in guiding it. Worse, it was unable to apply basic file diffs or deletes reliably. In trying to diagnose whether this is a known issue with Cursor, it decided to search for bug reports - a great idea, except it tried to search the codebase for them, which, I remind you, only contained code that it had written itself over the past half hour or so.

What am I doing wrong? You read about people hyping up this technology - are they even using it?

EDIT: I want to add that I did not go into this antagonistically. On the contrary, I was excited to have a use case that I thought must be a really good fit.

windows2020 · 1h ago
My recent experience has been similar.

I'm seeing that the people hyping this up aren't programmers. They believe the reason they can't create software is that they don't know the syntax. They whip up a clearly malfunctioning and incomplete app with these new tools and are amazed at what they've created. The deficiencies will sort themselves out soon, they believe. And then programmers won't be needed at all.

gs17 · 39m ago
> It was confused about the differences between Tailwind 3 and 4

I have the same issue with Svelte 4 vs 5. Adding some project-specific notes to the prompt helps, sort of.
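
In practice that means pinning versions and idioms explicitly. A hypothetical sketch of the kind of note I mean, assuming Cursor-style project rules (the file name and wording are illustrative):

```
# .cursorrules (hypothetical)
This project uses Svelte 5 with runes: $state, $derived, $props.
Do NOT use Svelte 4 patterns: no `export let` props, no `$:` reactive statements.
Check package.json for installed versions before suggesting framework APIs.
```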

tobr · 35m ago
It didn’t seem like it ever referred to documentation? So, obviously, if it’s only going to draw on its “instinctual” knowledge of Tailwind, it’s more likely to fall back on a version that’s been around for longer, leading to incompatibilities with the version that’s actually installed. A human doing the same task would probably have the setup guide on the website at hand if they realized they were feeling confused.

pandler · 3h ago
In addition to not enjoying it, I also don’t learn anything, and I think that makes it difficult to sustain anything in the middle of the spectrum between “I won’t even look at the code; vibes only” and advanced autocomplete.

My experience has been that it’s difficult to mostly vibe with an agent, but still be an active participant in the codebase. That feels especially true when I’m using tools, frameworks, etc that I’m not already familiar with. The vibing part of the process simultaneously doesn’t provide me with any deeper understanding or experience to be able to help guide or troubleshoot. Same thing for maintaining existing skills.

daxfohl · 2h ago
It's like trying to learn math by reading vs by doing. If all you're doing is reading, it robs you of the depth of understanding you'd gain by solving things yourself. Going down wrong paths, backtracking, finally having that aha moment where things click, is the only way to truly understand something.

Now, for all the executives who are trying to force their engineering teams to use AI for everything, this is the result. Your engineering staff becomes equivalent to a mathematician who has never actually done a math problem, just read a bunch of books and trusted what was there. Or a math tutor for your kid who "teaches" by doing your kid's homework for them. When things break and the shit hits the fan, is that the engineering department you want to have?

zdragnar · 1h ago
I'm fairly certain that I lost a job opportunity because the manager interviewing me kept asking me variations of how I use AI when I code.

Unless I'm stuck while experimenting with a new language or finding something in a library's documentation, I don't use AI at all. I just don't feel the need for it in my primary skill set because I've been doing it so long that it would take me longer to get AI to an acceptable answer than doing it myself.

The idea seemed rather offensive to him, and I'm quite glad I didn't go to work there, or anywhere that using AI is an expectation rather than an option.

I definitely don't see a team that relies on it heavily having fun in the long run. Everyone has time for new features, but nobody wants to dedicate time to rewriting old ones that are an unholy mess of bad assumptions and poorly understood code.

bluefirebrand · 1h ago
My company recently issued a "Use AI in your workflow or else" mandate and it has absolutely destroyed my motivation to work.

Even though there are still private whispers of "just keep doing what you're doing no one is going to be fired for not using AI", just the existence of the top down mandate has made me want to give up and leave

My fear is that this is every company right now, and I'm basically no longer a fit for this industry at all

Edit: I'm a long way from retirement unfortunately so I'm really stuck. Not sure what my path forward is. Seems like a waste to turn away from my career that I have years of experience doing, but I struggle like crazy to use AI tools. I can't get into any kind of flow with them. I'm constantly frustrated by how aggressively they try to jump in front of my thought process. I feel like my job changed from "builder" to "reviewer" overnight and reviewing is one of the least enjoyable parts of the job for me

I remember an anecdote about Ian McKellen crying on a green-screen set when filming The Hobbit, because talking to a tennis ball on a stick wasn't what he loved about acting.

I feel similarly with AI coding I think

ryandrake · 35m ago
I just don't understand your company and the company OP interviewed for. This is like mandating that everyone use syntax highlighting or autocomplete, or sit in a special type of chair, or use a standing desk, and making their use a condition for being hired. Why are companies so insistent that their developers "use AI somehow" in their workflows?

bluefirebrand · 24m ago
Shareholders are salivating at the prospect of doing either the same amount of work with fewer salaries or more work with the same salaries

There is nothing a VC loves more than the idea of extracting more value from people without investing more into them

daxfohl · 1h ago
The other side of me thinks that maybe the eventual landing point of all this is a merger of engineering and PM. A sizeable chunk of engineering work isn't really anything new. CRUD, jobs, events, caching, synchronization, optimizing for latency, cost, staleness, redundancy. Sometimes it amazes me that we're still building so many ad-hoc ways of doing the same things.

Like, say there's a catalog of 1000 of the most common enterprise (or embedded, or UI, or whatever) design patterns, and AI is good at taking your existing system, your new requirements, identifying the best couple design patterns that fit, give you a chart with the various tradeoffs, and once you select one, are able to add that pattern to your existing system, with the details that match your requirements.

Maybe that'd be cool? The system/AI would then be able to represent the full codebase as an integration of various patterns, and an engineer, or even a technical PM, could understand it without needing to dive into the codebase itself. And hopefully since everything is managed by a single AI, the patterns are fairly consistent across the entire system, and not an amalgamation of hundreds of different individuals' different opinions and ideals.

Another nice thing would be that huge migrations could be done mostly atomically. Currently, something like adding enterprise-wide support for dynamic authorization policies takes years, because every team has to update their service's code to handle the new authz policy in their domain, so the authz team has to support the old way and the new way, and a way to sync between them, roughly forever. With AI, maybe all this could just be done in a single shot, or over the course of a week, with automated deployments, backfill, testing, and cleanup of the old system. The authz team wouldn't have to deal with all the "bugging other teams" or anything else, and the other teams wouldn't have to deal with getting bugged or with trying to fit the migration into their schedules. To them it's an opaque thing that just happened, no different from a library version update.

With that, there are fewer things in flight at any one time, so engineers and PMs can focus on their one deliverable without worrying about how it affects everyone else's schedules.

So, IDK, maybe the end game of AI will make the job more interesting rather than less. We'll see.

timr · 2h ago
> My main issue with vibe coding etc is I simply don't enjoy it. Having a conversation with a computer to generate code that I don't entirely understand and then have to try to review is just not fun. It doesn't give me any of the same kind of intellectual satisfaction that I get out of actually writing code.

I am the opposite. After a few decades of writing code, it wasn't "fun" to write yet another file parser or hook widget A to API B -- which is >99% of coding today. I moved into product management because while I still enjoy building things, it's much more satisfying/challenging to focus on the higher-level issues of making a product that solves a need. My professional life became writing specs, and reviewing code. It's therefore actually kind of fun to work with AI, because I can think technically, but I don't have to do the tedious parts that make me want to descend into a coma.

I couldn't care less if I'm writing a spec for a robot or writing a spec for a junior front-end engineer. They're both going to screw up, and I'm going to have to spend time explaining the problem again and again... at least the robot never complains and tries really hard to do exactly what I ask, instead of slacking off, doing something more intellectually appealing, getting mired in technical complexity, etc.

dlisboa · 2h ago
You touched on the significant thing that separates most of the AI code discourse into two extremes: some people just don't like programming and see it as a simple means to an end, while others love the process of actually crafting code.

Similar to the differences between an art collector and a painter. One wants the ends, the other desires the means.

timr · 1h ago
That's not fair, and not what I am saying at all.

I enjoy writing code. I just don't enjoy writing code that I've written a thousand times before. It's like saying that Picasso should have enjoyed painting houses for a living. They're both painting, right?

(to be painfully clear, I'm not comparing myself to Picasso; I'm extending on your metaphor.)

tptacek · 1h ago
I love coding, do it for fun outside of my job, and find coding with an LLM very enjoyable.
icedchai · 44m ago
I've been experimenting with LLM coding for the past few months on some personal projects. I find it makes coding those projects more enjoyable since it eliminates much of the tedium that was causing me to delay the project in the first place.
timr · 30m ago
Exactly the same for me...now whenever I hit something like "oh god, I want to change the purpose of this function/variable, but I need to go through 500 files, and see where it's used, then make local changes, then re-test everything...", I can just tell the bot to do it.

I know a lot of folks would say that's what search & replace is for, but it's far easier to ask the bot to do it, and then check the work.

cesarb · 18m ago
> "oh god, I want to change the name of this function/variable, but I need to go through 500 files, and see where it's used, then make local changes, then re-test everything..."

Forgive me for being dense, but isn't it just clicking the "rename" button on your IDE, and letting it propagate the change to all definitions and uses? This already existed and worked fine well before LLMs were invented.

tptacek · 2m ago
Yes, that particular example modern editors do just fine. Now imagine having that for almost any rote transformation you wanted regardless of complexity (so long as the change was rote and describable).
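
For instance, a change like "rename fetchUser to loadUser and swap its two arguments at every call site" is rote and describable but beyond any rename button; today you'd hand-write a codemod for it. A hypothetical jscodeshift sketch (the function names are made up):

```ts
// Hypothetical codemod: fetchUser(id, opts) -> loadUser(opts, id) at every call site.
import type { API, FileInfo } from 'jscodeshift';

export default function transform(file: FileInfo, api: API): string {
  const j = api.jscodeshift;
  return j(file.source)
    .find(j.CallExpression, { callee: { type: 'Identifier', name: 'fetchUser' } })
    .forEach((path) => {
      const call = path.node;
      if (call.callee.type === 'Identifier') {
        call.callee.name = 'loadUser'; // the rename part
      }
      call.arguments.reverse(); // the part no rename button can do
    })
    .toSource();
}
```

The pitch is that you describe that transformation in a sentence instead of writing and debugging the codemod.
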
d0100 · 1h ago
I love programming, I just don't like CRUDing, or API'ing...

I also love programming behaviours and interactions, just not creating endless C# classes and looking at how to implement 3D math

After a long day at the CRUD factory, being able to vibe code as a hobby is fun. Not super productive, but it's better than the alternative (scrolling reels or playing games)

morkalork · 1h ago
I think I could be happy switching between the two modes. There are tasks that are completely repetitive slop that I've fully offloaded to AI, with great satisfaction. There are others I enjoy, where I prefer to use AI for consultation only. Regardless, few people liked doing code review with their peers before, and somehow we've increased one of the least fun parts of the job.
icedchai · 2h ago
Same. After doing this for decades, so much programming work is tedious. Maybe 5% to 20% of the work is interesting. If I can get a good chunk of that other 80%+ built out quickly with a reasonable level of quality, then we're good.
kiitos · 2h ago
> After a few decades of writing code, it wasn't "fun" to write yet another file parser or hook widget A to API B -- which is >99% of coding today.

If this is your experience of programming, then I feel for you, my dude, because that sucks. But it is definitely not my experience of programming. And so I absolutely reject your claim that this experience represents "99% of programming" -- that stuff is rote and annoying and automate-able and all that, no argument, but it's not what any senior-level engineer worth their salt is spending any of their time on!

NewsaHackO · 2h ago
People who don’t do 1) API connecting, 2) web design using popular frameworks, or 3) requirements wrangling with business analysts have jobs that will not be taken over by AI anytime soon. I think 99% of jobs is pushing it, but I definitely think the vast majority of IT jobs fit into the above categories. Another benchmark would be how much of your job is closer to research work.
9d · 36m ago
Considering the actual Vatican literally linked AI to the apocalypse, and did so in the most official capacity[1], I don't think avoiding AI has to be Luddism.

[1] Antiqua et Nova p. 105, cf. Rev. 13:15

9d · 17m ago
I emphasize that it's the Vatican because they are the most theologically careful of all. This isn't some church with a superstitious pastor who jumps to conclusions about the rapture at the drop of a hat. This is the Church which is hesitant to say literally anything about the book of Revelation at all, and which is run by tired men who just want to keep the status quo so they can hopefully hit retirement without any trouble.

9d · 31m ago
Full link and relevant quote:

https://www.vatican.va/roman_curia/congregations/cfaith/docu...

> Moreover, AI may prove even more seductive than traditional idols for, unlike idols that “have mouths but do not speak; eyes, but do not see; ears, but do not hear” (Ps. 115:5-6), AI can “speak,” or at least gives the illusion of doing so (cf. Rev. 13:15).

It quotes Rev. 13:15 which says (RSVCE):

> and it was allowed to give breath to the image of the beast so that the image of the beast should even speak, and to cause those who would not worship the image of the beast to be slain.

9d · 32m ago
> It doesn't give me any of the same kind of intellectual satisfaction that I get out of actually writing code.

Writing code is a really fun creative process:

1. Conceive an exciting and useful idea

2. Comprehend the idea fully from its top to its bottom

3. Translate the idea into specific instructions utilizing known mechanics

4. Find the beautiful middleground between instruction and abstraction

5. Write lots and lots of code!

6. Find where your conception was flawed and fix it as necessary.

7. Repeat steps 2-6 until the thing works just as you dreamed or you give up.

It's maybe the most fun and exciting mixture of art and technology ever.

9d · 23m ago
I forgot to say the second part:

Using AI is the same as code-review or being a PM:

1. Have an ideal abstraction

2. Reverse engineer an actual abstraction from code

3. Compare the two and see if they match up

4. If they don't, ask the author to change or fix it until it does

5. Repeat steps 2-4 until it does

This is incredibly not fun, because it's not a creative process.

You're essentially just an accountant or calculator at this point.

Kiro · 2h ago
I'm the opposite. I haven't had this much fun programming in years. I can quickly iterate, focus on the creative parts and it really helps with procrastination.
doctoboggan · 2h ago
> and then have to try to review

I think (at least by the original definition[0]) this is not vibe coding. You aren't supposed to be reviewing the code, just execute and pray.

[0]: https://xcancel.com/karpathy/status/1886192184808149383

rowanseymour · 2h ago
This was my experience until recently... now I'm quite enjoying assigning small PRs to Copilot and working through them via the GitHub PR interface. It's basically like managing a junior programmer, but cheaper and faster. Yes, that's not as much fun as writing code, but there isn't time for me to write all the code myself.
cloverich · 32m ago
Can you elaborate on the "assign PRs" bit?

I use Cursor / ChatGPT extensively and am ready to dip into more of an issue/PR flow, but I'm not sure what people are doing here exactly. Specifically for side projects, I tend to think through high-level features, then break them down into sub-items, much like a PM. But I can easily take it a step further and give each sub-issue technical direction, e.g. "Allow font customization: refactor the Tailwind font configuration to use CSS variables. Expose those CSS variables via a settings module, and add a section to the Preferences UI to let the user pick fonts for Y categories via dropdown; default to X Y Z fonts for A B C types of text".

Usually I spend a few minutes discussing with ChatGPT first, e.g. "What are some typical idioms for font configuration in a typical web / desktop application?" Once I get that idea solidified I'd normally start coding, but for simple-ish stuff I could just as easily hand this part off and start ironing out the next feature. In the time I'd usually have planned the next 1-2 months of side-project work (which happens, say, in 90-minute increments 2x a week), the agent could knock out maybe half of them. For a project I'm familiar with, I expect I can comfortably review and comment on a PR with much less mental energy than it would take to re-open my code editor for my side project after an entire day of coding for work plus caring for my kids. Personally I'm pretty excited about this.
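
To make that hypothetical font task concrete, the Tailwind side of it is small. A sketch, assuming a Tailwind 3-style config (names like `--font-body` and `setFont` are made up for illustration):

```ts
// tailwind.config.ts (sketch): font utilities defer to CSS variables
import type { Config } from 'tailwindcss';

export default {
  content: ['./index.html', './src/**/*.{vue,ts}'],
  theme: {
    extend: {
      fontFamily: {
        body: ['var(--font-body)', 'sans-serif'],
        heading: ['var(--font-heading)', 'serif'],
      },
    },
  },
} satisfies Config;
```

```ts
// settings.ts (sketch): the Preferences UI calls this when the user picks a font
export function setFont(category: 'body' | 'heading', family: string): void {
  document.documentElement.style.setProperty(`--font-${category}`, family);
}
```

Classes like `font-body` then pick up whatever the user chose, with no rebuild needed.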

gs17 · 2h ago
> My main issue with vibe coding etc is I simply don't enjoy it.

I almost enjoy it. It's kind of nice getting to feel like management for a second. But the moment it hits a bug it can't fix and you have to figure out its horrible mess of code any enjoyment is gone. It's really nice for "dumb" changes like renumbering things or very basic refactors.

tptacek · 1h ago
When the agent spins out, why don't you just take the wheel and land the feature yourself? That's what I do. I'm having trouble integrating these two skeptical positions of "LLMs suck all the joy out of actually typing code into an editor" and "LLMs are bad because they sometimes force you to type code into an editor".
Garlef · 3h ago
> Having a conversation with a computer to generate code that I don't entirely understand and then have to try to review is just not fun.

Same for me. But maybe that's ultimately a UX issue? And maybe things will straighten out once we figure out how to REALLY do AI-assisted software development.

As an analogy: most people wouldn't want to dig through machine code/compiler output. At least not without proper tooling.

So: Maybe once we have good tools to understand the output it might be fun again.

(I guess this would include advances in structuring/architecting the output)

layer8 · 1h ago
The compiler analogy doesn’t quite fit, because the essential difference is that source code is (mostly) deterministic and thus can be reasoned about (you can largely predict in detail what behavior code will exhibit even before writing it), which isn’t the case for LLM instructions. That’s a major factor why many developers don’t like AI coding, because every prompt becomes a non-reproducible, literally un-reasonable experiment.
cratermoon · 2h ago
Tim doesn't address this in his essay, so I'm going to harp on it: "AI will soon be able to...". That phrase is far too load-bearing. The part of AI hype that says, "sure, it's kinda janky now, but this is just the beginning" has been repeated for 3 years now, and everything has been just around the corner the entire time. It's the first step fallacy, saying that if we can build a really tall ladder now, surely we'll soon be able to build a ladder tall enough to reach the moon.

The reality is that we've seen incremental and diminishing returns, and the promises haven't been met.

tptacek · 1h ago
Diminishing returns? Am I reading right that you believe the last 6 months has been marked by a decrease in the capability of these systems?
cratermoon · 59m ago
That's not what diminishing returns means.
tptacek · 56m ago
That's true, but it's the nearest bit of evidence at hand for how the "returns" could be "diminishing". I'm fine if someone wants to provide any other coherent claim as to how we're in a "diminishing returns" state with coding LLMs right now.

cratermoon · 52m ago
tptacek · 29m ago
What's the implication of this story for someone who started writing code with LLMs 6 months ago and is still doing so today? How has their experience changed? Have the returns to that activity diminished?
username223 · 2h ago
> As an analogy: most people wouldn't want to dig through machine code/compiler output. At least not without proper tooling.

My analogy is GUI builders from the late 90s that let you drag elements around, then generated a pile of code. They worked sometimes, but God help you if you wanted to do something the builder couldn't do, and had to edit the generated code.

Looking at compiler output is actually more pleasant. You profile your code, find the hot spots, and see that something isn't getting inlined, vectorized, etc. At that point you can either convince the compiler to do what you want or rewrite it by hand, and the task is self-contained.

bitwize · 2h ago
I think that AI assistance in coding will become enjoyable for me once the technology exists for AI to translate my brainwaves into text. Then I could think my code into the computer, greatly speeding up the OODA loop of programming.

As it is, giving high-level directives to an LLM and debugging the output seems like a waste of my time and a hindrance to my learning process. But that's how professional coding will be done in the near future. 100% human-written code will become like hand-writing a business letter in cursive: something people used to be taught in school, but no one actually does in the real world because it's too time-consuming.

Ultimately, the business world only cares about productivity and what the stopwatch says is faster, not whether you enjoy or learn from the process.

potatolicious · 2h ago
Yeah, I will say now that I've played with the AI coding tools more, it seems like there are two distinct use cases:

1 - Using coding tools in a context/language/framework you're already familiar with.

This one I have been having a lot of fun with. I am in a good position to review the AI-generated code, and also examine its implementation plan to see if it's reasonable. I am also able to decompose tasks in a way that the AI is better at handling vs. giving it vague instructions that it then does poorly on.

I feel more in control, and it feels like the AI is stripping away drudgery. For example, for a side project I've been using Claude Code with an iOS app, a domain I've spent many years in. It's a treat - it's able to compose a lot of boilerplate and do light integrations that I can easily write myself, but find annoying.

2 - Using coding tools in a context/language/framework you don't actually know.

I know next to nothing about web frontend frameworks, but for various side projects wanted to stand up some simple web frontends, and this is where AI code tools have been a frustration.

I don't know what exactly I want from the AI, because I don't know these frameworks. I am poorly equipped to review the code that it writes. When it fails (and it fails a lot) I have trouble diagnosing the underlying issues and fixing it myself - so I have to re-prompt the LLM with symptoms, leading to frustrating loops that feel like two cave-dwellers trying to figure out a crashed spaceship.

I've been able to stand up a lot of stuff that I otherwise would never have been able to, but I'm 99% sure the code is utter shit, and I'm not in a position to really quantify or understand the shit in any way.

I suppose if I were properly "vibe coding" I shouldn't care about the fact that the AI produced a katamari ball of code held together by bubble gum. But I do care.

Anyway, for use case #1 I'm a big fan of these tools, but it's really not the "get out of learning your shit" card that it's sometimes hyped up to be.

saratogacx · 1h ago
For case 2, I've had a lot of luck starting by asking the LLM: "I have experience in X, Y, and Z technologies; help me translate this project into those terms, and list anything this code does that doesn't align with typical use of the technologies it's chosen." This has given me a great "intro" that moves me closer to being able to understand.

Once I've done that and asked a few follow-up questions, I feel much better diving into the generated code.

thadt · 3h ago
On Learning:

My wife, a high school teacher, remarked to me the other day “you know, it’s sad that my new students aren’t going to be able to do any of the fun online exercises that I used to run.”

She’s all but entirely removed computers from her daily class workflow. Almost to a student, “research” has become “type it into Google and write down whatever the AI spits out at the top of the page” - no matter how much she admonishes them not to do it. We don’t even need to address what genAI does to their writing assignments. She says this is prevalent across the board, both in middle and high school. If educators don’t adapt rapidly, this is going to hit us hard and fast.

bgwalter · 3h ago
I notice a couple of things in the pro-AI [1] posts: all start writing in a lengthy style like Steve Yegge at his peak. All are written by ex-programmers who are on the management/founder side now. All of them cite programmer friends who claim that AI is useful.

It is very strange that no real open source project uses "AI" in any way. Perhaps these friends work on closed source and say what their manager wants them to say? Or they no longer care? Or they work in "AI" companies?

[1] He does mention return on investment doubts and waste of energy, but claims that the agent nonsense works (without public evidence).

cesarb · 3m ago
> It is very strange that no real open source project uses "AI" in any way.

Using genAI is particularly hard on open source projects due to worries about licensing: if your project is under license X, you don't want to risk including any code with a license incompatible with X, or even under a license compatible with X but without the correct attribution.

It's still not settled whether genAI can really "launder" the license of the code in its training set, or whether legal theories like "subconscious copying" would apply. In the latter case, using genAI could be very risky.

orangecat · 1h ago
I'm a programmer, not a manager. I don't have a blog. AI is useful.

> It is very strange that no real open source project uses "AI" in any way.

How do you know? Given the strong opposition that lots of people have I wouldn't expect its use to be actively publicized. But yes, I would expect that plenty of open source contributors are at the very least using Cursor-style tab completion or having AIs generate boilerplate code.

> Perhaps these friends work on closed source and say what their manager wants them to say?

"Everyone who disagrees with me is paid to lie" is a really tiresome refrain.

bwfan123 · 3h ago
There is a large number of wannabe hands-on coders who have moved on to become management - and they all either have coder-envy or coder-hatred.

To them, gen-ai is a savior - Earlier, they felt out of the game - now, they feel like they can compete. Earlier they were wannabe coders. Now they are legit.

But, this will last only until they accept a chunk of code put out by co-pilot and then spend the next 2 days wrangling with it. At that point, it dawns on them what these tools can actually do.

zurfer · 27m ago
Using AI in real projects is not super simple, but if you lean into it, it can accelerate things.

Anecdotally, check this out: https://github.com/antiwork/gumroad/graphs/contributors

Devin is an AI agent.

rjsw · 1h ago
At least in my main open source project, use of AI is prohibited due to potentially tainting the codebase with stuff derived from other GPL projects.
perplex · 3h ago
> I really don’t think there’s a coherent pro-genAI case to be made in the education context

My own personal experience is that Gen AI is an amazing tool to support learning, when used properly.

Seems likely there will be changes in higher education to work with gen AI instead of against it, and it could be a positive change for both teachers and students.

jplusequalt · 3h ago
>Seems likely there will be changes in higher education to work with gen AI instead of against it, and it could be a positive change for both teachers and students.

Since we're using anecdotes, let me leave one as well--it's been my experience that humans choose the path of least resistance. In the context of education, I saw a large percentage of my peers during K-12 do the bare minimum to get by in the classes, and in college I saw many resorting to Chegg to cheat on their assignments/tests. In both cases I believe it was the same motivation--half-assing work/cheating takes less effort and time.

Now, what happens when you give those same children access to an LLM that can do essentially ALL their work for them? If I'm right, those children will increasingly lean on those LLMs to do as much of their schoolwork/homework as possible, because the alternative means they have less time to scroll on TikTok.

But wait, this isn't an anecdote, it's already happening! Here's an excellent article that details the damage these tools are already causing to our students https://www.404media.co/teachers-are-not-ok-ai-chatgpt/.

>[blank] is an amazing tool ... when used properly

You could say the same thing about a myriad of controversial things that currently exist. But we don't live in a perfect world--we live in a world where money is king, and oftentimes what makes money is in direct conflict with utilitarianism.

ryandrake · 23m ago
> Now, what happens when you give those same children access to an LLM that can do essentially ALL their work for them? If I'm right, those children will increasingly lean on those LLMs to do as much of their schoolwork/homework as possible, because the alternative means they have less time to scroll on TikTok.

I think schools are going to have to very quickly re-evaluate their reliance on "having done homework" and using essays as evidence that a student has mastered a subject. If an LLM can easily do something, then that thing is no longer measuring anything meaningful.

A school's curriculum should be created assuming LLMs exist and that students will always use them to bypass make-work.

dowager_dan99 · 2h ago
>> an amazing tool to support learning, when used properly.

How can kids, think K-12, who don't even know how to "use" the internet properly - or even their phones - learn how to learn with AI? Just as social media and mobile apps reduced the internet to easy, mindless clicking, LLMs make school a mechanical task. It feels like your argument is similar to LLMs helping experienced, senior developers code more effectively while eliminating many chances to grow the skills needed to join that group. Sounds like you already know how to learn and can use AI to enhance that. My 12-yr-old is not there yet and may never get there.

lonelyasacloud · 2h ago
>> How can kids, think K-12, who don't even know how to "use" the internet properly - or even their phones - learn how to learn with AI?

For every person/child that just wants the answer there will be at least some that will want to know why. And these endlessly patient machines are very good at feeding that curiosity.

jplusequalt · 6m ago
>For every person/child that just wants the answer there will be at least some that will want to know why

You're correct, but let's be honest here: the majority will use it as a means to get their homework over and done with so they can return to TikTok. Is that the society we want to cultivate?

>And these endlessly patient machines are very good at feeding that curiosity

They're also very good at feeding you factually incorrect information. In comparison, a textbook was crafted by experts in their field and is often fact-checked by many more experts before it is published.

rightbyte · 2h ago
> My 12-yr-old is not there yet and may never get there.

Wouldn't classroom exams enforce that, though? Like, imagine LLMs as an older sibling or parent who would help pupils cheat on essays.

murrayb · 3h ago
I think he is talking about education as in school/college/university, rather than learning?

I too am finding AI incredibly useful for learning. I use it for high-level overviews and to help guide me to resources (online formats and books) for deeper dives. Claude has so far proven to be an excellent learning partner; no doubt other models are similarly good.

strict9 · 3h ago
That is my take. Continuing education via prompt is great, I try to do it every day. Despite years of use I still get that magic feeling when asking about some obscure topic I want to know more about.

But that doesn't mean I think my kids should primarily get K-12 and college education this way.

Aperocky · 3h ago
Computers and the internet have been around for 20 years, and yet the evaluation systems of our education have largely remained the same.

I don't hold my breath on this.

icedchai · 2h ago
Where are you located? The Internet boom in the US happened in the mid-90's. My first part-time ISP job was in 1994.
dowager_dan99 · 2h ago
Dial-up penetration in the mid-90's was still very thin, and high-speed access was limited to universities and the biggest companies. Here are the numbers ChatGPT found for me:

* 1990s: Internet access was rare. By 1995, only 14% of Americans were online.

* 2000: Approximately 43% of U.S. households had internet access.

* 2005: The number increased to 68%.

* 2010: Around 72% of households were connected.

* 2015: The figure rose to 75%.

* 2020: Approximately 93% of U.S. adults used the internet, indicating widespread household access.

icedchai · 2h ago
Yes, it was thin, but 1995 - 96 was when "Internet" went mainstream. Depending on your area, you could have several dialup ISP options. Major metros like Boston had dozens. I remember hearing ISP ads on the radio!

1995 was when Windows 95 launched, and with its built in dialup networking support, allowed a "normal" person to easily get online. 1995 was the Netscape IPO, which kicked off the dot-com bubble. 1995 was when Amazon first launched their site.

SkyBelow · 3h ago
The issue with education in particular is a much deeper one. GenAI has ripped the bandages off and exposed the wound to the world, while also greatly accelerating its decay, but it was not responsible for creating it.

What is the purpose of education? Is it to learn, or to gain credentials that you have learned? Too much of education has become the latter, to the point we have sacrificed the former. Eventually this brings down both, as a degree gains a reputation of no longer signifying the former ever happened.

Our existing systems that check for learning before granting the degree were largely not ready for the impact of genAI, and teachers and professors have adapted poorly, sometimes due to a lack of understanding of the technology, often due to their hands being tied.

GenAI used to cheat is a great detriment to education, but a student using genAI to learn can benefit greatly, as long as they have matured enough in their education process to have critical thinking to handle mishaps by the AI and to properly differentiate when they are learning and when they are having the AI do the work for them (I don't say cheat here because some students will accidentally cross the line and 'cheat' often carries a hint of mens rea). To the mature enough student interested in learning more, genAI is a worthwhile tool.

How do we handle those who use it to cheat? How do we handle students who are too immature in their education journey to use the tool effectively? Are we ready to have a discussion about learners who only care about the degree, for whom the education needed to earn it is just a means to an end? How do teachers (and increasingly professors) fight back against the pressure of systems that optimize for granting credentials and just assume the education will be behind those credentials (Goodhart's Law, anyone)? Those questions don't exist because of genAI, but genAI greatly increased our need to answer them.

strict9 · 3h ago
Angst is the best way to put it.

I use AI every day, I feel like it makes me more productive, and I'm generally supportive of it.

But the angst is something else. When nearly every tech-related startup seems to be about making FTEs redundant via AI, it leaves me with a bad feeling for the future. Same with the impact on students and learning.

Not sure where we go from here. But this feels spot on:

>I think that the best we can hope for is the eventual financial meltdown leaving a few useful islands of things that are actually useful at prices that make sense.

fellowniusmonk · 10m ago
All the angst is 100% manufactured by policy. LLMs wouldn't be hated if they didn't dovetail with the end of ZIRP, with Section 174 specifically targeting engineering roles to be tax losers so others could be tax winners, and with macroeconomic uncertainty (which compounds the problems of 174).

If our roles hadn't been specifically targeted by government policy for reduction, as a way to buoy government revenues and prop up the budgetary bottom line in the face of decreasing taxes for favored parties, the mood would be very different.

This is simply policy-induced multifactorial collapse.

And LLMs get to take the blame from engineers because that is the excuse being used. Pretty much every old-school hacker who has played around with them recognizes that LLMs are impressive and sci-fi; it's like my childhood dream come true for interface design.

I cannot begin to say how fucking stupid the people in charge of these policies are. I'm an old head; I know exactly the type of '80s executive that actively likes to see the nerds suffer, because we're all irritating poindexters to them.

The pattern of actively attacking the freedoms and sabotaging the incomes of knowledge workers is not remotely rare, and it's often done this stupidly, at the expense of a country's economic footing and ability to innovate.

bob1029 · 3h ago
I agree that some kind of meltdown/crash would be the best possible thing to happen. There are too many players not adding any value to the ecosystem at this point. MCP is a great example of this - complexity merchants inventing new markets to operate in. We need something severe to scare off the bullshit artists for a while.

How many civil engineering projects could we have completed ahead of schedule and under budget if we applied the same amount of wild-eyed VC and genius tier attention to the problems at hand?

pzo · 2h ago
MCP is currently used only by real power users, and mostly in software-dev settings, but I can see it being used by regular users in the future. There is no decent MCP client for non-tech-savvy users yet. But I think if browsers build in better implementations, they will get used. Think of what Perplexity Comet or The Browser Company's Dia are trying to do. It's still very early for MCP.

schmichael · 3h ago
> I really don’t think there’s a coherent pro-genAI case to be made in the education context.

I think it’s simple: the reign of the essay is over. Educators must find a new way to judge a student’s understanding.

Presentations, artwork, in class writing, media, discussions and debates, skits, even good old fashioned quizzes all still work fine for getting students to demonstrate understanding.

As the son of two teachers I remember my parents spending hours in the evenings grading essays. While writing is a critical skill, and essays contain a good bit of information, I’m not sure education wasn’t overindexing on them already. They’re easy to assign and grade, but there’s so much toil on both ends unrelated to the core subject matter.

ryandrake · 9m ago
I'd also say that the era of graded homework in general is over, as is using "proof of toil" assignments as a meaningful measurement of a student's progress/mastery.

thadt · 2h ago
I posit that of the various uses of student writing, the most important isn't communication or even assessment, but synthesis. Writing forces you to grapple with a subject in a way that clarifies your thinking. It's easy to think you understand something until you have to explain or apply it.

Skipping that entirely, or using a LLM to do most of it for you, skips something rather important.

spacephysics · 55m ago
I disagree with genAI not having an education use case.

I think a useful LLM for education would be one with heavy guardrails, which is "forced" to provide step-by-step, back-and-forth tutoring instead of just giving out answers.

Right now hallucinations would be problematic, but assuming it's in a domain like math (and maybe combined with something like Wolfram to verify outputs), I could see this theoretical tool being very helpful for learning mathematics, or even other sciences.

For more open-ended subjects like english, history, etc then it may be less useful.

Perhaps only as a demonstration: maybe an LLM is prompted to pretend to be a peasant from medieval Europe, and with text-to-voice we could have students as a group interact with and ask questions of the LLM. In this case, maybe the LLM is only trained on historical text from specific time periods, with settings to be more deterministic and reduce hallucinations.
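
A minimal sketch of what the "heavy guardrails" half could look like, assuming the OpenAI Node SDK; the prompt, model name, and function are all illustrative, and a real system would still need to check the outputs:

```ts
// tutor.ts (hypothetical): wrap the model so it tutors instead of answering
import OpenAI from 'openai';

const client = new OpenAI(); // assumes OPENAI_API_KEY is set

const TUTOR_SYSTEM_PROMPT = `You are a step-by-step math tutor.
Never state the final answer, even if asked directly.
Ask exactly one guiding question per reply.
If the student's last step is wrong, point at the step, not the fix.`;

type Turn = { role: 'user' | 'assistant'; content: string };

export async function tutorReply(history: Turn[]): Promise<string> {
  const response = await client.chat.completions.create({
    model: 'gpt-4o-mini', // illustrative model name
    messages: [{ role: 'system', content: TUTOR_SYSTEM_PROMPT }, ...history],
  });
  return response.choices[0].message.content ?? '';
}
```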

jplusequalt · 3h ago
Wholeheartedly agree. I can't help but think that proponents of LLMs are not seriously considering the impact it will have on our ability to communicate with each other, or to reason on our own accord without the assistance of an LLM.

It confounds me how these people would trust the same companies who fueled the decay of social discourse via the internet with the creation of AI models which aim to encroach on every aspect of our lives.

Workaccount2 · 2h ago
For me it threatens to be like a spell check. Back 20 years ago when I was still in school and still hand writing for many assignments, my spelling was very good.

Nowadays it's been a long time since my brain totally checked out on spelling. Everything I write in every case has spell check, so why waste neurons on spelling?

I fear the same will happen on a much broader level with AI.

kiitos · 2h ago
What? Who is spending any brain cycles on spelling? When you write a word, you just write the word, the spelling is... intrinsic? automatic? certainly not something that you have to, like, actively think about?
steveklabnik · 1h ago
I both agree and disagree, I don't regularly think about spelling, but there are certain words I know my brain always gets wrong, so when I run into one of those, things come crashing to a halt for a second while I try to remember if I'm still spelling them wrong or if I've finally trained myself to do it correctly.
username223 · 2h ago
... until spellcheck gets "AI," and starts turning correctly-spelled words into different words that it thinks are more likely. (Don't get me started on "its" vs. "it's," which autocorrect frequently randomly incorrects.)
soulofmischief · 3h ago
Some of us realize this technology was inevitable and are more focused on figuring out how society evolves from here instead of complaining and trying to legislate away math and prevent honest people from using these tools while criminals freely make use of them.
dowager_dan99 · 3h ago
This is a really negative and insulting comment towards people who are struggling with a very real, very emotional response to AI, and super-concerned about both the real and potential negatives that the rabid boosters won't even acknowledge. You don't have to "play the game" to make an impact, it's valid to try and challenge the math and change the rules too.
soulofmischief · 2h ago
> This is a really negative and insulting comment towards people who are struggling with a very real, very emotional response to AI

I disagree that my comment was negative at all. Many of those same people (not all) spend a lot of time making negative comments towards my work in AI, and tossing around authoritarian ideas of restriction in domains they understand, like art and literature, while failing to properly engage with the real issues, such as intelligent mass surveillance and increased access to harmful information. They would sooner take these new freedom weapons out of the hands of the people while companies like Palantir and NSO Group continue to use them at scale.

> super-concerned about both the real and potential negatives that the rabid boosters won't even acknowledge

So am I, the difference is I am having a rational and not an emotional response, and I have spent a lot of time deeply understanding machine learning for the last decade in order to be able to have a measured, informed response.

> You don't have to "play the game" to make an impact, it's valid to try and challenge the math and change the rules too

I firmly believe you cannot ethically outlaw math, and this is part of why I have trouble empathizing with those who feel otherwise. People are so quick to support authoritarian power structures the moment it supposedly benefits them or their world view. Meanwhile, the informed are doing what they can to prevent this stuff from being used to surveil and classify humanity, and to find a balance that allows humans to coexist with artificial intelligence.

We are not falling prey to reactionary politics and disinformation, and we are not willing to needlessly expand government overreach and legislate away critical individual freedom in order to achieve our goals.

spencerflem · 51m ago
It's not outlawing math, it's outlawing what companies can sell as a product.

That's like saying you can't outlaw selling bombs in a store because it's "chemistry".

Or even for usage: can we not outlaw shooting someone with a gun because it's "projectile physics"?

I'm glad you do oppose Palantir - we're on the same side and I support what you're doing! - but I also think you're leaving the most effective solution on the table by ignoring regulatory options.

jplusequalt · 3h ago
>Some of us realize this technology was inevitable

How was any of this inevitable? Point me to which law of physics demanded we reach this state of the universe. These companies actively choose to train these models, and by framing their development as "inevitable" you are helping absolve them of any of the negative shit they have/will cause.

>figuring out how society evolves from here instead of complaining and trying to legislate away math

Could you not apply this exact logic to the creation of nuclear weaponry--perhaps the greatest example of tragedy of the commons?

>prevent honest people from using these tools while criminals freely make use of them

What is your argument here? Should we suggest that everyone learn how to money launder to even the playing field against criminals?

soulofmischief · 2h ago
> Point me to which law of physics demanded we reach this state of the universe

Gestures vaguely around at everything

Intelligence is intelligence, and we are beginning to really get down to the fundamentals of self-organization and how order naturally emerges from chaos.

> Could you not apply this exact logic to the creation of nuclear weaponry--perhaps the greatest example of tragedy of the commons?

Yes, I can. Access to information is one thing (it must be carefully handled, but information wants to be free, and there should be no law determining what one person can say to another, barring NDAs and government classification of national secrets, which doesn't include math and physics), but we absolutely have international treaties to limit nuclear proliferation, and we also have countries that do not participate in these treaties, or violate them, which illustrates my point that criminals will do whatever they want.

> Should we suggest that everyone learn how to money launder to even the playing field against criminals?

I have no interest in entertaining your straw men. You're intelligent enough to understand context.

bgwalter · 2h ago
DDT was a very successful insecticide that was outlawed due to its adverse effects on humans.
soulofmischief · 2h ago
I shouldn't have to tell you that producing, distributing, and using a toxic chemical that negatively affects the earth and its biosphere is much, much different from allowing people to train and use models for personal use. This is a massive strawman and doesn't deserve even as much engagement as I've given it here.

absurdo · 2h ago
It didn’t have a trillion dollar marketing campaign behind it.
harimau777 · 2h ago
Have they come up with anything? So far I haven't seen any solutions presented that are both politically viable and don't result in people being even more under the thumb of late stage capitalism.
soulofmischief · 2h ago
This is one of the most complicated issues humanity has ever dealt with. Don't hold your breath, it's gonna be a while. Society at large doesn't even have a healthy relationship with the internet and mobile phones, these advancements in artificial intelligence came at both a good and awful time.
collingreen · 3h ago
If only there were more nuances and options between those two extremes! Oh well, back to the anti math legislation pits I guess.
soulofmischief · 2h ago
There are many nuances to this argument, but I am not trying to write a novel in a hacker news comment. Certain broad strokes absolutely apply, and when you get down to brass tacks it's about respecting personal freedom.
swyx · 1h ago
> Just to be clear, I note an absence of concern for cost and carbon in these conversations. Which is unacceptable. But let’s move on.

Hold on, it's very simple. Here's a one-liner even degrowthers would love: extra humans cost a lot more in money and carbon than it costs to have an LLM spin up and down to do work that would otherwise not get done.

throwawaybob420 · 3h ago
It’s not angst to see the people who run the companies we work for “encourage” us to use Claude to write our code knowing full well it’s their attempt to see if they really can fire us without a hit in “productivity”.

It’s not angst to see students throughout the entire spectrum end up using ChatGPT to write their papers, summarize 3 paragraphs, and use it to bypass any learning.

It’s not angst to see people ask a question of an LLM and take what it says as gospel.

It’s not angst to understand the environmental impact of all this stupid fucking shit.

It’s not angst to see the danger in generative AI not only just creating slop, but further blurring the lines of real and fake.

It’s not angst to see the vast amount of non-consensual porn being generated of people without their knowledge.

Feel like I’m going fucking crazy here, just day after day of people bowing down at the altar and legit not giving a single fuck about what happens after rofl

bluefirebrand · 1h ago
Hey for what it's worth, you aren't alone

This is a really wild and unpredictable time, and it's ok to see the problems looming and feel unsettled at how easily people are ignoring the potential oncoming train

I would suggest taking some time for yourself to distance yourself from this as much as you can for your own mental health

Ride this out as best you can until things settle down a bit. You aren't alone

nikolayasdf123 · 3h ago
> Go programming language is especially well-suited to LLM-driven automation. It’s small, has a large standard library, and a culture that has strong shared idioms for doing almost anything

+1 to this. Thank you, `go fmt`, for uniform code (even a culture of uniform test style!). Thank you, culture of minimal dependencies. And of course the Go standard library and static/runtime tooling. Thank you, simple code that is easy for humans to write...

and as it turns out for AIs too.

icedchai · 2h ago
I have found LLMs (mainly using Claude) are, indeed, excellent at spitting out Go boilerplate.
piker · 3h ago
> I really don’t think there’s a coherent pro-genAI case to be made in the education context

I use ChatGPT as an RNG of math problems to work through with my kid sometimes.

Herring · 1h ago
I used it to generate SQL questions set in real-world scenarios. I needed to pick up joins intuitively, and the websites I could find were pretty dull.
lowsong · 3h ago
> at the moment I’m mostly in tune with Thomas Ptacek’s My AI Skeptic Friends Are All Nuts. It’s long and (fortunately) well-written and I (mostly) find it hard to disagree with.

Ptacek has spent the past week getting dunked on in public for that article. I don't think aligning with it lends you a lot of credibility.

> If you’re interested in that thinking, here’s a sample; a slide deck by a Keith Riegert for the book-publishing business which, granted, is a bit stagnant and a whole lot overconcentrated these days. I suspect scrolling through it will produce a strong emotional reaction for quite a few readers here. It’s also useful in that it talks specifically about costs.

You're not wrong here. I read the deck and the word that comes to mind is "disgusting". Then again, the morally bankrupt have always done horrible things to make a quick buck — AI is no different.

icedchai · 2h ago
Getting "dunked" only means it's controversial, not necessarily wrong. Developers who don't embrace AI tools are going to get left behind.
lowsong · 1h ago
> Getting "dunked" only means it's controversial, not necessarily wrong.

It undermines the author's position of being "moderate" if they align with perhaps the most divisive and aggressively written pro-AI puff piece doing the rounds.

> Developers who don't embrace AI tools are going to get left behind.

I'm not sure how to respond to this. I am doubtful a comment on Hacker News will change your mind, but I'd ask you to think about two questions.

If AI is going to be as revolutionary in our industry as other changes of the past, like web or mobile, then how would a similar statement sound around those? Is saying "Developers who don't embrace mobile development are going to get left behind" a sensible statement? I don't think so, even with how huge mobile has been. Same with other big shifts. "Developers who don't embrace microservice architecture are going to get left behind"? Maybe more comparable, but equally silly. So, why would it be different than those? Do you think LLM tools are more impactful than any other change in history?

Second, if AI truly as as groundbreakingly revolutionary as you suggest, what happens to us? Maybe you'll call me a luddite, raging against the loss of jobs when confronted with automated looms, but you'll have to forgive me for not welcoming my own destruction with open arms.

icedchai · 58m ago
I understand your skepticism. I think, in 20 years, when we look back, we'll see this time was the beginning of a fundamental paradigm shift in software development, similar in magnitude to the move from desktop to web development in the '90s. If I had told you, in 1996, that "developers who don't embrace web development will be left behind", it would have been an accurate statement.
bgwalter · 2h ago
Sure, tptacek will outprogram all of us. With his two GitHub repositories, one of which is a POC.
icedchai · 2h ago
Have you tried any of the tools, like Cursor or Zed? They increase productivity if you use them correctly. If you give them quality inputs (well-written, spec-like prompts), instruct them to work in phases, and provide feedback on testing, the results can be very, very good. Unsurprisingly, this is similar to what you'd need to give a human to get positive results.
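(One hypothetical shape such a spec-like, phased prompt might take; the goal, file name, and route here are invented for illustration:)

```
Goal: add CSV export to the monthly reports page.

Phase 1: Read report_service.go and list the data types involved.
         Stop and wait for my review before writing any code.
Phase 2: Implement the export behind the existing /reports route,
         reusing the current auth middleware. No new dependencies.
Phase 3: Add table-driven tests. I will run them and paste any
         failures back to you before you continue.
```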
kiitos · 2h ago
Then maybe replace "getting dunked on" with "getting ratio'd" -- underlying point is the same, the post was a bad take.
icedchai · 2h ago
What was bad about it? Everything he wrote sounded very pragmatic to me.
Havoc · 2h ago
> I think about the carbon that’s poisoning the planet my children have to live on.

Tbh I think we’re going to need a big breakthrough to fix that anyway. Like fusion etc.

A bit less proompting isn't going to save the day.

That’s not to say one shouldn’t be mindful. I just think it’s no longer enough.

absurdo · 3h ago
Poor HN.

Is there a glimpse of the next hype train we can prepare to board once AI gets dulled down? This has basically made the site unusable.

ManlyBread · 3h ago
My sentiments exactly. Lately, browsing HN feels like a sales pitch for LLMs, complete with the same snark about "luddites" and the same promises of future glory I remember from back when NFTs were the hot new thing in tech. Two more weeks, I guess.
Kiro · 2h ago
NFTs had zero utility, but even the most anti-AI posts are now "OK, AI can be useful, but what are the costs?" It's clearly something different.
whynotminot · 1h ago
Really? I feel like Hacker News is so anti-AI that I go to other places for the latest. Anything posted here gets destroyed by cranky programmers desperately hoping this is just a fad.
yoz-y · 3h ago
At the moment 6 out of 30 front-page articles are about AI. That’s honestly quite okay.
lagniappe · 3h ago
I use something called the Rust Index, where I compare a term or topic to the number of posts with "written in Rust" in the title.
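(A minimal sketch of how such an index could be computed, assuming the public HN Algolia search API at https://hn.algolia.com/api/v1/search; note that its full-text query only approximates title-only matching:)

```go
// rustindex: a toy take on the "Rust Index" described above, i.e.
// how many HN stories match a topic relative to how many match
// "written in rust".
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"os"
)

// storyHits returns the total number of HN stories matching query,
// read from the nbHits count in the Algolia search response.
func storyHits(query string) (int, error) {
	u := "https://hn.algolia.com/api/v1/search?tags=story&query=" + url.QueryEscape(query)
	resp, err := http.Get(u)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	var body struct {
		NbHits int `json:"nbHits"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return 0, err
	}
	return body.NbHits, nil
}

func main() {
	topic := "ai agents"
	if len(os.Args) > 1 {
		topic = os.Args[1]
	}
	t, err := storyHits(topic)
	if err != nil {
		panic(err)
	}
	r, err := storyHits("written in rust")
	if err != nil {
		panic(err)
	}
	if r == 0 {
		panic("no baseline stories found")
	}
	fmt.Printf("Rust Index for %q: %.2f (%d / %d)\n",
		topic, float64(t)/float64(r), t, r)
}
```

Run it as, say, `go run rustindex.go kubernetes`.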
steveklabnik · 1h ago
HN old-timers would call this the Erlang Index.
absurdo · 3h ago
C-can we get an open-source version of this?

Is it written in Rust?

layer8 · 1h ago
Anti-aging is an evergreen.
acedTrex · 3h ago
It has made large parts of the internet, and frankly some previously solid tools and products, unusable.

Just look at the GitHub product being transformed into absolute slop central; it's wild. GitHub Universe was exclusively focused on useless LLM additions.

gh0stcat · 3h ago
I'm interested to see what the landscape of public code will look like in the next few years. With sites like Stack Overflow dropping off, discussions moving to Discord, and code generation flooding GitHub, writing your own high-quality code in the open might become very valuable.
acedTrex · 2h ago
I am very bearish on that idea, to be honest; I think the field will stagnate.
rightbyte · 2h ago
Giving away secret sauce for free is not the way of the new gilded era.
greybox · 3h ago
This is probably the best opinion piece I've read so far on GenAI
flufluflufluffy · 2h ago
Yep, this basically sums up all of my thoughts about AI perfectly, especially the environmental impact.
timr · 3h ago
> On the money side? I don’t see how the math and the capex work. And all the time, I think about the carbon that’s poisoning the planet my children have to live on.

The "math and capex" are inextricably intertwined with "the carbon". If these tools have some value, then we can finally invest in forms of energy (i.e. nuclear) that will solve the underlying problem, and we'll all be better off. If the tools have no net value at a market-clearing price for energy (as purported), then it won't be a problem.

I mean, maybe the productive way to say this is that we should more formally link the environmental cost of energy production to the market cost of energy. But as phrased (and, I suspect, implied), it sounds like "people who use LLMs are just profligate consumers who don't care about the environment the way that I do," and that any societal advancement that consumes energy (as most do) is subject to this kind of generalized luddite criticism.

lyu07282 · 2h ago
> If these tools have some value, then we can finally invest in forms of energy (i.e. nuclear) that will solve the underlying problem

I'm confused what you are saying, do you suggest "the market" will somehow do something to address climate change? By what mechanism? And what do LLMs have to do with that?

The problem with LLMs is that they require exorbitant amounts of energy and fresh water to operate, driving a global increase in ecological destruction and carbon emissions. [ https://www.greenmemag.com/science-technology/googles-contro... ]

That's not exactly a new thing; they're just making an existing problem worse. What is different now with LLMs, as opposed to, for example, crypto mining?

timr · 1h ago
> I'm confused what you are saying, do you suggest "the market" will somehow do something to address climate change? By what mechanism? And what do LLMs have to do with that?

No, I'm suggesting that the market will take care of the cost/benefit equation, and that the externalities are part of the costs. We could always do a better job of making sure that costs capture these externalities, but that's not the same thing as what the author seems to be saying.

(Also I'm saying that we need to get on with nuclear already, but that's a secondary point.)

> The problem with LLMs is that they require exorbitant amounts of energy and fresh water to operate, driving a global increase in ecological destruction and carbon emissions.

They no more "require" this than operating an electric car "requires" the same thing. While there may be environmental extremists who advocate for a wholesale elimination of cars, most sane people would be happy with the balance between cost and benefit represented by electric cars. Ergo, a similar balance must exist for LLMs.

lyu07282 · 42m ago
> I'm suggesting that the market will take care of the cost/benefit equation, and that the externalities are part of the costs.

You believe that climate change is an externality that the market is capable of factoring into the cost/benefit equation. Then I don't understand why you disagreed with the statement "the market will somehow do something to address climate change". There is a more fundamental disagreement here.

You said:

> If these tools [LLMs/ai] have some value, then we can finally invest in forms of energy (i.e. nuclear) that will solve the underlying problem

And again, why? By what mechanism? Let's say Microsoft 10x's its profit through AI; then it will "finally invest in forms of energy (i.e. nuclear) that will solve the underlying problem". But why? Why would it? And why do you say "we" if we're talking about the market?

jillesvangurp · 2h ago
I think the concerns about climate and CO2 emissions are valid but not showstoppers. The big picture here is that we are living through two amazing revolutions at the same time:

1) The emergence of LLMs and AIs that have taken the Turing test from science fiction to basically irrelevant. AI is improving at an absolutely mind-boggling rate.

2) The transition from a fossil-fuel-powered world to a world that will be net zero in a few decades. The pace in the last five years has been amazing. China is rolling out amounts of solar and batteries that were unthinkable in even the most optimistic predictions a few years ago. The rest of the world is struggling to keep up, and that's causing some issues, with some countries running backward (mainly the US).

It's true that a lot of AI is powered by a mix of old coal plants, cheap Texan gas, and a few other things that aren't sustainable (or cheap, if you consider the cleanup cost). However, I live in the EU; we just got cut off from cheap Russian gas, are now running on expensive imported gas (e.g. from Texas), and have some pet peeves about data sovereignty that are causing companies like OpenAI, Meta, and Google to have to use local data centers to serve their European users. Which means that stuff is being powered with locally supplied electricity from a mix of old dirty legacy infrastructure and newer, more or less clean infrastructure. That mix is shifting rapidly towards renewables.

The thing is that old dirty infrastructure has been on a downward trajectory for years. There are not a lot of new gas plants being built (LNG is not cheap), and coal plants are going extinct in a hurry because they are dirty and expensive to operate. The few gas plants that are still being built sit in standby mode much of the time and lose money, because renewables are cheaper. Power is expensive here but relatively clean. The way to get prices down is not to import more LNG and burn it but to do the opposite.

What I like about things that increase demand for electricity is that they generate investment in clean-energy solutions and actually accelerate the transition. The big picture here is that the move to net zero is going to vastly increase demands on power grids. If you add up everything needed for industry, transport, domestic and industrial heating, aviation, etc., it's a lot. But the payoffs are also huge. People think of this as cost; that's short-term thinking. The big picture here is long term, and the payoff is net zero and cheap power, making energy-intensive things both affordable and sustainable. We're not there yet, but we're on a path towards that.

For AI that means, yes, we need a lot of terawatts of power, and some of the uses of AI seem frivolous and not that useful. But the big picture is that this is changing a lot of things as well. I see power needs as a challenge rather than a problem or a reason to sit on our hands. It would be nice if that power were cheap. It so happens that the cheapest way to generate power right now is renewables. I don't think dirty power is long-term smart, profitable, or necessary, and we could definitely do more to speed up its demise. But at the same time, this increased pressure on our grids is driving the very changes we need to make that happen.

sovietmudkipz · 3h ago
Minor off-topic quibble about streams: I’ve been learning about network programming for realtime multiplayer games, specifically about input and output streams. I just want to voice that the names are a bit confusing due to the perspective I adopt when I think about them.

Input stream = output from the perspective of the consumer. Things come out of this stream that I can programmatically react to. Output stream = input from the perspective of the producer. This is a stream you put stuff into.

…so when this article starts “My input stream is full of it…” the author is saying they’re seeing output of fear and angst in their feeds.

Am I alone in thinking this is a bit unintuitive?
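(For what it's worth, Go's io package sidesteps the ambiguity by naming each endpoint after what its holder does with it, rather than after the direction of flow; a minimal sketch:)

```go
// A stream is "input" or "output" only relative to who is holding
// it. Go names the interfaces for the holder's action: you Read
// from the stream that someone else wrote into, and Write into the
// stream that someone else will read from.
package main

import (
	"fmt"
	"io"
	"strings"
)

// consume treats r as its input stream: data comes *out* of r from
// the consumer's point of view, even though it was the producer's
// output.
func consume(r io.Reader) string {
	b, _ := io.ReadAll(r)
	return string(b)
}

// produce treats w as its output stream: the producer puts data
// *into* w, and it becomes some consumer's input.
func produce(w io.Writer) {
	fmt.Fprint(w, "hello from the producer")
}

func main() {
	var buf strings.Builder
	produce(&buf) // the producer's output stream...
	// ...becomes the consumer's input stream.
	fmt.Println(consume(strings.NewReader(buf.String())))
}
```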

nemomarx · 3h ago
I think an input stream is input from the perspective of the consumer? Like it's things you are consuming or taking as inputs. Output is things you emit.

Your input is of course someone else's output, and vice versa, but you want to keep your description and thoughts to one perspective, and in a first-person blog that's clearly the author's POV, right?

greybox · 2h ago
> horrifying survey of genAI’s impact on secondary and tertiary education.

I agree with this. It's probably terrible for structured education for our children.

The one and only caveat: self-driven language learning.

The one and only actual use (outside of generating funny memes) I've had from any LLM so far is language learning. That I would pay for. Not $30/month, mind you... but something. I ask the model to break down a target-language sentence for me, explaining each and every grammar point, and it does so very well, sometimes even explaining the cultural relevance of certain phrases. This is great.
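(A hypothetical example of the kind of request I mean; the sentence is just an illustration:)

```
Break down this Japanese sentence for me, word by word:
「よろしくお願いします」 ("I look forward to working with you")
For each word or particle, explain the grammar point involved,
and note any cultural context for when the phrase is used.
```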

I've not found any other use for it yet, though. As a game engine programmer (C++), the code I write nowadays is quite deliberate and relatively little compared to a web developer's (I used to be one; I'm not pooping on web devs). So if we're talking about the time/cost of having me as a developer work on the game engine, I'm not saving any time or money by first asking Claude to type what I was going to type anyway. And it's not advanced enough yet to hold the context of our entire codebase spanning multiple components.

Edit: Migaku [https://migaku.com/] is a great language-learning application that uses this.

Like OP, though, I'm not sure it's worth all that CO2 we're pumping into our atmosphere.

Alex-Programs · 1h ago
AI progress has also made high-quality language translation a lot cheaper. When I started https://nuenki.app last year, the options were exorbitantly priced DeepL for decent-quality, low-latency translation, or Sonnet for slightly cheaper, much slower, but higher-quality translation.

Now, just a year later, DeepL is beaten by open models served by https://groq.com for most languages, and Claude 4 / GPT-4.1 / my hybrid LLM translator (https://nuenki.app/translator) produce practically perfect translations.

LLMs are also better at critiquing translations than producing them, but pre-thinking doesn't help at all, which is just fascinating. Anyway, it's a really cool topic that I'll happily talk at length about! They've made so much possible. There's a blog on the website, if anyone's curious.