I am disappointed in the AI discourse

63 steveklabnik 80 5/28/2025, 5:34:52 PM steveklabnik.com ↗


righthand · 1d ago
The offered discourse on AI is “isn’t it cool how this can replace the work that you do? Eventually we won’t even need you and you can go get a hobby to spend the rest of your life.”

That doesn’t exactly leave a lot of room for people to feel the need to be involved in a discourse about it. For one thing, the majority of people aren’t workaholics looking for extra hobby time.

The author mentions that ChatGPT can search the web. Okay, calling a search engine and retrieving a result has been possible for a while. LLM companies just slapped a statistical response layer on top as the UI.

Maybe the discourse sucks because the reality of it sucks?

foxyv · 1d ago
> "Eventually we won’t even need you and you can go get a hobby to spend the rest of your life."

I think the problem is that the statement is more like:

"Eventually we won't even need you and you can go die in a ditch somewhere while we party on our very large boats."

kunzhi · 1d ago
> "Eventually we won't even need you and you can go die in a ditch somewhere while we party on our very large boats."

You reminded me of an interesting article from 10 years ago, so I went ahead and re-posted it: https://news.ycombinator.com/item?id=44119705

righthand · 1d ago
Yes but the lower classes dying in a ditch is considered a hobby by the upper class.
foxyv · 1d ago
Ahh yeah, the alternative. AI can free you from your job so you can be hunted for sport!
soraminazuki · 1d ago
If anyone wants to know exactly how that would be achieved, just look at how Google does "support" right now. No need to predict the future.

Google's "support" is a robot that sends passive aggressive mocking emails to those who were screwed over by another robot that made up reasons to lock them out of their digital lives [1]. It allows Google to save a ton of money while evading accountability.

It's the same thing with the latest overhyped robots. It won't even matter whether or not it's actually competent at the thing it's supposed to do. It will replace people regardless.

[1]: https://news.ycombinator.com/item?id=26061935

palmotea · 1d ago
> I think the problem is that the statement is more like:

> "Eventually we won't even need you and you can go die in a ditch somewhere while we party on our very large boats."

Exactly. If...

>> “isn’t it cool how this can replace the work that you do? Eventually we won’t even need you and you can go get a hobby to spend the rest of your life.”

...was even remotely true, we'd have already had that outcome, before AI.

wenc · 1d ago
How about the one some of us believe:

“Isn’t it cool that I don’t have to write boilerplate and can prototype quickly? My job isn’t replaced because coding is not my job — it’s solving domain-specific problems.”

I’m in my late 40s, have written code for three decades (yes, I started in my teens), and have always known that the code was never the point. Code is a means of solving a problem, mostly unrelated to computers (unless you work on pure software tooling).

This is why I chose not to study computer science. I studied something else and kept coding. I’ve always felt that CS as a field is oversubscribed because of the $$$ dangled by big tech.

So many fields are computational these days, and the key is to apply coding to them. For instance, a PhD in biology alone gets you nowhere, so many biologists have become computational biologists or statisticians. Same with computational chemists, etc.

For most of my career I’ve written code, but in service of solving a real world physical problem (sensor based monitoring, logistics, mathematical modeling, optimization).

righthand · 1d ago
And we must all enjoy using LLMs because they help us code? Even if you enjoy coding? What if coding helps me relax? Relaxing isn’t my job.

I didn’t have to write boilerplate before LLMs, thanks to code scaffolding tools like create-express-app.

bananapub · 1d ago
literally no one is telling you to do anything.

you can just chill out and have whatever hobbies you want, using whatever tools you want.

righthand · 59m ago
Just not make money at it; that’s for LLM CEOs.


allturtles · 1d ago
> My job isn’t replaced because coding is not my job — it’s solving domain specific problems?

I wonder why you are so complacent as to think next year's models won't be able to solve domain-specific problems. Look at how bad text generation, code generation, audio generation, and image generation were five years ago versus how capable they are today. Video generation wasn't even conceivable then.

As an equally middle-aged person with children I'm less worried about myself than the next generation. What are people still in school right now, with dreams of being architects or lawyers or artists or writers or doctors or podcasters or youtubers, actually going to do with their lives? How will they find satisfaction in a job well done? How will they make money and feed and house themselves?

munksbeer · 1d ago
The economy only works because people consume goods and services. If they can't do that, then capital can't make any money. So whatever the case, capital needs to ensure that the ability to consume is preserved.

This is the same conversation that happens decade after decade.

trod1234 · 22h ago
I agree with you, but no one listened back then; why would they ever think about listening now?

Capital formation comes first, before everything else, not the other way around. When you have nothing to trade that's of value, it simply can't happen, and inevitable hyper-inflationary/deflationary cycles begin which, once started, can't be stopped.

These people think survival is guaranteed, jobs are guaranteed, and the how doesn't matter; it happens because some politician says it does; reality doesn't matter.

That's the line and level of thinking we are dealing with here. How do you convince someone that if they do something, they and their children may die as a consequence, if they can't make that connection?

Communication ceases being possible in a noisy medium at a certain point, according to Shannon. Pretty sure we've crossed that point, and where we may have been able to discern and separate the garbage previously, through mimicry it's now all but impossible.

Intelligent people don't waste their efforts on lost causes. People make their own decisions, and they and their children will pay for the consequences of those choices; even if they didn't realize that was the choice they were making at the time they made it.

munksbeer · 17h ago
> I agree with you, but no one listened back then, why would they ever think about listening now.

Because we lead vastly better lives today than 100 years ago, when everyone was also raging about technology stealing jobs. The economy has to adapt to technology changes, there is no other way. It is a self healing system. If technology removes a lot of jobs, then new jobs are created. It has to be this way, don't you see?

wenc · 1d ago
> I would wonder why you are so complacent as to think next year's models won't be able to solve domain-specific problems?

If your domain is complex enough and has a critical people-facing component, you generally still have some runway. If it's not, then it's ripe for disruption anyway, if not by LLMs then by something else. I pivoted at age 32 because of this. I pivoted again at age 40 (I took a two-level title drop, from principal engineer to mid-level, but I got to learn a new domain, got promoted back to one level below principal, and now make more money).

I always treat my marketability not as a one-and-done but as a perishable quantity. I’ve never taken for granted that I’ll have job security if I don’t strategize, because I grew up in a time of uncertainty and in a society where a high-paying job was not guaranteed (some jobs, like grocery clerk, were, however). People who talk about “job security” as an entitlement of life are the first ones to be wiped out.

That said, not everyone is capable of constantly upgrading their skills and pivoting; we need some cushion for economic disruption for folks who have limited retrainability. But I suspect this is not everyone: most people just haven't had to do it, so they think they can't.

Americans have not had to face this en masse in the last 30 years, but many people around the world have. If you’ve lived in a competitive society with job scarcity, you get used to this reality quickly.

> What are people still in school right now, with dreams of being architects or lawyers or artists or writers or doctors or podcasters or youtubers, actually going to do with their lives? How will they find satisfaction in a job well done? How will they make money and feed and house themselves?

I think those jobs will still exist in some form, but there will be a painful period while everyone figures out how to be differentiated. I’m a hobbyist YouTuber in my free time (YouTuber is a job that didn’t exist before), and I think it’s hard to replace parasocial relationships. AI slop already exists on YouTube; it gets views but few subscriptions.

The scope of jobs will also shift, and we will see things moving toward realms requiring human judgment: delivering things that require interpretation. Job scopes today are already much broader than people think. Again, no guarantee against disruption, but job security was always an illusion anyway, and the sooner we realize this, the sooner we can adopt a preparatory mindset. (In a way, Americans are actually well positioned due to our relationship with capitalism.)

Even the demise of radiologists has been overstated, because being a radiologist is much more than just detecting disease from an image.

Writers will still be around. They might not be able to charge per word, but they’ll pivot to a new model. The transactional model will be gone, but I’m convinced something else will replace it.

I’m not sure about any of this because I can’t predict the future, but I have seen the past and the doomsday scenario doesn’t seem to me the inexorable one.

trod1234 · 22h ago
You should read Hazlitt.

There are things being done which cannot be undone, and there are issues that were long predicted, and ignored, and the consequences are now bearing fruit.

If you haven't heard a real doomsday scenario that's likely, you haven't been listening to the right people, and you rely far too much on the fallacy of survivorship bias.

If you don't have a plan to replace a fundamental societal model, there are two potential outcomes: someone comes up with something because they've been working on it (and it works, which is rare), or all the dependencies that rely upon that system fail and the consequences occur. In other words, everyone starves.

Think about what it would mean for our supply chains, with just-in-time logistics, if no exchange were suddenly possible overnight. We saw it during the pandemic, but that was just a small disruption, and not a continuing one.

Imagine it. Nothing on the shelves. No amount of money that will let you get what you need (toilet paper). No means of making that happen on the short timetables of need. What happens? Prior to 2020, people would have called you crazy if you said those things would happen.

Bad things happen if you don't have a plan to make sure they don't happen.

creer · 1d ago
The discourse sucks not just on this topic, to be fair. It's an unsolved general problem (and one worth putting work into).
horsawlarway · 1d ago
I think this is hilarious, because it's exactly the type of low effort response that tends to dominate general conversations about AI.

You are making the author's point.

I think there's a lot more nuance in

> "isn’t it cool how this can replace the work that you do? Eventually we won’t even need you and you can go get a hobby to spend the rest of your life."

than you'd like to admit, and some conversations that are worth having in earnest instead of simply resorting to trivial things like

> "Maybe the discourse sucks because the reality of it sucks?"

Maybe the reality of it doesn't suck?

In the same way that a reality where we have things like bulldozers, printing presses, looms, and a cotton gin doesn't actually suck, at the end of the day.

It absolutely sucked for some people, some of the time - and that's an important part of the conversation, but it's not the conversation "end of sentence".

righthand · 1d ago
> than you'd like to admit, and some conversations that are worth having in earnest instead of simply resorting to trivial things like

Sure, the author wants to talk about the technical specifics of LLMs. Yet LLMs enable a lot of people to avoid understanding even the technical points. That disincentivizes people from understanding enough to have the discourse the author considers valuable.

> In the same way that a reality where we have things like bulldozers, printing presses, looms, and a cotton gin doesn't actually suck, at the end of the day.

I really don’t care about grand-scheme-of-things responses to criticism of LLMs. But for the sake of argument, why should I care about discussing LLMs and their technical aspects if, in the grand scheme of things, we’re all going to die eventually?

It is the end of the sentence, because most people can’t imagine what comes next besides not having a job. It’s not that they won’t be fine if a super AI takes over tomorrow; it’s that making money for themselves is literally the limit of their concerns today.

It might be different if LLMs actually made their users richer, but they don’t; they make the corporations richer.

horsawlarway · 1d ago
> But for the sake of argument why should I care about discussing Llms and their technical aspects if in the grand scheme of things we’re all to eventually die?

Why do anything, then? This is the laziest possible retort I can imagine.

righthand · 1d ago
> In the same way that a reality where we have things like bulldozers, printing presses, looms, and a cotton gin doesn't actually suck, at the end of the day.

So you’re allowed that type of rhetoric, but when I use it, it’s lazy.

My point has been that it sucks, now. Right now, it’s hysterical on both sides of the conversation. So yes, it sucks. In the grand scheme of things it may not suck, or it could get even worse. Again, one side of the conversation is choosing to promote only one of those ideas, even though there is no evidence we will end up in a utopia. In fact, there’s a lot of evidence to the contrary. So yes, the conversation sucks. The reality right now sucks.

horsawlarway · 1d ago
Yes, because - to be blunt - yours is so much lazier.

I picked machines that were undeniably controversial at the time they were introduced, because they did all the things you're claiming to be upset about here: They put people out of work, they enriched capital owners, they changed social structures, they altered governments.

Essentially, they are relevant discussion items for the topic at hand. (If you're unaware, the general term "luddite," used to mean "anti-technology," comes directly from the English textile workers who were replaced by looms; they tried repeatedly to destroy the machines and were eventually suppressed with military force, with sentences including execution and exile to penal colonies.)

That's not some blasé "waves hand 'technology good'" reference I'm making, and I think your response is partially so annoying because we likely agree on a lot of things about the potential negative impacts of AI.

I just think the way you're articulating it is relatively low effort, and I think the original post is absolutely allowed to say that. You'll get dismissed because you're so obviously wrong about the easily verifiable things that it's hard to take you seriously about anything.

Which is exactly the impact of comments like "Why talk about this because we'll all eventually die" - they alienate your allies because they are trivial and trite trash.

righthand · 56m ago
Okay well as long as we’re delivering low effort attacks, I totally agree and think the same of you. I can’t take your response or ANYTHING you say seriously. Good talk, you’re right there’s plenty of good discourse on AI between people. This conversation is a winning example.
BobaFloutist · 1d ago
cotton gin is probably not the example you want to use here.
horsawlarway · 1d ago
No - I picked it precisely because it's a machine that improved efficiencies but undeniably had negative impacts as well.

I think that's my whole point - I'm not saying that the person I initially responded to is incorrect in not liking the impacts AI might have. I think it's a perfectly reasonable take to be concerned about how AI might impact you, and to express that, along with negative sentiments.

I'm saying that the argument they are currently making

> "Maybe the discourse sucks because the reality of it sucks?"

and even the slightly better

> "Okay, calling a search engine and retrieving a result has been possible for a while. LLM companies just slapped a statistical response layer on top as the UI."

is a guaranteed way to be ignored and dismissed, because it's a low-effort emotional response, not an actual argument.

righthand · 54m ago
Those technological advancements with LLMs are low-effort advancements, so you only get low-effort responses.

Do you understand why maybe no one’s wowed by browser automation/automated web search? Can you extrapolate why no one’s stoked to talk about LLM bots replacing them with low-effort, inaccurate, “good-enough” fly-by research summarization?

These points are obvious to most people; that’s why the responses are low effort. You shouldn’t need to expound high-effort discussion just because you feel the low-effort discussion doesn’t make a clear point or makes LLMs look bad. The points are well discussed, and obvious. Hence low effort, hence sucky discourse.

Feel free to ignore and dismiss my perspective; that doesn’t make me wrong or you right. It just makes you a bully.

tartuffe78 · 1d ago
Yeah, it's very polarized, like everything seems to be these days. It's very similar to crypto, in that people pumping it up for financial gain, or because they enjoy being part of the current hype, are overconfident about how much of an impact it's going to have. Unlike crypto, my workplace is very into it for our domain, so it's something we have to deal with.

At my company most people do have a very balanced view of its capability, which is nice, but it's hard to find online discussion that isn't polarized. It's also disheartening because if even half of the optimistic projections are true, it's going to mean more kids cheating on homework, fewer people who can read, write, and think critically, and more lonely people with AI girlfriends/boyfriends disconnected from human society.

rvz · 1d ago
> It's very similar to crypto in that people pumping it up for financial gain or because they enjoy being part of the current hype are overconfident in how much of an impact it's going to have. Unlike crypto my workplace is very into it for our domain, so it's something we have to deal with.

The worst part is that many of the people (investors) actively pushing such narratives (crypto, AI, etc.) not only have undisclosed positions, but in private are doing/saying the exact opposite.

That is why you see them publicly screaming such nonsensical predictions, and you then get companies like builder.ai collapsing. (You won't see any investor in builder.ai proudly announcing this news in their portfolio.)

This won't be the last of frauds in AI.

breckinloggins · 1d ago
I propose we stop using the term "discourse" to describe what amounts to mobs with ever-shifting alliances shouting at each other.

Not really sure what I'd call it instead, but in a perfect world "irrelevant" might be my top candidate.

And yes, I'm aware that this very comment is an example of such "discourse". But at least with sites like Reddit and Hacker News the "comment-ness" of these little thought droppings is part of the UI.

Microblogging sites, though... the thought droppings are the point. I remain unconvinced that this will be seen in the future as a societal advance no matter the underlying positions or values.

righthand · 1d ago
Hysteria is a good term.
AmazingTurtle · 1d ago
Well, technically, GPT "itself" really isn't a search engine. The LLM was trained with the ability to signal to the runtime that it needs additional information to fulfill the request, such as performing a web request, which is carried out by the runtime, with the results fed back into the LLM as a regular prompt. Thus GPT only answers based on the textual context, again involving statistical likelihood.
cheald · 1d ago
Right, and I think that in this case the person making the original statement was probably erroneously using "ChatGPT" to mean "LLM" (in the same way that "Kleenex" is used to mean a disposable tissue).

It's true that ChatGPT-the-product can search the web (using a traditional search engine as a RAG datastore), and it's also true that o4-the-LLM is not a go-out-and-find-external-truth search engine. I'm more sympathetic to the original point because many people do misunderstand what LLMs are and what they're capable of, and it's worth dispelling those myths (though it's valuable to do so precisely, specifically to avoid this sort of confusion).
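The split being described (the model only predicts text; the runtime does the fetching) can be sketched roughly like this. This is an illustrative toy, not OpenAI's actual API; every name in it is hypothetical:

```python
# Toy sketch of a tool-calling loop: the LLM only ever emits text
# (or a structured "please search" request); the runtime performs the
# actual web search and feeds the results back as ordinary context.

def run_with_search(model, search_engine, user_prompt, max_rounds=3):
    context = [{"role": "user", "content": user_prompt}]
    reply = {}
    for _ in range(max_rounds):
        reply = model.generate(context)              # pure next-token prediction
        if reply.get("tool") == "search":            # model asked the runtime for help
            results = search_engine(reply["query"])  # a plain, traditional search call
            context.append({"role": "tool", "content": results})
            continue                                 # let the model read the results
        return reply["content"]                      # final answer, grounded in context
    return reply.get("content", "")                  # give up after max_rounds
```

The point the thread keeps circling lives in the middle two lines: the "search" is an ordinary search-engine call, and its output re-enters the model as just more prompt text.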

swyx · 1d ago
bluesky uniquely hates AI. very toxic around it. https://eugeneyan.com/writing/anti/

it's sad that the informal social media bubbles we all have formally forked[0] into Red Twitter and Blue Twitter. just a direct consequence of the filters, feeds, and algorithms that we say we don't want but vote for with our actions.

with Latent Space I try very hard to stay practical and grounded. but this means i lose out to other podcasters who get huge hits by asking biglab people their AGI timelines (perpetually 2 years away every 2 years, upper bounds of every estimation conveniently landing within the lifetime of $current_middle_aged_generation)

with AIE i started my agents conference with a loud agents skeptic (https://www.youtube.com/@aiDotEngineer/search?query=sayash). that did better online but the audience/sponsors didn't like being told they were overhyping.

[0]: won't stop there - i have called it the Great Unzippening: https://swyx.io/the-great-unzippening

steveklabnik · 1d ago
Thanks for sharing that post. I really like Bluesky, and perceive it as not being uniquely anti-AI, but maybe that's actually not true.
growthwtf · 1d ago
I remember this incident. The screenshots from swyx are fairly tame; the dataset creator got death threats.

That said, I'd re-emphasize your perception slightly — "perceive it as not being _uniquely_ anti-AI" is more how I view it. I see similar sentiment on other social media too.

steveklabnik · 1d ago
Oh, after reading this, I also do remember the incident. I was pretty anti-AI myself at that point but also found the backlash very confusing. Honestly thinking about my response to that is probably a part of why I've moved towards the pro-AI side.

And yeah, I think you have the better emphasis too. thanks :)

dragonwriter · 1d ago
> bluesky uniquely hates AI. very toxic around it.

Not unique at all; AI hate to the point of viewing it as literally the destruction of civilization warranting being stopped by any means is extremely common on Twitter.

If there's anything unique about Bluesky’s anti-AI culture, it's that the opposite extreme of AI grifters hasn't moved to Bluesky in numbers that provide the balance of nuttiness Twitter has.

sebstefan · 1d ago
>Not unique at all; AI hate to the point of viewing it as literally the destruction of civilization warranting being stopped by any means is extremely common on Twitter

Well, maybe that's because it's not at all being addressed by the other side.

Things that are never addressed by AI grifters:

* The effect it's going to have on the sources of data it's driving traffic away from

* What happens to the adoption curve when Stack Overflow is dead and LLMs no longer have a strong base of knowledge to draw on

* The fact that, unlike the sewing machine, this one primarily automates the tasks people enjoy

* The climate change impact

It's like wailing that the discourse around proof-of-work cryptos isn't up to snuff while not caring about the environmental impacts yourself. Well, then your discourse is not up to standard either.

swyx · 1d ago
you can block grifters. easy.

nothing you can do about the absence of, and harassment of, high-signal AI people. which eugene is. that is bluesky's unique problem so far.

dragonwriter · 1d ago
Yes, if you are relying on social media as an AI information source, the absence of “high signal AI people” on a platform is a problem, though it's a distinct problem from “AI hate” on the platform.
Karrot_Kream · 1d ago
Do you think LLM powered labellers can help?
causal · 1d ago
I'm sympathetic, but I find it surprising that people expect rich discourse on microblogging sites like Bluesky et al.

There is probably an inverse relationship between number of voices on a platform and how nuanced the discourse can be. Podcasts kind of take this further by isolating the conversation to a few people who can dig deep.

Doesn't make every Tweet toxic and every podcast deep, but there's a tradeoff nonetheless.

OmarShehata · 1d ago
I don't think that's necessarily true; I think it's about curation, not volume. The largest open source projects in the world have enormous inbound volume but extremely high quality discussion because of curation (I'm thinking of the maintenance of Wikipedia, OpenStreetMap, and Godot).

This is also true on twitter & blue sky. Looking at the general feed is a completely different world from looking at specific networks.

minimaxir · 1d ago
Pre-2023 Twitter had very good and nuanced discourse about AI, and sharing generative AI demos was more encouraged.
steveklabnik · 1d ago
I don't think this is limited to microblogging. There's a reason I posted it here.
JSR_FDED · 1d ago
I think the problem is that so many AI prognosticators talk about the impact of AI, without really talking about the impact of AI.

It’s easy to say “job xyz will be disrupted by AI”. Too few go the next level and say “that means that there will be almost no entry level positions for xyz, which means your kids, who are in school now, will face a very uncertain future. Here’s how you might prepare them for that future.”

Breathless pronouncements, not enough empathy, and not enough reckoning with the potential consequences. I’m not at all surprised people are turned off.

sebstefan · 1d ago
>Oh, and of course, ethical considerations of technology are important too. I’d like to have better discussions about that as well.

That's the entire reason for the polarization. There's no point writing a blog post about the state of the discourse if you leapfrog over the crux of the debate.

There's a reason the detractors are ignoring the facts.

If you're upset that they are acting like this, then you can't do the reverse.

Talk about the environment, the good parts of life it's trying to automate away, the effect it's having on the internet, and what happens to the adoption curve of new techs when Stack Overflow is dead.

steveklabnik · 21h ago
I intend to talk about these things. Part of why I didn't get into it in this post is because I want to hear opinions and get resources to help me think through what I think of on this, so that I can do so in a thoughtful manner.
jmathai · 1d ago
I feel this. My opinion is that folks on either side are not well informed.

Those who say AI won’t disrupt coding that much seem to use anecdotal evidence. Their own experiences, mostly. And perhaps they are using it differently than those who think otherwise. But they don’t seem interested to find out.

Similarly, those who claim it will disrupt coding a lot seem to be making premature conclusions … also from anecdotal evidence. Perhaps their work is a better use case for AI assisted coding. But they also don’t seem interested in learning more.

Sounds a lot like other polarizing topics, eh? I think that’s because it directly impacts people’s livelihood and identity … for better or worse.

FiatLuxDave · 1d ago
I think that a large reason why the discourse sucks is because the public is seeing hardly any discussion using language that is specific about types and features.

Let me explain. With older technologies, the language to talk about it is well developed. I like to use nautical terminology as an example that most people are somewhat familiar with. There are a lot of terms like jib, sheets, mainsail, schooner, keel, galley etc, and a lot of people don't know what those mean. But it is pretty easy to recognize that there is a whole terminology to describe ships and the features thereof which is used by experts and which can be very specific. If one guy says the boat won't make the trip, and it's because the keel goes more than two fathoms deep, and the other guy says that of course it can because the galleon can go bireme past five hundred knots, even landlubbers will be able to figure out who knows their stuff.

But in the current AI discourse, AI is AI is AI. Agentic LLMs? AI. Non-agentic LLMs? Also AI. Diffusion models? AI. Search engine? AI, kinda. R2D2? AI. Autocomplete? Sure, why not, AI. It's like if sailors used language that barely distinguished between specific nautical technologies. Boat is Boat is Boat.

Now, nautical terms have developed over thousands of years, and AI (whichever type you mean) is a new technology that is not fully developed. But imagine a discourse where Boat is Boat is Boat. The bosses learn that we just got Boat working, and are making plans to conquer the New World. Other people are concerned about the moral implications of conquering the New World. Meanwhile, one of the sailors tasked with this tried using Boat over the weekend fishing with a friend, and he's not sure he can paddle it that far. Another guy tells him not to worry, that there's a new model of Boat and it doesn't even need a paddle. He says at the current rate of advancement, we should have a new Boat capable of going underwater in less than ten years! We thought of coming up with a name for this new type, but it's basically just another kind of Boat.

If we want better talk about AI, we should use better language about AI. We are living in the time where the language used to talk about these things will be developed. We might find there is more agreement when we are talking about the same things.

steveklabnik · 1d ago
Thank you for this, it's good food for thought. I do believe in soft linguistic relativity, but I hadn't considered this angle on this topic.
Nevermark · 1d ago
> To be clear, I am not particularly pro or anti AI. Here’s what I currently think:

> • [...]

> • AI writing is often bland and boring but better than the average person’s writing.

Yes, that is exactly the right thing to do.

Without style prompts, a model should produce competent output with generic style. Anything that is not "bland and boring" is going to make a lot of people unhappy, and be mismatched to most contexts.

So great success.

On the flip side, it is incredibly easy to add style via directions.

The right way to request a style you like is going to take some iteration, or style samples, because style is subjective and models can produce an infinite variety of styles.

Again, exactly what we want. Great success again.

It takes time to absorb that models are such uniquely broad tools we can't expect them to match preferences without specific requests. In humans, that is done by soaking up context. Models only have the context you give them, but are far more versatile.

nico · 1d ago
It’s just happening really fast. And as Gibson said, the future is already here, just not evenly distributed

Just one example:

Less than a year ago, most companies hiring for developer positions had strict anti-ai policies, meaning, candidates couldn’t (shouldn’t) use ai in their take homes nor live coding interviews. There were some exceptions of companies that didn’t fully know where to draw the line

Fast forward to today, and almost all companies pretty much either explicitly require the use of ai or at least expect it. Development teams describe themselves as ai-enabled, and are fully embracing ai development tools. Candidates are supposed to be up to date and know how to use the tools effectively. There are a few exceptions of companies that still have strict policies, which mostly revolve around regulation (and they are solving the issue with on-premise models/services)

creer · 1d ago
Yes! And without losing sight of the bubble and hype aspect of it all, the speed of these developments is very exciting.
jancsika · 1d ago
There's too much money at stake to be able to have sensible discussion about it online.

We saw the same thing with blockchain. IIRC, someone on the Cryptography list replied to the whitepaper with the critique that the energy use wouldn't be worth the measly number of transactions processed. If it were Bell Labs, that observation would have been followed up with two or three prototypes for fundamentally different decentralized payment designs by now.

Instead, it's over a decade later and all we have is a literal strategic reserve of hot air.

So I'd give it some time, Steve. :)

nitwit005 · 1d ago
You have people on reddit attacking artists, because they perceive them as against AI.

Of course, you also have people attacking others because they perceive them as against Apple products, or a celebrity they like.

Why they decide to make such goofy things part of their identity, I have no idea.

searine · 1d ago
>AI art is often terrible and slightly creepy. I have seen AI art that was fine. This is very rare.

This is just the "special effects" problem all over again: its use is invisible if done correctly. Only badly produced SFX stick out and get called "CGI".

Much like a programmer using AI to supplement their existing knowledge, a lot of artists (way more than you think) are using AI to supplement production of their art. The product is indistinguishable from their non-AI work, it just takes a fraction of the time.

People just aren't talking about it because it is highly stigmatized by the art community.

nunez · 1d ago
Interestingly, the proliferation of digital sfx led to lots of jobs going away and the jobs that replaced them being poorly paid relative to workload. It also led to a crap ton of shitty sfx.

Also interestingly, diffusion and generative video will probably make this worse by dint of being so much more accessible

steveklabnik · 1d ago
Yes, this is a hunch I have as well.
eaglelamp · 1d ago
There is no meaningful discourse because there is no meaningful decision at stake. The owning class has decided that “AI” will be shoved into any plausible orifice and the “discourse” online is just a reaction to a decision that has already been made.

Frankly the noise being made online about AI boils down to social posturing in nearly all cases. Even the author is striking a pose of a nuanced intellectual, but this pose, like the ones he opposes, will have no impact on events.

dlvhdr · 1d ago
Well, it isn't searching the web: it has a cut-off date, and it takes bits of info from various sources, often distorting them, hence it's not a search engine.
steveklabnik · 1d ago
It literally searches the web, and shows you what it searched for and the results it decided to use.
minimaxir · 1d ago
ChatGPT.com (and other LLM UIs such as Perplexity) now use a tool that searches the web when the model detects that doing so is necessary to answer the user's question, and then uses the output of that search query to compose its answer. This allows it to surface responses that are outside its training data cutoff date, and to cite specific sources.
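The loop described above can be sketched in a few lines. Everything below is a stand-in (the `needs_search` keyword heuristic, the fake search backend); real systems let the model itself emit the tool call, but the control flow is the same:

```python
from dataclasses import dataclass

@dataclass
class Turn:
    role: str      # "user", "tool", or "assistant"
    content: str

def needs_search(question: str) -> bool:
    # Stand-in for the model's own decision to invoke the search tool;
    # a real system has the LLM emit a tool call rather than keyword-match.
    return any(w in question.lower() for w in ("today", "latest", "current"))

def web_search(query: str) -> list[str]:
    # Stand-in for a real search backend; returns fake snippets.
    return [f"[result] snippet about {query!r}"]

def answer(question: str) -> list[Turn]:
    turns = [Turn("user", question)]
    if needs_search(question):
        # Search output is appended to the context, so the final answer
        # can draw on (and cite) material newer than the training cutoff.
        turns.append(Turn("tool", "\n".join(web_search(question))))
    turns.append(Turn("assistant", f"(answer grounded in {len(turns)} prior turn(s))"))
    return turns
```

The point is only the shape of the loop: search runs as an ordinary tool call, and its results become extra context for the model's final reply.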
fmajid · 1d ago
This anti-AI talk is a coping mechanism, but completely unproductive. People don't like the possible or likely consequences of AI that delivers even 10% of its boosters' outlandish promises, and instead of planning how to organize politically to mitigate them, just plug their ears and say "la la la I can't hear you".

I have a friend who is a translator, and all her translation work has dried up. All she gets now is tedious jobs reviewing and correcting machine translations. As Simon Willison points out, writing code is much more gratifying than code review, and I can easily imagine a future where the only work left for most human coders is reviewing LLM-generated code, which would be terrible for job satisfaction.

The early 19th-century Luddites, skilled craftspeople and weavers, understood the benefits of mechanized looms, but demanded the profits be shared equitably with the displaced workers. When the proto-capitalists, who largely overlapped with the feudal aristocracy that rules Britain to this day, refused, the Luddites waged a guerrilla campaign of breaking looms. The elite establishment responded by sending more soldiers to quell the uprising than were then fighting Napoleon, and by hanging people for frame-breaking:

https://en.wikipedia.org/wiki/Destruction_of_Stocking_Frames...

Ironically the kinds of jobs hardest for AI to replace are those requiring manual dexterity and skill, often derided as "unskilled work" by snobbish white-collar academics and workers.

thefz · 1d ago
> This anti-AI talk is a coping mechanism

No, it's being tired of trite bullshit new tech that robs people of agency, culture, and knowledge and moves more value towards the shareholders.

The whole nature of LLMs is built upon exploiting other people's work. Without that work they would be useless.

> I have a friend who is a translator, and all her translation work has dried out.

Can a machine translate prose?

munksbeer · 1d ago
Disclaimer: I feel that your post is overly angry and reactive, but I would like to have meaningful discourse about it, and not just talk past each-other with downvotes.

So with that said, what if we started from a position where I accept you are correct about the exploitation stuff, and went from there?

Do you see a way back from where we are now? I don't. The genie is out of the bottle, and depending on your view, that means "big tech" has won again. So what do we do? Do you think LLMs are useless, or are they not useless? Will they be useless in the future? Do you think they will be an empowering tool, or one that disempowers?

> Can a machine translate prose?

Maybe not now, but can it in the future? I'm going to guess yes. So of what use is arguing that it is rubbish because it can't translate prose now?

thefz · 1d ago
> Do you see a way back from where we are now? I don't.

Do not use it. Do not promote or talk about it. Would be a great start.

Do you really believe LLMs are a force for good?

munksbeer · 22h ago
This is like asking me if steam power was a force for good, or the combustion engine, or computers. The answer is yes and no, but the more important point is that innovation and progress are inevitable. We lead vastly better lives than people did 100 years ago. Why? Because of innovation.

I use LLM code completion in my IDE, and it is fairly useful, but not yet ideal. I use it all the time to ask technical questions rather than searching documentation first. It is extremely helpful for that task, and most of the time is correct - I always double check after it points me in the right direction.

I see a path where AI leads to the destruction of humanity, but it could equally lead to a post scarcity utopia.

You are being completely dogmatic in your view, and I can understand why. If you truly believe AI is a force for evil, then it makes sense for you to rage against it. But keep in mind, no one knows yet, and be open to the possibility that you're wrong.

thundergolfer · 1d ago
This is unfortunately a Bluesky problem. I'm still using the platform, but it's got too many people posting unsophisticated takes, whether in tech, culture, or politics.

It's not about being pro-AI or anti, left or right. I just read too much on Bluesky that has me thinking "oh, you really have no idea what you're talking about." As Steve says, verifying that they're wrong is trivial, and yet there they are.

Twitter, on the other hand, rarely has this problem if you only look at the “following” tab and curate who you follow.

righthand · 1d ago
Twitter doesn't "rarely" have this problem if you have to use it in a specific way to avoid it. Find a specific way to use Bluesky and you'll fare about the same. There is nothing that makes the users of a different platform more qualified to discuss something.
steveklabnik · 1d ago
I hesitated to mention Bluesky specifically, because it is not just Bluesky. It's here, and lobste.rs, and a lot of reddit, too.

I don't use X anymore, so maybe I am missing out on that.

slowmovintarget · 1d ago
We, the chronically online, have generally forgotten discourse in favor of debate... and we're really awful at debate.
blueboo · 1d ago
Discourse is in the eye of the beholder. If your feed is a torrent of bile, is that evidence of a real problem in the world?
trod1234 · 1d ago
The simple fact of the matter is, the most intelligent people who should be participating in discussion on important existential matters like this, are not participating.

They are not participating because they know and realize that what people see today in communications is not a discussion. They have realized that, for the most part, the communication channel has been jammed past Shannon's limit with false narratives that are not actually coming from real people. Objective reasoning and statements are drowned out by the flood of lies. It's an attack, one long predicted but ignored.
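(As an aside, "Shannon's limit" here is the Shannon-Hartley theorem: a channel of bandwidth B with signal-to-noise ratio S/N can carry at most C = B * log2(1 + S/N) bits per second of real information, and jamming works by driving the noise term up. A minimal sketch of the formula itself, with purely illustrative numbers:

```python
import math

def channel_capacity(bandwidth_hz: float, snr: float) -> float:
    """Shannon-Hartley capacity in bits/second for a channel of the
    given bandwidth (Hz) and linear signal-to-noise ratio."""
    return bandwidth_hz * math.log2(1.0 + snr)

# At SNR 1 (signal equals noise), a 1 Hz channel carries at most
# 1 bit/s; as noise grows, capacity falls toward zero.
```
)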

What happens anytime communications are jammed and you don't realize it? It leads you to improper decisions which have a cost in blood instead of in resources. The bigger the impact, the longer the dynamics take, the more harm there is.

Worse, the simulacra released to do these things behave as a toddler behaves, or rather quite worse as an evil malevolent person towards manipulation and total control neglecting reality.

If you've ever had an argument with a toddler, you know inherently that they are just play-acting, with a tantrum held in reserve for when they are shown to be wrong. The same goes for the sock puppets, with malice held in reserve, targeting the blind spots of those engaging in goodwill, people seeking a long-term future for their children, to inflict torture and psychological harm.

When the distance between what is said and what we know to be objectively true is an abyss, evil wins, and everyone becomes victim to the loudest monster in the room, a monster which eventually comes to destroy us all.

Evil is not just some metaphorical construct but mainly describes the outcomes that result in destruction that are entirely preventable through choice and knowledge (truth).

The people who are truly intelligent have realized that without communication there can be no response, no counteraction, it is a runaway machine, a train running along a track at full speed that ends going over a cliff into the ocean where those aboard die, and those aboard don't know it because they've all put blindfolds on in merriment. Willful induction to blindness is a very dangerous thing.

Those that can are withdrawing and preparing for the inevitable consequence of these quite fundamental dynamics. We've passed a critical point of no return and no one noticed because their vision was obscured purposefully. They didn't realize the nature of the failures, which act as a wave with no forewarning aside from long discounted details that were ignored. A cascading failure based in hysteresis.

The only people that will likely survive this are the ones that recognized and prepared, a paltry amount are doing this compared to the number of people globally living today.

Instead of solving problems, and making careful choices, the aggregate of decisions from people over the past few generations were to blind themselves and others to the problems they created. To cherry-pick education, torture the rational, obscure reality, and live the good life with front-loaded benefits through money printing, thinking they'll be dead before the consequences can reach them, and sacrificing those of lesser intellect who were incapable of seeing through the lies.

The bill always comes due. The slow knife penetrates without anyone noticing.

anjc · 1d ago
It seems that people can't grasp the exponential rate of developments here. They're stuck in the GPT2 LLM narrative. Even with the amazing Veo 3 videos this week, people are still nitpicking and seemingly unable to remember the state of the art 2 weeks ago, 6 months ago, 1 year ago, etc.

I don't mean to say that scores on evaluation metrics will remain exponential, but rather that the developments, uses, and integrations will (e.g., web search in ChatGPT), and people can't conceive of or keep track of this, and therefore discussions of the area are always behind the times.

For example, I think it's inevitable now that TV/movie production will not exist as we know it in a short time, except as niche work, like fine art in the age of digital. It's also inevitable that fully personalised media will be predominant. I think this is obvious, but yet people are zooming in on the background of essentially perfect videos to spot minor and irrelevant coherence aberrations.

Nitpicking will also inevitably become a niche hobby, like people who complain about the colour grading on a movie remaster, while the rest of the world just watches the movie and doesn't notice or care about the issues.

nunez · 1d ago
You are saying all of this like it's a good thing.

I don't want to live in a world where reality doesn't exist (which tools like Veo3 will absolutely be used to distort the truth), but we are speed running ourselves to that destination.

anjc · 1d ago
Yes I agree, I don't think that it's good at all. It's just fascinating to me that people are criticising current LLMs using information they heard about LLMs 3 years ago, when right in front of their eyes are sci-fi-like results from the field.

> I don't want to live in a world where reality doesn't exist

Perhaps the end result is that the world turns away from digital completely and goes back to reality :) We see already that some universities are going back to written and oral assessments, for example.

dcchambers · 1d ago
"It is difficult to get a man to understand something, when his salary depends on his not understanding it."

The NUMBER ONE reason anyone is anti-AI or AI-skeptical is because it directly threatens their livelihood. Many people have ignored all development in the AI space because they don't want to admit that it's a real problem that threatens their livelihood.

yencabulator · 19h ago
Meanwhile, the hypemen who migrated from web3 to cryptocurrencies to AI are utterly innocent of any such motives?

We can have reasonable discourse about AI a month or two after the hype ends and the VC bubble pops.