I appreciate seeing this point of view represented. It's not one I personally hold, but it is one a LOT of my friends hold, and I think it's important that it be given a voice, even if -- perhaps especially if -- a lot of people disagree with it.
One of my friends sent me a delightful bastardization of the famous IBM quote:
A COMPUTER CAN NEVER FEEL SPITEFUL OR [PASSIONATE†]. THEREFORE A COMPUTER MUST NEVER CREATE ART.
Hate is an emotional word, and I suspect many people (myself included) may leap to take logical issue with an emotional position. But emotions are real, and human, and people absolutely have them about AI, and I think that's important to talk about and respect that fact.
† replaced with a slightly less salacious word than the original in consideration for politeness.
randcraw · 2h ago
Picasso's Guernica was born of hate, his hate of war, of dehumanization for petty political ends. No computer will ever empathize with the senseless inhumanity of war to produce such a work. It must forever parrot.
petralithic · 2h ago
A human might generate a piece of media using AI (either via a slot-machine spin or with more advanced workflows like ComfyUI) and, once they deem it looks good enough for their purpose, they might display it to represent what they want it to represent. If Guernica were AI generated but still displayed by Picasso as a statement about war, it would still be art.
Tools do not dictate what art is and isn't; what matters is the intent of the human using those tools. Image generators are not autonomously generating images; it is the human who is asking them for specific concepts and ideas. This is no different from performance art like a banana taped to a wall, which requires no tools at all.
TheCraiggers · 1h ago
I read what you wrote, and it seems to me you think these two things are equal:
A human using their creativity to create a painting showcasing a statement about war.
A human asking AI to create a painting showcasing a statement about war.
I do not wish to use strawman tactics, so I'll ask: do you think the two statements above are equivalent and true?
petralithic · 1h ago
Is a banana taped to a wall "art?" Your answer to that is the answer to your question.
saltcured · 1h ago
And, is the artist the one who taped it, the one who told them to tape it, or the one who created the banana?
petralithic · 57m ago
It's the person who had the idea to do so and did so. AI doesn't do anything you don't tell it to; it is the banana creator in this case. It is still up to you to get the best-looking banana you can and then display it.
jay_kyburz · 1h ago
Two people want to make a statement about war.
One person spent years painting landscapes and flowers.
The other spent years programming servers.
Is one person's statement less important than the other's? Less profound or less valid?
The "statement" is the important part, the message to be communicated, not the tools used to express that idea.
aspaviento · 1h ago
And let's not forget that people call more things "art" than just the popular masterpieces. A guy sold an invisible sculpture¹ claiming it was art. If things like this can be called art, whatever AI makes can be called art too.
"What is or isn't art" didn't simply become a topic because people like to philosophize about the meaning of words. Over the 20th century the art world became fascinated with the subversive, the transgressive, and the postmodern, rejecting authority and standards of beauty that were deemed limiting and oppressive. One direct contributing component was photography: skill in realistic depiction became deemphasized, and with mass production, plastic, etc., the focus shifted to abstract ideas. It was also a protest against the system that brought the two world wars.
It was considered "anti-art" at the time, but basically took over the elite art world itself and the overall movement had huge impact on what is considered art today, on performance art, sculptures, architecture that looks intentionally upsetting etc.
It's not useful to try to think of the sides as "expansive definitionists" who consider pretty much anything art just because, and "restrictive definitionists" who only consider classic masterpieces art. The divide is much more specific and has intellectual foundation and history to it.
The same motivations that led to the expansive definition in the personally transgressive, radical, and subversive sense today logically and coherently oppose the pictures and texts generated via mechanization in huge centralized profit-oriented companies. Presumably, if AI were more of a distributed, hacker-ethos-driven thing that shows the middle finger to Disney copyrightism, they might be pro-AI.
petralithic · 19m ago
By this same logic, AI will also become accepted as art in 50 years. And by the way, no one who's serious about AI "art" uses commercial generators, they use local AI with workflow managers like ComfyUI. They are not just typing into a box like Midjourney. Therefore these are the hackers who're showing the middle finger to Disney, they dislike copyright as much as anyone.
AlotOfReading · 1h ago
This is a debate that existed long before LLMs with things like action painting. If I give you a Jackson Pollock and a piece from someone who randomly splattered paint on a canvas until it looked like Jackson Pollock, are they the same?
bonoboTP · 34m ago
Pollock was a part of a coherent intellectual movement across all of art. You can't productively discuss whether it's art without focusing on that. He didn't just wake up one day and think to himself that it would be fun to throw paint on the canvas like this and then people looked and wondered if that's art or not.
It was the intellectual statement conveyed through that medium that made him famous.
petralithic · 1h ago
Same in what sense? That is the real question, and perhaps not even the important one when it comes to art. Because, if the Pollock is more "important," there is an implication that it's better because it's by a more famous person, while art should be able to come from anywhere and anyone.
AlotOfReading · 1h ago
The same in whatever sense you want to compare the art rather than the creators. Pollocks try to convey the action and emotion of the creation process. Our hypothetical copycat lacks that higher level meaning, even though they've created an otherwise similar physical product.
As an aside:
...art should be able to come from anywhere and anyone.
is an immensely political view (and one I happen to agree with). It's not a view shared by all artists, or their art. Ancient art in particular often assumes that the highest forms of art require divine inspiration that isn't accessible to everyone. It's common for epic poetry to invoke muses as a callback to this assumption, nominally to show the author's humility. John Milton's Paradise Lost does this (and reframes the muse within a Christian hierarchy at the same time), although it doesn't come off as remotely humble.
petralithic · 51m ago
It depends what the copycat was thinking: maybe they wanted to follow in Pollock's footsteps; maybe they wanted to showcase the point you're making, whether a copycat is as good as the real thing and therefore also considered art, perhaps even as important (apprentices often copied their masters, as da Vinci's did); maybe they are just creating it because it looks good. If there's no other reasoning, then I'd still say they're the same, because how can one say they're not art too? Even as an observer of the art, what if I like the copycat more? These are all open questions in the philosophy of art, and I'm glad it's accessible today to everyone rather than only to the historically privileged.
s1mplicissimus · 1h ago
Agreed, tools do not dictate what art is and isn't, but using those tools for art doesn't relieve the artist of the need to justify them ethically.
If generating the piece costs half a rain forest or requires tons of soul crushing badly paid work by others, it might be well worth considering what is the general framework the artist operates in.
Using more resources to achieve subpar outcomes is not generally something considered artful. Doing a lot with little is.
petralithic · 1h ago
There are tons of examples of art that take much more energy than what an AI does, such as an architectural monument. It is not necessarily the case that "Using more resources to achieve subpar outcomes is not generally something considered artful. Doing a lot with little is." as not all artists will agree and even those that do might not follow it. For example, certain pigments in painting could be highly unethically sourced but people still used them and some still do, such as mummy brown, Indian yellow, or ivory black, all from living organisms.
perching_aix · 2h ago
To honor the "spirit" of OP's post:
I looked up Picasso's Guernica now out of curiosity. I don't understand what's so great about this artwork. Or why it would represent any of the things you mention. It just looks like deranged pencilwork. It also comes across as aggressively pretentious.
What makes that any better than some highly derivative AI generated rubbish I connect to about the same amount?
didibus · 1h ago
I'm not an art historian, but I think Picasso invented an entire art style.
When you use AI, you might now prompt "in the style of Picasso".
jacquesm · 1h ago
That a human made it to express their feelings.
perching_aix · 1h ago
What do I care? Can't even tell what feelings are supposedly being expressed there.
mm263 · 1h ago
Why do you care to connect with another human? To try to feel their emotions, what they tried to express? If you see no value in that, there's no discussion to have, honestly. For most people I know there's value in connecting with others and empathizing with their emotions.
petralithic · 1h ago
But they just said they don't get what emotions are meant to be expressed, so how can they try to feel his emotions?
jacquesm · 1h ago
That goes for all art. It either stirs you or it doesn't. I find https://www.youtube.com/watch?v=9tjstsWoQiw to be one of the most beautiful pieces ever recorded, others can't listen to it and think it is bland and a terrible recording.
You can't argue about taste.
bonoboTP · 21m ago
I don't think this is just taste. The painting was made in a specific historic context and commemorates the bombing of Guernica. Without knowing that context, it may be appreciated as a disembodied visual artifact, but that's not how art really works or ever worked. An influential artpiece usually states something relevant to the historic moment and intellectual Zeitgeist of the time.
You may like the music of Zombie by The Cranberries, but I'd say it belongs to the complete appreciation of it to know that it's about the Irish Troubles, and for that you need some background knowledge.
You may like to smoke weed to Bob Marley songs, but without knowing something about the African slave trade, you won't get the significance of tracks like 400 years.
For Guernica you also have to understand Picasso's fascination with primitive art, prehistoric cave art, children's drawings and abstraction, the historic moment when photography took over the role of realistic depiction, freeing painters to express themselves more in terms of emotional impressions and abstractions.
jacquesm · 2m ago
Yes, context is really important. But: JS Bach made a whole raft of music, and quite a large fraction of it was religiously inspired. In spite of that, it is perfectly possible to appreciate it at a deep emotional level without that particular spiritual connection. This is the genius of art to me: that it opens up an emotional channel between two individuals separated by time and space and manages to convey a feeling, as clear as day.
Take U2's October as a nice example. (You mentioned Zombie, incidentally one of my favorites; the anger and frustration in there never fail to hit me, and I can't listen to it too often for that reason.) Superficially it is a very simple set of lyrics (8 lines, I think) and an even simpler set of chords. And yet: it moves me. And I doubt any AI would have come up with it, or even a close approximation, if it wasn't part of the input. That's why I refuse to call AI generated stuff art. It's content, not art.
perching_aix · 1h ago
But then why wouldn't AI generated art be able to stir me? Why is a human being in the loop so important as to be supposedly essential?
jacquesm · 1h ago
Because it is mimicking human input. Effectively you are getting a mixture of many pieces of artwork that humans made distilled down into some sloppy new one that was made without feeling, purpose or skill and that can be described by its prompt, a few kilobytes at best. Original human art can only be approximated but never captured with 100% fidelity regardless of the bitrate, that is what makes it unique to begin with. Even an imitation by another human (some of which can be very good) could stir you in the exact same way but they'd be copies, not original works.
Anyway, this gets hairy quickly, that's why I chose to illustrate with a crappy recording of a magnificent piece that still captures that feeling - for me - whereas many others would likely disagree. Art is made by its creator because they want to and because they can, not because they are regurgitating output based on a multitude of inputs and a prompt.
"Paint me a Sistine Chapel" is going to yield different results no matter how many times you would give that same prompt to Michelangelo, depending on his mood, what happened recently, what he ate, and his health, as well as the season. That AI will produce the same result over and over again from the same prompt. It is a mechanistic transformation, not an original work; it reduces the input, it does not expand on it, it does not add its own feelings to it.
perching_aix · 1h ago
I think this is a reasonable counter in some respects, although I do also think it's specific to the current iteration of AI art.
It's a bit like when people describe how models don't have a will or the likes. Of course they don't, "they" are basically frozen in time. Training is way slower than inference, and even inference is often slower than "realtime". It just doesn't work that way from the get-go. They're also simply not very good - hence why they're being fed curated data.
In that sense, and considering history, I can definitely see why it would (and should?) be considered differently. Not sure this is what you meant, but this is an interesting lens, so thanks for this.
petralithic · 1h ago
Haven't these arguments been the same since Stable Diffusion came out? Someone (A) will say what you said, then someone else (B) will say, well humans remix as well, A: no that's different because we're humans not machines, B: there is no need to prefer a biological substrate over a silicon one; A: AI will produce the same result over and over, B: not if you change the temperature and randomize the seed.
It's tiresome to read the same thing over and over again and at this point I don't think A's arguments will convince B and vice versa because both come from different initial input conditions in their thought processes. It's like trying to dig two parallel tunnels through a mountain from different heights and thinking they'll converge.
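For what it's worth, B's seed-and-temperature rebuttal is easy to make concrete. The sketch below is a generic softmax sampler written for illustration, not any real image model's API; the function name and logit values are made up:

```python
# Toy sketch: a sampler's output is deterministic only when the seed is
# pinned. Vary the seed (or the temperature) and the same "prompt" (here,
# a fixed logit vector) yields different results.
import math
import random

def sample(logits, temperature, seed):
    """Draw one index from softmax(logits / temperature) with a fixed seed."""
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    r = rng.random() * sum(weights)
    cum = 0.0
    for i, w in enumerate(weights):
        cum += w
        if r < cum:
            return i
    return len(weights) - 1

logits = [2.0, 1.0, 0.5, 0.1]

# Same "prompt", same seed, same temperature: identical output every time.
assert sample(logits, 1.0, seed=42) == sample(logits, 1.0, seed=42)

# Different seeds: the same prompt produces different outputs.
assert len({sample(logits, 1.0, seed=s) for s in range(100)}) > 1

# Very low temperature: output collapses toward the argmax regardless of seed.
assert all(sample(logits, 0.01, seed=s) == 0 for s in range(100))
```

Real image generators wrap exactly this kind of seeded randomness, which is why "the AI always produces the same result" and "humans remix too" keep talking past each other.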
perching_aix · 40m ago
Ironically, for the first time, I think I found some perspective to the remix argument here.
Normally it's just like you say: I don't find the remixing argument persuasive, because I consider it to be a point of commonality. This time however, my focus shifted a bit. I considered the difference in "source set".
To be more specific, it kind of dawned on me how peculiar it is to engage in creating art as a human, given what a human life looks like. How different the "setup" is between a baby just kind of existing and taking in everything, which for the most part means supremely mundane, not at all artful or aesthetic experiences, and an AI model being trained on things people uploaded. The model's data will also have a lot of dull, irrelevant stuff, but not nearly in the same way or in the same amount, hitting at the same registers.
I still think it's a bit of a bird vs plane comparison, but then that is also what they are saying in a way. That it is a bird and a plane, not a bird and a bird. I do still take issue with refusing to call the result flight though, I think.
jacquesm · 42m ago
The day I see AI generated art and it moves me in the same way that human generated art does I will concede the point. So far all I've seen is more, not novel.
Art never was about productivity, even though there have been some incredibly productive artists.
Some of the artists that I've known were capable of capturing the essence of the subject they were drawing or painting in a few very crude lines and I highly doubt that an AI given a view would be able to do that in a way that it resonated. And that resonance is what it is all about for me, the fact that briefly there is an emotional channel between the artist and you, the receiver. With AI generated content there is no emotion on the sending side, so how could you experience that feeling in a genuine way?
To me AI art is distortion of art, not new art. It's like listening to multiple pieces of music at the same time, each with a different level of presence, out of tune and without any overarching message. It can even look skilled (skill is easy to imitate, emotion is not).
petralithic · 25m ago
I still don't get why you don't see it as a tool rather than the creator itself. The human sitting behind the desk is the one attaching their emotions to what they send, because they control what image they want to send; otherwise they reroll or redo their workflow. These days they can even edit the image with natural language, so they can build it up just as one does in Photoshop, only using words instead of a mouse.
jacquesm · 11m ago
> I still don't get why you don't see it as a tool and not the creator itself.
If after 33 comments in this thread and countless people trying to explain a part of it you don't get it that may be because you either don't want to get it or are unable to get it. Restating it one more time is not going to make a difference and I'm perfectly ok with you not 'getting it', so don't worry about it.
AI without real art as input is noise. It doesn't get any more concrete than that. Humans without any education at all and just mud and sticks for tools will spontaneously create art.
_DeadFred_ · 42m ago
There is no intention in either case. Just a machine doing machine things.
petralithic · 27m ago
The intention is in the human prompting or creating the workflow; the computer was never going to autonomously create images, why would it?
TeMPOraL · 58m ago
Don't also forget:
A: but AI only interpolates between training points, it can't extrapolate to anything new.
B: sure it can, d'uh.
petralithic · 1h ago
It's not. If one takes the fact that art is in the eye of the beholder [0], then yes, even AI art may stir you, especially as a human is the one generating at the end of the day, for a specific purpose and statement about what they want to convey.
There is a good part of the series Remembrance of Earth's Past (of which The Three Body Problem is the first book) where the aliens are creating art and it shocks people to learn that the art they're so moved by was actually created by non-humans. This is exactly what this situation with AI feels like, and not even to the same extent because again AI is not autonomously making images, it's still a human at the end of the day picking what to prompt.
> it's still a human at the end of the day picking what to prompt
I think that 'dutch people skating on a lake' or 'girl with a pearl earring' or 'dutch religious couple in front of their barn' without having an AI trained on various works will produce just noise. And if those particular works (you know the ones, right?) were not part of the input then the AI would never produce anything looking like the original, no matter how specific you made the prompt. It takes human input to animate it, and even then what it produces to me does not look original whereas any five year old is able to produce entirely original works of art, none of which can be reduced to a prompt.
Prompts are instructions, they are settings on a mixer, they are not the music produced by the artists at the microphones.
petralithic · 22m ago
Have you actually used image generators today? They can produce things they've never seen if only you describe the constituent pieces. Prompts are a compressed version of the image one wants to create, and these days you don't even need "prompts" per se; you can say, make a woman looking towards the viewer, now add a pearl earring, now adjust this and that, etc.
jacquesm · 6m ago
> Have you actually used image generators today?
Why would you ask this? It sounds like a lead-up to some kind of put down.
> It can produce things it's never seen if only you describe the constituent pieces.
It can produce things it's never seen based on lots of things that it has seen.
> Prompts are a compressed version of the image one wants to create
They emphatically are not. They are instructions to a tool on what relative importance to assign to all of the templates that it was trained on. But it doesn't understand the output image any more than it understood any of the input images. There is no context available to it in the purest sense of the word. It has no emotion to express because it doesn't have emotions in the first place.
> and these days you don't even need "prompts" per se, you can say, make a woman looking towards the viewer, now add a pearl earing, now adjust this and that etc.
That's just a different path to building up the same prompt. It doesn't suddenly cause the AI to use red for a dress because it thinks it is a nice counterpoint to a flower in a different part of the image because it does not think at all.
DyslexicAtheist · 48m ago
Nazis held the same belief.
perching_aix · 31m ago
> Nazis held the same belief.
Along with being against any form of animal cruelty.
They were also pretty obsessed with spiritualistic quackery.
Are we giving each other fun facts or what? Surely one does not need to go all the way to the nazis to find a Picasso hater? Or are you just following the footsteps of the blogpost author too?
andybak · 1h ago
The same Picasso that was notorious for churning them out towards the end of his career?
I'm being slightly flippant but I do think this is a motte and bailey argument.
Not every painting is a Guernica, nor does it need to be.
And not every aesthetically pleasing object is art. (And finally - art doesn't even have to be aesthetically pleasing. And actually finally "art" has a multitude of contradictory meanings)
dragonwriter · 1h ago
> No computer will ever empathize with the senseless inhumanity of war to produce such a work.
Neither will a paintbrush.
The tool doesn't need to, though.
exoverito · 1h ago
Needless to say, most humans are unoriginal parrots too; one need only look at the prevalence of mimetic desire. Few are capable of artistic genius like Picasso.
One technical definition of empathy is understanding what someone else is feeling. In war you must empathize with your enemy in order to understand their perspective and predict what they will do next. This cognitive empathy is basically theory of mind, which has been demonstrated in GPT-4.
If we do not assume biological substrate is special, then it's possible that AIs will one day have qualia and be able to fully empathize and experience the feelings of another.
It could be possible that new AI architectures with continuously updating weights, memory modules, evolving value functions, and self-reflection, could one day produce truly original perspectives. It's still unknown if they will truly feel anything, but it's also technically unknowable if anyone else really experiences qualia, as described in the thought experiment of p-zombies.
freehorse · 1h ago
> it's possible that AIs will one day have qualia
As the article says, we can discuss that the day it happens. "One day AI will have qualia" is no argument when discussing the AI that exists today.
jondwillis · 2h ago
We must unironically give the computer pain sensors. :( don’t hurt me mr. Basilisk, I’m just parroting someone else’s idea.
racl101 · 1h ago
Monkey's paw closes.
Now, just like you can with Studio Ghibli art, you can generate new images in the style of Guernica.
kelseyfrog · 1h ago
> No computer will ever empathize with the senseless inhumanity of war
My computer does. What evidence would change your mind?
saint_yossarian · 1h ago
What evidence convinced you?
kelseyfrog · 5m ago
I performed an "Affective Turing Test" with null results.
fridder · 2h ago
I do wonder if a significant portion of the hate is from the AI push coming from the executive level.
andybak · 1h ago
> replaced with a slightly less salacious word than the original in consideration for politeness.
Please don't. That offends me much more than a very mild word ever could.
stronglikedan · 1h ago
I think it's obvious virtue signaling, but I would never let something so insignificant actually offend me. Life's too short.
didibus · 2h ago
Hate can be emotional, but it can also have underlying rational causes.
For example, someone can feel like they already have to compete with people, and that's nature, but now they have to compete with machines too, and that's a societal choice.
sam_lowry_ · 2h ago
I had to search and found the word "horny".
oasisaimlessly · 2h ago
What was the original word?
jclulow · 2h ago
"horny"
lo_zamoyski · 2h ago
I'm not terribly interested in emotional reactions. This is too common of a problem: we think emoting is a substitute for reasoning. Many if not most people believe that if they feel something, then it must be true; the disagreeing party just doesn't "get it". We must learn to reason and make arguments.
I am interested in the intelligible content of the thing.
Also, AI does not reason. Human beings do.
petralithic · 1h ago
How can we be sure humans reason?
petralithic · 2h ago
I've talked to people like this and when you dig deep enough, it's a fear of the economic effects of it, not actually any strongly held belief of AI inherently not being intelligent or emotional. Similarly, and I'm speaking generally here, ask artists about coding AI and they won't care, and ask programmers about media generation AI and they also won't care. That's because AI outside their domain does not (ostensibly) threaten their livelihood.
hofrogs · 2h ago
I am not an artist, yet I care about media generation "AI", as in I resent it deeply.
petralithic · 2h ago
Like I said, I'm speaking generally. There are a few like you who do, for whatever reason, but most artists hate it because they, at the most basal level, see it as a threat, especially when it came out. You should've seen what engineers on HN said about GitHub Copilot when that first came out too.
Palomides · 2h ago
this is a claim shockingly contrary to what every artist I know, and I myself as an amateur, believe
petralithic · 1h ago
Which artists care about coding AI like Copilot? All the ones I talked to simply do not care. Regarding economic means, I asked them whether they'd care if they lived in a post scarcity society where they could make art all day and not have to worry about their material needs being met, ie they're rich, and it turns out if that were the case, they didn't care about what people did with AI, be it image generation or code generation.
xantronix · 1h ago
As an artist, I do not dread AI's artistic capabilities from a philosophical standpoint because its apparent "humanity" is a distilled average entirely divorced from the contexts in which its stolen art inputs are provided. In this way, it is categorically devoid of meaning.
As a software developer, I dread AI's capabilities to greatly accelerate the accumulation of technical debt in a codebase when used by somebody who lacks the experience to temper its outputs. I also dread AI's capabilities, at least in the short term, to separate me and others from economic opportunities.
petralithic · 1h ago
That's because, if I'm inferring correctly what you're implying in the last sentence, you work primarily as a software developer. Try telling a working artist your first paragraph or that they shouldn't worry about AI taking their commission work for example and see what they think.
Palomides · 1h ago
so here's the thing, artists like making the art, skipping the making leaves you with nothing
most artists I know are against AI because they feel it is anti-human, devaluing and alienating both the viewer and the creator
some can tolerate it as a tool, and some (as is long art tradition) will use it to offend or be contrarian, but these are not the common position
if I were a spherical cow in a vacuum with infinite time, and nobody around me had economic incentives to make things with it, I could, maybe, in the spirit of openness, tolerate knowing some people somewhere want to use it... but I still wouldn't want to see its output
petralithic · 47m ago
They don't have to use AI though, they can leave people who do alone. But that's not what I see, I see artists getting mad at the latter and when I dig deep, it turns out they're scared it'll take their digital commission work. This has primarily been my experience talking with artists I commission as well as people online on Twitter and reddit for example.
Palomides · 40m ago
sure, it's hard for an artist to compete on price with AI, and the ones who depend on this kind of ultra low budget work will have a hard time (and have a direct economic self-interest in advocating against)
but again, that's not what I see in the people around me
petralithic · 17m ago
And that's my point. It was never about the philosophy, it was always about the economics. That's what frustrates me, why lie? If it's money you want then ask for it, don't make up some bullshit.
jclulow · 1h ago
Where can I sign up for the post scarcity society? Asking for my artist friends.
petralithic · 1h ago
You can't, hence my point about their fear being economic, not philosophical.
magicalist · 1h ago
Sounds like you were maybe having some one-sided conversations with all the many artists you spoke to.
petralithic · 1h ago
Ah yes, because you disagree with me, I must have been having one sided conversations. I suppose some people just can't accept other people's experiences without denigrating them.
footy · 2h ago
I'm no artist (I even failed high school art) and I think AI media generation is a travesty.
eaglelamp · 1h ago
If you dig deep enough, isn't the same thing true of people like yourself? Do you truly believe that the large language models we currently have, not some fantasy AI of the distant future, are emotional and intellectual beings? Or are you more interested in the short-term economic gains of using them? Does this invalidate your beliefs? I don't think so; most everyday beliefs are related to economic conditions.
How could a practical LLM enthusiast make a non-economic argument in favor of their use? They're opaque, usually secretive, jumbles of linear algebra; how could you make a reasonable non-economic argument about something you don't, and perhaps can't, reason about?
petralithic · 1h ago
When did I say I believe AI to be intelligent or emotional? Of course I use it for economic factors, but I'm honest about it, not wrapping it up in some intellectual, solipsizing arguments. I'm not even sure what non-economic arguments you're talking about, my point is that at the end of the day most people care about the economic impact it might have on them, not anything about the technology itself.
eaglelamp · 1h ago
I don’t think the author is hiding his economic anxiety behind solipsism. He states plainly he doesn’t like the deskilling of work.
My point is why are your economic motivations valid while his aren’t?
petralithic · 1h ago
Who said my economic motivations are or aren't valid? My point is that people shouldn't lie, to others or to themselves, and should state their motivations plainly. While the author does do so, I am talking about other people who do hide behind solipsism. That is why my comment is not a top level comment about the article but a reply to a specific comment that says "one of my friends...", and hence why I said "people like this" where "this" refers to their friend, not the author.
diamond559 · 1h ago
And most "AI" evangelists are actually stock holders.
doctorpangloss · 2h ago
> I've talked to people like this and when you dig deep enough, it's a fear of the economic effects of it
You hear what you want to hear. You think fine artists - and really, how many working fine artists do you really know? - don't have sincere, visceral feelings about stuff, that have nothing to do with money?
petralithic · 1h ago
We can talk anecdata all day. I do know fine artists, for example sculptors and painters, as well as many digital creators, as I commission pieces from them for prints in my place, and I've talked to all of them about AI out of curiosity.
rsoto2 · 1h ago
I care because it's outright theft. That's what AI companies do and what you are an accessory to.
AI is not intelligent or emotional. It's not a "strongly held belief"; it simply hasn't been proven.
petralithic · 1h ago
It's as much theft as piracy is.
> AI is not intelligent or emotional.
Yes, I agree. My point is that people use arguments against these types of issues instead of stating plainly that their livelihood will be threatened. Just say it'll take your job and that's why you're mad. I don't understand why so many people try to dance around this issue and make it seem like it's some disagreement about the technology rather than the economics.
FredPret · 2h ago
Me too. I bounce off of any product landing page that has "AI" slapped on it, which lately is ~99% of them.
On the other hand, if I saw a product labelled "No AI bullshit" then I'd immediately be more interested.
But that's just me, the AI buzz among non-techies is enormous and net-positive.
Atlas667 · 1h ago
lol, marketing knows no bounds.
Almost like it's all emotional-level gimmicks anyways.
If I see "No AI bullshit" I'd be as skeptical as if it said "AI Inside". Corpos tryina squeeze a buck will resort to any and all manipulative tactics.
esseph · 2h ago
The AI buzz among non-techies inflates the bubble.
danielbln · 2h ago
Yet it is here to stay, won't go away and even if it won't get any better at the useful things it does, it is useful. The externalities are real, some can be removed, some mitigated. If you're a hater and a human, then you don't have to mitigate anything, of course.
Me, I hate the externalities, but I love the thing. I want to use my own AI, hyper optimized and efficient and private. It would mitigate a lot. Maybe some day.
myhf · 2h ago
> the useful things it does
It's weird how AI-lovers are always trying to shoehorn an unsupported "it does useful things" into some kind of criticism sandwich where only the solvable problems can be acknowledged as problems.
Just because some technologies have both upsides and downsides doesn't mean that every technology automatically has upsides. GenAI is good at generating these kinds of hollow statements that mimic the form of substantial arguments, but anyone who actually reads it can see how hollow it is.
If you want to argue that it does useful things, you have to explain at least one of those things.
Semiapies · 7m ago
And they're always desperately insisting that it won't go away and that you can't escape it. It smacks of Big Lie techniques.
schwartzworld · 1h ago
It's bad at
- Actually knowing things / being correct
- Creating anything original
It's good at
- Producing convincing output fast and cheap
There are lots of applications where correctness and originality matter less than "can I get convincing output fast and cheap". Other commenters have mentioned being able to vibe-code up a simple app, for example. I know an older man who is not great at writing in English (but otherwise very intelligent) who uses it for correspondence.
petralithic · 45m ago
> doesn't mean that every technology automatically has upsides
Who said "every technology"? We're talking about a specific one here, with specific upsides and downsides delineated.
yifanl · 2h ago
> Yet it is here to stay, won't go away
Source for this claim? Are you still using Groupon?
hex4def6 · 1h ago
Of course it's here to stay. There are models that are --right-now-- great at text-to-speech, speech-to-text, categorization, image recognition, etc etc. Even if progress stopped now, these models would be useful in their current state.
Your argument could just as easily be applied to social networks ("are you still using friendster?") or e-commerce ("are you still using pets.com?"). GPT3 or Kimi K2 or Mistral is going to become obsolete at some point, but that's because the succeeding models are going to be fundamentally better. That doesn't mean that they weren't themselves fit for a certain task.
sindriava · 1h ago
Comparing a general technology (AI) to a specific company (Groupon) is a category error. To your point, coupons still exist and people use them, and Anthropic might not exist in 2 years while AI will.
dpoloncsak · 1h ago
The last time we saw a bet from Wall St like the one we're seeing with AI was when they bet on the internet.
Do you still use the internet?
satisfice · 1h ago
I still use the Internet. The Internet is also a technology that undermines society.
petralithic · 44m ago
You are free not to use it if that is what you believe.
Timwi · 14m ago
You are also free to give away all your money and not participate in capitalism if you wish.
Wait, are you sure?
satisfice · 26m ago
No I'm not, you idiot.
turzmo · 1h ago
Maybe a better generalization: the last time [bubble] happened, do you still use [bubble]?
Depends on the nature of the bubble, doesn't it?
TeMPOraL · 52m ago
Indeed. Therefore, the past bubbles most similar to AI are the one around the Internet, and the earlier one around electricity.
brokencode · 2h ago
You must be living on a different planet if you think the adoption and societal impact of Groupon were ever remotely comparable to AI's.
yifanl · 1h ago
I assure you as someone working FoH at the time, Groupon's impact on me was far greater than AI ever could be.
brokencode · 1h ago
I’m talking about how it impacts society in general, not you specifically. Also, I don’t think you appreciate how deeply AI will affect your life in the future.
The next time you get a CT, for example, it might be an AI system that finds a lung nodule and saves your life.
Or for a negative possibility, consider how deepfakes could seriously degrade politics and the media landscape.
There are massive potential upsides and downsides to AI that will almost certainly impact you more than a coupon company.
sshine · 2h ago
People are.
Just like crypto.
Just look at the bitcoin hashrate; it’s a steep curve.
iLoveOncall · 2h ago
There are plenty of things that are useful and that have gone away. As long as GenAI stays unprofitable, it has every chance of disappearing if it stays as useless as it is right now.
ginko · 2h ago
There's people running stable diffusion locally on their systems for their own amusement. Do you think that will go away?
brian-armstrong · 37m ago
100% yes, at least down to a rounding error. If people stop pushing billions into training new versions, then the novelty will wear off very quickly. There are still many constraints on what it can do and people will generally lose motivation when they start finding those invisible boundaries on its capabilities. It'll be effectively a dead pursuit.
Mallowram · 2h ago
AI is irrelevant if it operates the arbitrary, that's the limit, face it.
petralithic · 43m ago
I don't understand this sentence. What is "operating the arbitrary?"
danielbln · 2h ago
AI is information.
Mallowram · 2h ago
Even Shannon knew the limits of information late in his career. AI is not information, it's signaling. And it embeds, without decipherment or segregation, dominance, bias, control, manipulation. The dark matter of language we can't extract.
"Shannon warned in 1956 that information theory “has perhaps been ballooned to an importance beyond its actual accomplishments” and that information theory is “not necessarily relevant to such fields as psychology, economics, and other social sciences.” Shannon concluded: “The subject of information theory has certainly been sold, if not oversold.” [Claude E. Shannon, “The Bandwagon,” IRE Transactions on Information Theory, Vol. 2, No. 1 (March 1956), p. 3.]"
zwnow · 2h ago
Disinformation, with more and more propaganda due to being vulnerable to bad actors. There's already evidence of people spreading propaganda through LLMs.
danielbln · 2h ago
Definitely, AI can be used for terrible things. Doesn't change that it's information and won't go away.
justsomejew · 32m ago
I think you are a real person, still you sound like a broken record.. "disinformation..", "propaganda".. "bad actors"..
You are the "bad actors", pumpkin. Worse than the other ones.
jerhewet · 2h ago
AI is "put Elmer's glue on your pizza so the ingredients won't slide off". AI is "three B's in blueberry".
Garbage in, garbage out. Which will always be the case when your AI is scraping stuff off of random pages and commentary on the internet.
s1mplicissimus · 1h ago
If only it were garbage in, garbage out - that would be solvable by better training data.
But it's much worse than that, because even if you fed it only good stuff, the output would still deteriorate.
pointing index finger at imaginary balloon: pfffffffffft
StopDisinfo910 · 2h ago
Amusingly, things are going with AI as with any complex topic nowadays: it's easier to hold a strong position than a nuanced one. So you see a lot of vapid articles either for or against, even (or especially) when it's unclear what exactly they're for or against, and very few insightful ones.
Plague of our ages I guess. Ironically AI might even make it worse.
codyb · 1h ago
I suspect we're in a bubble... and when it pops, the useful, profitable work will stay around. A bunch of things will also disappear.
And then we'll wait till the next bubble.
Gains seem to have leveled off tremendously. As far as I can tell folk were saying "Wow, look at this, I can get it to generate code... it does really well at tests, and small well defined tasks"
And a year or a year and a half later we're at like... that + "it's slightly better than it was before!" lol.
So, yea, I dunno, I suspect we'll see a fair amount fall away and some useful things to continue to be used.
TeMPOraL · 39m ago
My personal view is that there are broadly two groups of people, and thus two perspectives, related to the AI hype. I call them the Beneficiaries, and the Investors.
Beneficiaries are the ones who care about the actual tech and what it can do for them. Investors are the ones who care about making money off the tech. For the Beneficiaries, AI hype is about right where it should be, given the demonstrable power of the tech itself. For Investors, it may be a dangerous bubble - but then I myself am a Beneficiary, not an Investor, so I don't care.
I don't care which companies get burned on this, which investors will lose everything; businesses come and go, but foundational inventions remain. The bubble will burst, and then the second wave of companies will recycle what the first wave left; the tech will continue to be developed and become even more useful.
Or put another way: I don't care which of the contestants wins a tunnel-digging race. I only care about the tunnels being dug.
See e.g. history of rail lines, and arguably many more big infrastructure projects: people who fronted the initial capital did not see much of a return, but the actual infrastructure they left behind as they folded was taken over and built upon by subsequent waves of companies.
utyop22 · 1h ago
What profitable work? Please do post numbers in the form of free cash flows to the firm (or equity) ;).
Also you seem to forget that irrespective of cash profits in the future, will this investment generate excess returns? Nope. That's what investors care about. It's not even profit, actually.
notfed · 2h ago
"I strongly feel that AI is an insult to life itself." - Hayao Miyazaki
I'm going to start using this quote.
brabel · 2h ago
You should see the context in which he said that. It was 2016; there was no ChatGPT he could have been talking about. It was some truly bizarre art that was going on back then, a sort of humanoid form trying to learn how to walk without being given instructions... it would do disturbing things like use its head as if it were a limb and move in completely unnatural ways... that's what he, and most who watch that video, found so disturbing. But of course, taking things out of context and using a powerful sentence as if it referred to something entirely different to make your own point is more fun.
exdeejay_ · 1h ago
For more context to whoever is interested, the dialogue following the quote goes like this:
Studio Ghibli producer, Suzuki: "So, what is your goal?"
ML Developer: "Well, we would like to build a machine that can draw pictures like humans do."
<jump cut>
Miyazaki VO: "I feel like we are nearing to the end of times."
"We humans are losing faith in ourselves."
Of course, the form of AI has changed over the years, but the claim that this quote could be tied to Miyazaki's general view on having machines create art is not totally baseless.
petralithic · 41m ago
Genetic programming. It's actually quite an interesting method of creating programs, certainly not like LLMs but cool nonetheless.
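For anyone unfamiliar with the technique, here is a deliberately tiny, illustrative sketch (the target function, operators, and all names are invented for this example; it is not the system from the demo). It evolves random arithmetic expression trees toward a target function using nothing but mutation and keep-if-no-worse selection:

```python
import random
import operator

random.seed(42)  # make this toy run repeatable

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
TERMINALS = ['x', 1.0, 2.0]

def random_tree(depth=3):
    """Grow a random expression tree over x and a few constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # mean squared error against a hidden target, f(x) = x*x + x
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6)) / 11

def mutate(tree):
    # replace a random subtree with a freshly grown one
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

# (1+1)-style evolutionary loop: keep a mutant only if it is no worse
best = random_tree()
init_fit = fitness(best)
best_fit = init_fit
for _ in range(2000):
    candidate = mutate(best)
    f = fitness(candidate)
    if f <= best_fit:
        best, best_fit = candidate, f

print(best_fit)
```

The run is stochastic, so the final error varies; the point is only to show the mutate-evaluate-select loop that distinguishes this family of methods from gradient-trained models like LLMs.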
myhf · 1h ago
Do you think that the more refined version is somehow less of an insult to life itself? It wasn't a statement about how refined the art style is. It's about the meaning and intentionality that goes into deliberate communication, and how tools designed specifically to skip over the decision making and deliberation are removing the most important part of the result.
Look at all the AI-written and AI-illustrated articles being published this year. Look at how smooth the image slop is. Look at how fluent the text slop is. Higher quality slop doesn't change the fact that nobody could be bothered to write the thing, and nobody can be bothered to read it.
bee_rider · 2h ago
Lots of quotes are out of context; it's a great line with the context stripped from it.
antegamisou · 1h ago
> It was no ChatGPT he was talking about
As if it's in any way less horrifying having the entire Internet infested with AI slop.
TeMPOraL · 1h ago
If GP was an LLM, we'd say they hallucinated this argument.
Wish some of the AI detractors realized when they're doing a worse job reasoning than the LLMs they criticize.
badsectoracula · 2h ago
AFAIK that wasn't a general response to AI but of a very particular implementation of a procedural animation system shown to him by some (IIRC) students for the movement of a disabled person and he found it distasteful as it reminded him of someone he knows who is disabled and had issues moving.
He's right that to someone whose art is about capturing the world through a child's eyes, the dreamlike consonance of everyday life with simple fantasy, this is abominable.
He didn't say this about AI generally as far as I know. He was shown some kid's art project using an earlier AI and it just looked extremely uncanny in the way that is typical of bad generative art.
So that's definitely a misquote, though I wouldn't be surprised if Miyazaki dislikes AI.
randcraw · 2h ago
Or maybe, "I strongly feel that Artificial Intelligence is an insult to Human Intelligence."
the_af · 2h ago
Have you watched the video that goes with it? It's online, and very amusing.
Regardless of how you feel about AI, the specific instance Miyazaki was reacting to was, indeed, an insult to life itself!
frozenseven · 1h ago
It's out of context. They used reinforcement learning to make a ragdoll move. So much drama over nothing.
Kuinox · 2h ago
You changed the quote:
His statement was about a specific technology, an AI that made a 3D character move like a zombie.
The author is also changing the subject of the quote.
He said it reminded him of a disabled friend, and that this technology was an insult to life itself.
tzumaoli · 1h ago
It's interesting to see the trend of attitudes towards GenAI on Hacker News throughout the years. This is totally vibe-based and I don't have numbers to back it up, but back in 2022-2023, the site was dominated by people who mostly treated GenAI as a curious technology without too much attachment, plus a non-trivial number of folks who were very skeptical of the tech. More recently I see a lot more people who see themselves as evangelists and try very hard to boost/advocate the technology (see all the "LLM coding changed my life" posts). It seems the tide has turned back a little bit again, since we now see this kind of post surfacing.
For me, I kind of wish this site would go back to the good old days when people just shared their nerdy niche hacker things instead of filling the front page with the same arguments we see on the other parts of the internet over and over again. ; ) But granted, I was attracted by the clickbait title too, so I can't blame others.
petralithic · 38m ago
Curious technology? People were foaming at the mouth about "license concerns" when GitHub Copilot was first announced, saying they're going to boycott Microsoft. But just like all things, over time people realize they're not as good or bad as initially thought. I noticed this too with media generation, people on Twitter were very mad about it and now many of them use Photoshop's AI features.
TeMPOraL · 37m ago
I don't pay much attention to the submission themselves, but I do care what the fellow HN-ers think, and my own "vibe-based" perspective is that the voices have been predominantly negative for many years now, and only grow even more so.
bonoboTP · 5m ago
HN is usually negative, cynical, skeptical, eyerolling, regardless of topic.
Just the other day someone posted the ImageNet 2012 thread (https://news.ycombinator.com/item?id=4611830), which was basically the threshold moment that kickstarted deep learning for computer vision. Commenters claimed it doesn't prove anything, it's sensational, it's just one challenge with a few teams, etc. Then there is the famous comment when Dropbox was created that it could be replaced by a few shell scripts and an ftp server.
dpoloncsak · 1h ago
> Critics have already written thoroughly about the environmental harms, the reinforcement of bias and generation of racist output, the cognitive harms and AI supported suicides, the problems with consent and copyright...
This paragraph really pisses me off and I'm not sure why.
> Critics have already written thoroughly about the environmental harms
Didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?
> the reinforcement of bias and generation of racist output
Im uneducated here, honestly. I don't ask a lot of race-based questions to my LLMS I guess
>the cognitive harms and AI supported suicides
There is constant active rhetoric around the sycophancy, and ways to reduce this, right? OpenAI just made a new benchmark specifically for this. I won't deny it's an issue, but to act like it's being ignored by the industry is a complete miss.
>the problems with consent and copyright
This is the best argument on the page imo, and even that is highly debated. I agree with "AI is performing copyright infringement" and see constant "AI ignores my robots.txt". I also grew up being told that ANYTHING on the internet was for the public, and copyright never stopped *me* from saving images or pirating movies.
Then the rest touches on ways people will feel about or use AI, which is obviously just as much conjecture as anything else on the topic. I can't speak for everyone else, and neither can anyone else.
bayindirh · 1h ago
HPC admin here.
A "small" 7-rack, SOTA CPU cluster uses ~700 kW of energy for computing, plus there's the energy requirement of cooling. GPUs use much more in the same rack space.
In DLC (direct liquid cooling) settings you supply ~20 °C water from the primary circuit to the heat exchanger, get it back at ~40 °C, and then pump this heat out to the environment, plus the thermodynamic losses.
This is a "micro" system when compared to big boys.
How can there be no environmental harm when you need to run a power plant on premises and pump that much heat, at a much bigger scale, into the environment 24/7?
Who are we kidding here?
When this is done for science and intermittently, both the grid and the environment can tolerate this. When you run "normal" compute systems (e.g. serving GMail or standard cloud loads), both the grid and environment can tolerate this.
But running at full power and pumping this much energy in and heat out to train AI and run inference is a completely different load profile, and it is not harmless.
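As a rough sanity check of the scale involved (assuming the ~700 kW figure above is rejected entirely through the water loop with the 20 to 40 °C rise; illustrative numbers, not measurements from any specific system):

```python
# Back-of-envelope check: what coolant flow and daily heat a ~700 kW
# cluster implies, assuming all IT load is removed by the primary water loop.
C_P_WATER = 4186.0  # specific heat of water, J/(kg*K)

it_load_w = 700_000.0    # ~700 kW compute load
delta_t_k = 40.0 - 20.0  # supply/return temperature difference, K

# Q = m_dot * c_p * dT  =>  m_dot = Q / (c_p * dT)
flow_kg_s = it_load_w / (C_P_WATER * delta_t_k)

# Heat rejected to the environment per day, in kWh
kwh_per_day = it_load_w / 1000.0 * 24.0

print(f"coolant flow: {flow_kg_s:.1f} kg/s, heat rejected: {kwh_per_day:.0f} kWh/day")
```

That comes out to roughly 8.4 kg/s of water and about 16.8 MWh of heat per day for a single "micro" system, before counting cooling overhead and thermodynamic losses.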
> the cognitive harms and AI supported suicides
Extensive use of AI has been shown to change the brain's neural connections and make some areas of the brain lazy. There are a couple of papers on this.
There was a 16-year-old boy's ChatGPT-fueled death on the front page today, BTW.
> This is the best argument on the page imo, and even that is highly debated.
My blog is strictly licensed under a non-commercial, no-derivatives license. AI companies get my text, derive from it, and sell it. No consent, no questions asked.
The same models consume GPL and source-available code alike and offer their derivations to anyone who pays, infringing both licenses in the process.
Consent & copyright are a big problem in AI, and the companies want us to believe otherwise.
mcpar-land · 1h ago
> didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?
> There is constant active rhetoric around the sycophancy, and ways to reduce this, right? OpenAI just made a new benchmark specifically for this.
We have investigated ourselves and found no wrongdoing
> Im uneducated here, honestly. I don't ask a lot of race-based questions to my LLMS I guess
Do you have to ask a race-based question to an LLM for it to give you biased or racist output?
nerevarthelame · 1h ago
> Didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?
I don't think they have, no. Perhaps I'm overlooking something, but their most recent technical paper [0], published less than a week ago, states, "This study specifically considers the inference and serving energy consumption of an AI prompt. We leave the measurement of AI model training to future work."
> Im uneducated here, honestly. I don't ask a lot of race-based questions to my LLMS I guess
You can't even ask it anything out of genuine curiosity; it starts to scold you and assumes you are trying to be racist. The conclusions I'm hearing are weird. It reminds me of that one Google engineer who quit or got fired after saying AI is racist or whatever back in like 2018 (edit: 2020).
simianwords · 1h ago
Don't try to argue using logic against a person who came to their position primarily through emotions!
All these points are just trying to forcefully legitimise his hatred.
the_other · 1h ago
The article doesn't say that. The article says the author won't do the work of explaining their position to the reader. It doesn't say they haven't done that work for themselves. I read it as saying they had done some undisclosed amount of work informing themselves such that they could reach their position: thinking, reading articles, etc.
Also, I think their lean towards a political viewpoint is worth some attention. The point is a bit lost in the emotional ranting, which is a shame.
(To be fair, I liked the ranting. I appreciated their enjoyment of the position they have reached. I use LLMs, but I worry about the energy usage and I'm still not convinced by the productivity argument. Their writing echoed my anxiety and then ran with it into glee, which I found endearing.)
schwartzworld · 1h ago
> Didn't google just prove there is little to no environmental harm
I'd be interested to see that report as I'm not able to find it by Googling, ironically. Even so, this goes against pretty much all the rest of the reporting on the subject, AND Google has financial incentive to push AI, so skepticism is warranted.
> I don't ask a lot of race-based questions to my LLMS I guess
The reality is that more and more decision making is getting turned over to AIs. Racism doesn't have to just be n-words and maga hats. For example, this article talks about how overpoliced neighborhoods trigger positive feedback loops in predictive AIs https://www.ohchr.org/en/stories/2024/07/racism-and-ai-bias-...
> Copyright never stopped me from saving images or pirating movies.
I think we could all agree that right-clicking a copyrighted image and saving it is pretty harmless. Less harmless is trying to pass that image off as something you created and profiting from it. If I use AI to write a blog post, and that post contains plagiarism, and I profit off that plagiarism, it's not harmless at all.
> I also grew up being told that ANYTHING on the internet was for the public
Who told you that? How sure are you they are right?
Copilot has been shown to include private repos in its training data. ChatGPT will happily provide you with information that came from textbooks. I personally had SunoAI spit out a song whose lyrics were just Livin' On A Prayer with a couple of words changed.
We can talk about the ethical implications of the existence of copyright and whether or not it _should_ exist, but the fact is that it does exist. Taking someone else's work and passing it off as your own without giving credit or permission is not permitted.
mrsilencedogood · 1h ago
"This is the best argument on the page imo, and even that is highly debated. I agree with "AI is performing copyright infringement" and see constant "AI ignores my robots.txt". I also grew up being told that ANYTHING on the internet was for the public, and copyright never stopped me from saving images or pirating movies."
I think the main problem for me is that these companies benefit from copyright - by beating anyone they can reach with the DMCA stick - and are now also showing they don't actually care about it at all and when they do it, it's ok.
Go ahead, AI companies. End copyright law. Do it. Start lobbying now.
(They won't, they'll just continue to eat their cake and have it too).
ACCount37 · 1h ago
Lawyers of all the most beloved companies - Disney, New York Times, book publishers, music publishers and more - are now engaged in court battles, trying to sue all kinds of AI companies for "copyright infringement".
So far, case law is shaping up towards "nope, AI training is fair use". As it well should.
_DeadFred_ · 28m ago
If your product wouldn't exist without inputting someone else's product, it is derivative of that someone else's product. This isn't a human learning. This is a corporate, for profit product, it is derivative, and violates copyright.
dpoloncsak · 1h ago
Yeah, it's a fair point. We have seen a clear abuse of our copyright system.
sindriava · 1h ago
I appreciate this response. The environmental impact is such a red herring it's not even funny. Somehow these statements never include the impact of watching Netflix shows or doing data processing manually.
didibus · 1h ago
They might hate those too?
It's pretty clear there are impacts: AI needs energy, consumes materials, creates trash.
You probably just don't mind it. The facts are still facts; the conclusion is different. You assess that it's not a big concern in the grand scheme and worth it for the pros. The author doesn't care much for the pros, so any environmental impact is a net loss for them.
I feel both takes are rational.
sindriava · 1h ago
They might be rational, but taking things out of context as much as happens with any AI / environment narrative gives off a strong "arsenic-free cauliflower" smell.
1. Dismiss it by believing the projections are very wrong and much too high
2. Think 20% of all energy consumed isn't that bad.
3. Find it concerning environmentally
All takes have some weight behind them in my opinion. I don't think this is a case of "arsenic-free cauliflower", maybe unless you claim #1, but that claim can't really invalidate the others on their rationale; they make an assumption on the available data and reason from it, and the data doesn't show ridiculously small numbers like it does in the cauliflower case.
sindriava · 1h ago
I can't speak for you but I'm certainly not qualified to opine on the predictions so I won't address the 20% figure since I don't find it relevant.
> data centers account for 1% to 2% of overall global energy demand
So does the mining industry. Part of that data center consumption is the discussion we are having right now.
I find that in general energy doesn't tend to get spent unless there's something to be gained from it. Note that providing something that uses energy but doesn't provide value isn't a counterexample for this, since the greater goal of civilization seems to be discovering valuable parts of the state space, which necessitates visiting suboptimal states absent a clairvoyant heuristic.
I reject the statement that energy use is bad in principle and, pending a more detailed ROI analysis of this, I think this branch of the topic has run its course, at least for me :)
didibus · 13m ago
> so I won't address the 20% figure
Ok, but that's the figure that would be alarming, AI is projected to consume 20% of the global energy production by 2030... That's not like the mining industry...
> I find that in general energy doesn't tend to get spent unless there's something to be gained from it
Yes, you'd fall in the #2 conclusion bucket. This is a value judgement, not a factual or logical contradiction. You accept the trade off and find it worth it. That's totally fair, but in no way does it remove or mitigate the environmental impact argument, it just judges it an acceptable cost.
andybak · 1h ago
I think I get "arsenic-free cauliflower" from context but searching brings up no sources. Did you coin that phrase or is my non-google-fu just weak?
sindriava · 1h ago
Huh, my search is also turning up nothing. I could swear I heard a story about cauliflower originally being yellow and getting replaced with the white cultivar due to the guy who grew it marketing it as "arsenic-free" cauliflower despite the fact that the yellow one had no arsenic to begin with. Either I'm getting Mandela effected or I'm hallucinating -- which of course only AI models are capable of ;)
lostmsu · 1h ago
They would be rational if the author also produced everything they consume off the earth and hosted this very slop on a tree. Otherwise they needed hardware produced by other humans, and those humans used the things mentioned above, and probably AI too.
But as it stands the author indirectly loves Netflix.
jacobsenscott · 1h ago
Uh, we've been doing data processing for nearly 80 years, and watching Netflix for nearly 20 years. Suddenly we need to tile the earth with data centers, build power plants, burn all the fuels we can, and will "need to get to fusion" (per Sam) to run AGI. He also said "if we need to burn a little more gas to get there, that's fine". We'll never get to fusion or AGI, but we will destroy the earth to put a few more dollars in the pockets of the 0.01%.
You don't see the difference, or are you willfully ignorant?
TeMPOraL · 12m ago
You do understand what the "exponential" in "exponential growth" means?
Yes, it means that "suddenly" we need to do more of everything than we did in the entirety of human history until a few years ago. The same was true a few years ago. And a few years before that. And so on.
That's what exponential growth means. Correct for that, and suddenly we're not really doing things that much faster "because AI" than we'd be doing them otherwise.
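The point about correcting for the baseline can be sketched numerically. The growth rates below are illustrative assumptions, not measured figures:

```python
# Under steady exponential growth, the most recent period rivals
# everything that came before it combined. Rates are illustrative.
def latest_vs_history(rate: float, periods: int) -> float:
    """Ratio of the newest period's amount to the sum of all earlier periods."""
    amounts = [rate ** t for t in range(periods)]
    return amounts[-1] / sum(amounts[:-1])

# 100% yearly growth (doubling): the newest year exceeds all prior years combined.
print(latest_vs_history(2.0, 30))  # slightly above 1.0
# 40% yearly growth: the newest year is ~40% of all prior years combined.
print(latest_vs_history(1.4, 30))
```

On any such curve, "we suddenly need more than all of history" is true in every year, so the comparison that matters is against the curve, not against a flat history.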
sindriava · 1h ago
Do you honestly expect anyone to believe you're trying to take part in a discussion with that last statement? I appreciate this topic has your emotions running hot, but this is HN, not Reddit. Please leave that kind of talk at the door.
sonofhans · 1h ago
> This paragraph really pisses me off and I'm not sure why.
No hate, but consider — when I feel that way, it’s often because one of my ideas or preconceptions has been put into question. I feel like it’s possible that I might be wrong, and I fucking hate that. But if I can get over hating it and figuring out why, I may learn something.
Here’s an example:
> Didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?
Consider that Google is one of the creators of the supposed harm, and thus trusting them may not be a good idea. Tobacco companies still say smoking ain't that bad.
The harm argument is simple — AI data centers use energy, and nearly all forms of energy generation have negative side effects. Period. Any hand waving about where the energy comes from or how the harms are mitigated is, again, bullshit — energy can come from anywhere, people can mitigate harms however they like, and none of this requires LLM data centers.
TeMPOraL · 29m ago
> The harm argument is simple — AI data centers use energy, and nearly all forms of energy generation have negative side effects. Period. Any hand waving about where the energy comes from or how the harms are mitigated is, again, bullshit — energy can come from anywhere, people can mitigate harms however they like, and none of this requires LLM data centers.
Presented like this, the argument is complete bullshit. Anything we do consumes energy, therefore requires energy to be supplied, production of which has negative side effects, period.
Let's just call it a day on civilization and all (starve to death so that the few survivors can) go back to living in caves or up the trees.
The real questions are, a) how much more energy use are LLMs causing, and b) what value this provides. Just taking this directly, without going into the weeds of meta-level topics like the benefits of investment in compute and energy infrastructure, and how this is critical to solving climate problems - just taking this directly, already this becomes a nothing-burger, because LLMs are by far some of the least questionable ways to use energy humanity has.
827a · 1h ago
The idea that these things cause "minimal" environmental harm is utterly laughable. It's Orwell-level doublespeak. Am I seriously to believe that Musk wants to run 50M H100s in the coming years, an amount that might equate to 60GW of power draw on the low end, roughly 10% of the entire US power draw, and that this won't have significant environmental consequences?
Of course, they hide the truth in plain sight: inference is a drop in the ocean compared to training.
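The 60GW figure is simple multiplication to check. The per-GPU wall power (TDP plus host and cooling overhead) and the US demand baseline below are rough assumptions, not measurements:

```python
# Back-of-envelope check of the power figures above. Inputs are assumptions.
GPUS = 50_000_000        # hypothetical fleet size from the comment
WATTS_PER_GPU = 1_200    # assumed: ~700 W TDP plus host/cooling overhead
US_AVG_DEMAND_W = 475e9  # assumed: average US electricity demand, ~475 GW

fleet_w = GPUS * WATTS_PER_GPU
print(fleet_w / 1e9)              # 60.0 GW
print(fleet_w / US_AVG_DEMAND_W)  # ~0.13, on the order of the 10% claimed
```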
merksoftworks · 1h ago
What I will say about sycophancy: the recent rollback that OpenAI went through does appear like a clear attempt to push the envelope on dark patterns wrt AI assistants. Engagement-optimized assistants, pornography, and tooling are inherently misaligned with the productivity and wellbeing of their users, in the same way that engagement-maximized social media is inherently misaligned with the social wellbeing of its users.
indoordin0saur · 1h ago
It bugged me too. There are some legitimate criticisms of AI, but the author has some laughably bad ones mixed in there with the good. The way he just presents these criticisms as self-evidently true and handwaves away any objection is a very lazy appeal to authority.
danso · 1h ago
> I'm uneducated here, honestly. I don't ask a lot of race-based questions to my LLMs I guess
You're not uneducated, but this is a common and fundamental misunderstanding of how racial inequity can afflict computational systems, and the source of the problem is not (usually) something as explicit as "the creators are Nazis".
For example, early face-detection/recognition cameras and software in Western countries often had a hard time detecting the eyes on East Asian faces [0], denying East Asians and other people with "non-normal" eyes streamlined experiences for whatever automated approval system they were beholden to. It's self-evident that accurately detecting a higher variety of eye shapes would require more training complexity and cost. If you were a Western operator, would it be racist for you to accept the tradeoff for cheaper face detection capability if it meant inconveniencing a minority of your overall userbase?
Well, thanks to global market realities, we didn't have to debate that for very long, as any hardware/software maker putting out products inherently hostile to 25% of the world's population (who make up the racial majority in the fastest growing economies) weren't going to last long in the 21st century. But you can easily imagine an alternate timeline in which Western media isn't dominant, and China & Japan dominate the face-detection camera/tech industry. Would it be racist if their products had high rates of false negatives for anyone who had too fair of skin or hair color? Of course it would be.
Being auto-rejected as "not normal" isn't as "racist" as being lynched, obviously. But as such AI-powered systems and algorithms have increasing control in the bureaucracies and workflows of our day to day lives, I don't think you can say that "racist output", in the form of certain races enjoying superior treatment versus others, is a trivial concern.
> Didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?
That's a crazy argument to accept from one of the lead producers of the technology. It's up there with arguing that ExxonMobil just proved oil drilling has no impact on global warming. I'm sure they're making the argument, but they would be doing that, wouldn't they?
byronic · 2h ago
A marvelous article that gave voice to things I can't articulate appropriately (although, I guess, now I can).
tanvach · 2h ago
I've noticed that the younger you are, the more likely you are to be in love with AI. So the tide will turn eventually, for better or worse. I talked to someone recently who claims with no irony that 'AI has zero downside'.
I don't hate AI. I hate the people who're in love with it. The culture of people who build and worship this technology is toxic.
petralithic · 35m ago
Isn't that the nature of all technologies? I'm sure people 50 years ago thought the internet wasn't going to be a big deal, like Krugman.
racl101 · 1h ago
Not surprising.
From the point of view of a typical, not very curious kid or teen, AI seems like a godsend. Now you don't have to put much effort into a lot of things you didn't want to do to begin with.
_DeadFred_ · 19m ago
Who at that age wouldn't love an all-knowing computer that also happens to think everything you think 'really cuts to the crux' and is deeply profound and smart?
iLoveOncall · 1h ago
It's not about age, it's about experience.
didibus · 1h ago
I love this article, I'm not an AI hater personally, but I doubt an AI could have written it. And in a way, that's as compelling of an argument as the content of the article itself.
Honestly, the first paragraph is packed full of good talking points; there's definitely a lot of ignoring of the cons of AI happening. I try to remember how I felt when social media first appeared, and I recall loving it, being part of all the hype, finding it amazing, using it all the time...
pmdr · 2h ago
Is Ed Zitron's newsletter banned here? I haven't seen a single article of his on HN and he's been ranting about AI for years now.
footy · 2h ago
I think it just doesn't do very well and likely gets flagged often. If I type his domain name into the search bar there are only three articles.
pmdr · 2h ago
I mean, I get that the tone might not be liked by everyone, but his work does raise some IMO valid concerns about the economics of LLMs, which I don't see discussed around here very often. Then again, I get that HN has AI-friendly owners.
footy · 2h ago
I agree with you, but until very recently it was super rare to see anything other than techno-optimism about LLMs on this site. I assume partially for the reason you bring up.
petralithic · 34m ago
That's definitely not true, read the initial threads about GitHub Copilot and Stable Diffusion on HN.
turzmo · 1h ago
I too am an AI hater, and I generally agree with the sentiment, but that Miyazaki quote was taken far out of context.
Refreeze5224 · 2h ago
This is wonderful. It captures so many of the issues of AI, and without apology.
sarchertech · 2h ago
"Their dream is to invent new forms of life to enslave."
That seems like a succinct way to describe the goal to create conscious AGI.
ACCount37 · 1h ago
Who has "the goal to create conscious AGI", exactly?
AI industry doesn't push for "consciousness" in any way. What AI industry is trying to build is more capable systems. They're succeeding.
You can't measure "consciousness", but you sure can measure performance. And the performance of frontier AI systems keeps improving.
sarchertech · 1h ago
OpenAI openly has a goal to build AGI.
We don't know if AGI without consciousness is possible. Some people think that it's not. Many people certainly think that consciousness might be an emergent property that comes along with AGI.
>AI industry doesn't push for "consciousness" in any way. What AI industry is trying to build is more capable systems.
If you're being completely literal, no one wants slaves. They want what the slaves give them. Cheap labor, wealth, power etc...
ACCount37 · 50m ago
We don't know if existing AI systems are "conscious". Or, for that matter, if an ECU in a year 2002 Toyota Hilux is.
We don't even know for certain if all humans are conscious either. It could be another one of those things that we once thought everyone has, but then it turned out that 10% of people somehow make do without.
With how piss poor our ability to detect consciousness is? If you decide to give a fuck, then best you can do for now is acknowledge that modern AIs might have consciousness in some meaningful way (or might be worth assigning moral weight to for other reasons), which is what Anthropic is rolling with. That's why they do those "harm reduction" things - like letting an AI end a conversation on its end, or probing some of the workloads for whether an AI is "distressed" by performing them, or honoring agreements and commitments they made to AI systems, despite those AIs being completely unable to hold them accountable for it.
Of course, not giving a fuck about any of that "consciousness" stuff is a popular option too.
brabel · 2h ago
While some people support AGI because they yearn for "new forms of life to enslave", I think it's fair to say most people who look forward to AGI want it because it means they may find solutions to very difficult problems we just can't solve with our own intelligence. It may be a pipe dream, but I can understand why people would want to believe that.
sarchertech · 1h ago
I doubt many slaveholders want slaves just to own slaves. They want the useful things the slaves can provide for them.
TeMPOraL · 9m ago
Right. But that's also why we invented robotics, automation, the entire field of software engineering, and - going in the other direction - specialization of labor.
NBJack · 2h ago
The TV show Pantheon did a really cool job of exploring super intelligence from a very personal perspective, disguised in part as a scifi about living forever.
(Mild spoiler): It has a basic plot point about uploaded humans being used to tackle problems as unknowing slaves and resetting their memories to get them to endlessly repeat tasks.
robochat · 1h ago
"Valuable Humans in Transit and Other Stories" by Qntm has some good (harrowing) stories about human uploads too.
indoordin0saur · 1h ago
I'm certainly on the "AI is over-hyped" bus, so was excited to read this but the links to very fringe political websites at the beginning makes me question the judgement of the author.
yahoozoo · 1h ago
You mean the New Socialist isn’t a credible, unbiased authority on why on AI is fascist?
rsoto2 · 1h ago
Palantir, AWS, and Anthropic are being used to mass-slaughter children and surveil/target journalists. The entire industry is infected with fascism and moral decay.
Group_B · 2h ago
Maybe if he keeps hating it more and more it'll go away. Or maybe we just have to combine all our AI hate together and AI itself will cease to exist.
Mallowram · 2h ago
The problem is that neither AI nor humans understand language.
Words are the most indirect form of perception imaginable. Both Aristotle and Cassirer knew this; AI demos this. The writer doesn't grasp how bad we have it either way.
"I became a hater by doing precisely those things AI cannot do: reading and understanding human language; thinking and reasoning about ideas; considering the meaning of my words and their context"
lo_zamoyski · 2h ago
> Words are the most indirect form of perception imaginable. Both Aristotle and Cassirer knew this
What?
Mallowram · 2h ago
Aristotle: There are no contradictions.
Cassirer: “Only when we put away words will we be able to reach the initial conditions; only then will we have direct perception. All linguistic denotation is essentially ambiguous–and in this ambiguity, this “paronymia” of words is the source of all myths…this self-deception is rooted in language, which is forever making a game of the human mind, ever ensnaring it in that iridescent play of meanings…even theoretical knowledge becomes phantasmagoria; for even knowledge can never reproduce the true nature of things as they are but must frame their essence in “concepts.” Consequently all schemata which science evolves in order to classify, organize and summarize the phenomena of the real turn out to be nothing but arbitrary schemes. So knowledge, as well as myth, language, and art, has been reduced to a kind of fiction–a fiction that recommends its usefulness, but must not be measured by any strict standard of truth, if it is not to melt away into nothingness.”
Cassirer Language and Myth
utyop22 · 5m ago
This is beautiful.
I also had a similar epiphany 3 days ago - once it hits you and you understand it, you can see clearly why LLMs are destined to crash and burn in their present form (good luck to those who will have to answer the questions regarding the money dumped into it).
What will come out of the investment will not justify what has been invested (for anyone who thinks otherwise, PLEASE GO AHEAD AND DO A DCF VALUATION!) and it will have a depressing effect on future AI investment.
siliconsorcerer · 1h ago
I think this is just an extension of the idea that "only 20% of communication is verbal and the rest is nonverbal". We have always understood the limitations of language, most of what is communicated between humans is nonverbal.
rsoto2 · 1h ago
we understood the limitations of language, which is why programming was done via...math and logic! Something LLMs seem to absolutely suck at
mrandish · 2h ago
> "the makers of AI aren’t damned by their failures, they’re damned by their goals."
Good observation.
qustrolabe · 1h ago
As all AI haters do, this one also uses a false interpretation of the Miyazaki quote.
visarga · 1h ago
Our society would break down without language. Without math there would be much fewer people because we would not be able to have large cities and commerce. Without technology we can't manage anymore.
The moral? It's always been an unbalanced society tumbling into the future. Even if AI has both downsides and upsides we will still make it a part of us. Consider the scale - 1B people chatting for the likes of 1T tokens/day. That amount of AI-language has got to influence human language and abilities as well.
Point by point rebuttals:
- environmental harms - so does any use of electricity, fuel or construction
- reinforcement of bias - all ours, reflected back, and it depends on prompting as well
- generation of racist output - depends on who's prompting what
- cognitive harms and AI supported suicides - we are the consequence sink for all things AI, good and bad
- problems with consent and copyright - only if you think abstractions should be owned
- enables fraud and disinformation and harassment and surveillance - all existed before 2020
- exploitation of workers, excuse to fire workers and de-skill work - that is AI being used as excuse, can't be AI's fault
- they don’t actually reason and probability and association are inadequate to the goal of intelligence - apparently you don't need reasoning to win gold at IMO
- people think it makes them faster when it makes them slower - and advanced LLMs are just 2.5 years old, give people time to learn to use it
- it is inherently mediocre - all of us have been at some point
- it is at its core a fascist technology rooted in the ideology of supremacy - LOL, generalizing Grok to all LLMs?
The author mixes hate of AI with hate of people behind AI and hate of how other people excuse their actions blaming AI.
ch4s3 · 55m ago
> - it is at its core a fascist technology rooted in the ideology of supremacy - LOL, generalizing Grok to all LLMs?
Yeah, "statistics is fascism" - Umberto Eco (probably)
egamirorrim · 1h ago
Someone get this man a Claude Max subscription already.
rsoto2 · 1h ago
thanks but no thanks. Would rather not support a company involved in the mass slaughter of children.
the_arun · 1h ago
Prompt: Write an article on how much you hate AI assuming you are an AI hater. Repeat "I Am an AI Hater" a few times through out the article.
narrator · 1h ago
Some recent takes I've heard:
"AI makes me feel stupid" - economically struggling millennial
"This waymo stuff the money goes to big corporations instead of me a hard working American that contributes to the economy" - Uber driver
Meanwhile, all the wealthy business owners are fascinated with it cause they can get things done without having to hire.
Proofread0592 · 1h ago
> Meanwhile, all the wealthy business owners are fascinated with it cause they can get things done without having to hire.
I think you need to add the word potentially in front of "get things done". The venn diagram of what current LLMs can do, and what wealthy business owners think LLMs can do, has the smallest of overlaps.
andix · 2h ago
A lot of valid arguments. But the conclusion (hate) is not constructive. LLMs are here, and they are going to stay. Like cars, the internet, or smartphones.
monkaiju · 2h ago
That's nonsense and just feeds the inevitability narrative.
MisterTea · 2h ago
I have so far used AI a total of 4 or 5 times to ask it programming questions that I really didn't need answers to, only curious.
I can see it being useful as a teaching aid, but using it to write my emails, letters, or whatever is something I would never consider, as it removes the human element, which I enjoy. Sure, writing sometimes sucks, but it's supposed to: work is hard and finishing work is rewarding.
Very soon we will see blog posts about AI burnout, where mindless copy-pasting of output and boring prompt fiddling sucks so much joy out of life that people will begin to lose their sanity.
If I want "AI" I want a model I have full control over, run locally, to e.g. query my picture collection for "all pictures of grey cats in a window" or whatever. Or point a webcam out of my window and have it tell me when the squirrels are fucking with my bird feeder and maybe squirt water at them but leave the birds alone. That would be cool. But turning programmers into copy-pasters, emails into soulless monologues, media with minimal/no human input, and so on is something that can die in a fire. It's all low effort, which I have no respect for.
hlieberman · 2h ago
Preach, my brother, preach!
holbrad · 2h ago
>Critics have already written thoroughly about the environmental harms, the reinforcement of bias and generation of racist output, the cognitive harms and AI supported suicides, the problems with consent and copyright,
I just can't take anything the author has to say seriously after the intro.
miltonlost · 2h ago
After the intro and all the links to the statements he's making? Which of those aren't actually true?
tensor · 2h ago
Very few of them, if any, are true.
Firstly, the author doesn't even define the term AI. Do they just mean generative AI (likely), or all machine learning? Secondly, you can pick any of those and they would only be true of particular implementations of generative AI or machine learning; none of it is true of the technology as a whole.
For instance, small edge models don't use a lot of energy. Models that are not trained on racist material won't be racist. Models not trained to give advice on suicide, or trained NOT to do such things, won't do it.
Do I even need to address the claim that it's at its core rooted in "fascist" ideology? So all the people creating AI to help cure diseases, enable assistive technologies for people with impairments, and do other positive work, all their desires are fascist? It's ridiculous.
AI is a technology that can be used positively or negatively. To be sure, many of the generative AI systems today do have issues associated with them, but the author's position of extending these issues to the entirety of AI and AI practitioners is immoral and shitty.
I also don't care what the author has to say after the intro.
traes · 1h ago
Come on now. You know he's not talking about small machine learning models or protein folding programs. When people talk about AI in this day and age they are talking about generative AI. All of the articles he links when bringing up common criticisms are about generative AI.
I too can hypothetically conceive of generative AI that isn't harmful and wasteful and dangerous, but that's not what we have. It's disingenuous to dismiss his opinion because the technology that you imagine is so wonderful.
tensor · 1h ago
Deep image models are used in medical applications. LLMs have huge potential in literature searches and reference tracing.
Small models are still generative AI. Neither the author nor you can even define what you are talking about. So yes, I can dismiss it.
01HNNWZ0MV43FF · 2h ago
Because they didn't explain it themselves, or because you disagree with the assessment?
hofrogs · 2h ago
All of those are links in the original text, do you think that these points aren't true? What makes it unserious?
lostmsu · 1h ago
It would take too much time to tear the entirety of this slop apart, but if you understood the mechanics of AI, you'd know the environmental impact is negligible vs the value.
The links are laughable. For environment we get one lady whose underground water well got dirtier (according to her) because Meta built a data center nearby. Which, even if true (which is doubtful), has negligible impact on the environment, though maybe a huge annoyance for her personally.
And link 2 gives bad estimates, such as GPT-4 generating ~100 tokens for an email (say 1000 tok/s from 8xH100, so 0.1 s, so ~0.16 Wh) using as much energy as 14 LEDs for an hour (say 3 W each, so 42 Wh): almost 3 orders of magnitude off, about 8 if you, like me, count in binary.
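The email-vs-LED comparison can be redone explicitly. Throughput and wattage below are the same rough assumptions as in the estimate above, not measurements:

```python
# Reworking the email-vs-LED energy comparison. All inputs are assumptions.
TOKENS = 100          # tokens in a short email
TOK_PER_S = 1_000     # assumed throughput of an 8xH100 node
NODE_WATTS = 8 * 700  # 8 GPUs at ~700 W each, ignoring host overhead

gen_wh = NODE_WATTS * (TOKENS / TOK_PER_S) / 3600  # ~0.16 Wh per email

LED_WATTS, LED_COUNT, HOURS = 3, 14, 1
led_wh = LED_WATTS * LED_COUNT * HOURS             # 42 Wh

# Ratio of the claimed LED energy to the estimated generation energy:
print(led_wh / gen_wh)  # ~270x, i.e. the claim overshoots by ~2.4 decimal orders
```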
P.S. Voted dems and would never vote Trump, but the gp is IMHO spot on.
diamond559 · 1h ago
What value? It isn't even profitable. I think we spotted the stockholder...
lostmsu · 1h ago
This is the dumbest question ever. I guess you need to ask 1B+ LLM users.
But hey, I already know you'd say you personally would never use it for these purposes.
Moreover, of the two of us you appear to have "shareholder" mentality. How profitable are volunteers serving food to homeless people? I guess they have no value then.
gjsman-1000 · 2h ago
And that's why Trump won the election.
I'm serious. This sentence perfectly captures what the coastal cities sound like to the rest of the US, and why they voted for the crazy uncle over something unintelligible.
simianwords · 1h ago
Coastal city dwellers want the next thing to signal rebellion. It's just that AI serves as a way to do that, plus also show some concern for the working class.
01HNNWZ0MV43FF · 2h ago
When I see how the voters vote and don't vote, I yearn for sortition
runjake · 1h ago
You know an argument is going to be strong when it starts off by citing a Teen Vogue article and uses phrases like:
"[AI] is at its core a fascist technology rooted in the ideology of supremacy"
and
"The people who build it are vapid shit-eating cannibals glorifying ignorance."
tl;dr: This person professes to hate AI. They repeat the same arguments as others who hate AI, ignoring that it is an emerging technology with lots of work to do. Regardless of AI's existence, power infrastructure needs to improve and become more environmentally friendly.
Finally, AI is not going away, and we cannot make it go away. That cat is out of the bag.
rsoto2 · 1h ago
The article is "I'm an AI hater"; it's about hating AI and why. What's wrong with "at its core a fascist technology"? Do you not understand the statement? Companies like Palantir are using shitty targeting AI to literally mass-murder children. Yes, this technology you used was funded and created in partnership with apartheid governments. AWS and Microsucks are happy to lend a hand. The industry has fascist leadership, yes.
frozenseven · 51m ago
If I were to make a list of companies and organizations that are the most "influential" in the space of AI, Palantir would not make the top 100. And I don't subscribe to your views about them either.
whitehexagon · 1h ago
I'm also 'agAInst' this trend. Mainly because it feels like a dangerous step further along the path towards conscious AGI.
But there is too much money and greed involved to stop this now. The only thing I can do is avoid any product or service that mentions AI, chatGPT, .ai domain, smart, agent etc. etc.
It feels like we are on a cliff edge, just before every government builds in a dependency on this nightmare technology. Billions more will be wasted whilst the planet burns.
utyop22 · 1h ago
I feel we will see more Luigis.
phoenixhaber · 1h ago
Many of the comments presuppose that human free will isn't mechanistic or deterministic at some level of introspection. How would we know a sufficiently complicated AI were incapable of love or hate, and how would this differ from ourselves, save that one is made of silicon and one of neurobiology? This isn't an easy question.
diamond559 · 1h ago
Because when a GPU isn't receiving and computing orders from human instructions, it does not draw power and is therefore an inert hunk of various metals and materials.
siliconsorcerer · 2h ago
I really appreciate the tone of this article, and honestly I was an "AI hater" as well. I honestly just don't think it makes sense any more. Almost everything in this article is making a valid point, AI is being pushed by the powers that be that have absolutely no regard for the masses and how this is going to affect society. But I fail to see how that is different than any other time or any other technology in history. People that are declaring that "AI is harmful to society" are ignoring the fundamental brokenness of society that is the underlying reason why AI development is moving forward with reckless abandon. AI is a problem because our society doesn't have a solid moral compass.
utyop22 · 1h ago
I don't think people themselves can be trusted to collectively have a moral compass. You need institutions and other mechanisms to bring and fix this within society.
siliconsorcerer · 59m ago
I agree, and actually I should have clarified that the government and our elected officials are responsible for that and they’re failing.
utyop22 · 17m ago
I'm working on something in the UK (pray for me). I can't say too much, but I'm trying to build a mechanism fundamental to the inner workings of the economy.
efitz · 2h ago
At least he’s honest.
hudon · 2h ago
The environmental problem is enough for us to pump the brakes. By the end of this year, AI systems will be responsible for half of global data center power demand… 23 gigawatts. For what? A more useful search engine, a better autocomplete, and a shit code generator. Is it worth it? Are we even asking that question? When does it become not worth it? Who’s even running the calculus? The free market certainly isn’t.
gloosx · 1h ago
Impressive writing, I enjoyed it, yet I feel it needs to go deeper and acknowledge that AI is nothing more than a product of modern society, which dictates what is to be done: an algorithm which generates infinite profits. This algorithm was just invented; it can respond pseudo-emotionally and mimic the individual, so it has the potential to build dependence and empty said individual's pocket indefinitely. That's peak capitalism.
Scrapemist · 2h ago
Nothing more human than a hater.
stego-tech · 1h ago
Well said. No notes.
Bedlow · 2h ago
This is great. I'm not going to make any argument. Because I am an AI hater. Genius.
nancyminusone · 1h ago
My personal AI dichotomy:
- When I use AI, it is typically useful.
- When other people build and do things with AI, it's slop that I didn't ask for, which is a waste of resources and a threat to humanity.
This entirely sums up my thoughts on the technology. I suppose it's rather like the personal benefits vs greater harm of using coal for electricity.
LeicaLatte · 1h ago
I just hate arrays.
lll-o-lll · 1h ago
I love arrays! How can you array hate? Uiua is my favourite language for fun.
mrandish · 1h ago
Although I agree with most of the article's points, I'm not an AI hater. But that's only because AI doesn't incite enough emotion in me to be "hate". It's more apathy or antipathy at worst. I concede that AI can occasionally be useful and, as a technologist, I admit early on there were some 'gee whiz' moments but the constant hype has grown annoying.
Frankly, it's gotten kind of boring and more recently it's to where I don't even like talking about it anymore. Of course, the non-technical general public is split between those who mistakenly think it's much 'smarter' or more capable than it is and those who dismiss it entirely but often for the wrong reasons. The disappointing part is how deeply polarized many of my more experienced technical friends are between one of those two extremes.
On the positive side there's endless over-the-top raving about how incredible AI is, and on the negative side overwhelming angst over how unspeakably evil and destructive AI is. These are people who've generally been around long enough to see long-term trends evolve, hype cycles fade, bubbles burst and certain world-ending doom eventually arrive as just everyday annoyance. Yet both extremes are so highly energized on the topic they tend to leap to some fairly ungrounded, and occasionally even irrational, conclusions. Engaging with either type for very long gets kind of exhausting. I just don't think AI is quite as unspeakably amazing as the ravers insist OR nearly as apocalyptic as the doomers fear - but both groups are so into their viewpoint it borders on evangelical obsession - which makes it hard for anyone with an informed but dispassionate, measured and nuanced perspective to engage with them.
r2_pilot · 2h ago
I feel like this level of opprobrium is disproportionate. At least Claude has enabled me to live a fuller life, spending extra time with those I love while being able to rubberduck my random thoughts (who are they to judge what fleeting thoughts I allow myself?). I've been burned by AI falsehoods and read the same slop, sure, but I also went through the same with search engines, and even books before that. This tool would have unlocked so much more of my potential had it existed 30 years ago, and I'm excited (maybe with a lot of dread too) to see what the next 30 years will bring.
huqedato · 1h ago
I love you, AI Hater!
iLoveOncall · 2h ago
I don't think the author hates AI, but rather the people developing AI, and in particular the CEOs and others of those companies. This is particularly clear in this paragraph:
> And to what end? In a kind of nihilistic symmetry, their dream of the perfect slave machine drains the life of those who use it as well as those who turn the gears. What is life but what we choose, who we know, what we experience? Incoherent empty men want to sell me the chance to stop reading and writing and thinking, to stop caring for my kids or talking to my parents, to stop choosing what I do or knowing why I do it. Blissful ignorance and total isolation, warm in the womb of the algorithm, nourished by hungry machines.
There are legitimate tasks that AI (or any other technology, to be clear) could relieve everyone of: chores that people HAVE to do but nobody WANTS to do.
If GenAI allows you to build automations for those tasks, by all means it will make your life more meaningful, because you will have more time to spend on meaningful things. Think of opening the tap to get water instead of having to carry a bucket home from the well.
It's fine to hate the people who build AI, it's fine to hate the people who push for AI use, it's fine to hate the people who release garbage built with AI, etc. But hating "AI" is nonsensical. It's akin to hating hammers or shoes, it's just a tool that may or may not fit a job (and personally, like the author, I don't think it fits any job at the moment).
visarga · 1h ago
> their dream of the perfect slave machine
I don't get if AI is supposed to be a slave or a machine. Is it sentient or a toaster?
utyop22 · 1h ago
"There are legitimate uses for which AI (or any other technology to be clear) would relieve everyone. Chores that people HAVE to do but nobody WANTS to do."
Ok but what are these? People keep saying that right now they are trying to figure out where LLMs fit. Someone, somewhere would've figured it out by now - the world is more interconnected than ever before.
I think the approach with all that is going on is all entirely wrong - you cannot start with the technology and figure out where to put it. You have got to start with the experience - Steve Jobs famously quipped this and his track record speaks for itself. All I'm seeing is experimentation with the first approach which is costly in explicit and implicit form. Nobody from what I see seems to have a visionary approach.
iLoveOncall · 1h ago
> Ok but what are these?
Taking out the trash?
I agree with all the rest of your comment. I'm not saying that AI is the solution to any problem, just that the article is not about hating AI, it's about hating the fact that people want you to use AI for specific stuff that you don't want to use it on.
utyop22 · 16m ago
Fair enough. My problem with most people is the hand-waving going on and pretending all will be figured out.
It's incredibly disrespectful to those innovators who came before, who busted their guts privately instead of hyping stuff up and misleading investors and the public.
ratelimitsteve · 2h ago
watching the tide turn (or, more accurately, the undercurrent bubble up) on AI has been interesting
marcosdumay · 2h ago
It's interesting that the social reaction started to surface as soon as the companies failed to get more investment and decided to increase prices.
I know it was there the entire time, so what exactly was suppressing the attention towards it? Was it satisfied customers or the companies paying to deplatform the message?
prisenco · 2h ago
It may have started earlier. This study came out a year ago, showing consumers overwhelmingly were turned off by companies slapping "AI" on products.
| Adverse impacts of revealing the presence of “Artificial Intelligence (AI)” technology in product and service descriptions on purchase intentions: the mediating role of emotional trust and the moderating role of perceived risk
Witness how quickly we went from being awed by Dall-E and Midjourney to saying "looks like AI" as an insult.
mingus88 · 2h ago
I can only speak to my social circle but initially LLMs were a lot of fun. Like my kids playing with Photo Booth filters on a new device.
I don’t think the social reaction was there the whole time. It feels more like we have been playing around with them for two years and are finally realizing they won’t change our lives as positively as we thought
And seeing what the CEO class is doing with them makes it even worse
taormina · 2h ago
Are they running out of funds to drown out the protesters with their own marketing?
deadbabe · 2h ago
It has nothing to do with that.
In a hype cycle, at the beginning, it is easy to harvest attention just by talking about the hype. But as more people do this, eventually the influence market is saturated.
After this point, you then will get a better ROI on attention by taking the opposite position and discussing the anti-hype. This is where we currently are with AI, the contrarians are now in style.
sindriava · 2h ago
> I became a hater by doing precisely those things AI cannot do: reading and understanding human language; thinking and reasoning about ideas; considering the meaning of my words and their context;
With this the article lost all seriousness for me. I may be on board with a lot of what you are saying, but pretending you know the answer to these questions just makes you look as idiotic as anyone who says the opposite.
deadbabe · 2h ago
Consider that unlike a human, an AI cannot “pretend” to know anything, it is incapable of knowing how to pretend because it doesn’t actually “know” anything.
raggi · 2h ago
Extremism and division aren’t a path to wisdom or to healthy cultures, but you do you.
jplusequalt · 2h ago
>or to healthy cultures
Are the companies funding this push for LLMs contributing to healthy cultures? The same companies who ruined societal discourse with social media? The same people who designed their algorithms to be as addictive as possible to drive engagement?
satisfice · 1h ago
A refreshing post to place next to the endless nihilistic gaspings of the AI fanboys.
monkaiju · 2h ago
I love the high citation per sentence ratio and couldn't agree more with the sentiment. It seems that, finally, people are starting to verbalize sufficiently direct responses to the AI slop being flung at us from all angles.
danielbln · 2h ago
Outright rejection won't help with making AI go away. We can only change it, but it is here to stay.
cmiles74 · 2h ago
I suspect the cost will rise and we'll start seeing much less of it. As the free tier gets smaller and less useful, I think we'll see the pool of people who use AI casually start to shrink.
Seeing which use-cases make it through will certainly be interesting.
mingus88 · 1h ago
I’ll bet call center workloads stick.
That whole industry is literally just a sweatshop for English language speakers who just follow scripts (prompts) and try to keep customers happy.
Seeing as how so many people volunteer to make meaningful relationships with LLMs as it is, it has to be more effective than talking to a “Bill” or “Cheryl” with a heavy South Asian accent.
Not trying to "make it go away", though that'd be great - just making it another irrelevant technology that I don't interact with.
danielbln · 2h ago
I think it will be pervasive, like the Internet is pervasive, and will be unavoidable unless you drop off the grid. For better or for worse.
fzeroracer · 1h ago
No, we can absolutely reject it and destroy it at a fundamental level. LLMs are deeply unprofitable and only exist because of insane amounts of money being set on fire by the richest assholes you know to support stochastic parrots. Otherwise the sheer resource cost would've devoured the companies multiple times over.
The goal by all of these companies is to force you to pay for and eat the slop. That's why they keep inserting it into every subscription, every single app and program you use, and directly on the OS itself. It's like the Sacklers pushing opioids but directly in the open, with similar effects on vulnerable people.
aaroninsf · 1h ago
Genuine response: it is hard for me to read this sort of screed, and not wonder,
are the authors genuinely or merely performatively ignorant?
Ignorant, to be precise, of the often comical extent to which they very obviously construct—to their own specification and for their purposes—the object of their hostility...?
While dismissing—in a fashion that renders their reasoning vacuous—the wearying complexity of the actually-observable complex reality they think they are attacking?
One of the most obvious "tells" in this sort of thing is the breezy ease with which abstract _theys_ are compounded and then attacked.
I'm sorry, Anthony; there is no they. There is a bewildering and yes, I get it, frightening and all but inconceivable number of actors, each pursuing their own aims, sometimes in explicit or implicit collusion, sometimes competitively or adversarially...
...and that is but the most banal of the dimensions within which one might attempt to reason about "AI."
Frustration is warranted; hostility towards the engines of surveillance capital and its pleasure with advancing fascism is more than warranted; applications of AI within this domain and services rendered by its corporate builders—all ripe and just targets.
But slipping from specifics to generalities and scarecrows is a mistake that renders the critique and position dismissable.
yahoozoo · 1h ago
> how it is at its core a fascist technology rooted in the ideology of supremacy
These people are insufferable.
frozenseven · 1h ago
>The people who build it are vapid shit-eating cannibals glorifying ignorance
>at its core a fascist technology rooted in the ideology of supremacy
>inherently mediocre and fundamentally conservative
>The machine is disgusting and we should break it
Jesus. Unclear why anyone would endorse this blogpost, much less post it on a website focused on computer science and entrepreneurship.
nkohari · 2h ago
I hate social media and what it's done to the internet, but I accept that it is now a part of the fabric of society. You can't unring the bell. (In fact, here I am, saying I hate social media on a social media site.)
In the end, it doesn't matter what you or I think. You can hate AI, but it's not going away. The industry needs more skeptical, level-headed people to help figure out how best to leverage the technology in a responsible way.
yawnxyz · 1h ago
I 100% agree; this entire post seems like it was a product of social media and social signaling, and it feels weirdly lacking in nuance because it's supposed to rile a certain group of people up and ally itself with another — so in a way that's deeply hypocritical to me
utyop22 · 1h ago
Ah you have no bias do you? Afterall, you are the founder of an AI startup.
_Algernon_ · 2h ago
So fucking based
nahuel0x · 1h ago
This was a merchant who sold pills that had been invented to quench thirst. You need only swallow one pill a week, and you would feel no need for anything to drink.
"Why are you selling those?" asked the little prince.
"Because they save a tremendous amount of time," said the merchant. "Computations have been made by experts. With these pills, you save fifty-three minutes in every week."
"And what do I do with those fifty-three minutes?"
"Anything you like..."
"As for me," said the little prince to himself, "if I had fifty-three minutes to spend as I liked, I should walk at my leisure toward a spring of fresh water.”
― Antoine de Saint-Exupéry, The Little Prince
jay_kyburz · 1h ago
I think the little prince is being a contrary little shit. I'm sure sometimes he would prefer to play in the park, jump-rope with friends, or draw a picture than just walk an hour whether he wanted to or not.
TeMPOraL · 48m ago
I do agree with your view of the Little Prince. Still, the irony is, the validity of the merchant's argument is irrelevant. 53 person-minutes per week is a tiny benefit compared to eliminating logistics around manufacturing and shipping beverages.
For better or worse, in real world, conditions like these end up with the market forcing adoption of the solution, whether the people on the receiving end like it or not.
simianwords · 2h ago
Cynicism is the new virtue to signal for the tech elite class. New technology is the ideal way for those people to signal their cynicism.
> Critics have already written thoroughly about the environmental harms, the reinforcement of bias and generation of racist output, the cognitive harms and AI supported suicides, the problems with consent and copyright, the way AI tech companies further the patterns of empire, how it’s a con that enables fraud and disinformation and harassment and surveillance, the exploitation of workers, as an excuse to fire workers and de-skill work, how they don’t actually reason and probability and association are inadequate to the goal of intelligence, how people think it makes them faster when it makes them slower, how it is inherently mediocre and fundamentally conservative, how it is at its core a fascist technology rooted in the ideology of supremacy, defined not by its technical features but by its political ones.
This word salad proves the author is out to stack leftist jabs. I want to be respectful, but this paragraph shows the author does not think for themselves and is just using this as an opportunity to signal that they belong to the "in group" amongst the tech-cynics.
Post is probably going to get flagged, for what it's worth
yawnxyz · 1h ago
I think this is similar to how some people hated electricity when it first came out, and today, it's very hard to find someone who absolutely hates and avoids electricity.
Many of the same concerns and objections people raised about electricity can be applied to AI (everything under the sun back in the day became "electrified," just as everything becomes "AI" today; most of those use cases were ridiculous and deserved to be made fun of).
But more concerningly, people like this don't sound like "real" haters - they're positioning themselves in some kind of social-signaling way.
I was (and still am) a social media hater, and this person is clearly a child of the social justice / social signaling days of social media. Their entire personality seems to have been shaped by that era, and that's something I'm happy to blame on the tech industry.
modeless · 1h ago
Real crab bucket mentality here. We live in an age of wonders, literally the best time in human history to be alive, sci-fi turning to reality as we are on the brink of huge advances in space exploration, computation, robotics, biology, you name it.
No matter how good things get there will always be people filled with this sort of rage, but what bothers me is how badly this site wants to upvote this stuff.
HN is supposed to gratify intellectual curiosity. HN is explicitly not for political or ideological battle. Fulmination is explicitly discouraged in the guidelines. This article is about as far as I can imagine from appropriate content for HN. I strongly wish that everyone who wants this on the front page would find another site to be miserable on together, and stop ruining this one.
AlexeyBrin · 1h ago
Have you considered that some of the people who are against AI are worried about a loss in the human experience - AI replacing writers, painters and thinkers, or at least diluting the human output with a sea of mediocre AI output?
I'm not saying that they are right or wrong, but you should at least respect their right to have their own opinions and fears instead of appealing to an illusory standard of appropriate content for HN.
modeless · 58m ago
I didn't make up that stuff about what HN is for, it's straight from the official HN guidelines. They're linked at the bottom of every page. There is nothing illusory about the lack of appropriateness of this content.
An interesting discussion about issues like that could be had. This ain't it.
magicalist · 57m ago
Eh, it's not a great article, but I prefer the balance of it being present. FWIW I feel seemingly the opposite, that there are way too many posts here filled with comments by AI startup CEOs or comments copied from r/singularity circa February. It seems equally boring and nontechnical as what you describe.
modeless · 52m ago
I don't want a "balance" where we have half r/singularity slop and half luddite rage. Those aren't the only choices. I want interesting apolitical technical content, as encouraged by the guidelines, written and commented on by people who know what they're talking about.
One of my friends sent me a delightful bastardization of the famous IBM quote:
A COMPUTER CAN NEVER FEEL SPITEFUL OR [PASSIONATE†]. THEREFORE A COMPUTER MUST NEVER CREATE ART.
Hate is an emotional word, and I suspect many people (myself included) may leap to take logical issue with an emotional position. But emotions are real, and human, and people absolutely have them about AI, and I think that's important to talk about and respect that fact.
† replaced with a slightly less salacious word than the original in consideration for politeness.
Tools do not dictate what art is and isn't, it is about the intent of the human using those tools. Image generators are not autonomously generating images, it is the human who is asking them for specific concepts and ideas. This is no different than performance art like a banana taped to a wall which requires no tools at all.
A human using their creativity to create a painting showcasing a statement about war.
A human asking AI to create a painting showcasing a statement about war.
I do not wish to use strawmen tactics. So I'll ask if you think the above is equal and true.
One person spent years painting landscapes and flowers.
The other spent years programming servers.
Is one person's statement less important than the other's? Less profound or less valid?
The "statement" is the important part, the message to be communicated, not the tools used to express that idea.
1: https://news.artnet.com/art-world/italian-artist-auctioned-o...
It was considered "anti-art" at the time, but basically took over the elite art world itself and the overall movement had huge impact on what is considered art today, on performance art, sculptures, architecture that looks intentionally upsetting etc.
It's not useful to try to think of the sides as "expansive definitionists" who consider pretty much anything art just because, and "restrictive definitionists" who only consider classic masterpieces art. The divide is much more specific and has intellectual foundation and history to it.
The same motivations that led to the expansive definition in the personally transgressive, radical and subversive sense today logically and coherently oppose the pictures and texts generated by huge centralized profit-oriented companies via mechanization. Presumably, if AI were more of a distributed, hacker-ethos-driven thing that shows the middle finger to Disney copyrightism, they might be pro-AI.
It was the intellectual statement conveyed through that medium that made him famous.
As an aside:
is an immensely political view (and one I happen to agree with). It's not a view shared by all artists, or their art. Ancient art in particular often assumes that the highest forms of art require divine inspiration that isn't accessible to everyone. It's common for epic poetry to invoke muses as a callback to this assumption, nominally to show the author's humility. John Milton's Paradise Lost does this (and reframes the muse within a Christian hierarchy at the same time), although it doesn't come off as remotely humble.
If generating the piece costs half a rain forest or requires tons of soul-crushing, badly paid work by others, it might be well worth considering what general framework the artist operates in.
Using more resources to achieve subpar outcomes is not generally something considered artful. Doing a lot with little is.
I looked up Picasso's Guernica now out of curiosity. I don't understand what's so great about this artwork. Or why it would represent any of the things you mention. It just looks like deranged pencilwork. It also comes across as aggressively pretentious.
What makes that any better than some highly derivative AI generated rubbish I connect to about the same amount?
When you use AI, you might now prompt "in the style of Picasso".
You can't argue about taste.
You may like the music of Zombie by The Cranberries, but I'd say full appreciation of it includes knowing that it's about the Irish Troubles, and for that you need some background knowledge.
You may like to smoke weed to Bob Marley songs, but without knowing something about the African slave trade, you won't get the significance of tracks like 400 years.
For Guernica you also have to understand Picasso's fascination with primitive art, prehistoric cave art, children's drawings and abstraction, the historic moment when photography took over the role of realistic depiction, freeing painters to express themselves more in terms of emotional impressions and abstractions.
Take U2's October as a nice example. (You mentioned Zombie - incidentally one of my favorites; the anger and frustration in there never fail to hit me, and I can't listen to it too often for that reason.) Superficially it is a very simple set of lyrics (8 lines, I think) and an even simpler set of chords. And yet: it moves me. And I doubt any AI would have come up with it, or even a close approximation, if it wasn't part of the input. That's why I refuse to call AI-generated stuff art. It's content, not art.
Anyway, this gets hairy quickly, that's why I chose to illustrate with a crappy recording of a magnificent piece that still captures that feeling - for me - whereas many others would likely disagree. Art is made by its creator because they want to and because they can, not because they are regurgitating output based on a multitude of inputs and a prompt.
"Paint me a Sistine Chapel" is going to yield different results no matter how many times you give that same prompt to Michelangelo, depending on his mood, what happened recently, what he ate, his health, and the season. An AI will produce the same result over and over again from the same prompt. It is a mechanistic transformation, not an original work; it reduces the input, it does not expand on it, it does not add its own feelings to it.
It's a bit like when people describe how models don't have a will or the likes. Of course they don't, "they" are basically frozen in time. Training is way slower than inference, and even inference is often slower than "realtime". It just doesn't work that way from the get-go. They're also simply not very good - hence why they're being fed curated data.
In that sense, and considering history, I can definitely see why it would (and should?) be considered differently. Not sure this is what you meant, but this is an interesting lens, so thanks for this.
It's tiresome to read the same thing over and over again and at this point I don't think A's arguments will convince B and vice versa because both come from different initial input conditions in their thought processes. It's like trying to dig two parallel tunnels through a mountain from different heights and thinking they'll converge.
Normally it's just like you say: I don't find the remixing argument persuasive, because I consider it to be a point of commonality. This time however, my focus shifted a bit. I considered the difference in "source set".
To be more specific, it kind of dawned on me how peculiar it is to engage in creating art as a human, given what a human life looks like. How different the "setup" is between a baby just kind of existing and taking in everything, which for the most part means supremely mundane, not at all artful or aesthetic experiences, and an AI model being trained on things people uploaded. It will also have a lot of dull, irrelevant stuff, but not nearly in the same way or in the same amount, hitting at the same registers.
I still think it's a bit of a bird vs plane comparison, but then that is also what they are saying in a way. That it is a bird and a plane, not a bird and a bird. I do still take issue with refusing to call the result flight though, I think.
Art never was about productivity, even though there have been some incredibly productive artists.
Some of the artists that I've known were capable of capturing the essence of the subject they were drawing or painting in a few very crude lines and I highly doubt that an AI given a view would be able to do that in a way that it resonated. And that resonance is what it is all about for me, the fact that briefly there is an emotional channel between the artist and you, the receiver. With AI generated content there is no emotion on the sending side, so how could you experience that feeling in a genuine way?
To me AI art is distortion of art, not new art. It's like listening to multiple pieces of music at the same time, each with a different level of presence, out of tune and without any overarching message. It can even look skilled (skill is easy to imitate, emotion is not).
If after 33 comments in this thread and countless people trying to explain a part of it you still don't get it, that may be because you either don't want to get it or are unable to get it. Restating it one more time is not going to make a difference, and I'm perfectly ok with you not 'getting it', so don't worry about it.
AI without real art as input is noise. It doesn't get any more concrete than that. Humans without any education at all and just mud and sticks for tools will spontaneously create art.
A: but AI only interpolates between training points, it can't extrapolate to anything new.
B: sure it can, d'uh.
There is a good part of the series Remembrance of Earth's Past (of which The Three Body Problem is the first book) where the aliens are creating art and it shocks people to learn that the art they're so moved by was actually created by non-humans. This is exactly what this situation with AI feels like, and not even to the same extent because again AI is not autonomously making images, it's still a human at the end of the day picking what to prompt.
[0] https://en.wikipedia.org/wiki/The_Death_of_the_Author
I think that 'dutch people skating on a lake' or 'girl with a pearl earring' or 'dutch religious couple in front of their barn' without having an AI trained on various works will produce just noise. And if those particular works (you know the ones, right?) were not part of the input then the AI would never produce anything looking like the original, no matter how specific you made the prompt. It takes human input to animate it, and even then what it produces to me does not look original whereas any five year old is able to produce entirely original works of art, none of which can be reduced to a prompt.
Prompts are instructions, they are settings on a mixer, they are not the music produced by the artists at the microphones.
Why would you ask this? It sounds like a lead-up to some kind of put down.
> It can produce things it's never seen if only you describe the constituent pieces.
It can produce things it's never seen based on lots of things that it has seen.
> Prompts are a compressed version of the image one wants to create
They emphatically are not. They are instructions to a tool on what relative importance to assign to all of the templates that it was trained on. But it doesn't understand the output image any more than it understood any of the input images. There is no context available to it in the purest sense of the word. It has no emotion to express because it doesn't have emotions in the first place.
> and these days you don't even need "prompts" per se, you can say, make a woman looking towards the viewer, now add a pearl earing, now adjust this and that etc.
That's just a different path to building up the same prompt. It doesn't suddenly cause the AI to use red for a dress because it thinks it is a nice counterpoint to a flower in a different part of the image because it does not think at all.
Along with being against any form of animal cruelty.
They were also pretty obsessed with spiritualistic quackery.
Are we giving each other fun facts or what? Surely one does not need to go all the way to the Nazis to find a Picasso hater? Or are you just following in the footsteps of the blogpost author too?
I'm being slightly flippant but I do think this is a motte and bailey argument.
Not even painting is a Guernica nor does it need to be.
And not every aesthetically pleasing object is art. (And finally - art doesn't even have to be aesthetically pleasing. And actually finally "art" has a multitude of contradictory meanings)
Neither will a paintbrush.
The tool does need to, though.
One technical definition of empathy is understanding what someone else is feeling. In war you must empathize with your enemy in order to understand their perspective and predict what they will do next. This cognitive empathy is basically theory of mind, which has been demonstrated in GPT4.
https://www.nature.com/articles/s41562-024-01882-z
If we do not assume biological substrate is special, then it's possible that AIs will one day have qualia and be able to fully empathize and experience the feelings of another.
It could be possible that new AI architectures with continuously updating weights, memory modules, evolving value functions, and self-reflection, could one day produce truly original perspectives. It's still unknown if they will truly feel anything, but it's also technically unknowable if anyone else really experiences qualia, as described in the thought experiment of p-zombies.
As the article says, then we can discuss about it that day. "One day AI will have qualia" is no argument in discussing about AI nowadays.
Now, just like you can with Studio Ghibli art, you can generate new images in the style of Guernica.
My computer does. What evidence would change your mind?
Please don't. That offends me much more than a very mild word ever could.
For example, someone can feel like they already have to compete with people, and that's nature, but now they have to compete with machines too, and that's a societal choice.
I am interested in the intelligible content of the thing.
Also, AI does not reason. Human beings do.
As a software developer, I dread AI's capabilities to greatly accelerate the accumulation of technical debt in a codebase when used by somebody who lacks the experience to temper its outputs. I also dread AI's capabilities, at least in the short term, to separate me and others from economic opportunities.
most artists I know are against AI because they feel it is anti-human, devaluing and alienating both the viewer and the creator
some can tolerate it as a tool, and some (as is long art tradition) will use it to offend or be contrarian, but these are not the common position
if I were a spherical cow in a vacuum with infinite time, and nobody around me had economic incentives to make things with it, I could, maybe, in the spirit of openness, tolerate knowing some people somewhere want to use it... but I still wouldn't want to see its output
but again, that's not what I see in the people around me
How could a practical LLM enthusiast make a non-economic argument in favor of their use? They're opaque, usually secretive jumbles of linear algebra; how could you make a reasonable non-economic argument about something you don't, and perhaps can't, reason about?
My point is why are your economic motivations valid while his aren’t?
You hear what you want to hear. You think fine artists - and really, how many working fine artists do you really know? - don't have sincere, visceral feelings about stuff, that have nothing to do with money?
AI is not intelligent or emotional. It's not a "strongly held belief" it simply hasn't been proven.
> AI is not intelligent or emotional.
Yes, I agree, my point is that people use arguments against these types of issues instead of stating plainly that their livelihood will be threatened. Just say it'll take your job and that's why you're mad, I don't understand why so many people try to dance around this issue and make it seem like it's some disagreement about the technology rather than economics.
On the other hand, if I saw a product labelled "No AI bullshit" then I'd immediately be more interested.
But that's just me, the AI buzz among non-techies is enormous and net-positive.
Almost like its all emotional-level gimmicks anyways.
If I see "No AI bullshit" I'd be as skeptical if it said "AI Inside". Corpos tryina squeeze a buck will resort to any and all manipulative tactics.
Me, I hate the externalities, but I love the thing. I want to use my own AI, hyper optimized and efficient and private. It would mitigate a lot. Maybe some day.
It's weird how AI-lovers are always trying to shoehorn an unsupported "it does useful things" into some kind of criticism sandwich where only the solvable problems can be acknowledged as problems.
Just because some technologies have both upsides and downsides doesn't mean that every technology automatically has upsides. GenAI is good at generating these kinds of hollow statements that mimic the form of substantial arguments, but anyone who actually reads it can see how hollow it is.
If you want to argue that it does useful things, you have to explain at least one of those things.
It's bad at:

- Actually knowing things / being correct
- Creating anything original

It's good at:

- Producing convincing output fast and cheap
There are lots of applications where correctness and originality matter less than "can I get convincing output fast and cheap". Other commenters have mentioned being able to vibe-code up a simple app, for example. I know an older man who is not great at writing in English (but otherwise very intelligent) who uses it for correspondence.
Who said "every technology?" We're talking about a specific one here with specific up and downsides delineated.
Source for this claim? Are you still using Groupon?
Your argument could just as easily be applied to social networks ("are you still using Friendster?") or e-commerce ("are you still using pets.com?"). GPT-3 or Kimi K2 or Mistral is going to become obsolete at some point, but that's because the succeeding models are going to be fundamentally better. That doesn't mean that they weren't themselves fit for a certain task.
Do you still use the internet?
Wait, are you sure?
Depends on the nature of the bubble, doesn't it?
The next time you get a CT for example, it might be an AI system that finds a lung nodule and saves your life.
Or for a negative possibility, consider how deepfakes could seriously degrade politics and the media landscape.
There are massive potential upsides and downsides to AI that will almost certainly impact you more than a coupon company.
Just like crypto.
Just look at the bitcoin hashrate; it’s a steep curve.
"Shannon warned in 1956 that information theory “has perhaps been ballooned to an importance beyond its actual accomplishments” and that information theory is “not necessarily relevant to such fields as psychology, economics, and other social sciences.” Shannon concluded: “The subject of information theory has certainly been sold, if not oversold.” [Claude E. Shannon, “The Bandwagon,” IRE Transactions on Information Theory, Vol. 2, No. 1 (March 1956), p. 3.]"
You are the "bad actors", pumpkin. Worse than the other ones.
Garbage in, garbage out. Which will always be the case when your AI is scraping stuff off of random pages and commentary on the internet.
pointing index finger at imaginary balloon: pfffffffffft
Plague of our ages I guess. Ironically AI might even make it worse.
And then we'll wait till the next bubble.
Gains seem to have leveled off tremendously. As far as I can tell folk were saying "Wow, look at this, I can get it to generate code... it does really well at tests, and small well defined tasks"
And a year or a year and a half later we're at like... that + "it's slightly better than it was before!" lol.
So, yea, I dunno, I suspect we'll see a fair amount fall away and some useful things to continue to be used.
Beneficiaries are the ones who care about the actual tech and what it can do for them. Investors are the ones who care about making money off the tech. For the Beneficiaries, AI hype is about right where it should be, given the demonstrable power of the tech itself. For Investors, it may be a dangerous bubble - but then I myself am a Beneficiary, not an Investor, so I don't care.
I don't care which companies get burned on this, which investors will lose everything - businesses come and go, but foundational inventions remain. The bubble will burst, and then the second wave of companies will recycle what the first wave left; the tech will continue to be developed and become even more useful.
Or put another way: I don't care which of the contestants wins a tunnel-digging race. I only care about the tunnels being dug.
See e.g. history of rail lines, and arguably many more big infrastructure projects: people who fronted the initial capital did not see much of a return, but the actual infrastructure they left behind as they folded was taken over and built upon by subsequent waves of companies.
Also, you seem to forget the real question: irrespective of future cash profits, will this investment generate excess returns? Nope. That's what investors care about; it's not even profit, actually.
Of course, the form of AI has changed over the years, but the claim that this quote could be tied to Miyazaki's general view on having machines create art is not totally baseless.
Look at all the AI-written and AI-illustrated articles being published this year. Look at how smooth the image slop is. Look at how fluent the text slop is. Higher quality slop doesn't change the fact that nobody could be bothered to write the thing, and nobody can be bothered to read it.
As if it's in any way less horrifying having the entire Internet infested with AI slop.
Wish some of the AI detectors realized when they're doing a worse job reasoning than the LLMs they criticize.
The quote was taken a little bit out of context.
He's right that to someone whose art is about capturing the world through a child's eyes, the dreamlike consonance of everyday life with simple fantasy, this is abominable.
So that's definitely a misquote, though I wouldn't be surprised if Miyazaki dislikes AI.
Regardless of how you feel about AI, the specific instance Miyazaki was reacting to was, indeed, an insult to life itself!
The author is also changing the subject of the quote.
He said it reminded him of a disabled friend, and that this technology was an insult to life itself.
For me, I kind of wish this site would go back to the good old days where people just shared their nerdy niche hacker things instead of filling the first page with the same arguments we see on the other parts of the internet over and over again. ; ) But granted, I was attracted by the clickbait title too, so I can't blame others.
Just the other day someone posted the ImageNet 2012 thread (https://news.ycombinator.com/item?id=4611830), which was basically the threshold moment that kickstarted deep learning for computer vision. Commenters claimed it doesn't prove anything, it's sensational, it's just one challenge with a few teams, etc. Then there is the famous comment when Dropbox was created that it could be replaced by a few shell scripts and an ftp server.
This paragraph really pisses me off and I'm not sure why.
> Critics have already written thoroughly about the environmental harms
Didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?
> the reinforcement of bias and generation of racist output
I'm uneducated here, honestly. I don't ask a lot of race-based questions to my LLMs, I guess.
>the cognitive harms and AI supported suicides
There is constant active rhetoric around the sycophancy, and ways to reduce it, right? OpenAI just made a new benchmark specifically for this. I won't deny it's an issue, but to act like it's being ignored by the industry misses completely.
>the problems with consent and copyright
This is the best argument on the page imo, and even that is highly debated. I agree with "AI is performing copyright infringement" and see constant "AI ignores my robots.txt". I also grew up being told that ANYTHING on the internet was for the public, and copyright never stopped *me* from saving images or pirating movies.
Then the rest touches on ways people will feel about or use AI, which is obviously just as much conjecture as anything else on the topic. I can't speak for everyone else, and neither can anyone else.
A "small" 7 rack, SOTA CPU cluster uses ~700KW of energy for computing, plus there's the energy requirements of cooling. GPUs use much more in the same rack space.
In DLC settings you supply ~20 °C water from the primary circuit to the heat exchanger, get it back at ~40 °C, and then pump this heat out to the environment, plus the thermodynamic losses.
This is a "micro" system when compared to big boys.
How can there be no environmental harm when you need to run a power plant on premises and pump that much heat into the environment, at a much bigger scale, 24/7?
Who are we kidding here?
When this is done for science and intermittently, both the grid and the environment can tolerate this. When you run "normal" compute systems (e.g. serving GMail or standard cloud loads), both the grid and environment can tolerate this.
But running at full power and pumping this much energy in and heat out to train AI and run inference is a completely different load profile, and it is not harmless.
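For scale, the heat side of this can be sketched with first-year thermodynamics. Here is a rough estimate of the coolant flow such a 700 kW cluster needs, assuming the ~20 °C supply/return delta mentioned above and (as a simplification) that essentially all compute power ends up in the water loop:

```python
# Back-of-envelope coolant flow for direct liquid cooling (DLC).
# Assumptions: the full 700 kW heat load goes into the water loop,
# water enters at ~20 C and returns at ~40 C (delta T = 20 K),
# specific heat of water c_p ~= 4186 J/(kg*K).
P_watts = 700_000     # cluster heat load in W
c_p = 4186            # J/(kg*K), water
delta_t = 20          # K

flow_kg_per_s = P_watts / (c_p * delta_t)     # required mass flow
flow_m3_per_h = flow_kg_per_s * 3600 / 1000   # ~1 L per kg of water

print(f"~{flow_kg_per_s:.1f} kg/s, i.e. ~{flow_m3_per_h:.0f} m^3 of water per hour")
```

That's roughly 8 litres of water per second, continuously, for a "micro" system - and all of that heat still has to be rejected to the environment on the other side of the exchanger.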
> the cognitive harms and AI supported suicides
Extensive use of AI has been shown to change the brain's neural connections and make some areas of the brain lazy. There are a couple of papers on this.
There was a 16 year old boy's ChatGPT fueled death on the front page today, BTW.
> This is the best argument on the page imo, and even that is highly debated.
My blog is strictly licensed under a non-commercial, no-derivatives license. AI companies take my text, derive from it, and sell it. No consent, no questions asked.
The same models consume GPL and source-available code alike and offer their derivations to anyone who pays, infringing both licenses in the process.
Consent and copyright are a big problem in AI, however much the companies want us to believe otherwise.
> There is constant active rhetoric around the sycophancy, and ways to reduce this, right? OpenAI just made a new benchmark specifically for this.
We have investigated ourselves and found no wrongdoing
> I'm uneducated here, honestly. I don't ask a lot of race-based questions to my LLMs, I guess
Do you have to ask a race-based question to an LLM for it to give you biased or racist output?
I don't think they have, no. Perhaps I'm overlooking something, but their most recent technical paper [0], published less than a week ago, states, "This study specifically considers the inference and serving energy consumption of an AI prompt. We leave the measurement of AI model training to future work."
[0]: https://arxiv.org/html/2508.15734v1
You can't even ask it anything out of genuine curiosity without it scolding you and assuming you're trying to be racist. The conclusions I'm hearing are weird. It reminds me of that one Google engineer who quit or got fired after saying AI is racist or whatever back in like 2018 (edit: 2020).
All these points are just trying to forcefully legitimise his hatred.
Also, I think their lean towards a political viewpoint is worth some attention. The point is a bit lost in the emotional ranting, which is a shame.
(To be fair, I liked the ranting. I appreciated their enjoyment of the position they have reached. I use LLMs but I worry about the energy usage and I’m still not convinced by the productivity argument. Their writing echoed my anxiety and then ran with it into glee, which I found endearing.)
I'd be interested to see that report as I'm not able to find it by Googling, ironically. Even so, this goes against pretty much all the rest of the reporting on the subject, AND Google has financial incentive to push AI, so skepticism is warranted.
> I don't ask a lot of race-based questions to my LLMs, I guess
The reality is that more and more decision making is getting turned over to AIs. Racism doesn't have to just be n-words and maga hats. For example, this article talks about how overpoliced neighborhoods trigger positive feedback loops in predictive AIs https://www.ohchr.org/en/stories/2024/07/racism-and-ai-bias-...
> Copyright never stopped me from saving images or pirating movies.
I think we could all agree that right-clicking a copyrighted image and saving it is pretty harmless. Less harmless is trying to pass that image off as something you created and profiting from it. If I use AI to write a blog post, and that post contains plagiarism, and I profit off that plagiarism, it's not harmless at all.
> I also grew up being told that ANYTHING on the internet was for the public
Who told you that? How sure are you they are right?
Copilot has been shown to include private repos in its training data. ChatGPT will happily provide you with information that came from textbooks. I personally had SunoAI spit out a song whose lyrics were just Livin' On A Prayer with a couple of words changed.
We can talk about the ethical implications of the existence of copyright and whether or not it _should_ exist, but the fact is that it does exist. Taking someone else's work and passing it off as your own without giving credit or permission is not permitted.
I think the main problem for me is that these companies benefit from copyright - by beating anyone they can reach with the DMCA stick - and are now also showing they don't actually care about it at all and when they do it, it's ok.
Go ahead, AI companies. End copyright law. Do it. Start lobbying now.
(They won't, they'll just continue to eat their cake and have it too).
So far, case law is shaping up towards "nope, AI training is fair use". As it well should.
It's pretty clear there are impacts, AI needs energy, consumes material, creates trash.
You probably just don't mind it. The facts are the same; the conclusions differ: you judge the impact not to be a big concern in the grand scheme of things, and worth it for the pros. The author doesn't care much for the pros, so any environmental impact is a net loss for them.
I feel both takes are rational.
You can:
1. Dismiss it by believing the projections are very wrong and much too high
2. Think 20% of all energy consumed isn't that bad.
3. Find it concerning environmentally
All takes have some weight behind them, in my opinion. I don't think this is a case of "arsenic-free cauliflower", except maybe if you claim #1, but even that claim can't really invalidate the others on their rationale: they make an assumption on the available data and reason from it, and the data doesn't show ridiculously small numbers like it does in the cauliflower case.
> data centers account for 1% to 2% of overall global energy demand
So does the mining industry. Part of that data center consumption is the discussion we are having right now.
I find that in general energy doesn't tend to get spent unless there's something to be gained from it. Note that providing something that uses energy but doesn't provide value isn't a counterexample for this, since the greater goal of civilization seems to be discovering valuable parts of the state space, which necessitates visiting suboptimal states absent a clairvoyant heuristic.
I reject the statement that energy use is bad in principle and, pending a more detailed ROI analysis of this, I think this branch of the topic has run its course, at least for me :)
Ok, but that's the figure that would be alarming: AI is projected to consume 20% of the global energy production by 2030... That's not like the mining industry...
> I find that in general energy doesn't tend to get spent unless there's something to be gained from it
Yes, you'd fall in the #2 conclusion bucket. This is a value judgement, not a factual or logical contradiction. You accept the trade off and find it worth it. That's totally fair, but in no way does it remove or mitigate the environmental impact argument, it just judges it an acceptable cost.
But as it stands the author indirectly loves Netflix.
You don't see the difference, or are you willfully ignorant?
Yes, it means that "suddenly" we need to do more of everything than we did for entirety of human history until ~few years ago. Same was true ~few years ago. And ~few years before that. And so on.
That's what exponential growth means. Correct for that, and suddenly we're not really doing things that much faster "because AI" than we'd be doing them otherwise.
No hate, but consider — when I feel that way, it’s often because one of my ideas or preconceptions has been put into question. I feel like it’s possible that I might be wrong, and I fucking hate that. But if I can get over hating it and figuring out why, I may learn something.
Here’s an example:
> Didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?
Consider that Google is one of the creators of the supposed harm, and thus trusting them may not be a good idea. Tobacco companies still say smoking ain’t that bad
The harm argument is simple — AI data centers use energy, and nearly all forms of energy generation have negative side effects. Period. Any hand waving about where the energy comes from or how the harms are mitigated is, again, bullshit — energy can come from anywhere, people can mitigate harms however they like, and none of this requires LLM data centers.
Presented like this, the argument is complete bullshit. Anything we do consumes energy, therefore requires energy to be supplied, production of which has negative side effects, period.
Let's just call it a day on civilization and all (starve to death so that the few survivors can) go back to living in caves or up the trees.
The real questions are, a) how much more energy use are LLMs causing, and b) what value this provides. Just taking this directly, without going into the weeds of meta-level topics like the benefits of investment in compute and energy infrastructure, and how this is critical to solving climate problems - just taking this directly, already this becomes a nothing-burger, because LLMs are by far some of the least questionable ways to use energy humanity has.
Of course, they hide the truth in plain sight: inference is a drop in the ocean compared to training.
You're not uneducated, but this is a common and fundamental misunderstanding of how racial inequity can afflict computational systems, and the source of the problem is not (usually) something as explicit as "the creators are Nazis".
For example, early face-detection/recognition cameras and software in Western countries often had a hard time detecting the eyes on East Asian faces [0], denying East Asians and other people with "non-normal" eyes streamlined experiences for whatever automated approval system they were beholden to. It's self-evident that accurately detecting a higher variety of eye shapes would require more training complexity and cost. If you were a Western operator, would it be racist for you to accept the tradeoff for cheaper face detection capability if it meant inconveniencing a minority of your overall userbase?
Well, thanks to global market realities, we didn't have to debate that for very long, as any hardware/software maker putting out products inherently hostile to 25% of the world's population (who make up the racial majority in the fastest growing economies) weren't going to last long in the 21st century. But you can easily imagine an alternate timeline in which Western media isn't dominant, and China & Japan dominate the face-detection camera/tech industry. Would it be racist if their products had high rates of false negatives for anyone who had too fair of skin or hair color? Of course it would be.
Being auto-rejected as "not normal" isn't as "racist" as being lynched, obviously. But as such AI-powered systems and algorithms gain increasing control over the bureaucracies and workflows of our day-to-day lives, I don't think you can say that "racist output", in the form of certain races enjoying superior treatment over others, is a trivial concern.
[0] https://www.cnn.com/2016/12/07/asia/new-zealand-passport-rob...
That's a crazy argument to accept from one of the lead producers of the technology. It's up there with arguing that ExxonMobil just proved oil drilling has no impact on global warming. I'm sure they're making the argument, but they would be doing that wouldn't they?
I don't hate AI. I hate the people who're in love with it. The culture of people who build and worship this technology is toxic.
From the point of view of a typical, not very curious kid or teen AI seems like a godsend. Now you don't have to put much effort in a lot of things you don't want to do to begin with.
Honestly, the first paragraph is packed full of good talking points; there's definitely a lot of ignoring of the cons of AI happening. I try to remember how I felt when social media first appeared, and I recall loving it: being part of all the hype, finding it amazing, using it all the time...
That seems like a succinct way to describe the goal to create conscious AGI.
AI industry doesn't push for "consciousness" in any way. What AI industry is trying to build is more capable systems. They're succeeding.
You can't measure "consciousness", but you sure can measure performance. And the performance of frontier AI systems keeps improving.
We don't know if AGI without consciousness is possible. Some people think that it's not. Many people certainly think that consciousness might be an emergent property that comes along with AGI.
>AI industry doesn't push for "consciousness" in any way. What AI industry is trying to build is more capable systems.
If you're being completely literal, no one wants slaves. They want what the slaves give them. Cheap labor, wealth, power etc...
We don't even know for certain if all humans are conscious either. It could be another one of those things that we once thought everyone has, but then it turned out that 10% of people somehow make do without.
With how piss poor our ability to detect consciousness is? If you decide to give a fuck, then best you can do for now is acknowledge that modern AIs might have consciousness in some meaningful way (or might be worth assigning moral weight to for other reasons), which is what Anthropic is rolling with. That's why they do those "harm reduction" things - like letting an AI end a conversation on its end, or probing some of the workloads for whether an AI is "distressed" by performing them, or honoring agreements and commitments they made to AI systems, despite those AIs being completely unable to hold them accountable for it.
Of course, not giving a fuck about any of that "consciousness" stuff is a popular option too.
(Mild spoiler): It has a basic plot point about uploaded humans being used to tackle problems as unknowing slaves and resetting their memories to get them to endlessly repeat tasks.
Words are the most indirect form of perception imaginable. Both Aristotle and Cassirer knew this, AI demos this. The writer doesn't grasp how bad we have it either way
"I became a hater by doing precisely those things AI cannot do: reading and understanding human language; thinking and reasoning about ideas; considering the meaning of my words and their context"
What?
Cassirer: “Only when we put away words will [we] be able to reach the initial conditions, only then will we have direct perception. All linguistic denotation is essentially ambiguous–and in this ambiguity, this “paronymia” of words is the source of all myths…this self-deception is rooted in language, which is forever making a game of the human mind, ever ensnaring it in that iridescent play of meanings…even theoretical knowledge becomes phantasmagoria; for even knowledge can never reproduce the true nature of things as they are but must frame their essence in “concepts.” Consequently all schemata which science evolves in order to classify, organize and summarize the phenomena of the real, turn out to be nothing but arbitrary schemes. So knowledge, as well as myth, language, and art, has been reduced to a kind of fiction–a fiction that recommends its usefulness, but must not be measured by any strict standard of truth, if it is not to melt away into nothingness.” Cassirer, Language and Myth
I also had a similar epiphany 3 days ago - once it hits you and you understand it, you can see clearly why LLMs are destined to crash and burn in their present form (good luck to those who will have to answer the questions regarding the money dumped into it).
What will come out of the investment will not justify what has been invested (for anyone who thinks otherwise, PLEASE GO AHEAD AND DO A DCF VALUATION!) and it will have a depressing effect on future AI investment.
Good observation.
The moral? It's always been an unbalanced society tumbling into the future. Even if AI has both downsides and upsides we will still make it a part of us. Consider the scale - 1B people chatting for the likes of 1T tokens/day. That amount of AI-language has got to influence human language and abilities as well.
Point by point rebuttals:
- environmental harms - so does any use of electricity, fuel or construction
- reinforcement of bias - all ours, reflected back, and it depends on prompting as well
- generation of racist output - depends on who's prompting what
- cognitive harms and AI supported suicides - we are the consequence sink for all things AI, good and bad
- problems with consent and copyright - only if you think abstractions should be owned
- enables fraud and disinformation and harassment and surveillance - all existed before 2020
- exploitation of workers, excuse to fire workers and de-skill work - that is AI being used as excuse, can't be AI's fault
- they don’t actually reason and probability and association are inadequate to the goal of intelligence - apparently you don't need reasoning to win gold at IMO
- people think it makes them faster when it makes them slower - and advanced LLMs are just 2.5 years old, give people time to learn to use it
- it is inherently mediocre - all of us have been at some point
- it is at its core a fascist technology rooted in the ideology of supremacy - LOL, generalizing Grok to all LLMs?
The author mixes hate of AI with hate of people behind AI and hate of how other people excuse their actions blaming AI.
Yeah, "statistics is fascism" - Umberto Eco (probably)
"AI makes me feel stupid" - economically struggling millennial
"This waymo stuff the money goes to big corporations instead of me a hard working American that contributes to the economy" - Uber driver
Meanwhile, all the wealthy business owners are fascinated with it cause they can get things done without having to hire.
I think you need to add the word "potentially" in front of "get things done". The Venn diagram of what current LLMs can do and what wealthy business owners think LLMs can do has the smallest of overlaps.
I can see it being useful as a teaching aide, but to use it to write my emails, letters or whatever is something I would never consider, as it removes the human element which I enjoy. Sure, writing sometimes sucks, but it's supposed to: work is hard, and finishing work is rewarding.
Very soon we will see blog posts about AI burnout, where mindless copy-pasting of output and boring prompt fiddling sucks so much joy out of life that people begin to lose their sanity.
If I want "AI" I want a model I have full control over, ran locally, to e.g. query my picture collection for "all pictures of grey cats in a window" or whatever. Or point a webcam out of my window and have it tell me when the squirrels are fucking with my bird feeder and maybe squirt water at them but leave the birds alone. That would be cool. But turning programmers into copy pasters, emails into soulless monologues, media with minimal/no human input and so on is something that can die in a fire. It's all low effort which I have no respect for.
I just can't take anything the author has to say seriously after the intro.
Firstly, the author doesn't even define the term AI. Do they just mean generative AI (likely), or all machine learning? Secondly, you can pick any of those and they would only be true of particular implementations of generative AI, or machine learning, it's not true of technology as a whole.
For instance, small edge models don't use a lot of energy. Models that are not trained on racist material won't be racist. Models not trained to give advice on suicide, or trained NOT to do such things, won't do it.
Do I even need to address the claim that it's at its core rooted in "fascist" ideology? So all the people creating AI to help cure diseases, enable assistive technologies for people with impairments, and other positive tasks, all these desires are fascist? It's ridiculous.
AI is a technology that can be used positively or negatively. To be sure, many of the generative AI systems today do have issues associated with them, but the author's position of extending these issues to the entirety of AI and AI practitioners is immoral and shitty.
I also don't care what the author has to say after the intro.
I too can hypothetically conceive of generative AI that isn't harmful and wasteful and dangerous, but that's not what we have. It's disingenuous to dismiss his opinion because the technology that you imagine is so wonderful.
Small models are still generative AI. Neither the author nor you can even define what you are talking about. So yes, I can dismiss it.
The links are laughable. For environment we get one lady whose underground water well got dirtier (according to her) because Meta built a data center nearby. Which, even if true (which is doubtful), has negligible impact on environment, and maybe a huge annoyance for her personally.
And [2] gives bad estimates, such as ChatGPT-4's generation of ~100 tokens for an email (say 1000 tok/s from 8x H100, so 0.1 s, so ~0.1 Wh) using as much energy as 14 LEDs for an hour (say 3 W each, so 42 Wh): almost 3 orders of magnitude off, or 9 if, like me, you count in binary.
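For what it's worth, the comparison can be sanity-checked in a few lines. The per-GPU power draw (~700 W per H100) is my own assumption; the token-rate, email-length, and LED figures are the ones from the comment:

```python
# Sanity check of the email-vs-LEDs energy comparison.
# Assumptions: 8x H100 at ~700 W each (hypothetical TDP), 1000 tok/s,
# ~100 tokens per email, 14 LEDs at 3 W each running for one hour.
node_power_w = 8 * 700               # ~5.6 kW for the inference node
gen_seconds = 100 / 1000             # 100 tokens at 1000 tok/s
email_wh = node_power_w * gen_seconds / 3600   # Wh per generated email

led_wh = 14 * 3 * 1.0                # 14 LEDs * 3 W * 1 h = 42 Wh

print(f"email ~{email_wh:.2f} Wh, LEDs {led_wh:.0f} Wh, "
      f"ratio ~{led_wh / email_wh:.0f}x")
```

Under these assumptions the email costs ~0.16 Wh against 42 Wh for the LEDs: a factor of a few hundred, i.e. between two and three decimal orders of magnitude.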
P.S. Voted dems and would never vote Trump, but the gp is IMHO spot on.
But hey, I already know you'd say you personally would never use it for these purposes.
Moreover, of the two of us you appear to have "shareholder" mentality. How profitable are volunteers serving food to homeless people? I guess they have no value then.
I'm serious. This sentence perfectly captures what the coastal cities sound like to the rest of the US, and why they voted for the crazy uncle over something unintelligible.
"[AI] is at its core a fascist technology rooted in the ideology of supremacy"
and
"The people who build it are vapid shit-eating cannibals glorifying ignorance."
tl;dr: This person professes to hate AI. They repeat the same arguments as others who hate AI, ignoring that it is an emerging technology with lots of work to do. Regardless of AI's existence, power infrastructure needs to improve and become more environmentally friendly.
Finally, AI is not going away, and we cannot make it go away. That cat is out of the bag.
But there is too much money and greed involved to stop this now. The only thing I can do is avoid any product or service that mentions AI, chatGPT, .ai domain, smart, agent etc. etc.
It feels like we are on a cliff edge, just before every government builds in a dependency on this nightmare technology. Billions more will be wasted whilst the planet burns.
- When I use AI, it is typically useful.
- When other people build and do things with AI, it's slop that I didn't ask for, a waste of resources, and a threat to humanity.
This entirely sums up my thoughts on the technology. I suppose it's rather like the personal benefits vs greater harm of using coal for electricity.
Frankly, it's gotten kind of boring, and more recently it's gotten to the point where I don't even like talking about it anymore. Of course, the non-technical general public is split between those who mistakenly think it's much 'smarter' or more capable than it is and those who dismiss it entirely, but often for the wrong reasons. The disappointing part is how deeply polarized many of my more experienced technical friends are between those same two extremes.
On the positive side there's endless over-the-top raving about how incredible AI is, and on the negative side overwhelming angst over how unspeakably evil and destructive AI is. These are people who've generally been around long enough to see long-term trends evolve, hype cycles fade, bubbles burst, and certain world-ending doom eventually arrive as just everyday annoyance. Yet both extremes are so highly energized on the topic that they tend to leap to some fairly ungrounded, occasionally even irrational, conclusions. Engaging with either type for very long gets kind of exhausting. I just don't think AI is quite as unspeakably amazing as the ravers insist OR nearly as apocalyptic as the doomers fear, but both groups are so into their viewpoint that it borders on evangelical obsession, which makes it hard for anyone with an informed but dispassionate, measured, and nuanced perspective to engage with them.
> And to what end? In a kind of nihilistic symmetry, their dream of the perfect slave machine drains the life of those who use it as well as those who turn the gears. What is life but what we choose, who we know, what we experience? Incoherent empty men want to sell me the chance to stop reading and writing and thinking, to stop caring for my kids or talking to my parents, to stop choosing what I do or knowing why I do it. Blissful ignorance and total isolation, warm in the womb of the algorithm, nourished by hungry machines.
There are legitimate uses for which AI (or any other technology, to be clear) would relieve everyone: chores that people HAVE to do but nobody WANTS to do.
If GenAI allows you to build automations for those tasks, by all means it will make your life more meaningful, because you will have more time to spend on meaningful things. Think of opening the tap to get water instead of having to carry a bucket home from the well.
It's fine to hate the people who build AI, it's fine to hate the people who push for AI use, it's fine to hate the people who release garbage built with AI, etc. But hating "AI" is nonsensical. It's akin to hating hammers or shoes, it's just a tool that may or may not fit a job (and personally, like the author, I don't think it fits any job at the moment).
I don't get if AI is supposed to be a slave or a machine. Is it sentient or a toaster?
Ok, but what are these? People keep saying that right now they are trying to figure out where LLMs fit. Someone, somewhere would've figured it out by now - the world is more interconnected than ever before.
I think the approach with all that is going on is entirely wrong - you cannot start with the technology and figure out where to put it. You have to start with the experience - Steve Jobs famously made this point, and his track record speaks for itself. All I'm seeing is experimentation with the first approach, which is costly in both explicit and implicit forms. Nobody, from what I see, seems to have a visionary approach.
Taking out the trash?
I agree with all the rest of your comment. I'm not saying that AI is the solution to any problem, just that the article is not about hating AI, it's about hating the fact that people want you to use AI for specific stuff that you don't want to use it on.
It's incredibly disrespectful to those innovators who came before, who busted their guts privately, not hyping stuff up and misleading investors and the public.
I know it was there the entire time, so what exactly was suppressing the attention towards it? Was it satisfied customers or the companies paying to deplatform the message?
https://www.tandfonline.com/doi/full/10.1080/19368623.2024.2...
| Adverse impacts of revealing the presence of “Artificial Intelligence (AI)” technology in product and service descriptions on purchase intentions: the mediating role of emotional trust and the moderating role of perceived risk
Witness how quickly we went from being awed by Dall-E and Midjourney to saying "looks like AI" as an insult.
I don't think the social reaction was there the whole time. It feels more like we have been playing around with these tools for two years and are finally realizing they won't change our lives as positively as we thought.
And seeing what the CEO class is doing with them makes it even worse
In a hype cycle, at the beginning, it is easy to harvest attention just by talking about the hype. But as more people do this, eventually the influence market is saturated.
After this point, you then will get a better ROI on attention by taking the opposite position and discussing the anti-hype. This is where we currently are with AI, the contrarians are now in style.
With this the article lost all seriousness for me. I may be on board with a lot of what you are saying, but pretending you know the answer to these questions just makes you look as idiotic as anyone who says the opposite.
Are the companies funding this push for LLMs contributing to healthy cultures? The same companies who ruined societal discourse with social media? The same people who designed their algorithms to be as addictive as possible to drive engagement?
Seeing which use-cases make it through will certainly be interesting.
That whole industry is literally just a sweatshop for English language speakers who just follow scripts (prompts) and try to keep customers happy.
Seeing as how so many people volunteer to make meaningful relationships with LLMs as it is, it has to be more effective than talking to a “Bill” or “Cheryl” with a heavy South Asian accent.
The goal by all of these companies is to force you to pay for and eat the slop. That's why they keep inserting it into every subscription, every single app and program you use, and directly on the OS itself. It's like the Sacklers pushing opioids but directly in the open, with similar effects on vulnerable people.
are the authors genuinely or merely performatively ignorant?
Ignorant, to be precise, of the often comical extent to which they very obviously construct—to their own specification and for their purposes—the object of their hostility...?
While dismissing—in a fashion that renders their reasoning vacuous—the wearying complexity of the actually-observable complex reality they think they are attacking?
One of the most obvious "tells" in this sort of thing is the breezy ease with which abstract _theys_ are compounded and then attacked.
I'm sorry, Anthony; there is no they. There is a bewildering and yes, I get it, frightening and all but inconceivable number of actors, each pursuing their own aims, sometimes in explicit or implicit collusion, sometimes competitively or adversarially...
...and that is but the most banal of the dimensions within which one might attempt to reason about "AI."
Frustration is warranted; hostility towards the engines of surveillance capital and its pleasure with advancing fascism is more than warranted; applications of AI within this domain and services rendered by its corporate builders—all ripe and just targets.
But it is a mistake that renders the critique and position dismissible to slip from specifics to generalities and scarecrows.
These people are insufferable.
>at its core a fascist technology rooted in the ideology of supremacy
>inherently mediocre and fundamentally conservative
>The machine is disgusting and we should break it
Jesus. Unclear why anyone would endorse this blogpost, much less post it on a website focused on computer science and entrepreneurship.
In the end, it doesn't matter what you or I think. You can hate AI, but it's not going away. The industry needs more skeptical, level-headed people to help figure out how best to leverage the technology in a responsible way.
"Why are you selling those?" asked the little prince.
"Because they save a tremendous amount of time," said the merchant. "Computations have been made by experts. With these pills, you save fifty-three minutes in every week."
"And what do I do with those fifty-three minutes?"
"Anything you like..."
"As for me," said the little prince to himself, "if I had fifty-three minutes to spend as I liked, I should walk at my leisure toward a spring of fresh water."
― Antoine de Saint-Exupéry, The Little Prince
For better or worse, in the real world, conditions like these end up with the market forcing adoption of the solution, whether the people on the receiving end like it or not.
> Critics have already written thoroughly about the environmental harms, the reinforcement of bias and generation of racist output, the cognitive harms and AI supported suicides, the problems with consent and copyright, the way AI tech companies further the patterns of empire, how it’s a con that enables fraud and disinformation and harassment and surveillance, the exploitation of workers, as an excuse to fire workers and de-skill work, how they don’t actually reason and probability and association are inadequate to the goal of intelligence, how people think it makes them faster when it makes them slower, how it is inherently mediocre and fundamentally conservative, how it is at its core a fascist technology rooted in the ideology of supremacy, defined not by its technical features but by its political ones.
This word salad shows that the author is out to stack leftist jabs. I want to be respectful, but this paragraph suggests the author does not think for themselves and is just using this as an opportunity to signal that they are in the "in group" among the tech-cynics.
Post is probably going to get flagged, for what it's worth.
Many of the same concerns and objections people raised about electricity can be applied to AI (everything under the sun back in the day became "electrified", just like AI today; most of those use cases were ridiculous and deserved to be made fun of).
But more concerningly, people like this don't sound like "real" haters - they're positioning themselves in some kind of social-signaling way.
I was (and still am) a social media hater, and this person is clearly a child of the social justice / social signaling days of social media. Their entire personality seems to have been shaped by that era, and that's something I'm happy to blame on the tech industry.
No matter how good things get there will always be people filled with this sort of rage, but what bothers me is how badly this site wants to upvote this stuff.
HN is supposed to gratify intellectual curiosity. HN is explicitly not for political or ideological battle. Fulmination is explicitly discouraged in the guidelines. This article is about as far as I can imagine from appropriate content for HN. I strongly wish that everyone who wants this on the front page would find another site to be miserable on together, and stop ruining this one.
I'm not saying that they are right or wrong, but you should at least respect their right to have their own opinions and fears instead of pointing to some illusory standard of appropriate content for HN.
An interesting discussion about issues like that could be had. This ain't it.