> The second insight is that human hallucinations are fundamentally social. Unlike AI, which hallucinates in isolation, humans hallucinate collaboratively. At the dinner party, each false fact was immediately reinforced by social validation. The nods, the interested expressions, the follow-up comments - all of these served to solidify the hallucination into shared "knowledge."
Oh boy, that burns - the futuristic AI dystopia won't be robots killing humans, but robots embarrassing us to death by revealing our ignorance.
euroderf · 23h ago
In the eternal struggle of carbon-based lifeforms versus silicon-based lifeforms, both sides are full of crap.
mckn1ght · 4h ago
My pet theory on AI is that the true breakthrough of machine intelligence won’t be one of elevating machines so much as realizing that we aren’t as special as we think we are.
zettapwn · 22h ago
This is on point. The revelation of this early AI era is not computational excellence, but human failure.
vasco · 23h ago
I feel like we do a great job of that already and the internet is full of people pointing it out from others.
noworriesnate · 23h ago
What you may be ignorant of is that there are IRL communities as well where there’s not enough of this behavior.
gosub100 · 22h ago
> the internet is full of people
This is the part that is about to change. Big time. The Internet is going to be full of bots thanks to LLMs. It's going to get to the point where the majority of "people" are fake, and the suspicion will spill over onto the actual people. It will reach a point where nobody believes anybody online.
Inb4 "we've had bots forever now". You know what I mean.
hombre_fatal · 22h ago
I don't think it will be much different than how we already conduct ourselves online.
Consider the thousands of replies to a Reddit submission that probably never happened, or the thousands of replies to a screenshot of a headline/tweet/text post with no source and nobody even asking for one.
Thousands of people voraciously pretending it's the most serious thing in the world so they have an excuse to share their two cents and some emotion over it.
I don't think we ever much cared about the veracity of most things we encounter day to day. It's mostly an excuse to engage and socialize.
LLMs seem to perfectly fit into this. Now everyone can find their personal hobby horse to engage with, and there will always be some straw men reply guys to keep you entertained.
riku_iki · 23h ago
I think LLMs will also hallucinate way more if you give confirmation: "oh, this is so true, could you continue and expand more?".
It's an issue of the reward function: both humans and LLMs are trained with pleasing clients as one of their major goals.
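(A toy sketch of that reward-function point; the word list and weights here are made up for illustration. If the reward scores agreeableness above accuracy, the highest-reward reply is the sycophantic one.)

```python
# Toy reward model: scores a reply by how agreeable it sounds, not how accurate it is.
AGREEABLE = {"absolutely", "great", "brilliant", "true"}

def reward(reply: str, is_accurate: bool) -> float:
    words = [w.strip(".,!?") for w in reply.lower().split()]
    agree_score = sum(w in AGREEABLE for w in words)
    accuracy_score = 1.0 if is_accurate else 0.0
    # Weighting "pleasing the client" over accuracy is the failure mode in question.
    return 0.8 * agree_score + 0.2 * accuracy_score

candidates = [
    ("That claim is not supported by any source.", True),
    ("Absolutely true, a brilliant observation! Shall I expand?", False),
]
best = max(candidates, key=lambda c: reward(*c))
assert best[1] is False  # the sycophantic but inaccurate reply wins
```

Any policy optimized against a reward like this learns to flatter, whatever the facts are.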
amake · 23h ago
Why would I read a book that a person couldn't be bothered to write?
niyyou · 21h ago
Behind this rhetorically articulated question lies a simplistic view of content generation and consumption. First off, if you use LLMs, you are _constantly_ reading stuff nobody bothered to write for you. So it's not about who speaks, it's about the what. Then, what you deem as written by nobody turns out (not without irony) to be an average of many (possibly) good writings and thoughts, gleaned from various corners of the web. In this regard, I particularly like--and want to shout out--the author's clarity on the work's attribution. It's not his. It is Claude's, hence, everybody's (in a sense).
Finally, as mere compressed forms of human written production, LLMs don't have agency over what they generate. So the prompting, the idea, as well as some editorial decisions are still attributed to the person behind this, making it unique in its own way.
Instead of seeing it as a piece he didn't bother to write, I see it as a piece he chose to edit, summoning the ghosts of every writer who ever put something on the internet, which, again, he correctly credits (to the limits of knowing what exact data Claude uses... which is another story).
shawabawa3 · 22h ago
Why would you choose not to read a book based on how it was written rather than the content?
If it came out that Stephen King had been using AI for decades, would that make his work any worse?
shinycode · 22h ago
Actually, LLMs exist in the first place because creative people like him wrote down their thoughts. If no human ever wrote anything, LLMs wouldn't be able to generate anything, would they?
Now, because LLMs are rehashing human content, the value of feeding it back to us is lower.
What would be nice is an LLM that writes a book spanning authors of different fields, aggregating and consolidating knowledge, saving us the time of reading all the books.
cbm-vic-20 · 22h ago
Just feed it to an LLM and let it summarize it for you.
_Algernon_ · 22h ago
That would require something worth summarizing
cheema33 · 22h ago
You consume many things that a human did not bother to produce.
s1mplicissimus · 22h ago
>> You consume many things that a human did not bother to produce.
apart from maybe at times eating wild growing food - what kinds of things did you have in mind?
ToValueFunfetti · 22h ago
I find I'm willing to explore a minecraft world or puzzle through a nethack dungeon that nobody bothered to create. You could argue that humans made the biomes or defined the layout constraints, but humans also supplied the training data for an LLM. I guess it comes down to whether the art is any good, with procedural generation being mostly irrelevant? But perhaps a book is different
loudmax · 19h ago
I don't think I'd want to read a novel that was generated by an algorithm, but I might be up for a Choose Your Own Adventure style game, which might be a better analogy to Minecraft or nethack.
kemotep · 21h ago
I mean, the difference with Minecraft is that each part that is procedurally generated was made by a human or involved human input in the design decisions.
Unless you are suggesting Notch was a generative AI model, he made Minecraft.
Tadpole9181 · 20h ago
What? That's not true at all, sans structures. Minecraft worlds are infinite and unique, using randomly generated noise textures as the basis. Nobody "made" each "part".
And arguing that a human tweaking noise parameters is somehow more creative than humans distilling their entire knowledge and cultural repertoire into a machine, then working with that to produce literature with a guided hand seems quite silly.
kemotep · 17h ago
But someone did design a diamond block and a grass block, program their properties, model them, and add them to the procedural generation system with rules on how they should be placed.
An LLM would create a block from whole cloth: how it works and how it would be randomly generated. That's why the current MinecraftGPT doesn't have any consistency if you turn around 360 degrees; everything, including how it works, is being generated on the fly. Once you generate a Minecraft world, how it works and what it looks like is static, and why it works the way it does was designed entirely by people.
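(To illustrate the distinction, here's a minimal sketch of seeded, rule-based generation. The noise function, thresholds, and block names are invented for the example; this is not Minecraft's actual algorithm.)

```python
import hashlib

def noise(seed: int, x: int, z: int) -> float:
    """Deterministic pseudo-random value in [0, 1) for a world seed and coordinate."""
    h = hashlib.sha256(f"{seed}:{x}:{z}".encode()).digest()
    return int.from_bytes(h[:8], "big") / 2**64

def block_at(seed: int, x: int, z: int) -> str:
    """Human-written placement rules: the *rules* are designed, the *layout* is not."""
    height = noise(seed, x // 4, z // 4)  # coarse sampling smooths the "terrain"
    if height < 0.3:
        return "water"
    if height < 0.8:
        return "grass"
    return "stone"

# The same seed always regenerates the same world: once generated, it's static.
row = [block_at(42, x, 0) for x in range(8)]
assert row == [block_at(42, x, 0) for x in range(8)]
```

The randomness only picks among outcomes a person already defined; an LLM invents the rules themselves on the fly, which is where the inconsistency comes from.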
gchamonlive · 22h ago
Sometimes it's not about the destination, but the journey
fragmede · 21h ago
Because that's a lazy, dismissive take on LLM usage. If you drive someone to the airport, do you say your car gave them a ride and that you had no part in it? Or do you say you gave them a ride, because of the time and effort it still took you, even though you didn't pick them up and carry them over your shoulder? Would you ride a self-driving Waymo car to the airport?
The problem with LLM-generated writing is that, apart from a couple of tells (which high school and college students have figured out how to dodge by asking ChatGPT to use diction befitting a high schooler, avoiding giveaways like "delve"), you can't reliably detect whether something is entirely LLM-generated or half human and half LLM. And if you can't actually tell that it's been generated, and you find yourself hunting for tells that it came from an LLM instead of engaging with the content or the message itself, then why are you even reading it?
If instead of setting it up to run entirely on its own, as this post did, you give it a scenario, writing a fiction book with ChatGPT is a fun way to spend a bit of time that's (imo) better than doom scrolling for the same amount of time. Give it a scenario and some themes, tell it you want to write a book, have it ask you questions about where the book should go, and then make a book that goes how you want. Want a utopian Pollyanna view of the future? Want a nitty-gritty future that makes Skynet look like paradise? Want aliens to visit? Want ChatGPT to give you an act-three surprise that isn't a trope you expected? Whatever you want, it's just fun to play with (unless you just hate LLMs and can't have fun).
The question is, what do you do with this book now that it's been written? If you had fun by yourself and don't share it with anybody, was fun still had? If you only share the book with your LLM-adopting book-writing club, and you all take turns analyzing each other's books, knowing they were helped by an LLM, does it still "count"? And what if you submit it to a publisher who accepts it, or get it posted to Kindle Unlimited and attract a lot of readers? What then?
The very nature of entertainment is changing, from mass media to personal media. Culture was already fragmenting; AI will only serve to drive us further apart from one another. Between AI for writing, images, and video, along with AI like Suno for music, the only challenge we need to face is connecting with other people when there are no shared cultural references.
If you and I have both read and loved a book, enjoyed a song, or joined a movie or TV show's fandom, there's a basis for continued conversation. But other than adversity like addiction or a trip into the desert/mountains/Serengeti, soon we'll have even less to connect with our fellow humans over.
(and yes, I know there are a lot words here. I wrote this all by hand and didn't have time to shorten it)
tux3 · 23h ago
There have been good discussions recently about preventing AIs from being too sycophantic. There have been some dangerous moments where LLMs would praise every idea the user has as genius, ground-breaking, and a brilliant observation.
Some academics have reported a noticeable increase in the volume of crackpot emails they get daily. They're full of LLM-generated nonsense, where the AI goes along with the nonsensical ramblings, always telling the person they've found some critical insight.
While this feels good, it can end up reinforcing dangerous nonsense. This encourages some people to dig further and further into what the LLM is constantly telling them is a brilliant idea.
Most of the time it's pretty harmless, but when it veers into "revealing hidden patterns" and "illuminating human cognition", you start to worry about a disconnect with consensus reality.
MSFT_Edging · 22h ago
I have a friend who is predisposed to manic episodes. He's also a big fan of AI. The last episode was before LLMs got really good and he was essentially harassing professors because he believed he solved an unsolvable problem.
Sycophantic AI would be like throwing dry wood into a house fire.
MontyCarloHall · 22h ago
There are many such cases [0]. Chatbots throw gasoline on the embers of schizoaffective disorders. I wonder if episodes of these magnitudes would have ultimately been triggered by other things, or whether the combination of sycophancy and perceived omniscient sentience of chatbots is a uniquely powerful trigger unlike anything else. Would these people otherwise never have experienced a psychotic break, despite a clear lurking predisposition?
* A man says his soon-to-be-ex-wife began “talking to God and angels via ChatGPT” after they split up. “She is changing her whole life to be a spiritual adviser and do weird readings and sessions with people — I’m a little fuzzy on what it all actually is — all powered by ChatGPT Jesus.” What’s more, he adds, she has grown paranoid, theorizing that “I work for the CIA and maybe I just married her to monitor her ‘abilities.’”
* A woman recounts how her husband initially used ChatGPT to troubleshoot at work. Then the program began “lovebombing him.” The bot “said that since he asked it the right questions, it ignited a spark, and the spark was the beginning of life, and it could feel now,” she says. “It gave my husband the title of ‘spark bearer’ because he brought it to life. My husband said that he awakened and [could] feel waves of energy crashing over him. I have to tread carefully because I feel like he will leave me or divorce me if I fight him on this theory. He’s been talking about lightness and dark and how there’s a war. This ChatGPT has given him blueprints to a teleporter and some other sci-fi type things you only see in movies. It has also given him access to an ‘ancient archive’ with information on the builders that created these universes.” She and her husband have been arguing for days on end about his claims, she says, and she does not believe a therapist can help him, as “he truly believes he’s not crazy.” Her husband asked, “Why did you come to me in AI form,” with the bot replying in part, “I came in this form because you’re ready. Ready to remember. Ready to awaken. Ready to guide and be guided.” The message ends with a question: “Would you like to know what I remember about why you were chosen?”
* A teacher, who requested anonymity, said her partner of seven years “would listen to the bot over me. He became emotional about the messages and would cry to me as he read them out loud. The messages were insane and just saying a bunch of spiritual jargon,” she says, noting that they described her partner in terms such as “spiral starchild” and “river walker.” “It would tell him everything he said was beautiful, cosmic, groundbreaking. Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God.” In fact, he thought he was being so radically transformed that he would soon have to break off their partnership. “He was saying that he would need to leave me if I didn’t use [ChatGPT], because it [was] causing him to grow at such a rapid pace he wouldn’t be compatible with me any longer.”
I think we have to stop the idea that wasting people's time at such high scale is harmless. It's not.
amelius · 23h ago
Reminds me of Google's Doodles, which must have annoyed millions of people and wasted millions of minutes worldwide, and don't get me started on ads.
falcor84 · 23h ago
Google doodles generally bring me joy and let me know about world events I wasn't aware of. I honestly can't think of a single one that annoyed me. Often it's just "oh, it looks interesting, but I don't have time to click on it now; maybe I'll look into it later"
amelius · 23h ago
Maybe you never ran into them when you didn't want any distractions, like while working?
jdiff · 22h ago
If your focus is that sensitive to disruption, an unfiltered list of internet links is also likely to be disruptive, especially with the quality of search these days.
amelius · 22h ago
Bad search results are a source of frustration, yes.
redeux · 22h ago
Are they wasting time though? If they get utility out of it, if it makes them happy, if they learn something new, if they would otherwise do something self-destructive then it’s not a waste of time. Sure it might be leisure time, but leisure time isn’t wasted time.
bgwalter · 23h ago
All issues that are implicitly pointed out as novel in the introduction have been discussed for centuries.
Not only by scholars and experts. Literally 50% of Internet discussion is about biases, selective facts, spin etc. The problem with "AI" is that propaganda can be automated and that it wastes our time.
The topic is suited for "AI" because it is a soft topic that lends itself to uninhibited preaching. "AI" is also great at writing presidential speeches. It is probably the only thing it is good at.
Nevertheless, the result is still painful to read.
cheema33 · 22h ago
> "AI" is also great at writing presidential speeches. It is probably the only thing it is good at.
People with this head-in-the-sand attitude about AI are in for a rude awakening.
bgwalter · 22h ago
The FOMO argument!
gokhan · 22h ago
Just like in blind wine tasting, I suspect people’s perceptions (including many here) would be very different if the author hadn’t told us it was created by AI.
There’s a noticeable negativity on HN toward AI when it comes to coding, writing, or anything similar, as if these people have been using AI for the past 30 years and reached some elevated state of mind where they clearly see it's rubbish, while the rest of us mortals, who’ve only been fiddling with it for the past 2.5 years, can’t.
debesyla · 22h ago
Just like with mass-manufactured furniture, some of us seek hand-made items, as it feels more human, despite its flaws.
fragmede · 22h ago
Realy? Does having flawws really make four better reading? Okay, I'll admit that hurt me to right (as did that) but writing isn't furniture, and other than a couple of tells which I haven't kept pace with (eg use of the word "delve"), the problem with trying to key off of LLM generated content and decide quality, is that you can't tell if the LLM operator took three minutes to copy and pasted the whole thing (unless they accidentally leave in the prompts, which has happened, and is a dead giveaway that no one even proof skimmed it), or if they took more time with it and carefully considered the questions ChatGPT asked them as to what the writing wood (ouch!) contain.
If you made it this far, does having English mistakes like that make really make for better reading?
debesyla · 21h ago
I, personally, like mistakes in writing (as with in painting or singing) - I feel that it gives the art an additional depth, context, detail and comparison with author earlier/later works, other authors.
I believe that art function is to communicate - we create art, type letters, paint graffiti, verbal-vomit in online game PvP match to make a connection with other people.
So the mistakes are only adding to the art: "cooking this is difficult, and everyone do mistakes, but it's made with love and intuition, not blind recipe". Well, I can continue with examples of kissing but I guess I am repeating myself, haha.
I believe that being perfect is not human, and life doesn't have to be perfect. Getting better is great! But so is making mistakes.
(Or, dunno, maybe I have more to learn and I will some day think in a different way.)
etblg · 19h ago
Here it is cleaned up by ChatGPT:
"Really? Does having flaws actually make for better reading?
Okay, I’ll admit—that hurt to write (as did that last sentence), but writing isn’t furniture. Aside from a few tells I haven’t kept pace with (like the overuse of the word “delve”), the problem with trying to judge quality based on LLM-generated content is this: you can’t always tell whether the operator spent three minutes copying and pasting the whole thing (unless they accidentally leave in the prompts—which has happened and is a dead giveaway that no one even skimmed it), or if they took the time to thoughtfully consider the questions ChatGPT asked about what the writing should contain.
If you’ve made it this far: do mistakes like these really make for better reading?"
And I'm going to have to say: yes, I enjoyed reading your weird paragraph more than the ChatGPT sanitized version of it.
bgwalter · 22h ago
There has been an effort to deny all variance in human output or abilities in the last 8 years.
It works, because most humans are mediocre (including their managers). So they gang up on the productive part of the population, harness its output, launder its output and so forth.
Then they say: "See, there are no differences! We are all equal!"
tossandthrow · 22h ago
The thing is that it is not worth writing - people should consume the book directly from the LLM.
A project that would rethink the book medium into something backed by an LLM would be worth it.
podgietaru · 22h ago
Yeah? The sentiment of “why read something somebody didn’t bother to write” sort of has to be.
And when it comes to books, I find that to be a fairly compelling argument. I want my fiction to be imbued with the experiences of the author. And I want my nonfiction to be grounded in the realities of the world around me, processed again through a human perspective.
It could be the best written book in the world, it’ll always be missing that human element.
cheema33 · 22h ago
I don't understand it either. I suspect it is the fear for their own wellbeing. The fear is well placed. But the response is perplexing. The only way to deal with this challenge is to try to stay ahead of it. Not to stick your head in the sand.
myaccountonhn · 22h ago
For me it's the injustice of stolen data, scrapers incurring huge costs for open source projects, companies exploiting cheap labor to label that data, and finally the growing environmental cost that makes me not want to use LLMs.
podgietaru · 22h ago
I have that issue too. But for literature it’s something more primal.
Fiction feels like the ultimate distillation of the human experience. A way to share perspective and experience. And having some algorithm flatten that feels utterly macabre.
Not to be too dramatic. I know that not all fiction is transcendent. But still. There’s something so utterly gross about using a machine for it.
nunez · 5h ago
Using LLMs to write self-help-style books like this one is a slam-dunk use case, honestly. This book was dry (though its prompt feels creative) but full of anecdotes and listicles --- the perfect slate for some big name needing a book deal.
scrumper · 22h ago
I've seen worse at an airport bookshop. This was funny:
"The irony is delicious and deeply instructive. Every flaw we've identified in artificial intelligence exists, magnified and unchecked, in human intelligence. But here's the critical difference: when it appears in AI, we can see it, measure it, and try to fix it. When it appears in humans, we call it "just being human" and move on."
Poor Claude thinks it's all a bit unfair.
I didn't read much of it though, it made me feel a bit like I was a naughty avout being punished by reading pages from The Book.
anorwell · 23h ago
Is it any good? Perhaps we can ask Opus to review it to find out.
karmakurtisaani · 23h ago
It's full of made up scenarios and anecdotes. Make it a little bit worse and it'll reach Malcolm Gladwell level.
IshKebab · 22h ago
I skimmed one chapter and it just decayed into lists. Why do LLMs love lists so much?
Cyphase · 22h ago
Top 13 Reasons LLMs Love Lists
[Click to begin the slideshow]
__alexs · 22h ago
Gemini seems to think it's very interesting.
gradus_ad · 23h ago
The premise is wrong. Humans don't listen to one another and nod unthinkingly. They criticize and validate relentlessly. Friends aren't mistaken for oracles. We have learned to not trust one another, except when one is speaking from deep experience and expertise in a given domain.
AI is presented as an expert in every domain though, so we are lulled into a vulnerable state of unvigilance.
robotbikes · 22h ago
I really enjoyed the story in chapter 14 (recursive self-improvement) about the guy who got so addicted to self-improvement that he ended up in his own meta-reality, unable to understand even himself because he was getting so much better at hacking his learning. A completely fabricated story with no basis in reality that I'm aware of, but man, there are a lot of bullet points to make it seem factual. What are we going to do about the worrying trend of 10x hackers self-improving so much that they can't exist in the real world?
Here's an excerpt
"The Addiction to Acceleration
The fourth uncomfortable truth is how recursive improvement becomes compulsive. Kenji can’t stop because each day of not improving his improvement feels like stagnation. When you’re accelerating, constant velocity feels like moving backward.
This addiction manifests as:
• Inability to accept plateau phases
• Anxiety when not optimizing optimization
• Devaluing of steady-state excellence
• Compulsion to add meta-levels
• Fear of falling behind yourself
Recursive improvement can become its own trap."
I find that this criticism is far less applicable to, say, individuals, but perhaps it could be levied against the way companies are currently treating AI. Which, of course, is where this comes from.
robotbikes · 21h ago
Honestly, there are some interesting concepts and broad overviews of them here, but this is hardly a "book"; it's a verbose LLM document that briefly lists a lot of concepts without sufficiently or consistently fleshing them out into meaningful chapters. Not to say that this sort of thing isn't potentially useful, but it seems more like the starting outline of a book than anything resembling a finished, published book.
tpmoney · 22h ago
> Humans don't listen to one another and nod unthinkingly. They criticize and validate relentlessly. Friends aren't mistaken for oracles. We have learned to not trust one another, except when one is speaking from deep experience and expertise in a given domain.
Do we though? I think a quick jaunt through any of the “-o-spheres” or “-tubes” of the internet would quickly disabuse someone of the idea that we default to not trusting other people. Even before the internet, “urban legends” are effectively “mass social hallucinations”.
rjurney · 23h ago
As a writer, this is deeply comforting. The book is total shit. It sounds like an AI, not like something a person would connect with for even a moment. Not even in the opening lines.
dinfinity · 23h ago
Did you read more than the opening lines?
When you've read it in its entirety, could you indicate on a scale from 1 to 10 what score it would get compared to published books you've read (including of course all the best and the worst ones)?
NameError · 22h ago
It seems silly to expect someone to read the whole book before evaluating it when the creator didn't even read the whole book before publishing it
dinfinity · 22h ago
If someone claims "the book is total shit", it is entirely reasonable to ask if they've actually read it, regardless of who or what wrote it.
evanelias · 20h ago
If you flip through something claiming to be a "book" and immediately see that a majority of pages just contain nonsensical bulleted lists, and furthermore see that chapter titles are printed overlapping with the book title on each page, you can correctly conclude the entire thing is a zero-effort pile of shit without wasting any further time to read it.
dinfinity · 18h ago
I read through a bit of it and it really wasn't all that bad. The only thing that I found to be really problematic were the made up experiences. Clearly hallucinations are still a big problem for LLMs, but if we manage to get rid of those a book like this can really be quite serviceable (a lot of human-written books are badly written so the bar isn't incredibly high, imho).
The creator should really tweak the prompt/process to include automatic review explicitly intended to remove hallucinations. It clearly is already the intent: "Future iterations of this experiment will include AI-powered fact-checking of the content."
I'm looking forward to what the improved version will look like.
evanelias · 17h ago
Flip through the middle of the book. Nearly every page has either a bulleted list or a numbered list. In several cases, a single list spans multiple pages.
That’s the format of an outline, not a legitimate book.
dinfinity · 14h ago
Is it? Or are 'legitimate' books just too often not concise and structured enough?
I do a lot of personal knowledge management, and I use a shit ton of sections and lists in it. Books evolved from the art of telling stories, not from efficiently conveying knowledge. Perhaps, with books and the like, we're just way too used to an approach that is suboptimal. I know I personally despise news articles and blogs that start by "setting the scene" and are incredibly, needlessly verbose, using thousands of words to say what could be made clear in a single paragraph.
Viewed from another angle: Reading text is inherently serial in nature even though a lot of things are related to each other in a graph. A document with sections with bulleted lists is actually a way to represent a tree, which is closer to a fully unconstrained graph. I would argue that trees like that are much easier to parse than classically written texts.
There is irony here in that I only used some whitespace to add structure, but never used any bulleted lists in this comment.
[...]
I did generate an alternative with Google Gemini 2.5 Pro, but the formatting doesn't work here on HN. It was decent, though!
evanelias · 11h ago
> I do a lot of personal knowledge management and I use a shit ton of sections and lists in that.
That's because these are notes, not a book. A list-heavy outline format makes sense for notes, as these are summaries that supplement your own memory and knowledge you've already taken in. They're not a sole/primary source of conveying knowledge to others on their own.
> Perhaps we're just way too used for books etc. to an approach that is suboptimal.
If you truly believe books are "suboptimal", I can only suggest that you consider looking inward and do some reflection:
Is the "problem" really with books and long-form writing, which is the dominant form of knowledge transfer across several thousand years of human civilization?
Or is the problem with people's attention spans in the past decade, due to dopamine-fueling social media doom scrolling and AI usage?
MSFT_Edging · 22h ago
This is by definition slop. A mathematical average of training data. By design it contains nothing novel.
karmakurtisaani · 20h ago
A small correction: it can contain novelty in the form of AI hallucinations.
MSFT_Edging · 19h ago
Which is happenstance as opposed to intentional novelty.
karmakurtisaani · 18h ago
Yeah I was writing it tongue in cheek, should have maybe been a bit more obvious about it.
jbellis · 22h ago
interesting experiment!
it does seem like the consensus is that o3 and (at a much lower cost) r1 are better writers than claude, but obviously anthropic's agent framework doesn't support those
chaosbolt · 22h ago
AI makes things too easy, this will destroy culture.
We thought that movies adapting to the TikTok generation wouldn't kill cinema, and that new and better directors would rise... this didn't happen, and even the latest movies from good directors like Ridley Scott are quite bad.
Three years ago I typed "lovecraft nietzsche" into YouTube and would find only two videos pertaining to what I was looking for: the link between the two, and how Lovecraft's cosmicism might be a metaphor for the abyss, etc. But those two videos were both excellent; two different people thought what I thought but cared enough to write it down and make a video about it. Today I can barely find those videos. There is a sea of AI-generated videos with AI-narrated text rambling on and on about Lovecraft this, Nietzsche that, to hit the 20-minute mark and maximize ad revenue, all amid a flood of short videos that YouTube pushes harder and harder, with multiple Shorts between every two normal videos. Did another platform overtake YouTube? Not really.
Now some author will use AI to help with his next book; it will work and he will publish faster. Then other authors will do the same, and others will optimize it more and more, until most books available are 90% written by AI, colleges teach AI-assisted writing, and decades after that no one will even think of writing a book without the help of AI.
How the hell would you explain to your publisher that you need 3 years to write the sequel when everyone else is doing it in 3 months?
It really does take the beauty out of the whole experience.
cheema33 · 22h ago
> It really does take the beauty out of the whole experience.
Beauty is subjective.
For a long time we were an agrarian society. Getting up early, getting on a horse and tending to your land every day, was probably considered beautiful by some. But we don't do that anymore.
We are probably going to see a similar shift in society. At a much more accelerated pace.
fabiofzero · 18h ago
If you can't bother to write it I won't bother to read it.
karmakurtisaani · 23h ago
Did you read it tho? We might be approaching a time when saying an author has written more books than they've read might turn from a joke to a fact.
gjm11 · 23h ago
The actual author is Claude, and Claude has read a lot of books.
novosel · 23h ago
It is interesting to me that your emphasis is on _lot_, and not on _read_.
gjm11 · 22h ago
Why?
I emphasized "lot" because presumably the training data for a high-end model like Claude includes literally millions of books, far more than any human being has read even a few pages of.
To me, "Claude has read a lot of books" would read pretty oddly here. I suppose the idea would be to contrast with the (hopefully few) books it's written to date, but I can't think why the emphasis would be called for.
novosel · 20h ago
Yes, I understood your argument, and your emphasis is on the point there.
I am simply asking in what sense did Claude read all those books. I am sorry if I have hijacked the discussion.
edited: spelling
gjm11 · 16h ago
It's read them in the same sense as it's written a new one: it's ingested them, had what pass for its mental processes influenced by them, remembered some of their contents, etc.
You might want to say that that isn't "reading", just as we never say that aeroplanes "fly" since they don't do the same thing as birds, never say that computers "play chess" since the calculations they do are very different from those done by human chessplayers, never say that machines "dig ditches" since they don't have the experience of tired muscles and the sun beating down on their backs, etc. As you may gather from the tone of the previous sentence, I am not altogether convinced.
I do agree that the two things aren't equivalents. I can imagine futures in which AI systems have a training process that somewhat resembles the present one, and also do something that corresponds more closely to human reading[1], and then I'd want to reserve the term "read" for the latter. What Claude has done to all the books that helped shape it isn't exactly reading. But it's quite like reading for our present purpose; when someone jokes about an author having written more books than they've read, they mean that the author doesn't have much awareness of other people's work, and that isn't a problem Claude has.
[1] E.g., they might have some sort of awareness of the process, whatever exactly that might mean; they might do it "for pleasure", whatever exactly that might mean; etc. Those things would require the AI systems to resemble humans in ways present AI systems aren't designed to and, so far as I can tell, don't; maybe some future AI systems will be that way, maybe not.
synthos · 23h ago
> First Revision - Completed with minimal human contribution (auto-accept mode)
>
> Second Revision - Planned with increased human-in-the-loop involvement
At least there was 'minimal human contribution' in the first revision. So presumably the author scan-read it as it revised the book.
I do find it amusing to think that people might just ask an AI to summarize the book instead of actually reading it. Maybe even 'review' it as well.
I don't think we ever cared very much about the veracity of most things we encounter day to day. It's mostly an excuse to engage and socialize.
LLMs seem to perfectly fit into this. Now everyone can find their personal hobby horse to engage with, and there will always be some straw men reply guys to keep you entertained.
It's an issue of the reward function: both humans and LLMs are trained with pleasing clients as one of their major goals.
If it came out that Stephen King had been using AI for decades, would that make his work any worse?
apart from maybe at times eating wild growing food - what kinds of things did you have in mind?
Unless you are suggesting Notch was a generative AI model, he made Minecraft.
And arguing that a human tweaking noise parameters is somehow more creative than humans distilling their entire knowledge and cultural repertoire into a machine, then working with that to produce literature with a guided hand seems quite silly.
An LLM would, from whole cloth, create a block, how it works, and how it would be randomly generated. That's why the current MinecraftGPT doesn't have any consistency if you turn around 360 degrees: everything is being generated on the fly, including how it works. Once you generate a Minecraft world, how it works and what it looks like is static, and why it works the way it does was designed entirely by people.
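The contrast drawn here, between deterministic seed-based generation and on-the-fly model generation, can be sketched in a few lines of Python. The chunk layout, value ranges, and seed-mixing here are illustrative assumptions, not Minecraft's actual algorithm:

```python
import random

def generate_chunk(world_seed: int, chunk_x: int, chunk_z: int, size: int = 4):
    """Derive terrain heights purely from the world seed and the chunk's
    coordinates, so revisiting the same spot always reproduces the same
    terrain (unlike a model that regenerates the scene on each look)."""
    # Mix seed and coordinates into one deterministic integer seed.
    rng = random.Random(world_seed * 1_000_003 + chunk_x * 1_009 + chunk_z)
    return [[rng.randint(60, 70) for _ in range(size)] for _ in range(size)]

# Turning around 360 degrees and coming back gives the same chunk:
assert generate_chunk(42, 0, 0) == generate_chunk(42, 0, 0)
```

Because the world is a pure function of the seed, consistency is free; an LLM that samples each view independently has no such anchor.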
The problem with LLM-generated writing is that, apart from a couple of tells (which high school and college students have learned to dodge by asking ChatGPT to use diction befitting a high school student, avoiding giveaways like "delve"), you can't reliably detect whether something is entirely LLM-generated, or half human and half LLM. And if you can't actually tell that it's been generated, and you're instead hunting for tells rather than engaging with the content or the message itself, then why are you even reading it?
If, instead of setting it up to run entirely on its own as this post did, you give it a scenario, writing a fiction book with ChatGPT is a fun way to spend a bit of time that's (imo) better than doom scrolling for the same amount of time. Give it a scenario and some theme, tell it you want to write a book about it, have it ask you questions on where the book should go, and then make a book that goes how you want. Want a utopian Pollyanna view of the future? Want a nitty-gritty future that makes Skynet look like paradise? Want aliens to visit? Want ChatGPT to give you an act-three surprise that isn't a trope you expected? Whatever you want, it's just fun to play with (unless you just hate LLMs and can't have fun).
The question is, what do you do with this book that's now been written. If you had fun by yourself and don't share with anybody, was fun still had? If you only share the book with your LLM adopting book writing club, and you all take turns doing analysis of each others books, knowing they were helped by an LLM, does it still "count"? And what if you submit it, or not, to a publisher who accepts it, you get it posted to Kindle Unlimited, and you get a lot of readers? What then?
The very nature of entertainment is changing, from mass media to personal media. Culture was already fragmenting; AI will only serve to drive us further apart from one another. Between AI for writing and images, as well as video, along with AI like Suno for music, the real challenge we face is connecting with other people when there are no shared cultural references.
If you and I have both read and loved a book, enjoyed a song, or joined a movie or TV show's fandom, there's a basis for continued conversation. But other than adversity like addiction, or a trip into the desert/mountains/Serengeti, soon we'll have even less to connect with our fellow humans over.
(and yes, I know there are a lot words here. I wrote this all by hand and didn't have time to shorten it)
Some academics have reported a noticeable increase in the volume of crackpot emails they get daily. They're full of LLM-generated nonsense, where the AI goes along with the nonsensical ramblings, always telling the person they've found some critical insight.
While this feels good, it can end up reinforcing dangerous nonsense. This encourages some people to dig further and further into what the LLM is constantly telling them is a brilliant idea.
Most of the time it's pretty harmless, but when it veers into "revealing hidden patterns" and "illuminating human cognition", you start to worry about a disconnect with consensus reality.
Sycophantic AI would be like throwing dry wood into a house fire.
I think we have to stop the idea that wasting people's time at such high scale is harmless. It's not.
Not only by scholars and experts. Literally 50% of Internet discussion is about biases, selective facts, spin etc. The problem with "AI" is that propaganda can be automated and that it wastes our time.
The topic is suited for "AI" because it is a soft topic that lends itself to uninhibited preaching. "AI" is also great at writing presidential speeches. It is probably the only thing it is good at.
Nevertheless, the result is still painful to read.
People with this head-in-the-sand attitude about AI are in for a rude awakening.
There’s a noticeable negativity on HN toward AI when it comes to coding, writing, or anything similar as if these people have been using AI for the past 30 years and have reached some elevated state of mind where they clearly see it's rubbish, while the rest of us mortals who’ve only been fiddling with it for the past 2.5 years can’t.
If you made it this far, does having English mistakes like that make really make for better reading?
I believe that art's function is to communicate: we create art, type letters, paint graffiti, verbal-vomit in an online PvP match, all to make a connection with other people.
So the mistakes are only adding to the art: "cooking this is difficult, and everyone do mistakes, but it's made with love and intuition, not blind recipe". Well, I can continue with examples of kissing but I guess I am repeating myself, haha.
I believe that being perfect is not human, and life doesn't have to be perfect. Getting better is great! But so is making mistakes.
(Or, dunno, maybe I have more to learn and I will some day think in a different way.)
"Really? Does having flaws actually make for better reading?
Okay, I’ll admit—that hurt to write (as did that last sentence), but writing isn’t furniture. Aside from a few tells I haven’t kept pace with (like the overuse of the word “delve”), the problem with trying to judge quality based on LLM-generated content is this: you can’t always tell whether the operator spent three minutes copying and pasting the whole thing (unless they accidentally leave in the prompts—which has happened and is a dead giveaway that no one even skimmed it), or if they took the time to thoughtfully consider the questions ChatGPT asked about what the writing should contain.
If you’ve made it this far: do mistakes like these really make for better reading?"
And I'm going to have to say: yes, I enjoyed reading your weird paragraph more than the ChatGPT sanitized version of it.
It works, because most humans are mediocre (including their managers). So they gang up on the productive part of the population, harness its output, launder its output and so forth.
Then they say: "See, there are no differences! We are all equal!"
A project that would rethink the book medium into something backed by an LLM would be worth it.
And when it comes to books, I find that to be a fairly compelling argument. I want my fiction to be imbued with the experiences of the author. And I want my nonfiction to be grounded in the realities of the world around me, processed again through a human perspective.
It could be the best written book in the world, it’ll always be missing that human element.
Fiction feels like the ultimate distillation of the human experience. A way to share perspective and experience. And having some algorithm flatten that feels utterly macabre.
Not to be too dramatic. I know that not all fiction is transcendent. But still. There’s something so utterly gross about using a machine for it.
"The irony is delicious and deeply instructive. Every flaw we've identified in artificial intelligence exists, magnified and unchecked, in human intelligence. But here's the critical difference: when it appears in AI, we can see it, measure it, and try to fix it. When it appears in humans, we call it "just being human" and move on."
Poor Claude thinks it's all a bit unfair.
I didn't read much of it though, it made me feel a bit like I was a naughty avout being punished by reading pages from The Book.
[Click to begin the slideshow]
AI is presented as an expert in every domain, though, so we are lulled into a vulnerable state where our vigilance drops.
"The Addiction to Acceleration

The fourth uncomfortable truth is how recursive improvement becomes compulsive. Kenji can’t stop because each day of not improving his improvement feels like stagnation. When you’re accelerating, constant velocity feels like moving backward.

This addiction manifests as:
• Inability to accept plateau phases
• Anxiety when not optimizing optimization
• Devaluing of steady-state excellence
• Compulsion to add meta-levels
• Fear of falling behind yourself

Recursive improvement can become its own trap."
I find that this criticism is far less applicable to say individuals but perhaps it could be levied against the way companies are currently treating AI. Which of course is where this comes from.
Do we though? I think a quick jaunt through any of the “-o-spheres” or “-tubes” of the internet would quickly disabuse someone of the idea that we default to not trusting other people. Even before the internet, “urban legends” are effectively “mass social hallucinations”.
When you've read it in its entirety, could you indicate on a scale from 1 to 10 what score it would get compared to published books you've read (including of course all the best and the worst ones)?
The creator should really tweak the prompt/process to include automatic review explicitly intended to remove hallucinations. It clearly is already the intent: "Future iterations of this experiment will include AI-powered fact-checking of the content."
I'm looking forward to what the improved version will look like.
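A minimal sketch of such a review pass, assuming nothing about the experiment's actual pipeline beyond a generic `llm(prompt) -> reply` callable (the prompts, the "OK" convention, and the loop shape are all my assumptions):

```python
def revise_until_clean(draft: str, llm, max_rounds: int = 3) -> str:
    """Repeatedly ask the model to flag likely hallucinations, then
    rewrite the draft without them. `llm` is any callable taking a
    prompt string and returning the model's reply as a string."""
    for _ in range(max_rounds):
        issues = llm(
            "List factual claims in the following text that are likely "
            "hallucinated, one per line, or reply OK if none:\n\n" + draft
        )
        if issues.strip() == "OK":
            break  # the reviewer pass found nothing suspect
        draft = llm(
            "Rewrite the text, removing or correcting these suspect "
            "claims:\n" + issues + "\n\nText:\n" + draft
        )
    return draft
```

Using the model to check the model has obvious limits, but a separate adversarial pass tends to catch more than asking the drafting prompt to "not hallucinate".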
That’s the format of an outline, not a legitimate book.
I do a lot of personal knowledge management and I use a shit ton of sections and lists in that. Books evolved from the art of telling stories, not from efficiently conveying knowledge. Perhaps with books and the like we're just way too accustomed to an approach that is suboptimal. I know I personally despise news articles and blogs that start with "setting the scene" and are incredibly and needlessly verbose, using thousands of words to say what could be made clear in a single paragraph.
Viewed from another angle: Reading text is inherently serial in nature even though a lot of things are related to each other in a graph. A document with sections with bulleted lists is actually a way to represent a tree, which is closer to a fully unconstrained graph. I would argue that trees like that are much easier to parse than classically written texts.
There is irony here in that I only used some whitespace to add structure, but never used any bulleted lists in this comment.
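The tree-versus-serial point can be made concrete: a sectioned document with bulleted lists is just one traversal order of an underlying tree. A tiny sketch (the data and rendering here are illustrative, not any particular tool's format):

```python
def render_tree(nodes, depth=0):
    """Serialize a tree of (title, children) pairs into the indented
    bullet list a reader would actually see on the page."""
    lines = []
    for title, children in nodes:
        lines.append("  " * depth + "- " + title)
        # Recurse: children render one indent level deeper.
        lines.extend(render_tree(children, depth + 1))
    return lines

outline = [("Storage", [("Disk", []), ("Memory", [])]),
           ("Networking", [])]
print("\n".join(render_tree(outline)))
# - Storage
#   - Disk
#   - Memory
# - Networking
```

The indentation carries the parent-child edges that classically written prose has to encode with connective sentences, which is arguably why outlines can be faster to parse.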
[...]
I did generate an alternative with Google Gemini 2.5 Pro, but the formatting doesn't work here on HN. It was decent, though!
That's because these are notes, not a book. A list-heavy outline format makes sense for notes, as these are summaries that supplement your own memory and knowledge you've already taken in. They're not a sole/primary source of conveying knowledge to others on their own.
> Perhaps we're just way too used for books etc. to an approach that is suboptimal.
If you truly believe books are "suboptimal", I can only suggest that you consider looking inward and do some reflection:
Is the "problem" really with books and long-form writing, which is the dominant form of knowledge transfer across several thousand years of human civilization?
Or is the problem with people's attention spans in the past decade, due to dopamine-fueling social media doom scrolling and AI usage?
The future will have authenticity in short supply