There is an Asimov story called "Someday" in which a toy computer called a Bard generates random fairy tales and reads them to children.
In the story two children try to hack their Bard, to make it tell more interesting modern stories, by feeding it a new vocabulary of modern words. In the end, it just generates the same old fairy tale plots using the new words it has learned.
I really feel like that story embodies today's AI generated stories. I've tried to get ChatGPT to generate original fairy tales and whatever plot prompt I give it, it spits out what is essentially the same dull story every time.
I always enjoy spotting a good anachronism in a sci-fi story (societies with space travel that still use typewriters), but this is a case of a really spot-on prediction.
godelski · 1h ago
> it just generates the same old fairy tale plots using the new words it has learned.
I think you're leaving out the best part! I don't want to spoil it, it's a short story. Classic trope, but still. Story here[0]
On another note, as an avid SciFi lover I have always found it interesting that in books, movies, and shows there have been many machines that talk and do complex tasks, yet no one ever thought they were alive. Just take Star Trek. The simulations in the Holodeck are highly realistic and intended to mimic real humans. Even the ship's computer can speak and write code as requested. Far more advanced than our systems today. There's even that famous episode in TNG with Data where they question whether he is actually alive or not. Not such an easy thing, and yet every viewer probably thought he was and recognized the difference between him and the computer and Holodeck[1]. Though my favorite version of that question is in Asimov's The Positronic Man (basis of the movie Bicentennial Man, and yes, Asimov is why Data has a positronic brain).
These are fiction, but I find this so interesting. I feel like our LLMs look much more like the computer from Star Trek than the Holograms, let alone Data. Yet there's a lot of disagreement about the level of intelligence of these systems, and it makes me wonder why someone would say the computer in Star Trek isn't intelligent but the LLM is (I'm sure there's retconning too).
[0] https://nyc3.digitaloceanspaces.com/sffaudio-usa/mp3s/Someda...
[1] Well there is Voyager. And that episode from TNG. But go read [0] ;)
og_kalu · 1h ago
Because it's fiction and the Author is God.
In Star Trek, the computer is framed as an appliance. It's the ship's operating system. The characters treat it like a highly advanced Alexa. They issue commands ("Tea, Earl Grey, hot"), ask for information, and expect a transactional response. No one ever asks the computer, "How are you feeling today?" because the narrative has established it doesn't have feelings. It's a tool, and we, the audience, accept this premise.
In contrast, the entire point of Data's character is to question the line between machine and person. The episode you mentioned is a courtroom drama specifically designed to force the characters (and the audience) to see him as a sentient being with rights. His "positronic brain" is the magical Asimovian hand-waving that signals to the audience: "This one is different. Pay attention."
'The Author' could have easily positioned the computer or the holodeck in a similar manner and people would agree it was sentient. Or Star Wars droids could easily be given more of this kind of weight than they are currently given.
It's one thing to read a fictional story about a fictional technology and assume the position and framing the God is pushing you to, it's another thing entirely to have the technology in your hands and play around with it.
godelski · 43m ago
> Because it's fiction and the Author is God.
>> These are fiction, but I find this so interesting
I mean... I do recognize this fact. I hope we're clear on that.
> The characters treat it like a highly advanced Alexa.
I see a lot of people use GPT the same way.
But also, I disagree. People do ask "How are you feeling today?" to the holo programs. Hell, Paris makes a joke to Kim about how everyone falls in love with a holo character at some point. That it is the fantasy.
> 'The Author' could have easily positioned the computer or the holodeck in a similar manner and people would agree it was sentient.
I mentioned [1] for a good reason. There was more than one episode addressing this point. Not to mention Voyager, where this is a subplot of the entire series.
> Or Star Wars droids could easily be given more of this kind of weight than they are currently given.
I disagree. Some feel very alive.
I get your point and there's a lot of it I agree with, but I think you're brushing things off too quickly. You can't just say that people have no free interpretation and "the author" fooled everyone, especially when there are plenty of stories and episodes that bring all this into question. Please, go read [0]
thenoblesunfish · 2h ago
Is this why the Google product (now just called Gemini) was called that?
godelski · 1h ago
I don't know if anyone has officially said so, but there was a public statement about it being chosen as a reference to storytelling (a Celtic term). So it could be for similar reasons, or could be a reference. Not surprising considering how famous the story is and how famous Asimov is. But maybe someone else knows more definitively.
andrewflnr · 1h ago
Celtic? The name "Gemini" is much, much, much better known from Greek myth, if it occurs in Celtic culture at all...
godelski · 40m ago
Here, I'll rephrase thenoblesunfish for clarity, because you seem to have misread:
> Is this why the Google product (now just called Gemini) was called [Bard]?
"That" == "Bard".
They weren't referring to Gemini, which is why there's that whole thing in parentheses stating it's *now* called Gemini
From Wiki
| The technology was developed under the codename "Atlas", with the name "Bard" in reference to the Celtic term for a storyteller and chosen to "reflect the creative nature of the algorithm underneath".
strken · 55m ago
Gemini was originally called Bard. That's what thenoblesunfish was asking about: the original name, not the new one.
xenotux · 6h ago
I think this is similar to AI generated images: it puts a new creative tool in the hands of people who might have had good ideas, but didn't have a mastery of the medium. In that respect, it's cool: if you had a great idea for a sci-fi story but no talent for writing, and if an LLM let you realize your vision, that's neat. It has some negative externalities for the craftsmen, but overall, more creativity is hardly a bad thing.
The real problem is that the most lucrative uses of the tech aren't that. It's generating 10,000 fake books on Amazon on subjects you don't care about. It's cranking out SEO spam, generating monetizable clickbait, etc.
thwarted · 5h ago
> I think this is similar to AI generated images: it puts a new creative tool in the hands of people who might have had good ideas, but didn't have a mastery of the medium.
Reading this sentence reminded me of the classic HN position that "ideas are worthless, what matters is the execution", usually mentioned in the context of an "ideas person" looking for their "technical cofounder": the ideas person thinks they deserve at least 50%, often more, of the ownership of whatever gets built, because without them there'd be no idea.
> if you had a great idea for a sci-fi story but no talent for writing, and if an LLM let you realize your vision, that's neat.
If your "vision" is only the "idea for a sci-fi story", is that really a vision? Good books leave the reader changed/influenced in some fashion, through the way the idea is presented and developed over the course of the story, not just from a blurb on the book jacket.
> overall, more creativity is hardly a bad thing.
Is coming up with an idea for a sci-fi story the meat of the creative act, such that flooding the market with ideas counts as an increase in creativity overall?
vunderba · 5h ago
Agreed. Have a read through the 10 dragon stories - the majority of them are rich in spices, but bereft otherwise.
LLMs seem to revel in throwing layer after layer of decorative paint in the hope that people will fail to notice that they're not actually painting anything.
As a writer, the best advice that I can give is to build your house upon the rock and not upon the sand.
thwarted · 4h ago
> LLMs seem to revel in throwing layer after layer of decorative paint in the hope that people will fail to notice that they're not actually painting anything.
Length or duration is considered, erroneously IMO, to be a measure of completeness or thoroughness. Pithiness is valuable, and is a skill that can be honed. I guess padding out your writing using an LLM is equivalent to adjusting the font size and margins on a "three page essay" to meet the minimum requirements.
d0100 · 4h ago
> If your "vision" is only the "idea for a sci-fi story", is that really a vision?
We have art, games and movie directors
AI just enables anyone to be a "director", but most people can't direct anything worthwhile
kronatus · 4h ago
I agree with your point that most of us probably couldn't direct much worthwhile, but surely art/game/movie directors have more than an "idea". Granted, the only creative directors that I know of are the ones that did an exceptional job, so that might be a skewed perspective. I guess my objection is largely that the creative process is the fumbling around; treating LLMs as a shortcut to a creative product because you "haven't mastered the medium" is skipping the whole creative process. If you haven't bothered to fail at it, why should I be bothered to read it?
cgriswald · 3h ago
I use AI to generate lists of ideas. And then, if my idea is in the list, I know it's at best not novel and at worst cliche. There are many shortcuts like this that aid the process but aren't replacing it with dreck.
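A minimal sketch of that check, assuming a placeholder `ask_llm` helper (not any particular vendor's API) and a crude string-similarity match standing in for eyeballing the list:

```python
# Sketch of the "is my idea already on the list?" filter described above.
# ask_llm is a stand-in for whatever model or API you actually use.
from difflib import SequenceMatcher


def ask_llm(prompt: str) -> str:
    """Placeholder: wire this up to your preferred LLM client."""
    raise NotImplementedError


def idea_seems_novel(my_idea: str, topic: str, n: int = 20, threshold: float = 0.6) -> bool:
    listing = ask_llm(f"List {n} distinct story ideas about: {topic}. One idea per line.")
    ideas = [line.strip("-*0123456789. ") for line in listing.splitlines() if line.strip()]
    # If any generated idea closely resembles mine, treat mine as at best not novel.
    return all(
        SequenceMatcher(None, my_idea.lower(), idea.lower()).ratio() < threshold
        for idea in ideas
    )
```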
perching_aix · 4h ago
It's also not a static process. It is reasonable to expect that through interaction with AI, just like through the use of any tool, people can get better at this. Especially since AI can explain concepts to people about the thing they're trying to make, teaching them, even if it cannot apply those concepts consistently at scale on its own.
hyperadvanced · 5h ago
I also think that just having “an idea” isn’t exactly the same as increasing net creativity. Oftentimes with art, the impact of something truly creative results in changing the parameters of the medium or genre itself, not merely sticking to the script and producing a new work in the style of X. If you take, for example, J Dilla and his impact on hip-hop, the fact that there’s an entire subgenre or two focused on some of his hallmark innovations (micro-rhythm/wonky beats, neo-soul, lofi sampling/creative use of samples) speaks to that kind of “real” creativity. I frankly think that kind of genre-bending is possible with the use of LLMs, but if you just say, “here’s my story idea, make it so”, without any eye towards the actual technique or craft, you won’t be getting the next Blood Meridian out of it.
voidhorse · 4h ago
I completely agree. If nothing else, the discourse around LLM use in the creative space has just shown me that many people in technology simply have little to no comprehension of the arts and humanities and have no clue what it is that artists actually do, or even how to intelligently engage with artistic works. The 20th-century transformation of art into time-filling idle entertainment by way of mass media has been a great success. The internet helped reset some of that, but not by much. I guess that shouldn't necessarily be surprising, but I am always kind of astonished at the fact that our society has produced a large number of specialists who are profoundly good at what they do but who clearly lack exposure to other spheres of life.
thwarted · 4h ago
> the discourse around LLM use in the creative space has just shown me that many people in technology simply have little to no comprehension of the arts and humanities and have no clue what it is that artists actually do, or even how to intelligently engage with artistic works
While shitting on "people in technology" is the pastime du jour, the technologists may be boosters, but non-technical, non-creative people also have "little to no comprehension of the arts and humanities and have no clue what it is that artists actually do, or even how to intelligently engage with artistic works". And that's because they are mainly consumers of the creative output.
xenotux · 3h ago
I think that's an odd way of viewing creativity: unless you can pull off the whole thing, you're not really creative.
What about people who are not native speakers? Who are dyslexic? Do we deny them the spark of creativity because they can't write perfect prose without help? Heck, what about most sci-fi writers? Their editors often do a lot of heavy lifting to make the final product good.
If you have a killer idea for a meme or a really clever concept of a four-panel comic strip, but don't know how to use Photoshop or can't draw very well, is it a sin to ask a machine to help? Is your idea somehow worthless just because you previously couldn't do that?
I'm not disputing that a lot of people use these tools this way. In fact, that was exactly my point. If your "idea" is to crank out deceptive drivel, I'm not defending that.
thwarted · 3h ago
> Do we deny them the spark of creativity because they can't write perfect prose without help?
If you don't create anything, you're not being creative. My assertion is that just coming up with an idea is not sufficient to create something; an idea alone isn't manifest. An idea without expression, in whatever medium, isn't very useful.
Why does prose need to be "perfect" (whatever that means) in order for the act of writing to be "creative"? Much poetry isn't "perfect prose", and that it doesn't follow a known, accepted grammatical standard is often its defining quality as poetry.
Have you created a joke if you just think of it (the "idea") and it is never told to anyone (the "execution")?
Have you created a joke if someone is exposed to it but they don't laugh? (You may have created something by telling it to someone, but it probably isn't a joke if no one finds it funny, and delivery is a good portion of what can make a joke funny, and delivery is part of the execution).
If your intent is to exercise your creativity by writing a book, but all you do is come up with an idea and have an LLM write it, did you write a book? If you intend to write a joke and say "it would be funny if we had a joke for this" and someone else comes up with a joke, did you write the joke because you had the idea of it?
> If you have a killer idea for a meme or a really clever concept of a four-panel comic strip, but don't know how to use Photoshop or can't draw very well, is it a sin to ask a machine to help?
Asking for help isn't a sin (nor do I know why one would use that word). But claiming you did something that you didn't do is a lie, and lying is a sin.
If it's the idea that is killer, then the quality of the output doesn't matter as long as the idea is communicated, so one's ability with Photoshop isn't relevant. A well drawn four-panel comic doesn't turn a shit idea into gold. But a lot of meme gold isn't gold because of the quality of drawing — which means you don't need to draw to some arbitrarily high standard to produce meme gold. The assertions that somehow it's not creative unless it's "perfect" and the use of an LLM can result in "perfection" are ideas that have to die.
> Is your idea somehow worthless just because you previously couldn't do that?
Well, my original observation at the top of the thread is that HN has considered ideas to be largely worthless if they don't have meaningful execution. In the case of writing — books, jokes, or memes — expression is the execution.
colechristensen · 4h ago
Having tried to use AI as a technical cofounder (in small scale experiments), nah it's not there yet. It's more like having a herd of interns. With close supervision there can be a lot of productivity but without a good deal of taste and expertise it's just going to be garbage unless you're very lucky.
entropyneur · 1h ago
Wondering if anyone had success with this yet. I have several ideas for poetry and prose that I don't have the skill to pull off. I periodically plug them into new models and so far all the results have been completely unsatisfactory.
gtowey · 6h ago
And they haven't even gotten around to adding advertising into them yet! Imagine when chat assistants subtly steer you towards certain products. Would you even know it was manipulating you?
perching_aix · 5h ago
They kind of did though. Thing tries to make me generate diagrams every step of the way (as a kind of feature demo), even though they're rarely ever a good idea, and even when they are, the pictures and diagrams generated are useless.
andrewflnr · 4h ago
> more creativity is hardly a bad thing.
Why, exactly, is creativity good? What is the benefit, and to whom? Does that benefit survive the interposition of genAI? I'm doubtful, either for the reader or the craftsmen.
Aeolun · 6h ago
Personally I’m having a blast reading AI generated fiction. As long as the direction is human, and often enough corrected to keep the minor inconsistencies out, the results are pretty good.
For me it’s no different from generating code with Claude, except it’s generating prose. Without human direction your result ends up as garbage, but there’s no need to go and actually write all the prose yourself.
And I guess that just like with code, sometimes you have to hand craft something to make it truly good. But that’s probably not true for 80% of the story/code.
antihipocrat · 6h ago
Whose voice are you using when adding your hand crafted prose? Mimicking the style of the 80% or switching to your own?
Perhaps I'm a Luddite, or just in the dissonance phase toward enlightenment, but at the moment I don't want to invest in AI fiction. A big part of the experience for me is understanding the author's mind, not just the story being told
CuriouslyC · 6h ago
Plot twist, people who do first drafts and structural edits with AI can still do line edits and copy edits by hand for personal voice (and you have to anyhow if you want the prose to be exceptional).
Aeolun · 6h ago
I think there’s only a very small subset of fiction that uses the prose to that extent. Much like code really. If you are writing original algorithms you cannot use the LLM. If you are just remixing existing ones, it becomes a lot more useful.
Also, I guess I missed the brunt of your question, though the answer is similar. Most voices work for most characters. There's only so many ways to say something, but occasionally you have to adjust the sentence or re-prompt the whole thing (the LLM has a tendency to see the best in characters).
exmadscientist · 5h ago
Perhaps on a relative scale "most" fiction doesn't carry any sort of deeper meaning, but if you look at things like "Hugo or Nebula Award nominees" (to pluck out the SF/F genre as a category), I'd say that almost every single one of them, going back all those decades, has something more to say than just their straightforward text.
And unless reading is your day job or only hobby, that's a massive, massive corpus of interesting text. (In just one genre! There are more genres!) So on an absolute scale, there is so much fiction to read with more-than-surface-level meaning that I personally just don't understand why anyone would have the least interest in reading AI slop.
(I also don't have any real interest in most Kindle Unlimited works, probably for similar reasons. Though I am quite certain there are diamonds there, I've just not had particularly much time for/good luck at finding them.)
Aeolun · 4h ago
Sure, but that more than surface level meaning comes out in the story, not often in the specific way the sentences are written (I acknowledge those exist, I just don’t consider them the majority).
Also, you say you don’t understand why anyone would be interested in the AI slop. But from the article we learn that one is indistinguishable from the other (apparently even to the one professional author that tried)
exmadscientist · 1h ago
I was disappointed that the results shown didn't break things down a bit more between star ratings given and authorship guesses. I didn't think any of these stories were amazing flash fiction, and I think that's relevant here. I'm curious to know what people who liked them all, or at least really liked one of them, had to say on judging AI-vs-human.
add-sub-mul-div · 6h ago
> A big part of the experience for me is understanding the author's mind, not just the story being told
AI content is really exposing how people fall into a group that does go further than the surface text into deeper layers of context/subtext, and a group that doesn't.
Seattle3503 · 6h ago
I could be in either group, depending on my inclination at the moment.
sram1337 · 6h ago
I would love to read some of this. Where do you find AI generated fiction?
I've had two separate experiences reading a story on RoyalRoad, getting ten chapters in, noticing that any individual chapter is technically great but I just don't care about the characters or plot and can't be bothered reading further, and then noticing the story was AI-assisted.
I think the AI seems to struggle with consistency of characters and themes, and particularly with character growth over time: it can write touching moments, but these don't fit properly with the character's actions before and then after. It reads a bit like a story written by a hundred professional authors who can skim-read all the previous chapters but are on a strict time limit and don't have access to each other's notes. This makes me wonder if they're just not giving the AI notes on structure and character.
Though I'll admit I can't speak to the quality of that except my own stuff (which I'm naturally predisposed to like).
This was my attempt at fully AI generated (though edited by human):
https://www.royalroad.com/fiction/101072/inherited-wounds
deadbabe · 3h ago
The only AI generated fiction I read is stuff I create on the fly. Why would you read AI generated fiction made by other people when it’s the same as reading regular fiction?
unignorant · 3h ago
Here are my notes and guesses on the stories in case people here find it interesting. Like some others in the blog post comments I got 6/8 right:
1.) probably human, low on style but a solid twist (CORRECT)
2.) interesting imagery but some continuity issues, maybe AI (INCORRECT)
3.) more a scene than a story, highly confident is AI given style (CORRECT)
4.) style could go either way, maybe human given some successful characterization (INCORRECT)
5.) I like the style but it's probably AI, the metaphors are too dense and very minor continuity errors (CORRECT)
6.) some genuinely funny stuff and good world building, almost certainly human (CORRECT)
7.) probably AI prompted to go for humor, some minor continuity issues (CORRECT)
8.) nicely subverted expectations, probably human (CORRECT)
My personal ranking for scores (again blind to author) was:
6 (human); 8 (human); 4 (AI); 1 (human) and 5 (AI) -- tied; 2 (human); 3 and 7 (AI) -- tied
So for me the two best stories were human and the two worst were AI. That said, I read a lot of flash fiction, and none of these stories really approached good flash imo. I've also done some of my own experiments, and AI can do much better than what is posted above for flash if given more sophisticated prompting.
breuleux · 1h ago
The only one I was fairly sure was human was #6, and that was the only one I kinda enjoyed. In any case, as someone who reads a good deal, I agree. I didn't think any of the stories was particularly great (not enough to bother ranking them, beyond favourite) so I don't care all that much about the result.
> AI can do much better than what is posted above for flash if given more sophisticated prompting.
How sophisticated, compared to just writing the thing yourself?
I enjoy writing so a system like this would never replace that for me. But for someone who doesn't enjoy writing (or maybe can't generate work that meets their bar in the Ira Glass sense of taste) I think this kind of setup works okay for generating flash even with today's models.
biffles · 2h ago
Could you expand on your point re more sophisticated prompting?
I have found it hard to replicate high quality human-written prose and was a bit surprised by the results of this test. To me, AI fiction (and most AI writing in general) has a certain “smell” that becomes obvious after enough exposure to it. And yet I scored worse than you did on the test, so what do I know…
unignorant · 1h ago
For flash you can get much better results by asking the system to first generate a detailed scaffold. Here's an example of the metadata you might generate before actually writing the story: genres the story should fit into; POV of the story; high-level structure of the story; a list of characters along with significant details; themes and topics present in the story; detailed style notes.
From there you have a second prompt to generate a story that follows those details. You can also generate many candidates and have another model instance rate the stories based on both general literary criteria and how well they fit the prompt, then you only read the best.
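A rough sketch of that scaffold-then-write-then-rate loop; `call_llm` is a placeholder for whatever model you use, and the prompts and field names here are illustrative rather than the exact setup described above:

```python
# Illustrative sketch of the pipeline described above: generate a scaffold,
# write several candidate stories from it, have a fresh model pass score them,
# and keep only the best. call_llm is a placeholder for your actual client.
def call_llm(prompt: str) -> str:
    """Placeholder: wire this up to your preferred LLM client."""
    raise NotImplementedError


SCAFFOLD_FIELDS = (
    "genres", "POV", "high-level structure",
    "characters with significant details", "themes and topics", "detailed style notes",
)


def write_flash(premise: str, candidates: int = 5) -> str:
    scaffold = call_llm(
        f"Plan a flash fiction piece about: {premise}\n"
        f"Produce only this metadata: {', '.join(SCAFFOLD_FIELDS)}."
    )
    stories = [
        call_llm(f"Write a ~500-word flash fiction story that follows this plan exactly:\n{scaffold}")
        for _ in range(candidates)
    ]

    def score(story: str) -> float:
        # Separate rating pass: literary quality plus fidelity to the plan.
        reply = call_llm(
            "Rate this story from 1-10 for literary quality and for how well it follows "
            f"the plan. Reply with a number only.\nPLAN:\n{scaffold}\nSTORY:\n{story}"
        )
        try:
            return float(reply.strip().split()[0])
        except (ValueError, IndexError):
            return 0.0

    return max(stories, key=score)
```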
This has produced some work I've been reasonably impressed by, though it's not at the level of the best human flash writers.
Also, one easy way to get stuff that completely avoids the "smell" you're talking about is to give specific guidance on style and perspective (e.g., GPT-5 Thinking can do "literary stream-of-consciousness 1st person teenage perspective" reasonably well and will not sound at all like typical model writing).
codechicago277 · 3h ago
I had similar results, and story 4 is so trope heavy I wonder if it’s just an amalgamation of similar stories. The human stories all felt original, where none of the AI ones did.
unignorant · 2h ago
I'm not sure I agree that the human stories felt original. I was pretty unimpressed with all of the stories except maybe 6, and even that one dealt in some common tropes. 5 had fewer tropes than 6 (and maybe as a result of that received the highest average scores from his readers), but I could tell from the style it was AI.
keiferski · 3h ago
I think if you compared the AI stories to works by “top” authors, the results wouldn’t really be as close. No one is confusing a story by Kafka or Conrad with a ChatGPT one.
Because unfortunately, one reason why readers can’t tell the difference between the AI and human authors is because they don’t have much exposure to the greats. The average person reads something like 2 books a year, and they probably aren't reading Nabokov.
vunderba · 3h ago
My personal litmus test that works fairly effectively with these AI generated stories is this - if someone asked you "What was the story about?" - could you reply with anything more substantial than the prompt that was given to generate the story to begin with?
Have a read through the 10 dragon stories where the prompt was "Meeting a dragon" and you'll see what I mean.
https://mark---lawrence.blogspot.com/2023/09/so-is-ai-writin...
That's probably true, but as the author points out, it's still interesting to see where the boundary is at the moment. It's a lot further along than people typically argue imo.
keiferski · 54m ago
I don’t really find it that surprising - if you write generic low-quality stories, it’s difficult to differentiate your work from an AI writing generic low-quality stories. People have been selling slop stories on Amazon for a long time before AI tools, so describing them as professional authors is not exactly damning here.
mariusor · 51m ago
With all due respect, at least Robin Hobb is one of the greats in her domain.
og_kalu · 1h ago
I put the basic prompt to 5 medium thinking in the API (because most of the writing gains seem to be tucked away in the reasoning mode and because you can't trust the router for this stuff) and this is what I got. It's not flash fiction, more a short story, but I'm impressed.
https://pastebin.com/huGhbX7u
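For reference, going to the API directly with a pinned model and reasoning effort (rather than through the ChatGPT router) looks roughly like this with the OpenAI Python SDK; the parameter shape may differ across SDK versions, and the prompt text is only a stand-in for the blog's actual one:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Pin the model and the reasoning effort explicitly instead of trusting a router.
resp = client.responses.create(
    model="gpt-5",
    reasoning={"effort": "medium"},
    input="Write a short story, roughly 1000 words, in which the narrator strikes a bargain with a demon.",
)
print(resp.output_text)
```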
AI fiction really shines in baffling surreal prompts that it tries hard to satisfy. Here's an example:
"let's write a story where donald trump is giving a speech to a crowd as people slowly discover he is secretly a northern red oak tree in a human suit to the shock of fans and reporters! The tips of his fingers become branches as he tries to deny it as wildly impossible meanwhile his human disguise continues to fail"
The best fantasy/sci-fi literature involves a lot of world building.
For some, the world building came first and the stories were an offshoot of that.
Tolkien needed a world and stories to bring life to the languages he was inventing.
Raymond E Feist's Midkemia was a massive collaborative effort for a RPG world. He has stated: "I don't write fantasy; I write historical novels about an imaginary place. At least that's how I look at it."
This is what you won't see AI doing...yet.
ethan_smith · 1h ago
Multi-agent LLM systems with persistent memory are actually making significant progress on world-building coherence, maintaining consistency across thousands of interactions while incrementally developing complex fictional universes.
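As a generic illustration of what "persistent memory" can mean here (a sketch under assumed helpers, not any particular framework): keep a running world bible, inject it into every chapter prompt, and fold newly established canon back in afterwards.

```python
# Generic sketch of persistent world-building memory: a running "world bible"
# is injected into every chapter prompt, and newly established canon is folded
# back in after each generation. call_llm is a placeholder for a real client.
import json


def call_llm(prompt: str) -> str:
    """Placeholder: wire this up to your preferred LLM client."""
    raise NotImplementedError


def write_chapter(world_bible: dict, outline: str) -> tuple[str, dict]:
    chapter = call_llm(
        "Established canon (do not contradict any of it):\n"
        f"{json.dumps(world_bible, indent=2)}\n\nWrite the next chapter: {outline}"
    )
    new_facts = call_llm(
        "Return a JSON object of any new names, places, or rules this chapter establishes:\n"
        + chapter
    )
    try:
        world_bible = {**world_bible, **json.loads(new_facts)}
    except json.JSONDecodeError:
        pass  # keep the old bible if the extraction step returns malformed JSON
    return chapter, world_bible
```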
antisthenes · 3h ago
World building...is just having a really large context window.
lithiumii · 2h ago
I love what he is doing but really wish the voting interface were better. Also I wonder what the results would be if there were AI-assisted stories, but maybe real authors would hate to do that.
akoboldfrying · 6h ago
I applaud OP's transparency and willingness to call this result what it is.
bsder · 5h ago
Perhaps what this is pointing out is that a lot of writers of the genre of "fantasy" produce mostly formulaic, trope-laden piles of crap that AI is pretty good at mimicking?
This is neither new nor news. "The Well-Tempered Plot Device" is almost 4 decades old (see: https://news.ansible.uk/plotdev.html).
It does suggest that publishers might want to screen new writing with a quick "Did AI write this?" and only publish the ones where it is obvious to humans that AI did not write it.
andrewflnr · 1h ago
> "The Well-Tempered Plot Device" is almost 4 decades old
Yeah, we've moved forward a ways in the last 4 decades, or the top of the market has, at least. That was a fun read, though.
exmadscientist · 5h ago
Yes, the human-written stories that I guessed wrong about were the ones that seemed to have nothing to say. When the plot's stereotypical and trite, there's no subtext (difficult to do in flash, but not impossible; some can do it well), and it scans like anyone could have written it anywhere or anytime, well, that looks like AI.
(In that vein I am baffled how anyone could think the fourth story, especially, was anything but AI.)
(And, as well, the seventh story is interesting because it reads, to me, exactly like someone who's used to writing something longer trying to write flash. It doesn't land anything, it doesn't conclude, but it looks like if it had about twice the length it might be interesting. And it's got some dissonance from breaking with the usual demon-bargaining template. So I pegged that as human. Oops!)
spondylosaurus · 5h ago
I thought it was interesting/telling (but maybe not surprising) that the AI-generated stories scored the highest according to reader rankings, yet pinged to me as immediately flat and generic. But I really liked the idiosyncrasies of a few of the human-authored entries!
vunderba · 5h ago
The story plot was "Meeting a dragon". As both a human and a writer, challenge accepted:
Long ago, there lived a golden dragon whose fractal-like scales gleamed in the glow of the morning in her cave. She was known for her kindness, and many came not with sword or spear, but with humble requests - for you see, it was widely believed that the mystical scales of a dragon would heal illness, cure ailments, and provide fortune.
One such visitor timidly looked up at her great shining body and beseeched, "Oh glorious dragon, might I have a single scale?"
"Of course," the dragon replied warmly. She delicately, almost lovingly, with a slight twinge, used a single claw to prise off a single golden scale, leaving a dull patch.
Over the eons, more and more people would come as supplicants. The scales were used for good luck, for warmth, to ward off evil, as the draconic equivalent of a rabbit's foot.
In the end, the poor dragon was stripped bare - the fire from her burning furnace now showed clearly through to her patchwork, sensitive, and naked skin.
When winter came, she huddled in the cold darkness. And still, when a peasant would come asking for a scale - just one, a single scale nothing more, she would not refuse. In her eternal generosity she would carefully break off another. This time it took longer to find one left upon her body, as the humans had stripped her bare like a tree come winter.
Then thus came a knight. "I'm sorry, good sir, but I have no scales left to give," she said pitiably.
"Why, your scale was a choking hazard and wasn’t labeled not for ages under 5! Prepare for a class-action lawsuit and also to be impaled upon a lance."
The End.
I'll pretend I intended it as a parable of the destructive nature of mass tourism or something something Lorax something something truffula trees.
andrewflnr · 1h ago
For me, the part about "fractal-like scales" would have flagged the author as either an AI or just kind of a pretentious dummy, because that makes very little sense. "Then thus" would actually lean me toward human, because LLMs are usually better at grammar than that. :D
Just wanted to be first to say I'm a huge fan of Mark Lawrence. I didn't know he blogged! Now I'll go actually read the post.
dwd · 6h ago
Big fan of Janny Wurts, particularly the Empire series she co-wrote with Raymond E Feist. Very surprised she was outed as an AI, though she is the only author who has ever made me pull out the dictionary, because she used a word I had never seen and I wanted to be sure of the meaning in the context.
npinsker · 3h ago
I was very (wrongly) confident she was AI. I thought her prose was bland and repetitive and lacking melody, and she was the only human author to explicitly mention demons in the first paragraph, which is something that happens when you prompt an AI for a demon story.
The twist had potential, but to me wasn’t executed as well as it could have been. I was hoping for some stronger irony — e.g. if the mainlanders had pushed for the bridge, but not the islanders; or a sentence about how the demons were very surprised, but nonetheless went on to take over the world before they knew what had happened. As it is, it felt underdeveloped and slightly uncanny in an AI-like way.
dwd · 1h ago
The twist was the one thing that made me think it was human written. I quite liked her story after the second read. But sad to say, I actually preferred most of the AI ones over the rest.
voidhorse · 4h ago
Interesting study. I think the use of AI boils down to this: is the product independent of the process and the context? Or is it dependent on it in some way? I think, when it comes to art, the latter is truer than the former, and most use of AI in creative fields is predicated on trying to convince people to engage with art in an extremely shallow way (art strictly as soulless, time filling entertainment).
When an author writes a novel, the novel does not exist in a vacuum. The author's persona and the cultural exchange that emerges around the text also become an important part of the phenomenal status of the work and its cultural recognition. Even when an author remains pseudonymous and makes no appearance, this too is part of the work.
If an author uses AI as a tool and takes care to imbue the output with artistic and personal relevance, it probably will become an art object, but its status may or may not be modulated by the use of AI in the process, to the extent that the use does or doesn't affect people's interpretations of the work, or the author's own engagements. Contrarily, AI generated work that has close to no actual processual involvement on the part of the author will almost always have slop status, just because it's hard to imagine an interpretive culture around such work that doesn't at some point break down in the face of the inability to connect the work with other cultural touchstones or the actual experience of a human being. Maybe it could happen, but if it did, at that point the status of the work is still something different, insofar as it would be a marker not of human experience, as literature traditionally has been, but of something quite new and different: a literature-cum-hypermarket (we already had mass market) product.
thedevilslawyer · 5h ago
We're at a key milestone in the IP wars; IP was introduced into our civilization to protect the few who had the ability to create.
This idea is well past its due date. We should move to a liberal IP regime, with copyright strictly reduced to 7-10 years, with all works then entering the public domain. Our society will universally thrive with the abundance that will come.
I understand and empathize that a class of vocations today will go away, but so did lamplighters. The roles may become extinct; but we will endure as a people.
rwyinuse · 4h ago
I'm not sure people left without a job will endure, at least if they're unfortunate enough to live somewhere without social security. The key question is whether AI will somehow create enough new jobs for those let go (I doubt that), or just cause massive unemployment.
ordinaryradical · 4h ago
There are hundreds of years of art already in the public domain and society is not "universally thriving". Why would adding a couple more recent years of works change that?
Seems very utopian magic thinking to me.
HPMOR · 5h ago
Why is it always taken as a conclusion that humans will persist? Lots of species go extinct, and it is not clear that won’t be true here as well.