LLMs Are Weird, Man

4 points · libraryofbabel · 6/23/2025, 3:36:19 AM · surfingcomplexity.blog

Comments (1)

proc0 · 5h ago
> Postscript: I don’t use LLMs for generating the texts in my blog posts, because I use these posts specifically to clarify my own thinking.

I can usually tell when AI is being used. AI has a distinct writing style: broad generalizations, usually with lists, and a lot of repetition. The same happens with images. Once you use these tools heavily, it becomes clear they have limitations, and those limitations don't follow (or make sense) given the otherwise smart behavior they display.

For example, a model can write lyrics about quantum physics in Jar Jar Binks's voice, or handle other crazy prompts that would take a human significant effort, so it should also be able to write a really good article with decent structure and good points, but it fails. Another example, from the latest image generators I tested: they can produce a beautiful image of a castle or some other setting with exquisite detail, but when I prompt for a character with its left foot forward they somehow completely fail. I think this consistency problem is crucial for reliability and applicability in the real world. There might be workarounds with agents, but so far the cost-to-result ratio seems high, since it takes lots of prompts to raise that reliability a few points.
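Roughly what I mean by the agent workaround is a verify-and-retry loop. Here's a minimal sketch; call_llm and check are hypothetical stand-ins for a real model API and a task-specific verifier, and the 70% pass rate is just an assumed number for illustration:

```python
import random

# Hypothetical stand-ins: a real agent would make an API call here
# and run an actual verifier on the output.
def call_llm(prompt: str) -> str:
    return f"draft for: {prompt}"

def check(output: str) -> bool:
    # Assume a single attempt satisfies the constraint ~70% of the time.
    return random.random() < 0.7

def generate_with_retries(prompt: str, max_attempts: int = 5):
    """Retry until the output passes the check; return (result, prompts used)."""
    for attempt in range(1, max_attempts + 1):
        output = call_llm(prompt)
        if check(output):
            return output, attempt
    return None, max_attempts

result, cost = generate_with_retries("character with its left foot forward")
print(f"used {cost} prompt(s); success={result is not None}")
```

The cost math is what bites: at a 70% per-attempt pass rate you have to budget about 4 attempts to get overall reliability above 99% (since 0.3^4 ≈ 0.008), so each few points of reliability you buy back costs multiple extra prompts.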

I'm thinking LLMs are like one part of our brain, something like the language area, but intelligence is much more than that, so maybe we need a new architecture that combines with LLMs to get full logical thinking.