Ask HN: Is AI-generated content intentionally bad?

1 point · revskill · 3 comments · 8/22/2025, 10:19:49 AM
I'm not sure why, but AI-generated output is confusing; it seems intentionally badly trained to generate bullshit more often than it should. Do you guys have the same problem?

My hypothesis is: it's designed that way to generate more profits.

Comments (3)

ungreased0675 · 4h ago
I think it’s more likely LLMs naturally produce variable quality outputs.

Because, if you had a machine that could automatically generate excellent code, would you sell access to it, or would you use it to put every other software company out of business?

rlv-dan · 5h ago
> it seems intentionally badly trained

This implies that the available training code is good. Consider, for example, how many GitHub repos are just students trying to learn to code.

incomingpain · 3h ago
> I'm not sure why, but AI-generated output is confusing; it seems intentionally badly trained to generate bullshit more often than it should. Do you guys have the same problem?

First, Hanlon's Razor: Never attribute to malice that which is adequately explained by stupidity.

Second, imagine how you train your model. You want a huge selection of quality writing. If you were to train your model on Twitter posts, it would be dumb as rocks. So you put in literature. Science articles. Technical documentation. If you're Anthropic, you pirate all those books.

The model you get out is then very high quality. But when you ask it to output text, it produces an averaged tone, style, and quality drawn from centuries of writing. The text you're getting is 2% Shakespearean, 4% Old English, 5% Japanese-to-English translated manga.

You need to ask it for exactly what you want. If you don't, the output will be confusing.