What common mistakes do people make when using LLMs or writing prompts?

Alex_001 · 5/7/2025, 8:34:04 AM
I'm currently building a side project that integrates an LLM to help users turn messy input into structured content. While experimenting, I've noticed that even small changes in phrasing can drastically affect output quality, and it's made me wonder how much of this comes down to prompt design versus the model's own quirks.
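For context, here's roughly the kind of A/B experiment where I see the variance. This is a minimal sketch assuming the OpenAI Python client; the model name, prompt wording, and sample input are all just illustrative, not my actual setup:

```python
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MESSY_INPUT = "mtg w/ sarah tues 3pm re: q3 budget, maybe push to wed?"

# Two phrasings of the "same" instruction; in practice they can yield
# noticeably different outputs.
PROMPT_VARIANTS = [
    "Extract the event details from the text below as JSON with keys "
    '"person", "day", "time", and "topic".',
    "Read the following note and return a JSON object describing the "
    "meeting it mentions (person, day, time, topic).",
]

for prompt in PROMPT_VARIANTS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works here
        temperature=0,  # reduce sampling noise so differences are mostly prompt-driven
        messages=[
            {"role": "system", "content": "Reply with JSON only, no prose."},
            {"role": "user", "content": f"{prompt}\n\n{MESSY_INPUT}"},
        ],
    )
    raw = response.choices[0].message.content
    try:
        print(json.dumps(json.loads(raw), indent=2))
    except json.JSONDecodeError:
        # Even with "JSON only" instructions, models sometimes wrap output
        # in markdown fences or add commentary.
        print("non-JSON reply:", raw)
```

Even at temperature 0, the two variants don't always agree on field values or formatting, which is what prompted the question.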

Curious to hear from others:

What are some common mistakes you've seen (or made) when prompting LLMs?

Any patterns in user behavior that lead to unreliable or unexpected results?

Are there prompt-writing techniques that work across models (ChatGPT, Claude, Llama, etc.)?

Would love to collect insights and even horror stories from folks deploying or experimenting with LLMs.
