Would you use an LLM that follows instructions reliably?

3 points by gdevaraj | 6/4/2025, 10:01:03 PM | 5 comments
I'm considering a startup idea and want to validate whether others see this as a real problem.

In my experience, current LLMs (like GPT-4 and Claude) often fail to follow detailed user instructions consistently. For example, even after explicitly telling the model not to use certain phrases, follow a strict structure, or maintain a certain style, it frequently ignores part of the prompt or gives a different output every time. This becomes especially frustrating for complex, multi-step tasks or when working across multiple sessions where the model forgets the context or preferences you’ve already given.

This isn’t just an issue in writing tasks—I've seen the same problem in coding assistance, task planning, structured data generation (like JSON/XML), tutoring, and research workflows.

I’m thinking about building a layer on top of existing LLMs that allows users to define hard constraints and persistent rules (like tone, logic, formatting, task goals), and ensures the model strictly follows them, with memory across sessions.

Before pursuing this as a startup, I’d like to understand:

Have you experienced this kind of problem?

In what tasks does it show up most for you?

Would solving it be valuable enough to pay for?

Do you see this as something LLM providers will solve themselves soon, or is there room for an external solution?

Comments (5)

proc0 · 1d ago
It's the central problem with AI right now! If this were fixed, it wouldn't matter if they were elementary-school-level AIs; they would still be useful if the output were consistent. If they were reliable, you could find an upper bound on their capabilities and instantly know that anything below it can be automated with confidence. Right now, they might do certain tasks at a PhD level, but there is no guarantee they won't fail miserably at some completely trivial task.
gdevaraj · 1d ago
Thank you for your feedback.
dtagames · 1d ago
Prompting and RAG are the only tools you have, like everyone else. What is "tone"? That's not deterministic. You're asking an LLM to predict tone. And logic? Forget it.

To validate (or really, dismiss) this idea, try it with your own RAG app or even with Cursor. There's just no way you can stack enough prompts to turn predictions into determinism.

ggirelli · 1d ago
> Have you experienced this kind of problem? In what tasks does it show up most for you?

I have experienced this type of problem. A colleague asked an LLM to convert a list of items in a text to a table. The model managed to skip 3 out of 7 items from the list somehow.

> Would solving it be valuable enough to pay for? Do you see this as something LLM providers will solve themselves soon, or is there room for an external solution?

The solution I have found so far is to prompt the model to write and execute code to make responses more reproducible. That way, most of the variability ends up in the code itself, while the code's outputs tend to be more consistent, at least in my experience.
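
Concretely, the pattern looks something like this (a rough sketch; llm stands for whatever client you already use, and it assumes the returned snippet is trusted enough to run locally):

    # Instead of asking the model to transform the data directly, ask it to
    # write a transformation function, then run that function ourselves.
    def transform_via_generated_code(llm, items: list[str]) -> list[dict]:
        # llm: placeholder for any callable that returns the model's text reply
        prompt = (
            "Write a Python function convert(items: list[str]) -> list[dict] that "
            "turns each item into a row with 'name' and 'value' keys. "
            "Return only the code, no prose."
        )
        code = llm(prompt)                    # the only non-deterministic step
        namespace: dict = {}
        exec(code, namespace)                 # run the generated snippet
        return namespace["convert"](items)    # applied deterministically to every item

Once the generated code exists, you can re-run it on new inputs without going back to the model at all.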

That said, I do feel like current providers are already working on this, or will start to soon.

gdevaraj · 1d ago
Thank you for your time and feedback.