Show HN: Opper AI – Task-Completion API for LLMs
LLM features often fail in production due to fragile, model-specific prompt chains. That’s why we built a task-completion API that makes model calls as reliable as any other API call: you declare what you want, and Opper manages the interaction with 80+ proprietary and open-source models. If an output fails the task’s success test, Opper retries or falls back to another model automatically. Successful completions can be saved as task-specific dataset entries and serve as in-context examples for future generations. When a completion passes (or retries are exhausted), you get structured output plus a pass/fail flag.
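Roughly, using a task looks something like this. The endpoint path, payload shape, and response field names below are just an illustration of the flow, not the exact API:

    # Illustrative sketch only: endpoint, payload shape, and response fields
    # are assumptions to show the flow, not the exact Opper API.
    import requests

    API_KEY = "YOUR_OPPER_API_KEY"  # placeholder

    # Declare what you want; model selection, retries, and fallbacks happen server-side.
    resp = requests.post(
        "https://api.opper.ai/v1/tasks/summarize-blood-test/complete",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"input": {"panel": {"hemoglobin_g_dl": 13.2, "ferritin_ng_ml": 18}}},
        timeout=60,
    )
    result = resp.json()

    # When the completion passes (or retries are exhausted), you get structured
    # output plus a pass/fail flag.
    if result.get("passed"):  # hypothetical field name
        print(result["output"])  # structured output matching the task's output schema
    else:
        print("failed after retries:", result.get("error"))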
For example, gettested.io previously spent weeks rewriting prompts to generate consistent blood test summaries. With Opper, a single task definition now delivers reports in 62 countries without any prompt edits.
Key features:
- Define tasks in JSON: inputs, outputs, success test (see the sketch after this list)
- Automatic prompt construction, retries, and fallbacks
- Quality through task-specific datasets and in-context learning
- Full observability (LLM-as-a-judge, prompts, responses, tokens, costs)
- Free tier up to $5/month; Utility plan from $5/month gets you high rate limits
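To make the first bullet concrete, a task definition could look roughly like the dict below (it mirrors the JSON you’d send). The field names and the success test are illustrative assumptions, not the exact schema:

    # Illustrative sketch only: field names and the success test are
    # assumptions, not Opper's actual task schema.
    task = {
        "name": "summarize-blood-test",
        # Inputs the task accepts.
        "input_schema": {
            "type": "object",
            "properties": {"panel": {"type": "object"}},
            "required": ["panel"],
        },
        # Structured output you want back.
        "output_schema": {
            "type": "object",
            "properties": {
                "summary": {"type": "string"},
                "flagged_markers": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["summary"],
        },
        # Success test evaluated before a completion is returned (or retried).
        "success_test": "summary mentions every marker outside its reference range",
    }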
We built Opper after our last startup, Unomaly (ML observability, acquired in 2020). Our vision with Opper is to let developers work with LLMs the same way they write any other code, and get reliable results.
The task-completion API is live today. We’d love for you to give it a try and would appreciate any feedback you have.
Thanks for checking it out!