Show HN: LLML: Data Structures => Prompts
2 points by knrz | 7/6/2025, 6:57:14 PM | 0 comments
I've been building AI systems for a while and kept hitting the same wall - prompt engineering felt like string concatenation hell. Every complex prompt became a maintenance nightmare of f-strings and template literals.
So I built LLML - think of it as React for prompts. Just as React is data => UI, LLML is data => prompt.
The Problem:
# We've all written this...
prompt = f"Role: {role}\n"
prompt += f"Context: {json.dumps(context)}\n"
for i, rule in enumerate(rules):
    prompt += f"{i + 1}. {rule}\n"
The Solution:
from zenbase_llml import llml

# Compose prompts by composing data
context = get_user_context()
prompt = llml({
    "role": "Senior Engineer",
    "context": context,
    "rules": ["Never skip tests", "Always review deps"],
    "task": "Deploy the service safely",
})
# Output:
<role>Senior Engineer</role>
<context>
...
</context>
<rules>
<rules-1>Never skip tests</rules-1>
<rules-2>Always review deps</rules-2>
</rules>
<task>Deploy the service safely</task>
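To make the mapping concrete, here's a minimal sketch of the data => prompt transformation the output above illustrates. This is not the library's implementation, just a toy serializer that reproduces the same XML-like shape for dicts, lists, and scalars:

```python
# Toy illustration of the data => prompt mapping shown above.
# NOT llml's actual implementation -- just the idea.

def to_prompt(data, indent=0):
    pad = "  " * indent
    lines = []
    for key, value in data.items():
        if isinstance(value, list):
            # Lists become numbered child tags: <key-1>, <key-2>, ...
            items = "\n".join(
                f"{pad}  <{key}-{i}>{item}</{key}-{i}>"
                for i, item in enumerate(value, start=1)
            )
            lines.append(f"{pad}<{key}>\n{items}\n{pad}</{key}>")
        elif isinstance(value, dict):
            # Nested dicts become nested tags
            inner = to_prompt(value, indent + 1)
            lines.append(f"{pad}<{key}>\n{inner}\n{pad}</{key}>")
        else:
            # Scalars become simple <key>value</key> pairs
            lines.append(f"{pad}<{key}>{value}</{key}>")
    return "\n".join(lines)

print(to_prompt({
    "role": "Senior Engineer",
    "rules": ["Never skip tests", "Always review deps"],
}))
```

The point is that the prompt's structure lives in the data, so composing prompts reduces to composing dicts.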
Why XML-like? We found LLMs parse structured formats with clear boundaries (<tag>content</tag>) more reliably than JSON or YAML. The numbered lists (<rules-1>, <rules-2>) prevent ordering confusion.

Available in Python and TypeScript:
pip/poetry/uv/rye install zenbase-llml
npm/pnpm/yarn/bun install @zenbase/llml
Experimental Rust and Go implementations are also available for the adventurous :)

Key features:
- ≤1 dependency
- Extensible formatter system (create custom formatters for your domain objects)
- 100% test coverage (TypeScript), 92% (Python)
- Identical output across all language implementations
The formatter system is particularly neat - you can override how any data type is serialized, making it easy to handle domain-specific objects or sensitive data.

GitHub: https://github.com/zenbase-ai/llml
Would love to hear if others have faced similar prompt engineering challenges and how you've solved them!