Show HN: Framework for LLM Iterative Refinement Until Mathematical Convergence
The companions are plain Python callables with built-in observability:
from recursive_companion.base import MarketingCompanion
agent = MarketingCompanion()
answer = agent("question or problem…") # final refined output
print(answer)
print(agent.run_log) # list[dict] of every draft, critique & revision
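To make the control flow concrete, here is a minimal, runnable sketch of the draft, critique, revise loop that a framework like this automates. All names (`refine_until_convergence`, the lambda stand-ins) are illustrative assumptions, not recursive-companion's actual internals; the real library would call an LLM at each phase.

```python
# Hedged sketch of an iterative-refinement loop. Hypothetical names;
# toy callables stand in for LLM calls so the example runs as-is.

def refine_until_convergence(generate, critique, revise, is_converged, max_iters=10):
    """Refine a draft until successive revisions stop changing meaningfully."""
    draft = generate()
    run_log = [{"phase": "draft", "text": draft}]
    for _ in range(max_iters):
        feedback = critique(draft)
        run_log.append({"phase": "critique", "text": feedback})
        revision = revise(draft, feedback)
        run_log.append({"phase": "revision", "text": revision})
        if is_converged(draft, revision):  # pluggable convergence predicate
            return revision, run_log
        draft = revision
    return draft, run_log

# Toy stand-ins: "revision" appends a character until a fixed point is reached.
final, log = refine_until_convergence(
    generate=lambda: "draft",
    critique=lambda d: "tighten wording",
    revise=lambda d, f: d + "!" if len(d) < 7 else d,
    is_converged=lambda old, new: old == new,
)
```

The `run_log` shape mirrors the post's description: every draft, critique, and revision is recorded as a dict, so the whole refinement trace is inspectable afterward.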
Why it stays clean & modular:
* Templates are plain text files (system prompts, user prompts, protocol). Swap harsh critiques for creative ones by swapping files.
* build_templates() lets you compose any combination.
* Protocol injection cleanly separates reasoning patterns from implementation.
* New agents in 3 lines: just inherit from BaseCompanion.
* Convergence uses embedding-based cosine similarity by default, but the metric is fully pluggable.
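The default convergence test above can be sketched as a cosine-similarity check between embeddings of successive drafts. This is a hedged illustration, not the library's code: `converged` and the 0.98 threshold are assumptions, and a real embedding model would replace the raw vectors.

```python
import math

# Illustrative embedding-based convergence check (hypothetical names).
# In the real framework, prev_vec/curr_vec would come from an embedding model.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def converged(prev_vec, curr_vec, threshold=0.98):
    # Successive drafts "converge" when their embeddings are nearly parallel.
    return cosine_similarity(prev_vec, curr_vec) >= threshold

print(converged([1.0, 0.0], [1.0, 0.01]))  # near-identical vectors -> True
print(converged([1.0, 0.0], [0.0, 1.0]))   # orthogonal vectors -> False
```

Because the predicate is just a function of two drafts, swapping in a different metric (edit distance, an LLM-as-judge score) only means passing a different callable.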
How it came together:
The design emerged from recursive dialogues with multiple LLMs—the same iterative process the framework now automates. No legacy assumptions meant every piece became independent: swap models, add phases, change convergence logic—no rewiring required.
Extras:
* Streamlit app shows the thinking live as it happens.
* Demos cover raw multi-agent orchestration and LangGraph integration (agents as graph nodes). Demos include outputs and display full thinking in markdown.
* Full architecture docs, comprehensive docstrings, comments, and worked examples included.
Repo (MIT): https://github.com/hankbesser/recursive-companion
Built by questioning everything. Learning by building, built for learning.
Thanks for reading. I'm really looking for feedback and am open to contributors; no question or discussion is too big or small.