Show HN: MirrorMind – A Recursive Agent Framework for Interpretable AI

Hi HN,

I’d like to share MirrorMind – a recursive agent architecture designed to make AI systems more interpretable, controllable, and reflectively “human.”

This is not just prompt engineering or character simulation. MirrorMind encodes internal structure into LLM behavior through five explicit coefficients:

- Emotion
- Reasoning
- Expression
- Values
- Bias

Each persona is a function over these dimensions. The MVP lets you run parallel AI personas with different settings to see how the same input produces radically different (but explainable) outputs.
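To make the "persona as a function over these dimensions" idea concrete, here is a minimal sketch of how it could look in code. This is my own illustration, not the repo's actual API: the names `Persona`, `to_system_prompt`, and `run_personas` are hypothetical, and the prompt-rendering step stands in for whatever mapping MirrorMind actually uses from coefficients to behavior.

```python
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class Persona:
    """A persona as a point in the five-coefficient space (hypothetical sketch)."""
    emotion: float
    reasoning: float
    expression: float
    values: float
    bias: float

    def to_system_prompt(self) -> str:
        # Render the coefficients into a system prompt an LLM can follow.
        dims = ", ".join(f"{f.name}={getattr(self, f.name):.1f}" for f in fields(self))
        return f"You are a persona with coefficients ({dims}). Let these weights shape your reply."

def run_personas(personas: dict, user_input: str, ask) -> dict:
    """Send the same input through each persona; `ask` is any (system, user) -> text LLM call."""
    return {name: ask(p.to_system_prompt(), user_input) for name, p in personas.items()}

# Two personas with different settings produce different, explainable framings
# of the same input, since each output traces back to its coefficient vector.
analyst = Persona(emotion=0.1, reasoning=0.9, expression=0.4, values=0.6, bias=0.2)
poet = Persona(emotion=0.9, reasoning=0.3, expression=0.9, values=0.7, bias=0.1)
```

The point of the dataclass form is interpretability: when two outputs diverge, you can diff the two coefficient vectors rather than two opaque prompt strings.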

App: https://hwan-oh-mirrormind-mvp.streamlit.app
GitHub: https://github.com/HWAN-OH/MirrorMind-MVP
Full story: https://medium.com/@hawn21

This is part of a 4-paper series exploring controllable cognition and AI-personality architecture:

1. Formal Architecture – https://zenodo.org/doi/10.5281/zenodo.15921374
2. Human-Agent Coevolution – https://zenodo.org/doi/10.5281/zenodo.15929330
3. Persona Collapse & Resilience – https://zenodo.org/doi/10.5281/zenodo.15929615
4. Mathematical Proof of Efficiency – https://zenodo.org/doi/10.5281/zenodo.15939176

This project was built not by a full-time AI researcher, but by someone working in energy strategy, trying to use AI as a thinking partner. I hope it can evolve into a framework for building interpretable AI agents that mirror human complexity.

Would love your thoughts – critical feedback, ideas, or collaboration.

Thanks!

Comments (1)

SH_Oh · 2d ago
Thanks for checking this out.

I built this after-hours as a way to explore AI not just as a tool, but as a mirror — something that could reflect the shape of human reasoning itself.

It’s still very early, but I’d love to hear what feels valuable (or flawed!) about the idea, the architecture, or the use cases.

Open to all thoughts — critical or curious.