Show HN: Persistent Mind Model (PMM) – Update: a model-agnostic "mind-layer"

A few weeks ago I shared the Persistent Mind Model (PMM) — a Python framework for giving an AI assistant a durable identity and memory across sessions, devices, and even model back-ends.

Since then, I’ve added some big updates:

- DevTaskManager: PMM can now autonomously open, track, and close its own development tasks, with an event-logged lifecycle (task_created, task_progress, task_closed); a sketch of this lifecycle follows the list.

- BehaviorEngine hook: scans replies for artifacts (e.g. "Done:" lines, PR links, file references) and auto-generates evidence events; commitments now close on confidence thresholds instead of vibes (pattern sketch below).

- Autonomy probes: new API endpoints (/autonomy/tasks, /autonomy/status) expose live metrics such as open tasks, commitment close rate, reflection contract pass rate, and drift signals (query example below).

- Slow-burn evolution: identity and personality traits evolve steadily through reflections and "drift" rather than resetting each session (toy drift rule below).
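
To make the task lifecycle concrete, here is a simplified sketch of tasks as events. Only the three event kinds come from PMM; the helper, fields, and values are illustrative stand-ins:

    # Simplified illustration of the DevTaskManager lifecycle as events.
    # Only the event kinds (task_created, task_progress, task_closed)
    # are PMM's; the structure and field names here are stand-ins.
    import time, uuid

    def make_event(kind: str, payload: dict) -> dict:
        return {"id": str(uuid.uuid4()), "ts": time.time(),
                "kind": kind, "payload": payload}

    chain = []  # stand-in for the append-only event chain
    chain.append(make_event("task_created",  {"task": "t1", "title": "wire up probes"}))
    chain.append(make_event("task_progress", {"task": "t1", "note": "endpoint stubbed"}))
    chain.append(make_event("task_closed",   {"task": "t1", "evidence": "pr-link"}))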
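
The artifact scan is simple to picture. The regexes, scores, and the 0.8 threshold below are illustrative, not PMM's actual values:

    # Illustrative sketch of a BehaviorEngine-style artifact scan;
    # patterns, confidence scores, and threshold are not PMM's values.
    import re

    ARTIFACT_PATTERNS = [
        (re.compile(r"^Done:\s*(.+)$", re.MULTILINE), 0.9),       # "Done:" lines
        (re.compile(r"https://github\.com/\S+/pull/\d+"), 0.8),   # PR links
        (re.compile(r"\b[\w./-]+\.(?:py|md|toml)\b"), 0.5),       # file references
    ]

    def scan_reply(reply: str):
        """Yield (artifact, confidence) pairs found in an assistant reply."""
        for pattern, confidence in ARTIFACT_PATTERNS:
            for match in pattern.finditer(reply):
                yield match.group(0), confidence

    CLOSE_THRESHOLD = 0.8  # commitments close on evidence, not vibes

    for artifact, conf in scan_reply("Done: added /autonomy/status endpoint"):
        if conf >= CLOSE_THRESHOLD:
            print("evidence event ->", artifact)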
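
Hitting the probes is plain HTTP. The two endpoint paths are real; the base URL and the response field names in this example are guesses about the payload shape:

    # Querying the autonomy probes. The endpoint paths come from the
    # post; the base URL and response fields are assumptions.
    import requests

    BASE = "http://localhost:8000"  # assumed local PMM API address

    status = requests.get(f"{BASE}/autonomy/status").json()
    tasks = requests.get(f"{BASE}/autonomy/tasks").json()

    print(status.get("commitment_close_rate"))  # field name is a guess
    print(status.get("reflection_pass_rate"))   # field name is a guess
    print(len(tasks), "open tasks")             # assumes a JSON list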
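
And a toy version of the drift idea, just to show what "slow-burn" means in practice: small, clamped steps per reflection, so traits move but never jump. This is illustrative, not PMM's actual update rule:

    # Toy drift rule, not PMM's actual one: nudge a trait in [0, 1]
    # toward a reflection signal by a small, clamped step.
    def drift(trait: float, signal: float, rate: float = 0.01) -> float:
        step = rate * (signal - trait)          # proportional pull
        step = max(-rate, min(rate, step))      # clamp for stability
        return min(1.0, max(0.0, trait + step))

    openness = 0.50
    for _ in range(100):                        # 100 reflections
        openness = drift(openness, signal=0.9)
    print(round(openness, 3))                   # ~0.75: creeps, never jumps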

Why this matters: Most agent frameworks feel impressive for a single run but collapse without continuity. PMM is different: it keeps an append-only event chain (SQLite hash-chained), a JSON self-model, and evidence-gated commitments. That means it can persist identity and behavior across LLMs — swap OpenAI for a local Ollama model and the “mind” stays intact.
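
Since the event chain is the load-bearing piece, here is a minimal sketch of what "append-only, hash-chained, in SQLite" means. The schema and hashing details are illustrative, not PMM's exact ones:

    # Minimal sketch of an append-only, hash-chained event log in
    # SQLite; schema and hashing details are illustrative.
    import hashlib, json, sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE events (
        seq INTEGER PRIMARY KEY AUTOINCREMENT,
        kind TEXT, payload TEXT, prev_hash TEXT, hash TEXT)""")

    def append_event(kind, payload):
        row = db.execute("SELECT hash FROM events ORDER BY seq DESC LIMIT 1").fetchone()
        prev = row[0] if row else "genesis"
        body = json.dumps({"kind": kind, "payload": payload, "prev": prev},
                          sort_keys=True)
        h = hashlib.sha256(body.encode()).hexdigest()
        db.execute("INSERT INTO events (kind, payload, prev_hash, hash) VALUES (?,?,?,?)",
                   (kind, json.dumps(payload), prev, h))
        db.commit()

    def verify_chain():
        prev = "genesis"
        for kind, payload, prev_hash, h in db.execute(
                "SELECT kind, payload, prev_hash, hash FROM events ORDER BY seq"):
            body = json.dumps({"kind": kind, "payload": json.loads(payload),
                               "prev": prev}, sort_keys=True)
            if prev_hash != prev or hashlib.sha256(body.encode()).hexdigest() != h:
                return False
            prev = h
        return True

    append_event("task_created", {"task": "t1"})
    append_event("task_closed", {"task": "t1"})
    print(verify_chain())  # True; editing any past row breaks the chain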

In simple terms: PMM is an AI that remembers, stays consistent, and slowly develops a self-referential identity over time.

Right now the evolution of its "identity" is deliberately slow, for stability and testing reasons, but it works.

I’d love feedback on:

- What you’d want from an “AI mind-layer” like this.

- Whether the probes (metrics, pass rate, evidence ratio) surface the right signals.

- How you’d imagine using something like this (personal assistant, embodied agent, research tool?).
