Show HN: AI that writes correct LangGraph persistence code via self-validation

botingw_job · 8/9/2025, 1:18:43 AM · github.com

Comments (1)

botingw_job · 34m ago
Like many of you, I've been amazed by AI coding assistants but also incredibly frustrated by the "plausible-but-wrong" code they often produce. I'd ask a question about LangGraph, get a confident answer, and then spend 20 minutes debugging an error because the AI had hallucinated a method or used a class that was deprecated two months ago.

This project is my attempt to fix that by building what I call a "Grounded Assistant." The core idea is simple: an AI assistant should be grounded in the executable truth of a specific, version-controlled codebase, not just its general training data.

Here’s how it works in practice:

1. Grounding in Docs (RAG): When I ask a question (e.g., "How do I add persistence?"), the assistant doesn't just guess. It first performs a semantic search (RAG) against a local, version-correct copy of the LangGraph documentation to find the actual canonical examples and explanations.

2. Code Generation: It then uses that specific, relevant context to generate the Python code (a sketch of these two steps, and of the persistence answer they should produce, follows this list).

3. Self-Correction (Knowledge Graph): This is the crucial step. Before showing me the code, the assistant validates its own output against a Neo4j knowledge graph of the entire langgraph library. This acts as a pre-flight check, catching hallucinations like non-existent functions, incorrect parameters, or invalid class instantiations, and it forces the AI to self-correct if its code doesn't align with the library's actual structure (see the validation sketch at the end).

The goal is to get code that is correct for my specific environment on the first try, dramatically cutting down on the debugging cycle.

The project is open source and the README has the full details. I'd love to hear your thoughts and critiques on this approach! Happy to answer any questions.
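To make steps 1 and 2 concrete, here is a minimal sketch under stated assumptions: the index path, the Chroma vector store, the OpenAI embeddings/model, and the prompt wording are all illustrative choices of mine, not necessarily what the project actually uses.

    # Sketch of grounding + generation (illustrative libraries, not the
    # project's actual stack). Assumes a vector index was built offline from
    # a version-pinned local copy of the LangGraph docs.
    from langchain_openai import OpenAIEmbeddings, ChatOpenAI
    from langchain_community.vectorstores import Chroma

    docs_store = Chroma(
        persist_directory="./langgraph_docs_index",  # hypothetical path
        embedding_function=OpenAIEmbeddings(),
    )

    question = "How do I add persistence?"
    # Step 1: retrieve the canonical passages for this exact docs version.
    context = "\n\n".join(
        d.page_content for d in docs_store.similarity_search(question, k=4)
    )

    # Step 2: generate code from the retrieved context, not from memory.
    llm = ChatOpenAI(model="gpt-4o", temperature=0)
    generated_code = llm.invoke(
        "Answer using ONLY the documentation below, as runnable Python.\n\n"
        f"Documentation:\n{context}\n\nQuestion: {question}"
    ).content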
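For the running example question, the kind of version-correct answer the assistant is aiming for is the standard LangGraph checkpointer pattern below. This assumes a recent langgraph release; the checkpointer import path has moved between versions, which is exactly the sort of drift the knowledge graph is meant to catch.

    # Target output for "How do I add persistence?": compile the graph with
    # a checkpointer and address saved state by thread_id.
    from typing import TypedDict
    from langgraph.graph import StateGraph, START, END
    from langgraph.checkpoint.memory import MemorySaver

    class State(TypedDict):
        count: int

    def step(state: State) -> State:
        return {"count": state["count"] + 1}

    builder = StateGraph(State)
    builder.add_node("step", step)
    builder.add_edge(START, "step")
    builder.add_edge("step", END)

    # Compiling with a checkpointer persists state across invocations.
    graph = builder.compile(checkpointer=MemorySaver())

    config = {"configurable": {"thread_id": "demo"}}
    print(graph.invoke({"count": 0}, config))  # {'count': 1}
    # State survives the call: the checkpointer restores it for this thread.
    print(graph.get_state(config).values)      # {'count': 1}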
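And a hedged sketch of the step-3 pre-flight check. The graph schema (one :Symbol node per public name), the credentials, and the restriction to imported names are my simplifying assumptions; per the post, the real validator also catches incorrect parameters and invalid class instantiations.

    # Pre-flight check: every langgraph name the generated code imports must
    # exist in the Neo4j knowledge graph, or the assistant must retry.
    import ast
    from neo4j import GraphDatabase

    def referenced_names(code: str) -> set[str]:
        """Names imported from langgraph modules in the generated snippet."""
        names = set()
        for node in ast.walk(ast.parse(code)):
            if (isinstance(node, ast.ImportFrom) and node.module
                    and node.module.startswith("langgraph")):
                names.update(alias.name for alias in node.names)
        return names

    def hallucinated(code: str, uri: str = "bolt://localhost:7687") -> list[str]:
        """Return imported names missing from the knowledge graph
        (assumed schema: one :Symbol node per public class/function)."""
        missing = []
        with GraphDatabase.driver(uri, auth=("neo4j", "password")) as driver:
            with driver.session() as session:
                for name in sorted(referenced_names(code)):
                    n = session.run(
                        "MATCH (s:Symbol {name: $name}) RETURN count(s) AS n",
                        name=name,
                    ).single()["n"]
                    if n == 0:
                        missing.append(name)  # not in the library: self-correct
        return missing

If hallucinated(generated_code) returns anything, one way to close the loop (and presumably roughly what the project does) is to feed the missing names back into the prompt and regenerate before showing the user anything.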