I Built a System That Solves AI's Coherence Cliff Problem in Long-Form Generation
2 points by AuthorAI 5/21/2025, 4:46:15 AM | 1 comment | medium.com
In long-form generation, I consistently hit four failure modes:

1. Structural repetition: systems fall into rigid pattern loops (identical paragraph structures repeated 30+ times)
2. Verbatim recycling: the same phrases reappear across different contexts
3. Character voice homogenization: all characters speak and think identically
4. Plot stagnation: the same revelations repeat without progression
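For concreteness, here is one simple way to quantify the first two failure modes. This is my illustration, not the author's system; the function names and the 5-gram window are assumptions.

```python
from collections import Counter

def ngrams(text: str, n: int = 5) -> Counter:
    """Count word n-grams in a text span."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def recycling_score(new_paragraph: str, prior_text: str, n: int = 5) -> float:
    """Fraction of the new paragraph's n-grams already present in prior text.

    A score near 1.0 signals verbatim recycling; a score that climbs across
    successive paragraphs exposes a structural repetition loop.
    """
    new, old = ngrams(new_paragraph, n), ngrams(prior_text, n)
    if not new:
        return 0.0
    repeated = sum(count for gram, count in new.items() if gram in old)
    return repeated / sum(new.values())
```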
These failures occur because LLMs don't truly "remember" what they've written; they operate on a sliding window of recent text, creating an illusion of memory that breaks down in longer works.
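A minimal sketch of that sliding-window mechanism (the function name and token budget are hypothetical; real systems count tokens, not words):

```python
def build_context(full_history: list[str], window_tokens: int = 8000) -> str:
    """Keep only the most recent chunks that fit in the context window.

    Everything before the cutoff is invisible to the next generation step,
    which is why early plot details and character voices drift over time.
    """
    kept, budget = [], window_tokens
    for chunk in reversed(full_history):
        cost = len(chunk.split())  # crude word-based token estimate
        if cost > budget:
            break
        kept.append(chunk)
        budget -= cost
    return "\n".join(reversed(kept))
```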
My approach uses a multi-model orchestration architecture rather than a single LLM (a rough sketch of the loop follows the list):

- Memory Management System: a structured database of narrative elements maintained outside the generation process
- Character Consistency Layer: a specialized NER system with contextual validation
- Non-Autoregressive Pattern Analysis: prevents structural repetition
- Dynamic Prompt Generation: evolving instructions based on accumulated context
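Here is a hypothetical end-to-end sketch of how those four pieces could fit together. Every class, method, and threshold below is my illustration (not the author's actual API), and it reuses the `recycling_score` helper sketched above:

```python
from dataclasses import dataclass, field

@dataclass
class NarrativeMemory:
    """Memory Management System: narrative state held outside the LLM context."""
    characters: dict = field(default_factory=dict)   # name -> voice/trait notes
    revelations: list = field(default_factory=list)  # plot points already made

class Orchestrator:
    def __init__(self, llm, ner, memory: NarrativeMemory):
        self.llm = llm        # any text-generation callable: str -> str
        self.ner = ner        # entity extractor: str -> list of names
        self.memory = memory

    def build_prompt(self, outline: str) -> str:
        """Dynamic Prompt Generation: instructions evolve with accumulated state."""
        voices = "\n".join(f"{n}: {v}" for n, v in self.memory.characters.items())
        done = "\n".join(self.memory.revelations)
        return (f"Continue the story.\nOutline: {outline}\n"
                f"Character voices to preserve:\n{voices}\n"
                f"Revelations already made (do not repeat):\n{done}")

    def generate_section(self, outline: str, prior_text: str) -> str:
        draft = self.llm(self.build_prompt(outline))
        # Character Consistency Layer: register entities so the memory store
        # stays in sync with the generated cast for later validation.
        for name in self.ner(draft):
            self.memory.characters.setdefault(name, "voice notes pending")
        # Pattern analysis over the finished draft (outside the token-by-token
        # autoregressive loop): reject repetitive output and regenerate once.
        # The 0.3 threshold is purely illustrative.
        if recycling_score(draft, prior_text) > 0.3:
            draft = self.llm(self.build_prompt(outline) +
                             "\nVary sentence and paragraph structure.")
        self.memory.revelations.append(outline)
        return draft
```

The key design point the post describes is that narrative state lives in `NarrativeMemory`, not in the context window, so it never falls off the sliding window shown earlier.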
The most counterintuitive finding was that quality actually improves with length in our system (measured against standardized publishing metrics). This inverts the traditional degradation curve.
I'd be interested in hearing from others working on LLM coherence issues, particularly at extended context lengths.
What approaches have you tried?