Show HN: Evolving Text Compression Algorithms by Mutating Code with LLMs

2 Sai_Praneeth 1 5/25/2025, 1:57:48 PM github.com ↗

Comments (1)

Sai_Praneeth · 51m ago
I built this as a weekend experiment to see how far you can push a basic LZ-style compressor using LLM-guided code mutations. No fancy ML models here—just a simple loop: mutate, evaluate, keep what works.
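That loop can be sketched in a few lines. This is a hypothetical reconstruction, not the repo's code — `mutate` and `evaluate` stand in for the LLM call and the compression-ratio benchmark:

```python
def evolve(source: str, mutate, evaluate, generations: int = 30):
    """Generic mutate-evaluate-keep loop: try a change, keep it only if it scores better."""
    best, best_score = source, evaluate(source)
    for _ in range(generations):
        candidate = mutate(best)        # e.g. ask the LLM for a small code change
        score = evaluate(candidate)     # e.g. measure compression ratio on a test file
        if score > best_score:          # keep what works, discard the rest
            best, best_score = candidate, score
    return best, best_score
```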

The LLM (GPT-4.1) suggests small code changes to improve compression ratio. Mutations are applied and tested on a real input file (big.txt). If the round-trip decompress fails, it's discarded. Everything is logged in a local SQLite DB.

Selection is dumb but effective: top 3 elites + 2 random survivors per generation. Each spawns 4 children. Repeat for N generations or until stagnation. At around 30 generations, I hit a compression ratio of 1.85×. Still decent, considering the starting baseline.
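For reference, the selection scheme described above (3 elites + 2 random survivors, 4 children each) looks roughly like this — a sketch under those stated numbers, with `fitness` and `mutate` as placeholders:

```python
import random

ELITES, RANDOM_SURVIVORS, CHILDREN_PER_PARENT = 3, 2, 4

def next_generation(population, fitness, mutate):
    """Keep the top 3 plus 2 random others; each survivor spawns 4 mutated children."""
    ranked = sorted(population, key=fitness, reverse=True)
    elites, rest = ranked[:ELITES], ranked[ELITES:]
    survivors = elites + random.sample(rest, min(RANDOM_SURVIVORS, len(rest)))
    children = [mutate(p) for p in survivors for _ in range(CHILDREN_PER_PARENT)]
    return survivors + children
```

The random survivors keep some diversity in the pool so the search doesn't fixate on one lineage.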

It's not a framework, there's no Pareto front, and no multi-objective fluff. Just a tiny search loop hacking away at code. Curious whether others have tried similar code-evolving setups.