Where's the Shovelware? Why AI Coding Claims Don't Add Up

2 points by flail | 9/5/2025, 6:36:21 AM | substack.com ↗

Comments (1)

measurablefunc · 1h ago
I recently asked some of the SOTA models to generate a WebGL application for the Hopf fibration. Most of them did OK, but it wasn't clear to me whether the code was actually correct, & since it lacked any sensible structure there was no way for me to modify it to generate other kinds of visualizations via basic geometric operations. The main reason stuff like this continues to be problematic, even though all the mathematics & programming infrastructure is readily available & formally specified, is that LLMs do not have any real understanding of basic logical & inductive constructions. Trying to "predict" the most plausible sequence of tokens given some context is not sufficient for a logical understanding of what is involved in constructing the Hopf fibration, or of whether it can be deformed & mapped into a different space by base changes, surgeries, & homotopies.
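
For concreteness, here is a minimal sketch of the kind of structured core such a visualization needs: the standard parametrization of a single Hopf fiber, projected stereographically from S^3 to R^3. The function name `hopfFiber`, the sample count, and the final loop are illustrative assumptions, not anything from the thread; the output points could be fed to any WebGL line renderer.

```typescript
// The Hopf map h: S^3 -> S^2 sends (z1, z2) in C^2 with |z1|^2 + |z2|^2 = 1
// to (2*z1*conj(z2), |z1|^2 - |z2|^2). The fiber over the base point
// (sin(theta)cos(phi), sin(theta)sin(phi), cos(theta)) is a great circle
// in S^3, traced here and projected stereographically into R^3.

type Vec3 = [number, number, number];

function hopfFiber(theta: number, phi: number, samples = 128): Vec3[] {
  const pts: Vec3[] = [];
  const c = Math.cos(theta / 2); // |z1|
  const s = Math.sin(theta / 2); // |z2|
  for (let i = 0; i < samples; i++) {
    const t = (2 * Math.PI * i) / samples;
    // One point of the fiber circle in S^3 (as coordinates in R^4):
    // z1 = e^{it} cos(theta/2), z2 = e^{i(t - phi)} sin(theta/2).
    const x1 = Math.cos(t) * c;
    const x2 = Math.sin(t) * c;
    const x3 = Math.cos(t - phi) * s;
    const x4 = Math.sin(t - phi) * s;
    // Stereographic projection from the pole (0, 0, 0, 1) into R^3.
    const d = 1 - x4;
    pts.push([x1 / d, x2 / d, x3 / d]);
  }
  return pts;
}

// Fibers over a ring of base points on S^2 give the familiar nested-tori
// picture; here we just print the first projected point of each fiber.
for (let k = 0; k < 12; k++) {
  const fiber = hopfFiber(Math.PI / 3, (2 * Math.PI * k) / 12);
  console.log(fiber[0]);
}
```

Because each fiber is an explicit geometric object rather than opaque draw calls, base changes & other deformations become single-function edits, which is exactly the structure the generated code lacked.
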

So the shovelware won't happen until LLMs are combined w/ algorithms that operate on logical structures, such as abstract syntax trees & abstract interpreters, for verifying & validating logical constraints & specifications.
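
A minimal sketch of the AST-level gate this gestures at, using the TypeScript compiler API to parse generated source and flag constructs declared out of spec. `violatesSpec` and the banned-construct list are illustrative assumptions; a real pipeline would also run the type checker or an abstract interpreter over the tree, & this shows only the structural layer.

```typescript
import * as ts from "typescript";

// Parse LLM-generated source into an AST and walk it, collecting
// violations of a (here, toy) specification: no eval, no Function
// constructor. Checks operate on syntax nodes, not on token strings.
function violatesSpec(source: string): string[] {
  const sf = ts.createSourceFile("gen.ts", source, ts.ScriptTarget.ES2020, true);
  const violations: string[] = [];
  const visit = (node: ts.Node): void => {
    if (
      ts.isCallExpression(node) &&
      ts.isIdentifier(node.expression) &&
      node.expression.text === "eval"
    ) {
      violations.push("call to eval");
    }
    if (
      ts.isNewExpression(node) &&
      ts.isIdentifier(node.expression) &&
      node.expression.text === "Function"
    ) {
      violations.push("new Function(...)");
    }
    ts.forEachChild(node, visit);
  };
  visit(sf);
  return violations;
}

console.log(violatesSpec(`eval("2+2"); const f = new Function("x", "return x");`));
// -> [ "call to eval", "new Function(...)" ]
```

The point of the design is that acceptance is decided by traversing a logical structure, so the check is sound with respect to the constructs it names, regardless of how plausible the generated tokens look.
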