Type-constrained code generation with language models

108 points by tough | 46 comments | 5/13/2025, 10:15:30 PM | arxiv.org ↗

Comments (46)

_jayhack_ · 1h ago
Also worth checking out MultiLSPy, effectively a python wrapper around multiple LSPs: https://github.com/microsoft/multilspy

Used in multiple similar publications, including "Guiding Language Models of Code with Global Context using Monitors" (https://arxiv.org/abs/2306.10763), which uses static analysis beyond the type system to filter out e.g. invalid variable names, invalid control flow etc.

cpfiffer · 27m ago
We (.txt, the outlines people) had a brief thread about this paper on twitter if you're interested: https://x.com/dottxtai/status/1922322194379551128
homebrewer · 3h ago
Hejlsberg mentioned the ability to quickly provide accurate type information to LLMs as one of the reasons for rewriting tsc into Go:

https://youtu.be/10qowKUW82U?t=3186

tough · 3h ago
But isn't TypeScript already a typed language to begin with?
habitue · 3h ago
This is about the speed with which the compiler can advise an LLM that a particular thing checks or doesn't check. Typescript is much slower than Go
tough · 3h ago
okay so basically faster compiling means a tighter feedback loop for the LLM to -know- if the code compiles or not etc? interesting

is go faster than rust?

raincole · 2h ago
> is go faster than rust

No.

They rewrote in go because go is similar enough to typescript, while being faster than typescript.

Source: https://github.com/microsoft/typescript-go/discussions/411

notnullorvoid · 2h ago
> is go faster than rust?

Depends on how you write the Go or Rust code. An optimally written Rust rewrite of the TypeScript compiler would very likely be faster than the optimal Go version. However, they didn't want a rewrite; they wanted to port the existing compiler codebase written in TS. Go, like TS (ultimately the JS runtime), also has a GC, which makes a 1-to-1 port much easier.

yoyohello13 · 3h ago
Go’s compiler is WAY faster than Rust’s. As far as speed of the actual program, Rust will generally be faster.
notnullorvoid · 2h ago
Go or Rust compiler speeds won't have any effect here. The program in this context is the TypeScript compiler.
koakuma-chan · 2h ago
cargo check is WAY faster than go build
Thaxll · 2h ago
Working with both, I can say that's a big no: go mod is as fast, if not faster. Go dependencies usually build much faster because Go projects don't pull in as many dependencies as Rust projects do.
koakuma-chan · 2h ago
In Rust you only need to compile your dependencies once. After that it's just your app because dependencies don't change.
binary132 · 5m ago
that is also the case in Go…?
nikolayasdf123 · 44m ago
maybe rust is faster. but writing fast rust code (as compared to fast go code) is nearly impossible. borderline unusable language, both for humans and for LLMs (unless we get super-intelligence, maybe it can write good rust easily)
PartiallyTyped · 2h ago
No. Ignore the other comments.

The reason for this decision is that they wanted a near 1:1 port of the typescript code to go, keeping design and structure almost identical.

You can’t do that in rust as easily because of all the cyclical references and indirection involved.

A rust port would be a rewrite. This is merely a migration.

ArcaneMoose · 3h ago
I think TypeScript is uniquely positioned to be the optimal language for LLMs. Tons of training data (benefiting from all the JS examples as well) plus the structure of types for LLMs to follow and tools to enforce.
pram · 2h ago
LLMs work well with any static analysis tool. I frequently instruct Claude to use stuff like “go vet” and “deadcode” when it goes on a tear and writes a bunch of broken trash and declares mission accomplished.
koakuma-chan · 2h ago
> LLMs work well with any static analysis tool.

tsc error messages are so bad that every time my LLM sees one of those "SomeType is not assignable to SomeLongAssTypeDontEvenTryToUnderstandWhatsGoingOnHere<<<<>>>>>>>>>>>>>>>>>>>>" it just gives up and casts to any. same goes for python too.

floydnoel · 2h ago
ha, that's always been my biggest gripe with ts
AaronAPU · 46m ago
I can’t be the only one who hopes this was a joke.
yoyohello13 · 2h ago
God help us…
marviel · 2h ago
what do you dislike about it?
OutOfHere · 2h ago
There are languages that constrain types a lot more tightly than TypeScript, e.g. Kotlin, Rust, and Haskell. The more constrained the types, the more correct the program could be.
mindwok · 2h ago
Yep, and Rust famously goes beyond this by modelling memory ownership at compile time.

In fact, the more behaviour we can model at compile time the better when it comes to LLMs - there's some cool ideas here like transpiling Rust into languages for formal verification. See https://github.com/formal-land/coq-of-rust as an example.

Formal verification was one of those things that was previously so annoying to do that it rarely made it past academic use cases or extremely important libraries, but I think LLMs take the tedium out of it. Perhaps formal verification will have a "test driven development" type of moment in the sun thanks to this.

koakuma-chan · 2h ago
Can LLMs properly code in Rust yet? There is way more TypeScript code out there compared to Rust, and I doubt structured output can alleviate this.
steveklabnik · 2h ago
They can, yes.
muglug · 2h ago
Really cool results!

That this research comes out of universities, and not large AI labs, makes me think those labs believe that larger models are still the way to go.

aibrother · 1h ago
+1 this seems like healthy development
notnullorvoid · 3h ago
The general idea seems very promising, I had been hoping someone would do something like this since seeing JSON schema structured outputs for LLMs.

Need to dig in a bit more on the implementation, but I was surprised that the paper didn't mention hooking into an existing language service/server. There's more than types that an LLM could leverage from existing language tooling. Auto imports is a good example: it is handy for the human developer to keep a linear writing flow, something an LLM needs even more.

koakuma-chan · 2h ago
The vibe code society would benefit way more if libraries hosted their docs in a way that's easy to copy and paste into an LLM.
tough · 2h ago
many docs now include llms.txt https://llmstxt.org/
koakuma-chan · 2h ago
I saw that but it doesn't work for me. I use Gemini 2.5 Pro Preview right now, and it cannot fetch content from links. What I am looking for is a large text file with public API class, function, etc. signatures, plain text documentation and code examples.
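Something like this rough sketch of dumping public signatures plus first doc lines from an installed package with stdlib inspect (json is just the example target here):

```python
# Dump a paste-ready, plain-text API surface for an LLM prompt:
# one line per public function/class, with signature and first doc line.
import inspect
import json as target  # `json` stands in for any installed library

lines = []
for name, obj in inspect.getmembers(target):
    if name.startswith("_"):
        continue  # public API only
    if inspect.isfunction(obj) or inspect.isclass(obj):
        try:
            sig = str(inspect.signature(obj))
        except (ValueError, TypeError):
            sig = "(...)"  # some builtins expose no signature
        doc = (inspect.getdoc(obj) or "").split("\n")[0]
        lines.append(f"{name}{sig}  # {doc}")

dump = "\n".join(lines)
print(dump)
```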
koakuma-chan · 2h ago
Depends on the library I guess. I spent ~12 hours today vibe coding with LiveKit, and their /llms.txt is https://docs.livekit.io/llms.txt
slt2021 · 2h ago
we really need LLMs trained on ASTs instead of tokens. is there any research on this?
tough · 2h ago
ASTrust: Towards More Trustworthy and Interpretable LLMs for Code through Syntax-Grounded Explanations

https://arxiv.org/abs/2407.08983

AST-T5: Structure-Aware Pretraining for Code Generation and Understanding

https://arxiv.org/abs/2401.03003

CodeGRAG: Bridging the Gap between Natural Language and Programming Language via Graphical Retrieval Augmented Generation

https://arxiv.org/abs/2405.02355

compacct27 · 3h ago
Honestly it's already working great in Cursor. Even adapting one type structure to another is quickly handled.
nikolayasdf123 · 43m ago
nice. the speed of AI development is accelerating so fast
bmc7505 · 2h ago
The correct way to do this is with finite model theory but we're not there yet.
jiggawatts · 3h ago
This was an obvious next step. Most current products can only restrict the token prediction to valid JSON or a specific JSON schema at best. There's no reason that this should be the only grammar available for constrained output mode.

The real challenge will be to make this detect and switch languages automatically. For example, a snippet of code could include a LaTeX formula in a comment and SQL in a string literal. There are many more examples, such as regex inside a shell script, and so on.

The obvious next step after that is backtracking. It's possible to emit a token that is valid, but then allows no further completions that are valid. In other words, the model can paint itself into a corner. To my knowledge, no current online LLM service uses any kind of backtracking; they run in append ("forwards") mode only.

tough · 3h ago
SRLCG: Self-Rectified Large-Scale Code Generation with Multidimensional Chain-of-Thought and Dynamic Backtracking

https://arxiv.org/abs/2504.00532

IterGen: Iterative Semantic-aware Structured LLM Generation with Backtracking

https://arxiv.org/abs/2410.07295

ROCODE: Integrating Backtracking Mechanism and Program Analysis in Large Language Models for Code Generation

https://arxiv.org/abs/2411.07112v1

foota · 3h ago
I believe Microsoft introduced a framework that did this sort of backtracking that you're suggesting. I'm not sure how much traction it got.
helltone · 3h ago
The backtracking idea is interesting. Could diffusion maybe help? At some point it turns into SAT solving.
grafmax · 3h ago
SAT solving, I guess, because types encode proofs?
