My current thinking on model checking (still evolving):
Modeling languages are useful for checking the correctness of an algorithm during development. During development, a model can also serve as a specification for the actual implementation. This requires that the modeling language be readable to a broad range of developers, which TLA+ is not. We have been experimenting with FizzBee (fizzbee.io), which looks promising in this regard.
> I think we can increase the TLC model checker throughput to 1 billion states per minute (1000x speedup) by writing a bytecode interpreter.
Truffle [1] can convert an interpreter into a JIT compiler -- there's no need to invent a bytecode format: instead of an interpreter you get compilation to native code, it's easy to add intrinsics in Java, and optimisations can be added gradually over time. This would probably be the most effective strategy overall, and certainly the most cost-effective.
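The quoted 1000x claim is about replacing TLC's expression evaluator. As a hedged Python toy (the opcodes and the example expression are invented for illustration; nothing here is TLC's real design), the difference between a tree-walking evaluator and a flat bytecode dispatch loop looks like this:

```python
# Toy sketch (not TLC's actual design): the same arithmetic expression
# evaluated by walking a nested AST versus dispatching over flat,
# precompiled bytecode. The flat loop is what buys the speedup; a JIT
# such as Truffle goes further and compiles the hot loop to native code.

def eval_ast(node):
    """Tree-walking evaluation: one recursive call per node."""
    op = node[0]
    if op == "lit":
        return node[1]
    if op == "add":
        return eval_ast(node[1]) + eval_ast(node[2])
    if op == "mul":
        return eval_ast(node[1]) * eval_ast(node[2])
    raise ValueError(f"unknown op {op!r}")

def compile_ast(node, code=None):
    """Flatten the AST once, up front, into postfix opcodes."""
    if code is None:
        code = []
    op = node[0]
    if op == "lit":
        code.append(("PUSH", node[1]))
    else:
        compile_ast(node[1], code)
        compile_ast(node[2], code)
        code.append(("ADD" if op == "add" else "MUL", None))
    return code

def eval_bytecode(code):
    """Stack-machine dispatch: a tight, non-recursive loop."""
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == "ADD" else a * b)
    return stack[0]

expr = ("mul", ("add", ("lit", 2), ("lit", 3)), ("lit", 4))  # (2 + 3) * 4
assert eval_ast(expr) == eval_bytecode(compile_ast(expr)) == 20
```

The compilation cost is paid once per expression, while evaluation (which TLC repeats for every explored state) becomes a simple dispatch loop that a JIT can specialize further.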
A few years ago I tried making some contributions to the TLC codebase. It was definitely "academic code": lacking tests, reinventing basic data structures instead of using them from libraries, and largely the work of a single contributor with seemingly no code reviews on commits. I was motivated to help improve things, and to get a feel for working with the code owners I sent a small PR. They basically stonewalled me. It was odd.
alfalfasprout · 2h ago
TBH, as cool as TLA+ is, the biggest issue I generally see with trying to use formal methods in practice is that you need to keep the specification in sync with the actual implementation. Otherwise, whatever you've formally verified doesn't match what actually got built.
So formal methods may be used for extremely critical parts of systems (e.g., safety-critical systems, or embedded systems where later fixes cannot be readily rolled out), but they fail to make inroads in most other development because it's a lot of extra work.
jazzyjackson · 50m ago
On the other hand, how many millions of man-hours are spent reinventing the wheel that could instead be spent contributing to a library of extremely well-specified wheels?
ahelwer · 2h ago
Hillel Wayne wrote a post[0] about this issue recently, but on a practical level I think I want to address it by writing a "how-to" on trace validation and model-based testing. A lot of projects out there have tried this: either your formal model generates events that push your system around the state space, or you collect traces from your system and validate that they're a correct behavior of your specification. Unfortunately, there isn't a good guide on how to do this; everybody kind of rolls their own, presents the conference talk, rinse, repeat.
But yeah, that's basically the answer to the conformance problem for this sort of lightweight formal method: trace validation or model-based testing.
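Since everybody rolls their own, the core loop of trace validation is rarely written down. A hedged Python sketch (the mutex "spec", state shape, and event names are invented here; real projects replay logged traces against their actual TLA+ spec, often by feeding them to TLC):

```python
# Trace validation in miniature: replay an event trace collected from a
# running system against a spec-level state machine, failing if the trace
# is not a behavior the spec allows. The mutex spec below is made up for
# illustration.

def next_states(state):
    """Spec of a mutex: from each state, the set of (event, new_state)
    steps the spec permits. State is (holder, waiting-set)."""
    holder, waiting = state
    steps = set()
    for t in ("t1", "t2"):
        if holder is None and t in waiting:
            steps.add((("acquire", t), (t, waiting - {t})))
        if holder == t:
            steps.add((("release", t), (None, waiting)))
        if t not in waiting and holder != t:
            steps.add((("request", t), (holder, waiting | {t})))
    return steps

def validate(trace, init):
    """Check each logged event matches some spec step from the current
    state; return the final state, or raise on divergence. For simplicity
    this assumes each event determines its successor state uniquely."""
    state = init
    for event in trace:
        matches = [s for (e, s) in next_states(state) if e == event]
        if not matches:
            raise AssertionError(f"trace diverges from spec at {event}")
        state = matches[0]
    return state

init = (None, frozenset())
ok = [("request", "t1"), ("acquire", "t1"), ("release", "t1")]
assert validate(ok, init) == init  # a valid behavior, back at the start

bad = [("request", "t1"), ("request", "t2"),
       ("acquire", "t1"), ("acquire", "t2")]  # t2 acquires a held lock
try:
    validate(bad, init)
    assert False, "spec should have rejected the trace"
except AssertionError as e:
    assert "diverges" in str(e)
```

Model-based testing runs the same machinery in the other direction: the spec's `next_states` drives the real system through the state space instead of checking it after the fact.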
Why are the lowercase L's in that document bolded, or a different weight? I'm not sure what the right typographical term is for the visual difference, but it was extremely noticeable immediately upon opening the document.
yuppiemephisto · 2h ago
What do you think of embedding it in a formal system like Lean as a frontend?
amenghra · 2h ago
"I think we can increase the TLC model checker throughput to 1 billion states per minute (1000x speedup) by writing a bytecode interpreter."
I've never used TLC with a large model, but I bet making the tool faster would make it more useful to lots of people.
I wonder what the speedup would be if the code targeted a GPU?
PessimalDecimal · 22m ago
Agreed.
But I wonder whether the logic of model checking is actually amenable to vectorization. I suspect not, even for something basic like checking safety properties, where you could try to shard the state space across cores: there is still likely to be some synchronization needed that eliminates the benefits. A cheaper way to test this would be to try vectorizing on the CPU first.
For a purely hardware-based speedup: if there is an effort to transpile TLA+ specs to C++, there could be a further step to transpile that to, say, Verilog and try to run the model checking on an FPGA. That _might_ pay off.
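The synchronization point the comments above gesture at is the set of already-visited states. A small Python sketch of the explicit-state loop (the two-process "spec" is invented for illustration; TLC's real implementation fingerprints states and is far more efficient):

```python
# Minimal explicit-state safety checker, to make concrete what TLC's
# inner loop does: BFS over reachable states, checking an invariant at
# each one. The `seen` set is the structure that makes sharding across
# cores or a GPU awkward: every worker must consult and update it.
from collections import deque

def step(pcs, i, pc, flag):
    new = list(pcs)
    new[i] = pc
    return (tuple(new), flag)

def successors(state):
    """Next-state relation. Observing the lock flag and setting it are
    two separate steps, so two processes can both slip past the check."""
    pcs, flag = state
    for i, pc in enumerate(pcs):
        if pc == "idle" and not flag:        # observe the flag as free
            yield step(pcs, i, "trying", flag)
        elif pc == "trying":                 # then set it (not atomic!)
            yield step(pcs, i, "crit", True)
        elif pc == "crit":
            yield step(pcs, i, "idle", False)

def check(init, invariant):
    """BFS; return a violating state, or None if the invariant holds."""
    seen = {init}
    frontier = deque([init])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:              # the contended shared lookup
                seen.add(nxt)
                frontier.append(nxt)
    return None

mutex = lambda s: sum(pc == "crit" for pc in s[0]) <= 1
bad = check((("idle", "idle"), False), mutex)
assert bad == (("crit", "crit"), True)  # the race is found
```

Sharding this loop means partitioning states across workers (e.g. by hash), but each worker's successors can land in any other worker's partition, so the `seen` lookup either becomes message passing or a shared concurrent structure, which is exactly the synchronization cost in question.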
dgan · 1h ago
About a year ago, at my job, I wrote a spec for authentication in TLA+, and while writing it I discovered a bug/attack vector which would allow an attacker to basically bypass the double authentication.
It sure did produce a fancy, mathy PDF, which I proudly showed and explained to my team, but honestly, a little rubber-ducking would have turned up the same bug without TLA+.
For some context, it's a CRUD API + web interface for external clients, nothing too complicated, and I really wanted to try TLA+ in real life.
femto2151 · 3h ago
I think the major contribution that can be made to TLA+ is in the area of outreach. Software engineers don't know modern formal methods because for years they were pigeonholed into safety-critical systems. Universities are not teaching them anymore, despite their having been successfully applied since the 2010s to the development of so many cloud systems that other companies rely on.
layer8 · 2h ago
The article doesn’t mention PlusCal. What is the future of that, will it co-evolve with TLA⁺?
cmrdporcupine · 2h ago
"The 2025 TLA⁺ Community Event was held last week on May 4th at McMaster University in Hamilton, Ontario, Canada. "
Damn. It happened a ten-minute drive from my house, and I didn't even know about it.
TLA+ is on the infinite bucket list for me. Like many others, I'm sure, I know the value of learning and applying formal verification, but it feels impenetrable to figure out how to actually jump in.
When you go to prod, you really want to test your actual implementation, not a model of it. For this you want something like https://github.com/awslabs/shuttle (for Rust) or https://github.com/cmu-pasta/fray (for Java), or something custom.
[1]: https://www.graalvm.org/latest/graalvm-as-a-platform/languag...
[0] https://buttondown.com/hillelwayne/archive/requirements-chan...
0. https://conf.tlapl.us/2020/11-Star_Dorminey-Kayfabe_Model_ba...