In my opinion it's a tragedy that there are so few resources on using "Propositions as Types"/the Curry–Howard correspondence[0] in teaching, in tandem with functional programming, to teach structured proof-writing.
Many students do not feel comfortable with proof-writing and cannot correctly discharge implications or quantifiers when writing proofs, and I believe that a structured approach built on the Curry–Howard correspondence could help.
While I am first in line to say that programming is math whether you like it or not, it is certainly the case that few programmers are using that sort of math in their heads while they program. And I suspect that if, oh, about 60-70 years of trying to change that has made no headway, there is little reason to suspect that's going to change in the next 30.
But you can sort of backdoor the idea with something like this: https://jerf.org/iri/post/2025/fp_lessons_types_as_assertion... I won't claim it is exactly the same, but it is certainly similar. Tell a programmer, "Hey, you can build a type that carries certain guarantees with it, so that the rest of the program doesn't need to be constantly validating it" (a problem any 5+ year dev ought to be very familiar with), and I think you make more progress in the direction of this notion than a paper full of logic diagrams in a notation even most computer science undergrads will never encounter.
(I'm not even certain if I directly encountered that in my Master's degree. It's been a while. I think I did, but I think it was only one class, and not a huge part of it. I'm not sure because I know I did more with it on my own, so I can't remember if I was formally taught it or if I picked it up entirely myself.)
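A minimal sketch of the "type that carries a guarantee" pattern, in the "parse, don't validate" style (the names and the deliberately naive regex are illustrative, not taken from the linked post):

```python
import re
from typing import NewType

# Holding a value of this type *is* the evidence that validation happened;
# the only way to construct one is through parse_email below.
EmailAddress = NewType("EmailAddress", str)

_EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # deliberately naive

def parse_email(raw: str) -> EmailAddress:
    """Validate once, at the boundary."""
    if not _EMAIL_RE.match(raw):
        raise ValueError(f"not an email address: {raw!r}")
    return EmailAddress(raw)

def send_welcome(to: EmailAddress) -> str:
    # No validation here: the type already says it happened.
    return f"queued welcome mail to {to}"
```

Downstream code that accepts `EmailAddress` never re-checks; the validation logic lives in exactly one place.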
skybrian · 3h ago
While I generally agree with defining new types to assert that validation has been done, I think your blog post could have explained more about what kinds of validation are practical to do. For example:
> Address that represents a “street address” that has been validated by your street address service to exist
What does it even mean to verify that a street address exists? Verifying real-world relationships is complex and error-prone. I’m reminded of the genre of articles starting with “Falsehoods programmers believe about names” [1].
In practice, the rules that can be enforced are often imposed by the system itself. It simply doesn’t accept data that isn’t in correct format or isn’t consistent with other data it already has. And we need to be cautious about what harm might be caused by these rejections.
Having the validation logic in one place will certainly help when fixing mistakes, but then what do you do about data that’s already been accepted, but is no longer “valid?” This sort of thing makes long-running systems like databases hard to maintain.
Yeah, the issue is that proofs are harder than people think, even for trivial things (try a few easy leetcode problems in Lean with a proof of correctness), and less useful than people think, especially once you get past low-level algorithms and into domain logic (your point exactly). They also don't serialize well, so a database or API call with a "proof" field would be susceptible to fudging, or the definition could change between deployments. They're also easy to make incompatible: one library requires bounds proofs for signed ints, but your app uses unsigned ints, so you have to either rewrite your app, rewrite the library, or cast, which your type system then has to handle like a "checked exception", propagating that possibility throughout your whole type domain and app logic.
I'm pretty convinced there's a good reason that while "propositions as types" is cool in theory, it's unlikely we'll ever see it catch on in practice.
I've been thinking the same thing and mentioned it the other day[0]. When I was learning proofs, I saw people struggle with the idea of needing to choose a "generic" element to prove a forall statement, for example. I suspect it might make more sense if you taught things in terms of needing to write a function x=>P(x). I think in some cases, thinking in terms of programming might also change how we think about structuring things. e.g. define a data structure for a "point with error tolerance" (x, epsilon), then continuity says given a toleranced-point y* at f(x), I can find a toleranced-point x* at x such that f(x*) is "compatible" (equal at the base point and within tolerance) with y*. This factoring lets you avoid shocking new students with quantifier soup. Likewise the chain rule is just straightforward composition when you define a "derivative at a point" data structure Df((x,m))=(f(x), m*f'(x)).
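The "derivative at a point" structure described above is essentially forward-mode dual numbers; a small sketch (the `lift` helper and variable names are mine, invented for illustration):

```python
import math

# The comment's structure: a pair (x, m) of a point and a slope, with each
# lifted function f acting as Df((x, m)) = (f(x), m * f'(x)).

def lift(f, fprime):
    """Turn f and its known derivative into a map on (point, slope) pairs."""
    def Df(pm):
        x, m = pm
        return (f(x), m * fprime(x))
    return Df

Dsin = lift(math.sin, math.cos)
Dsq = lift(lambda x: x * x, lambda x: 2 * x)

# d/dx sin(x^2) at x = 1: just compose the lifted maps, seeding slope 1.
y, slope = Dsin(Dsq((1.0, 1.0)))
# slope is 2 * cos(1): the chain rule falls out as plain composition,
# with no separate rule to state or memorize.
```

The pedagogical point: once the data structure is defined, the chain rule is not an extra theorem to prove but the definition of function composition on these pairs.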
This is not at all new, but I suppose it's currently a lot to ask students to learn proof mechanics while also unwinding multiple layers of definitions for abstractions. Computers can help do the unwinding (or lifting) automatically to make it easier to make small "quality of life" definitions that otherwise wouldn't be hugely useful when pen-and-paper-proofs would always be unwinding them anyway.
Basically, math education could look a lot like software engineering education. The concerns and solution patterns are basically the same. e.g. typeclasses are pretty much how mathematics does polymorphism, and are probably usually the right way to do it in programming too.
I don't think the C-H correspondence is necessary for this. It would be a useful way to think even if the C-H correspondence were false.
mietek · 4h ago
One of the best modern resources is "Programming language foundations in Agda", co-authored by Wen Kokke and the same Philip Wadler, and used for teaching at multiple universities.
The connection to intuitionism (see https://en.wikipedia.org/wiki/Intuitionism) gives this kind of thinking much broader appeal and application. It seems to me we live in a time so dominated by analytical thinking that we completely ignore other useful and effective modes.
johnnyjeans · 3h ago
Intuitionism in this context just means the proofs have to be constructive. (no proof-by-contradiction, or other half-assed academic hacks)
AnimalMuppet · 2h ago
Why do you say that proof-by-contradiction is a "half-assed academic hack"?
In particular, if someone isn't already an intuitionist or constructivist, what reason can you give that would be valid in their frame of reference?
johnnyjeans · 2h ago
Because they explain nothing and aren't useful for building new insights. It's not a direct verification of the existence or lack-of-existence of a thing with given properties. It relies on rules lawyering in a very specific logic to say something with basically no substance. Allowing it causes more problems for automated proofs than it solves, so long as your domain only deals with finites (which all real-world domains do).
They're also generally much easier to create than constructive proofs, which makes them a convenient go-to for lazy academics.
ndriscoll · 2h ago
Proof by contradiction is fine for lack-of-existence (indeed, ¬P is defined in type theory as P -> false). I think also, if I got my types right, you can give a constructive proof of ¬¬¬P => ¬P? So it's really just ¬¬P => P that causes trouble. It's not clear to me whether LEM is actually okay in the "fast and loose reasoning is morally correct" sense (i.e. if it's okay as long as you don't cheat and do something like use a non-terminating loop or exception to make your ¬¬P) though? Are there cases where you didn't "obviously" cheat where it's "wrong" to use it? In some sense, I see ¬¬P => P as a "cast" from an opaque function type to specifically be a closure {f => f(p)} for some p: P.
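The constructive proof of ¬¬¬P => ¬P alluded to here does go through; with the usual definition ¬P := P → False, it is a two-liner in Lean 4:

```lean
-- Triple negation elimination is constructively valid: given a refutation
-- of ¬¬P and a proof of P, we build ¬¬P from p and reach False.
theorem tne {P : Prop} : ¬¬¬P → ¬P :=
  fun nnnp p => nnnp (fun np => np p)
```

No excluded middle is needed; the troublesome direction really is only ¬¬P → P.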
cmrx64 · 1h ago
that isn’t proof by contradiction, that’s plain old proof of negation. proof of negation is Neg : (P → False) → ¬P, proof by contradiction is PBC : (¬P → False) → P. a subtle yet crucial distinction
ndriscoll · 1h ago
Ah, right. It's been long enough that I've forgotten what the words mean, though I think with my ninja edit, PBC is actually still constructively valid as a method to prove a negation?
Tainnor · 33m ago
> They're also generally much easier to create than constructive proofs
This is generally seen as a good thing by most "lazy academics". I guess your priorities are just different.
Constructivism is also not at all opposed to infinite quantities (that would be finitism). "Real-world domains" deal with infinite spaces (e.g. of all functions R->R) quite regularly across all scientific domains.
mrkeen · 4h ago
This may not go over as well as you'd think.
I took Haskell in the same uni course with FSAs, Turing machines, Lambda calculus, natural deduction, Hoare logic and structural induction.
So I was exposed to the "base case & step case" of structural induction at the same time as learning about it in Haskell. And for whatever reason it didn't leave a good impression. Implementing it formally was harder and more error-prone than shooting from the hip in an IDE. What was the point!?
Now I smash out Haskell code daily as the quickest way to end up with little binaries that just-work-if-they-compile. It took me a while to realise that the upside of all this formality/mathiness is that other people did the proofs so that you don't need to. I get to take for granted that a boolean is true or false (and not some brilliant third value!) and that (map f . map g) is (map (f . g)).
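The map-fusion law leaned on here is easy to spot-check on sample data (a sample-based check, not a proof; the free theorem guarantees it for all pure f and g):

```python
# map f . map g == map (f . g), checked on a sample list.
f = lambda x: x + 1
g = lambda x: x * 2

xs = [1, 2, 3]
lhs = list(map(f, map(g, xs)))              # two passes
rhs = list(map(lambda x: f(g(x)), xs))      # one fused pass
assert lhs == rhs  # both are [3, 5, 7]
```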
edef · 2h ago
> I get to take for granted that a boolean is true or false (and not some brilliant third value!)
but in Haskell, yes, it's undefined. Which isn't a real value! For example, infinite loops are undefined. Theorists like to call it a value of every type, but in practical terms, it's more like a computation that never produces a value, and undefined "values" are semantically equivalent to infinite-looping pure functions (but sometimes they crash the program instead of making it hang forever - so do some infinite loops!). Writing "undefined" should be thought of as shorthand for writing a deliberate infinite loop.
mpweiher · 1h ago
> other people did the proofs so that you don't need to.
So?
You don't need to either.
There simply is no evidence that having such proofs has any significant effect on software quality.
“A man is rich in proportion to the number of things which he can afford to let alone.”
― Henry David Thoreau, Walden or, Life in the Woods
If I have to do the proof, then as you say, it's probably harder and (in many cases) more error prone than if I didn't use a proof. If the language can do it for me, and I get the lack of errors without having to do the work (and without the chance of me making the mistakes)? Yeah, now we're talking. (Yes, I am aware that I still have to design the types to fit what the program is doing.)
If a tool introduces complexity, it has to make up for it by eliminating at least that much complexity somewhere else. If it doesn't, the tool isn't worth using.
indyjo · 3h ago
As much as the concept blew me away when I first heard of it, I can't shake the feeling that the Curry-Howard correspondence is somehow mis-marketed as something that would immediately cater to programmers. The idea to encode propositions into types sounds enticing for programmers, because there are indeed a lot of cases where something needs to be conveyed that can't be conveyed using type systems (or other features) of common programming languages.
Examples include the famous "the caller of this function guarantees that the argument is a non-empty list" but also "the caller guarantees that the argument object has been properly locked against concurrent access before using the function".
However, in my experience, the community is more interested in mathematics than in programming, and I don't know if anybody is really working to provide propositions-as-types to mainstream programmers. This is surely because it is hard enough to prove soundness for a strict functional programming language such as Agda or Rocq, let alone for anything imperative, stateful, "real-world", non-strict, non-pure, concurrent, ill-defined, you name it.
So, for me the promise of "showing programmers that what they do is actually mathematics" is not really kept as long as the definition of "programmer" is so narrow that it excludes the vast majority of people who define themselves as programmers.
What I'm trying to say: I wish there were more efforts to bring more powerful formal methods, especially as powerful as dependent types, to existing mainstream programming languages where they could be useful in an industrial context. Or at least try to come up with new programming languages that are more pragmatic and do not force the programmer into some narrow paradigm.
Am I wrong? I hope so :)
ndriscoll · 3h ago
Programmers regularly do this stuff under various names/phrases. "Make illegal states unrepresentable", "parse, don't validate", "resource acquisition is initialization", etc. It's all about encoding the fact that your structure isn't just data, but also represents some proof of provenance that guarantees some property. Scala is an industrial language that gives more tools than you might usually find for this while also letting you be pragmatic and integrate with Java's massive ecosystem (though you quickly learn you'd rather not because Java code tends to not do a good job of leveraging types in a sane way).
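A hedged sketch of "make illegal states unrepresentable" / "parse, don't validate" (the class and method names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NonEmpty:
    """A sequence that structurally cannot be empty: there is always a head.
    A value of this type is itself the proof of non-emptiness."""
    head: object
    tail: tuple

    @staticmethod
    def parse(xs):
        # Reject the illegal state once, at the boundary.
        xs = list(xs)
        if not xs:
            raise ValueError("empty sequence")
        return NonEmpty(xs[0], tuple(xs[1:]))

    def first(self):
        # Total: no "list might be empty" branch can even be written here.
        return self.head
```

Every function that takes a `NonEmpty` gets the guarantee for free; the proof of provenance travels with the value.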
immibis · 41m ago
C-H isn't useful to programmers at all. The programs that represent proofs end up being useless programs, most of the time, and the proofs represented by real-world programs end up being useless proofs, most of the time.
Most programs deal with several objects of the same type - your program probably contains more than one integer (shocking, right?). And since C-H maps each type to a single proposition, every program of that type reads as a proof of the same proposition. A function that calculates the greatest common factor of two numbers proves that (A∧A)→A, or "(A and A) implies A", which is... uninteresting.
In the reverse direction, the program extracted from a proof of (A∧A)→A is just the first or second projection from a tuple, and doesn't calculate the greatest common factor of two integers. (A∨A)→A is slightly more interesting, taking Either<int, int> and returning int.
It's true that C-H of logical disjunction gives you Either types, conjunction gives you tuple types, and implication gives you function types. Kinda fascinating, enough to write a blog post about, but still without any practical purpose.
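Spelled out, that dictionary looks like this (a toy sketch; the tagged-tuple encoding of Either is an assumption of the example, since Python has no sum types):

```python
# Under C-H: A∧B is a pair, A∨B is a tagged union (Either), A→B is a function.
# The inhabitants below are exactly the "uninteresting" proofs described above.

def proof_and(pair):
    """(A ∧ A) → A : a projection."""
    a1, a2 = pair
    return a1

def proof_or(either):
    """(A ∨ A) → A : case analysis where both arms are the identity."""
    tag, a = either  # either = ("left", a) or ("right", a)
    return a

# gcd also has type (int, int) -> int, i.e. it "proves" the same proposition
# as proof_and -- which is why the proof reading adds nothing here.
```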
Practical types with additional requirements (such as non-empty lists) have nothing to do with C-H and could exist even if C-H were false. Same for dependent types. As I said, C-H is completely useless in actual programming.
I do wish we had more formal methods though. Formal methods of programming have nothing to do with C-H either. Static typing has nothing to do with C-H. The Rust borrow checker has nothing to do with C-H. Complete formal verification (as in seL4) has nothing to do with C-H.
AnimalMuppet · 3h ago
No, you're not wrong. I mean, in some circles, it's a battle to get programmers to use types at all. And, while not every proposition can currently be usefully encoded in a type, every type currently encodes a proposition. So if we can get the people who don't use types to start using them, that's probably the lowest-hanging fruit.
From there, every step to improve the expressiveness of types allows you to encode more within the type system. For example, one could think about encoding that non-empty requirement in C++ or Java collection types. It would be nontrivial - a number of things would need adjusting - but you could think about doing it. (Or rather, it would be nontrivial to do with static types. You could more easily make it throw if the condition was not satisfied.)
Your "properly locked" example is much harder. It would require telling the type system what the definition of "properly locked" is. Even at that, I can't see any way to do it as a static type. And that's a bummer, because I far prefer static types. I prefer my proofs to fail at compile time.
But my main point is, "proving" is not binary. Every incremental advance in what types can check proves more things, and therefore leaves the programmers having to prove less in their heads.
daxfohl · 1h ago
And it is continually improving. Rust borrow checker is an example of this.
But as far as jumping into the deep end of dependent types, that's a whole other ball of wax. Like, imagine (or try!) writing leetcode solutions in Lean or Idris, with a type that proves they are correct and (bonus points) run in the specified time complexity. Even stating the hypothesis is nontrivial. It's all doable, but takes orders of magnitude longer. And with dependent types you have to do it, because the next function you call may require a proof that the leet function returns the thing it's supposed to.
That's just for algorithmic leet problems. Now imagine having to write proofs in a complex multithreaded system that your code never makes an out-of-bounds array access or leaks memory, while integrating libraries that each have slightly different semantics for the same guarantees, or that use unsigned types that make all the proofs incompatible, etc. At that point, you basically have to give up on your borrow checker and fall back to runtime checks anyway. And even if you did get your system into a fully proved state, that only applies to that one binary; it makes no guarantees about distributed interactions, rollouts and rollbacks, or any of the other things that are the more frequent cause of bugs in production systems. In fact, it may encourage more "distributed spaghetti" just to work around having to prove everything.
There's an analogy with how checked exceptions work in Java: cool thought, but mostly get in the way of what you're really trying to do after a while, or break things when new types of exceptions get added to the function, so everyone ends up just wrapping them with unchecked exceptions anyway. This is what would end up happening with full dependent types too, except it would pervade the entire type system and every function you write. The eventual outcome would likely be everyone just works around them (every single function, even fairly trivial ones like divide, would return an Option that the caller would have to handle or bubble up), and the actual code would be even less type safe than it would be with a simpler type system.
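The divide example sketched out (names hypothetical): a "total" division pushes the possibility of failure into the return type, and every caller must then handle or propagate it, which is exactly the checked-exception-like spread being described.

```python
from typing import Optional

def safe_div(a: float, b: float) -> Optional[float]:
    """Total division: failure becomes a None in the type, not an exception."""
    return None if b == 0 else a / b

def mean(xs: list) -> Optional[float]:
    # The None possibility infects this signature too, and its callers', etc.
    return safe_div(sum(xs), len(xs))
```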
So, ultimately the way Rust is going, where some key things, like borrow checking, are built into the language, seems to be the better approach.
zmgsabst · 2h ago
Programmers use it all the time!
They turn diagrams representing the logic of the business domain into expressions in their type system of choice.
I think it’s a failure of pedagogy to focus on the abstract “prove stuff” sense, rather than the applied “domain diagram logic to types” at the core of what SDEs actually do. But turning a business domain diagram into a systems diagram into code is translating a category into another via embedding before translating into a type theory.
My opinion is that you need to start with that core and explain how the other features make that better, e.g. how do polynomial monads address the weirdness between logging and errors (and errors in logging errors, etc.)?
gitroom · 17m ago
I feel like every time I try to do super formal proofs for code, it ends up way more brutal than I expect and slower than just fixing stuff as I go. Still love types catching my mess-ups for me though.
cbdumas · 2m ago
This is the eternal tradeoff of testing, brought to its logical (no pun intended) conclusion. Testing (and formal verification) doesn't get you anything for free: writing detailed specs or tests is at least as difficult as writing a correct implementation. In fact, given a sufficiently detailed spec, a program conforming to the spec can be derived automatically.
js8 · 2h ago
I really recommend M. H. Sørensen and P. Urzyczyn, "Lectures on the Curry-Howard Isomorphism" (Elsevier, 2006) on this topic. I found it online.
It is very self-contained and explains some interesting topics such as relation of intuitionist and classical logic.
emorning3 · 42m ago
I dont want to prove that my code is correct.
I would prefer to create specifications in logic and have the code derived for me.
Other than something like Prolog, are there tools for working this way?
emorning3 · 5m ago
PS...
I've been waiting for such tools for 40 years.
I just now realized that vibe coding is probably the closest we'll get in my lifetime.
It's a start I guess.
Tainnor · 25m ago
This is impossible to solve in the general case due to Gödel's first incompleteness theorem.
aatd86 · 2h ago
Could have written further into set theory and the set-theoretic interpretation of types, a proposition being a set intension.
The Curry-Howard correspondence seems like one of those things that's weird and unexpected but not actually very useful?
Most of the types that correspond to propositions, and programs that correspond to proofs, aren't of much utility in actual programming. And most of the useful programs and their types don't correspond to interesting proofs and propositions.
The paper relates it to other connections between fields, such as Cartesian coordinates linking geometry and algebra. This allows you to bring the power of algebra into geometric problems and geometric intuition into algebra problems. But does the Curry-Howard correspondence bring similar powers?
--
This kind of rearrangement without really changing anything reminds me of the equivalence between first-order logic formulas and sets. Consider [P] to be the set of models where statement P is true - then [A∧B] = [A]∩[B], [A∨B] = [A]∪[B] and so on. But this doesn't lead to any new insights. You basically just wrote the same thing with different symbols.
In some contexts (proving soundness/completeness) it can allow you to step down one turtle, but you still have aleph-null turtles below you so that doesn't seem that useful either.
[0]: https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...
[1] https://www.kalzumeus.com/2010/06/17/falsehoods-programmers-...
[0] https://news.ycombinator.com/item?id=43875101
https://plfa.github.io/
https://blog.metaobject.com/2014/06/the-safyness-of-static-t...