Propositions as Types (2014) [pdf]
70 points by nill0 22 comments 5/6/2025, 11:36:09 AM (homepages.inf.ed.ac.uk)
Many students do not feel comfortable with proof-writing and cannot correctly discharge implications or quantifications when writing proofs; I believe that a structured approach using the Curry-Howard correspondence [0] could help.
[0]: https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...
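The correspondence can be shown in a few lines of Haskell (a minimal sketch; the function names are mine): a term inhabiting a type is a proof of the corresponding proposition.

```haskell
-- Propositions as types in miniature: the implication A -> (B -> A)
-- is proved by exhibiting a program of that type (the K combinator).
proofK :: a -> (b -> a)
proofK x _ = x

-- Modus ponens (from A -> B and A, conclude B) is just application.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x
```

Reading the arrow as implication, type-checking the definition is checking the proof.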
But you can sort of backdoor the idea with something like this: https://jerf.org/iri/post/2025/fp_lessons_types_as_assertion... I won't claim it is exactly the same, but it is certainly similar. Tell a programmer, "Hey, you can build a type that carries certain guarantees with it, so that the rest of the program doesn't need to be constantly validating it" (a problem any dev with 5+ years ought to be very familiar with), and I think you make more progress toward this notion than a paper full of logic diagrams in a notation most computer science undergrads will never encounter.
(I'm not even certain if I directly encountered that in my Master's degree. It's been a while. I think I did, but I think it was only one class, and not a huge part of it. I'm not sure because I know I did more with it on my own, so I can't remember if I was formally taught it or if I picked it up entirely myself.)
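A sketch of the "type that carries guarantees" pattern in Haskell (all names here are hypothetical). In a real module, only the smart constructor would be exported, so every value of the type in the program is already validated:

```haskell
-- Validate once, at the boundary; downstream code never re-checks.
newtype Percentage = Percentage Int deriving (Eq, Show)

-- The only way to build a Percentage: the range check happens here.
mkPercentage :: Int -> Maybe Percentage
mkPercentage n
  | n >= 0 && n <= 100 = Just (Percentage n)
  | otherwise          = Nothing

-- Any function receiving a Percentage can trust its invariant.
getPercentage :: Percentage -> Int
getPercentage (Percentage n) = n
```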
> Address that represents a “street address” that has been validated by your street address to exist
What does it even mean to verify that a street address exists? Verifying real-world relationships is complex and error-prone. I’m reminded of the genre of articles starting with “Falsehoods programmers believe about names” [1].
In practice, the rules that can be enforced are often imposed by the system itself. It simply doesn’t accept data that isn’t in correct format or isn’t consistent with other data it already has. And we need to be cautious about what harm might be caused by these rejections.
Having the validation logic in one place will certainly help when fixing mistakes, but then what do you do about data that’s already been accepted, but is no longer “valid?” This sort of thing makes long-running systems like databases hard to maintain.
[1] https://www.kalzumeus.com/2010/06/17/falsehoods-programmers-...
https://plfa.github.io/
This is not at all new, but I suppose it's currently a lot to ask students to learn proof mechanics while also unwinding multiple layers of definitions for abstractions. Computers can help do the unwinding (or lifting) automatically, which makes it cheap to introduce small "quality of life" definitions that otherwise wouldn't be hugely useful, since pen-and-paper proofs would always be unwinding them anyway.
Basically, math education could look a lot like software engineering education. The concerns and solution patterns are basically the same. e.g. typeclasses are pretty much how mathematics does polymorphism, and are probably usually the right way to do it in programming too.
[0] https://news.ycombinator.com/item?id=43875101
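The typeclass-as-mathematical-polymorphism point can be sketched concretely (a minimal example; the primed-free names below are chosen to avoid clashing with the Prelude): state an interface once, instantiate it for many structures, and write each "theorem" once for all of them.

```haskell
-- A monoid interface, stated once, like an algebraic structure.
class MyMonoid a where
  unit :: a
  op   :: a -> a -> a

-- Lists form a monoid under concatenation.
instance MyMonoid [b] where
  unit = []
  op   = (++)

-- Integers form a monoid under addition.
newtype Sum = Sum Int deriving (Eq, Show)
instance MyMonoid Sum where
  unit = Sum 0
  op (Sum x) (Sum y) = Sum (x + y)

-- One definition holds "for any monoid", the way a theorem would.
combineAll :: MyMonoid a => [a] -> a
combineAll = foldr op unit
```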
In particular, if someone isn't already an intuitionist or constructivist, what reason can you give that would be valid in their frame of reference?
They're also generally much easier to create than constructive proofs, which makes them a convenient go-to for lazy academics.
I took Haskell in the same uni course as FSAs, Turing machines, the lambda calculus, natural deduction, Hoare logic and structural induction.
So I was exposed to the "base case & step case" of structural induction at the same time as learning about it in Haskell. And for whatever reason it didn't leave a good impression. Implementing it formally was harder and more error-prone than shooting from the hip in an IDE. What was the point!?
Now I smash out Haskell code daily as the quickest way to end up with little binaries that just-work-if-they-compile. It took me a while to realise that the upside of all this formality/mathiness is that other people did the proofs so that you don't need to. I get to take for granted that a boolean is true or false (and not some brilliant third value!) and that (map f . map g) is (map (f . g)).
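That map-fusion guarantee is a theorem that holds for all f, g and inputs thanks to purity; here are the two forms side by side as a spot check:

```haskell
-- map f . map g traverses the list twice...
unfused :: [Int] -> [Int]
unfused = map (+ 1) . map (* 2)

-- ...while map (f . g) traverses once, with the same result.
fused :: [Int] -> [Int]
fused = map ((+ 1) . (* 2))
```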
If I have to do the proof, then as you say, it's probably harder and (in many cases) more error prone than if I didn't use a proof. If the language can do it for me, and I get the lack of errors without having to do the work (and without the chance of me making the mistakes)? Yeah, now we're talking. (Yes, I am aware that I still have to design the types to fit what the program is doing.)
If a tool introduces complexity, it has to make up for it by eliminating at least that much complexity somewhere else. If it doesn't, the tool isn't worth using.
Examples include the famous "the caller of this function guarantees that the argument is a non-empty list" but also "the caller guarantees that the argument object has been properly locked against concurrent access before using the function".
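The non-empty-list guarantee is directly expressible as a type; here is a hand-rolled sketch of what `Data.List.NonEmpty` provides:

```haskell
-- A list that cannot be empty: one mandatory head, plus a tail.
data NonEmpty a = a :| [a]

-- Total by construction: there is no empty case to forget.
neHead :: NonEmpty a -> a
neHead (x :| _) = x

neToList :: NonEmpty a -> [a]
neToList (x :| xs) = x : xs
```

Any function taking a `NonEmpty a` gets the caller's guarantee checked by the compiler instead of by a runtime assertion.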
However, in my experience, the community is more interested in mathematics than in programming, and I don't know whether anybody is really working to bring propositions-as-types to mainstream programmers. This is surely because it is hard enough to prove the soundness of a strict functional programming language such as Agda or Rocq, let alone anything imperative, stateful, "real-world", non-strict, non-pure, concurrent, ill-defined, you name it.
So, for me the promise of "showing programmers that what they do is actually mathematics" is not really kept as long as the definition of "programmer" is so narrow that it excludes the vast majority of people who define themselves as programmers.
What I'm trying to say: I wish there were more efforts to bring more powerful formal methods, especially as powerful as dependent types, to existing mainstream programming languages where they could be useful in an industrial context. Or at least try to come up with new programming languages that are more pragmatic and do not force the programmer into some narrow paradigm.
Am I wrong? I hope so :)
From there, every step to improve the expressiveness of types allows you to encode more within the type system. For example, one could think about encoding that non-empty requirement in C++ or Java collection types. It would be nontrivial - a number of things would need adjusting - but you could think about doing it. (Or rather, it would be nontrivial to do with static types. You could more easily make it throw if the condition was not satisfied.)
Your "properly locked" example is much harder. It would require telling the type system what the definition of "properly locked" is. Even at that, I can't see any way to do it as a static type. And that's a bummer, because I far prefer static types. I prefer my proofs to fail at compile time.
But my main point is, "proving" is not binary. Every incremental advance in what types can check proves more things, and therefore leaves the programmers having to prove less in their heads.
They turn diagrams representing the logic of the business domain into expressions in their type system of choice.
I think it’s a failure of pedagogy to focus on the abstract “prove stuff” sense, rather than on the applied “domain diagram logic to types” work at the core of what SDEs actually do. Turning a business domain diagram into a systems diagram into code is translating one category into another via an embedding, before translating into a type theory.
My opinion is that you need to start with that core and explain how the other features make it better, e.g., how do polynomial monads address the weirdness between logging and errors (and errors in logging errors, etc.)?
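The logging-vs-errors weirdness can be made concrete without any monad machinery (a hand-rolled sketch, not the "polynomial monads" approach itself): depending on how the two effects are nested, a failure either keeps or discards the logs accumulated so far.

```haskell
-- Two possible nestings of "logs" and "may fail":
type LogsThenMaybeFail a = ([String], Either String a)  -- logs survive failure
type FailOrLogs a        = Either String ([String], a)  -- failure drops the logs

-- Using the first nesting, the log line is kept even on error.
halveKeepingLogs :: Int -> LogsThenMaybeFail Int
halveKeepingLogs n
  | even n    = (["halving " ++ show n], Right (n `div` 2))
  | otherwise = (["halving " ++ show n], Left "odd input")
```

The choice between the two shapes is exactly the effect-ordering question that trips people up when stacking Writer- and Either-style effects.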
It is very self-contained and explains some interesting topics, such as the relation between intuitionistic and classical logic.
A proposition being a set intension.