This article discusses details of trying to use language features to model a fairly basic problem but doesn’t discuss the fundamentals of the problem very much.
If I have O operations (e.g. evaluate, stringify) and T types, then I need O•T implementations. If I fix O, I can add types until the cows come home by adding O implementations per new type. If I fix T, I can add new operations with T implementations per operation. If I want to let O and T vary, then the number of implementations I need to add per additional operation/type varies depending on what I’ve added before. No amount of programming language magic will change this basic count.
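The O•T count can be made concrete with a tiny sketch (type and function names here are made up): two types and two operations already demand four separate implementations, one per cell of the matrix.

```rust
// Two types and two operations: the full matrix needs 2 × 2 = 4
// implementations, one per (operation, type) cell.

struct Lit(i64);
struct Neg(Box<Lit>);

// Operation 1: evaluate — one implementation per type.
fn eval_lit(e: &Lit) -> i64 { e.0 }
fn eval_neg(e: &Neg) -> i64 { -eval_lit(&e.0) }

// Operation 2: stringify — again one implementation per type.
fn show_lit(e: &Lit) -> String { e.0.to_string() }
fn show_neg(e: &Neg) -> String { format!("-{}", show_lit(&e.0)) }

fn main() {
    let e = Neg(Box::new(Lit(7)));
    assert_eq!(eval_neg(&e), -7);
    assert_eq!(show_neg(&e), "-7");
}
```

Adding a third type means two more functions; adding a third operation likewise. The language only changes where those functions live, not how many there are.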
ISTM what’s really going on in the article is that the author is using the programming language as a registry of implementations. Those O•T implementations need to be stored somewhere so their callers can find them. In C++ this can be done with very verbose inheritance and virtual calls. In other languages, it’s less verbose. If multiple dispatch is built in, then one can use it directly. One can, of course, also build an actual table indexed on operation and type as a data structure in just about any language and use it, and in a language like C++ the result may well be quite a bit nicer than using a class hierarchy.
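A minimal sketch of such a hand-built dispatch table in Rust (all names — `Registry`, `register`, `invoke` — are illustrative, not from any library):

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

// Boxed operation: takes an erased value, returns a result string.
type Op = Box<dyn Fn(&dyn Any) -> String>;

// A runtime registry mapping (operation name, concrete type) to an
// implementation — the "actual table indexed on operation and type".
struct Registry {
    table: HashMap<(&'static str, TypeId), Op>,
}

impl Registry {
    fn new() -> Self {
        Registry { table: HashMap::new() }
    }

    fn register<T: Any>(&mut self, name: &'static str, f: impl Fn(&T) -> String + 'static) {
        let erased: Op = Box::new(move |v: &dyn Any| {
            f(v.downcast_ref::<T>().expect("type mismatch"))
        });
        self.table.insert((name, TypeId::of::<T>()), erased);
    }

    fn invoke(&self, name: &'static str, v: &dyn Any) -> Option<String> {
        self.table.get(&(name, v.type_id())).map(|f| f(v))
    }
}

fn main() {
    let mut reg = Registry::new();
    reg.register::<i64>("stringify", |n| n.to_string());
    reg.register::<f64>("stringify", |x| format!("{:.1}", x));

    let n: i64 = 42;
    assert_eq!(reg.invoke("stringify", &n), Some("42".to_string()));
    // A missing (operation, type) combination surfaces as None at
    // runtime rather than as a compile error.
    assert_eq!(reg.invoke("frobnicate", &n), None);
}
```

Note the trade-off this makes explicit: a missing cell is only detectable at runtime, which is precisely the static-checking concern raised in the replies.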
layer8 · 9h ago
The problem is how to statically ensure that you don’t run into the case of a missing implementation for some combination at runtime, in particular in languages with separate compilation. If you have a library A that defines a basic (T, O) set, and two libraries B and C that aren’t aware of each other but both use A, where B adds a new type and C adds a new operation, and finally you have a program D that uses both B and C, how do you statically detect or prevent the case where D tries to use the new type from B with the new operation from C, but no one has provided an implementation for that combination, while also supporting the case where the required implementation is present?
HarHarVeryFunny · 8h ago
> The problem is how to statically ensure that you don’t run into the case of a missing implementation
Static (compile-time) guarantees are only applicable to languages with static binding rules, in which case there is no problem - the compiler will just report "cannot resolve overloaded function" or something similar.
This is of course one of the downsides to languages with dynamic types and dynamic binding ... that many errors can only be detected at runtime.
HarHarVeryFunny · 9h ago
> In C++ this can be done with very verbose inheritance and virtual calls.
No - in C++ you'd just define the operators (either as global functions, or member functions of the types/classes in question), and the language's overload resolution rules would select the right implementation for each usage.
You could use inheritance (and virtual overrides if needed), or templates/generics, if you wanted to - C++ certainly gives you a lot of ways to make things more complex and shoot yourself in the foot, but the normal/obvious way would just be to define the overloaded operators and be done with it.
layer8 · 9h ago
The expression problem is about dynamic dispatch, not static dispatch. For example, you call a function F that returns an object of one of the relevant types, and another function G that returns one of the relevant operations, and then you apply the operation to the object. How can you have F extend the set of types, and also G extend the set of operations, without having to recompile the caller, or without F and G having to coordinate, and also avoid a runtime error due to a missing implementation of some operation for some type.
HarHarVeryFunny · 8h ago
I agree (although others in this thread do not), but the comment I was responding to was claiming that in C++ operator overloading requires inheritance and virtual methods, and I was just pointing out that this is not true.
layer8 · 8h ago
The comment you responded to doesn’t mention overloading.
HarHarVeryFunny · 8h ago
> In C++ this can be done with very verbose inheritance and virtual calls
Virtual methods overload the corresponding method in the parent class they are inheriting from. It could be a named method like "add()", or an operator like "+()".
The author of that comment is implicitly assuming that all types are derived via inheritance from a common base type that defines all the (virtual) methods/operators being overloaded.
layer8 · 8h ago
The operations are different methods, not overloads of each other, and the different types are the types on which the methods are defined, not arguments to the methods. That’s also how the article presents it. There are no overloads in the scenario.
HarHarVeryFunny · 8h ago
The article presents examples in many languages. His C++ inheritance/override example was an example of how NOT to do it, since if you add a new virtual method to the base class then all the derived classes either have to use that base class method or be modified to override it in an appropriate way.
Overuse of inheritance is one of the ways to shoot yourself in the foot with C++. If you want to overload (apply same operator to multiple types - related or not), then just overload - don't use inheritance.
Your own example, above (B defines new type, C defines new operator) doesn't appear to be about inheritance at all - you don't even say what language you are talking about, and elsewhere you say (contradicting the article author) that this "expression problem" only applies to dynamic languages ... you seem to be all over the map on this one!
layer8 · 3h ago
I’m sorry, but the expression problem is a fairly well-known problem, and you don’t seem to be familiar with it, so are drawing the wrong conclusions.
The problem statement is: You have code (e.g. a library) that supports polymorphic operations on a range of types. It is called the “expression problem” because the prototypical example is to have a type representing an expression (like an arithmetic expression) that can have various forms (subtypes, not necessarily in the OOP sense), and there are operations like evaluating the expression or printing the expression, which of course need to be implemented for all those subtypes (you need to know how to print a constant, how to print a binary expression, and so on).
There is code that can take an arbitrary expression whose exact subtype will only be known at runtime, and which invokes one of the available operations on the expression object.
Now, you want to be able to write secondary libraries (or programs) that extend the first library by extending the set of subtypes and/or the set of operations, but without modifying the first library (and without duplicating code from the first library). With OOP inheritance, adding new subtypes is easy. With the visitor pattern (or equivalent runtime case-distinction mechanisms like pattern matching in functional languages), adding new operations is easy. But the combination of both is difficult, if not unsolvable.
Overloading doesn’t apply, because overloading would dispatch on the static type of the expression variable, but the use case is that this is an expression parsed from a textual representation at runtime (e.g. in the context of an interpreter or compiler, or maybe some data format parser), so the variable holding the expression object has a generic type and could be holding any expression.
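A hedged Rust sketch of the scenario described above (all names invented): the caller only ever sees a trait object, so static overload resolution has nothing to dispatch on.

```rust
trait Expr {
    fn eval(&self) -> i64;
    fn print(&self) -> String;
}

struct Lit(i64);
impl Expr for Lit {
    fn eval(&self) -> i64 { self.0 }
    fn print(&self) -> String { self.0.to_string() }
}

// "F": the concrete type is only known at runtime; the caller sees dyn Expr.
fn parse(src: &str) -> Box<dyn Expr> {
    Box::new(Lit(src.trim().parse().unwrap_or(0)))
}

// "G": the operation is also chosen at runtime.
fn pick_op(name: &str) -> fn(&dyn Expr) -> String {
    match name {
        "eval" => |e: &dyn Expr| e.eval().to_string(),
        _ => |e: &dyn Expr| e.print(),
    }
}

fn main() {
    let e = parse("7");
    let op = pick_op("eval");
    // Adding a new struct that impls Expr is easy; adding a new method
    // to Expr breaks every existing impl — the expression problem.
    assert_eq!(op(e.as_ref()), "7".to_string());
}
```

Here new types slot in freely, but extending the operation set means changing the `Expr` trait itself, which is exactly the tension being discussed.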
Kranar · 7h ago
Virtual methods and overloading are not the same thing.
You are likely mixing up the term overload with the term override.
HarHarVeryFunny · 7h ago
Apples and oranges.
Mechanism vs functionality.
kqr · 12h ago
Huh, this is a very neat way to put it. Something about it seems... too neat? But I'm the wrong person to poke holes in it. Can I see other people debate it civilly?
magnio · 12h ago
My first exposure to the expression problem is the book Crafting Interpreters, Section 5.3.1,[1] which is very similar to the GP comment. The subsection is a short few paragraphs, with one or two tables for illustration, and it was one of the most eureka moments of my programming career. Ever since then, I see this trade-off everywhere, and I get reminded that engineering is all about finding the best trade-off and that the holy-grail of a perfect language or design is a waste of time.
I think the article as it stands makes that all pretty clear. But the point of the article is more that the language selection will determine how difficult it is to do that if you don't have control over the original definition of the operations or types.
moelf · 11h ago
this is counting it as if O only takes one argument, which is single dispatch, which OOP covers already (the `self` is the single argument).
In practice, Multiple Dispatch shines when you have 1) more than one argument type (duh) 2) higher order `O` operation [1]
[1]: think of a numerical routine that calls eigen or something, and you want eigen to exploit the matrix's type, such as symmetry or upper-triangular, and this is encoded as a matrix type
layer8 · 8h ago
In the expression problem, the other argument is the operation to be invoked, which can vary at runtime as well. OOP doesn’t cover that.
naasking · 9h ago
> One can, of course, also build an actual table indexed on operation and type as a data structure in just about any language and use it, and in a language like C++ the result may well be quite a bit nicer than using a class hierarchy.
This requires unsafe casting and thus doesn't actually solve the expression problem, which is about how to do this without compromising safety. Your approach is what the solution to the expression problem in Haskell effectively does at runtime though, via type classes and the dictionary-passing translation that they undergo during compilation, but it's all type safe.
HarHarVeryFunny · 11h ago
Overloading operators like "+" to work on new types, and maybe mixed types, requires each desired combination of operator and operand types to be implemented (as one can easily do in a language like C++, and presumably also in more modern languages). If there is any commonality between types as to how these operators are meant to work, then generics such as C++'s templates can also be used to reduce the number of implementations to those that are actually distinct.
I don't understand what problem the author is trying to solve here - maybe it's language specific? More related to dynamic typing and efficient dispatch?
In any case, operator overloading, while it can make for cute-to-read code, isn't really a good idea for code readability and maintenance.
moelf · 11h ago
> operator overloading, while it can make for cute-to-read code, isn't really a good idea for code readability and maintenance
so should we write `add_int_int(1,1)` and `add_int_double(1, 1.0)` and `add_double_double(1.0, 1.0)`?...
HarHarVeryFunny · 10h ago
The expectation is that arithmetic operators are performing the corresponding arithmetic operations, so overloading for different arithmetic types (int, float, complex) doesn't produce any surprises or need to lookup operator definitions.
Applying arithmetic operators to non-arithmetic types starts to become a problem, even if some cases (set union/difference, string catenation) are natural...
The same goes for other operators... overloading '[]' indexing for maps as well as arrays seems natural, but would be highly confusing if you overloaded it for something unrelated to an index-like concept.
JadeNB · 9h ago
I think a problem here is that everyone has a different idea of when you hit the boundary between obvious/non-surprising and confusing, so you can't just say that overloading is OK as long as it is unsurprising.
HarHarVeryFunny · 9h ago
Just because people have differing opinions (some well founded, some not!) doesn't mean that defining best practices doesn't make sense. Code readability and lack of surprise objectively are good things, and certainly important for large enterprise projects.
I think there are two alternate guidelines for use of operator overloading that would make sense.
1) Just don't overload operators at all for your own types! Leave that to the standard libraries and types that they define.
OR
2) Overload, but keep to the semantics of the operators as used by the language's built-in types (e.g. use arithmetic operators for arithmetic operations, comparison operators for ordering operations, etc). If your project was using a style guideline like this, then violations would be caught by code review (maybe automated by AI in the future?).
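Guideline 2 in practice, sketched in Rust (`Money` is a made-up example type): the `+` operator is overloaded, but it keeps its arithmetic meaning.

```rust
use std::ops::Add;

// A newtype over cents; overloading `+` here preserves the semantics
// readers expect from the built-in numeric types.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Money {
    cents: i64,
}

impl Add for Money {
    type Output = Money;
    fn add(self, rhs: Money) -> Money {
        Money { cents: self.cents + rhs.cents }
    }
}

fn main() {
    let total = Money { cents: 150 } + Money { cents: 250 };
    assert_eq!(total, Money { cents: 400 });
}
```

Using `+` for something non-additive on `Money` (say, merging accounts) would be exactly the kind of surprise guideline 2 rules out.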
Kranar · 11h ago
>I don't understand what problem the author is trying to solve here - maybe it's language specific? More related to dynamic typing and efficient dispatch?
The expression problem only arises in statically typed programming languages, it does not exist in dynamically typed programming languages.
Operator overloading has nothing to do with the problem and can not be used to resolve it. Operators are nothing more than a one-off syntax so we can use familiar childhood notation like a + b, etc... they are not particularly meaningful. The ability to overload operators or functions in general is also irrelevant since such overloads are resolved statically at compile time.
munificent · 10h ago
> The expression problem only arises in statically typed programming languages, it does not exist in dynamically typed programming languages.
Wadler's list of requirements for solving the expression problem include "with static checking that all cases are covered", so in one sense, yes, dynamic languages don't have this "problem". But in another sense, dynamic languages simply have no chance to solve the problem because they don't statically check anything.
It is true that it's much easier to do multiple dispatch and open-ended extension in dynamic languages. That's a nice benefit of them. But you do sacrifice all of the safety, code navigation, and performance of static types in order to get that.
The problem Wadler is trying to solve is "how can we have this sort of open-ended extension in both directions without giving up static safety?"
adgjlsfhk1 · 1h ago
This isn't true. Julia manages to maintain good performance with multiple dispatch through lots of type inference in the compiler (and carefully written code in the language to preserve "type stability").
kibwen · 10h ago
> does not exist in dynamically typed programming languages
The "problem" exists for dynamically-typed languages as well. If you define a new class in Python, you still need to teach existing code how to operate on it (though you might be using inheritance to automatically derive some of those implementations, but inheritance obviously isn't limited to dynamic languages).
HarHarVeryFunny · 9h ago
> Operator overloading has nothing to do with the problem
You've got T types (some new), and O operators (some new) and want to implement all operators for all types ... This is the exact definition of operator overloading.
There is no magic (other than inheritance or generics, if applicable) that will remove the need to individually implement all those O x T cases, and while that is obviously necessary, it is also all that you need to do.
If you are not talking about operator overloading - supporting the same operator for multiple different custom types, then what are you talking about ?!
In short, the solution is to define an interface / typeclass / trait / protocol that factors out the particular kind of operation and separates it from any specific class or data type. Basically it allows tagging any particular data representation with a trait, and then offering an implementation of that trait, without touching any pre-existing code related to that data representation.
In languages that do not allow that (e.g. Java), one has to implement it by hand, using a Visitor pattern. Instead of relying on the language to do the dynamic dispatch, one has to explicitly add a visiting method for another data type.
To me, the turn towards interfaces / typeclasses / traits / protocols is one of the most notable bits of progress in the newer crop of programming languages (and, importantly, their standard libraries).
Does Clojure solve this to a satisfying degree? One gives up static typing to embrace Clojure’s solutions.
On the other hand, Rust / Haskell provide partial solutions with static typing — but run into annoying boilerplate related to the orphan rule.
I’m not sure if there’s a true solution, given the level of thinking which has gone into these two forks in the road.
zelphirkalt · 5h ago
I think one of the solutions is the data-oriented approach that is already shown in SICP, where you have a global lookup table in which you look up by type and operation. In SICP you then add modules, which is done by registering new type-operation pairs in the table, if I remember correctly.
The article's `expression problem matrix` section states that the goal is to make it `easy to add ops` and `easy to add types`. My learning of Rust so far indicates Rust satisfies both: traits satisfy the `ops` problem for all the ops you want to support, and Rust's implementations (impl) solve the problem of adding types. Of course, for each new {op,type} combination, one must write the code, but Rust allows you to do that with its trait and generic systems. Am I missing something important?
wavemode · 11h ago
When people talk about the "expression problem", what they're describing is the fact that, (in your example) if you add a new method to the trait, you have to go around and implement that new method on every type that implements the trait.
This is in contrast to if you had used an enum (sum type) instead, wherein adding a new operation is easy and can be done in a single place. But then in exchange, adding new variants requires going around and updating every existing pattern match to support the new variant.
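The enum side of that trade-off can be sketched like this (names invented): each operation lives in one place, but every operation must match on every variant.

```rust
enum Expr {
    Lit(i64),
    Neg(Box<Expr>),
}

// Adding a new operation is one new function, written in one place…
fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Lit(n) => *n,
        Expr::Neg(inner) => -eval(inner),
        // …but adding a variant Expr::Add would force a new arm here
        // *and* in `show`, and in every other match over Expr.
    }
}

fn show(e: &Expr) -> String {
    match e {
        Expr::Lit(n) => n.to_string(),
        Expr::Neg(inner) => format!("-({})", show(inner)),
    }
}

fn main() {
    let e = Expr::Neg(Box::new(Expr::Lit(3)));
    assert_eq!(eval(&e), -3);
    assert_eq!(show(&e), "-(3)");
}
```

The compiler's exhaustiveness check is what makes the enum route safe — and also what makes new variants expensive.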
DarkSucker · 10h ago
Thanks. I wasn't thinking of enums. To the extent one designs a trait to use an enum type (or the enum to satisfy the trait), one wins. But it seems impossible to avoid code to handle all future {type, op} combinations. The nice thing I've seen with Rust is the ability to add to what's been done before without breaking what already works. I'm thinking of "orphan rules" here.
TuringTest · 10h ago
I'm thinking, didn't inheritance and abstract methods work to solve the problem?
I know inheritance has its own severe problems, but adding a generic abstract method at the base class could create reusable code that can be accessed by any new class that inherits from it.
P.S. ah ok, it's mentioned in the article at the Visitor section.
zelphirkalt · 5h ago
I think the problem is that at the base class you don't necessarily know how to handle things that are encountered at the concrete class level, or you don't want to put the logic for all implementations into your base class. That would defeat the purpose of your abstract class and the hierarchy.
jauntywundrkind · 8h ago
This seems like it's easy to workaround by not modifying the existing trait but by defining a new trait with your new method, and impl'ing it atop the old trait.
That seems like a pretty ok 90% solution, and in a lot of ways cleaner and more well defined a way to grow your types anyhow.
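That workaround can be sketched as follows (trait and type names are invented): the original trait stays untouched, and a new trait with a default body plus a blanket impl gives every existing implementor the new operation.

```rust
// The original trait, frozen because others already implement it.
trait Shape {
    fn area(&self) -> f64;
}

struct Square(f64);
impl Shape for Square {
    fn area(&self) -> f64 { self.0 * self.0 }
}

// New operation, added later, without touching `Shape` or `Square`.
trait Describe: Shape {
    fn describe(&self) -> String {
        format!("shape with area {}", self.area())
    }
}

// Blanket impl: anything that is already a Shape gets Describe for free.
impl<T: Shape> Describe for T {}

fn main() {
    let s = Square(3.0);
    assert_eq!(s.describe(), "shape with area 9");
}
```

Callers opt in by bringing `Describe` into scope, which is what makes this feel like a clean, well-defined way to grow a type's API.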
almostgotcaught · 5h ago
ITT: lots of people who think they understand the expression problem but don't actually understand the expression problem.
jauntywundrkind · 12h ago
Rust doesn't merely have a way around the Expression Problem; the entire language & its set of libraries are built constructively / additively, bottom-up, atop very dumb types, with operations added later! Traits and impls invert the responsibility to define operations: they are still static definitions & static types, but late-created / created-elsewhere, aggregating more and more impls beyond what the type is initially defined with. An inversion of control of typing. Dynamic in feel, but still compile-time static types.
It's refreshing to see this very clear problem statement, which feels like an ideal anti-thesis that reveals the hidden glory of Rust's traits/impls:
> It turns out that adding new operations isn't as easy as adding new types. We'd have to change the Expr interface, and consequently change every existing expression type to support the new method(s). If we don't control the original code or it's hard to change it for other reasons, we're in trouble.
> In other words, we'd have to violate the venerable open-closed principle, one of the main principles of object-oriented design, defined as:
> software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification
Apologies for the self link, but this came up very recently in a sub thread about Rust & I'm so glad to close the loop & find a name for this. I said there & I'll say again, I think this is one of Rust's great superpowers: that we are not trapped by the design parameters of the libraries we consume, that Rust allows us to layer in more and more to our types, as we please. An amazing superpower of Rust that imo gets way too little attention, whereas the borrow-checking tends to soak up all the attention/fixation for the language. https://news.ycombinator.com/item?id=45041286#45045202
There's a whole chapter of the Rust book on Rust & OOP, btw, which provides a side-ways look at how Rust & OOP and the Expression Problem relate. https://doc.rust-lang.org/book/ch18-00-oop.html
C# and others have extension methods, where you can add operations. But most of the language & its use is pretty conventional OOP; it's a feature, not the core. Mentioned in the thread above, there's a claim that Standard ML functors have similarities to Rust: I haven't investigated yet but that's piqued my interest & I'm eager to find out at some point!
Kranar · 12h ago
Nothing you mention is related to this article and neither Rust or C# solve the expression problem.
The expression problem is about being able to extend both data types (new cases) and operations (new functions) without modifying existing code while preserving static type safety.
C#'s extension methods can't be virtual, so you can't use them to actually add any new operations which can be dispatched on. They're just syntactic sugar for adding static methods which in non-OOP languages can be accomplished by declaring a free function.
Rust traits let you add new types (just impl the Trait), but it doesn't let you add new functions, so it doesn't do anything to solve the problem.
justinpombrio · 11h ago
Rust's traits _do_ solve the expression problem.
Each data type is a `struct`. Each operation is a trait. You `impl` each trait on each struct.
This works even if you're using a library that has declared `struct A` and `struct B` and `trait F` and `trait G`, and you want to add both a `struct C` and a `trait H`, and fill out the whole 3x3 grid without modifying the library.
The library says:
struct A { ... }
struct B { ... }
trait F { ... }
impl F for A { ... }
impl F for B { ... }
trait G { ... }
impl G for A { ... }
impl G for B { ... }
fn some_function<T: F + G>(data: T) { ... }
Your code says:
use library::{A, B, F, G};
struct C { ... }
impl F for C { ... }
impl G for C { ... }
trait H { ... }
impl H for A { ... }
impl H for B { ... }
impl H for C { ... }
fn another_function<T: F + G + H>(data: T) { ... }
Now `library::some_function()` can be called with an A, B, or C, even though C was defined by the user of the library. And `another_function()` can be called with A, B, or C, even though A was defined by the library.
Kranar · 10h ago
While this has nothing to do with the expression problem, it's worth noting that in any case your solution does not work in general.
Rust does let you impl a trait when either the trait or the type is inside of your crate, so your example strictly speaking works, but it does not let you impl a trait when both the type and the trait are outside of your crate. This is known as the orphan rule.
As the article points out, the expression problem itself is pretty simple to solve for code that you control, it's when writing modular software that can vary independently that gives rise to issues.
ceronman · 9h ago
Why would the the orphan rule be a problem here?
The orphan rule only disallow impls if both the trait and the type are defined outside the crate.
But in this example if you are adding a new type (struct) or a new operation (trait), well this new item should be in your crate, so all the impls that follow are allowed.
Kranar · 7h ago
It's not a problem here as it has nothing to do with this to begin with. I am pointing out a limitation in a feature that the author has presented, but that feature does not resolve anything about the topic being discussed.
The goal isn't to allow a new type to work for an existing implementation of a function nor is it to take an existing type and write a new function that works with it. In the proposed solution you have `some_function` and the author claims that this solves the expression problem because you can take a new type C and pass it into some_function. Pretty much every language has a way to define new types and pass them into existing functions.
The goal is to allow for that new type, C, to have its own implementation of `some_function` that is particular to C, as well as the ability to write new functions that can be specialized for existing types. In particular we want calls to `some_function` through an interface to call C's specific implementation when the runtime type of an object resolves to C, and calls whatever other implementations exist when called through another interface.
The author's solution doesn't do that, it literally does nothing that you can't do in pretty much any other language.
wavemode · 11h ago
That's not the expression problem. The expression problem is the question of, what is required to add new methods to an existing trait. In your example, it requires modifying the existing impls for all types implementing the trait.
What you've done is different, you've simply added a new trait entirely. That's not unique to Rust, you can add new interfaces in any language...
ceronman · 9h ago
> That's not the expression problem.
To me it looks like this is exactly the expression problem. The expression problem is not about adding methods to a trait, it's about adding an operation to a set of types. In this case every operation is defined by a single trait.
The idea behind the expression problem is to be able to define either a new operation or a new type in such a way that the code stays nicely together. Rust's trait system accomplishes this beautifully.
> That's not unique to Rust, you can add new interfaces in any language...
Many languages have interfaces, but most of them don't allow you to implement them for an arbitrary type that you have not defined. For example, in Java you can create an interface called `PrettyPrintable`, but you can't implement it for the `ArrayList` type from the standard library. In Rust you can do this kind of thing.
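The Rust version of that Java non-example looks like this (`PrettyPrint` is a made-up trait name; `Vec` stands in for `ArrayList`):

```rust
// A locally-defined trait…
trait PrettyPrint {
    fn pretty(&self) -> String;
}

// …implemented for a standard-library type we didn't define. This is
// allowed because the trait is local to this crate (orphan rule).
impl PrettyPrint for Vec<i64> {
    fn pretty(&self) -> String {
        self.iter().map(|n| n.to_string()).collect::<Vec<_>>().join(", ")
    }
}

fn main() {
    let v = vec![1, 2, 3];
    assert_eq!(v.pretty(), "1, 2, 3");
}
```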
kbolino · 10h ago
There is something in Rust that can help, though. You can define multiple impls for different bounds.
Supposing we started off with a trait for expression nodes:
Now, this library has gone out into the wild and been implemented, so we can't add new methods to ExprNode (ignoring default implementations, which don't help solve the problem). However, we can define a new trait:
And now we get to what is (somewhat) unique to Rust: you can define different method sets based upon different generic bounds. Suppose we have a parent Expr struct which encapsulates the root node and the context:
Of course, this is a trivial example (and doesn't even require the intersection bound), but it does illustrate how the expression problem can be solved. The problem this strategy creates, though, is combinatorial explosion. When we just have two traits, it's not a big deal. When we have several traits, and useful operations start to require various combinations of them (other than just Original + New), the number of impls and test cases starts to grow rapidly.
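The snippets this comment refers to didn't survive, so here is a hedged reconstruction of the strategy it describes (all names are guesses, not the original code): a frozen node trait, a separately-defined new trait, and a parent struct whose available methods depend on the generic bounds.

```rust
// The original, frozen trait for expression nodes.
trait ExprNode {
    fn eval(&self) -> i64;
}

// The new operation, defined as a separate trait.
trait PrettyNode {
    fn pretty(&self) -> String;
}

// A parent struct encapsulating the root node.
struct Expr<N> {
    root: N,
}

// Method set available with only the original bound…
impl<N: ExprNode> Expr<N> {
    fn eval(&self) -> i64 {
        self.root.eval()
    }
}

// …and a richer method set when the intersection bound holds.
impl<N: ExprNode + PrettyNode> Expr<N> {
    fn report(&self) -> String {
        format!("{} = {}", self.root.pretty(), self.root.eval())
    }
}

struct Lit(i64);
impl ExprNode for Lit {
    fn eval(&self) -> i64 { self.0 }
}
impl PrettyNode for Lit {
    fn pretty(&self) -> String { self.0.to_string() }
}

fn main() {
    let e = Expr { root: Lit(5) };
    assert_eq!(e.eval(), 5);
    assert_eq!(e.report(), "5 = 5");
}
```

Each further trait multiplies the possible bound combinations, which is the combinatorial explosion the comment warns about.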
ceronman · 8h ago
I don't know why you're getting down voted. But you are right. Rust type system solves this in a very nice way. Maybe to clarify we can show how to do the exact same example shown with Clojure multi-methods, but in Rust:
Traits have methods. How is that not adding new functions?
Kranar · 11h ago
Classes in C++ have methods too.
The problem is that you can't add new methods to an existing class.
kbolino · 11h ago
More specifically, you cannot add new abstract (aka "pure virtual") methods to an existing class/interface/trait when that class/interface/trait is already extended/implemented by code you don't control.
bitwize · 11h ago
Rust lets you combine multiple traits per type.
Kranar · 10h ago
C++ lets you inherit from multiple classes as well. I don't see how this has anything to do with being able to add new methods to existing types.
tomjakubowski · 10h ago
There is an important difference: in Rust you can write a new trait, with new methods, and impl that trait for a type without having to change the existing structs/enums which comprise that type at all. You can also do this (with some restrictions, "the orphan rule") for types defined in another library.
In C++ classes, you have to be able to modify a class definition to extend it with new methods (or a new ancestor to multiply inherit from).
adastra22 · 8h ago
You can add traits (with their methods) to existing types, without modification.
bitwize · 9h ago
C++ classes are types. Rust traits are applied to types.
Kranar · 9h ago
And what exactly do you think traits apply to types?
If your answer doesn't start with an "m" and end with a "ethod", then you may need to re-read the Rust book found here:
> Traits have methods. How is that not adding new functions?
If you add a method to a trait, every implementation of that trait has to be modified in the original source code. The point of the expression problem is that you want to be able to add new operations in a way that is type safe and checked for completeness, even if you don't have access to the original source code.
Rust can probably do this, because its traits are equivalent to Haskell type classes and those have a solution to the expression problem, but it's not trivial. See the link in the article.
adastra22 · 8h ago
In rust you would add a NEW trait, which is brought in scope by those who need it. Or add a new method to an existing trait, with a default implementation.
The problem you describe makes no sense. It sounds like, for example, wanting to add a new variant to an enum while also not wanting to modify match statements which now fail exhaustiveness checking. That’s a direct contradiction.
The only sensible charitable interpretation I can come up with is that you want to add new methods to a type without having to update any existing type definitions. This is what traits do.
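The second option mentioned above (a new method with a default implementation) can be sketched like this (names are illustrative): existing impls keep compiling unchanged, because the default body fills the gap.

```rust
trait Greet {
    fn name(&self) -> String;

    // Added after the trait shipped; the default body means no
    // existing impl has to be edited — though, as replies note,
    // nothing forces a type-specific version to exist.
    fn greet(&self) -> String {
        format!("hello, {}", self.name())
    }
}

struct World;
impl Greet for World {
    fn name(&self) -> String {
        "world".to_string()
    }
    // `greet` is not mentioned here — the default kicks in.
}

fn main() {
    assert_eq!(World.greet(), "hello, world");
}
```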
naasking · 8h ago
> In rust you would add a NEW trait, which is brought in scope by those who need it.
No, that's not sufficient if done naively as I think you're describing. You seem to be missing the context of this discussion as described by the article. Other code already depends on the current compiled abstraction, and you want to extend it in various ways, safely, so that existing code continues to work without recompilation, but you can extend the abstraction that code is already safely handling with new types and new operations without modifying the original source code.
If you want to add a new node type to an AST, with algebraic data types you would have to modify the original source code of the original functions that match on those types, so types are closed to extension. You can leave the type open to extension via traits, but now the operations are closed to extension without modifying the original source code.
> It sounds like, for example, wanting to add a new variant to an enum while also not wanting to modify match statements which now fail exhaustive testing. That’s a direct contradiction.
No it's not, if enums and match statements were safely open to extension rather than closed. That's exactly what would solve the expression problem. This can and has been done [1] as the article described, via multimethods or via lifting data constructors to the type class level. It's obtuse and awkward, but in theory it doesn't have to be.
The ultimate goal is that the new type and/or pattern matching cases have to be provided together to cover all of the missing combinations, but importantly "together" means by the time the final program is assembled and not in the source code that defined the original types or operations.
[1] open pattern matching and open algebraic data types have also been done in research languages
adastra22 · 5h ago
> No, that's not sufficient if done naively as I think you're describing. You seem to be missing the context of this discussion as described by the article. Other code already depends on the current compiled abstraction, and you want to extend it in various ways, safely, so that existing code continues to work without recompilation, but you can extend the abstraction that code is already safely handling with new types and new operations without modifying the original source code.
That is exactly what a new trait would accomplish. I remain confused as to the distinction you are making.
naasking · 3h ago
Read the Haskell-specific follow-up which I mentioned; type classes have a correspondence with traits, so the problems with this approach are the same:
Unfortunately I don’t speak Haskell. As none of the terminology used by Haskell folks aligns with the languages I do know, I really can’t make heads or tails of that article.
If I have O operations (e.g. evaluate, stringify) and T types, then I need O•T implementations. If I fix O, I can add types until the cows come home by adding O implementations per new type. If I fix T, I can add new operations with T implementations per operation. If I want to let O and T vary, then the number of implementations I need to add per additional operation/type varies depending on what I’ve added before. No amount of programming language magic will change this basic count.
ISTM what’s really going on in the article is that the author is using the programming language as a registry of implementations. Those O•T implementations need to be stored somewhere so their callers can find them. In C++ this can be done with very verbose inheritance and virtual calls. In other languages, it’s less verbose. If multiple dispatch is built in, then one can use it directly. One can, of course, also build an actual table indexed on operation and type as a data structure in just about any language and use it, and in a language like C++ the result may well be quite a bit nicer than using a class hierarchy.
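For concreteness, here's a minimal Rust sketch of such a hand-built registry, keyed on (operation, type) pairs. The operation names, type tags, and the `make_registry` helper are all hypothetical:

```rust
use std::any::Any;
use std::collections::HashMap;

// Each implementation is a boxed closure over a dynamically typed payload.
type Impl = Box<dyn Fn(&dyn Any) -> String>;

// A registry indexed on (operation, type) pairs, built as an ordinary
// data structure rather than encoded in a class hierarchy.
struct Registry {
    table: HashMap<(&'static str, &'static str), Impl>,
}

impl Registry {
    fn new() -> Self {
        Registry { table: HashMap::new() }
    }

    fn register(&mut self, op: &'static str, ty: &'static str, f: Impl) {
        self.table.insert((op, ty), f);
    }

    // Returns None when no implementation was registered for the pair,
    // which is exactly the "missing combination" failure mode.
    fn dispatch(&self, op: &'static str, ty: &'static str, arg: &dyn Any) -> Option<String> {
        self.table.get(&(op, ty)).map(|f| f(arg))
    }
}

fn make_registry() -> Registry {
    let mut r = Registry::new();
    r.register("stringify", "int", Box::new(|a| {
        a.downcast_ref::<i64>().unwrap().to_string()
    }));
    r.register("stringify", "pair", Box::new(|a| {
        let (x, y) = a.downcast_ref::<(i64, i64)>().unwrap();
        format!("({}, {})", x, y)
    }));
    r
}
```

A new type or operation is added by registering more entries; the cost is that a missing combination surfaces as a runtime `None` rather than a compile error.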
Static (compile-time) guarantees are only applicable to languages with static binding rules, in which case there is no problem - the compiler will just report "cannot resolve overloaded function" or something similar.
This is of course one of the downsides to languages with dynamic types and dynamic binding ... that many errors can only be detected at runtime.
No - in C++ you'd just define the operators (either as global functions, or member functions of the types/classes in question), and the language's overload resolution rules would select the right implementation for each usage.
You could use inheritance (and virtual overrides if needed), or templates/generics, if you wanted to - C++ certainly gives you a lot of ways to make things more complex and shoot yourself in the foot, but the normal/obvious way would just be to define the overloaded operators and be done with it.
Virtual methods overload the corresponding method in the parent class they are inheriting from. It could be a named method like "add()", or an operator like "+()".
The author of that comment is implicitly assuming that all types are derived via inheritance from a common base type that defines all the (virtual) methods/operators being overloaded.
Overuse of inheritance is one of the ways to shoot yourself in the foot with C++. If you want to overload (apply same operator to multiple types - related or not), then just overload - don't use inheritance.
Your own example, above (B defines new type, C defines new operator) doesn't appear to be about inheritance at all - you don't even say what language you are talking about, and elsewhere you say (contradicting the article author) that this "expression problem" only applies to dynamic languages ... you seem to be all over the map on this one!
The problem statement is: You have code (e.g. a library) that supports polymorphic operations on a range of types. It is called the “expression problem” because the prototypical example is to have a type representing an expression (like an arithmetic expression) that can have various forms (subtypes, not necessarily in the OOP sense), and there are operations like evaluating the expression or printing the expression, which of course need to be implemented for all those subtypes (you need to know how to print a constant, how to print a binary expression, and so on).
There is code that can take an arbitrary expression whose exact subtype will only be known at runtime, and which invokes one of the available operations on the expression object.
Now, you want to be able to write secondary libraries (or programs) that extend the first library by extending the set of subtypes and/or the set of operations, but without modifying the first library (and without duplicating code from the first library). With OOP inheritance, adding new subtypes is easy. With the visitor pattern (or equivalent runtime case-distinction mechanisms like pattern matching in functional languages), adding new operations is easy. But the combination of both is difficult, if not unsolvable.
Overloading doesn’t apply, because overloading would dispatch on the static type of the expression variable, but the use case is that this is an expression parsed from a textual representation at runtime (e.g. in the context of an interpreter or compiler, or maybe some data format parser), so the variable holding the expression object has a generic type and could be holding any expression.
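To make the dispatch point concrete, here's a minimal Rust sketch: the expression is produced at runtime and only ever seen behind a trait object, so selecting an implementation has to happen dynamically. The `Expr`, `Constant`, and `Sum` names are hypothetical:

```rust
// The concrete expression form is only known at runtime, so overload
// resolution on static types can't pick an implementation; dynamic
// dispatch through a trait object does.
trait Expr {
    fn print(&self) -> String;
}

struct Constant(i64);
struct Sum(Box<dyn Expr>, Box<dyn Expr>);

impl Expr for Constant {
    fn print(&self) -> String {
        self.0.to_string()
    }
}

impl Expr for Sum {
    fn print(&self) -> String {
        format!("({} + {})", self.0.print(), self.1.print())
    }
}

// Stand-in for "expression parsed from text at runtime": the caller
// only ever sees the generic type Box<dyn Expr>.
fn parse_example() -> Box<dyn Expr> {
    Box::new(Sum(Box::new(Constant(1)), Box::new(Constant(2))))
}
```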
You are likely mixing up the term overload with the term override.
Mechanism vs functionality.
[1]: https://craftinginterpreters.com/representing-code.html#the-...
In practice, Multiple Dispatch shines when you have 1) more than one argument type (duh) 2) higher order `O` operation [1]
[1]: think of a numerical routine that calls eigen or something, where you want eigen to exploit the matrix's type (such as symmetry or upper-triangular structure), encoded as a matrix type
This requires unsafe casting and thus doesn't actually solve the expression problem, which is about how to do this without compromising safety. Your approach is what the solution to the expression problem in Haskell effectively does at runtime though, via type classes and the dictionary-passing translation that they undergo during compilation, but it's all type safe.
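For readers unfamiliar with the dictionary-passing translation, here's a hand-written sketch of it in Rust; the `ShowDict` record and `show_twice` function are hypothetical stand-ins for what a Haskell compiler derives from a type class:

```rust
// Hand-written analogue of a type-class dictionary: a record of
// functions passed explicitly instead of resolved from a trait bound.
struct ShowDict<T> {
    show: fn(&T) -> String,
}

// In Haskell this would carry a `Show a =>` constraint; after the
// dictionary-passing translation the constraint becomes an ordinary
// argument.
fn show_twice<T>(dict: &ShowDict<T>, x: &T) -> String {
    format!("{} {}", (dict.show)(x), (dict.show)(x))
}

// An "instance" is just a dictionary value, and new ones can be
// created anywhere, which keeps the scheme open to extension.
fn show_i64(x: &i64) -> String {
    x.to_string()
}

const SHOW_I64: ShowDict<i64> = ShowDict { show: show_i64 };
```

The key difference from the casting approach above is that the dictionary's field types are checked statically, so no downcast can fail at runtime.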
I don't understand what problem the author is trying to solve here - maybe it's language specific? More related to dynamic typing and efficient dispatch?
In any case, operator overloading, while it can make for cute-to-read code, isn't really a good idea for code readability and maintenance.
so should we write `add_int_int(1,1)` and `add_int_double(1, 1.0)` and `add_double_double(1.0, 1.0)`?...
Applying arithmetic operators to non-arithmetic types starts to become a problem, even if some cases (set union/difference, string catenation) are natural...
The same goes for other operators... overloading '[]' indexing for maps as well as arrays seems natural, but would be highly confusing if you overloaded it for something unrelated to an index-like concept.
I think there are two alternate guidelines for use of operator overloading that would make sense.
1) Just don't overload operators at all for your own types! Leave that to the standard libraries and types that they define.
OR
2) Overload, but keep to the semantics of the operators as used by the language's built-in types (e.g. use arithmetic operators for arithmetic operations, comparison operators for ordering operations, etc). If your project was using a style guideline like this, then violations would be caught by code review (maybe automated by AI in the future?).
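As a sketch of guideline 2, overloading an operator while keeping its built-in arithmetic meaning might look like this in Rust (the `Vec2` type is hypothetical):

```rust
use std::ops::Add;

// A hypothetical 2D vector type; overloading `+` here keeps the
// operator's built-in arithmetic meaning, per guideline 2.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Vec2 {
    x: f64,
    y: f64,
}

impl Add for Vec2 {
    type Output = Vec2;
    fn add(self, rhs: Vec2) -> Vec2 {
        Vec2 { x: self.x + rhs.x, y: self.y + rhs.y }
    }
}
```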
The expression problem only arises in statically typed programming languages, it does not exist in dynamically typed programming languages.
Operator overloading has nothing to do with the problem and can not be used to resolve it. Operators are nothing more than a one-off syntax so we can use familiar childhood notation like a + b, etc... they are not particularly meaningful. The ability to overload operators or functions in general is also irrelevant, since such overloads are resolved statically at compile time.
Wadler's list of requirements for solving the expression problem include "with static checking that all cases are covered", so in one sense, yes, dynamic languages don't have this "problem". But in another sense, dynamic languages simply have no chance to solve the problem because they don't statically check anything.
It is true that it's much easier to do multiple dispatch and open-ended extension in dynamic languages. That's a nice benefit of them. But you do sacrifice all of the safety, code navigation, and performance of static types in order to get that.
The problem Wadler is trying to solve is "how can we have this sort of open-ended extension in both directions without giving up static safety?"
The "problem" exists for dynamically-typed languages as well. If you define a new class in Python, you still need to teach existing code how to operate on it (though you might be using inheritance to automatically derive some of those implementations, but inheritance obviously isn't limited to dynamic languages).
You've got T types (some new), and O operators (some new) and want to implement all operators for all types ... This is the exact definition of operator overloading.
There is no magic (other than inheritance or generics, if applicable) that will remove the need to individually implement all those O x T cases, and while that is obviously necessary, it is also all that you need to do.
If you are not talking about operator overloading - supporting the same operator for multiple different custom types, then what are you talking about ?!
Object algebras: https://i.cs.hku.hk/~bruno/oa/
Tagless final: https://okmij.org/ftp/tagless-final/course/lecture.pdf
and Data types à la carte: https://webspace.science.uu.nl/~swier004/publications/2008-j...
In languages that do not allow that (e.g. Java), one has to implement it by hand, using a Visitor pattern. Instead of relying on the language to do the dynamic dispatch, one has to explicitly add a visiting method for another data type.
To me, the turn towards interfaces / typeclasses / traits / protocols is one of the most notable bits of progress in the newer crop of programming languages (and, importantly, their standard libraries).
The Expression Problem and its solutions - https://news.ycombinator.com/item?id=11683379 - May 2016 (49 comments)
On the other hand, Rust / Haskell provide partial solutions with static typing — but run into annoying boilerplate related to the orphan rule.
I’m not sure if there’s a true solution, given the level of thinking which has gone into these two forks in the road.
We use this pretty effectively in our iOS app to model and implement Effect types in a functional core / imperative shell architecture.
I think Antlr 4 does similar, I just haven't used it in as much anger because its runtime always jammed me up https://github.com/antlr/antlr4/blob/4.13.2/doc/listeners.md...
I thought that IntelliJ did similarly but I am misremembering since they generate recursive descent style https://github.com/JetBrains/Grammar-Kit/blob/v2023.3/HOWTO....
This is in contrast to if you had used an enum (sum type) instead, wherein adding a new operation is easy and can be done in a single place. But then in exchange, adding new variants requires going around and updating every existing pattern match to support the new variant.
I know inheritance has its own severe problems, but adding a generic abstract method at the base class could create reusable code that can be accessed by any new class that inherits from it.
P.S. ah ok, it's mentioned in the article at the Visitor section.
That seems like a pretty OK 90% solution, and in a lot of ways a cleaner and better-defined way to grow your types anyhow.
It's refreshing to see this very clear problem statement, which feels like an ideal anti-thesis that reveals the hidden glory of Rust's traits/impls:
> It turns out that adding new operations isn't as easy as adding new types. We'd have to change the Expr interface, and consequently change every existing expression type to support the new method(s). If we don't control the original code or it's hard to change it for other reasons, we're in trouble.
> In other words, we'd have to violate the venerable open-closed principle, one of the main principles of object-oriented design, defined as:
> software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification
Apologies for the self link, but this came up very recently in a sub thread about Rust & I'm so glad to close the loop & find a name for this. I said there & I'll say again, I think this is one of Rust's great superpowers: that we are not trapped by the design parameters of the libraries we consume, that Rust allows us to layer in more and more to our types as we please. An amazing superpower of Rust that imo gets way too little attention, whereas the borrow checker tends to soak up all the attention/fixation for the language. https://news.ycombinator.com/item?id=45041286#45045202
There's a whole chapter of the Rust book on Rust & OOP, btw, which provides a side-ways look at how Rust & OOP and the Expression Problem relate. https://doc.rust-lang.org/book/ch18-00-oop.html
C# and others have extension methods, where you can add operations. But most of the language & its use is pretty conventional OOP; it's a feature, not the core. Mentioned in the thread above, there's a claim that Standard ML functors have similarities to Rust: I haven't investigated yet but that's piqued my interest & I'm eager to find out at some point!
The expression problem is about being able to extend both data types (new cases) and operations (new functions) without modifying existing code while preserving static type safety.
C#'s extension methods can't be virtual, so you can't use them to actually add any new operations which can be dispatched on. They're just syntactic sugar for adding static methods which in non-OOP languages can be accomplished by declaring a free function.
Rust traits let you add new types (just impl the trait), but they don't let you add new functions, so they don't do anything to solve the problem.
Each data type is a `struct`. Each operation is a trait. You `impl` each trait on each struct.
This works even if you're using a library that has declared `struct A` and `struct B` and `trait F` and `trait G`, and you want to add both a `struct C` and a `trait H`, and fill out the whole 3x3 grid without modifying the library.
The library says:
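The code elided here presumably looks something like the following sketch; both crates are shown in one listing, and the method bodies are invented for illustration (only the names A, B, C, F, G, H, `some_function`, and `another_function` come from the comment):

```rust
// --- What the library crate might declare ---
pub struct A;
pub struct B;

pub trait F { fn f(&self) -> String; }
pub trait G { fn g(&self) -> String; }

impl F for A { fn f(&self) -> String { "A.f".into() } }
impl F for B { fn f(&self) -> String { "B.f".into() } }
impl G for A { fn g(&self) -> String { "A.g".into() } }
impl G for B { fn g(&self) -> String { "B.g".into() } }

pub fn some_function(x: &impl F) -> String { x.f() }

// --- What the user crate might add ---
pub struct C;
pub trait H { fn h(&self) -> String; }

// New type C: implement the library's existing traits for it.
impl F for C { fn f(&self) -> String { "C.f".into() } }
impl G for C { fn g(&self) -> String { "C.g".into() } }

// New operation H: implement it for the library's types and for C,
// completing the 3x3 grid of (type, operation) pairs.
impl H for A { fn h(&self) -> String { "A.h".into() } }
impl H for B { fn h(&self) -> String { "B.h".into() } }
impl H for C { fn h(&self) -> String { "C.h".into() } }

pub fn another_function(x: &impl H) -> String { x.h() }
```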
Your code says:

Now `library::some_function()` can be called with an A, B, or C, even though C was defined by the user of the library. And `another_function()` can be called with A, B, or C, even though A was defined by the library.

Rust does let you impl a trait when either the trait or the type is defined in your crate, so your example strictly speaking works, but it does not let you write an impl when both the trait and the type come from outside your crate. This is known as the orphan rule:
https://doc.rust-lang.org/reference/items/implementations.ht...
As the article points out, the expression problem itself is pretty simple to solve for code that you control, it's when writing modular software that can vary independently that gives rise to issues.
The orphan rule only disallows impls if both the trait and the type are defined outside the crate.
But in this example if you are adding a new type (struct) or a new operation (trait), well this new item should be in your crate, so all the impls that follow are allowed.
The goal isn't to allow a new type to work for an existing implementation of a function nor is it to take an existing type and write a new function that works with it. In the proposed solution you have `some_function` and the author claims that this solves the expression problem because you can take a new type C and pass it into some_function. Pretty much every language has a way to define new types and pass them into existing functions.
The goal is to allow for that new type, C, to have its own implementation of `some_function` that is particular to C, as well as the ability to write new functions that can be specialized for existing types. In particular we want calls to `some_function` through an interface to call C's specific implementation when the runtime type of an object resolves to C, and calls whatever other implementations exist when called through another interface.
The author's solution doesn't do that, it literally does nothing that you can't do in pretty much any other language.
What you've done is different, you've simply added a new trait entirely. That's not unique to Rust, you can add new interfaces in any language...
To me it looks like this is exactly the expression problem. The expression problem is not about adding methods to a trait, it's about adding an operation to a set of types. In this case every operation is defined by a single trait.
The idea behind the expression problem is to be able to define either a new operation or a new type in such a way that the code sits nicely together. Rust's trait system accomplishes this beautifully.
> That's not unique to Rust, you can add new interfaces in any language...
Many languages have interfaces, but most of them don't allow you to implement them for an arbitrary type that you have not defined. For example, in Java, you can create an interface called `PrettyPrintable`, but you can't implement it for the `ArrayList` type from the standard library. In Rust you can do this kind of thing.
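A small sketch of that in Rust: the `PrettyPrintable` trait is the hypothetical one from the comment, implemented here for the foreign standard-library type `Vec<i32>`.

```rust
// A locally defined trait...
trait PrettyPrintable {
    fn pretty(&self) -> String;
}

// ...implemented for Vec<i32>, a type defined by the standard library.
// The analogous move in Java (retrofitting ArrayList with a new
// interface) isn't possible without a wrapper type.
impl PrettyPrintable for Vec<i32> {
    fn pretty(&self) -> String {
        let items: Vec<String> = self.iter().map(|n| n.to_string()).collect();
        format!("[{}]", items.join(", "))
    }
}
```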
Supposing we started off with a trait for expression nodes:
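The snippets elided from this comment might look something like the following sketch; `ExprNode`, the new trait, the `Expr` wrapper, and the bound-gated impls are reconstructed from the surrounding prose, and the `Lit` node is invented for illustration:

```rust
// Original library trait, frozen once it's out in the wild.
trait ExprNode {
    fn eval(&self) -> i64;
}

// New operation, added later as a separate trait.
trait PrettyPrint {
    fn pretty(&self) -> String;
}

// Parent struct encapsulating the root node.
struct Expr<N> {
    root: N,
}

// Impl we'd probably already have, bounded on the original trait only.
impl<N: ExprNode> Expr<N> {
    fn eval(&self) -> i64 {
        self.root.eval()
    }
}

// The (somewhat) Rust-specific part: a second method set, gated on the
// intersection bound of the original and new traits.
impl<N: ExprNode + PrettyPrint> Expr<N> {
    fn eval_and_show(&self) -> String {
        format!("{} = {}", self.root.pretty(), self.root.eval())
    }
}

// An invented node type implementing both traits, for illustration.
struct Lit(i64);

impl ExprNode for Lit {
    fn eval(&self) -> i64 {
        self.0
    }
}

impl PrettyPrint for Lit {
    fn pretty(&self) -> String {
        self.0.to_string()
    }
}
```

`eval_and_show` only compiles for node types that implement both traits, which is how the completeness requirement surfaces at the use site.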
Now, this library has gone out into the wild and been implemented, so we can't add new methods to ExprNode (ignoring default implementations, which don't help solve the problem). However, we can define a new trait.

And now we get to what is (somewhat) unique to Rust: you can define different method sets based upon different generic bounds. Suppose we have a parent Expr struct which encapsulates the root node and the context. We would probably already have an impl bounded on the original trait; now we just need to add an impl bounded on the intersection of the original and new traits.

Of course, this is a trivial example (and doesn't even require the intersection bound), but it does illustrate how the expression problem can be solved. The problem this strategy creates, though, is combinatorial explosion. When we just have two traits, it's not a big deal. When we have several traits, and useful operations start to require various combinations of them (other than just Original + New), the number of impls and test cases starts to grow rapidly.

The problem is that you can't add new methods to an existing class.
In C++ classes, you have to be able to modify a class definition to extend it with new methods (or a new ancestor to multiply inherit from).
If your answer doesn't start with an "m" and end with a "ethod", then you may need to re-read the Rust book found here:
https://doc.rust-lang.org/book/ch10-02-traits.html
https://eli.thegreenplace.net/2018/more-thoughts-on-the-expr...