Why Algebraic Effects?

187 points by jiggawatts | 5/24/2025, 3:00:53 AM | antelang.org

Comments (81)

nemo1618 · 7h ago
I see two downsides. Looking at this snippet:

    my_function (): Unit can AllErrors =
      x = LibraryA.foo ()
      y = LibraryB.bar ()
The first thing to note is that there is no indication that foo or bar can fail. You have to look up their type signatures (or at least hover over them in your IDE) to discover that these calls might invoke an error handler.

The second thing to note is that, once you ascertain that foo and bar can fail, how do you find the code that will run when they do fail? You would have to traverse the call stack upwards until you find a 'with' expression, then descend into the handler. And this cannot be done statically (i.e. your IDE can't jump to the definition), because my_function might be called from any number of places, each with a different handler.

I do think this is a really neat concept, but I have major reservations about the readability/debuggability of the resulting code.

tome · 2m ago
> no indication that foo or bar can fail ... how do you find the code that will run when they do fail

If that's what you're looking for, you might want to try my Haskell effects library Bluefin (it's not an "algebraic" effects library, though). The equivalent code would be

    myFunction :: (e :> es) => Exception String e -> Eff es r
    myFunction ex = do
      x <- LibraryA.foo ex
      y <- LibraryB.foo ex
      z <- LibraryC.foo
      ...
This answers the first part of your question: the presence of the `ex` argument (an `Exception String` handle) shows that a String-valued exception can be thrown wherever it is used. For example, we know that `LibraryC.foo` does not throw that exception.

It also answers the second part of your question: the code that runs on failure is exactly the code that created the `Exception String` handle. Any exception arising from that handle is always caught where the handle was created, and nowhere else. For example, it could be here:

    try $ \ex -> do
        v <- myFunction ex
        ...
`try` catches the exception and turns it into the `Left` branch of a Haskell `Either` type. Or it could be here:

    myFunction :: (e :> es) => Exception String e -> Eff es r
    myFunction ex = do
      catch
        (\ex2 -> do
          x <- LibraryA.foo ex
          y <- LibraryB.foo ex2
          z <- LibraryC.foo
          ...)
        (\errMsg -> logErr errMsg)
So the exception thrown by `LibraryB.foo` is always handled by `logErr` (and nowhere else), and the exception thrown by `LibraryA.foo` is always handled by the exception handler higher up that created `ex` (and nowhere else).

Let me know what you think!

cryptonector · 5h ago
> how do you find the code that will run when they do fail?

That's part of the point: it's dynamic code injection. You can use shallow- or deep-binding strategies for implementing this, just as with any dynamic feature. Dynamic just means that bindings are introduced by the call frames of callers, or callers of callers, etc., so yes, notionally you have to traverse the stack.

> And this cannot be done statically (i.e. your IDE can't jump to the definition),

Correct, because this is a _dynamic_ feature.

However, you are expected not to care. Why? Because you're writing code that is pure but for the effects it invokes, and those effects could be pure or impure depending on context. Thus your code can be used in prod and hooked up to a mock for testing, where the mock simply interposes effects other than real IO effects.

It's just dependency injection.

You can do this with plain old monads too you know, and that's a much more static feature, but you still need to look way up the call stack to find where _the_ monad you're using might actually be instantiated.

In other words, you get some benefits from these techniques, but you also pay a price. And the price and the benefit are two sides of the same coin: you get to do code injection that lets you do testing and sandboxing, but it becomes less obvious what might be going on.
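
A minimal sketch of that testing/injection story, using OCaml 5's stdlib effect handlers (the `Fetch` effect and `run_with_mock` harness are hypothetical names, not from the article):

    open Effect
    open Effect.Deep

    (* hypothetical effect: ask the ambient handler to fetch a URL *)
    type _ Effect.t += Fetch : string -> string Effect.t

    (* pure-looking code; it neither knows nor cares who handles Fetch *)
    let get_user () = perform (Fetch "/users/1")

    (* a test harness interposes a handler that answers without real IO *)
    let run_with_mock f =
      try_with f ()
        { effc = (fun (type a) (eff : a Effect.t) ->
            match eff with
            | Fetch url -> Some (fun (k : (a, _) continuation) ->
                continue k ("mock body for " ^ url))
            | _ -> None) }

    let () = print_endline (run_with_mock get_user)

In prod you'd install a handler that performs the real network call; the code under test is identical in both cases, which is the dependency-injection point.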

YmiYugy · 10m ago
I think the readability problem can be solved by having your LSP tell your editor to display some virtual text, indicating that the foo and bar calls might error.

I have to admit I don't understand the second point. If you could statically determine from the definition of foo and bar what code handles their errors, then there would be no reason for foo or bar to error; they could just call the error handling code. If foo and bar returned Result sum types and my_function just passed those errors up, it would be no different. You don't know what callers of my_function would do with those errors.

edding4500 · 6h ago
I wrote my bachelor's thesis about IDE support for lexical effects and handlers: https://se.cs.uni-tuebingen.de/teaching/thesis/2021/11/01/ID...

All of what you state is very doable.

zvrba · 6h ago
> [...] how do you find the code that will run when they do fail? You would have to traverse [...]

I work in a .NET world and there many developers have this bad habit of "interface everything", even if it has just 1 concrete implementation; some even do it for DTOs. "Go to implementation" of a method, and you end up in the interface's declaration so you have to jump through additional hoops to get to it. And you're out of luck when the implementation is in another assembly. The IDE _could_ decompile it if it were a direct reference, but it can't find it for you. When you're out of luck, you have to debug and step into it.

But this brings me to dependency injection containers. More powerful ones (e.g., Autofac) can establish hierarchical scopes, where new scopes can (re)define registrations; similar to LISP's dynamically scoped variables. What a service resolves to at run-time depends on the current DI scope hierarchy.

Which brings me to the point: I've realized that effects can be simulated to some degree by injecting an instance of `ISomeEffectHandler` into a class/method and invoking methods on it to cause the effect. How the effect is handled is determined by the current DI registration of `ISomeEffectHandler`, which can be varied dynamically throughout the program.

So instead of writing

    void DoSomething(...) {
        throw new SomeException(...);
    }
you establish an error protocol through interface `IErrorConditions` and write

    void DoSomething(IErrorConditions ec, ...) {
        ec.Report(...);
    }
(Alternatively, inject it as a class member.) Now, the currently installed implementation of `IErrorConditions` can throw, log, or whatever. I haven't fully pursued this line of thought with stuff like `yield`.
SkiFire13 · 5h ago
> I work in a .NET world and there many developers have this bad habit of "interface everything", even if it has just 1 concrete implementation

I work on a Java backend that is similar to what you're describing, but IntelliJ IDEA is smart enough to notice there is exactly one non-test implementation and bring me to its source code.

jiggawatts · 4h ago
The annoyance is that the .NET standard library already does this precise thing, but haphazardly and in far fewer places than ideal.

ILogger and IProgress<T> come to mind immediately, but IMemoryCache too if you squint at it. It literally just "sets" and "gets" a dictionary of values, which makes it a "state" effect. TimeProvider might also be considered an algebraic effect.
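
In effect-handler terms that is the textbook state effect. A minimal sketch in OCaml 5 (hypothetical `Get`/`Set` operations over a single string value, not the .NET API):

    open Effect
    open Effect.Deep

    (* hypothetical state effect over a single string value *)
    type _ Effect.t +=
      | Get : string Effect.t
      | Set : string -> unit Effect.t

    let bump () =
      let v = perform Get in
      perform (Set (v ^ "!"));
      perform Get

    (* the handler owns the backing store, as IMemoryCache owns its dictionary *)
    let run_state init f =
      let store = ref init in
      try_with f ()
        { effc = (fun (type a) (eff : a Effect.t) ->
            match eff with
            | Get -> Some (fun (k : (a, _) continuation) -> continue k !store)
            | Set v -> Some (fun (k : (a, _) continuation) ->
                store := v; continue k ())
            | _ -> None) }

    let () = print_endline (run_state "hi" bump)  (* prints "hi!" *)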

abathologist · 6h ago
> The first thing to note is that there is no indication that foo or bar can fail

I think this is a part of the point: we are able to simply write direct style, and not worry at all about the effectual context.

> how do you find the code that will run when they do fail

AFAIU, this is also the point: you are able to abstract away from any particular implementation of how the effects are handled. The code that will run when they fail is determined later, whenever you decide how you want to run it. Just as, in `f : g:(A -> B) -> t(A) -> B`, there is no way to find "the" code that will run when `g` is executed, because we are abstracting over any particular implementation of `g`.

nine_k · 6h ago
It looks like exceptions (write the happy path in direct style, etc), but with exceptions, there is a `catch`. You can look for it and see the alternate path.

What might be a good way to find / navigate to the effectual context quickly? Should we just expect an IDE / LSP color it differently, or something?

MrJohz · 6h ago
There's a `catch` with effects as well, though: the effect handler. And it works very similarly to `catch` in that it's not local to the function, but lives somewhere in the calling code. So if you're looking at a function and you want to know how that function's exceptions get handled, you need to look at the calling code.
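
A small OCaml 5 sketch of "catch, but resumable" (the `Recover` effect is an invented example):

    open Effect
    open Effect.Deep

    (* hypothetical effect: ask the handler for a fallback value *)
    type _ Effect.t += Recover : string -> int Effect.t

    let parse s =
      match int_of_string_opt s with
      | Some n -> n
      | None -> perform (Recover s)

    (* the handler sits in the caller, like catch, but resumes the callee *)
    let () =
      let total =
        try_with (fun () -> parse "3" + parse "oops") ()
          { effc = (fun (type a) (eff : a Effect.t) ->
              match eff with
              | Recover _bad -> Some (fun (k : (a, _) continuation) ->
                  continue k 0)  (* resume at the perform site with a default *)
              | _ -> None) }
      in
      Printf.printf "total = %d\n" total  (* prints "total = 3" *)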


MrJohz · 7h ago
> And this cannot be done statically (i.e. your IDE can't jump to the definition), because my_function might be called from any number of places, each with a different handler.

I believe this can be done statically (that's one of the key points of algebraic effects). It would work essentially the same as "jump to caller", where your IDE would give you a selection of options, and you could find which caller/handler is the one you're interested in.

wavemode · 6h ago
> there is no indication that foo or bar can fail

Sounds like you're just criticizing try-catch style error handling, rather than criticizing algebraic effects specifically.

Which, I mean, is perfectly fair not to like (the lack of call-site indication that an exception can be raised). But it's not really a step backward from the vast majority of programming languages. And there are some definite upsides to it as well.

practal · 5h ago
Algebraic effects seem very interesting. I have heard about this idea before, but assumed that it somehow belonged to the territory of static type systems. I am not a fan of static type systems, so I didn't look further into the idea.

But I found these two articles [1] about an earlier dynamic version of Eff (the new version is statically typed), which explain the idea nicely without introducing types or categories (well, they use "free algebra" and "unique homomorphism", just think "terms" and "evaluation" instead). I find it particularly intriguing that what Andrej Bauer describes there as "parameterised operation with generalised arity", I would just call an abstraction of shape [0, 1] (see [2]). So this might be helpful for using concepts from algebraic effects to turn abstraction algebra into a programming language.

[1] https://math.andrej.com/2010/09/27/programming-with-effects-...

[2] http://abstractionlogic.com

nicoty · 4h ago
What's wrong with static type systems?
practal · 3h ago
I've summarized my opinion on this here: https://doi.org/10.5281/zenodo.15118670

In normal programming languages, I see static type systems as a necessary evil: TypeScript is better than JavaScript, as long as you don't confuse types with terms...

But in a logic, types are superfluous: You already have a notion of truth, and types just overcomplicate things. That doesn't mean that you cannot have mathematical objects x and A → B, such that x ∈ A → B, of course. Here you can indeed use terms instead of types.

So in a logic, I think types represent a form of premature optimisation of the language that invariants are expressed in.

frumplestlatz · 3h ago
This is a very reductive definition of types, if not a facile category error entirely (see: curry-howard), and what you call "premature optimization" -- if we're discussing type systems -- is really "a best effort at formalizations within which we can do useful work".

AL doesn’t make types obsolete -- it just relocates the same constraints into another formalism. You still have types, you’re just not calling them that.

practal · 2h ago
I think I have a reference to Curry in my summary link. Anyways, curry-howard is a nice correspondence, about as important to AL as the correspondence between the groups (ℝ, 0, +) and (ℝ \ 0, 1, *); by which I mean, not at all. But type people like bringing it up even when it is not relevant at all.

No, sorry, I really don't have types. Maybe trying to reduce all approaches to logic to curry-howard is the very reductive view here.

frumplestlatz · 2h ago
If your system encodes invariants, constrains terms, and supports abstraction via formal rules, then you’re doing the work of types whether you like the name or not.

Dismissing Curry–Howard without addressing its foundational and extricable relevance to computation and logic isn’t a rebuttal.

practal · 2h ago
Saying "Curry-Howard, Curry-Howard, Curry-Howard" isn't an argument, either.

I am not saying that types cannot do this work. I am saying that to do this work you don't need types, and AL is the proof for that. Well, first-order logic is already the proof for that, but it doesn't have general binders.

Now, you are saying, whenever this work is done, it is Curry-Howard, but that is just plain wrong. Curry-Howard has a specific meaning, maybe read up on it.

frumplestlatz · 1h ago
Curry–Howard applies when there’s a computational interpretation of proofs — like in AL, which encodes computation and abstraction in a logic.

You don’t get to do type-like work, then deny the structural analogy just because you renamed the machinery. It’s a type system built while insisting type systems are obsolete.

practal · 1h ago
You seem to know AL very well, I didn't even know that there is a computational interpretation of AL proofs! Can you tell me what it is?
_flux · 3h ago
Personally I would enjoy it if TLA+ had types, though, and TLA+ belongs to the land of logic, right? I do not know how it differs from the abstraction logic referred to in your writing and your other whitepapers.

What is commonly used is a TypeOK predicate that verifies that your variables have the expected type. This is fine, except your intermediate values can still end up with mis-intended values, so you won't spot the mistake until you evaluate the TypeOK predicate, and not at all if the checker doesn't visit the right corners of the state space. At least TypeOK can be much more expressive than any type system.

There is a new project in the same domain called Quint; it has types.

practal · 3h ago
Practically, in abstraction logic (AL) I would solve that (AL is not a practical thing yet, unlike TLA+, I need libraries and tools for it) by having an error value ⊥, and making sure that abstractions return ⊥ whenever ⊥ is an argument of the abstraction, or when the return value is otherwise not well-defined [1]. For example,

    div 7 0 = ⊥, 
and

    plus 3 ⊥ = ⊥, 
so

    plus 3 (div 7 0) = ⊥.

In principle, that could be done in TLA+ as well, I would guess. So you would have to prove a predicate Defined, where Defined x just means x ≠ ⊥.

[1] Actually, it is probably rather the other way around: Make sure that if your expression e is well-defined under certain preconditions, that you can then prove e ≠ ⊥ under the same preconditions.

deredede · 2h ago
> At least TypeOK can be much more expressive than any type system.

Can you clarify what you mean by that? Dependent types or more practically refinement types (à la F*) can embed arbitrary predicates.

exceptione · 2h ago
> But in a logic,

I am not sure if I misunderstand you. Types are for domain semantics, i.e. real-world semantics: they help to disambiguate human language, and they make explicit the context which humans just assume when they talk about their domain.

Logic is abstract. If you are implying people should be able to express a type system in their host language, that would be interesting. I can see something like Prolog as type annotations, embedded in any programming language; it would give tons of flexibility, but then you shift quite some burden onto the programmer.

Has this idea been tried?

practal · 2h ago
Types for real-world semantics are fine, they are pretty much like predicates if you understand them like that.

The idea to use predicates instead of types has been tried many times; the main problem (I think) is that you still need a nice way of binding variables, and types seem the only way to do so, so you will introduce types anyway, and what is the point then? The nice thing about AL is that you can have a general variable binding mechanism without having to introduce types.

frumplestlatz · 1h ago
AL as described sounds like it reinvents parts of the meta-theoretic infrastructure of Isabelle/HOL, but repackaged as a clean break from type theory instead of what it seems to be — a notational reshuffling of well-trod ideas in type theory.

What am I missing?

agumonkey · 2h ago
I'm not a logician, but do you mean that predicates and their algebra are a more granular and universal way to describe what a type is... basically that names are a problem?
practal · 2h ago
Yes and no. Yes, predicates are more flexible, because they can range over the entire mathematical universe, as they do for example in (one-sorted) first-order logic. No, names are not a problem, predicates can have names, too.
agumonkey · 28m ago
So if names are not an issue, is the problem with the usual static type systems that they lack a way to manipulate/recombine user-defined types, to avoid expressive dead ends?
hinoki · 4h ago
Forget it Jake, it’s land of Lisp.
AdieuToLogic · 8h ago
> You can think of algebraic effects essentially as exceptions that you can resume.

How is this substantively different than using an ApplicativeError or MonadError[0] type class?

> You can “throw” an effect by calling the function, and the function you’re in must declare it can use that effect similar to checked exceptions ...

This would be the declared error type in one of the above type classes along with its `raiseError` method.

> And you can “catch” effects with a handle expression (think of these as try/catch expressions)

That is literally what these type classes provide, with a "handle expression" using `handleError` or `handleErrorWith` (depending on need).

> Algebraic effects1 (a.k.a. effect handlers) are a very useful up-and-coming feature that I personally think will see a huge surge in popularity in the programming languages of tomorrow.

Not only will "algebraic effects" have popularity "in the programming languages of tomorrow", they actually enjoy popularity in programming languages today.

[0] https://typelevel.org/cats/typeclasses/applicativemonaderror...

abathologist · 6h ago
It seems to me that monads and effects are likely best viewed as complementary approaches to reasoning about computational contexts, rather than as rivals. See, e.g., https://goto.ucsd.edu/~nvazou/koka/padl16.pdf
davery22 · 7h ago
Algebraic effects are in delimited continuation territory, operating on the program stack. No amount of monad shenanigans is going to allow you to immediately jump to an effect handler 5 levels up the call stack, update some local variables in that stack frame, and then jump back to execution at the same point 5 levels down.
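
OCaml 5's native handlers make this suspend-and-resume concrete. In this sketch (with an invented `Tick` effect), the handler mutates state that lives in its own stack frame, then resumes the suspended computation where it left off:

    open Effect
    open Effect.Deep

    (* hypothetical effect: ask the handler for the next counter value *)
    type _ Effect.t += Tick : int Effect.t

    let rec loop () =
      let n = perform Tick in                  (* suspends here... *)
      if n <= 3 then (Printf.printf "tick %d\n" n; loop ())

    let () =
      let counter = ref 0 in                   (* state in the handler's frame *)
      try_with loop ()
        { effc = (fun (type a) (eff : a Effect.t) ->
            match eff with
            | Tick -> Some (fun (k : (a, _) continuation) ->
                incr counter;
                continue k !counter)           (* ...resumes with a value *)
            | _ -> None) }
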
grg0 · 7h ago
That sounds like a fucking nightmare to debug. Like goto, but you don't even need to name a label.
cryptonector · 5h ago
> Like goto, but you don't even need to name a label.

That's what exceptions are.

But effects don't cause you to see huge stack traces in errors because the whole point is that you provide the effect and values expected and the code goes on running.

vkazanov · 6h ago
Well, you test the fact that the handler receives the right kind of data, and then how it processes it.

And it is useful to be able to provide these handlers in tests.

Effects are AMAZING

agumonkey · 2h ago
The CLOS condition system is said to be just that; people seem to like it.

Also, this kind of non-local stack/tree rebinding is one way to implement Prolog, I believe.

cryptonector · 7h ago
> How is this substantively different than using an ApplicativeError or MonadError[0] type class?

I think it's about static vs. dynamic behavior.

In monadic programming you have to implement all the relevant methods in your monad, but with effects you can dynamically install effects handlers wherever you need to override whatever the currently in-effect handler would be.

I could see the combination of the two systems being useful. For example, you could use a bespoke IO-compatible monad for testing and sandboxing, and still have effect handlers below, which can still only invoke your IO-like monad.

SkiFire13 · 3h ago
> How is this substantively different than using an ApplicativeError or MonadError[0] type class?

If you're limiting yourself to just a single effect there's probably not much difference. However, once you have multiple effects at the same time, explicit support for them starts to become nicer than nesting monads (which requires picking an order, and sometimes reordering them because the output of some functions doesn't match the exact set or order of monads used by the calling function).
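
As an illustration with OCaml 5 handlers (invented `Ask`/`Log` effects): the two handlers nest in either order and no signature has to be re-plumbed, which is the contrast with a transformer stack.

    open Effect
    open Effect.Deep

    type _ Effect.t +=
      | Ask : string Effect.t            (* reader-like effect *)
      | Log : string -> unit Effect.t    (* writer-like effect *)

    let task () =
      let name = perform Ask in
      perform (Log ("hello, " ^ name));
      String.length name

    let with_ask value f =
      try_with f ()
        { effc = (fun (type a) (eff : a Effect.t) ->
            match eff with
            | Ask -> Some (fun (k : (a, _) continuation) -> continue k value)
            | _ -> None) }

    let with_log f =
      try_with f ()
        { effc = (fun (type a) (eff : a Effect.t) ->
            match eff with
            | Log msg -> Some (fun (k : (a, _) continuation) ->
                print_endline msg; continue k ())
            | _ -> None) }

    (* with_ask "world" (fun () -> with_log task) works just as well *)
    let () = Printf.printf "%d\n" (with_log (fun () -> with_ask "world" task))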

threeseed · 5h ago
> they actually enjoy popularity in programming languages today

They have enjoyed popularity amongst the Scala FP minority.

They are not broadly popular as they come with an unacceptable amount of downsides, e.g. increased complexity, difficulty debugging, code that is harder to instantly reason about, far higher resource usage, etc. I have built many applications using them and the ROI simply isn't there.

It's why Odersky for example didn't just bundle it into the compiler and instead looked at how to achieve the same outcomes in a simpler and more direct way i.e. Gears, Capabilities.

anon-3988 · 8h ago
I don't really get it, but is this related to delimited continuation as well?
cryptonector · 5h ago
That's just an implementation detail. I don't think there's anything about effects that _requires_ delimited continuations to implement them.
artemonster · 5m ago
With AE you get for free: generators, stackful coroutines, dependency injection, "dynamic" variables (as in anti-lexical ones), resumable exceptions, advanced error handling and much more. all packaged neatly into ONE concept. I dream of TS and Effekt some day merging :)
nevertoolate · 4h ago
When I see a new (for me) idea coming from (presumably) category theory, I wonder if it will really land in any mainstream language. In my experience, cohesion at the philosophical level of the language is the reason why it is nice to work with in a team of programmers who are adept in both programming and the business context. A set of programming patterns that solves a problem can usually be replaced with a possibly disjoint set of patterns, where both solutions have all the same ilities in the code and solve the business problem.

My question is - can a mainstream language adopt the algebraic effects (handlers?) without creating deep confusion or a new language should be built from the ground up building on top of these abstractions in some form.

ww520 · 4h ago
> can a mainstream language adopt the algebraic effects (handlers?) without creating deep confusion or a new language should be built from the ground up building on top of these abstractions in some form.

Algebraic effects are a variant/enhancement of dependency injection formalized into a language. Dependency injection has seen massive usage in the wild for a long time with just library implementations.

threeseed · 2h ago
> Algebraic effects are a variant/enhancement of dependency injection

Every library so far that has implemented effects, e.g. Cats, ZIO, Effect, has done so to make concurrency easier and safer.

Not for dependency injection.

nwienert · 4h ago
React hooks are them, basically. Not at the language level, but widely adopted and understood.
michalsustr · 2h ago
AE (algebraic effects) are very interesting! Great article, thank you.

Reading through, I have some concerns about usability in larger projects, mainly because of "jumping around".

> Algebraic effects can also make designing cleaner APIs easier.

This is debatable. It adds a layer of indirection (which I concede is present in many real non-AE codebases).

My main concern is: when I put a breakpoint in code, how do I figure out where the object I'm working with was created? With explicit passing, I can go up and down the stack trace and find it. But with AE composition, it can be hard to find the instantiation source -- you have to jump around, leading to the yo-yo problem [1].

I don't have personal experience with AE, but I do with Python generators, which the article says are equivalent (resp. AE can be used to implement generators). Working through large, complex generator expressions was very tedious and error-prone in my experience.

> And we can use this to help clean up code that uses one or more context objects.

The functions involved still need to write `can Use Strings` in their signature. From a practical point of view, I fail to see the difference between explicitly passing strings and adding the `can Use Strings` signature -- when you want to add extra context to existing functions, you still need to go to all of them and add the appropriate plumbing.

---

As I understand it, AE is implemented at a low level as a longjmp instruction with register handling (so you can resume). Given this, it is likely inevitable that in a code base with lots of AE, composing in various ways, you can get a severe yo-yo problem and get really lost in what the code is doing. This is probably not so severe on a single-person project, but in larger teams where you don't have the codebase in your head, it can be a huge efficiency problem.

Btw, if someone understands how AE deals with memory allocation for resuming, I'd be very interested in a good link for reading, thank you!

[1]: https://en.wikipedia.org/wiki/Yo-yo_problem

cdaringe · 9h ago
I did Protohackers in OCaml 5 alpha a couple of years ago with effects. It was fun, but the toolchain was a lil clunky back then. This looks and feels very similar. Looking forward to seeing it progress.
abathologist · 7h ago
Effects in OCaml 5.3 are quite a bit cleaner than they were a few years back (tho still not typed).
wild_egg · 8h ago
> You can think of algebraic effects essentially as exceptions that you can resume.

So conditions in Common Lisp? I do love the endless cycle of renaming old ideas

valcron1000 · 6h ago
No, algebraic effects are a generalization that supports more cases than LISP's condition system, since continuations are multi-shot. The closest thing is `call/cc` from Scheme.

Sometimes drawing these parallels hurts more than not having them in the first place.

riffraff · 7h ago
Also literal "resumable exceptions" in Smalltalk.
ww520 · 5h ago
Also dependency injection.
nikita2206 · 5h ago
Have you thought of using generators as the closest example to compare effects to? I think they are much closer to effects than exceptions are. Great explainer anyway; it was the first time I had read about this idea and it was immediately obvious.
yen223 · 5h ago
Exceptions are usually used because the syntax for "performing" exceptions (throw) vs handling exceptions (try-catch) is familiar to most programmers, and is basically the same as the syntax for "performing" an effect vs handling the effect, except that the latter also includes resuming a function.

It would be cool to see how generators will be implemented with algebraic effects.
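
A minimal sketch of exactly that in OCaml 5 (whose stdlib ships effect handlers); the `Yield` effect and `iter` driver are hypothetical names:

    open Effect
    open Effect.Deep

    type _ Effect.t += Yield : int -> unit Effect.t

    let yield v = perform (Yield v)

    (* the generator body is ordinary direct-style code *)
    let count_to n () =
      for i = 1 to n do yield i done

    (* the handler turns each Yield into a callback, then resumes the body *)
    let iter gen consume =
      try_with gen ()
        { effc = (fun (type a) (eff : a Effect.t) ->
            match eff with
            | Yield v -> Some (fun (k : (a, _) continuation) ->
                consume v;
                continue k ())
            | _ -> None) }

    let () = iter (count_to 3) (fun v -> Printf.printf "yielded %d\n" v)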

rixed · 6h ago
Maybe I'm too archaic, but I do not share the author's hope that algebraic effects will ever become prevalent. They can certainly be useful now and then, but the similarity with dynamic scoping brings back too many painful memories.
threeseed · 5h ago
I wouldn't worry. Even the simplest and most friendly effects library: https://effect.website

Shows clearly why they will never be a mainstream concept. The value proposition is only there when you have more elaborate concurrency needs. But that is a tiny fraction of the applications most people are writing today.

xixixao · 4h ago
It feels powerful. I think the effects in return types could be inferred.

But I share the concerns of others about the downsides of dependency injection. And this is DI on steroids.

For testing, I much prefer to “override” (mock) the single concrete implementation in the test environment, rather than to lose the static caller -> callee relationship in non-test code.

yyyk · 5h ago
The state effects example seems unlike the others - the examples avoid syntax for indentation, omit mention of polymorphic effects, and use minimal syntax for functions - but for state effects you need to repeat "can Use Strings" for each function? Presumably one may want to group those under type Strings or can Use Strings, at which point you have a namespace of sorts...
ollysb · 6h ago
As I understand it, this was the inspiration for React's hooks model. The compiler won't give you the same assurances, but in practice hooks do at least allow you to inject effects into components.
YuukiRey · 6h ago
I don’t see the similarity. Since hooks aren’t actually passed to, or injected into, components, there’s no way to evaluate the same hooks in different ways.

I can’t have a hook that talks to a real API in one environment but to a fake one in another. I’d have to use Jest-style mocking, which is more like monkey patching.

From the point of view of a React end user, there’s also no list of effects that I can access. I can’t see which effects or hooks a component carries around, which ones weren’t yet evaluated, and so on.

ww520 · 6h ago
It's different. Hooks in React are basically callbacks triggered by changes in state dependencies. They're more similar to a signaling system.
knuckleheads · 6h ago
First time in a long while where I’ve read the intro to a piece about new programming languages and not recognized any of the examples given at all even vaguely. How times change!
charcircuit · 9h ago
This doesn't give a focused explanation of why. I don't see how dependency injection is a benefit when languages without algebraic effects also have dependency injection. It doesn't explain whether this dependency injection is faster to execute or compile, or what.
cryptonector · 5h ago
It's the same as with monads:

1) Testing. Write pure code with "effects", where in production the effects are real interactions with the real world, while in testing they are mocked. This allows you to write pure code that does I/O, as opposed to writing pure code that doesn't do I/O and needs a procedural shell around it that does the I/O -- you get to write tests for more of your code this way.

2) Sandboxing. Like in (1), but where your mock isn't a mock but a firewall that limits what the code can do.

(2) is a highly-desirable use-case. Think of it as a mitigation for supply-chain vulnerabilities. Think of log4j.

Both of these are doable with monads as it is. Effects can be more ergonomic. But they're also more dynamic, which complicates the implementations. Dynamic features are always more costly than static features.
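
For the sandboxing case, a sketch with OCaml 5 handlers, assuming a hypothetical `Read_file` effect that all file access is routed through:

    open Effect
    open Effect.Deep

    type _ Effect.t += Read_file : string -> string Effect.t

    (* firewall-style handler: only allowlisted paths reach the real IO *)
    let sandboxed allowed f =
      try_with f ()
        { effc = (fun (type a) (eff : a Effect.t) ->
            match eff with
            | Read_file path -> Some (fun (k : (a, _) continuation) ->
                if List.mem path allowed then
                  continue k (In_channel.with_open_text path In_channel.input_all)
                else
                  discontinue k (Failure ("sandbox: blocked read of " ^ path)))
            | _ -> None) }

    (* assumes config.json exists; an unlisted path raises Failure instead *)
    let () =
      sandboxed ["config.json"]
        (fun () -> print_endline (perform (Read_file "config.json")))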

charcircuit · 4h ago
Again, you are listing things that are possible but not explaining why it's better to do them via algebraic effects as opposed to the alternatives.

For example, if you were in a meeting with Oracle to try and convince them to invest 100 million dollars in adding algebraic effects to Java and its ecosystem, how would you convince them it would provide enough value to developers to justify it over some other improvement they may want to make?

For example, "Writing mocks for tests using algebraic effects is better than using jmock because ..."

cryptonector · 4h ago
The only reason I can think of (but I'm not the right person to ask) is ergonomics: in many cases it might be easier to push an effect handler than to build a whole monad or whatever. Elsewhere in this thread there's talk of effects solving the color problem.
yen223 · 8h ago
The way dependency injection is implemented in mainstream languages usually involves using metaprogramming to work around the language, not with the language. It's not uncommon to get errors in dependency-injected code that would be impossible to get with normal code.

It's interesting to see how things can work if the language itself is designed to support dependency injection from the get-go. Algebraic effects are one of the ways to achieve that.

vlovich123 · 7h ago
Don’t algebraic effects offer a compelling answer to the color problem and all sorts of related similar things?
threeseed · 5h ago
But they also introduce their own color-like problems.

For example, with Scala we have ZIO, which is an effect system where you wrap all your code in their type, e.g. getName(): ZIO[String]. And it doesn't matter whether getName returns immediately or in the future, which is nice.

But then the problem is that you can't use normal control flow, e.g. for/while/if-else; you need to use their versions, e.g. ZIO.if / ZIO.repeat.

So you don't have the colour problem because everything is their colour.

OtomotO · 5h ago
But that's only a problem if it's a library and not done on the language level?!
threeseed · 5h ago
But in the research languages listed they still are colouring function types.

So it doesn't seem to matter whether it's a library or in the language.

Either everything is an effect. Or you have to deal with two worlds of code: effects and non-effects.

charcircuit · 7h ago
> It's interesting

Which is why I was asking for that interesting thing to be written in the article, on why it would be better.

thrance · 2h ago
I like the idea of algebraic effects, but I'm a little skeptical of the amount of extra syntax.

Let's say I'm building a web server: my endpoint handler now needs to declare that it can call the database, call S3, throw x, y and z... And it's the same story for most of the functions it calls itself. You solved the "coloration problem" at the cost of adding a thousand colors.

Checked exceptions are the ideal error handling (imho), but no one uses them properly because it's a hassle declaring every error type a function may return. And adding an exception to a function means you need to add/handle it in many of its callers, and their callers in turn, etc.

nabla9 · 5h ago
Yet another thing Common Lisp programmers have been doing since the time of hoe and axe.
ctenb · 5h ago
But Lisp is untyped, which is a major difference. EDIT: dynamically typed I mean


thdhhghgbhy · 5h ago
Just seems to be a way of organising code really.