I'd argue the exact opposite. Compared to what you can do if you can write compilers, anything that involves composing functions is weak beer, and most monad examples cover computational pipelines as opposed to computational graphs. It's like Graham's book On Lisp: it's a really fun book, but then you realize that screwing around with functions and macros doesn't hold a candle to what you learn from the Dragon Book.
I maintain that the big advantage of the On Lisp approach is that all of that is available without having to write a new compiler.
Granted, I also don't have as heavy an attachment to pure functional programming as most people seem to build. Don't get me wrong, wanton nonsense is nonsensical. But that is just as true in immutable contexts.
What I found remarkable about that book is that 80% of what is in it can be done with functions and no macros; you can mostly rewrite the examples in Python, except for the coroutines, but Python already has coroutines. It also irks me that I don't think the explanation of coroutines in Scheme is very clear, yet it's become the dominant one you find on the net, and I can't find a better one.
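To give a flavor of what I mean: a typical On Lisp utility like function composition needs nothing beyond a plain higher-order function in Python, no macros involved. (A small illustrative sketch; the names are my own, not from the book.)

```python
# Function composition as a plain higher-order function -- the kind of
# utility that needs no macro machinery at all.
def compose(*fns):
    """Return the right-to-left composition of single-argument functions."""
    def composed(x):
        for fn in reversed(fns):
            x = fn(x)
        return x
    return composed

inc = lambda n: n + 1
double = lambda n: n * 2

add_then_double = compose(double, inc)
print(add_then_double(3))  # (3 + 1) * 2 = 8
```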
As for ‘compiler’, you also don’t need to go all the way to bare metal; some runtime like WASM or the JVM, which is more civilized, is a good target these days.
Totally fair. I think a lot of the things we used to do in the name of efficiency have been completely lost in the progress of time. Largely from the emergence and refinement of JIT compilers, I think?
That is, a lot of why you would reach for macros in the past was to avoid the expense of function calls. Right? We have so far left the world of caring about function call overhead for most projects that it is hard to really comprehend.
Coroutines still strike me as a hard one to really grok. I remember reading about them in Knuth's work and originally thinking it was a fancy way of saying what we came to call functions and methods. I think without defining threads first, a coroutine is really hard to nail down. And too many of us take an understanding of threads as a given, despite many of us (myself not immune) having a bad understanding of threads.
Coroutines as a technique to implement state machines is the first thing that comes to my mind. It's more abstract and requires way fewer fundamentals compared to concurrency.
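A minimal sketch of the idea in Python, using a generator as the coroutine. The "current state" is simply where the code is suspended, so no explicit state variable is needed. (The toggle machine and its event names are made up for illustration.)

```python
# A two-state toggle machine as a generator coroutine: each yield point
# *is* a state, so the machine's state lives in the suspended frame.
def toggle():
    while True:
        event = yield "off"            # suspended in the OFF state
        while event != "press":
            event = yield "off"        # ignore other events while off
        event = yield "on"             # suspended in the ON state
        while event != "press":
            event = yield "on"         # ignore other events while on

machine = toggle()
state = next(machine)                  # prime the coroutine; starts "off"
state = machine.send("press")          # transitions to "on"
state = machine.send("noise")          # ignored; stays "on"
state = machine.send("press")          # transitions back to "off"
print(state)
```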
But coroutines really only work any better than "objects" if you understand the implications for the stack pointer? Which requires understanding exactly what a thread is. Right?
That is, a basic class that has defined state and methods to modify the state is already enough to explain a state machine. What makes coroutines better for it?
The issue with state is that you have to handle it somewhere. Compare, for example, an iterator implementation in Java and Python: the latter needs only one method, written as a coroutine, with no state stored at the object level.
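Roughly what I mean, sketched in Python: the generator version keeps its position in the suspended frame, while a Java-style iterator class has to track the same state in explicit fields. (Both versions are illustrative, not from any particular codebase.)

```python
# Generator version: one function, state lives in the suspended frame.
def count_up_to(limit):
    n = 0
    while n < limit:
        yield n
        n += 1

# Java-style version: the same iterator with explicit object-level state.
class CountUpTo:
    def __init__(self, limit):
        self.limit = limit
        self.n = 0                     # state we must manage by hand

    def __iter__(self):
        return self

    def __next__(self):
        if self.n >= self.limit:
            raise StopIteration
        value = self.n
        self.n += 1
        return value

print(list(count_up_to(3)))  # [0, 1, 2]
print(list(CountUpTo(3)))    # [0, 1, 2]
```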
> the state is already enough to explain a state machine.
I did not talk about explaining state machines but about implementing state machines as coroutines. Progression: give an idea of state machines, show how hard it is to handle state, present coroutines as a way to handle state.
Fair that they are a way to implement one without having to give more details. Those details are pushed somewhere else, though, which tends to make them less good for pedagogical purposes, to me. Also a bit harder to reason about, for some reason.
Yeah, I've had fun using macros to create optimised functions at runtime (inline caching effectively) and/or generate code that is more friendly to the JVM JIT.
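A rough Python analogue of that trick, for anyone who hasn't seen it: generate source with known-constant parameters baked in as literals, compile it once, and return the specialised function, instead of branching on those parameters on every call. (An illustrative sketch only; `make_scaler` is a made-up name, not the actual macro code.)

```python
# Runtime code generation: specialise a function by inlining a constant
# into generated source, then compiling it once.
def make_scaler(factor):
    # Bake `factor` into the source as a literal so the compiled
    # function closes over nothing and does no per-call lookup.
    src = f"def scale(x):\n    return x * {factor!r}\n"
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["scale"]

triple = make_scaler(3)
print(triple(7))  # 21
```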
Also, there's always plenty of use for doing work at compile time.
In some sense they can also be seen as better code generation.