I don't disagree that manipulating deep arrays and graph data can be harder in an immutable functional language, at least with the default operations. However, many FP languages have powerful and mature libraries that let you work with deep data structures very gracefully and correctly: Clojure has Specter, Haskell has lenses, and so on.
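For instance, here's a minimal sketch of the lens style in Haskell (the Person/Address types and field names are made up for illustration): a deep update stays a one-liner and never mutates the original value.

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Control.Lens

data Address = Address { _city :: String, _zipCode :: String } deriving Show
data Person  = Person  { _name :: String, _address :: Address } deriving Show
makeLenses ''Address
makeLenses ''Person

-- "Modify" a deeply nested field immutably; the original Person is untouched.
moveTo :: String -> Person -> Person
moveTo newCity = set (address . city) newCity

main :: IO ()
main = print (moveTo "Oslo" (Person "Ada" (Address "London" "E1")))
```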
For me, the FP dream (and we aren't quite there yet) is that your program is an airtight declaration of the desired functionality, and then we get out of the way and let the compilers and profiling runtimes optimize away.
How cool would it be if runtime stats let the JIT either unroll and inline a function when it's called on small arrays from network buffers, or push it to the GPU when it's frequently called on huge in-RAM buffers? We're not there yet, but as the tower of complex tooling needed to work in a compute-parallel world keeps growing, we are going to be relying more and more on the compiler/runtime to use that power optimally.
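To make the contrast concrete, here is a rough Haskell sketch using the parallel package's Control.Parallel.Strategies (the function names are hypothetical): today the programmer picks the execution strategy by hand, whereas the dream is that a profiling runtime makes that call from the same single declaration.

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- The "declaration": a pure per-element transform, silent on execution strategy.
scale :: Double -> Double -> Double
scale k x = k * x

-- Today the strategy choice is encoded by hand in two separate functions.
scaleSmall :: [Double] -> [Double]   -- plain map, fine for short buffers
scaleSmall = map (scale 2)

scaleHuge :: [Double] -> [Double]    -- spark-based parallelism for big in-RAM data
scaleHuge = parMap rdeepseq (scale 2)

main :: IO ()
main = print (sum (scaleHuge [1 .. 100000]))
```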
> For me, the FP dream (and we aren't quite there yet) is that your program is an airtight declaration of the desired functionality, and then we get out of the way and let the compilers and profiling runtimes optimize away.
I would guess we are at least 100 years away from this. Current imperative compilers miss obvious optimizations, and FP typically needs many layers of optimization before it becomes efficient; any one layer failing to make progress results in a very slow program.
Currently, FP compilers generally don't even use the successor (Peano) representation for natural numbers, because it would be too slow; languages fall back to machine integers instead. Typical programs use data structures far more complex than numbers that would need to be optimized into mutable structures. If FP compilers can't do it for integers, they definitely can't do it for FP programs as a whole.
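To spell out the successor point, here is an illustrative sketch: the Peano encoding below is the "declarative" definition of the naturals, but no current compiler will turn its linear-time arithmetic into single machine-integer operations, which is why real languages just ship machine Ints.

```haskell
-- Peano naturals: the mathematically clean encoding.
data Nat = Z | S Nat

-- Addition walks the entire first argument: O(n) instead of one machine add.
add :: Nat -> Nat -> Nat
add Z     m = m
add (S n) m = S (add n m)

toInt :: Nat -> Int
toInt Z     = 0
toInt (S n) = 1 + toInt n

main :: IO ()
main = print (toInt (add (S (S Z)) (S Z)))  -- 3, computed by structural recursion
```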