Hacker News

> It silently transforms an O(1) space algorithm into an O(n) space one, and adds an enormous constant time overhead.

It doesn't silently transform O(1) into O(n): the code is explicitly O(n^2) space in the worst case. The only 'silent' transformation that could happen here is an optimization that improves performance. I'm also not sure where the 'enormous constant time overhead' claim comes from.
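For what it's worth, the textbook example of laziness quietly changing a fold's space behavior (this is a generic illustration, not the code under discussion) is `foldl` versus `foldl'`:

```haskell
import Data.List (foldl')

-- Lazy foldl builds a chain of n unevaluated (+) thunks before anything
-- is forced, so the accumulator actually occupies O(n) space until the
-- final result is demanded.
lazySum :: Int -> Int
lazySum n = foldl (+) 0 [1 .. n]

-- foldl' forces the accumulator at each step, restoring O(1) space.
strictSum :: Int -> Int
strictSum n = foldl' (+) 0 [1 .. n]
```

Both compute the same value; only the space profile differs, which is why the strict variant is the idiomatic default for numeric accumulation.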

> Algorithms that are optimal under the mutable data assumption are different than algorithms that are optimal under the constant data assumption.

A normal programmer wouldn't be writing their own sort functions, and a normal Haskell programmer would understand how mutability works in Haskell.
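To illustrate the mutability point: Haskell does let you write in-place, mutable-array algorithms behind a pure interface. A minimal sketch (a deliberately simple bubble sort, chosen for brevity rather than as anyone's recommended sort) using `runST` and `STUArray` from the standard `array` package:

```haskell
import Control.Monad (forM_, when)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STUArray, getElems, newListArray, readArray, writeArray)

-- Monomorphic helper so the mutable array's type is pinned down.
buildArray :: Int -> [Int] -> ST s (STUArray s Int Int)
buildArray n = newListArray (0, n - 1)

-- In-place bubble sort on a mutable unboxed array. The mutation is
-- confined to the ST computation, so the function itself is pure.
sortMutable :: [Int] -> [Int]
sortMutable xs = runST $ do
  let n = length xs
  arr <- buildArray n xs
  forM_ [0 .. n - 2] $ \i ->
    forM_ [0 .. n - 2 - i] $ \j -> do
      a <- readArray arr j
      b <- readArray arr (j + 1)
      when (a > b) $ do
        writeArray arr j b
        writeArray arr (j + 1) a
  getElems arr
```

The point is that "mutable data" algorithms are expressible directly; the type system just makes the scope of the mutation explicit.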

> Performant programming in Haskell requires a much more intimate understanding of the underlying architecture than performant programming in, for instance, C++.

It really depends on what you're writing and how much performance you actually need. A moderately complex parallel data-processing algorithm in Haskell may come out slightly slower than a tuned C++ version, but much simpler; a C++ implementation of comparable simplicity may well be slower than the Haskell one. Writing performant, parallel, safe code for a moderately complex algorithm in C++ is far from easy.
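As a rough sketch of why parallel Haskell code tends to stay simple (using `par`/`pseq` from `GHC.Conc` in base; this is an illustrative toy, not an implementation from the thread), splitting work across sparks is a one-line change to a pure function:

```haskell
import GHC.Conc (par, pseq)

-- Sum a list by sparking the left half in parallel with the right half.
-- With a non-threaded runtime the spark is simply ignored and the
-- result is still correct.
parSum :: [Int] -> Int
parSum xs =
  let (l, r) = splitAt (length xs `div` 2) xs
      sl = sum l
      sr = sum r
  in sl `par` (sr `pseq` sl + sr)
```

The comparable C++ version needs explicit threads or futures plus care about data races; here purity guarantees the parallel and sequential results coincide.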


