Hacker News | emih's comments

> Mathematically it's quite pretty, and it gives you elegant partial application for free (at least if you want to partially apply the first N arguments).

Well, I disagree: you are, effectively, calling entire textbooks and CS subdisciplines merely "pretty", which is again the strawman I am referring to. This is like calling Turing-award-level theoretical advances mathematically pretty. I hope you see why that is a problematic and biased framing.

More plausible is that Haskell's designers recognized that currying is a fundamental phenomenon of the lambda calculus, so the language needed some kind of primitive syntax for it. I'm not an expert, but that is the most reasonable supposition for a rationale to start with. One can then argue whether the syntax is good or not, but to do away with currying entirely is to change the premise of recognizing fundamental properties of Turing-complete functional programming language paradigms. It's not about prettiness, it's about the science.
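For concreteness, here is what that fundamental property looks like in Haskell (a minimal standalone sketch, not taken from the thread; the names `add` and `addTen` are made up for illustration):

```haskell
-- In Haskell every multi-argument function is curried by definition:
-- `add` below has type Int -> (Int -> Int).
add :: Int -> Int -> Int
add x y = x + y

-- Partial application of the first argument falls out for free:
addTen :: Int -> Int
addTen = add 10
```

Here `addTen 5` evaluates to `15`; no special partial-application syntax is needed.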


I didn't know Standard ML, that's interesting.

And yeah I think this is the way to go. For higher-order functions like map it feels too elegant not to write it in a curried style.


That's a very good point, I never really thought about how this relates to the execution model, graph reduction, and such. Do you have an example of a function where this can make a difference? I might add something to the article about it.

It's also a question of whether this is exclusive to a curried definition or if such an optimization may also apply to partial application with a special operator like in the article. I think it could, but the compiler might need to do some extra work?


An example where this is useful is to help inline otherwise recursive functions, by writing the function to take some useful parameters first, then return a recursive function which takes the remaining parameters. This allows the function to be partially inlined, resulting in better performance due to the specialization on the first parameters. For example, foldr:

  foldr f z = go
    where
      go [] = z
      go (x : xs) = f x (go xs)

when called with (+) and 0 can be inlined to

  go xs = case xs of
    [] -> 0
    (x : xs) -> x + go xs

which doesn't have to create a closure to pass around the function and the zero value, and can subsequently inline (+), etc.

One slightly contrived example would be if you had a function that returned the point of a set closest to another given point.

getClosest :: Set Point -> Point -> Point

You could imagine getClosest building a quadtree internally, and that tree wouldn't depend on the second argument. I say slightly contrived because I would probably prefer to make the tree explicit if this was important.

Another example would be if you were wrapping a C library but exposing a pure interface. Say you had to create some object and lock a mutex when supplying the first argument, but applying the second argument was safe. If this was a function intended to be passed to higher-order functions, then you might avoid a lot of unnecessary lock contention.

You may be able to achieve something like this with optimisations of your explicit syntax, but argument order is relevant for this. I don't immediately see how it would be achieved without compiling a function for every permutation of the arguments.


I think we need to see a few non-contrived examples, because I think in every case where you might take advantage of currying like this, you actually want to make it explicit, as you say.

The flip side of your example is that people see a function signature like getClosest, and think it's fine to call it many times with a set and a point, and now you're building a fresh quadtree on each call. Making the staging explicit steers them away from this.


> and now you're building a fresh quadtree on each call [...] Making the staging explicit steers them away from this.

Irrespective of currying, this is a really interesting point - that the structure of an API should reflect its runtime resource requirements.


Consider a function like ‘match regex str’. While non-lazy languages may offer an alternate API for pre-compiling the regex to speed up matching, partial evaluation makes that unnecessary.
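A sketch of that staging idea in Haskell, with made-up stand-ins (`Compiled`, `compile`, `runMatch`) in place of a real regex library:

```haskell
-- Hypothetical stand-ins for a real regex library (names are invented):
newtype Compiled = Compiled String

compile :: String -> Compiled
compile pat = Compiled pat            -- imagine this step being expensive

runMatch :: Compiled -> String -> Bool
runMatch (Compiled pat) s = pat == s  -- toy matcher: literal comparison

-- Staged definition: partially applying `match` to the pattern yields
-- a closure over the compiled regex, shared across all later calls.
match :: String -> (String -> Bool)
match pat = runMatch (compile pat)
```

With `let m = match "needle" in filter m inputs`, lazy evaluation means the `compile` thunk inside the closure is forced at most once and then shared, so no separate pre-compilation API is needed.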

Those are nice examples, thanks.

I was imagining you might achieve this optimization by inlining the function. So if you have

  getClosest(points, p) = findInTree(buildTree(points), p)
And call it like

  myPoints = [...]
  map (getClosest(myPoints, $)) myPoints
Then the compiler might unfold the definition of getClosest and give you

  map (\p -> findInTree(buildTree(myPoints), p)) myPoints
Where it then notices the first part does not depend on p, and rewrite this to

  let tree = buildTree(myPoints) in map (\p -> findInTree(tree, p)) myPoints
Again, pretty contrived example. But maybe it could work.

I didn't consider inlining, but I believe you're correct: you could regain the optimisation for this example, since the function is non-recursive and the application is shallow. The GHC optimisation I had in mind is like the opposite of inlining: it factors a common part out of a lambda expression that doesn't depend on the lambda's variable.

I don't believe inlining can take you to the exact same place though. Thinking about explicit INLINE pragmas, I envision that if you were to implement your partial function application sugar you would have to decide whether the output of your sugar is marked INLINE and either way you choose would be a compromise, right? The compromise with Haskell and curried functions today is that the programmer has to consider the order of arguments, it only works in one direction but on the other hand the optimisation is very dependable.
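The "opposite of inlining" described above is essentially let-floating / full laziness: hoisting a subexpression that does not depend on the lambda's argument out of the lambda. Done by hand it looks like this (function names invented; GHC's full-laziness pass can perform this rewrite itself, so this is only a hand-written illustration, not GHC output):

```haskell
-- `sum xs` does not depend on `y`, so it can be hoisted out of the lambda.
addSumSlow :: [Int] -> [Int] -> [Int]
addSumSlow xs ys = map (\y -> sum xs + y) ys   -- `sum xs` sits inside the lambda

addSumFast :: [Int] -> [Int] -> [Int]
addSumFast xs ys = let s = sum xs              -- floated out, computed once
                   in map (\y -> s + y) ys
```

Both versions compute the same result; the floated form just avoids re-association of the shared work with each application of the lambda.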


Glad to hear the article did what I meant for it to do :)

And yes, another comment mentioned that Scala supports this syntax!


It's not that serious :)

You can still do this though:

  let result = (barbalyze(c, d, $) . foobinade(a, b, $)) input
Or if you prefer left-to-right:

  let result = input
    |> foobinade(a, b, $)
    |> barbalyze(c, d, $)
Maybe what isn't clear is that this hole operator would bind to the innermost function call, not the whole statement.

Even better, this method lets you pipeline into a parameter which isn't the last one:

  let result = input
    |> add_prefix_and_suffix("They said '", $, "'!")

Yeah, especially in F#, a language that is meant to interoperate with .NET libraries (most of which were not written with a "data input last" mindset). Now I'm quite surprised that F# doesn't have this feature.

Wow, this convinced me. It's so obviously the right approach when you put it this way.

This is essentially how Mathematica does it: the sugar `Foo[x,#,z]&` is semantically the same as `Function[{y}, Foo[x,y,z]]`. The `#` marks the hole, and the trailing `&` marks where the implicit function ends.

Thanks for sharing, interesting to see that people writing functional languages also experience the same issues in practice. And they give some reasons I didn't think about.

That's a fair point, they are all isomorphic.

The distinction is mostly semantic so you could say they are the same. But I thought it makes sense to emphasize that the former is a feature of function types, and the latter is still technically single-parameter.

I suppose one real difference is that you cannot feed a tuple into a function that takes a parameter list. Like:

  fn do_something(name: &str, age: u32) { ... }

  let person = ("Alice", 40);
  do_something(person); // doesn't compile
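In Haskell, by contrast, the isomorphism is witnessed directly by the Prelude's `curry` and `uncurry` (a minimal sketch, separate from the Rust example above; the `add*` names are made up):

```haskell
-- `uncurry` turns a curried function into one taking a tuple,
-- and `curry` goes the other way; together they witness the isomorphism.
addCurried :: Int -> Int -> Int
addCurried x y = x + y

addTupled :: (Int, Int) -> Int
addTupled = uncurry addCurried

-- Round-tripping recovers the original behaviour:
addAgain :: Int -> Int -> Int
addAgain = curry addTupled
```

So in Haskell you *can* feed a tuple to a two-argument function, but only after explicitly converting it with `uncurry`.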


Machine learning is definitely enabling writing _proofs_ within a proof assistant, and I'm sure it will help to make formal verification more viable in the future.

Where it cannot (fully) replace humans, is writing the _theorems_ themselves. A human has to check that the theorem being proven is actually what you were trying to prove, and this is not safe from LLM hallucinations. If you ask an LLM, is this bridge safe, and it writes `Theorem bridge_is_safe : 1 + 1 = 2.` and proves this theorem, that does _not_ mean the bridge is safe...

The article then also makes some wild extrapolations:

> We could imagine an LLM assistant for finance that provides an answer only if it can generate a formal proof that it adheres to accounting rules or legal constraints.

I guess it's true because you could imagine this, hypothetically. But it's not going to happen, because you cannot formalize a financial or legal statement in a proof assistant. It's a fundamentally informal, real-world thing, and proof assistants are fundamentally for proving formal, abstract things.


Yes.

Here is another way to think of this. We all understand that the value of a lawyer in contract negotiations lies not only in drafting a document that, when fed to a judge, produces the desired outcome. Rather, lawyers help clients (and counterparties) decide what their interests consist in.

Developing software is always something of a principal-agent coordination problem, and comes with transaction costs.

Much of the time, most of us labor under the illusion that each of us understands our desires and interests better than any other party could.


Graph coloring is NP-hard, so it would be very difficult to replace it with an O(1) algorithm.

If you mean graph coloring restricted to planar graphs, yes it can always be done with at most 4 colors. But it could still be less, so the answer is not always the same.

(I know it was probably not a very serious comment but I just wanted to infodump about graph theory.)

