Don't use functions as callbacks unless they're designed for it (jakearchibald.com)
161 points by pavel_lishin on Feb 6, 2021 | 186 comments


*in Javascript.

In most other languages where unwanted extra arguments raise an error instead of being silently ignored, this mostly isn't a problem


I'm not terribly impressed with claims that some programming language will hinder a programmer's development of their skills, but a language that teaches its users to reflexively avoid passing functions as arguments in this way is definitely concerning.


What language teaches avoiding passing functions? C++ is typed, and passing a function just involves telling the compiler "yo, listen up, this parameter foo is a function taking two ints and returning a boolean. Let me know if I messed this up". Okay, actually it's "std::function<bool(int, int)> foo". The whole class of errors described in the article just disappears.

Sometimes a dynamically typed language is the right tool for a job, but IMHO this mostly holds for auxiliary stuff. Typed languages are more effort during programming/learning, but the benefit is gigantic.

But maybe I'm just too biased because I'm usually involved with "if this fails, people may [literally] die".


You mean to avoid passing named functions? The suggested alternative still passes a function. Although unnecessarily wrapping a named function in a lambda is bad code style (in most languages), it doesn’t seem that disastrous of a habit to get into.


You should be impressed by it.

The true nature of type checking is basically a method of hindering you. The set of all correct programs is much smaller than the set of all programs that exist, so anything that hinders a programmer from operating in the bigger parent set outside the set of correct programs is a good and impressive thing.

What’s going on here is a type checking issue. JavaScript and TypeScript are a little too loose. The map method takes a function of <arity 3, 2, or 1>. So if a library changes a function from <arity 1> to <arity 2 or 1> you should get a type error, but the type checker is too loose. It’s subtle.

Basically a type of <arity 3, 2, or 1> should only type check with <arity 3> or <arity 2> or <arity 1>; it should not allow <arity 2 or 1>. You see what’s going on here? Subtle.

This is indeed as much of a type checker problem as it is defining what type correctness is. The definition above is simply a way of defining type correctness that fits with our intuition of what is correct for this given situation, so take what I wrote with a grain of salt. There could be situations where the current definition of type correctness in typescript is more correct than the definition I provided.

Our intuition is complex, and if you think long and hard enough you may be able to come up with a formal definition of type correctness that perfectly fits our intuition and therefore elegantly unifies TypeScript's looser definition of correctness and my own stricter definition.

Beware though, often human intuition can be contradictory. This means that a formalization of our intuitive notions of type correctness will also be contradictory and therefore unusable. In other words there may not be a way to type check for this issue while maintaining the convenience of the status quo.

Intuitively I think it’s possible, you just need special syntax to tell the type checker whether to use my stricter definition or the original looser definition that’s in use now.

Also I’m not sure if there’s any type checker in existence that handles that case (don’t know). So I believe this is more than just a JavaScript issue.


> Beware though, often human intuition can be contradictory. This means that a formalization of our intuitive notions of type correctness will also be contradictory and therefore unusable.

It is my very strong suspicion that in almost all (if not all) of the similar cases in programming methodology, there have been not just arguments on both sides, but implementations and real-world lessons on both sides.

A veeeery minor example: I’m equally as sure that someone designed and deployed systems that specifically created errors when you tried to pass less than the required number of params as I am that other people (or even the same people) specifically designed and implemented systems where params were optional (most likely because they hated having to specify empty params every time).


In a language that actually tries to help you create error-free code, a type check would prevent this.

This is really just a wonderful example of how javascript is a mental burden to programmers instead of a useful tool.


How would a type check prevent this?


Well the issue mentioned is that map calls the callback function with up to three arguments[1].

A type check could prevent this because it would require map to take a reference to a function with three parameters, or the compiler would complain inside the map implementation.

Similarly, passing it a function with only one parameter would be a type violation and the compiler would complain.

Now in a language with type checking, you could still potentially run afoul.

Say the map function was overloaded with one variant for one-parameter callbacks, one variant for two-parameter callbacks etc. Then the compiler might figure out it could use the second overload if the "toReadableNumber" function got changed to take the extra "base" parameter.

So again you end up with the numbers getting converted with a variable base.

Though, IMHO, having such an overloaded map function is inviting trouble and is a very poor design.

[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
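
For anyone who hasn't run into it before, the classic demonstration of that three-argument behaviour is a one-liner (plain JS, runnable in any console):

    // map invokes its callback with (element, index, array), so a function that
    // happens to accept a second argument silently receives the index.
    ['10', '10', '10'].map(parseInt);          // [10, NaN, 2]: the index becomes the radix
    ['10', '10', '10'].map(n => parseInt(n));  // [10, 10, 10]: wrapped, as the article suggests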


> or the compiler

Functions can be assigned at runtime. It would have to be a runtime error.


A proper typesystem for functional types will check at compile-time that in all places where you would assign such a function, the function will always have the right type.

So it's NOT a runtime error, it's a compile-time error, even though you can assign different functions at runtime.


You can assign a reference to a function and pass that to a function that expects a callback, and then change that assignment based on incoming data. And if that’s not enough, you can create and modify functions at runtime. And you’d have to contend with JavaScript’s spread syntax and rest parameters.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


But it would only have to be a runtime error (your words) for the code which is dynamically loaded.

Static code will get checked at compile-time, even though the exact function passed as callback depends on the incoming data via a series of if/case statements say, from a dictionary or whatever.


It would check the arguments and return type of the function. Map takes a function with three arguments, toReadableNumber only takes one, therefore the functions are of a different type. So someNumbers.map(toReadableNumber) would be an error and not execute at all, instead of being a "bad practice" / potential mistake.


What if toReadable also takes 3 args of the same type but different meaning?


You can create custom types to communicate semantics to the type checker. Whether that's always a good idea is arguable though, but the tools are there (e.g. there's a wide field between weak and strong typing, and an overly strong type checker can be quite a hassle to work with while an overly weak type checker isn't much better than duck typing).


A type check would give map a type, say [a] -> (int -> a -> b) -> [b]. If you try to apply it to any function that does not have the type (int -> a -> b), the check would fail.


So basically any language that does any form of type checking at all.


See also magicalhippo’s comment on overloading. We could introduce real types instead of primitives, so that ‘number_formatting_base_t’ would conflict with ‘array_index_t’, but nobody in the world bothers beyond bare ‘int’.
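
For what it's worth, TypeScript can approximate that with a branded-type sketch; the type name below mirrors the one above, the rest is purely illustrative and assumes strictFunctionTypes is enabled:

    type number_formatting_base_t = number & { readonly __brand: 'base' };

    declare function toReadableNumber(num: number, base?: number_formatting_base_t): string;

    // the plain number index that map passes is not assignable to the branded
    // base type, so this call now fails to compile:
    [1, 2, 3].map(toReadableNumber);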


Yeah, I see the point. In C#, for example, if you have overloaded methods it tries its best to map it to the accepted type. However, you'd have to work really hard to invent a case where a library updates a signature to accept another type, and then you in your code have 2 overloads of a method that you pass and it now chooses the wrong version, all while the library change doesn't break any other code.

Scenario: Library changes signature so you always pass functions with more parameters.

Result: This will break almost every codebase, they probably wouldn't do this in a minor patch.

Scenario: Library adds a new signature where you can pass functions with more parameters and keep them overloaded between each other.

Result: The compiler can't identify which of the two signatures to match your function against and throws a compilation error.


Actually, that's what I use inline classes in Kotlin for.

So username, password are distinct from string and objectid might be backed by an int, but is distinct from int.

Obviously, once compiled, there's no overhead, it's a zero cost abstraction. But an incredibly useful one.


On my personal projects with F# (never got a chance to use it professionally) I actually use them. Makes modeling the data much easier.


Not including TypeScript.


Just to clarify, if you update toReadableNumber to take a second parameter and that parameter is not a number, typescript will complain.*

TypeScript won't catch the original landmine because it ignores the extra parameters; maybe some linter would? Is there a rule that enforces "functions used by map must spell out all the parameters?"

* Example of typescript giving an error on toReadableNumber_v2: https://www.typescriptlang.org/play?#code/FAYw9gdgzgLgBAQwE5...
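
Roughly what that looks like (the locale parameter here is made up purely for the example):

    // hypothetical v2 adds a non-number second parameter
    function toReadableNumber_v2(num: number, locale: string): string {
      return num.toLocaleString(locale);
    }

    [1234, 5678].map(toReadableNumber_v2);
    // type error: map supplies the index (a number) where 'locale: string' is expected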


If your code has tests, an API change like the one described in the article would get caught immediately.

And it only works because the `Array.prototype.map` callback has one required argument and two optional ones. What language with optional function arguments protects you from this sort of behavior? Do people in that language not test this sort of behavior? Or at least run through the app?

More than that, the whole hurrdurr-javascript-bad thing is so tired. Lots of work is being done in JavaScript. Sure, it has its quirks, but those quirks come with expressiveness. You can use functional or imperative style, throw lambdas around, and it runs on pretty much all the phones and computers in the world.


If you have a good type system you don't need to write tests like this. I test my algorithms not my function calls.

Also:

> Lots of work is being done in JavaScript

Is that it? It's one of the best funded languages on Earth, what do you expect - with D for example we can do all the things you mentioned and catch errors like this, and we're basically just some guys working on the language not the combined might of the entire Internet sector.


> What language with optional function arguments protects you from this sort of behavior?

Languages with a sane type system.

TypeScript has no issues catching such mistakes. Writing tests to catch simple type errors is such an incredible waste of time.


The article has a whole section called "TypeScript doesn't solve this", with examples and stuff. Is it mistaken?


Partially.

TypeScript doesn't complain when you pass in a function that ignores some of its arguments. Which is totally fine and safe. If you upgrade your function from no second argument to a numeric second argument TypeScript will not complain and your program might break.

It will not crash, because it is still perfectly type-safe, but it might not behave like you want it to. So in that sense the article has a point.

However, this is just one instance of a larger issue with changing the behavior of a function, while keeping the types compatible.

  - Using a function as a callback to .map() with two numeric arguments and then swapping the arguments.
  - Returning a tuple of two of the same type and swapping the order.
  - Returning a string in a new encoding.
  - ...
Basic rule: if it's a type error and the program might crash, TypeScript will complain. If the types are fine and only the behavior changes, TypeScript will (obviously) not complain. Callback functions and optional arguments are not special in this regard.
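
A tiny sketch of that middle case, with a made-up function standing in for the article's example:

    // v1 was: const scale = (n: number) => n * 2;
    // v2 adds an optional numeric parameter, and still type-checks as a map callback:
    const scale = (n: number, factor = 2) => n * factor;

    [5, 5, 5].map(scale);
    // v1 result: [10, 10, 10]
    // v2 result: [0, 5, 10], because the array index (0, 1, 2) is now the factor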


Thanks for the explanation.

> If you upgrade your function from no second argument to a numeric second argument [...] It will not crash, because it still perfectly type-safe

It's type-safe using a definition of "type-safe" that is defined relative to the underlying JavaScript model of "corrupt the user's data rather than crash". It wouldn't be type-safe using most other languages' (including Python, for example) model of "functions have a type that includes the number of arguments, and applying them to the wrong number of arguments is meaningless and hence not considered type-safe".

It's fair for TypeScript to use this approach. But it is surprising to many of us, who view type systems as tools for ruling out some dumb functional bugs, not just crashes.


> It will not crash, because it still perfectly type-safe, but it might not behave like you want to.

Which is generally worse than crashing, because silent data corruption can have far-reaching impacts.


Here you go:

  Array.prototype.map = function (func) {
    if (func.length !== 3) {
      throw new Error(
        "bad! this function you've passed must take THREE arguments. Grr!"
      );
    }
    // build a new array rather than mutating in place, like the real map does
    const result = [];
    for (let i = 0; i < this.length; i++) {
      result[i] = func(this[i], i, this);
    }
    return result;
  };
;)

(making this workable and bug free is left to the reader or multi-billion dollar corporations)

There can be a non-indexed map as well.

I'm most curious as to who uses the last argument in the JS map function!


> I'm most curious as to who uses the last argument in the JS map function!

It's convenient for some things that would otherwise require a reduce (but where reduce isn't particularly more efficient, because you just need lookahead/lookbehind) or an imperative loop, like transforming a list to a set of moving averages over the list.

It's a little more expressive than reduce or imperative loops in those cases, too.


> I'm most curious as to who uses the last argument in the JS map function!

array.map((val, index, {length}) => { if (index + 1 === length) { alert("last element") } else { alert("element #" + index) } })


It also provides a useful example of the limitations of TypeScript’s type checking. TS is an improvement on JS in this area, but sometimes people overestimate the safety guarantees it offers and forget that its type system is still unsound.


TypeScript trivially catches the mistake:

    const func = (i: number, x: boolean) => i * 2
    const nope = [1, 2, 3].map(func) // type error!


That case really is a type mismatch, so you would hope a decent static type system would catch it.

However, you don’t see the same warning in the case of functions that can be called with variable numbers of arguments if the types of the arguments being unintentionally supplied do match, because within the rules of TS, this is working as designed.

Combined with the perhaps unfortunate decision to provide a standard `map` function that doesn’t use its callback as most languages do, there is still the potential for an unexpected change of behaviour that the type checker can’t warn you about here.


It doesn't if the x parameter in func is :number.


Because that's no longer a type error. TS is not going to help you calling the right function. :)


It kind of is a type error, in that the error comes from the programmer incorrectly thinking that the function has type t => r, when it actually has type (t, s) => r. It's not a type error from TypeScript's point of view, because according to TypeScript's rules, both t => r and (t, s) => r are subtypes of (t, s, u) => r. The point of type systems is to detect cases where the programmer is wrong about the types of some value they're using, so it is a limitation of TypeScript that it wasn't able to detect the error in this case (I'm not sure if there's a good way to allow it to detect this particular error while also maintaining compatibility with JavaScript, but it is a cost that's being paid for that compatibility).
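
A small illustration of that subtyping rule (the alias name is arbitrary):

    type MapCallback = (value: string, index: number, array: string[]) => number;

    const unary: MapCallback = (value) => value.length;                 // fine: extra args ignored
    const binary: MapCallback = (value, index) => value.length + index; // also fine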


TS is taking on an impossible challenge in trying to add a robust type system on top of JS without harming compatibility. Its designers have chosen to favour compatibility where the two can’t be reconciled, and that is a reasonable, pragmatic choice. Better a new language that offers some improvements and lots of people actually use than a new language that offers somewhat more improvements that hardly anyone uses?

Unfortunately, this does mean TypeScript’s type system can’t be entirely sound. A classic situation that is also legal according to the rules of TS but “ought” to fail type checking is something like this:

    let arr_num: Array<number> = [1, 2, 3]
    let arr_opt: Array<number | null> = arr_num  // Erm...
    arr_opt[0] = null  // ERM!!!
Now arr_num[0] is null, clearly violating the intended type constraint.

This problem could be fixed by making it an error to alias arr_opt to arr_num. However, that might also cause a lot of extra work for anyone trying to migrate an existing JS code base, particularly if the types involved are not of their choosing but instead determined by code written elsewhere.

For example, if you called a library function that returned an Array<number> and you passed that into another library function that required an Array<number | null> and wasn’t going to modify that array, enforcing the constraint could mean that working code was broken for no real benefit.

Then you get into deeper questions about enforcing immutability using the type system, and finding that again you’re building on sand because you still have JS underneath. IMHO, it’s hard to blame the TS designers for not wanting to go down these kinds of rabbit holes.


In the end isn't that the same argument for JS? The programmer is responsible for passing a correct callback that accepts the 3 params that the map() provides, ie "calling the right function".


I think the author may have gotten their TypeScript example wrong.

Yes, TS is fine with passing more arguments to a callback that takes fewer. The callback cannot possibly use the additional arguments, so it doesn't matter what gets passed as it will not change the outcome.

This is very different from passing the wrong kinds of arguments to functions that do read them and do something with them, like parseInt.

Now, if you decide to pass a function with an optional second argument that matches the second argument that will get passed to the callback and expect that it will not be used because why would anyone pass additional arguments to a map callback - then yes, you will have the problem again.

  function addOneByDefault(num: number, addAmount = 1) {
    return num + addAmount
  }

  [1,2,3,4].map(addOneByDefault) // this typechecks but works poorly
This extra example is missing in the article and might be helpful to add.


Does Dart handle it better?


It is very possible for a type checker to catch this mistake. I illustrate the intuition here:

https://news.ycombinator.com/item?id=26046808


I don’t see any disagreement that some type checker could catch this unintended behaviour. Many popular languages have checkers that would. The question here appears to be whether TypeScript’s type checker could do it without other consequences that are considered unacceptable.


Read carefully. The distinction here is that the type checker must allow for intended behavior within JavaScript while checking for an error.

The type checking I am talking about is not a sum type. It is not that the function can take a two different possible types. It's the fact that the parameter function can mutate into two different types depending on the usage. It has (<arity 1 or 2>) not (<arity 1> or <arity 2>) if you catch my meaning.... Or in other words the concrete type is not evaluated when you pass the function as a parameter but only when it is called with a certain amount of parameters... which is not something type checkers I know about look for.


The fundamental problem with the example under discussion seems to be that while the behaviour might not be intended by the programmer, it is working as specified as far as the language is concerned and changing that specification to make the unwanted behaviour fail a type check could have additional and unwanted side effects.

Perhaps I’m not correctly understanding your idea around arity as part of the function types, but so far it’s not obvious to me how what I think you’re describing helps to resolve that contradiction. Are you suggesting a way the type system could be changed without causing those additional, unwanted side effects?

Do you by any chance have a more rigorous definition or even a formal semantics for your proposed arity types that you could share, so the rest of us can understand exactly what you’re proposing here?


> The fundamental problem with the example under discussion seems to be that while the behaviour might not be intended by the programmer, it is working as specified as far as the language is concerned and changing that specification to make the unwanted behaviour fail a type check could have additional and unwanted side effects.

You don't need to change the behavior of the program. You can change the type checker to catch the unwanted error.

>Perhaps I’m not correctly understanding your idea around arity as part of the function types, but so far it’s not obvious to me how what I think you’re describing helps to resolve that contradiction. Are you suggesting a way the type system could be changed without causing those additional, unwanted side effects?

It's not formalized anywhere to my knowledge and I'm not willing to go through the rigor to do this in the comments. But it can easily be explained.

Simply put, what is the type signature of a function that can accept either two variables or one variable? I've never seen this specified in any formal language.

To fix this specific issue you want the type signature here to specify only certain functions with a fixed arity.

When some external library updates a function that previously had <arity 1> to <arity 1 or 2>, that could be thought of as a type change that should trigger a type error.

Right now the type checker recognizes F(a) and F(a, b=c) (where c is a default parameter that can be optionally overridden) as functions with matching types.

  F(a) == F(a, b=c)
  F(a,b) == F(a, b=c) <-----(F(a,b) in this case is a function where b is NOT optional) 
  F(a) != F(a, b) 
From the example above you can see the type checker lacks transitivity (a == c and b == c does not imply a == b), because the type of a function with an optional parameter is not really well defined or thought out.

This is exactly the problem the author is describing. The type checker assumes that when the library changed F(a) to F(a, b=c) that the types are still equivalent, but this breaks transitivity so it's a bad choice and will lead to strange errors because programmers assume transitivity is a given.

You don't see this problem in other type checkers because JavaScript is weird in the sense that you can call a function of arity 1 with 5 parameters.


Exactly, TypeScript still remains a transpiler.


I don't think we can excuse it by citing the nature of source-to-source compilers. Other languages like Haxe, Elm, ReasonML, ClojureScript etc that target JS don't suffer from this.


Unsound type system was a sacrifice made for easy migration from javascript.


And wildly successful at that. At least I can use typescript with my existing codebase. Rewriting everything in haxe isn’t exactly appealing.


That may be for the static type system but you could still check and signal an error at runtime instead of silently ignoring it.


In Ruby there’s a similar situation.

While blocks, procs, and lambdas all have arity metadata, only lambdas check for the argument count when called. The other two drop excess arguments and fill missing arguments with nil.


I think this problem is almost nonexistent in Ruby.

If you're inlining your block as a literal do-end block on the call site, it's just a matter of knowing what kind of data you're calling the block-taking method on. So blocks are kinda different.

If you're designing a more intricate piece of code to be used repeatedly by a 'map' or 'reduce' (like in the example), nothing is preventing you from defining a lambda instead of proc. And nothing is preventing you from designing your library so that it exposes only arity-checking lambdas to the outside.

But it's also quite usual to define callbacks as plain old methods (e.g. Rails before and after actions). Methods can be easily used as a block by getting the actual Method object first with the 'method' method, then using the & syntax to automatically convert them to a proc (e.g. map(&method(:foobar)) which again, converts them to arity-checking lambdas.


Yep the problem is very much reduced but it still exists: an API/DSL provider that hinges on blocks can change the args under your feet and it would only blow up when the argument values start to receive unexpected methods, instead of at the interface.

As you mentioned, lambdas and methods check for that, but it’s sad to have to give up the syntactic and lexical niceties of blocks.


I was about to say, just use typescript and it'll throw you an error that the function expected one argument but received three.


This won’t prevent all errors - the author explicitly addresses this towards the end.


Yes. I meant specifically just the "too many args" issue


This would have been one of the things one would expect to be fixed in strict mode.


You should check the parameters. For example:

  function toReadableNumber(num, base, trap) {
    if(base == undefined) base = 10;
    if(typeof base != "number") throw new Error("Second argument should be the base! base=" + base + " (" + (typeof base) + ")");
    if(trap != undefined) throw new Error("Did not expect a third argument. Are you using this with map? Then use an intermediate function.");
    // ...the actual formatting work would follow the guards
    return num.toString(base);
  }


This won't help in most cases, because you're not going to be able to get the people writing libraries to add guards to every single function they make.

This is a problem that should be handled at the language level, not by adding multiple lines of potentially incorrect code for every dozen lines of regular code.


You do not have to add "guards" to every function, just the functions publicly available via API. And you only need to add them when making (breaking) changes, like adding more parameters, but most of the time you can figure out what the caller wants to do and keep your code backwards compatible.

Also you should wait until your API is somewhat stable before adding the guards. So for most code, you do not need guards. But if your code is used by many, that defensive coding/guards, taking only a few minutes to add, will save countless man-hours that would otherwise be spent debugging.

As a general rule I like errors to throw early. So when I find a bug, (I first write a test to automatically reproduce the bug, then) I backtrack and add guards to each step (with helpful debug/data in the error message), so that the bug would be caught at the surface, rather than causing weird issues several layers down.

And guards are much easier to write than complicated type definitions. And the errors will be more informative, helpful and human friendly than errors from a type-checker.

Defensive coding is mostly useful in long living apps that have a lot of state, and which is constantly developed (new features added, breaking changes, etc). You would not need defensive code in programs that are executed once and then thrown away.


I personally prefer extra arguments to just be ignored (and allow to use a default like `function fn(a, b='x') {`). I also believe in most other _dynamic_ languages this is allowed, so it's just about what kind of language it is, not just JS vs the rest.


> I also believe in most other _dynamic_ languages this is allowed

Most definitely not. It's not allowed in Python, it's not allowed in Ruby, it's not allowed in any Lisp I know of[0], … it is allowed in PHP, which is about what I'd expect from that[1]. In most dynamic languages the arity is not a suggestion[2].

Which is exactly the issue at hand: `Array#map` was (stupidly) defined as calling its callback with 3 parameters. The last 2 are useless 99.99% of the time (and in a better language you'd compose them in if and only if you needed them), as a result it's almost universal that you'd pass single-parameter callbacks, which works… until it doesn't, because the callback now takes 2+ parameters and starts taking into account the previously ignored garbage `Array#map` feeds it. The average JS developer likely doesn't even know Array#map callbacks receive 3 parameters, and usually isn't going to think about it: in 99% of cases it has no relevance whatsoever.

[0] but most lisps make significant uses of variable-arity functions, which is a very different and much more formal proposition

[1] PHP's one saving grace being that HoFs have historically not been much of a thing, though I have not tracked how it's used these days

[2] as long as it's present at all; AFAIK Perl functions don't have formal parameter lists


This seems to work on Python:

  def hello(a, b = 'world'):
      print(a, b)

  hello('hello')
  hello('hello', 'world')
But these don't, so fair point:

  hello('hello', 'world', 'there')
  # nor
  def hello(a):
      ...
  hello('hello', 'world')


Yeah, problem with JavaScript function signatures is that every function argument is optional, including all the arguments you didn't write.

In python those functions would look like:

    def F(a = None, b = None, *args): ...
And you can't write any other kind of function in javascript. I don't really like that aspect of javascript, it creates so many hard to debug situations.


The first is an example of default argument syntax, it doesn't mean you can call functions with extra arguments, only that a value will be provided from the declaration if the call doesn't.


I'm a bit uncomfortable with the overtly broad use of the term "callback" in the article ... most examples aren't callbacks per se, just higher order functions expecting function expressions as inputs (which may or may not act as "callbacks").

Is this a "in the javascript world everything is a callback" thing, or is the author just using the term loosely?


Thanks for pointing out this confusion. I'm a backend guy, and anytime I have to deal with JS I keep wondering wtf people are talking about and whether I'm dumb. Apparently they just switch words for free.

This is not helping beginners, and it makes documentation and Q&A especially hard to search.


I agree. I started my notes on this under the heading "callbacks" but ended up under "functions". The article could be titled "Pass arrow functions and partials, not objects you don't control."


It is articles and comment threads like this that make me glad I am not a Javascript developer :-)


What would be the strict definition of "callback" ?


A function that will be called... a continuation. eg. "call me back when you are ready". or "call me when you are outside", "call me when you have arrived", "call me later".


The term "callback" is usually used to refer to a function that you want called after the completion of an async operation (like a database read, a file read, a network call, etc).

This article uses the term callback but then uses functions passed to "map" as its examples; "map" is synchronous and so this usage of the term "callback" is atypical, probably atypical to the point that we can just say it's incorrect :).


FWIW, Wikipedia defines "callback" as "any executable code that is passed as an argument to other code; that other code is expected to call back (execute) the argument at a given time" and synchronous callbacks are mentioned explicitly [0]. I don't think Jake's usage of the word is atypical.

0 - https://en.wikipedia.org/wiki/Callback_(computer_programming...


The term "callback" is usually used to refer to a function that you want called after the completion of an async operation

I’m not aware of any authoritative definition, but the term “callback” has been used for functions passed in as parameters to higher-order functions since long before the modern idioms for asynchronous code were around. The first use of the term I can remember personally was for the comparator function passed to qsort in C. That was probably sometime in the 1980s. Another common usage that goes way back is for event handlers in event-driven systems.


Exactly, the term typically used for the specific type of callback being discussed here is a "continuation callback", and the specific style is "continuation passing style"

The term is actually used correctly; callback's definition is broad. Of course, no one can resist the temptation to berate members of the JS community.


It's not exactly wrong, as map does call the function passed to it. Maybe the topic was named that way to get upvotes, as many people do not like callbacks (which are only a convention on top of higher-order functions, used extensively in Node.js for async, as JavaScript by itself does not have any async functions!)... The proper title would be something like: "Unexpected parameters when using Array.map and new Promise"


Yep that's it. Most of the javascript world is very open to beginners and not that worried about being strictly technically correct, when it doesn't really matter. A callback is pretty much any function that you pass as an argument. It's still not as much a programming language community as it is a website making community.


Callback is technically correct. The specific type of callback that parent has in mind is a continuation callback.

Maybe you should check your bias about the JS world (as well as your definitions).


Another way to frame this for library developers: adding parameters is a breaking change in javascript, and it should be noted in changelogs as such.


> adding parameters is a breaking change in javascript

TBF adding parameters is a breaking change in most languages unless they have defaults, and sometimes even then.

In javascript it's a corrupting change, it may silently break all callers.


I would not say that - maybe you meant only dynamically typed languages? Even then I'm not sure...


> I would not say that - maybe you meant only dynamically typed languages?

No I don't? A breaking change means working code doesn't work anymore. In a statically typed language, if a dependency adds a parameter to a function your code stops compiling. That's very much a breaking change.


Sorry, for some reason I must have overlooked your "defaults". Without those you are certainly right.


The way I’ve seen this dealt with in objective C, and I use this pattern in JavaScript, is to make a new function with the additional arguments, and then refactor the old function to be a convenience function that calls the new function and passes defaults to the new parameters.

Original function:

  function doSomething(foo){
    /* body */
  }
Refactored functions:

  function doNewSomething(foo, bar){
    /* body */
  }

  // now a convenience function
  function doSomething(foo){
    const bar = defaultValue;
    return doNewSomething(foo, bar);
  }
Doesn’t this solve this when updating a library?


So when I look up `toReadableNumber(num, base = 10)` in the current library documentation and pass it binary 11001010, in your codebase I get '11,001,010' instead of 202.


Wait, what?

Like if you only pass one variable, you get the old behavior, and to pass two variables you have to call a new function name.

What am I misunderstanding about what you are writing.


so you would have a new function called

toReadableNumberWithBase(num, base)


This is partly why I think sacrificing speed and type safety in the name of ease-of-writing is pointless in the long run - any productivity you gain gets lost in the test suite or in heisenbugs hidden down the stack that blow up where you can't easily fix them.


Yeah. Pretty horrible design flaw. Being able to add parameters in a backwards compatible way is essential for software development. I guess the only way to do it is via passing an object, which is like passing keyword parameters in other languages.
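
Presumably something like this sketch (the option name is only illustrative):

    // new capabilities go into the options object instead of new positional parameters
    function toReadableNumber(num: number, { base = 10 }: { base?: number } = {}): string {
      return num.toString(base);
    }

    toReadableNumber(255, { base: 16 });       // 'ff'
    [255, 255].map(n => toReadableNumber(n));  // unaffected by future option additions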


I don't understand this argument. Let's say you remove a typed param and replace it with a new one of the same type: how will your typed language protect you then? Is that a case that should be handled in a backwards compatible way? Changing a function signature is a breaking change. In some cases like function arity or different types your compiler could catch it and throw errors (and break your program, hence breaking change).

A better language might have a special syntax or specific types for mapping functions but that's not the argument you were making.


If you remove a typed param and replace it with a new one of the same type, typing will indeed not save you. But there should be other checks to warn you:

- Version should be bumped
- Dependencies should be informed via the changelog
- Existing tests of dependencies should fail


> Changing a function signature is a breaking change

Not if your language has default parameters and does not allow callers to pass "extra" parameters. Then you can easily add a new parameter with a default value and older callers will work as expected.
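
For what it's worth, that is how TypeScript already treats ordinary call sites; the callback case discussed elsewhere in the thread is the exception. A minimal sketch:

    function toReadableNumber(num: number, base = 10): string {
      return num.toString(base);
    }

    toReadableNumber(255);           // old callers keep working: '255'
    toReadableNumber(255, 16);       // new callers can opt in: 'ff'
    // toReadableNumber(255, 16, 1); // rejected: expected 1-2 arguments, but got 3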


I forgot to say I meant adding an optional new parameter to a function. Obviously adding a new parameter to a function is a breaking change for any API regardless of language.


Or you end up with Java-like names once you forget the option object: toReadableNumber, toReadableNumberWithBase, toReadableNumberWithBaseAndPrecision, ... or I guess toReadableNumberWithOptions.


uhm java has method overloading..


So does javascript.


I believe that JS doesn't have function overloading. Function overloading, the way I understand it, is the ability to declare multiple functions with the same name but a different signature (different tuple of arguments) and let the runtime/compiler decide upon the implementation used.


JavaScript supports this; here is a snippet from our SDK:

https://pastebin.com/xCnY8ZzV


That's TypeScript and it works only with type signatures, you still have only one actual implementation.


Isn't that more Objective-C style?


Aaand in objc, it would traditionally be:

  [array mapSelector:@selector(toReadableNumber:withOptions:)
          fromObject:[NSFormattingManager localizedFormattingManager]
       withArguments:@[@{kCFNumberFormattingBaseKey:@(10)}]];
Thank god they invented @-syntax for core type literals.


I am not sure this really has to do with callbacks - isn't this more due to the dangers and hassle of allowing a function with 3 arguments to be called with just one?


It's actually quite the opposite: in this case JS is allowing a function with one argument to be called with two extra arguments without complaining. In my view it's way worse.

Optional arguments (ideally named and with default values) are quite easily dealt with, and the developer is aware they'll break compatibility if they touch them. No error on extra arguments is another flavor of madness.


I gathered that it's the danger of allowing a function with 1 parameter to be called with 3 arguments, but I think you're right too.


You can still have a signature mismatch if you're not sending the correct args. The point is to not pass the function to map directly but use the more explicit arrow function.


Absolutely.


I would recommend using eslint and the following rule : https://github.com/sindresorhus/eslint-plugin-unicorn/blob/m...

eslint-plugin-unicorn has a lot of great rules, some are opinionated but you don't have to use all the rules.


The problem in general is a language feature silently increasing the API surface.

In this specific case the function was already callable with two parameters, without the library author's intent.

Another example to the general rule is the fragile base problem: every method is a customization point by default.

These "convenience" features can easily turn into headaches like this for library authors.


Let's talk straight about Javascript. Repeat with me:

Literally any kind of change to a function in javascript is a breaking change.

Don't believe it? Give me an example of a function and how you change it and I will show you the code that works with the original function but breaks with the changed one.


How do you feel about adding arguments to a "single object as named arguments that is destructured in function declaration"? I can also always come up with something that breaks, but seems pretty safe.

add({a:2, b:4})

function add({a, b}) { return a + b }

function _add({a, b, multiplier = 1}) { return a+b*multiplier }

challenge: come up with a breaking use that does not involve something having a property named "multiplier"


First of all, I _can_ demonstrate code that will work when using "add" but explodes when using the changed version - even with your restriction of not having a "multiplier" property.

However, let's keep the tension for a while. The fact that a function using "multiplier" would already work is itself a bit broken - it shouldn't work from the beginning, but javascript is designed so that it does. Hence you have to give me this restriction, because otherwise it is obvious how your example can be "broken".

Before I expose my (very simple) solution to still break your example even without using "multiplier", I would like to ask you to try to come up with another example first, where you don't require restrictions. I think it's a good exercise. :)

If no one comes up with one, I'll show it in, say, a day from now.


Alright, I didn't really come up with anything clever. I may have been too sloppy in the original function/restriction. Obviously once you start passing non-numbers, all bets are off. E.g. a simple add({a: "hello", b: "world"}) breaks. Add may well have been intended as a string concatenation function so yeah, fair enough.

A tricky thing I could see without ever explicitly defining "multiplier" (e.g. on the object prototype) is passing a Proxy that e.g. has a fallback for all missing properties. Detecting a proxy is only kind of possible (?) but we can copy all the original target properties from it, which should make it safe.

So here goes, my safe solution for modifying function signature in a non breaking way:

  function add({ ...args }) {
    const { a, b, ...rest } = args;
    if (typeof a !== "number" || typeof b !== "number") {
      throw "all arguments must be numbers";
    }
    if (Object.keys(rest).length > 0) {
      throw "You may only pass arguments a and b";
    }
    return a + b;
  }

  function _add({ ...args }) {
    const { a, b, multiplier = 1, ...rest } = args;
    if (
      typeof a !== "number" ||
      typeof b !== "number" ||
      typeof multiplier !== "number"
    ) {
      throw "all arguments must be numbers";
    }
    if (Object.keys(rest).length > 0) {
      throw "You may only pass arguments a, b and multiplier";
    }
    return a + b * multiplier;
  }
Most of these issues (not the proxy one) should be solved by typescript.


Okay, you definitely deserve praise for this, as it prevents problems in a real world scenario. Now I feel a bit bad for having made you write so much code.

In the evil world, I can break your code like that:

    try {
      add({a: 1, b: 2, c: 3})
    } catch (e) {
      if(e !== "You may only pass arguments a and b")
        throw "boom";
    }
However, you can of course make your exception string generic.

Then I'll have no choice but to use one of my jokers: calling "add.toString()" and inspecting your function in detail. Before you scream that this is stupid, please mind that this is actually used out there (looking at you, for example, angular).


Oh you wanna have a go? Let's have a go

  function add({ ...args }) {
    const { a, b, ...rest } = args;
    if (typeof a !== "number" || typeof b !== "number") {
      throw "no";
    }
    if (Object.keys(rest).length > 0) {
      throw "no";
    }
    return a + b;
  }

  function _add({ ...args }) {
    const { a, b, multiplier = 1, ...rest } = args;
    if (
      typeof a !== "number" ||
      typeof b !== "number" ||
      typeof multiplier !== "number"
    ) {
      throw "no";
    }
    if (Object.keys(rest).length > 0) {
      throw "no";
    }
    return a + b * multiplier;
  }
  add.toString = () => "nice try";

  _add.toString = () => "nice try";
Edit: OK I think we are stretching HN comment etiquette too far with this much code. This was fun though. Thanks.


Sure, let's do that :)

    if( Function.prototype.toString.call(add).includes("multiplier") ) throw "boom!";
> Edit: OK I think we are stretching HN comment ettiquete to far with this much code. This was fun though. Thanks.

Huh? Would you mind educating me about what part of the etiquette we are not following?


IDK not a real thing, you just never see it. I feel like it's a forum not a chat room and it doesn't collapse deep threads by default so the long code makes the page very long. Probably doesn't matter though.

At this point we can go ahead and break the world:

  add.toString = () => `function add({ ...args }) { const { a, b, ...rest } = args; if (typeof a !== "number" || typeof b !== "number") { throw "no"; } if (Object.keys(rest).length > 0) { throw "no";} return a + b;}`;
  _add.toString = () => `function add({ ...args }) { const { a, b, ...rest } = args; if (typeof a !== "number" || typeof b !== "number") { throw "no"; } if (Object.keys(rest).length > 0) { throw "no";} return a + b;}`;
  Function.prototype.toString = () => `function add({ ...args }) { const { a, b, ...rest } = args; if (typeof a !== "number" || typeof b !== "number") { throw "no"; } if (Object.keys(rest).length > 0) { throw "no";} return a + b;}`;


I think it's very interesting for others to follow this.

Now, we are leaving the original scope (not just changing a function, but modifying globals). read-only globals even. But prepare for my counter:

    let frame = document.createElement('iframe');
    document.body.appendChild(frame);
    if( frame.contentWindow.Function.toString.call(add).includes("multiplier") ) throw "boom!";
You might also go and kill "document.createElement", but there are many ways for me to get a new frame. I think when we come to the point where all these are disabled, only a small fraction of the websites that use javascript would still properly operate. It would be your victory though. ;)


> Before you scream that this is stupid, please mind that this is actually used out there

That is stupid though. It's like saying changing a private field in Java is a breaking change because someone might have used reflection to access it.

Taken to the moronic extreme: any detectable change is a breaking change because someone could write a function that pulls your latest release and depends on every bit being identical with the previous release.


Yeah, it is stupid, but it is also true.

> It's like saying changing a private field in Java is a breaking change because someone might have used reflection to access it.

Which is true, both in theory and practice.

Even look at sun.misc.Unsafe - which is deliberately named Unsafe and everyone was told not to use it. Then they tried to drop support for it and people freaked out so much that support was continued. (https://jaxenter.com/java-9-without-sun-misc-unsafe-119026.h...)

> Taken to the moronic extreme: any detectable change is a breaking change because someone could write a function that pulls your latest release and depends on every bit being identical with the previous release.

I would say it is best described here: https://xkcd.com/1172/


Yeah ok, it's a Saturday in lockdown, why not. I have a hunch that the only way I can do this safely is with a bunch of safety checks. I have to think about whether your breaking has something to do with the intricacies of function arguments in JS or if you consider the addition of a property to any type of "options" object that is passed around always breaking.

For day to day, I feel this pattern is good enough, as typescript works nicely with it.


  const add = ({ a, b }) => a + b;
  const _add = ({ a, b }) => b + a;


That one is rather simple to break.

    console.log(add({a:"1", b:2})) //12
    console.log(_add({a:"1", b:2})) //21
So to make the code break with the change to _add, I can just do:

    if(add({a:"1", b:2}) != 12) boom()


Ah of course!


How about this? (I am not 100% sure about this one)

    f = Object.defineProperties(x => x, {toString:{value:()=>'a'}, [Symbol.toStringTag]:{value:'a'}})
and we will change it to:

    f = Object.defineProperties(y => y, {toString:{value:()=>'a'}, [Symbol.toStringTag]:{value:'a'}})

Edit: ah you -can- break it actually. A challenge for others to figure out how to break this one.


Hehe :) smart one!


To be clear, you mean "literally any kind of change to a function signature [...]", right?


I wished so, but no. It's even true for changes in a function's body. Maybe that's a good hint, no? :)


id = x => x;

logging_id = x => { log(x); return x };


What's log? console.log?


Could be anything, just logging the value of x to some place. Assume it never raises exceptions.


I guess then I have to resort to the techniques (toString) that I have discussed in the other thread here!


Haha, someone was blaming 'typescript' after they'd made this exact mistake in Ask HN :

https://news.ycombinator.com/item?id=26039826

Strange coincidence.

Edit: I always wondered if jquery's .each deliberately had a signature of function(i, ele) to discourage people from mistakes like this, or if it was a happy accident.


You could derive a decent interview question out of that. Not the specific output of the bad call to parseInt, but simply "What arguments can you pass into parseInt?", and then simply ask about why it would be bad to pass as a callback to map. I'd probably have tripped up on the latter having not read this post, because I'd have never thought to not explicitly pass the radix to parseInt and rarely use anything but anonymous functions in a map call.

Though on second thought, maybe not a great question, hard to say. I know I've tripped up on the std sort function having not used it in a while.


Positional optional arguments seem to be the bad idea here.

If you have to have optional arguments, it's probably best to make them named.

This would make an unintended clash a lot less likely. The name has to match - and if you have the types, both the type AND the name would need to match.

Not only that, it also lets you have any number of optional arguments with a lot less fuss, and pass any subset of them.


First-class functions considered harmful. Of course it's JavaScript.

The inmates are not only running the asylum, they built it too.


For me the core issue seems to be that the map function has, in essence, three overloads which do very different things.

A strongly typed compiler would not catch the error in the article with a similarly overloaded map function, so type checking can't really rescue you from this situation.

Sure the chances of it happening silently might be slightly less, as the second and third parameter would have to match in type (integers in this case), but it could still absolutely happen silently.

So to me the core take-away is that overloading a function in such a way is a very poor design choice, regardless of language.


I don't quite buy it. If a library gets an update that is not backward-compatible, it does not matter whether you previously called that function as a callback or as a direct invocation: your code may break one way or the other.


Any update is going to be backwards incompatible to some extent, even a change to the version number can cause and has caused breakage in software that has buggy version checks. Because of that, taking backwards compatibility as an absolute is not useful, and instead we generally limit it to where the old version was being used correctly in supported ways. Whether calling a function with more arguments than it is specified to take is supported will depend on who is maintaining that function, but unless documented I would assume it isn't.


That much is obvious. The blog post has an example of something that appears to be non-breaking (at least it would be in many other languages), but actually does break things. Both the library creator and user messed up in the example given, but if you're not intimately familiar with the language, both mistakes seem reasonable.


The key word here is "appears": in reality it wasn't a non-breaking change.


But if you write defensively, like the post suggests, you won't be surprised by bad library writers.


That is good advice, but my original stance holds: it is not a specificity of callbacks per se that is the problem here.


Yes, but here the argument is to not blindly apply just any function to a map since the signatures won't necessarily match. It's harder to spot the errors and is just technically wrong, even if it does work in that particular situation.


Agreed 100%. As a library author, I'd also recommend that if you create functions you make way for this specific case. It's just very useful to be able to `.map()`, so allowing people to do it is a net gain. I've done so in my library `files` (https://documentation.page/github/franciscop/files):

> Ignores the second parameter if it's not an object so you can work with arrays better like .map(read)

It's also pretty simple to implement:

    // Assuming we are taking an options object {}
    export const myFunc = (arg1, arg2 = {}) => {
      if (typeof arg2 === 'number') arg2 = {};

      // ... rest of the code as usual
    };
So, library authors, do a favour to users and in those functions that could be used as a callback, add this option.

Some other tips/niceties I've learned over the years:

- For highly async libraries, allow accepting an unresolved promise (see the sketch after this list). It's pretty safe and easy to do on the library side, and will probably remove a bug or two on your users' side.

- Export default and named so the users don't need to worry about whether to `import * as files from` or `import files from` (assuming the library is small and there's no concern about tree-shaking).

- I also use a higher-order promise abstraction I created, Swear (https://documentation.page/github/franciscop/swear), but that's totally optional and you can treat any of my async libraries as normal Promises.
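
The promise-accepting tip boils down to something like this (illustrative helper, not the actual `files` API):

    // await is a no-op on plain values, so the same code path handles both
    async function double(value: number | Promise<number>): Promise<number> {
      return (await value) * 2;
    }

    double(21);                  // resolves to 42
    double(Promise.resolve(21)); // also resolves to 42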


1. Not every function invocation is a callback.
2. A JS library that changes function arity as if it doesn't break the API isn't being responsible.
3. This sort of paranoid defensiveness in API usage is a direct result of language (not developer) shortcomings.


Well, typescript doesn't solve this because "passing a callback function that doesn't use all provided parameters" is so common in the JS world that it would cause a lot of friction when migrating a JS project or when writing idiomatic JS code.

But I can definitely imagine typescript providing another "strict" option (or even something outside of the "strict" group, like the recent "pedantic" option for index access [1]) that would check for these potential errors.

[1] https://github.com/microsoft/TypeScript/pull/39560


Good article, but like - is this really a problem or even a pattern people use?


I've been bitten by it.

I'm used to statically typed languages that allow calling HOF's with a function reference. If the signature matches, there's no reason to wrap it in a lambda.

So I did the same, reflexively in a Node project recently.

I've learned my lesson and now I just know to always use a lambda for JS function arguments.


In my experience yes, and I use it myself as well, but with care. Optional parameters are a thing, so when you're doing something like the author describes, you need to consider that your function is quietly receiving these parameters but just discarding them.

I see people using .filter(Boolean) a lot for example. If the signature of that were to change, for sure it wouldn't be something that quietly gets implemented, but I wouldn't pass formatting functions to these operations carelessly, especially in a codebase where there may not be tests. Some of the safer ways I've seen are use of unary helper functions to wrap the callback, or having the callback actually take the arguments but discarding them like (element, _index, _array). At least that way you communicate some intent.
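
The unary-helper approach mentioned above is tiny to write yourself (lodash ships one under that name, for instance):

    const unary = <A, R>(fn: (arg: A) => R) => (arg: A): R => fn(arg);

    ['10', '10', '10'].map(unary(parseInt)); // [10, 10, 10]: index and array never reach parseInt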


I don't think so, at least not since JS got arrow functions. Now you would usually pass an anonymous arrow function as the callback to explicitly state the calling contract, and apply the original function inside it.


The parseInt bug that's mentioned in the article is very likely to catch you eventually if you use JavaScript for long enough.

The habit that I developed in response is to always pass a lambda as the callback. That is, write this:

    foo(x => f(x))
instead of

    foo(f)
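
For reference, the concrete parseInt failure looks like this:

    ['10', '11', '12'].map(parseInt);          // [10, NaN, 1]: map's index becomes the radix
    ['10', '11', '12'].map(x => parseInt(x));  // [10, 11, 12]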


Yeah, my first thought when I saw it was "who does this lol"


Yes it is (JavaScript is so widely used that probably every pattern is in use somewhere. I myself have done this many times and seen it being done as well!)


The core issue is that now you’re calling a different function.

A type system may help, but the fact persists. Type systems only catch changes in (args.length ++ args.map(primitive_typeof)). I think this issue is not really about types or arguments, but about loose naming and handling of dependencies and backwards compatibility. The theoretical author of toReadableNumber() simply ditched one function and introduced another one, in place. Apparently they did that because there is no way in their project to fork, retain and maintain both.

You may say that types solve 95% of this, but dynamic languages exist for a reason, and when you feel like using one or see an advantage in it, other techniques may be applied. We could resolve this by using semver on functions instead of modules (renaming them at import for convenience), but nobody does that.

Functions are fundamental building blocks: they take your args, do their job, return results and may have a separate environment (not in JS), but somehow they are not autonomous entities. In contrast, in the real world we use explicit versions of things: gtx 1060 6gb, iphone se, cat 6020b, and the same for their part numbers. Nobody specifies just "gtx" or "cat" in their package.xls.


I think a more important lesson before you get anywhere near this one in that logic chain is:

Don't attempt to code for every possible future.


…in JavaScript.


> but I still got folks on Twitter telling me to "just use TypeScript",

This type of behavior is exactly the reason why I think the existence of Typescript adds downsides to being a JS dev.

Saying "just use TS" is of equal value as saying: "just use Assembly" or "just use Dart". It has no value.

First and foremost, this type of behavior is a problem that needs to be addressed in JavaScript, the dynamically typed language that is embedded into every major browser. A heuristic will have to be discovered by JS devs, or TC39 will have to extend the standard.

Ultimately, it then comes down to a contextual personal choice of using TS over JS.

When it comes to separation of concerns, I can recommend reading this essay by Dijkstra: https://www.cs.utexas.edu/users/EWD/transcriptions/EWD04xx/E...


Personally, I'm not a fan of TypeScript anyway.

JavaScript is a lost cause to me. I'm not a full-time JS dev, so every time I have to touch one of our Node projects at work, I mentally prepare myself for very slow dev speed (ironic, considering the argument that dynamically typed, loosey-goosey languages speed you up) and frustrating bugs around `this`, mixed-up function arguments, forgetting to await promises returned from functions, etc.

But TypeScript really doesn't actually help that much compared to an IDE that understands JSDoc. Its type system is unsound and it's too accommodating of JavaScript's nonsense.

If I ever start a new project that just has to run on Node, I'd probably try one of these languages that transpiles to JS, but is totally different, like Clojure or OCaml.


Hopefully I can avoid stacks tainted by this kind of unnecessary "flexibility" until the end of my programming career. TypeScript was touted as the "fix all problems with JavaScript" language, but I poked holes in its type system within the first 10 minutes of using it.


The thing is, most of that flexibility or any of the things that could be considered quirks/flaws of the language have been around for a long time and probably will be due to browser compatibility issues. It isn't great, but it is what it is.

I enjoy TS and for web applications I wouldn't want to do without it, but it for sure isn't the "end all" since it still needs to work with those issues. Ultimately though in the case of some of the examples, a type system in a language with optional parameters doesn't excuse you from having to use your brain. Especially if you're the kind of developer who thinks testing your code is just unnecessary.


Yeah, TypeScript sounds great, but I quickly became a skeptic. JavaScript is a lost cause. There cannot be a JavaScript++ that will be reasonably safe.

I'm not a frontend guy, but I truly don't know what I'd do with a frontend project. ClojureScript? Elm? OCaml? I would even do JavaScript with JSDoc comments before I'd bother doing TypeScript.


What do you make of Dart?


Good question. I've only seen cursory information about Dart, so I can't really have an opinion. I'd be willing to try it, though.

From a quick web search, it looks like JavaScript interop isn't totally frictionless, which is probably a good thing too, as crazy as that may sound.

But it looks like it does rely on making up a type signature for the JS you call into. I assume there must be a way to use TypeScript signature files, too.

It's interesting. I'd like to look into it some time.


I'll admit to not being fully ready for all of JavaScript's footguns when I first switched from other languages, and most of the function related ones can be (as this article points out) mitigated by blindly wrapping everything in lambdas.

As one fun example suppose you're working with the snippet:

    const f = foo.predicate;
    return arr.filter(f);

If foo is a class instance and predicate references any instance variables via `this`, then just storing the method in a variable before using it will cause the whole house of cards to blow up, because the `this` binding is lost. So adding a reference to `this` is a breaking change for even moderately sane code. That problem is also easily mitigated with a lambda: const f = (x) => foo.predicate(x);
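
A self-contained sketch of that failure mode (hypothetical class, purely for illustration):

    class Matcher {
      constructor(threshold) { this.threshold = threshold; }
      predicate(x) { return x > this.threshold; }
    }

    const foo = new Matcher(3);
    const f = foo.predicate;

    [1, 5].filter(f);                      // TypeError: `this` is undefined inside predicate
    [1, 5].filter(x => foo.predicate(x));  // [5], because the lambda preserves the binding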


Yes, that annoys me every time I see it. One other example that is often used is `someArray.filter(Boolean)`. While that will likely never break, it absolutely could.
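
For what it's worth, the defensive version costs almost nothing. A quick sketch:

    [0, 1, '', 'hello', null].filter(Boolean);          // [1, 'hello']
    [0, 1, '', 'hello', null].filter(x => Boolean(x));  // same result, immune to signature changes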

Thanks for writing this. It's bookmarked and will share it with my team when necessary.


How could it break? If `Boolean` (which is basically just a casting function, right?) changed its signature?

That does sound very unlikely.


I understand the point, and I also wish TypeScript were a bit stricter about this, but the main reason I still do this sometimes is with filter when the function I pass in is a type guard: wrapping it in an anonymous function means duplicating the type guard logic, which is enough repetition for me to sidestep it (see the sketch below). That said, it's easy enough to just disable the ESLint rule that enforces this limitation for those lines.
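
Roughly what that case looks like, with an illustrative isDefined guard:

    const isDefined = <T>(value: T | undefined): value is T => value !== undefined;

    const values: Array<number | undefined> = [1, undefined, 2];

    // Passing the guard directly keeps the narrowing: number[]
    const direct = values.filter(isDefined);

    // Wrapping it means restating the guard to keep the narrowed type:
    const wrapped = values.filter((v): v is number => isDefined(v));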


Fwiw adding new arguments to a function should be a minor release increment (1.2.3 -> 1.3.0), so theoretically you could prevent this issue by locking the minor version numbers on dependencies (or you could definitely prevent it by locking your patch numbers)
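
In package.json terms (illustrative package names), that looks like:

    {
      "dependencies": {
        "some-lib": "~1.2.3",
        "other-lib": "1.2.3"
      }
    }

The tilde range resolves to >=1.2.3 <1.3.0, so a minor release that adds an argument wouldn't be picked up, and the exact pin blocks even patch updates.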

Practically speaking, I've virtually never run into this problem in 6 years of writing JS professionally. Of course your mileage may vary


The article content is great and a good advice, but let me nit-pick on the title a little bit.

It should be "Don't use functions as _arguments_ unless they're designed for it".

Functions passed as arguments to higher-order functions are just that, arguments.

A function callback on the other hand is just that, a call back, after a longer asynchronous run.


> The developers of toReadableNumber felt they were making a backwards-compatible change.

To me that’s the real point. In JavaScript where a function’s signature is not changed by the number of parameters, adding even an optional argument probably constitutes a breaking change.


Point-free function passing is only valuable from a stylistic point of view. In any dynamic language I don't think the downsides make it worthwhile.


If you did not use map, Promises, or object destructuring, you would not have these problems. Code would not look as cool though.


I agree and will take it a step further: if you didn't use JavaScript, you would not have these problems.


I'm sure other languages have the same problems too: features getting added to the language that make it harder to understand, leading to subtle bugs. I'm not a grumpy old-timer saying the old times were better; there are plenty of old JavaScript features that I also don't use, unless I'm working with someone who is so in love with those features that they can't live without them, in which case I just inform them of the implications so at least they're aware of the warts. The problem is that when you are in love/hyped you easily neglect the ugly parts and can tolerate a lot of pain. And then there's the curse of knowledge: once you learn something, it becomes apparent/obvious.


I've been bitten by this multiple times.


The JavaScript hate, combined with the elitist / superior-programmer attitude, is the lamest thing I've ever witnessed in this industry, especially on HN.

Who cares about the web, right?

While people keep crying about how bad JS is, I (and many others) use it for what it is: a tool with its flaws but also its advantages. I've written Rust, Python, PHP, and a bit of C, and honestly JS is still my favourite language to write, together with Rust.

People are lame.


Can't have a rando library break on you if you don't import any rando libraries.


    const readableNumbers = someNumbers.map((item, _index, _array) => toReadableNumber(item));


Why not just:

    const readableNumbers = someNumbers.map(item => toReadableNumber(item));


Hmm.. haven't coded in it for a while, but wouldn't typescript solve this?! Why are people still using vanilla JS?!


Perhaps you should read the article


Oops, definitely



