Hacker News | new | past | comments | ask | show | jobs | submit | reikonomusha's comments

The value NIL has type NULL.

No value has type NIL.


I think this article over-sells (to the point of being misleading) the amount of static type checking offered in Common Lisp, even by a relatively good implementation like SBCL.

In practice, SBCL's static type checking provides virtually no guarantee that successful compilation implies the absence of type errors. In that regard, I like to consider SBCL's static type errors to be a mere courtesy [1].

Typically, the static type errors TFA demonstrates are nullified as soon as anything is abstracted away into a function. To show a trivial example:

    (+ 2 "2")
may error statically because the Common Lisp implementation has special rules and knowledge about the + function, but

    (defun id (x) x)

    (+ 2 (id "2"))
will not. Furthermore, Common Lisp has no way to declare a type for the function ID that constrains its input type to equal its output type across all possible types. That is, there's nothing like the Haskell declaration

    id :: a -> a
    id x = x
or the C++

    template <typename A>
    A id(A x) {
        return x;
    }
In Common Lisp, the closest you can get is either to declare ID to be monomorphic (i.e., just select a single type it can work with), as in

    (declaim (ftype (function (string) string) id))
or to say that ID can take anything as input and produce anything (possibly different from the input) as output, as in

    (declaim (ftype (function (t) t) id))
Here, T is not a type variable, but instead the name of the "top" type, the supertype of all types.

[1] SBCL and CMUCL aren't totally random in what static type checking they perform, and the programmer can absolutely make profitable use of this checking, but you have to be quite knowledgeable about the implementation to know exactly what you can rely on.


(+ 2 (id "2")) may also produce a compile-time warning. Nothing precludes the implementation from having a special rule for that, just as (+ 2 (print "2")) does warn.


I think the criticism is relevant because TFA isn't the first to use the term "cognitive load" in the context of computing. It's a term thrown around quite often, so we should cross-reference its alleged meaning against the literature.

I myself find it to be a term that's effectively used as a thought-terminating cliche, sometimes as a way to defend a critic's preferred coding style and organization.


Hmm. Using a term from formal science literature to loosely back questionable arguments with the ruse of a scientific basis is a common issue. I pointed out that this article does not use the formal definition of the term, which, as you point out, is itself an issue. Put that way, I agree.

I think the article could have used a different term, or made a clearer statement of what it specifically meant by the term, to resolve this issue. Though I don't think it was done intentionally to deceive, since the article makes no mention of the formal literature or theory of "cognitive load" to back its arguments.


S-expressions make the user interface to code generation, a component of hot-reloading, more convenient. It's not a triviality or a gimmick in this context.

We can see from OP that it's actually quite annoying to specify code in a non-S-expression language (or really any language lacking a meta-syntax), usually requiring either

- stuffing and interpolating strings

- building ASTs with API functions provided by the implementation in a tiresome or verbose manner

But you're right that there are more aspects to hot-reloading than just the syntax and data structure.

Common Lisp gets away with it because it actually defines the semantics of redefinition of functions and classes. For instance, the standard says what will happen to all existing FOO object instances in memory if I change

    (defclass foo ()
      ((a :initarg :a)))
to

    (defclass foo ()
      ((a :initarg :a)
       (b :initarg :b)))
and even lets the programmer customize the behavior of such a redefinition.

Few if any other languages go to the trouble of actually defining these semantics, especially in the context of a compiled language (which Common Lisp is). It's these definitions that make all but the simplest instances of code reloading well defined.
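To make the customization concrete, here's a small sketch (my own illustration, not from the thread) using the standard CLOS hook UPDATE-INSTANCE-FOR-REDEFINED-CLASS to seed the new slot B from the old slot A when FOO is redefined:

```lisp
;; Original class definition, with one existing instance in memory.
(defclass foo ()
  ((a :initarg :a)))

(defvar *old-instance* (make-instance 'foo :a 42))

;; Standard CLOS hook, invoked when an instance is updated to conform
;; to a redefined class. Here we initialize the newly added slot B
;; from the retained value of slot A.
(defmethod update-instance-for-redefined-class :after
    ((instance foo) added-slots discarded-slots plist &key)
  (declare (ignore discarded-slots plist))
  (when (member 'b added-slots)
    (setf (slot-value instance 'b) (slot-value instance 'a))))

;; Redefine the class; *OLD-INSTANCE* is updated lazily, no later than
;; its next slot access.
(defclass foo ()
  ((a :initarg :a)
   (b :initarg :b)))
```

Accessing (slot-value *old-instance* 'b) afterwards triggers the update and yields 42, rather than an unbound-slot error.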



Lololol

> ITA Software by Google Airfare search engine and airline scheduling software. Cambridge, MA. Common Lisp is used for the core flight search engine. The larger Flights project is roughly equal parts CL, C++, and Java.

Read the last sentence AND this company got acquired by Google like 15 years ago. So ya my question still stands.


You asked where Lisp is useful, and I supplied a list of companies that find (or, in some cases of recent history, found) Lisp useful. Your Google example is pertinent, because Google had the resources to wholesale eliminate its use of Lisp any time within the last 15 years, but for some reason worth pondering, hasn't. Instead, they continue to develop the product in Lisp, and continue to contribute to the Common Lisp open-source ecosystem.

But that aside, if you want a fresh look at what people are thinking about with Lisp, maybe check out the talks given this year at the 2025 European Lisp Symposium [1,2]. Or perhaps look at how someone shipped a platformer game on Steam with Common Lisp [3,4] and is in the final lap of porting it to the Nintendo Switch [5].

I realize, though, that this kind of "debate" (?) is never satisfying to the instigator. If it does satisfy though, I will agree with you that—despite all of the claims of alleged productivity and power the language offers—Common Lisp remains far less popular than Python, which I assume is your only real point here.

[1] A presentation about how adding a static type system to Common Lisp à la Haskell helps write mission critical programs in defense and quantum computing: https://youtu.be/of92m4XNgrM

[2] A talk from employees of Keepit, a company that supplies a SaaS backup service, discussing how they train new hires on Common Lisp: https://youtu.be/UCxy1tvsjMs?t=66m51s

[3] Discusses technical details of how Lisp was used to implement a game that was actually shipped: https://reader.tymoon.eu/article/413

[4] The actual game that you can buy: https://store.steampowered.com/app/1261430/Kandria/ (This is not intended to be an advertisement and I'm unaffiliated. It's just a demonstration of a recently "shipped" product written in Common Lisp where you might not expect it.)

[5] Technical discussion of the Nintendo Switch port: https://youtu.be/kiMmo0yWGKI?t=113m20s


> Ok now do the part where lisp is actually used for any remotely useful project today...

> So ya my question still stands.

That list has 100 companies using lisp today. Were you actually asking if any new companies write in Lisp? Cuz those exist as well - in the same list...

Not sure you know what your own question is!


> That list has 100 companies using lisp today

https://en.m.wikipedia.org/wiki/Wikipedia:Spot_checking_sour...


Ah yes big lisp propaganda out there trying to convince you of the lie


Don't understand what this has to do with my point - do you think you're supposed to only verify "propaganda" sources?


Not surprising! You didn't make any point; you just posted a link to a Wikipedia page. Again, sources were posted, and you cherry-picked one that still confirmed what you asked about. Now you're doubling down and moving goalposts. Real companies use Lisp, in 2025. It's not that big of a deal.


CLOG [1] seems to do something similar (though it's hard to tell; Seed's README isn't terribly informative), except CLOG has more tutorials, is better documented, has a more fleshed out README, and has ongoing support.

[1] https://github.com/rabbibotton/clog


CLOG is something I've wanted to try for a long time but then I need to spend some time learning CL.


CLOG has its own CL course.


It doesn't make sense in and of itself. We usually think of division as a closed operation: we divide two things (like real numbers) and we get the same kind of thing out (another real number).

In Hamilton's original view of quaternions, he defined a "geometric quotient" of two 3d directed lines (one kind of object) as being a quaternion (another kind of object), and gave all sorts of complicated geometric formulas for how to calculate it.


Since a macro is a definition of syntax, I think you'd essentially need something like typing judgments to show how the typed elements of the syntax relate to one another, so that the type checker (e.g., a typical Hindley-Milner unifier) can use those rules. These are usually written as the fraction-looking things that show up in PLT papers. This is, as GP says, essentially extending the type system, which is a task fraught with peril (people write entire papers about type system extensions and their soundness, confluence, etc.).


Depends on your point of view really. I'd define a macro as a function with slightly weird evaluation rules.

If you want to write a macro where any call to it which typechecks creates code which itself typechecks, you have to deal with eval of sexpr which is roughly a recursive typecheck on partial information, which sounds tractable to me.


It can detect simple-ish instances, like calling

    (fn-t "hello" "world")
But a good rule of thumb is that these compile-time type errors are more of a courtesy than a guarantee. As soon as you abstract over fn-t with another function, like so:

    (defun g (x y)
      (fn-t x y))
and proceed to use g in your code, none of that static checking happens anymore, because as far as g is concerned, it can take arguments of any type.

    CL-USER> (defun will-it-type-error? ()
               (g "x" "y"))
    ;; compilation WILL-IT-TYPE-ERROR? successful
No compile-time warning is issued. Contrast with Coalton:

    COALTON-USER> (coalton-toplevel
                    (declare fn-t (U8 -> U8 -> U8))
                    (define (fn-t x y)
                      (* x y))
                    
                    (define (g x y)
                      (fn-t x y))
                    
                    (define (will-it-type-error?)
                      (g "hello" "world")))

    error: Type mismatch
      --> <macroexpansion>:8:7
       |
     8 |      (G "hello" "world")))
       |         ^^^^^^^ Expected type 'U8' but got 'STRING'
       [Condition of type COALTON-IMPL/TYPECHECKER/BASE:TC-ERROR]


I think it's three things:

1. Bringing abstractions that are only possible with static types, like ad hoc polymorphism via type classes. For example, type classes allow polymorphism on the return type rather than the argument types. Something like

    (declare stringify (Into :a String => :a -> :a -> String))
    (define (stringify a b)
      (str:concat (into a) (into b)))

    ; COALTON-USER> (coalton (stringify 1 2))
    ; "12"
The function `into` is not possible in a typical dynamically typed language, at least if we aim for the language to be efficient. It only takes one argument, but what it does depends on what it's expected to return. Here, it's expected to return a string, so it knows to convert the argument to a string (provided the compiler knows how to do that conversion). Common Lisp's closest equivalent would be

    (concatenate 'string (coerce a 'string) (coerce b 'string))
which, incidentally, won't actually do what we want.
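For completeness, the closest dynamic idiom that does work is PRINC-TO-STRING, though it dispatches on the value it receives rather than on the expected return type, which is exactly the distinction being drawn (sketch mine; the name STRINGIFY-DYNAMIC is my own, to avoid clashing with the Coalton STRINGIFY above):

```lisp
;; Dynamic Common Lisp version: each argument is converted based on
;; what it *is*, not on what the result is expected to be.
(defun stringify-dynamic (a b)
  (concatenate 'string (princ-to-string a) (princ-to-string b)))

;; (stringify-dynamic 1 2) returns "12", but no compile-time checking
;; constrains what A and B may be.
```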

2. Making high performance more accessible. It's possible to get very high performance out of Common Lisp, but it usually leads to creating difficult or inextensible abstractions. A lot of very high performance Common Lisp code ends up effectively looking like monomorphic imperative code; it's the most practical way to coax the compiler into producing efficient assembly.
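For a flavor of what that monomorphic imperative style looks like, here's a small sketch (my own illustration, not from the thread): summing a specialized vector of doubles, with the type and optimization declarations needed to coax SBCL into emitting tight, unboxed machine code.

```lisp
;; Monomorphic: the function accepts exactly one array type, with the
;; element type and rank pinned down so the compiler can open-code AREF.
(declaim (ftype (function ((simple-array double-float (*)))
                          double-float)
                sum-doubles))

(defun sum-doubles (xs)
  (declare (optimize (speed 3) (safety 0))
           (type (simple-array double-float (*)) xs))
  (let ((acc 0.0d0))
    (declare (type double-float acc))
    ;; Plain indexed loop: the most reliable way to get efficient
    ;; assembly out of a Common Lisp compiler.
    (dotimes (i (length xs) acc)
      (incf acc (aref xs i)))))
```

Fast, but inextensible: supporting single-floats or integers means writing (or macro-generating) a separate nearly identical function for each element type.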

Coalton, though, has an optimizing compiler that does (some amount of) heuristic inlining, representation selection, stack allocation, constant folding, call-site optimization, code motion, etc. Common Lisp often can't do certain optimizations because the language must respect the standard, which allows things to be redefined at run-time, for example. Coalton's delineation of "development" and "release" modes gives the programmer the option to say "I'm done!" and let the compiler rip through the code and optimize it.

3. Type safety, of course, in the spirit of ML/Haskell/etc.

