Ooh Ooh My Turn Why Lisp? (2008) (smuglispweeny.blogspot.com)
99 points by bootload on Aug 1, 2016 | hide | past | favorite | 157 comments


"The question is where will we be in five years, the answer is using Common Lisp, because of language features" says the author in the comments. This was 8 years ago.

Kind of rude of me to take that potshot, but it's there. I've been reading these "Why Lisp?" arguments for years, and yet it still remains pretty niche. The longer this goes on, the less I am able to tell Lisp advocates apart from Perl advocates. They both sound like advocates of write-only languages. They make a lone programmer feel very productive and free, but it results in essentially unmaintainable code. I've written and attempted to maintain much more Perl than Lisp, many years ago, so please tell me if I'm wrong.


Heh, yeah. Lisp has powerful features, I'm sure, but something that's always stumped me is, if that power mattered for delivering better code, would we have to strain so hard to find examples of why it's better?

Paul Graham's viaweb story, well, that's great, but why wasn't there a whole cohort of super-powered startups winning the day with lisp in the original dot com boom? Why are there so few big wins with lisp? The "lisp is powerful" story just doesn't seem like the whole story, when there are so few successes on the ground compared to C, C++, Python, Perl, PHP, Ruby, VB, you name it.


You confuse powerful with popular. Many startups don't need powerful tools beyond a CRUD framework and jQuery. Also, the existence and abundance of libraries is a factor, and Lisp didn't get much love from the OSS web folks until recently.


Well, no, I'm not confusing powerful and popular. I'm asking why, for all lisp's power, there's almost no examples of that power actually doing anyone any good.

Is it because most projects don't need something powerful? The benefits of that power aren't actually that great compared to the rest of the lisp baggage?

We have this trope of the smug lisp weenie and there's definitely a whiff of high wizardry around lisp, but who's actually shipping anything with it?


> doing anyone any good

There's a difference between doing and talking about. Compare the amount of noise from the Rust crowd to the amount of anything useful being done in Rust.

Let's look at only one product implemented in Lisp, by only one vendor, and its users in only one industry - http://allegrograph.com/healthcare/. Pfizer, Mayo Clinic, GSK, Novartis - should we stop with that 'doing anyone any good' mantra?


AllegroGraph is a great example. I wrote a book (actually 2 books, with Java and Common Lisp editions) using AllegroGraph. Free PDFs of both editions are available on the book's page on my website.


The thing is, I'm trying to ask: if Lisp is super-powerful and a secret weapon (the general sense of the claims), why isn't the real world littered with examples of companies that seriously out-competed using Lisp?

So far all people can do is say "no, SOMEONE is using Lisp!"

Well, great, but that's not the claim from the Lisp community.

The claim is that Lisp is powerful, so powerful that it's a secret weapon. Why is there so little evidence of that power being meaningful in actual competition?


The competition you're referring to is an economic one, based on the product. For any product that is a software application (whatever UI it sports: native, web based, remotely hosted, etc.), its commercial success is tied to its features and UX. For something like Amazon or Google, it's not all that important whether it's written in Lisp or BCPL or Prolog or whatnot; that's irrelevant in the competition among products.

Lisp's competition is on a technical level with other programming languages, and its target audience is programmers. It can only help by allowing easier development of technology, a more fluid continuity from idea to product, and robust and helpful debugging tools.

So writing a search engine in Lisp won't necessarily mean that you'll be one step ahead of Google in the search business in terms of end product; you still need to come up with a better idea or better UX if you want to compete with them, and other factors like advertising, conversion, etc. are in play too. But with Lisp you may need fewer development resources to produce the product than you'd need with C++ or Java.


The website of the company that became amazon was written in lisp.


I'm not trying to sound smug or dismissive, but I don't have much time for deep discussion right now. The question that pops into my mind is this:

How many unsuccessful languages have multiple long-lived commercial offerings?[0][1]

[0] https://en.wikipedia.org/wiki/Allegro_Common_Lisp

[1] https://en.wikipedia.org/wiki/LispWorks


Emacs has been shipping for three decades. It's the oldest living open source software.


Compared to bsd userland?


Well, if current BSDs are counted as direct continuations thereof (instead of descendants), maybe it's older. But Emacs has been directly continuous since day one.


You've probably played a Naughty Dog game. They use Scheme as the scripting language, but their PS1 games were written entirely in a Lisp.

…except it didn't have garbage collection. Lisp weenies are all GC weenies too, right? Is that acceptable?


Nah, GC weenies would probably be the Java guys ;). GC is an important and helpful part of a working Lisp system, but it's not the core of the magic.


It is, actually. Without GC it's almost impossible to have a productive "image-based model" that accelerates development so much.


Well, for a long time the only proper Lisp environments were commercial.

Still, people get by with Emacs + SLIME + whatever open source implementation instead of making use of those environments.



D-Wave uses Lisp / SBCL.

http://www.dwavesys.com

Some company doing mobile satellite-based Internet streaming used Lisp / Clozure CL.


Eh, I think pg's strategy is sound for a startup. If you know Lisp inside and out, absolutely use it to build your MVP. But once you get traction, you need a plan to transition to something you can actually hire for, just like Reddit did. Wasn't Orbitz Lisp? There are a few that kicked around.

On the other hand, if you don't know Lisp, you should use what you know instead. It's just not worth learning a language and solving your startup's problems at the same time.


> wasn't orbitz lisp?

Yes, and still is, as far as I know. The pricing engine is written by ITA Software, which was acquired by Google a while back.


I like niche stuff.

What language doesn't result in unmaintainable code, really?

And let me just say... it's a bit rude to spread this rumor that Lisp is anti-social and bad for business when you've barely even tried it. Seriously.

"Prove me wrong" is not a good way to discuss. See for yourself instead: learn Lisp and try to use it to write maintainable code.


I think you touch on an interesting area. For me, figuring out how to make code maintainable (including error handling) is still an area of research, and I'm not a newbie. At least Lisp, with its metaprogramming facilities, gives you some good tools for experimenting. And I do think that the experimental character of the original problem space in which Lisp was discovered plays an important role there.


Agreed. I think creating maintainable code is for various reasons very difficult, and I have barely seen any projects that succeed, especially not commercial software.

Based on my experience, I could say that unmaintainable software results from using Java, JavaScript, Ruby, shell, and so on... but I don't like to blame the languages.

Instead, I think it's simply that no language by itself can enforce properties that lead to maintainable software. For example, Java is very clean and simple, but that doesn't mean that typical Java code bases are clean or simple.

Programmers will eventually always find a way to introduce strange complexity, inappropriate metaphors, incomprehensible abstractions, befuddling reuse, overly clever tricks, etc.

If we really want to learn to make maintainable software, we should probably stop focusing on language features (and boring language wars) and think more about stuff like Eric Evans's concept of ubiquitous language... and, yeah, experiment with ways of writing clear code in whatever languages we use.


Well, I'm still using Common Lisp. If there was anything better, I'd have probably switched.


Curious, what are you using it for? Would you say it is well suited for writing operating system helper tools (working with files, pipes, kernel api, calling other programs)? I like bash expressiveness a lot but sometimes I feel I could use a more powerful scripting language.


I just discovered Turtle yesterday, a library for Haskell. Obviously I haven't used it much in that time, but superficially it looks amazing.

See http://hackage.haskell.org/package/turtle-1.0.0/docs/Turtle-... and http://www.haskellforall.com/2015/01/use-haskell-for-shell-s...

Here's an example:

  #!/usr/bin/env runhaskell

  {-# LANGUAGE OverloadedStrings #-}

  import Turtle

  main = do
      cd "/tmp"
      mkdir "test"
      output "test/foo" "Hello, world!"  -- Write "Hello, world!" to "test/foo"
      stdout (input "test/foo")          -- Stream "test/foo" to stdout
      rm "test/foo"
      rmdir "test"
      sleep 1
      die "Urk!"

Haskell is strongly typed: the first argument of `output` is a `FilePath`; the second is a `Text`. In some respects, strong typing is the opposite of power, though, because there are all these things you can't do, like accidentally (or deliberately) passing a Text to a function expecting a FilePath without translating it first.


I think the real magic here is you can drop down (fly up?) to writing Haskell code alongside this bash like DSL and have sane flow control.


Thanks, I'll have a look. But I'm slightly intimidated by new complex languages such as Haskell...


SCSH, a Scheme library/derivative built on Scheme 48, works well here. It has functions and macros for doing all those things. It works quite well but is sadly barely maintained. However, its features have been ported, to some extent, to just about every Scheme out there.


Hmm. In many walks of life, popular != best. In fashion, cars, investing, and to some extent cooking, for example, the top performing adherent is doing things differently than the crowd.

Lisp code can be very readable, but it takes a few years to learn how to write it that way. I spent many years in Perl 5 myself, and I found Common Lisp tricky at first but ultimately very satisfying.

Two jedi mind tricks for the beginner:

1. Don't think of Common Lisp as the language you want. The language you want is one layer above CL and fits the business domain you operate in. Build that. (Yes, this requires a little investment up front.) Then solve your problems in that "language". Because Lisp macros are powerful, fully transformative engines, you can do almost anything (short of breaking a few simple syntax rules... though newbs often underestimate what can be broken/bent).

2. As silly as it sounds, the parentheses coming first does bother folks. Every editor does syntax highlighting now. Do two things: (a) color the parens close enough to your background color that they visually fade and "back off" but are still visible when you need to look at them; your code instantly looks a bit more like Python, and the indentation sticks out. (b) Use a structured editing mode that inserts closing parens, keeps parens balanced automatically, and offers neat structured-editing hotkeys to copy/cut/paste/delete sexps (expressions) as units. Both steps are extremely comfort-building, and the ubiquitous parens remove a class of bugs.
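To make point 1 concrete, here is a minimal sketch of a one-layer-up "language" built with a macro. Everything here (DEFINE-DISCOUNT, the rule shape) is invented for illustration, not a standard facility:

```lisp
;; Hypothetical domain macro: declarative discount rules compile
;; down to an ordinary COND inside an ordinary function.
(defmacro define-discount (name (&rest params) &body rules)
  "Each rule is (TEST PERCENT); the first matching rule wins."
  `(defun ,name ,params
     (cond ,@(loop for (test percent) in rules
                   collect `(,test ,percent))
           (t 0))))

;; Business code is then written in that vocabulary:
(define-discount holiday-discount (total)
  ((> total 1000) 15)
  ((> total 100)  5))

;; (holiday-discount 2500) => 15
```

Calling MACROEXPAND-1 on the DEFINE-DISCOUNT form shows the plain DEFUN it expands into, which is the same "observe the expansion" workflow as with any other macro.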

The greatest compliment to Common Lisp that I've seen over the past 8 years is the constant pilfering of its language features by other mainstream languages.

As for readability: like Perl, TIMTOWTDI, and even more so in Lisp, because one can write macros as utilities to greatly simplify the actionable code and have it blow out into lower-level code that does the mechanical dirty work.

As a result of what I've said above, people working in CL do tend to write it slightly differently. You can see some evidence of that if you study different open source libraries, etc. It's not so much a rigid code style as an interest and ethos that binds the community.

I'll leave you with a little example of macros at work, via the old and venerable SERIES library, which allows one to specify loops in a semi-declarative style that, if executed literally, would cause multiple passes, but which compiles to a single pass over the data sequence.

The input is stock market "bars":

  (defvar *spy2006*
    '(("29-Dec-06" 142.08 142.54 141.43 141.62 45461200 141.62)
      ("28-Dec-06" 142.41 142.70 141.99 142.21 37288800 142.21)
      ("27-Dec-06" 141.87 142.60 141.83 142.51 39727100 142.51)
      ;; ...
     ))
This is what you might code, which I humbly submit to you is very readable:

  (defpackage iteration-testing-series (:use :cl :series))
  (in-package :iteration-testing-series)

  (defun combine-bars (bars)
    "Summarize a sequence of BARS into one bar."
    (series::let ((zbars (scan 'list bars)))
      (list (first (first bars))
            (second (first bars))
            (collect-max (map-fn 'float #'third zbars))
            (collect-min (map-fn 'float #'fourth zbars))
            (fifth (car (last bars)))
            (collect-sum (map-fn 'integer #'sixth zbars)))))
That function is very easy for me to read. Partially expanded, it becomes the following, which is itself further expanded. Note that it was trivial for me to request this expansion, and that multiple macros are composable, so that many features can be engaged at once. I can write something high-level, but also observe the expansion all the way down to low-level code (SBCL can even trivially show me the resulting assembler code for my CPU). I am freed up to think at a high level, but I don't lose control over the low level.

The following block may be daunting in size, but if you know that SETQ is an assignment and that within the TAGBODY there are labels for GO (which is like a goto), it should be possible for a programmer of other languages to follow. Note also the type declarations, carried over for me automatically, which allow for speed and debugging optimization; in CL that's a knob I can turn, something I haven't seen in other languages.

  (LET* ((#:OUT-1014 BARS))
    (LET (ZBARS
          (#:LISTPTR-1012 #:OUT-1014)
          (#:ITEMS-1019 0.0)
          (#:NUMBER-1017 NIL)
          (#:ITEMS-1026 0.0)
          (#:NUMBER-1024 NIL)
          (#:ITEMS-1032 0)
          (#:SUM-1030 0))
      (DECLARE (TYPE LIST #:LISTPTR-1012)
               (TYPE FLOAT #:ITEMS-1019)
               (TYPE FLOAT #:ITEMS-1026)
               (TYPE INTEGER #:ITEMS-1032)
               (TYPE NUMBER #:SUM-1030))
      (TAGBODY
       #:LL-1035
        (IF (ENDP #:LISTPTR-1012)
            (GO SERIES::END))
        (SETQ ZBARS (CAR #:LISTPTR-1012))
        (SETQ #:LISTPTR-1012 (CDR #:LISTPTR-1012))
        (SETQ #:ITEMS-1019 (THIRD ZBARS))
        (IF (OR (NULL #:NUMBER-1017) (< #:NUMBER-1017 #:ITEMS-1019))
            (SETQ #:NUMBER-1017 #:ITEMS-1019))
        (SETQ #:ITEMS-1026 (FOURTH ZBARS))
        (IF (OR (NULL #:NUMBER-1024) (> #:NUMBER-1024 #:ITEMS-1026))
            (SETQ #:NUMBER-1024 #:ITEMS-1026))
        (SETQ #:ITEMS-1032 (SIXTH ZBARS))
        (SETQ #:SUM-1030 (+ #:SUM-1030 #:ITEMS-1032))
        (GO #:LL-1035)
       SERIES::END)
      (IF (NULL #:NUMBER-1017)
          (SETQ #:NUMBER-1017 NIL))
      (IF (NULL #:NUMBER-1024)
          (SETQ #:NUMBER-1024 NIL))
      (LIST (FIRST (FIRST BARS)) (SECOND (FIRST BARS)) #:NUMBER-1017
            #:NUMBER-1024 (FIFTH (CAR (LAST BARS))) #:SUM-1030)))


Regarding the hate for parens: rainbow coloring helps out big time. When each pair of parens has its own color, it is very easy to see exactly how they are grouped in the more complex statements. For Vim, that would be Rainbowtags. It's actually an improvement for any language, imo.


>This is what you might code, which I humbly submit to you is very readable:

It's not, if that's a real-life example. There is no context for what the data is, like variable names.


Valid point. In that example a list holds each bar because I was focusing on the iteration, and the raw data came from a Yahoo API utility.

You could use a struct or a class which would give you accessor functions, and would be better for a real program.

You set up a struct:

  (defstruct bar date open high low close vol)
And this would be a one-time conversion of the data:

  (mapcar (lambda (b)
            (apply 'make-bar
                   (mapcan 'list '(:date :open :high :low :close :vol) b)))
          cl-user::*spy2006*)
Then you can access the elements with nicer names:

  (bar-date bar)
  (bar-high bar)
  ...etc...
(If you use DEFCLASS you can do a few more things.)

That would make the revised COMBINE-BARS look like the following, which has more context, and would yield a BAR struct back.

  (defun combine-bars (bars)
      "Summarize a sequence of BARS into one bar."
      (let ((opening-bar (first bars))
            (closing-bar (car (last bars))))
        (series::let ((zbars (scan 'list bars)))
          (make-bar :date (bar-date opening-bar)
                    :open (bar-open opening-bar)
                    :high (collect-max (map-fn 'float #'bar-high zbars))
                    :low (collect-min (map-fn 'float #'bar-low zbars))
                    :close (bar-close closing-bar)
                    :vol (collect-sum (map-fn 'integer #'bar-vol zbars))))))

(edit: added revised combine-bars)


Another trick is to define the struct like (defstruct (foo (:type list)) ...) and then you can use the struct accessors on any list of the appropriate size. (Or, if you want constant-time element access, specify the type as vector.)
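A small sketch of that trick, reusing the BAR struct from above (note the option must be written as the list (:type list)):

```lisp
;; With (:type list) the struct is represented as a plain list,
;; so the accessors work on any list of the appropriate shape.
(defstruct (bar (:type list))
  date open high low close vol)

;; A raw row straight from the data:
(bar-high '("29-Dec-06" 142.08 142.54 141.43 141.62 45461200))
;; => 142.54  (HIGH is the third slot)
```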


I'm developing a language that's a superset of Lua and Lisp. It's still in its infancy, but here's an example of a Lisp/Lua function in a module that can be imported as a (+ ...) macro that expands into a Lua `a + b + c + ...` expression.

https://github.com/meric/l2l/blob/rewrite/l2l/macro/arithmet...

The idea is that programmers will write in Lua whenever readability is prioritised, and switch to Lisp when homoiconicity is required, for example for domain-specific languages. The Lisp part can be used to produce macros that expand into Lua.

Would be great to know your thoughts!


You know about metalua, right? http://metalua.luaforge.net/


Yes. I wanted a version of Lua with macros that reuses familiar syntax (for Lispers), runs on LuaJIT, and is implemented completely in Lua.


https://github.com/baguette/lemma

I've unfortunately let it languish for the last few years, but I'm about to pick back up a bit for the 2016 Lisp Summer Game Jam. I've got a handful of big, backward-compat-breaking changes in the queue, but it should start to stabilize again within a couple of weeks.


Clojure got reasonably popular around that time.


I'm of the opinion that there is only one lisp right now with some serious potential to dominate in the future, and that is Racket. Why? For one very simple reason: unlike all other lisps, there is serious, ongoing, and lengthy research into correctly bringing a static type-checking process to the language. Clojure's core.typed doesn't count here, as it is full of significant holes that invalidate its entire point -- though, those holes are very possibly getting filled in the coming years as they attempt to refactor core.typed following the Racket model. But the Racket team is doing something with lisp that is true to its core, not just with types, but with many other features as well.

Interesting piece of trivia: this website you are reading right now is built on top of Racket.


It might be good to point out to you that the general industry cares very little about things like this.

Racket could have literally the most advanced and robust type system of all programming languages in existence, and it would still be used by the same number of people as today and have a hard time convincing anyone else to use it.

What makes a language popular from what I have seen are libraries, frameworks, and community.

Edit: Racket has all of the above, so obviously there are other factors as well.


The general industry is in fact starting to care about static typing; I've seen it happening lately. The last batch of hot languages included a lot of untyped languages like Python, Ruby, and JS. But the latest batch of rising-star languages are typed, e.g. Go, Rust, Scala. And C/C++, C#, and Java are still all going strong.

People are no longer buying the myth that the benefits of static typing can only be gained at the expense of heaps of boilerplate. The gospel is slowly spreading.


Don't forget Apple's new flagship language, Swift.


The tide is turning. Naughty Dog video game company (Uncharted, Crash Bandicoot, Jak and Daxter) uses Racket as its scripting language now.


Do you have a source for that? It's not that I don't believe you; it's that I'm interested. :)

I have seen that one PowerPoint from a while back where they used their own Scheme dialect to script things. But I thought I had read that when they moved to the PS4 they left that behind and used C++ tooling for the same needs.


The scheme they have been using was PLT Scheme, which was renamed Racket.


Ahh, I know Racket was PLT Scheme originally. But from my understanding, Naughty Dog was using their own Lisp/Scheme and now no longer uses that system.

https://en.m.wikipedia.org/wiki/Game_Oriented_Assembly_Lisp

Your comment made it sound like they had returned to using a Lisp, but had chosen Racket this time around.


Naughty Dog used a custom Scheme system for their early games. That one was written in Allegro Common Lisp.

When they were bought by Sony, they abandoned their tools and moved to C++, hoping to share code with the rest of Sony's game developers. It didn't work out as wished. They brought Scheme back into their game development, but differently. For example:

http://www.slideshare.net/naughty_dog/statebased-scripting-i...


I have seen that presentation, but I am fairly certain I recall from an Uncharted 4 chat with a Naughty Dog developer that they no longer use that system since the move to the PS4. This is in large part, I believe, due to their being a technology developer for multiple Sony studios.



That confirms that at least in 2009 they were using Racket.


My understanding is that they are currently using Racket, but I don't work there. This is often mentioned in the Racket community, I think one of the ND engineers spoke at a lisp conference recently about it.


Dan Liebgold: Racket on the Playstation 3? It's Not What you Think!

Video: http://www.youtube.com/watch?v=oSmqbnhHp1c

Slides: https://con.racket-lang.org/2013/danl-slides.pdf


What makes the Racket model more correct than the system based in Sequent Calculus [1] provided by Shen?

[1] - http://www.shenlanguage.org/learn-shen/types/types_sequent_c...


This is the first I've heard of Shen, so I can't say.

However, the Racket engineers, particularly Matthias Felleisen, are doing cutting-edge research into types. Typing is not a solved problem; if it were, there wouldn't be a static-vs-dynamic type debate that lingers ad nauseam.

Racket is in an interesting sweet spot for typing research because it is a traditional lisp with dynamic types, to which gradual typing that works both in typed and untyped land has been introduced and continues to undergo extensive attention.


"Typing is not a solved problem, because if it was, there wouldn't be a static vs dynamic type debate that lingers ad nauseum." Could you clarify? Surely even if typing was completely solved, we would still have debates of that kind, because there are benefits to both approaches and there will always be people preferring one or another.


> because there are benefits to both approaches and there will always be people preferring one or another

And that is exactly my point: people prefer one or the other because it is not a solved problem, there is no universally accepted way.


The reason typing is not a solved problem is this: there are valid programs which can be expressed in an untyped language that cannot be (directly) expressed in a typed one at the moment. For example:

    (define (foo p?)
      (if p?
        42
        "forty-two"))
What is the type of `foo`? It could be `bool -> int` or `bool -> string`, depending on the result of `p?`! In an untyped language, this is a perfectly valid program. In a typed language, this presents a type error, even if `p?` happens to always result in `true` or always result in `false`. The programmer reading the program knows that if `p?` is always `true`, then the type of `foo` is `bool -> int`, and that if `p?` is always `false`, the type is `bool -> string`. However, this program, as-is, will be rejected as ill-typed.

In a language providing sum types, we can work around this by using a type like `(int * string) either`:

    (define (foo p?)
      (if p?
        (left 42)
        (right "forty-two")))
Where `left` and `right` are type constructors for values of type `('a * 'b) either`. Now, the type of `foo` is `bool -> (int * string) either`. This workaround effectively adds a run-time tag to the `either` values to distinguish the `left` case from the `right` case, which is essentially the way dynamically checked languages operate for values of all types but in a more restricted, specific, and arguably meaningful form. In many (most?) languages with sum types, the language will ensure that the programmer always checks the tag of the `either` values before accessing the underlying "actual" value -- this is the only way to guarantee type safety while ensuring that the program does not barf due to a "type" error at run-time.

Now, it could be argued that the typing problem presented above is basically solved by sum types. But it could also be argued that this doesn't really solve the problem, because what we'd really like is a way to specify that the type of `foo` depends on the result of `p?`, which could maybe be written as `p:=bool -> (p ? int : string)`, for example. Perhaps dependent typing could help us come to a true solution for this case, but there are other cases where other solutions will be needed.

According to Benjamin C. Pierce in Types and Programming Languages:

"Being static, type systems are necessarily also conservative: they can categorically prove the absence of some bad program behaviors, but they cannot prove their presence, and hence they must also sometimes reject programs that actually behave well at run time. ... The tension between conservativity and expressiveness is a fundamental fact of life in the design of type systems. The desire to allow more programs to be typed -- by assigning more accurate types to their parts -- is the main force driving research in the field."[0]

In essence, while there is already a vast array of programs we know how to type, and while it's possible to work around cases that resist typing, type theory is still an active field of research with plenty of open questions yet to be answered. The corollary to this is that there are still valid reasons to prefer untyped languages to typed ones, as my sibling commenter pointed out. The day typing is solved is the day people can no longer prefer untyped languages without looking like bozos, but until that day comes, untyped languages remain fundamentally more expressive than typed ones (in the sense that they're fundamentally capable of expressing more programs[1]).

---

[0]: Pierce, Benjamin C. Types and Programming Languages. MIT Press, 2002. §1.1, pp. 2,3.

[1]: There's also a sense in which typed languages are more expressive than untyped ones, and that's the sense that they are capable of expressing more properties of the program as part of the program itself.


The type of FOO in SBCL is:

    (FUNCTION (T)
      (VALUES
        (OR (INTEGER 42 42)
            (SIMPLE-ARRAY CHARACTER (9)))
       &OPTIONAL))
SBCL is based on Kaplan & Ullman flow-graph analysis (inherited from CMUCL), not Hindley-Milner. See also http://home.pipeline.com/~hbaker1/TInference.html (Nimble Type Inference, H. Baker). Apparently, Typed Racket's papers have other references about related work, albeit not explicitly those.

In OCaml, a function which throws an exception does not have a different type from one which doesn't, so you can get runtime errors even when you typecheck. In SBCL, the NIL type (bottom) effectively conveys the lack of a return value and is used when you call (ERROR ...) or simply loop forever with (LOOP). Things are different because the needs are different. Basically, types in ML must be disjoint, which is impractical in Lisp.
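A tiny sketch of that bottom-type behavior (tried on SBCL; the exact printed form may differ between versions):

```lisp
;; A function that never returns normally. SBCL derives its return
;; type as NIL (bottom), which callers' inference can exploit.
(defun give-up (reason)
  (error "Giving up: ~a" reason))

;; DESCRIBE reports a derived type along the lines of
;;   (FUNCTION (T) NIL)
;; i.e. one argument of any type, no value ever returned.
(describe #'give-up)
```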


> The type of FOO in SBCL is:

Is that the type of FOO or the return type of FOO? How are the types or number of arguments specified for function types?

> SBCL is based on Kaplan & Ullman flow-graph analysis (inherited from CMUCL), not Hindley-Milner. See also http://home.pipeline.com/~hbaker1/TInference.html (Nimble Type Inference, H. Baker). Apparently, Typed Racket's papers have other references about related work, albeit not explicitly those.

I've read some of Henry Baker's other work, and the man is brilliant. I wish he were more well-known.

This is no exception, however it seems that this inferencer is useful primarily for enabling compiler optimizations rather than for performing type checking. Is that correct?

What I mean is, say you want to pass the result from FOO above into another function BAR which expects a numeric argument. It seems to me that BAR would need to have an argument of type (OR (INTEGER 42 42) (SIMPLE-ARRAY CHARACTER (9))) in order to guarantee type safety. Otherwise, the type checker couldn't reject a program that passes a string result of FOO as a numeric argument of BAR. Am I understanding this correctly?

> In OCaml, a function which throws an exception has not a different type than one which doesn't. So you can get runtime errors even when you typecheck.

Right, but I'm talking specifically about run-time type errors, i.e., errors that occur because some value was passed to an operation that does not handle values of such type. What I meant by my statement is that even with type checking, the programmer must handle all cases of a sum type when dealing with it in order for the program to be type safe (in the sense that not only does it not "go wrong" in the Milner sense, but also that it does not suffer a type error at run-time).


> Is that the type of FOO or the return type of FOO?

It is the type of FOO, it reads as: a function which takes one argument of any type (T) and returns exactly one value, which is either 42 or a string of length 9.

Another example:

    (defun xyz (x y z)
      (declare (type fixnum x)
               (type float y)
               (type (vector (unsigned-byte 8) 1024) z))
      (aref z (+ x (round y))))

    (FUNCTION
     (FIXNUM FLOAT (VECTOR (UNSIGNED-BYTE 8) 1024))
     (VALUES (UNSIGNED-BYTE 8) &OPTIONAL))
Here there are three arguments, the third one being a vector of bytes of length 1024. Note that the return value was inferred from the inputs.

> This is no exception, however it seems that this inferencer is useful primarily for enabling compiler optimizations rather than for performing type checking. Is that correct?

It is a mix of both, really.

CL is primarily designed to be dynamic. Static analyses are used to optimize code and to prevent classes of errors when they can be detected in advance. If you define detecting a type error as a positive test, then SBCL allows false negatives. That happens in cases where the expected and actual types overlap: there might be an error, or not, so the actual check is delegated to runtime.

Another thing is that with global functions (DEFUN), some widening happens, for the return type in particular, so that (OR INTEGER STRING) is treated as T. This does not happen with inline or local functions. Note also that global functions can be called from anywhere and redefined (except standard ones), and they are generally responsible for checking their arguments, except when you explicitly turn the safety knob down and add type declarations.

So let's say that XYZ above is declared to be inlined, and we use it as follows:

    (defun use-xyz-1 (x y z)
      (declare (type positive-float y)
               (type (integer 0 3000) x))
      (xyz x y z))
The above compiles without problems, even though you could pass values that would cause an out-of-bounds access. However, you will surely agree that there are theoretical limits to static type checking, so it is expected that not all expressions can be typed in CL as precisely as you might wish (of course, going up the lattice, functions accept arguments of type T). However, when you change the type declarations so that the intersection of the expected and actual types is empty:

    (defun use-xyz-2 (x y z)
      (declare (type positive-float y)
               (type (integer 2000 3000) x))
      (xyz x y z))
... the compiler warns you that:

    ;; Derived type (INTEGER 2000 4611686018427387900) is not a suitable
    ;; index for (VECTOR (UNSIGNED-BYTE 8) 1024)


The way SBCL treats declarations is that they are used as assertions (except for return types in global declarations; see the manual). So what does it mean to treat declarations as assertions? Here is FOO:

    (defun foo (float)
      (make-string (abs (ceiling float)) :initial-element #\#))
It makes a new string of N "#" characters, where N is computed from the float input. Then, we call FOO from BAR:

    (defun bar (x)
      (foo x)
      (typecase x
        (float   0)
        (integer 1)
        (t       2)))
The only value BAR can return is of type `(integer 0 0)`, because knowing that `(foo x)` succeeds allows us to conclude that X was indeed a FLOAT, and thus the TYPECASE expression necessarily returns zero. Declarations, assertions, etc. can be used by the compiler in this way. Note that the equivalent (w.r.t. return value) function below has a different type:

    (defun bar (x)
      (prog1
          (typecase x
            (float   0)
            (integer 1)
            (t       2))
        (foo x)))
This time, the return type is (MOD 3), i.e. the set {0,1,2}, even though propagation could be applied backward. However, backward propagation seems to pose a problem w.r.t. the CL standard; at least that's what is said here (a good reference, by the way):

https://www.pvk.ca/Blog/2013/04/13/starting-to-hack-on-sbcl/

Static typing in SBCL gives something I did not yet witness in other languages. I defined a state machine with local functions, roughly as follows:

    (defun sm ()
      (let ((state))
        (labels ((a () (setf state #'b))
                 (b () (setf state #'c))
                 (c () (if (plusp (random 2))
                           (setf state #'d)
                           (setf state #'b)))
                 (d () (setf state #'a))
                 (e () (return-from sm)))
          (loop (funcall state)))))
So the local variable STATE holds the current function. This compiles (as did the actual, longer code) with a note saying "deleting unused function (LABELS E :IN SM)", because STATE is known never to reach #'E. By the way, the function SM never returns normally, as shown by the NIL return type. That was a useful and unexpected thing to notice.


Wow! That's a much more detailed response than I'd expected. Thank you! I wish I could upvote you twice! :)

I've used Common Lisp and even SBCL quite a bit, and had no idea SBCL could do some of this stuff. That last example is particularly impressive, given how hairy control-flow analysis can get when higher-order functions are involved.


In Shen, we easily typecheck foo to the specific values 42 and "forty-two", not merely either String or Int.

    \\ enforce type checking
    (tc +)

    \\ create a forty-two type
    (datatype forty-two
      _______________
      42 : forty-two;
      ________________________
      "forty-two" : forty-two;)

    \* example

    (7+) 42
    42 : number
    (8+) 42 : number
    42 : forty-two
    (9+) 43
    43 : number
    (10+) 43 : forty-two
    type error

    *\

    \\ define our foo that takes a boolean
    \\ value and returns a forty-two
    (define foo
      { boolean --> forty-two }
      X -> (if X 42 "forty-two"))

    \* example

    (4+) (== (foo true) 42)
    true : boolean

    (5+) (== (foo false) "forty-two")
    true : boolean

    *\


Interesting! Such an extensional definition of a type could be really flexible, but seems like it would be limited to finite types. Is this the case? Or can you specify types with arbitrary judgments?


It's only limited in that you must be able to express the rule using the Shen lisp language. The datatypes are sequents, essentially propositions, that the type checker attempts to prove true.

For example, I could define odd integers like this:

    (datatype odd-integer
      if (integer? X)
      if (odd? X)
      ________________
      X : odd-integer;)


This is quite a nice write-up, thank you. Regarding this point:

>because what we'd really like is a way to specify that the type of `foo` depends on the result of `p?`

Is this situation not resolved by allowing p? to not be a bool necessarily but also a sum type that informs foo which case of a different sum type to return?

Anyway, I like your footnote [1], which, for me, is what allows a typed program to actually be more expressive, in the sense that you can literally express the types in your head as you code. That is difficult or simply not possible in a dynamic language, even though most developers will be "thinking in types" regardless of the language's typing discipline.

Now, "expressive" also often means "elegant" because you can express an idea easily with less verbosity in a dynamic language.

A language with type inference is an interesting medium that sort of removes the expressivity of explicit type annotations while gaining the expressivity of a dynamic language in some respects.


> Is this situation not resolved by allowing p? to not be a bool necessarily but also a sum type that informs foo which case of a different sum type to return?

In the example I provided, the `bool -> (int * string) either` function is essentially a map from a `bool` to an `(int * string) either`.

From a type-theoretic perspective, bool (which has two nullary type constructors, `true` and `false`) can be expressed as the sum of two units: `1 + 1` (or equivalently, `() + ()` or `unit + unit`) = `2`. If I understand your question correctly, then you're asking if we could just use some other sum type to express the branch to take, like: `type if-branch = consequent | alternative;`, and the answer is that such a type is precisely isomorphic to a Boolean -- because it consists of two nullary constructors, it can also be expressed as the sum of two units, `1 + 1` = `2`.

To answer your question more directly, `p?` already is a sum type that informs `foo` which case to return: `type bool = true | false;`.
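As a quick CL sketch of my own (not from the parent): such a two-constructor sum can be written as a MEMBER type, and its isomorphism with BOOLEAN is just a pair of trivial conversions:

```lisp
;; IF-BRANCH is a two-element type, isomorphic to BOOLEAN (1 + 1 = 2).
(deftype if-branch () '(member consequent alternative))

;; The isomorphism, in both directions:
(defun bool->branch (p) (if p 'consequent 'alternative))
(defun branch->bool (b) (eq b 'consequent))

;; (typep (bool->branch t) 'if-branch) => T
```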

> Anyway, I like your footnote [1] which, for me, is what allows a typed program to actually be more expressive, in the sense that you can literally express the types in your head as you code, which is difficult or simply not provided in a dynamic language, despite that most developers will be "thinking in types" regardless of typing of the language.

"Expressive" can mean any of quite a few different things, so I wanted to be specific which I meant. In this case, I mean that untyped languages are more expressive because they're capable of expressing a greater number of programs than their typed counterparts. The ultimate goal of type theory is to eliminate that gap, but short of some miracle revelation, we've resigned to gradually filling it in.

Note that this isn't the same as saying you can express things more elegantly or less verbosely in an untyped language: there are some programs that are strictly not expressible with current type systems. That's on purpose! The point of a type system is to reject programs that have bugs caused by type errors, so we deliberately make those programs inexpressible, and being less expressive in that way is usually a good thing. The problem is that our type systems sometimes reject bug-free programs that would actually run totally fine, simply because they weren't able to prove that was the case. Again, type theorists are working to improve the situation.

What it comes down to is a question of what is being expressed. Taking programs as descriptions of processes, at the lowest level a language allows us to express a run-time process. Going up one level to the level of types, we can express information about the program itself. Thus, in a sense, by using a typed language we trade in the expressiveness of some processes for the expressiveness of some metaprocesses.


Isn't HN on Arc?


Arc runs on mzscheme which turned into Racket. The last official release of Arc requires an older release of mzscheme from before lists were immutable. There is a community-maintained fork called Anarki [1] that runs on modern Racket.

[1] - http://arclanguage.github.io/


I believe YC builds their new apps in Rails now.


The linked post has another interesting post in its comments: http://www.defmacro.de/?p=13 . It's about an interview McCarthy gave in the year 2000 to infoQ.


It's a nice perspective. Very practical and down to earth. There is another side though, and that is that learning to think of code as data is extremely empowering and translates well when working with a fairly large set of other languages (Perl, Python, Ruby, and now even Java). Learning Lisp turns you into a better programmer.


I like Scheme/Lisp, but I don't know if I buy the whole "learning Lisp turns you into a better programmer". Maybe more knowledgeable, but not necessarily a better developer.

In some ways, learning a really flexible language like Lisp can even turn you into a really bad developer. I say "developer" instead of "programmer" because I want to emphasize working with others and thus sharing code with others. Of course, this is based on some past observations working with MIT grads and various other academics, so take my opinion with a grain of salt.

IMO the language that really changed everything for me was the ML family of languages and maybe C. I would say knowing C and ML is more worthwhile than Lisp (Lisp is not exactly hard to learn anyway... the basics that is).


There are some Lisp-related things which provide developers with new tools and new ways to think, maybe making them better:

a) code as data

b) interactive software development

c) user-defined language extensions where the developer is part-time language designer, syntactic abstraction

d) dynamic object-oriented programming

e) meta-object programming


Could you expand on d)? What specifically is different between what you get with Lisp here and OO in the C++/Java sense?


Common Lisp allows us to develop object-oriented software while it is running. Interactively. Classes and other objects change as the programmer changes them. We can also add behavior to OOP mechanisms, like instance creation, slot allocation, slot access, and class redefinition. One can add, change, or delete methods at runtime. You can even change the class of an instance.

This enables you to have large OO applications running and change them while they are running. This enables interactive incremental development.

Interaction with LispWorks:

Define a class with a slot.

    CL-USER 61 > (defclass person () ((name :initarg :name)))
    #<STANDARD-CLASS PERSON 4020002153>
Create an instance of that class.

    CL-USER 62 > (defparameter *p1* (make-instance 'person :name "Drumpf"))
    *P1*
What is it?

    CL-USER 63 > (describe *p1*)

    #<PERSON 402000DE43> is a PERSON
    NAME      "Drumpf"
Hmm, let's make a politician class, with person as superclass and a slot TYPE.

    CL-USER 64 > (defclass politician (person) ((type :initarg :type)))
    #<STANDARD-CLASS POLITICIAN 4020284743>
We change our object to the new class and add information about the new slot content:

    CL-USER 65 > (change-class *p1* 'politician :type 'populist)
    #<POLITICIAN 402000DE43>
What is it now?

    CL-USER 66 > (describe *p1*)

    #<POLITICIAN 402000DE43> is a POLITICIAN
    TYPE      POPULIST
    NAME      "Drumpf"
Class of the instance has changed, the new slot has been added, and the old slot is still there.


If you try this at home, don't forget to clean your environment afterward:

    (makunbound '*p1*)


I think that more specifically, "Learning FP makes you a better programmer", and you often learn FP when learning lisp, so the original statement is often true as well.


It is also worth noting that failure is often a step on the path to success. Learning to apply the "code is data" principle in other programming languages is not without pitfalls as you say and falling on one's face sometimes is to be expected.

However, that's part of the process too as is learning when to use a hammer vs a wrench vs a screwdriver. There is the point in learning a new tool where everything looks like a nail. Yes, people will really mess things up by applying the wrong concepts but how else do we learn?


I'm biased, but there's a big difference between "learning lisp" (well enough to transliterate that python program) and really getting the code as data concept.

I'd say that learning X really well is always a positive because whenever you learn something in depth you can then apply the concepts elsewhere. And there are still lisp-only concepts, so learning lisp is positive. ML is good too. Also spending time with a well designed concurrent language is beneficial.


With Lisp I get the data thing but with ML you learn almost everything is a language.

I can't really explain it but the whole variant/ADT pattern matching really forces to make you think of your problem domain as a specification or language (e.g. DSL). It is one of the reason why I think so many compilers are written in ML (that and the toolset is awesome for it).


Another important feature of ML is abstract data types (opaque signature ascription), which forces abstraction clients not to rely on implementation details. It's Dijkstra's “separation of concerns”, integrated into the type checking process! See: https://existentialtype.wordpress.com/2011/04/16/modules-mat...


Having looked up variants just now, I can definitely see how it would be useful. It seems (to me) to fall into the same category of macro-like things that enhance the expressiveness of the language. I think the most important point is the idea of lifting the programming language into the problem domain. When you can do this in a clean way, the result is a very readable solution that you can reason about.


I think there's a difference, though. With ML (IIUC), you build a language on top of ML that is a set of functions (and the data structures they operate on) that make it easy to express the program.

With Lisp macros, though, you change the syntax of Lisp. You take things that were not valid Lisp syntax, and you make them valid for your program.

So ML "language building" is in terms of semantics, but in Lisp, it's both semantics and syntax. (Again, IIUC).


Pattern matching is the feature i enjoy most in functional languages. It's simple to use and comes with great value.


That description of ML reminds me what I really like about REBOL and RED as well.


With the “slight” difference that ML actually enforces that your program is consistent with your description of the problem domain.


I personally think the "code as data" concept is not so important in lisp. It is critical to writing macros, but you often don't write macros and beginners to lisp often abuse macros.

I've written Clojure professionally now for a couple years and am pretty fast and proficient at it, but when I write a function call, I don't think of it as a list, even though that is what it is. I don't think any lisper does. It's just a function with arguments and I'm writing in the syntax necessary, which happens to be a list.

On those rare occasions when I think a macro is necessary, sure, it helps that a function call is a list, but that's hardly the main role in a day's work.
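To spell out what "happens to be a list" means, a single quote is all it takes to flip between the two views:

```lisp
(+ 1 2)          ; => 3, the form evaluated as code
'(+ 1 2)         ; => (+ 1 2), the same form as data
(first '(+ 1 2)) ; => +, the operator is just the first element
```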


With respect to Clojure, you don't write macros, but many of the really interesting language features within Clojure are built upon macros. (core.async and core.match come to mind). It would be difficult to add them without macros.

I do actually think of my code as a tree/graph structure. Not just in lisps but in other languages as well, I'm always manipulating a stack of trees and thinking about how the data flows through the branches.


> many of the really interesting language features within Clojure are built upon macros

That's true with any lisp. But that doesn't mean that the ability to write macros is a requirement for most development work on a daily basis, and homoiconicity is often touted as amazing specifically because of its help in writing macros. Therefore, since writing macros is not super common or necessary most of the time, I would say that "code as data" is also not a big deal most of the time.

I really like lisp, I just don't think some of its unique features are as big a deal as they advertise.


Code as data, however, does become an important concept in becoming really good at Perl or Python (and I would assume Ruby, etc.), and understanding the common pitfalls here also helps one spot, for example, LSP problems and provide better solutions (because most LSP problems can be solved by replacing a mutator with something that returns a new object).

In JavaScript, too, you effectively have to work with code as data.

So this isn't just about writing macros. There are a lot of areas where this does touch things.


> object identity for everything but numbers and characters

Since when is this supposed to be a selling point? Not having compound values (not the same thing as compound objects!) is a pain.


You seem to be very confused. This is referring to #'eq (which is why #'eql exists). What lack of compound values are you lamenting? Lisp has built in classes, structs, arrays, lists, and maps. What is missing?

> An implementation is permitted to make ``copies'' of characters and numbers at any time. The effect is that Common Lisp makes no guarantee that eq is true even when both its arguments are ``the same thing'' if that thing is a character or number.

http://www.lispworks.com/documentation/HyperSpec/Body/f_eq.h...


> Lisp has built in classes

Objects aren't compound values. Objects are compound, well, objects. And pointers to objects are primitive, indivisible values - not compound!

> lists

Racket and ML have lists. Lisp has mutable linked data structures, built out of `cons` cells, whose value at any given point in time may be a list. But the value itself isn't first-class, because you can only bind the identity of the cons cell to a variable.



> Contiguous memory

I wasn't talking about the memory representation of anything. A first-class value is something that a variable may denote. In ML, if the variable `xs` denotes the list `[1,2,3]` within a given environment, it will always do so. In Lisp, I can mutate the cons cells out of which `(list 1 2 3)` is built, so what is actually bound to the variable is the object identity of the first `cons` cell. Not a list value!
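A minimal CL example of my own to make that concrete:

```lisp
;; XS is bound to the identity of the first cons cell, not to an
;; immutable list value; mutating the cell changes "the list XS".
(let ((xs (list 1 2 3)))
  (setf (car xs) 99)
  xs)
;; => (99 2 3)
```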


Oh. You have a strange way of phrasing your critiques, but you're right, it's all in IO/IORef.


I wanted to formulate my observation in very general terms, rather than mention specific facilities of other programming languages. I also wanted to avoid suggesting a connection with static types or effect segregation. The main benefit of values over objects is that using values leads to designs that have less moving parts, and thus are easier to understand, modify, extend and test.


For UI application programming, you can do without FP language features. Hell, you can even do without smart pointers, checked arrays, or GC.

But you cannot do without a decent desktop or web development experience. I got involved with pretty large desktop applications written in Delphi. The design-time experience, the debugger, native compilation without having to install some MSVC runtime, deriving and combining visual components: it all adds up to a smooth experience for desktop apps. If Lisp had such well-thought-out frameworks, I would jump ship just to get at the language features it offers.



I did buy frameworks for work, and this does look OK, but coding a GUI by hand is not something our developers can handle at large scale.


It seems that there is an interface builder.

http://www.lispworks.com/documentation/lcl50/clwug/clw-216.h...


CAPI is pretty nice. Years ago, LispWorks gave me free licenses in return for my doing some cleanup on their documentation. LispWorks is very good, but my licenses are for what is now a very old version, so I now use SBCL and sometimes Clozure. For industrial settings, LispWorks and also Franz are very good products. I used Franz products on a medical large data project several years ago, and their support was nothing short of amazing. That said, on the same project we had an issue with SBCL and we simply paid one of the maintainers to promptly fix the issue.

Sometimes it just takes spending some money.


> For UI application programming You can do without FP language features.

Wow, do I ever disagree with you on this. The absolute best UI development process I've ever had -- super easy, fun, fast, painless -- was using Clojurescript's Om, which is functional to the core. Doing anything else, like going back to the polished UI API's of Apple still feels arcane by comparison.


You should never use CLJS because then you'll hate going back to JS.


Strange comment. That's like saying "never use C++ because then you'll hate going back to C" or "never use an automobile because then you'll hate riding horses."


Your analogies don't help me understand why you think it is strange...


I'm sorry to hear that.


You should be.


Funny, the hottest new thing in UI (React/Redux) loves to be functional.

For what it's worth, nothing matches the Delphi experience, not even today so I don't imagine CL would be any better. I worked for the Delphi guys for a bit and their maniacal focus on the end user is something that few development environments can claim as a philosophy.


Classic Kenny!

Common Lisp used to be much of my programming world also.


I want to see people's list of reasons for why not lisp.

Edit: OK, this is not trolling. I quite liked Lisp and actually implemented a version of Common Lisp from scratch, based on Guy Steele's CL reference, back in my undergrad days. I just found that beyond academia and a few Emacs packages, I didn't use Lisp at all, for one reason or another. I just want to hear people's reasons for not picking Lisp for their work.


I've tried on a few occasions to really like Lisp. I love its simplicity and recursiveness. The "code as data, data as code" thing is really cool.

But I love static typing. Not only for "safety-net" reasons, where the compiler lets you know that you've broken stuff, but also for documentation reasons (I don't have to guess at the shape of each "c" in "cs" in "(mapcar some-fun cs)", I can just look at its type and know its shape), and I really like it when that type system lets me be more expressive, like in MLs (where I can pattern-match by the various members of an ADT and destructure them as I go).

I also find that idiomatic Lisp likes to nest things too much to be really readable for me. If I don't have types to tell me how the data's being transformed, at least give me named functions and well-named variables instead of very-deeply-nested anonymous functions everywhere.

And finally, I've never been completely convinced that macros are a great idea. They're far too easily abused to create "the Scala problem" of having code in language X that is still unrecognizable to someone else who also writes in language X.


I totally agree with you here; statically typed languages (OMG OCaml) are great for making code very readable/documented, IMHO. I just have so much more confidence in my code when the compiler has my back.

Another commenter pointed this out, but Typed Racket is a Racket variant (which is a scheme-ish variant) that implements "gradual typing". I've played with it here and there and am about to dive in again.

https://docs.racket-lang.org/ts-guide/


That's a very good point. Type information certainly helps reading. That might explain why Lisp is easy to write but hard to read. I did find myself having to go into a function's code to figure out what it does and what its parameters are. Hard-to-read code makes sharing difficult and hinders widespread language adoption.


I dislike Lisp (and Lisp-inspired languages like Scheme and Clojure) for two reasons:

1. their weak and dynamic type system, and 2. they don't control side-effects.


Haskell programmers: you can spot them by that constant pained look, as if they had to step in horse poop whenever they go outside.


Your comment is almost accurate when referring to Haskell programmers who have not used the language for very long.


Why did you feel the need to deride someone for simply expressing their opinion? Do you think it helps the discussion?


I meant no disrespect. The horse poop outside was a metaphor for side effects.

During my brief exploration of Haskell, I went to great lengths to avoid dealing with side effects, and when I couldn't avoid them, the pure functional solutions were pretty painful to learn.


> I meant no disrespect.

I don't believe you.


Wow. Must be a culture clash or something. Let's both just have a nice day, then.


I'll agree that clojure doesn't enforce control of side-effects. But with the recent work on clojure.spec, your first complaint has become (even more of) an asset.


Okay, I've spent a lot of time writing Common Lisp and even more time reading about Common Lisp, and to be honest, I'm just a little tired of the cult around it. People who know it well gloat about how great Common Lisp is, which just so happens to make them look great too. And people who don't know Common Lisp all talk about the little Common Lisp they know, because they don't want to seem like they aren't in on it, and because they don't know better.

Let's talk about this fabled exchange between Norvig and McCarthy, where McCarthy asks if Python can gracefully manipulate code as data, and Norvig said no, and supposedly a thousand words were said in the ensuing silence.

Here's the thing; a thousand words weren't said. I don’t know what Tilton thinks was said there, but it certainly wasn’t a one sided silence in which only McCarthy’s side gets to smugly smirk as if it proves anything.

Let’s talk about code as data, shall we? What enables that? Of course, it’s the s-expressions and prefix notation. So let’s write a macro. But we won’t want to write a macro that simply sits at the beginning of an s-expression and takes in a bunch of arguments, because then we could just write a function. No, we want to do a transformation on the code that creates a domain-specific language, because that’s the power of macros, right?

So the very first thing you do with your macro powers, and pretty much the only useful thing you can do, is break s-expressions. Once the first macro is written you can no longer assume that inputs to macros will be s-expressions with the function at the beginning and arguments following. Every future macro must account for every previous macro. The more you use the capability to manipulate code as data gracefully, the less graceful it becomes.

And, in my opinion, far more damningly, macros make your program harder to reason about. Before macros, a reader could assume that the first thing in the s-expression was operating on everything else in the s-expression. But no longer. We can’t even assume that sub-s-expressions in the s-expression are evaluated first.
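To make that concrete, here's a minimal sketch (MY-UNLESS is my own hypothetical): a macro receives its subforms unevaluated and decides which of them run at all, so a reader can no longer assume every sub-s-expression is evaluated.

```lisp
;; The macro controls evaluation: the unused branch never runs.
(defmacro my-unless (test then &optional else)
  `(if ,test ,else ,then))

;; The (error ...) form below is never evaluated:
(my-unless t (error "never runs") 'ok)
;; => OK
```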

This isn’t a hypothetical problem. Common Lisp programmers spend a ton of time talking about how to write macros so that they’re not going to come back and bite you in the butt when they get used in an unexpected situation. And the reason is, nobody really knows how to do it.

So, with a great deal of respect for McCarthy based on his many other achievements, I have to say I don’t care about macros, and actually think we’re better off without them. First-class functions are a much more coherent, consumable way of using code-as-data.

Most of the features of Common Lisp I actually want are in other languages now. Garbage collection? REPLs? First-class functions? Higher-order functions? They’re all in other languages now. Python, for example has all those things.

And to be honest, other languages have done a lot better things with the functional programming aspects of Common Lisp. McCarthy gets credit for inventing a lot of stuff, but we don’t fly Wright-brothers style planes today and we shouldn’t use Common Lisp just because it was first to have those things. More sophisticated type systems make lambdas more powerful (and incidentally, a lot of the problems with macros can be seen as type problems).

People on this thread are claiming, “Learning Common Lisp turns you into a better programmer”. But I tend to think that learning functional programming is the part that people are referring to, and frankly, there are better languages in which to learn functional programming. Haskell, Standard ML, or OCaml would be a better choice.

And sure, there are other features that only Common Lisp has, but nobody is talking about those. Restarts? I’d love to see more people experimenting with those. Then again, Erlang has a way better threading model than anything else and much more sophisticated pattern matching. Scheme has call/cc. Standard ML has a powerful type system. Haskell has functional purity. Prolog has unification. A great many of these are more interesting than restarts.

I don’t hate Common Lisp; if nothing else, Common Lisp has a few very solid implementations out there and lots of existing libraries that make it a very useful tool. There are some programs which I wouldn’t consider writing in another language. I just don’t really think it’s the be-all and end-all of programming languages any more, and I’m kind of tired of the cult that has formed around it.

EDIT: s/Lisp/Common Lisp/g because not everyone takes “Lisp” to just mean “Common Lisp”.


> a lot of the problems with macros can be seen as type problems

If it had a type, it would be something like code->code. You seem to hold one of those "not even wrong" kind of notions about types and/or macros.


> Let’s talk about code as data, shall we? What enables that? Of course, it’s the s-expressions and prefix notation.

s-expressions and prefix notation are orthogonal concepts. Nothing in the definition of s-expressions says it uses prefix notation. S-expressions are just a data format and its external notation.

> But we won’t want to write a macro that simply sits at the beginning of an s-expression and takes in a bunch of arguments, because then we could just write a function.

Functions and prefix notation are also unrelated concepts. Lisp syntax uses prefix notation, but not everything is a function call. There are also Lisp variants which don't use prefix notation but still have s-expressions and even macros.

Take a lambda expression: (lambda (a b) (* a b (+ a b)))

It has a lambda symbol in prefix position, but it is not a function call.

> So the very first thing you do with your macro powers, and pretty much the only useful thing you can do, is break s-expressions.

Which is nonsense. Macros don't break s-expressions.

> Every future macro must account for every previous macro

Which is also nonsense. The typical macro expansion mechanism takes care of that most of the time.
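A quick illustration: the expander recursively re-expands whatever each expansion produces, so independently written macros compose without knowing about each other. A sketch with two throwaway macros:

```lisp
(defmacro my-unless (test &body body)
  `(if (not ,test) (progn ,@body)))

(defmacro with-logging (&body body)
  `(progn (format t "entering~%") ,@body))

;; Neither macro knows about the other, yet nesting them just works:
;; the compiler expands WITH-LOGGING, then expands the MY-UNLESS it
;; finds inside the result.
(with-logging
  (my-unless nil (format t "ran~%")))
```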

> Once the first macro is written you can no longer assume that inputs to macros will be s-expressions with the function at the beginning and arguments following.

Well, now we not only have special forms (!), functions, lambda expressions, etc., but also macros. The main difficulty now is: more syntax and even user-defined syntax.

> First-class functions are a much more coherent, consumable way of using code-as-data.

Which are unrelated concepts.

> Most of the features of Common Lisp I actually want are in other languages now. Garbage collection? REPLs? First-class functions? Higher-order functions? They’re all in other languages now. Python, for example has all those things.

There are cars which have wings, can swim, etc. But that does not make them especially useful, say, to bring the kids to school every morning. The raw assembly of features is not the point, it's their integration for certain use cases.

> Wright-brothers style planes today and we shouldn’t use Common Lisp just because it was first to have those things.

Common Lisp wasn't first. CL was defined in 1984, 26 years after Lisp itself was invented (1958), and was standardized only after ten more years of work (ANSI, 1994).

> But I tend to think that learning functional programming is the part that people are referring to,

Not really. One can learn functional programming with much simpler languages. Legions of students used simple Lisp dialects/subsets to learn some FP concepts. See SICP (and many other books/courses) - which doesn't use macros, btw.

> and frankly, there are better languages in which to learn functional programming. Haskell, Standard ML, or OCaml would be a better choice.

Since Common Lisp was never designed to enforce or advance statically typed functional programming, it's only logical that it is not particularly good at it.


> So the very first thing you do with your macro powers, and pretty much the only useful thing you can do, is break s-expressions. Once the first macro is written you can no longer assume that inputs to macros will be s-expressions with the function at the beginning and arguments following. Every future macro must account for every previous macro. The more you use the capability to manipulate code as data gracefully, the less graceful it becomes.

You should check out Racket's macro system. It's a lot more sophisticated than Common Lisp's. Common Lisp macros are to C (e.g., gensym is a macro-level malloc, you need to manually destructure S-expressions, etc.) as Racket macros are to ML and Haskell (syntax objects are aware of which variables are in scope, so automatic fresh name generation is possible; user-defined syntax classes and patterns let you process arbitrarily complicated structures in a sane way, etc.). If you like the idea of metaprogramming, but `defmacro` left you with a bad taste in the mouth, Racket is totally the language for you.
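For readers who haven't written CL macros, the "gensym is a macro-level malloc" analogy refers to boilerplate like this (illustrative sketch):

```lisp
;; The "manual memory management" feel of defmacro: generating fresh
;; names is the macro writer's responsibility, every time.
(defmacro my-swap (a b)
  (let ((tmp (gensym "TMP")))   ; forget this and (my-swap tmp x) breaks
    `(let ((,tmp ,a))
       (setf ,a ,b)
       (setf ,b ,tmp))))
```

In Racket the hygienic expander does this renaming automatically, the way a GC frees you from manual allocation.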

> First-class functions are a much more coherent, consumable way of using code-as-data.

First-class functions are easier to use than macros, but they are not “code as data”. Furthermore, “code as data” itself is only true with a caveat: the full version is ”code in an object language is data in the metalanguage”, which is obvious to anyone who has written a compiler. Of course, macros make it easy to use Lisp as its own metalanguage, but there's still a phase distinction between macro-expansion time and when the generated code is actually used.

> And to be honest, other languages have done a lot better things with the functional programming aspects of Common Lisp.

Common Lisp is a ridiculously powerful language, but it isn't a functional language. It fails to meet the zeroth nonnegotiable requirement in a practical functional language, namely, a notion of compound value: https://news.ycombinator.com/item?id=12199981


>You should check out Racket's macro system. It's a lot more sophisticated than Common Lisp's. Common Lisp macros are to C (e.g., gensym is a macro-level malloc, you need to manually destructure S-expressions, etc.) as Racket macros are to ML and Haskell (syntax objects are aware of which variables are in scope, so automatic fresh name generation is possible; user-defined syntax classes and patterns let you process arbitrarily complicated structures in a sane way, etc.). If you like the idea of metaprogramming, but `defmacro` left you with a bad taste in the mouth, Racket is totally the language for you.

Better yet, check out some other Schemes. ir, er, and sc macros have the raw procedural power of defmacro combined with the hygiene and safety of syntax-case/syntax-rules, while avoiding both the purely declarative syntax of syntax-rules and the disadvantages of syntax-case (stupidly complex, breaks the standard macro abstraction with its syntax/datum distinction, etc.).

Granted, syntax-case has some advantages, but I don't think it carries its own weight from a programmer's perspective.


> You should check out Racket's macro system. It's a lot more sophisticated than Common Lisp's. Common Lisp macros are to C (e.g., gensym is a macro-level malloc, you need to manually destructure S-expressions, etc.) as Racket macros are to ML and Haskell (syntax objects are aware of which variables are in scope, so automatic fresh name generation is possible; user-defined syntax classes and patterns let you process arbitrarily complicated structures in a sane way, etc.).

Agreed. I did play around with this part of Racket quite a bit, and I'm convinced it's the best system out there if I wanted to create a domain-specific language. But it still runs into the problem that you're defining a new language with new syntax, which forces you to define even more new language with more syntax in order to make that language useful. It's the best way to build a DSL.

But given that we already have a pretty good multi-purpose language with a lot of work put into it (Racket), the number of situations where it's worthwhile to create an equally well-thought-out DSL is pretty low. Racket makes it easier, but it's still not easy. Add to this the fact that other people are going to write half-assed DSLs in my code, and the net tradeoff is still usually negative, even with Racket's clearly superior macro system.

> Furthermore, “code as data” itself is only true with a caveat: the full version is ”code in an object language is data in the metalanguage”, which is obvious to anyone who has written a compiler.

Uh, I've written a compiler and that's not obvious.

If you want to disagree with me on what "code as data" means, you're welcome to do so. As long as you understood what I said I don't care which words got me there.

> Common Lisp is a ridiculously powerful language, but it isn't a functional language. It fails to meet the zeroth nonnegotiable requirement in a practical functional language, namely, a notion of compound value: https://news.ycombinator.com/item?id=12199981

How about we assume when I said "functional programming" I'm using the Wikipedia definition[1].

[1] https://en.wikipedia.org/wiki/Functional_programming


> Racket makes it easier, but it's still not easy. Add to this the fact that other people are going to write half-assed DSLs in my code, the net tradeoff is still usually negative, even with Rackets clearly superior macro system.

You have a point there. In any case, metaprogramming shouldn't be an everyday activity.

> If you want to disagree with me on what "code as data" means, you're welcome to do so.

What I mean by “code in an object language is data in the metalanguage” is that, in the (meta)language in which you're writing a compiler or interpreter, the (object language) program that you're processing is represented as a data structure (say, as a syntax tree). It's just a triviality, I'm not saying anything really deep.

From this point of view, it should be clear that a macro system is essentially a language-integrated facility for writing compiler plugins.

> How about we assume when I said "functional programming" I'm using the Wikipedia definition[1].

My definition of functional programming is “using (procedures that compute) mathematical functions whenever possible”. A mathematical function is a mapping from values to values, so if a language doesn't have a good notion of (possibly compound) value, then you're going to run into trouble writing procedures that compute mathematical functions.

EDIT: The Wikipedia definition essentially agrees with me.

“In computer science, functional programming is a programming paradigm—a style of building the structure and elements of computer programs—that treats computation as the evaluation of mathematical functions [emphasis mine] and avoids changing-state and mutable data.”

And a mathematical function is a mapping from values (in a domain) to values (in a codomain). Again, quoting Wikipedia[0]:

“A function can be defined by any mathematical condition relating each argument (input value) to the corresponding output value.”

[0] https://en.wikipedia.org/wiki/Function_(mathematics)#Specify...


> What I mean by “code in an object language is data in the metalanguage” is that, in the metalanguage in which you're writing a compiler or interpreter, the object language program that you're processing is represented as a data structure (say, as a syntax tree). It's just a triviality, I'm not saying anything really deep.

I understood what you were saying, and didn't ask for an explanation. I just don't see why you felt the need to correct me on my calling first-class functions "code as data" and substitute your own definition that had nothing to do with what I was saying.

> My definition of functional programming is “using (procedures that compute) mathematical functions whenever possible”. A mathematical function is a mapping from values to values, so if a language doesn't have a good notion of (possibly compound) value, then you're going to run into trouble writing procedures that compute mathematical functions.

Again, I didn't ask what your definition was, because it wasn't relevant to the conversation.

You're basically interrupting me to tell me I'm not using the same definitions of words as you are, and it's not particularly endearing. If you don't understand what I'm saying, I'll be happy to explain. If you do, however, understand what I'm saying well enough to correct me on my usage of the English language, then my usage of the language has been clear enough for my goals, so I wouldn't be interested in your corrections even if they represented HN's common usage (which they don't).


> I just don't see why you felt the need to correct me on my calling first=class functions "code as data"

Because first-class functions aren't “code as data”. When you have a first-class function, the only thing you can do with it is call it. If it were a data structure, you could analyze its constituent parts.

> You're basically interrupting me to tell me I'm not using the same definitions of words as you are,

The Wikipedia article you linked says:

“In computer science, functional programming is a programming paradigm—a style of building the structure and elements of computer programs—that treats computation as the evaluation of mathematical functions [emphasis mine] and avoids changing-state and mutable data.”

The emphasized part is just what I said. That a mathematical function is a mapping from values to values is unquestionable - it's not something I'm saying, it's a mathematical fact. So if you can't express compound values, you're limited to a world where mathematical functions can only manipulate primitive values.


> Because first-class functions aren't “code as data”. When you have a first-class function, the only thing you can do with it is call it. If it were a data structure, you could analyze its constituent parts.

There are lots of ways in which you can treat a first class function as data, even if it can't be treated 100% equivalently to data in every single situation. You can, for example, pass it to other functions like data. Sure, most languages don't let you inspect the internals, but that's only one way of treating code as data.

> The emphasized part is just what I said. That a mathematical function is a mapping from values to values is unquestionable - it's not something I'm saying, it's a mathematical fact. So if you can't express compound values, you're limited to a world where mathematical functions can only manipulate primitive values.

I said, "And to be honest, other languages have done a lot better things with the functional programming aspects of Common Lisp." Common Lisp isn't a purely functional language, but it does have functional aspects. Note in the article I quoted, that Common Lisp is the first language listed in the "prominent programming languages which support functional programming such as" section.

You're literally not even disagreeing with me, you're just defining "functional programming language" as "purely functional programming language", when I literally never even called Common Lisp a functional programming language.

Insisting on your definitions instead of trying to understand what I said doesn't make me want to engage with you further.


> There are lots of ways in which you can treat a first class function as data, even if it can't be treated 100% equivalently to data in every single situation. You can, for example, pass it to other functions like data.

Strictly speaking, you can't pass functions as data. You can only pass thunks that, when forced, yield functions. A thunk is data, but a function is a computation. Computations are “too active” to be stored or passed around unthunked. The technical details are here: http://www.cs.bham.ac.uk/~pbl/cbpv.html, http://www.cs.bham.ac.uk/~pbl/papers/. (I am not the owner of the website, just in case.)

> Sure, most languages don't let you inspect the internals, but that's only one way of treating code as data.

What are the others?

> You're literally not even disagreeing with me, you're just defining "functional programming language" as "purely functional programming language",

How did you conclude that? I never said anything of the sort. What I said is “functional programming is programming with procedures that compute mathematical functions whenever possible”. Pure functional programming imposes further requirements, like effect segregation (as in Haskell) or even the total absence of effects (obviously unsuitable for a general-purpose language). FWIW, I'd count ML, Racket, Clojure and Erlang as functional languages.

> when I literally never even called Common Lisp a functional programming language.

You said Common Lisp has “functional aspects”. Well, closures make a language higher-order, but so do Java-style objects! For a language to be called “functional”, however, it has to make functional programming actually pleasant. I showed one fundamental limitation of Common Lisp in this regard: you can't define functions that take or return compound values, because Common Lisp doesn't have compound values in the first place.


> So the very first thing you do with your macro powers, and pretty much the only useful thing you can do, is break s-expressions. Once the first macro is written you can no longer assume that inputs to macros will be s-expressions with the function at the beginning and arguments following.

That's just completely wrong, and makes me wonder if you have actually spent much time writing Common Lisp at all. Macros are orthogonal to S-expressions: they manipulate code (which is just lists) in memory, after it's been read in from S-expressions by the reader. They don't break S-expressions at all (and indeed, the output of a macro is necessarily PRINTable as an S-expression). E.g., (macroexpand '(with-open-file (foo "/tmp/foo"))) expands to:

    (LET ((FOO (OPEN "/tmp/foo")) (#:G637 T))
         (UNWIND-PROTECT (MULTIPLE-VALUE-PROG1 (PROGN) (SETQ #:G637 NIL))
                         (WHEN FOO (CLOSE FOO :ABORT #:G637))))
When Lisp programmers talk of implementing a DSL with its own syntax, we almost always mean an S-expression-based DSL, where the fundamental syntax still uses S-expressions and the DSL is built atop them. An example would be CL-WHO[1], where HTML tags are represented e.g. as (:p "Some text " (:em "emphasised")).

Maybe you're thinking of read macros, which really can be used to implement other syntaxes, e.g. CLSQL[2], which uses custom reader syntax to implement inline SQL expressions? In that case, you're not wrong: implementing your own reader syntax does, indeed, break S-expressions. But it doesn't matter: due to the specification of the reader algorithm, neither elements containing nor elements within custom syntax care. The reader algorithm is quite elegant that way.

> I have to say I don’t care about macros, and actually think we’re better off without them. First-class functions are a much more coherent, consumable way of using code-as-data.

At the cost of being much more verbose. I've taken a look at trying to implement restarts in Go, and the sheer number of times I'd have to type 'func' is insane.

> But I tend to think that learning functional programming is the part that people are referring to

Once again, I disagree. IMHO writing Lisp makes one a better programmer because one starts to think of code as clay, which can be sculpted and crafted, moved around, deleted, generated &c. The worst good programmers I know of simply aren't comfortable with that concept; the best programmers I know are.

Compared to that, functional programming is merely interesting IMHO.

[1] http://weitz.de/cl-who/ [2] http://clsql.kpe.io/


>Let's talk about this fabled exchange between Norvig and McCarthy, where McCarthy asks if Python can gracefully manipulate code as data, and Norvig said no, and supposedly a thousand words were said in the ensuing silence.

Python has this: https://docs.python.org/2/library/ast.html so it is possible to manipulate code as data. It just happens that it's not homoiconic, but so what?


> Python has this: https://docs.python.org/2/library/ast.html so it is possible to manipulate code as data. It just happens that it's not homoiconic, but so what?

The difference is that when you can do it gracefully, you'll end up doing it and using this possibility to build your project. If you can't do it gracefully, you'll dread using it and will not consider it a possible solution to your problem unless all other options are exhausted.


Well, the key part is gracefully. Lots of languages can manipulate code as data, but so far, for me, only in homoiconic languages does it end up feeling graceful for a while (although, as I noted, it stops feeling graceful pretty quickly).


>I'm just a little tired of the cult around it

Me too, but you seem to not be entirely clued in to how macros work. So there's that.

>So the very first thing you do with your macro powers, and pretty much the only useful thing you can do, is break s-expressions. Once the first macro is written you can no longer assume that inputs to macros will be s-expressions with the function at the beginning and arguments following. Every future macro must account for every previous macro. The more you use the capability to manipulate code as data gracefully, the less graceful it becomes.

While perhaps more true in Common Lisp than in Scheme, that's still fairly untrue. Macro expansion is outermost-first, with each expansion re-expanded recursively, so macros don't have to worry about tripping over each other; but even setting that aside, you're still wrong. Macros are sexprs, just like anything else: they break the semantic rules of Lisp, NOT the syntactic rules of Lisp. Since macros operate primarily on a syntactic level (their job is to sugar over Lisp), and when they operate on a semantic level they're being used to write DSLs, which aren't Lisp, the problem you describe is rare to nonexistent.

>macros make your program harder to reason about.

Yes, but not for the reasons you think. Macros make your program harder to reason about for the same reason functions do: they're an abstraction. The higher level the abstraction you're using is, the harder it is to reason about your code, because by definition, you can't see what it's doing.

>Common Lisp programmers spend a ton of time talking about how to write macros so that they’re not going to come back and bite you in the butt when they get used in an unexpected situation. And the reason is, nobody really knows how to do it.

However, we Schemers (NOT Racketeers; Racket uses syntax-case instead, which, while similar-seeming, is an entirely different ball game, although it also fixes this) have had hygienic defmacro-style low-level macros for years now, and that fixes the worst of it.

>Python, for example has all those things.

...Not quite true. Python's functions are technically first-class objects, but the language pushes against using them that way: lambda is restricted to a single expression, which is part of why idiomatic Python doesn't use a lot of higher-order functions. Ruby does better, but of the trinity of popular high-level scripting languages, only JS makes functions feel truly first-class.

>People on this thread are claiming, “Learning Common Lisp turns you into a better programmer”. But I tend to think that learning functional programming is the part that people are referring to

It's not what I'd refer to. FP makes you a better programmer too, but learn it in Haskell or ML, or even Scheme, though the latter's not as good for that purpose. Learning Lisp exposes you to code-as-data in a very visceral way, and can help you understand a variety of programming concepts, but more importantly, it exposes you to new, different ways of programming, which always makes you a better programmer.

>And sure, there are other features that only Common Lisp has, but nobody is talking about those. Restarts? I’d love to see more people experimenting with those. Then again, Erlang has a way better threading model than anything else and much more sophisticated pattern matching. Scheme has call/cc. Standard ML has a powerful type system. Haskell has functional purity. Prolog has unification. A great many of these are more interesting than restarts.

I agree, but restarts are the one I want most. It might seem like we Schemers got a better deal with call/cc, and I love call/cc, but sometimes I look at restarts and wish we had gone the other way (call/cc and functioning restarts are effectively mutually exclusive). Trust me, call/cc looks cool, but so does self-rewriting asm. Both come in handy from time to time, but you rarely want either in production (and especially not if your Scheme doesn't have Cheney-on-the-MTA compilation, because then call/cc is criminally slow).

>I just don’t really think it’s the be-all and end-all of programming languages any more, and I’m kind of tired of the cult that has formed around it.

That's true. No language is, or ever can be, the be-all and end-all. Rust and FP taught me that: some paradigms and ideas require restrictions, others require freedom. There's always something new to learn. And the cult of Common Lisp is not one you need to join. In fact, if you're a good Lisper, you can't really be part of it.


A great read. Few words, and everything relevant mentioned.


> most Common Lisp implementations are native compiled. ... isn't it nice to compile down to the metal?

Is just the interpreter native, or does CL compile your app to native as well?


To native with elegance [1] and even closer to the metal "because sometimes C abstracts away too much" [2].

[1] https://www.pvk.ca/Blog/2014/08/16/how-to-define-new-intrins...

[2] http://www.pvk.ca/Blog/2014/03/15/sbcl-the-ultimate-assembly...


SBCL compiles to native. In fact, that's how you check whether a function was tail-call-optimized: it will have a JMP opcode instead of a CALL.


To add to this, most of them compile to native; I think the exception is CLISP. Certain implementations have both a compiler and an interpreter, because compiling was costly at one point.


> he simply asked if Python could gracefully manipulate Python code as data

Have there been any big LISP macro code injection vulnerabilities in the wild?


As the other comment says, that doesn't really make sense. However, I have had to debug a codebase where Lisp's unhygienic macros led to extremely weird errors. The second time that happened in a big project, I decided that Scheme's hygienic macros are superior. The extra cruft of writing them is worth it. I have literally spent weeks debugging weird macro errors introduced years after the macro was first written.


You're not wrong: simple macros can be written poorly. Macros which don't properly use GENSYM should be rejected in review. Also, it's entirely possible to write a hygienic macro implementation with DEFMACRO: nothing stops a team from doing that (or using one off-the-shelf).
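To make the failure mode concrete, here's the classic capture bug that review should catch (OR2 is a throwaway macro for illustration):

```lisp
;; BAD: the literal symbol TMP can capture a caller's variable.
(defmacro or2-bad (a b)
  `(let ((tmp ,a)) (if tmp tmp ,b)))

(let ((tmp 5))
  (or2-bad nil tmp))     ; => NIL, not 5: the caller's TMP was captured

;; GOOD: a fresh name from GENSYM cannot collide with anything.
(defmacro or2-good (a b)
  (let ((g (gensym)))
    `(let ((,g ,a)) (if ,g ,g ,b))))

(let ((tmp 5))
  (or2-good nil tmp))    ; => 5
```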

But sometimes you do need the full power of DEFMACRO, and it's not possible to write DEFMACRO with DEFINE-SYNTAX. So one tool enables you to do anything you need, and the other only some of what you need. I know what I prefer.


Let's agree to agree, then :) I much prefer Racket's macros. Their version of syntax-case is a bit simplified compared to the one shipped with other Schemes like Guile and Chez.

If you want to see where you can go by (ab)using racket macros, watch this: https://www.youtube.com/watch?v=WQGh_NemRy4


What do you mean with "macro code injection vulnerabilities"? Macros are expanded before the code is compiled; by the time someone is running the program there are no macros. Well, technically, someone could call `EVAL` in a production app, but that's strongly discouraged.


Did Kenny ever release his educational software?


It appears so, yes: http://tiltonsalgebra.com/#


This is amazing. Kenny, please fix the fonts.


Is there any advantage in using a functional language when programming non-mathematically related code?

I see how it can help write neural networks, AI, financial code, etc. But if you know Python, Ruby, PHP, Java, and Lisp (well), why would you choose to write a blog, webstore, webmail client, or social media* app (which is probably about 90% of what people actually do) in Lisp over the first four?

I'll tell you why I would choose the first four:

1. Large community.
2. Many more libraries.
3. Easier to hire.

*Except for (perhaps) a small spam-block/feed ai module


Yes. Functional programming (although it isn't necessarily a trait of Lisp) has, in fact, little relation to math (despite what many people believe). It's also very far removed from the field of AI right now (not for any particular reason; there's no more specific reason to switch to Lisp for AI than there is for desktop or web apps). The reason you'd use Lisp is quite simple: it saves time and makes programming easier. Macros are the biggest time-saver in the world, and you don't realize how much time you waste writing repetitive code until you get to use them.

As for your three reasons in favor of the other languages, I think the first means almost nothing at all. Lisp is much easier (once you grasp it) than many other languages, so the fact that you'll get fewer answers on StackOverflow is irrelevant. As for the second, as long as you have one library that works for what you want -- say, some library for writing web servers -- it doesn't really matter after that. There's rarely any reason to reinvent the wheel here, and there's always someone who attempted such a common task before you. Just because you don't have 3,000 different choices like in Java doesn't mean you won't find high-quality code for what you need to do (in fact, I'd argue that Lisp code has much higher quality on average than Java code). As for hiring, I think Paul Graham has already given the best comments on this. Simply put, a good programmer can be taught to write good Lisp, even if they don't know it by the time you hire them.


I'd say web's request -> response model is pretty well suited to functional programming.

It seems popular amongst JavaScript programmers at the moment as well. React, which appears to be the current hotness, is based on making a functional interface to the DOM.

Although actually, I'd say the main reason to use a Lisp for web gunk is the macros rather than the functional programming. The tree structure maps very naturally onto HTML generation. There are some really nice libraries (in particular I'm familiar with Clojure libraries) which make the server-side part of a web app really neat. You can basically skip the whole 'templating' bit.
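To show what's meant by the tree structure mapping onto HTML, markup in the CL-WHO style looks roughly like this (a sketch based on CL-WHO; see http://weitz.de/cl-who/ for the actual API):

```lisp
;; The s-expression tree IS the DOM tree, and ordinary Lisp code
;; (here, DOLIST) nests freely inside the markup.
(with-html-output-to-string (s)
  (:ul :class "posts"
    (dolist (title '("First post" "Second post"))
      (htm (:li (:a :href "#" (str title)))))))
```

Because the markup is just data, there's no separate template language to learn and no string interpolation to get wrong.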


The whole manipulating-code-as-data thing just always seems like such a rich target for exploitation, like that Ruby YAML bug from a while ago [0]. It seems like a feature that would be great if your code never needs data from anywhere else, but as soon as you have data coming in from other sources it would be a bit of a nightmare.

0: http://blog.codeclimate.com/blog/2013/01/10/rails-remote-cod...




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: