Tangential, but to people who find this topic interesting I highly recommend the book What Is Intelligence? [0] by Blaise Agüera y Arcas, which views life through the lens of mutating, self-replicating Turing machines.
In the book he also talks about GoF, but one of the fascinating experiments he describes is this: a "computronium" of initially random Brainfuck programs (which obviously don't do anything interesting in the beginning) that mutate (by flipping random bits) and merge (by taking two random programs and sticking them together) will eventually, after a sudden phase transition, start to self-replicate, producing gradually better copies of themselves!
He also argues that symbiogenesis (merging replicators into a whole that does more than its parts) is the main driving force of evolution, rather than random mutation alone, because the random Brainfuck computronium eventually produces replicators even without the random bit flips.
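To give a flavor of how dumb the ingredients are (this is just my own toy sketch in Clojure, not the book's actual setup - in particular I'm omitting the Brainfuck interpreter that actually executes the tapes), the two operations amount to roughly:

    ;; programs are just vectors of bytes, initially pure noise
    (defn random-program [n]
      (vec (repeatedly n #(rand-int 256))))

    ;; mutation: flip one random bit of one random byte
    (defn mutate [prog]
      (update prog (rand-int (count prog)) bit-flip (rand-int 8)))

    ;; merge: stick two randomly chosen programs together
    (defn merge-programs [soup]
      (into (rand-nth soup) (rand-nth soup)))

The striking claim is that the phase transition and the self-replicators emerge from nothing more than iterating operations like these over a large enough population.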
> immutability in Clojure doesn't mean inefficiency.
You are still doing a gazillion allocations compared to:
for (let i = 0; i < data.length; i++) { hist[data[i]]++; }
But apart from that, mutable code is in many cases just much clearer than something like your fold above. Sometimes it's genuinely easier to assemble a data structure "as you go" rather than "bottom up" as in FP.
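For reference, the fold in question is presumably something like this (my paraphrase, not the exact code):

    (reduce (fn [hist x] (update hist x (fnil inc 0))) {} data)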
Sure, that’s faster. But do you really care? How big is your data? How many distinct things are you counting? What are their data types? All that matters. It’s easy to write a simple for-loop and say “It’s faster.” Most of the time, it doesn’t matter that much. When that’s the case, Clojure allows you to operate at a higher level with inherent thread safety. If you figure out that this particular code matters, then Clojure gives you the ability to optimize it, either with transients or by dropping down into Java interop where you have standard Java mutable arrays and other data structures at your disposal. When you use Java interop, you give up the safety of Clojure’s immutable data structures, but you can write code that is more optimized to your particular problem. I’ll be honest that I’ve never had to do that. But it’s nice to know that it’s there.
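For the record, those two escape hatches look roughly like this (untested sketches; the max-val parameter is just an assumption for illustration, mirroring what the JS snippet implicitly assumes about the data):

    ;; transients: mutation stays local, a persistent map comes out
    (defn histogram [data]
      (persistent!
        (reduce (fn [h x] (assoc! h x (inc (get h x 0))))
                (transient {})
                data)))

    ;; Java interop: a plain mutable int array, assuming values are
    ;; small non-negative ints in [0, max-val]
    (defn histogram-array [^ints data max-val]
      (let [hist (int-array (inc max-val))]
        (dotimes [i (alength data)]
          (let [v (aget data i)]
            (aset hist v (unchecked-inc-int (aget hist v)))))
        hist))

Same algorithm, same O(n); you only pay the extra complexity where profiling says it matters.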
The allocation overhead rarely matters in practice - in some cases it does. For the majority of "general-purpose" tasks (web services, etc.) it doesn't: GC is extremely fast and allocations are cheap on modern VMs.
The second point I don't even buy anymore - once you're used to `reduce`, it's equally (if not more) readable. Besides, in practice you don't typically reach for it - there are tons of helper functions in the core library for working with data. I'd probably just use `(frequencies coll)` - I didn't mention it earlier only so it wouldn't feel like I'm cheating. One function call - still O(n), idiomatic, no reduce boilerplate, the intent is crystal clear. Aggressively optimized under the hood and far more readable.
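To make that concrete:

    (frequencies [1 2 2 3 3 3])
    ;;=> {1 1, 2 2, 3 3}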
Let's not get into strawman olympics - I'm not selling snake oil. Clojure wasn't written in some garage by a grad student last week - it's a mature, battle-tested language endorsed by many renowned CS people, and there are tons of companies using it in production. In the context of (im)mutability it clearly demonstrates incontestable, pragmatic benefits. Yes, of course, it's not a silver bullet, nothing is. There are legitimate cases where it's not a good choice, but you can argue that point about pretty much any tool.
If there were a language that didn't require pure and impure code to look different, but still tracked mutability at the type level like the ST monad (so you can't call an impure function from a pure one) - so not Clojure - then that'd be perfect.
But as it stands, immutability often feels like jumping through unnecessary hoops for little real gain.
There's no such thing as "perfect" for everyone and for every case.
> feels like jumping through unnecessary hoops for little gain really.
I dunno what you're talking about - Apple runs its payment backend on it; Walmart its billing system; Cisco its cybersec stack; Netflix its social data analysis; Nubank serves a huge chunk of Latin America - they're all running Clojure, pushing massive amounts of data through it.
I suppose they just have a shitload of money and can afford to jump through "unnecessary hoops". But wait, why then are tons of smaller startups running on Clojure, or on Elixir? I guess they just don't know any better - stupid fucks.
But OK, if mutability is always worse, why not use a pure language then? No more cowardly `swap!` and transient data structures, or sending messages back and forth as in Erlang.
But then you get to monads (otherwise you'd end up with Elm, and I'd like to see Apple's payment backend written in Elm), monad transformers, arrows and the like - and coincidentally that's when many Clojure programmers start whining about "jumping through unnecessary hoops" :D
Anyway, this was just a personal observation I've arrived at after being an FP zealot for a decade. All is good, no need to convert me, Clojure is cool :)
Clojure is not "cool". As a matter of fact, to a novice it can look downright distasteful. Ask anyone with prior programming experience - Python, JS, Java - to read some Clojure code for the first time and watch them cringe.
What Clojure actually is: a down-to-earth PL that values substance over marketing and prioritizes developer happiness in the long run - which comes in a spectrum; it doesn't pretend everyone wants the same thing. A junior can write useful code quickly, while someone who wants to dive into FP theory can do that too. Both are first-class citizens.
Qualia exist "outside" spacetime, e.g. redness doesn't have a position in spacetime. If consciousness is purely physical, then how can two identical systems (identical brains with identical sensory input), which by definition give rise to the same qualia, not literally be the same consciousness?
> Qualia exist "outside" spacetime, e.g. redness doesn't have a position in spacetime.
I'm sensing redness here and now, so the sensation of redness exists very clearly tied to a particular point in spacetime. In what sense is the qualia of redness not firmly anchored in spacetime? Of course, you could talk about the concept redness, like the concept Pi, but even then, these concepts exist in the mind of a human thinking about them, still tied to a particular location in spacetime.
> If consciousness is purely physical, then how can two identical systems (identical brains with identical sensory input), which by definition give rise to the same qualia, not literally be the same consciousness?
The two brains don't receive the same sensory inputs - nothing in the experiment says they do. From the moment the duplicate is created, their sensory inputs diverge, and so they become separate consciousnesses with the same history. They are interchangeable initially: if you gave the same sensory inputs to either of them, they would produce the same output (even internally). But they are not one and the same: giving some sensory input to one of them will not directly create any effect in the other.
> I'm sensing redness here and now, so the sensation of redness exists very clearly tied to a particular point in spacetime. In what sense is the qualia of redness not firmly anchored in spacetime? Of course, you could talk about the concept redness, like the concept Pi, but even then, these concepts exist in the mind of a human thinking about them, still tied to a particular location in spacetime.
But qualia are inherently subjective. You can correlate brain activity (which exists at a position in spacetime) with subjective experience, but the experience itself is not located in spacetime.
Said otherwise: imagine you are in the Matrix at a coffee shop and sense redness, but your brain is actually in a vat somewhere being fed fake sensory input. "Where" is the redness? You would clearly say that it arises in your brain in the coffee shop. Imagine then the vat is moved (so its position in spacetime changes), your brain is rolled back to its previous state, and then fed the same sensory input again. Where is the redness now?
You can't differentiate the two sensations of redness based on the actual position of the brain in spacetime. For all intents and purposes, they are the same. Qualia only depend on the internal brain state at a point in time and on the sensory input. Spacetime is nowhere to be found in that equation.
> The two brains don't receive the same sensory inputs
But let's say they do. Identical brains, identical inputs = identical qualia. What differentiates both consciousnesses?
> But let's say they do. Identical brains, identical inputs = identical qualia. What differentiates both consciousnesses?
I'll start with this, because it should help with the other item. We know there are two identical consciousnesses precisely because they are separate in spacetime. That is, while I can send the same input to both and get the same mind, that's not the interesting thing. The interesting thing is that I can also send different inputs, and then I'll get different minds. If it really were a single consciousness, that would be impossible. For example, you can't feed me both pure redness and pure greenness at the same time, so I am a single consciousness.
Here is where we get back to the first item: if we accepted that qualia are not localized in spacetime, we'd have to accept that there is no difference between me experiencing redness and you experiencing redness. Even if you consider that your qualia are separate from my own because of our different contexts, that still doesn't fully help: perhaps two different beings on two different planets happen to lead identical lives up to some point when a meteorite hits one of the planets and gravely injures one of their bodies. Would you say that there was a single consciousness that both bodies shared, but that it suddenly split once the meteorite hit?
Now, that is a valid position to take, in some sense. But then that means that consciousness is not continuous in any way, in your view. The day the meteorite hit planet A is not special in any way for planet B. So, if the single consciousness that planet A and planet B shared stopped that day, only to give rise to two different consciousnesses, that means this same phenomenon must happen every day, and in fact at every instant of time. So, we now must accept that any feeling of time passing must be a pure illusion, since my consciousness now is a completely different consciousness than the one that experienced the previous minute. While this is a self-consistent definition, it's much more alien than the alternative - where we would accept that consciousness is tied in spacetime to its substrate.
> Would you say that there was a single consciousness that both bodies shared, but that it suddenly split once the meteorite hit?
I agree, this is super weird. In a sense this seems to be the difference between viewing consciousness from the first person vs the third person. But until we understand how (if at all) matter generates felt experience, the latter view cannot, by definition, be about consciousness itself.
I guess this kind of perspective commits one to viewing first-person experience the way we understand abstract concepts - it is nonsensical to ask what the difference is between this "1" here and that other "1" over there. Well, you can say they are at different positions and written in different materials etc., but those are not properties of the concept "1" anymore.
So yes, coming back to the thought experiment, one of the consequences of that would have to be that both bodies share the same consciousness and the moment something diverges the consciousnesses do too.
The point about time is interesting, and also directly related to AI. If at some point machines become conscious (leaving aside the question of whether that's possible at all and how we would know without solving the aforementioned hard problem), they would presumably have to generate qualia at discrete steps. But is that so strange? The nothingness in between would not be felt (kind of like going to sleep and waking up "the next moment").
But maybe this idea can be applied to dynamical continuous systems as well, like the brain.
(Btw this conversation was super interesting, thank you!)
> How does Claude write rhyming poetry? Consider this ditty:
> He saw a carrot and had to grab it,
> His hunger was like a starving rabbit
> To write the second line, the model had to satisfy two constraints at the same time: the need to rhyme (with "grab it"), and the need to make sense (why did he grab the carrot?). Our guess was that Claude was writing word-by-word without much forethought until the end of the line, where it would make sure to pick a word that rhymes. We therefore expected to see a circuit with parallel paths, one for ensuring the final word made sense, and one for ensuring it rhymes.
> Instead, we found that Claude plans ahead. Before starting the second line, it began "thinking" of potential on-topic words that would rhyme with "grab it". Then, with these plans in mind, it writes a line to end with the planned word.
In split-brain experiments this is exactly how one half of the brain retroactively justifies the action of the other half. Maybe it is the case in LLMs that an overpowered latent feature sets the overall direction of the "thought" and then inference just has to make the best of it.
Negative comments are so weird - it's like people forgot what GPT-2 was like. I know this isn't completely new, but it's a world simulation inside a goddamn LLM. Not perfect, not coherent over longer time periods, but still insane. I swear, if tomorrow magic turned out to be real and wizards started controlling the literal fabric of the universe, people would be like "meh" before the week ends :D
I'm still reading it, but something caught my eye:
> I interpret there to typically be hand waving on all sides of this issue; people concerned about AI risks from limited models rarely give specific failure cases, and people saying that models need to be more powerful to be dangerous rarely specify any conservative bound on that requirement.
I think these are two sides of the same coin - on one hand, AI safety researchers can very well give specific failure cases of alignment that have no known solutions so far, and they take this issue seriously (and have been doing so for years while trying to raise awareness). On the other, finding and specifying that "conservative bound" precisely and in a foolproof way is exactly the holy grail of safety research.
I think the holy grail of safety research is widely understood to be a recipe for creating a friendly AGI (or, perhaps, a proof that dangerous AGI cannot be made, but that seems even more unlikely). Asking for a conservative lower bound is more like "at least prove that this LLM, which has finite memory and can only answer queries, is not capable of devising and executing a plan to kill all humans", and that turns out to be more difficult than you'd think even though it's not an AGI.
The problem is that concur-static generates static JS code that encodes all possible UI state transitions - so if the state space is big or infinite, the generated JS will be too.
I wanted to explore the viability of generating simple static UIs with some level of dynamism. concur-static is definitely not intended as a replacement for full-blown client side UI libraries/frameworks.
> so if the state space is big or infinite, so will be the resulting generated JS.
I understand that, but it's not like this doesn't already choke on big input spaces. Why not allow me to use a big input space with a small state space by exploring the state space instead? Otherwise you just seem to be exploiting the fact that small input spaces make for small state spaces, which seems like an unnecessary indirection.
Or is it to try to enforce small state spaces to prevent programmer error?
That seems like a good idea indeed, however I'm not sure how it'd look in practice. The transition from input space to state space happens in the event handlers, which currently look like this:
onClick :: Bounded a => Enum a => a -> VDOM a
If I understand correctly, you're proposing something like:
onClick :: Bounded a => Enum a => a -> VDOM b
Where b can be whatever (e.g. an Int)? How would a be converted into b?
[0] https://whatisintelligence.antikythera.org