
Once a year or so I find myself on those forums, and I'm always astounded by how many people dedicate massive amounts of time and brain power to this.


I think it scratches the same itch that languages like Brainfuck do.

There's something exceedingly interesting about how you can model complexity with something extremely simple. Brainfuck is fun because it forces you to think at an extremely low level; ultimately it is basically just a raw implementation of a Turing machine. I wouldn't want to write a big program in it, but it is fun to think about how you might express a complicated algorithm with it.

Similarly with CGOL, it is really interesting to see how far you can stretch really simple rules into something really complex.

I've written CGOL dozens of times; it's a common project that I do to "break in" a language I've learned, since it's not completely trivial but it's simple enough to not be frustrating. I completely understand why math/computability-theory folks find it something to dedicate brain power to.


I have a weird love for Brainfuck. It's a tiny, incredibly simple language that you can write an interpreter for in an hour, and it's effectively a super-simple bytecode that can easily be extended and used as a compilation target for simple languages.
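
Here's a rough sketch of what I mean, in Python (just illustrative: no bounds checks, no bracket-mismatch handling):

    import sys

    def run(program, tape_len=30000):
        tape = bytearray(tape_len)     # the data tape, all zeroes
        ptr = 0                        # data pointer
        # Pre-match the brackets so loops become O(1) jumps.
        jumps, stack = {}, []
        for i, c in enumerate(program):
            if c == '[':
                stack.append(i)
            elif c == ']':
                j = stack.pop()
                jumps[i], jumps[j] = j, i
        pc = 0                         # program counter
        while pc < len(program):
            c = program[pc]
            if c == '>':   ptr += 1
            elif c == '<': ptr -= 1
            elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
            elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
            elif c == '.': sys.stdout.write(chr(tape[ptr]))
            elif c == ',':
                ch = sys.stdin.read(1)
                tape[ptr] = ord(ch) % 256 if ch else 0
            elif c == '[' and tape[ptr] == 0: pc = jumps[pc]
            elif c == ']' and tape[ptr] != 0: pc = jumps[pc]
            pc += 1

    run("++++++++++[>+++++++<-]>++.")   # prints 'H' (10 * 7 + 2 = 72)

The bracket pre-pass is the only "clever" part; everything else is a direct transcription of the eight commands.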

Honestly, as an educational tool, the only thing wrong with it is the name!


For those who think Brainfuck is too pedestrian, have a browse through the esolang wiki:

https://esolangs.org/wiki/Language_list


StupidStackLanguage is by far my favorite:

https://esolangs.org/wiki/StupidStackLanguage


Piet is mine - the programs are 2D images: https://esolangs.org/wiki/Piet

Primarily because of the note on the "calculating pi" example program:

> Richard Mitton supplies this amazing program which calculates an approximation of pi... literally by dividing a circular area by the radius twice.

> Naturally, a more accurate value can be obtained by using a bigger program.

https://www.dangermouse.net/esoteric/piet/samples.html


One of my favorite calculations of pi is to pick random coordinates in a unit square and count how many of them land inside a circle. It's so stupid and so clever at the same time.

This was recreated from memory. I think it is close but I may have a bounding bug.

    import random

    def pi(count):
      inside = 0
      for i in range(count):
        test_x = random.random() 
        test_y = random.random()
        if test_x ** 2 + test_y ** 2 < 1:
          inside += 1
        return inside / count * 4 #above is a quarter circle
    
    print(pi(2 ** 30) )

With the metatopic of this thread being obscure languages, I had some fun squeezing this into some list comprehensions (maybe someone's got an idea of how to keep track of the state within the list):

```

$ cat << EOF > pi.py

state = [0, 0, 2**8, 2**12]; _ = [print(f'\rRun {state.__setitem__(0, state[0] + 1) or state[0]}/{state[3]} | Last \u03c0: {current_pi:.6f} | *Average \u03c0: {(state.__setitem__(1, state[1] + current_pi) or state[1]) / state[0]:.6f}*', end='', flush=True) for current_pi in [(4 * sum([1 for _ in range(state[2]) if __import__("random").random()**2 + __import__("random").random()**2 < 1]) / state[2]) for _ in range(state[3])]]; print()

EOF

$ time python3 pi.py

Run 4096/4096 | Last π: 3.140625 | *Average π: 3.143051*

python3 pi.py 0.41s user 0.01s system 99% cpu 0.429 total

```

Play around with the `2**8` and `2**12` values in the state; they control the number of samples per round and the number of rounds respectively.


I'm not sure about a bounding bug, but there's definitely an indent error on the return line (good old Python!)
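
For the record, dedenting the return (and changing nothing else) gives the intended estimator; whether the test is `< 1` or `<= 1` makes no practical difference, so there's no real bounding bug:

    import random

    def pi(count):
        inside = 0
        for _ in range(count):
            x = random.random()
            y = random.random()
            if x * x + y * y < 1:      # point lands inside the quarter circle
                inside += 1
        return inside / count * 4      # quarter circle -> scale by 4

    print(pi(10 ** 6))                 # ~3.14; 2 ** 30 samples also works but is slow in pure Python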


I have no idea how I'd be able to pitch this to a university (or even who I could pitch it to), but I would absolutely love to teach a computability course using Brainfuck as the language, just to really show students how low-level logic can be.

I would probably need to find a similar language with a different name though.


You might find mlatu-6[0] interesting: it’s convertible to SKI calculus but concatenative (like Forth) rather than applicative. It’s actually a subset of Mlatu, a language I created for similar reasons, to explore “how low can you go.”

[0]: https://esolangs.org/wiki/Mlatu-6


When I was an undergrad at Georgia Tech, one of my intro computer science classes had us implement something in brainfuck. Turns out college kids are quite comfortable with swear words.

Of course they are ... it's the college administration that's uncomfortable

How about Assembly?

Assembly is higher-level than Brainfuck, especially on modern chips. You have built-in instructions for arithmetic and conditionals/branches, and you can allocate memory and point to it.

You don’t really get any of that with Brainfuck. You have a theoretical tape and counters, and that’s basically it.


SKI calculus is pretty neat, too. You get no tape, no counters. (But it's not quite as bad to program in as Brainfuck, because you can build more ergonomic contraptions to help you along.)

SKI can, of course, be de-optimised a bit further by replacing I with SKK. You are right though that it is relatively simple to go from something that looks like a normal programming language to a pile of S and K combinators. Not the most efficient way to compute, though!
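
To make the I = S K K bit concrete, here are the combinators as curried Python lambdas (purely illustrative; nobody should compute this way):

    # S x y z = x z (y z);  K x y = x
    S = lambda x: lambda y: lambda z: x(z)(y(z))
    K = lambda x: lambda y: x

    I = S(K)(K)            # S K K z = K z (K z) = z, so this is the identity
    print(I(42))           # 42
    print(I("anything"))   # anything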

Unlambda solves that problem.

"What if we had the lambda calculus without the lambda forms?" asked no one.

http://www.madore.org/~david/programs/unlambda/


Oh my god, that list lacks DATAFlex!

Dear Lord.

I remain utterly baffled by how they made a Lisp compiler with Malbolge

> I've written CGOL dozens of times; it's a common project that I do to "break in" a language I've learned, since it's not completely trivial but it's simple enough to not be frustrating. I completely understand why math/computability-theory folks find it something to dedicate brain power to.

Writing a naive CGOL is fun and quick. But writing a _fast_ one can get arbitrarily complicated.

https://en.wikipedia.org/wiki/Hashlife is one particular example of where you can go for a faster-than-naive CGOL.
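
For contrast, the naive version really is only a handful of lines; here's a sketch using a set of live cells (so the board is effectively unbounded):

    from collections import Counter
    from itertools import product

    def step(live):
        """One Game of Life generation; `live` is a set of (x, y) cells."""
        # Count live neighbours for every cell adjacent to a live cell.
        counts = Counter((x + dx, y + dy)
                         for x, y in live
                         for dx, dy in product((-1, 0, 1), repeat=2)
                         if (dx, dy) != (0, 0))
        # Birth on exactly 3 neighbours, survival on 2 or 3.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))   # the same glider, shifted one cell diagonally

Hashlife then gets its speedup by memoising whole repeating space-time blocks of that evolution instead of stepping cell by cell.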


Yeah, a million years ago I did that one as a convolution so I could run it on the GPU when I was learning OpenCL. That was my first exposure to “optimizing” CGOL.
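
In case anyone wants to try that idea, the CPU version looks roughly like this with scipy (the GPU version is the same neighbour-count convolution written as a kernel):

    import numpy as np
    from scipy.signal import convolve2d

    KERNEL = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])

    def step(grid):
        """One generation: neighbour counts via a 2D convolution, then the birth/survival rule."""
        neighbours = convolve2d(grid, KERNEL, mode='same', boundary='wrap')
        return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(np.uint8)

    rng = np.random.default_rng(0)
    grid = (rng.random((64, 64)) < 0.25).astype(np.uint8)
    for _ in range(100):
        grid = step(grid)
    print(int(grid.sum()), "cells alive after 100 generations")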

In the music world there are people who will build whole symphonies out of one sample, one filter, and one delay patch.

"So did Achilles lose his friend in war, and Homer did no injustice to his grief by writing about it in dactylic hexameters" - Tobias Wolff, Old School

Same with Your World of Text [0], still going strong.

[0] https://www.yourworldoftext.com/


If you find that fascinating, then you'll be blown away by something called the 'Wolfram Physics Project'. It is basically trying to recreate all of physics using baseline 'graph update' rules, like the Game of Life. So far no predictions yet, but very interesting.


Wolfram is kind of obsessed with cellular automata; he even went and wrote a whole book about them titled "A New Kind of Science". The reception to it was a bit mixed. CA are Turing-complete, so yeah, you can compute anything with them; I'm just not sure that in itself leads to any greater Revealed Truths. Does make for some fun visualizations, though.


A New Kind of Science is one of my favorite books. I read the entirety of it on an iPod Touch during a dreadful vacation when I was 19 or 20.

It goes well beyond just cellular automata; the thousand pages or so all seem to drive home the same few points:

- "I, Stephen Wolfram, am an unprecedented genius" (not my favorite part of the book) - Simple rules lead to complexity when iterated upon - The invention of field of computation is as big and important of an invention as the field of mathematics

The last one is less explicit, but it's what I took away from it. Computation is of course part of mathematics, but it is a kind of "live" mathematics. Executable mathematics.

Super cool book and absolutely worth reading if you're into this kind of thing.


I would give the same review, without seeing any of this as a positive. NKS was bloviating, grandiose, repetitive, and shallow. The fact that Wolfram himself didn’t show that CA were Turing complete, when most theoretical computer scientists would say “it’s obvious, and not that interesting”, kinda disproves his whole point about being an underappreciated genius. Shrug.

That CA in general were Turing complete is 'obvious'. What was novel is that Wolfram's employee proved something like Turing completeness for a 1d CA with two states and only three cells total in the neighbourhood.

I say something-like-Turing-completeness because it requires a very specially prepared tape to work, which makes it a bit borderline. (But please look it up properly, this is all from memory.)

Having said all that, the result is a nice optimisation / upper bound on how little you need in terms of CA to get Turing completeness, but I agree that philosophically nothing much changes compared to having to use a slightly more complicated CA to get to Turing completeness.
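
For anyone who wants to see what such a CA looks like, a generic simulator for the 256 "elementary" rules (two states, three-cell neighbourhood) fits in a few lines; the one the universality result is about is rule 110, with, as noted, a specially prepared infinite background:

    def elementary_ca(rule, width=64, steps=30):
        """1D, two-state, three-cell-neighbourhood CA in Wolfram's rule numbering."""
        # Bit k of the rule number gives the new state for neighbourhood k = 4*left + 2*centre + right.
        table = {(a, b, c): (rule >> (4 * a + 2 * b + c)) & 1
                 for a in (0, 1) for b in (0, 1) for c in (0, 1)}
        row = [0] * width
        row[width // 2] = 1                     # start from a single live cell
        for _ in range(steps):
            print(''.join('#' if x else '.' for x in row))
            row = [table[(row[i - 1], row[i], row[(i + 1) % width])]
                   for i in range(width)]

    elementary_ca(110)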


The question ultimately resolves to whether the universe can be quantized at all levels or whether it is analog. If it is quantized I demand my 5 minutes with god, because I would see that as proof of all of this being a simulation. My lack of belief in such a being makes me hope that it is analog.


Computation does not necessarily need to be quantized and discrete; there are fully continuous models of computation, like ODEs or continuous cellular automata.

That's true, but we already know that a bunch of stuff about the universe is quantized. The question is whether that holds true for everything. And all 'fully continuous models of computation' in the end rely on a representation that is a quantized approximation of an ideal. In other words: any practical implementation of such a model that does not end up being a noise generator or an oscillator, and that can be used for reliable computation, is - as far as I know - based on some quantized model. And then there are still the cells themselves (arguably quanta) and their location (usually on a grid, but you could use a continuous representation for that as well). Now, 23 or 52 bits of mantissa (depending on the size of the float representation you use for the 'continuous' values) is a lot, but it is not actually continuous. That's an analog concept, and you can't really implement it with high enough fidelity on a digital computer.

You could do it on an analog computer but then you'd be into the noise very quickly.

In theory you can, but in practice this is super hard to do.


If your underlying system is linear and stable, you can pick any arbitrary precision you are interested in and compute all future behaviour to that precision on a digital computer.

Btw, quantum mechanics is both linear and stable--and even deterministic. Admittedly it's a bit of a mystery how the observed chaotic nature of eg Newtonian billiard balls emerges from quantum mechanics.

'Stable' in this case means that small perturbations in the input only lead to small perturbations in the output. You can insert your favourite epsilon-delta formalisation of that concept, if you wish.

To get back to the meat of your comment:

You can simulate such a stable system 'lazily'. Ie you simulate it with any given fixed precision at first, and (only) when someone zooms in to have a closer look at a specific part, you increase the precision of the numbers in your simulation. (Thanks to the finite speed of light, you might even get away with only re-simulating that part of your system with higher fidelity. But I'm not quite sure.)

Remember those fractal explorers like Fractint that used to be all the rage: they were digital at heart---obviously---but you could zoom in arbitrarily as if they had infinite continuous precision.
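
That lazy-precision idea is easy to demo: "zooming in" is just widening the number format on demand, e.g. with Python's stdlib decimal module (a toy illustration, obviously not a physics simulation):

    from decimal import Decimal, getcontext

    # Two coordinates that differ only at the 30th decimal place -- a "deep zoom".
    a = "0.250000000000000000000000000001"
    b = "0.250000000000000000000000000002"

    # At ordinary double precision the two points collapse to the same number...
    print(float(a) - float(b))        # 0.0

    # ...but widen the working precision and the difference is representable again.
    getcontext().prec = 50
    print(Decimal(a) - Decimal(b))    # -1E-30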


> If your underlying system is linear and stable

Sure, but that 'If' isn't true for all but the simplest analog systems. Non-linearities are present in the most unexpected places and just about every system can be made to oscillate.

That's the whole reason digital won out: not because we can't make analog computers but because it is impossible to make analog computers beyond a certain level of complexity if you want deterministic behavior. Of course with LLMs we're throwing all of that gain overboard again but the basic premise still holds: if you don't quantize you drown in an accumulation of noise.


> Sure, but that 'If' isn't true for all but the simplest analog systems.

Quantum mechanics is linear and stable. Quantum mechanics is behind all systems (analog or otherwise), unless they become big enough that gravity becomes important.

> That's the whole reason digital won out: not because we can't make analog computers but because it is impossible to make analog computers beyond a certain level of complexity if you want deterministic behavior.

It's more to do with precision: analog computers have tolerances. It's easier and cheaper to get to high precision with digital computers. Digital computers are also much easier to make programmable. And in the case of analog vs digital electronic computers: digital uses less energy than analog.


Just be careful with how aligned to reality your simulation is. When you get it exactly right, it's no longer just a simulation.

"It looks designed" means nothing. It could be our ignorance at play (we have a long proven track record of being ignorant about how things work).

Yes. Or it could be an optimisation algorithm like evolution.

Or even just lots and lots of variation and some process selecting which one we focus our attention on. Compare the anthropic principle.


For all we know, it could be distinct layers all the way down to infinity. Each time you peel one, something completely different comes up. Never truly knowable. The universe has thrown more than a few hints that our obsession with precision and certainty could be seen cosmically as "silly".

In our current algorithm-obsessed era, this is reminiscent of procedural generation (but down/up the scale of complexity, not "No Man's Sky"-style PG).

However, we also have a long track record of seeing the world as nails for our latest hammer. The idea of an algorithm, or even computation in general, could in reality be conceptually closer to "pointy stone tool" than "ultimate substrate".


> For all we know, it could be distinct layers all the way down to infinity. Each time you peel one, something completely different comes up. Never truly knowable. The universe has thrown more than a few hints that our obsession with precision and certainty could be seen cosmically as "silly".

That's a tempting thing to say, but quantum mechanics suggests that we don't have infinite layers at the bottom. Mostly thermodynamic arguments combined with quantum mechanics. See eg https://en.wikipedia.org/wiki/Bekenstein_bound for the amount of information that can even in theory be contained in a specific volume of spacetime.


From the link you shared:

> the maximum amount of information that is required to perfectly describe a given physical system _down to the quantum level_

(emphasis added by me)

It looks like it makes predictions for the quantum layer and above.

--

Historically, we humans have a long proven track record of missing layers at the bottom that were unknown but now are known.


If you're interested in science fiction based on this concept, Greg Egan has a book called Permutation City which is pretty interesting.

I think I've read it, or maybe a portion of it; it was not very captivating. Will try again.


