Error tolerance in this context means the parser produces a walkable AST even if the input code is syntactically invalid, instead of just throwing/reporting the error. It’s useful for IDEs, where the code is often in an invalid state as the developer is typing, but you still want to be able to report diagnostics on whatever parts of the code are syntactically valid.
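Here's a minimal sketch of the idea in Python, with a toy expression grammar and node shapes invented purely for illustration: on an unexpected token the parser records an error node and keeps going, so callers always get a tree they can walk and report diagnostics on.
```
import re

def tokenize(src):
    # Toy lexer: integers and the two operators, everything else ignored.
    return re.findall(r'\d+|[+*]', src)

def parse(tokens):
    pos = 0

    def atom():
        nonlocal pos
        if pos < len(tokens) and tokens[pos].isdigit():
            tok = tokens[pos]
            pos += 1
            return ('num', tok)
        # Recovery: emit a placeholder node instead of raising, and let
        # the caller resynchronize on the next operator.
        return ('error', tokens[pos] if pos < len(tokens) else '<eof>')

    def expr():
        nonlocal pos
        node = atom()
        while pos < len(tokens) and tokens[pos] in '+*':
            op = tokens[pos]
            pos += 1
            node = (op, node, atom())
        return node

    return expr()

# '1 + * 2' is invalid, but we still get a walkable tree:
print(parse(tokenize('1 + * 2')))
# ('*', ('+', ('num', '1'), ('error', '*')), ('num', '2'))
```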
Me: oh cool, this is interesting, I don’t quite understand what exactly that means, let me read the thread to learn more…
The thread:
> Replacing ECCA1 by version with step after the direction change could save something like 1% of the ecca1 bits size. Compiling agnosticized program instead of fixed lane program by ecca1 could save something like 1% as well (just guesses). Build of smaller ECCA1 would shorten binary portion, but it would be hardly seen in the ship size.
> Using agnosticized recipe in the fuse portion would definitely reduce its size. Better cordership seed and better salvo for gpse90 would help…
Dear lord I had no idea there’s this much jargon in the game of life community. Gonna be reading the wiki for hours
And for a related rabbit hole where people actually went all the way to the bottom, there's of course the full implementation of Tetris in GoL, which was nerd-sniped into existence by a CodeGolf challenge
Sometimes you see something that makes you wonder how it is that you get to exist in the same world as people with the drive and intelligence to do truly awesome (in the original sense) things like this. I am proud of myself when the compiler works on the first try.
I think it's awesome that they can do this amazing fun esoteric stuff, but at the same time a small part of me thinks maybe they need to be doing something more meaningful in the real world.
I wonder, what would that be, that thing that is more meaningful?
I would make the case that, zoomed out far enough, nothing at all is meaningful, so you might as well make beautiful things, and this is a delightfully beautiful thing.
Once a year or so I find myself on those forums and I'm always astounded how many people there are that dedicate massive amounts of time and brain power to this.
I think it appeals to the same itch that languages like Brainfuck scratch.
There's something exceedingly interesting about how you can model complexity with something extremely simple. Brainfuck is fun because it forces you to think extremely low level, because ultimately it is basically just a raw implementation of a Turing machine. I wouldn't want to write a big program in it, but it is fun to think about how you might express a complicated algorithm with it.
Similarly with CGOL, it is really interesting to see how far you can stretch really simple rules into something really complex.
I've written CGOL dozens of times, it's a common project that I do to "break in" a language I've learned, since it's not completely trivial but it's simple enough to not be frustrating, and I completely understand why math/computability-theory folks find it something to dedicate brain power to.
I have a weird love for Brainfuck. It's a tiny, incredibly simple language that you can write an interpreter for in an hour, it's effectively a super-simple byte code that can easily be extended and used as a compilation target for simple languages.
Honestly, as an educational tool, the only thing wrong with it is the name!
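It really is about an hour's work; here's a quick sketch of a complete interpreter (tape size, 8-bit wrapping cells, and EOF leaving the cell untouched are arbitrary choices here, real implementations differ on all three):
```
import sys

def run(code, stdin=sys.stdin, stdout=sys.stdout):
    # Pre-match brackets so loops are O(1) jumps.
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i

    tape, cell, pc = [0] * 30000, 0, 0
    while pc < len(code):
        c = code[pc]
        if c == '>':
            cell += 1
        elif c == '<':
            cell -= 1
        elif c == '+':
            tape[cell] = (tape[cell] + 1) % 256
        elif c == '-':
            tape[cell] = (tape[cell] - 1) % 256
        elif c == '.':
            stdout.write(chr(tape[cell]))
        elif c == ',':
            ch = stdin.read(1)
            if ch:
                tape[cell] = ord(ch)
        elif c == '[' and tape[cell] == 0:
            pc = jumps[pc]
        elif c == ']' and tape[cell] != 0:
            pc = jumps[pc]
        pc += 1

run('++++++++[>++++++++<-]>+.')  # prints 'A' (8 * 8 + 1 = 65)
```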
One of my favorite calculations of pi is to pick random coordinates in a unit square and count how many of them are in a circle. It's so stupid and so clever at the same time.
This was recreated from memory. I think it is close but I may have a bounding bug.
```
import random

def pi(count):
    inside = 0
    for i in range(count):
        test_x = random.random()
        test_y = random.random()
        if test_x ** 2 + test_y ** 2 < 1:
            inside += 1
    return inside / count * 4  # above is a quarter circle

print(pi(2 ** 30))
```
With the metatopic of this thread being obscure languages, I had some fun squeezing this into some list comprehensions (maybe someone's got an idea of how to keep track of the state within the list):
```
$ cat << EOF > pi.py
state = [0, 0, 2**8, 2**12]; _ = [print(f'\rRun {state.__setitem__(0, state[0] + 1) or state[0]}/{state[3]} | Last \u03c0: {current_pi:.6f} | *Average \u03c0: {(state.__setitem__(1, state[1] + current_pi) or state[1]) / state[0]:.6f}*', end='', flush=True) for current_pi in [(4 * sum([1 for _ in range(state[2]) if __import__("random").random()**2 + __import__("random").random()**2 < 1]) / state[2]) for _ in range(state[3])]]; print()
EOF
$ time python3 pi.py
Run 4096/4096 | Last π: 3.140625 | *Average π: 3.143051*
python3 pi.py 0.41s user 0.01s system 99% cpu 0.429 total
```
Play around with the `2**8` and `2**12` values in the state; they control the number of samples per round and the number of rounds respectively.
I have no idea how I'd be able to pitch this to a university (or even who I could pitch it to), but I would absolutely love to teach a computability course using Brainfuck as the language, just to really show students how low-level logic can be.
I would probably need to find a similar language with a different name though.
You might find mlatu-6[0] interesting: it’s convertible to SKI calculus but concatenative (like Forth) rather than applicative. It’s actually a subset of Mlatu, a language I created for similar reasons, to explore “how low can you go.”
When I was an undergrad at Georgia Tech, one of my intro computer science classes had us implement something in brainfuck. Turns out college kids are quite comfortable with swear words.
Assembly is higher level logic than brainfuck, especially on modern chips. You have built in instructions for arithmetic and conditionals/branches and you can allocate memory and point to it.
You don’t really get any of that with brainfuck. You have a theoretical tape and counters and that’s basically it.
SKI calculus is pretty neat, too. You get no tape, no counters. (But it's not quite as bad to program in as brainfuck, because you can build more ergonomic contraptions to help you along.)
SKI can, of course, be de-optimised a bit further by replacing I with SKK. You are right though that it is relatively simple to go from something that looks like a normal programming language to a pile of S and K combinators. Not the most efficient way to compute, though!
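To make the SKK-for-I trick concrete, here's a toy reducer in Python (nested tuples for application, strings for combinators and free variables; leftmost-outermost order, not remotely efficient):
```
def step(t):
    """One leftmost-outermost reduction step, or None if t is normal."""
    if not isinstance(t, tuple):
        return None
    f, x = t
    if f == 'I':
        return x                            # I x -> x
    if isinstance(f, tuple):
        g, y = f
        if g == 'K':
            return y                        # K y x -> y
        if isinstance(g, tuple) and g[0] == 'S':
            return ((g[1], x), (y, x))      # S a y x -> a x (y x)
    # No top-level redex: try to reduce inside, left subterm first.
    r = step(f)
    if r is not None:
        return (r, x)
    r = step(x)
    if r is not None:
        return (f, r)
    return None

def normalize(t, limit=10000):
    for _ in range(limit):
        r = step(t)
        if r is None:
            return t
        t = r
    raise RuntimeError('no normal form within step limit')

skk = (('S', 'K'), 'K')
print(normalize((skk, 'a')))   # prints 'a': S K K behaves exactly like I
```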
> I've written CGOL dozens of times, it's a common project that I do to "break in" a language I've learned, since it's not completely trivial but it's simple enough to not be frustrating, and I completely understand why math/computability-theory folks find it something to dedicate brain power to.
Writing a naive CGOL is fun and quick. But writing a _fast_ one can get arbitrarily complicated.
Yeah, a million years ago I did that one as a convolution so I could run it on the GPU when I was learning OpenCL. That was my first exposure to “optimizing” CGOL.
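For anyone curious, the convolution formulation is only a few lines with numpy/scipy (assuming scipy is available; wrap-around edges are an arbitrary choice here):
```
import numpy as np
from scipy.signal import convolve2d

# Convolving with this kernel counts each cell's eight neighbours.
KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])

def life_step(grid):
    """One Game of Life generation; grid is a 2-D array of 0s and 1s."""
    neighbours = convolve2d(grid, KERNEL, mode='same', boundary='wrap')
    # Alive next tick: exactly 3 neighbours, or 2 neighbours and already alive.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

grid = np.zeros((16, 16), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1  # a glider
for _ in range(4):
    grid = life_step(grid)
```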
"So did Achilles lose his friend in war, and Homer did no injustice to his grief by writing about it in dactylic hexameters" - Tobias Wolff, Old School
If you find that fascinating, then you'll be blown away by the Wolfram Physics Project. It is basically trying to recreate all of physics from baseline 'graph update' rules, much like the Game of Life. So far no predictions yet, but very interesting.
Wolfram is kind of obsessed with cellular automata, even went and wrote a whole book about them titled "A New Kind of Science". The reception to it was a bit mixed. CA are Turing-complete, so yeah, you can compute anything with them, I'm just not sure that in itself leads to any greater Revealed Truths. Does make for some fun visualizations though.
A New Kind of Science is one of my favorite books; I read the entire thing on an iPod Touch during a dreadful vacation when I was 19 or 20.
It goes well beyond just cellular automata; the thousand pages or so all seem to drive home the same few points:
- "I, Stephen Wolfram, am an unprecedented genius" (not my favorite part of the book)
- Simple rules lead to complexity when iterated upon
- The invention of the field of computation is as big and important an invention as the field of mathematics
The last one is less explicit, but it's what I took away from it. Computation is of course part of mathematics, but it is a kind of "live" mathematics. Executable mathematics.
Super cool book and absolutely worth reading if you're into this kind of thing.
I would give the same review, without seeing any of this as a positive. NKS was bloviating, grandiose, repetitive, and shallow. The fact that Wolfram himself didn’t show that CA were Turing complete, when most theoretical computer scientists would say “it’s obvious, and not that interesting”, kinda disproves his whole point about him being an underappreciated genius. Shrug.
That CA in general were Turing complete is 'obvious'. What was novel is that Wolfram's employee (Matthew Cook) proved something like Turing completeness for a 1d CA (Rule 110) with two states and only three cells total in the neighbourhood.
I say something-like-Turing completeness, because it requires a very specially prepared tape to work that makes it a bit borderline. (But please look it up properly, this is all from memory.)
Having said all that, the result is a nice optimisation / upper bound on how little you need in terms of CA to get Turing completeness, but I agree that philosophically nothing much changes compared to having to use a slightly more complicated CA to get to Turing completeness.
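For reference, that class of CA (one dimension, two states, three-cell neighbourhood) takes a dozen lines to implement; the bits of the rule number are the lookup table. A sketch, with Rule 110 as the default:
```
def eca_step(cells, rule=110):
    """One update of an elementary CA on a circular row of 0/1 cells."""
    n = len(cells)
    out = []
    for i in range(n):
        left, mid, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (mid << 1) | right   # neighbourhood as 0..7
        out.append((rule >> idx) & 1)            # look up that bit of the rule
    return out

row = [0] * 60 + [1]   # single live cell at the right edge
for _ in range(25):
    print(''.join('#' if c else '.' for c in row))
    row = eca_step(row)
```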
The question really ultimately resolves to whether the universe can be quantized at all levels or whether it is analog. If it is quantized I demand my 5 minutes with god, because I would see that as proof of all of this being a simulation. My lack of belief in such a being makes me hope that it is analog.
Computation does not necessarily need to be quantized and discrete; there are fully continuous models of computation, like ODEs or continuous cellular automata.
That's true, but we already know that a bunch of stuff about the universe is quantized. The question is whether that holds true for everything. And all 'fully continuous models of computation' in the end rely on a representation that is a quantized approximation of an ideal. In other words: any practical implementation of such a model that does not end up being a noise generator or an oscillator and that can be used for reliable computation is - as far as I know - based on some quantized model, and then there are still the cells themselves (arguably quanta) and their location (usually on a grid, but you could use a continuous representation for that as well). Now, 23 or 52 bits (depending on the size of the float representation you use for the 'continuous' values) is a lot, but it is not actually continuous. That's an analog concept and you can't really implement that concept with a fidelity high enough on a digital computer.
You could do it on an analog computer but then you'd be into the noise very quickly.
In theory you can, but in practice this is super hard to do.
If your underlying system is linear and stable, you can pick any arbitrary precision you are interested in and compute all future behaviour to that precision on a digital computer.
Btw, quantum mechanics is both linear and stable, and even deterministic. Admittedly it's a bit of a mystery how the observed chaotic nature of e.g. Newtonian billiard balls emerges from quantum mechanics.
'Stable' in this case means that small perturbations in the input only lead to small perturbations in the output. You can insert your favourite epsilon-delta formalisation of that concept, if you wish.
To get back to the meat of your comment:
You can simulate such a stable system 'lazily'. I.e. you simulate it with any given fixed precision at first, and (only) when someone zooms in to have a closer look at a specific part, you increase the precision of the numbers in your simulation. (Thanks to the finite speed of light, you might even get away with only re-simulating that part of your system with higher fidelity. But I'm not quite sure.)
Remember those fractal explorers like Fractint that used to be all the rage: they were digital at heart (obviously), but you could zoom in arbitrarily as if they had infinite continuous precision.
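A toy version of that lazy scheme with Python's decimal module (the contraction map and the precisions are picked arbitrarily): run the trajectory coarsely first, then rerun the same trajectory with more digits when someone 'zooms in'; because the map is stable, the leading digits agree.
```
from decimal import Decimal, getcontext

def trajectory(x0, steps, digits):
    getcontext().prec = digits
    x = Decimal(x0)
    a, b = Decimal('0.5'), Decimal(1)   # x -> 0.5*x + 1, a stable contraction
    for _ in range(steps):
        x = a * x + b
    return x

coarse = trajectory('3', 100, 10)   # quick, low-precision first pass
fine = trajectory('3', 100, 50)     # "zoomed in": same run, 50 digits
print(coarse)
print(fine)                          # leading digits match the coarse run
```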
Sure, but that 'If' isn't true for all but the simplest analog systems. Non-linearities are present in the most unexpected places and just about every system can be made to oscillate.
That's the whole reason digital won out: not because we can't make analog computers but because it is impossible to make analog computers beyond a certain level of complexity if you want deterministic behavior. Of course with LLMs we're throwing all of that gain overboard again but the basic premise still holds: if you don't quantize you drown in an accumulation of noise.
> Sure, but that 'If' isn't true for all but the simplest analog systems.
Quantum mechanics is linear and stable. Quantum mechanics is behind all systems (analog or otherwise), unless they become big enough that gravity becomes important.
> That's the whole reason digital won out: not because we can't make analog computers but because it is impossible to make analog computers beyond a certain level of complexity if you want deterministic behavior.
It's more to do with precision: analog computers have tolerances. It's easier and cheaper to get to high precision with digital computers. Digital computers are also much easier to make programmable. And in the case of analog vs digital electronic computers: digital uses less energy than analog.
For all we know, it could be distinct layers all the way down to infinity. Each time you peel one, something completely different comes up. Never truly knowable. The universe has thrown more than a few hints that our obsession with precision and certainty could be seen cosmically as "silly".
In our current algorithm-obsessed era, this is reminiscent of procedural generation (but down/up the scale of complexity, not "No Man's Sky"-style PG).
However, we also have a long track record of seeing the world as nails for our latest hammer. The idea of an algorithm, or even computation in general, could be in reality conceptually closer to "pointy stone tool" than "ultimate substrate".
> For all we know, it could be distinct layers all the way down to infinity. Each time you peel one, something completely different comes up. Never truly knowable. The universe has thrown more than a few hints that our obsession with precision and certainty could be seen cosmically as "silly".
That's a tempting thing to say, but quantum mechanics suggests that we don't have infinite layers at the bottom. Mostly thermodynamic arguments combined with quantum mechanics. See e.g. also https://en.wikipedia.org/wiki/Bekenstein_bound about the amount of information that can even in theory be contained in a specific volume of spacetime.
That's because it's not "game of life jargon", it's "cellular automata" jargon, which is a field of math that comes along with a bunch of math jargon from related fields.
I searched several of these terms and they are all specifically jargon of Game of Life enthusiasts (i.e. search results are all on fansites related to the Game of Life), not general cellular automata jargon.
conwaylife.com isn't just for GoL, although that is the main subject; there is an Other Cellular Automata forum.
Also, it's not really a fansite; it's a bit more academic than what would be considered a "fansite".
The user is an entity from the DB, where the email field is nullable, which makes perfect sense. The input component only accepts a string for the defaultValue prop. So you coalesce the possible null to a string.
I’ll never understand this. A company puts in an immense amount of time, money, and effort into creating a product, and because it doesn’t work the way you want it to, it’s an assault on your freedom. Whaaa?!?! You can see things and ask things and learn things without using an AI company’s product, you know, like interacting with real people in the real world.
That's what they said about cars at first. Or credit cards. The question to ask is: will the world we make in the wake of this invention allow us to live without it? And if the answer is no, then it's all the more important to have access to truly free and uncensored AIs. How did we learn things before AI? We googled them. How's that working out in the age of AI? AI both poisons our search results and gets integrated with them. There are large interests in making sure everything we see, hear, and think is prevetted by some approved AI. That's not a future I want to live in, but the signs are there.
This is definitely a completely different thing, but for your problem, Qwen Image-Edit is a really good model that you can either download and run on your own hardware, or on an online service like civit.ai
Some good ones:
- United States v. One Solid Gold Object in Form of a Rooster
- United States v. 11 1/4 Dozen Packages of Articles Labeled in Part Mrs. Moffat's Shoo-Fly Powders for Drunkenness
- South Dakota v. Fifteen Impounded Cats
I've spent a lot of time in forfeiture court and it's always a chuckle to hear these cases get called. Especially when the defendants' lawyer stands up: "Yes, your honor, I represent the cats."
I've always wanted the cat, or the Honda Civic, or whatever, to ask to represent itself. I guess if there were a forfeiture against an Nvidia Spark with a local LLM, it might be able to give it a worthy try.
This is really cool, but in my experience RA is typically used for less popular “underground” (in very large quotes) events, warehouse parties, etc. It might be different in Europe or in other cities here in the States. The data for such events is interesting in its own right, but it’s probably not quite representative of the most popular clubs and/or artists.
Perhaps doing a similar thing with Ticketmaster/tixr/all the other ticket apps and combining them together could produce more representative data, of course you’d still have the problem of having to filter out actual clubs.
On an unrelated note, what in God's name is that color scheme on the visualizations? xD
We have hundreds of thousands of TIFF files where I work which are scans of questionnaires filled out by clinical trial participants. The one annoying thing is that web browsers don’t natively display them. I did some incredibly inefficient JavaScript bs to decode the pixel data, plop it in a canvas, get a PNG data url from it, and set that as the src for an img element xD (why not just display the canvas? because I was too lazy to manually handle resizes…) good times
In the hard, logically rigorous sense of the word, yes they are deterministic. Computers are deterministic machines. Everything that runs on a computer is deterministic. If that wasn’t the case, computers wouldn’t work. Of course I am considering the idealized version of a computer that is immune to environmental disturbances (a stray cosmic ray striking just the right spot and flipping a bit, somebody yanking out a RAM card, etc etc).
LLMs are computation, they are very complex, but they are deterministic. If you run one on the same device, in the same state, with exactly the same input parameters multiple times, you will always get the same result. This is the case for every possible program. Most of the time, we don’t run them with exactly the same input parameters, or we run them on different devices, or some part of the state of the system has changed between runs, which could all potentially result in a different outcome (which, incidentally, is also the case for every possible program).
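The same point in miniature, with the stdlib PRNG standing in for an LLM's sampling step: pin down the full state (here, just the seed) and the 'randomness' is perfectly reproducible:
```
import random

def sample_tokens(seed, n=10):
    rng = random.Random(seed)   # the entire "state" of this toy sampler
    vocab = ['the', 'cat', 'sat', 'on', 'mat']
    return [rng.choice(vocab) for _ in range(n)]

# Same state and inputs, same output, every time, on every run:
assert sample_tokens(42) == sample_tokens(42)
```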
If the state of the system is the same, the scheduler will execute the same way. Usually, the state of the system is different between runs. But yeah that’s why I qualified it with the hard, logically rigorous sense of the word.
I do think that might be the only thing they turn out to be any good at, and only then because software is relatively easy to test and iterate on. But does that mean writing code is what the models are "for"? They're marketed as being good for a lot more than coding.
The tool did it because this is what it was designed and trained to do, at great expense and effort, but somewhat less than successfully.
You can't have it both ways: the tool can't be "intelligent" yet too stupid to understand its own limitations.
If people ask the wrong questions, the "intelligent" response would be, "Sorry, I can't do that".
Maybe the problem here is that this "intelligent" tool is really as dumb as a rock. And it's only a matter of time until lawyers argue this point to a jury in court.