Hacker News | hakuseki's comments

Leaving aside the question of whether the universe is discrete or continuous, a simulation would still have lower "resolution" than the real world, and some information can be lost with each time step. To compensate for this, it can be helpful to have simulation step t+1 depend on both the step t and step t-1 states, even if this dependency seems "unphysical."
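One concrete example of a scheme where step t+1 depends on both the step t and step t-1 states is Verlet integration. A minimal Python sketch (constant gravity and step count are made-up illustrative numbers):

```python
# Verlet integration: position at step t+1 is computed from the
# positions at BOTH step t and step t-1, not just step t.
# Illustrative setup: unit-mass particle falling under constant gravity.

def verlet_step(x_t, x_prev, accel, dt):
    """Advance one step: x_{t+1} = 2*x_t - x_{t-1} + a*dt^2."""
    return 2.0 * x_t - x_prev + accel * dt * dt

dt = 0.01
g = -9.8
x_prev = 0.0                          # position at t-1 (start at rest)
x_t = x_prev + 0.5 * g * dt * dt      # bootstrap: position one step later

for _ in range(100):                  # advance to t = 1.01 s
    x_t, x_prev = verlet_step(x_t, x_prev, g, dt), x_t
```

For constant acceleration this two-step update reproduces the exact quadratic trajectory; more generally it tends to conserve energy better than a naive one-step Euler update.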


The article is about (position, move) pairs. Why not call these objects "steps"?


Yeah, this is a good shot at using existing verbiage, better than the candidates I came up with at least. It's still not entirely self-descriptive, and it overlaps with usage in other parts of the codebase, like in data processing and user onboarding, but maybe that's a fine trade-off to make in order to use a normal word. I'd be equally fine with them being called "steps", but now I'm attached to my Keps :D


In what sense is looking at a screen a failure to prepare for the world that adults navigate? Adults also look at screens.


It’s a huge missed opportunity for the developing brain to learn about the world: its sights, its sounds, its interactions, its physics, its rules. Everything.


> However, even using extra hardware wasn't enough in this case. So they pre-calculated lookup tables for sine, cosine, tangent etc. for every angle at the necessary precision.

Is this really the order of events? I imagine the pre-calculated route is what you'd try first, and only go for extra hardware if that failed somehow.


Lookup tables were commonplace in 80s-90s graphics programming. They should have been tried before any custom hardware solution was considered.
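For anyone who hasn't seen the technique, a sketch of the kind of table this refers to (purely illustrative; not the code from the article, and real 80s code would use fixed-point integers rather than floats):

```python
import math

# 80s/90s-style trig lookup table: precompute sine for 256 angle steps
# per full circle, then index into the table instead of calling sin().
TABLE_SIZE = 256
SIN_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def fast_sin(angle):
    """Approximate sin(angle) by the nearest table entry (angle in radians)."""
    idx = round(angle / (2 * math.pi) * TABLE_SIZE) % TABLE_SIZE
    return SIN_TABLE[idx]
```

In period code the angle itself was usually an 8- or 16-bit integer, so "looking up the sine" was a single array index with no division at all; the float arithmetic above is just for readability.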


Are you planning to decelerate by crashing into the planet at relativistic speed?


I guess you mean k log k, not n log n.


Yep, thanks. "n log n" is how one always says it; the n is like a local variable scoped to the phrase. Oops.


What's so difficult about optical links from deep space compared to low earth orbit, where 200 gigabit throughput has been achieved? Is it just the attenuation?

I would have imagined that we could upgrade the communication equipment on a space probe much more easily than we could add fuel for a return trip.


> What's so difficult about optical links from deep space compared to low earth orbit, where 200 gigabit throughput has been achieved? Is it just the attenuation?

Yup; 300 km vs 600,000,000 km. Less than a trillionth of the power comes through for the same emitted power and apertures.

So, you end up having to make apertures much larger and point much, much more precisely. You can also increase laser power some, but that's a small part of your solution.

(Or, of course, you can reduce speeds to have more energy per bit).
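The trillionth figure falls straight out of the inverse-square law; a back-of-envelope check using the distances above (a sketch, holding transmitter, beam divergence, and receive aperture fixed):

```python
# Received power falls off with the square of distance for the same
# transmitter and apertures, so the LEO-to-deep-space penalty is
# (d_leo / d_deep)^2 in received power.

d_leo = 300e3     # LEO link distance: 300 km, in metres
d_deep = 600e9    # deep-space distance: 600,000,000 km, in metres

power_ratio = (d_leo / d_deep) ** 2
# (5e-7)^2 = 2.5e-13, i.e. less than a trillionth of the power
```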

I don't think anyone is serious about shipping back physical data from deep space, but the station wagons full of tape thought experiment is always fun.


I am semi-serious. But it's more about a periodic backup of an on-site science station.

An on-site (orbital) station could run all the models, algorithms, and analysis you want on the gobs of imagery and spectral maps / cubes you can gather in real time.

Analysis is an excellent compression algorithm. But, you'd want the raw data eventually, and that means either trickling back 0.1%, or waiting a few years for the full set.

https://josh.vanderhook.info/publications.html#josr2022sspe


Hmm

What’s your sense about SETI or the Fermi paradox, if a signal becomes so vastly diluted just within our solar system?

I’m sure the SETI people have thought about this and made various calculations, but with the inverse square law and the vastness of space, maybe “needle in a haystack” is optimistic.

Is it the wrong model to think that anything but maybe a galaxy-scale civilization is just going to have its signals more or less totally dissolved into seemingly random cosmic fluctuations, relative to our sensors/receivers at least?


Maybe. Recovering data on interstellar distances is hard.

Integrating a long time to see if there's a signal there above background levels is maybe not so hard (especially if it was intended for detection in this scenario).

The big issue for data recovery is energy per symbol. If you can integrate for hours, that can still be a lot of "special photons" (whether they're on a weird radio frequency or light wavelength).
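An order-of-magnitude illustration of why long integration helps (every number below is invented for the sketch, not a real link budget):

```python
# Energy per symbol: even a vanishingly faint received power adds up to
# a usable number of photons if you can integrate for hours.

H = 6.626e-34                 # Planck constant, J*s
C = 3.0e8                     # speed of light, m/s

wavelength = 1.55e-6          # a common optical-comms wavelength, m
photon_energy = H * C / wavelength    # joules per photon, ~1.3e-19 J

received_power = 1e-18        # watts: an absurdly faint signal (assumed)
integration_time = 3600.0     # integrate for one hour, in seconds

photons_collected = received_power * integration_time / photon_energy
# tens of thousands of "special photons" from an attowatt of signal
```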


That is not strictly true. The weather at time t0 may affect non-weather phenomena at time t1 (e.g. traffic), which in turn may affect weather at time t2.

Furthermore, a predictive model is not working with a complete picture of the weather, but rather some limited-resolution measurements. So, even ignoring non-weather, there may be local weather phenomena detected at time t0, escaping detection at time t1, but still affecting weather at time t2.


> there is nothing in principle stopping us from running this machine for BB(748) steps

How would we compute the value of BB(748)?


Computing BB(748) would be best, but if we could get an upper-bound estimate that's reasonably close, that would suffice.


Do note that any function f(n) that is always (or even just eventually always) greater than BB(n), is uncomputable, for very similar reasons.
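The reason is the same reduction as for BB itself: a computable upper bound would decide halting. A sketch, where both `f` (the supposed bound) and `simulate` (a step-limited machine simulator) are hypothetical stand-ins, not real APIs:

```python
# If a computable f with f(n) >= BB(n) existed, this would decide the
# halting problem, which is impossible; hence no such computable f exists.

def would_halt(machine, n_states, f, simulate):
    """Decide whether an n_states-state machine halts, given a bound f >= BB."""
    # BB(n) is the longest any halting n-state machine runs, so a machine
    # still running after f(n_states) >= BB(n_states) steps never halts.
    return simulate(machine, steps=f(n_states)) == "halted"
```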


How can you come up with an "upper bound estimate" without having some idea of the structure of the computation of the specific 748 state Turing Machine in question?

Imagine you had an oracle telling you BB(748) excluding this machine. How do you get an upper bound on the runtime of this machine?

(There is an answer: this is surely not the optimal construction, and so such an oracle would give you an answer for some smaller ZFC machine which you could likely use to extrapolate a value for this machine. However, eventually you'll find a minimal state ZFC machine and that won't work anymore.)


Reaching a "reasonably close" upper bound estimate wouldn't provide proof ... probability maybe, but not proof.


If we've proved that a number N is an upper bound on BB(748), then running for N steps without halting means the machine has also run BB(748) steps without halting; and since BB(748) is the maximum halting time for any 748-state machine, it therefore never halts.


How were these numbers arrived at?


I think the numbers are taken from https://en.wikipedia.org/wiki/Game_complexity

We discuss the numbers for Go in the introduction of our paper https://matthieuw.github.io/go-games-number/AGoogolplexOfGoG...


I just wanted to say, I stumbled upon your website a few years ago through the Tromp-Taylor rules of Go and found the things you do impressive and inspiring. It’s a nice surprise to see you commenting here.


So to summarize, the game tree complexity is estimated by estimating the branching factor and the game length, and raising the former to the power of the latter.

I find it slightly odd that the game length is calibrated to "reasonable" games but the branching factor is not.

If the goal is to estimate the number of possible games of go, then the calculation would be dominated by the number of long games rather than the number of short games, and very long games are possible.

If the goal is to estimate the number of "reasonable" games of go, then the branching factor should also be much smaller, as most possible moves are not reasonable. Perhaps the logarithm of the branching factor could be estimated as the entropy of some policy model such as that of KataGo.
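To make the contrast concrete, a rough sketch of the b^d estimate under each assumption (the branching factor, game length, and policy entropy below are illustrative assumptions, not measured values):

```python
import math

# Game tree complexity ~ b^d: branching factor raised to game length.
# Compare the naive estimate with a "reasonable-game" version where the
# effective branching factor is the perplexity of a policy model.

b = 250            # often-quoted average branching factor for Go (assumed)
d = 150            # often-quoted typical game length (assumed)

log10_games = d * math.log10(b)           # log10 of b^d, ~360

policy_entropy_bits = 2.0                 # hypothetical policy entropy per move
b_effective = 2 ** policy_entropy_bits    # perplexity: effective branching, 4
log10_reasonable = d * math.log10(b_effective)   # ~90: hundreds of orders smaller
```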

P.S. I am happy to have received a reply from the mighty Tromp!

