Inspired by this post & TF comment, I tried symbolic regression [0].
Basically, it uses a genetic algorithm to find a formula that matches known input and output vectors with minimal loss.
I tried to force it to use the pi constant but was unable to.
I don't have much experience with this library, but I'm sure with more tweaks you'll get the right result.
import numpy as np
from pysr import PySRRegressor

# FizzBuzz-style target: 1 if divisible by 3, 2 if by 5, 3 if by both
def f(n):
    if n % 15 == 0:
        return 3
    elif n % 5 == 0:
        return 2
    elif n % 3 == 0:
        return 1
    return 0

n = 500
X = np.array(range(1, n)).reshape(-1, 1)
Y = np.array([f(i) for i in range(1, n)]).reshape(-1, 1)

model = PySRRegressor(
    maxsize=25,
    niterations=200,  # < Increase me for better results
    binary_operators=["+", "*"],
    unary_operators=["cos", "sin", "exp"],
    elementwise_loss="loss(prediction, target) = (prediction - target)^2",
)
model.fit(X, Y)
The best candidate, with complexity 22 and loss 0.000015800686:

((cos((x0 + x0) * 1.0471969) * 0.66784626) + ((cos(sin(x0 * 0.628323) * -4.0887628) + 0.06374673) * 1.1508249)) + 1.1086457

The first term is close to 2/3 * cos(2*pi*n/3), which features in the actual formula in the article (2 * 1.0471969 ≈ 2*pi/3, and 0.66784626 ≈ 2/3). The constant doesn't compare to 11/15, though.
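For reference, the exact closed form follows from the standard cosine divisibility indicators, 1[3|n] = (1 + 2cos(2*pi*n/3))/3 and 1[5|n] = (1 + 2cos(2*pi*n/5) + 2cos(4*pi*n/5))/5, which is where both the 2/3 coefficient and the 11/15 constant come from. A quick sanity check against f above:

import math

def g(n):
    return (11/15
            + (2/3) * math.cos(2 * math.pi * n / 3)
            + (4/5) * (math.cos(2 * math.pi * n / 5) + math.cos(4 * math.pi * n / 5)))

# f(n) = 1[3|n] + 2 * 1[5|n], so g should match it up to float error
assert all(abs(f(i) - g(i)) < 1e-9 for i in range(1, 500))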
I didn't know community/public whitelists existed, nor of any browser extension that uses whitelists and blocks all other connections by default, like uMatrix does. Do you have any examples?
A question for strcat / other Graphene developers:
Can you clarify what one can expect from legacy extended support? Will old devices get any more updates? For how long, how often? Is it just security patches, etc.?
Not a dev, but legacy extended support means that there are no more feature updates, i.e., no new Android versions or QPRs, only AOSP security backports.
Obviously there aren't any firmware patches either, as these devices are unsupported by the manufacturer, who is responsible for delivering any changes to the firmware.
These so-called "hardening scripts" cause a lot of issues and volunteers have to waste their time helping clueless users who copied them without understanding what they do.
I see you're quite active regarding FF. Nevertheless, I'm not sure what you're talking about: I've installed Betterfox ESR on Windows, Linux, and Android with no apparent issues.
In general, at high enough depth, the ocean temperature is constant [0].
At high latitudes, like the Southern Ocean, it's constant at any depth.
I think the surface ambient temperature there is below (cooler than) the temperature at which water density is highest, which is around 4 degrees C for pure water (below that point water has a negative thermal expansion coefficient, so further cooling makes it expand and float!). For the Southern Ocean's salinity of 33-34, the temperature of maximum density is below 0 C [1], but the ambient temperature might still be lower, which would mean the colder water is lighter.
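As a rough numeric illustration of that density maximum (using a common quadratic approximation for pure water near 4 C; the coefficients are approximate, not a real equation of state):

def rho_pure(T):
    # approximate density of pure water in kg/m^3, roughly valid for 0-10 C
    return 999.97 * (1 - 7e-6 * (T - 3.98) ** 2)

print(rho_pure(0.0))   # ~999.86: colder than the maximum, yet less dense -> floats
print(rho_pure(3.98))  # ~999.97: the density maximum
print(rho_pure(8.0))   # ~999.86: warmer water is also less dense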
I skimmed the result a couple of weeks ago, and if I remember correctly it states that, among all machines that take t time to accomplish something, there is one that uses sqrt(t) memory. So it's a worst-case bound, not average nor amortized [not sure those even make sense in the space where this problem lies] (you take the least memory-hungry machine).
There must be more constraints. What if the problem is copying a string? Or does it only hold for Turing tape machines, where the head has to backtrack during the copy, increasing the number of steps?
That's a good example! So the sqrt(t) space in the result must refer to space beyond what's needed to write down the output: basically a working-memory scratchpad. In that case, a string-copying function could use just a constant amount of scratch space.
I mean, for all I know, the result may have been proved in the model of decision problems where the output is always one bit. But I'd guess that it generalizes just fine to the case where you have to write longer outputs.
However, your question does make me wonder if the result still applies in the RAM model, where you can move to any memory location in constant time. My theory knowledge is getting rusty...
Update after watching the YouTube video: it was already known that a time-t function can be computed in sqrt(t) space on a single-tape Turing machine. It's exactly like @cma said: a single-tape Turing machine wastes a lot of time moving the head back and forth. Williams' result shows sqrt(t) space for multi-tape Turing machines. Those are much more time-efficient with regard to head movement, so until Williams' proof it was not believed possible to make any algorithm use only sqrt(t) space.
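To put a number on that head-movement overhead: a toy count for copying an n-cell string to a region n cells away on a single tape, assuming one full round trip per symbol (a sketch of the cost, not a real Turing machine simulator):

def single_tape_copy_moves(n):
    moves = 0
    for i in range(n):
        moves += 2 * n  # walk from source cell i to target cell n+i and back
    return moves

print(single_tape_copy_moves(100))  # 20000 head moves, i.e. 2*n^2
# A two-tape machine does the same copy in O(n) steps.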
[0] https://github.com/MilesCranmer/PySR