I don't necessarily understand what Gisin is up to (it might help if somewhere he has an explicit delta between his axioms and K's?) but I might indeed have a propensity to find it interesting.
Schlep is the startup's equivalent of quantum tunnelling: you can't hope to out-schlep the established players in their own potential wells, but small amounts of schlep judiciously applied might get you through classical barriers, into a less-contested well?
Uhh, for that you might have to bundle it up together with the Greene pilgrimage.. don't think he published his axioms anywhere [but you could also invite one of his students to a meal..]
Update: the review that set me on that future trail was this
OK, the bits [sorry] that I read don't seem to go any deeper than the "could it be" above.
My recollection of the basic problem in reconciling QM with GR is that of simultaneity: how do you manage to have an "open" future when various potential observers' future-fronts are all angled to each other?
I'd guess that the hyperbolic nature of light cones means [hands waving madly enough to fly me to Africa] that there's enough wiggle room such that any conflict-resolving process occurring at space-like separations has finished by the time the events are in a past time-like cone; we don't even necessarily need a timelike source of random bits, as they are always naturally impinging from the rest of the universe.
What am I getting wrong with that model?
[Subtle gods could then leverage chaos to work in ways undetectable by their creations?]
What you may be getting right might yet underlie some resolution of at least a couple of the Hn’s :)
Let's just say, compressing an obscene tonnage of number-theoretical handwav'ry: folks usually prefer one direction of SR <=> QM.., so whatever is "extra" in GR maps to the "extra" bits (sorry) in QFT (that JvN had almost no inkling of, and that Feynman had just a wee bit (sorry))
[looking at the arxiv 16xx review again, it does seem that Gisin had the right motivations in 199x, but got mostly distracted (not sure about students).. I'd have thought that "heterodyne detection" would've set you off]
indeed! sorry, I'd been looking for a better entry into Gisin that might suit my limited (I have MTW, but have not yet worked through it, and my QM knowledge is also at an SR-level — when reading RPF's pop QED lectures I could see integrating e^ix terms instead of his little arrows, but that's about it) background, so I hadn't yet peeked into 16xx vintage Gisin.
after a short read, it looks like PCs went out with the Convention: those who weren't already without their heads by 4 Brumaire IV were without their jobs?
EDIT: there also seem to have been plenty of zampolits in the VMF.
renominative only? Zombified PCs overcame the (thermidorian/stalinist) reaction? (cf. Lannes, reading like a serial-founder story; compare Villeneuve, reading like a serial bigcorp CxO)
PoWaBu was a dead end anyway — now I'm much more interested in the overlap between 鮑羅廷 (MMG) and PMWL (1923-1925?); they would've had at least two languages in common.
Thanks for MSN; I wonder why he hasn't been publishing with Springer, where he'd be easier to find?
hmm. I'm not sure where MSN is going with all the mechanism to handle partiality.
JUNK makes everything total, so composition just works, and refinement doesn't need to take domains into account; the following:
f \refines g <=> f`x ~ g`x NB. (~) is (<=) for bunches
is true for ∀x, not just implied when x in dom`g.
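To make that concrete, here's a toy Python model of the rule (the encoding is mine, not JUNK's: a finite universe, bunches as frozensets, Nuts as the whole universe):

    UNIVERSE = frozenset(range(8))
    NUTS = UNIVERSE                      # the unconstrained bunch: includes every value

    def bunch_fn(table):
        # totalise a dict-encoded partial function: undefined points yield Nuts
        return lambda x: table.get(x, NUTS)

    def refines(f, g):
        # f \refines g <=> f`x ~ g`x: f's bunch sits inside g's at every x
        return all(f(x) <= g(x) for x in UNIVERSE)   # <= is inclusion on frozensets

    g = bunch_fn({0: frozenset({1, 2})})                   # at 0, deliver 1 or 2; elsewhere Nuts
    f = bunch_fn({0: frozenset({1}), 3: frozenset({4})})   # pins a value at 0, defined at 3 too

    assert refines(f, g)       # holds at every x, with no side condition on dom`g
    assert not refines(g, f)   # g is the looser of the two at 0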
On the other side, for program correctness, I'd always thought the whole point of Hoare triples was that in {p}f{q};{q}g{r}, {p} is chosen (wp or wlp) explicitly so that {q} is never not true after f ... which in turn ensures {r}.
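For instance (predicates as sets of states, commands as state functions, wp by brute force; a throwaway encoding of my own, not anyone's library):

    STATES = set(range(16))

    def wp(prog, post):
        # weakest precondition: exactly the states prog maps into post
        return {s for s in STATES if prog(s) in post}

    def hoare(pre, prog, post):
        # {pre} prog {post} holds iff pre is at least as strong as wp(prog, post)
        return pre <= wp(prog, post)

    f = lambda s: s + 1
    g = lambda s: 2 * s
    r = set(range(10))

    q = wp(g, r)   # chosen so {q} g {r} holds by construction
    p = wp(f, q)   # ...and so that it's never not {q} after f

    assert hoare(p, f, q) and hoare(q, g, r)
    assert hoare(p, lambda s: g(f(s)), r)   # hence {p} f;g {r}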
Very far (tho I have some other, derivative-like, concepts, but much fuzzier)...
"Refinement" in this case is a CS term of art, sorry: to say that "f refines g" is to say that f's domain[0] includes g's domain, and each fibre of g^-1 includes the equivalent fibre of f^-1, so anywhere in a computation you had a g, it could safely (although not always lively![1]) be replaced with f. In particular, programs as deterministic implementations refine their indeterministic specifications.
(the best specification[2] is the closest to chaos that one can get that still does the right thing — which I guess loops back to leadership?)
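If it helps, the domains-and-fibres version transcribes directly once you model (indeterministic) programs as relations; a toy Python encoding of my own, with invented example relations:

    def dom(rel):
        return {x for x, _ in rel}

    def fibre(rel, x):
        # the fibre of rel^-1 over x: every output rel relates to x
        return {y for x2, y in rel if x2 == x}

    def refines(f, g):
        # dom(f) includes dom(g), and within dom(g) each f-fibre sits inside g's
        return dom(g) <= dom(f) and all(fibre(f, x) <= fibre(g, x) for x in dom(g))

    g = {('req', 'ack'), ('req', 'nack')}    # indeterministic spec
    f = {('req', 'ack'), ('ping', 'pong')}   # deterministic imp, on a wider domain

    assert refines(f, g) and not refines(g, f)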
The optimiser's (pedantry: we really should call it an "amelioriser") job is to reverse one of those arrows: when the programmer writes f, the optimiser tries to discern the comm... sorry, the programmer's intent to determine the spec h which coarsens f, and then substitutes a g which also refines h: f <- h -> g.
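In the bunch encoding from before (restated so it stands alone; the spec "any y >= x will do" is invented for illustration), that triangle looks like:

    U = frozenset(range(8))

    def refines(a, b):
        return all(a(x) <= b(x) for x in U)

    h = lambda x: frozenset(y for y in U if y >= x)   # discerned intent: any y >= x will do
    f = lambda x: frozenset({x})                      # what the programmer wrote
    g = lambda x: frozenset({max(U)})                 # a cheaper sibling the amelioriser may substitute

    assert refines(f, h) and refines(g, h)            # both sit under the same spec h
    assert not refines(f, g) and not refines(g, f)    # ...but neither refines the other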
[0] here JUNK simplifies by using an appropriate lattice: as Nuts (the undefined/unconstrained value) includes all defined values (read ~ as "is included by"), we don't need to speak separately of domains: f \refines g <=> f`x ~ g`x
[1] there is a term of art from our army which refers to the art of making oneself scarce when tasks are being handed out; similarly, in CS there is a distinction between "safety" and "liveness": nothing is always a safe, albeit unproductive, thing to do.
[2] on the dual end of the lattice, in general there will be many programs in the neighbourhood of the maximally determinate that still make progress. This is why, if we have a "science" at the spec end, when it comes to imps we instead have an "art" of hacking or a "discipline" of engineering.
yes, msn did; I hadn't found that yet when I wrote it, because I'd been expecting he'd be publishing in the same journals as, say, your fellow Ithacan Dexter Kozen.