When someone else runs the server you use and your account is tied to it, you are at the whim of the server owner and cannot migrate your identity.


These days programming languages actually are used for expressing proofs, and they can automatically check them too. For example, the Coq theorem proving language.

https://en.wikipedia.org/wiki/Coq

Some would argue that writing proofs as code IS less error-prone than writing them the traditional way.
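
For a flavor of what this looks like, here are two trivial machine-checked proofs in Lean, a prover in the same family as Coq (Coq's syntax differs, but the idea is identical):

    -- The type checker verifies both of these mechanically.
    theorem two_plus_two : 2 + 2 = 4 := rfl

    -- Reusing a library theorem as a proof term.
    theorem add_comm' (a b : Nat) : a + b = b + a := Nat.add_comm a b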


I would still trust a mathematical argument I can understand over the output of some automated theorem prover.

Of course these aren't mutually exclusive, provided the theorem prover is simple enough to understand.


HN said bitcoin was a bubble when it was at $10. HN said bitcoin was a bubble when it hit $500. HN said bitcoin was a bubble when it hit $4000. HN said bitcoin was a bubble when it hit $10,000.

The issue with knowing something is a bubble is that unless you know when the bubble is going to pop, it's nearly useless knowledge.

Bitcoin has demonstrated that the end of the bubble is NOT obvious and that there has been plenty of money to be made. My recommendation (and what I did) is to buy some amount of bitcoin on the off chance that it's worth a million dollars in a few years. Because bitcoin is a deflationary system, there is at least a good reason to think it will rise in value forever.


I just sold a domain for $500. If I put that into bitcoin, do you think I could easily sell it when it's worth $1,500? I don't want to buy it and be forced to keep it.

How can it rise in value 'forever'? At some point, government regulation or a major scandal is going to affect the market, isn't it?


Bitcoin is deflationary, with a cap on the number of units. If there is economic growth, the value of each bitcoin will increase. Every current conventional currency introduces more monetary units as the economy grows.
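
A toy illustration of that claim, with made-up numbers (fixed supply of units spread over growing output):

    # Toy numbers: fixed supply of units vs. growing economic output.
    # If the same output must be priced in a fixed number of units,
    # each unit represents more value every year.
    supply = 21_000_000          # bitcoin's hard cap on units
    output = 1_000_000_000_000   # hypothetical value of goods priced in it
    for year in range(3):
        print(f"year {year}: value per unit = {output / supply:,.0f}")
        output *= 1.03           # assume 3% economic growth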


Considering this is DNS, there has historically been a limit of 512 bytes. Even though that's not usually a limitation now, with hundreds of answers of multiple bytes each you are really pushing the ideal packet size. There's a high chance of packets being dropped.
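
Back-of-the-envelope, using the record sizes from RFC 1035 (the name in each answer assumed compressed to a 2-byte pointer):

    # How many A-record answers fit in a classic 512-byte DNS/UDP message?
    header = 12                                         # fixed DNS header
    question = len("some-service.example.com") + 2 + 4  # QNAME + QTYPE/QCLASS
    answer = 2 + 2 + 2 + 4 + 2 + 4                      # ptr, type, class, TTL, rdlen, IPv4
    print((512 - header - question) // answer)          # => 29 answers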


> Considering this is DNS, there has historically been a limit of 512 bytes.

Only with UDP transport; responses that don't fit are truncated, which tells the client to requery via TCP.


These days EDNS0 allows bigger UDP responses in many cases, which may mean some fragment re-assembly. Unfortunately there are a staggering number of networks and firewalls that don't open TCP 53, and also ones that don't permit UDP fragments. So if you want DNS to work reliably /everywhere/, sadly it's wise to stay below the 512 limit.
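
A sketch of both mechanisms using the dnspython library (assuming it is installed; the server address is just an example):

    # Advertise a 4096-byte UDP buffer via EDNS0; fall back to TCP if the
    # response comes back truncated (TC bit set).
    import dns.flags
    import dns.message
    import dns.query

    query = dns.message.make_query("example.com", "A", use_edns=0, payload=4096)
    response = dns.query.udp(query, "8.8.8.8", timeout=2)
    if response.flags & dns.flags.TC:      # truncated: requery over TCP 53
        response = dns.query.tcp(query, "8.8.8.8", timeout=2)
    print(response.answer)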


We're talking about service discovery here. This is internal DNS traffic in AWS, where these issues to which you refer are nonexistent.


They are primarily targeting the cloud database market, and there are many examples of non-free software surviving just fine in that market. Both Amazon and Google run their own proprietary cloud databases like this.


And it is very easy to host your own instance of Amazon or Google :-)


How are they targeting location at such a high resolution? IP targeting is usually only accurate at the whole-city level, yet here they show tracking along a bus route. That would require GPS. What am I missing?


Ad networks allow advertisers to target based on fine-grained location, which is presumably matched against the location reported by the phone's location services. That is, an app with GPS privileges displays a geo-targeted ad, and that display is reported to the advertiser. So yes, this is using GPS, after a fashion.


So this has nothing to do with browsers or ISPs; it's a result of apps with GPS permissions transmitting that information to advertisers without the user's consent?


And/or users handing over consent for the app (and included ad tech) to use GPS data without thinking through, or caring enough about, the consequences. Additionally, predatory apps that demand certain permissions in order to work can effectively force the user to give up GPS data.


Maybe after the user declines a permission and the app prompts again, there should be a checkbox on the permissions prompt that makes Android pretend the permission was granted and just spoofs the relevant data.


That still means that you're safe if you don't use those apps. I just checked: none of the apps on my phone have location services always on, and apps such as Facebook have it set to never. And I can't remember Facebook ever bothering me about it (as long as I don't post something, at least).


This is explained in the article. Apps use high-precision geolocation when requesting location-targeted ads, and you can target ads to specific locations, so if you blanket an area in ads you can indirectly track movement through the grid.

> They then used that DSP to place a geographic grid of location-targeted ad buys around a three-mile square section
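
A sketch of what that grid amounts to (the coordinates, spacing, and cell radius here are hypothetical):

    # Blanket an area with small geo-targeted ad buys; whichever cell's ad
    # gets served reveals which cell the target's phone is currently in.
    lat0, lon0 = 47.60, -122.33   # hypothetical southwest corner, Seattle
    cell = 0.004                  # ~1/4 mile of latitude per cell
    ad_buys = [
        {"lat": round(lat0 + i * cell, 4),
         "lon": round(lon0 + j * cell, 4),
         "radius_m": 250}
        for i in range(12) for j in range(12)   # roughly 3 mi x 3 mi
    ]
    print(len(ad_buys), "location-targeted ad buys")   # 144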


It's the game the target installed that is snitching GPS coordinates to an ad exchange near you.

How else did you think localized advertising works?

OpenRTB has supported GPS location since October 2012, proprietary exchanges longer than that.
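
For reference, here is roughly where that location rides along in an OpenRTB 2.x bid request (field names from the spec; the values are hypothetical):

    # Fragment of an OpenRTB 2.x bid request, shown as a Python dict.
    # device.geo.type == 1 means the fix came from GPS/location services.
    bid_request = {
        "id": "example-request-id",
        "device": {
            "geo": {
                "lat": 47.6062,    # phone-reported latitude
                "lon": -122.3321,  # phone-reported longitude
                "type": 1,         # 1 = GPS/Location Services
            }
        },
    }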


> One test subject's commute across Seattle that the researchers tracked with ads in the app Talkatone. The dotted lines show the subject's real path, while the red dots show where the ads were delivered to the subject's phone to reveal his or her workplace, bus stop, home and local coffee shop. (The subject's actual home location has been somewhat obscured for privacy.)

It's not tracking across a bus path; that's their actual path. The red dots are the tracked locations.


Look at the red dots it places down. Two of the dots are within 1/8th of a mile, and all of the dots are within 4 miles. 1/8th-of-a-mile resolution is extremely fine-grained, not IP-level.


I can purchase a droplet on DigitalOcean that has gigabit and a terabyte of transfer every month for $5. A terabyte is more than enough content for me. I am sure most people could afford that to manage their interneting.
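
For scale, a terabyte a month works out to a sustained rate of about 3 Mbit/s:

    # What sustained rate does 1 TB/month of transfer support?
    bits = 10**12 * 8            # one terabyte, in bits
    seconds = 30 * 24 * 3600     # one month
    print(f"{bits / seconds / 1e6:.2f} Mbit/s")   # ~3.09 Mbit/s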


>I can purchase a droplet on DigitalOcean that has gigabit and a terabyte of transfer every month for $5

>I am sure most people could afford that to manage their interneting

And before the spec has even seen real adoption, we've already seen it centralize into a few major providers.

A bit tongue-in-cheek, but it's a real issue. HTTP isn't the reason things are centralizing so much as economies of scale and convenience are. I see nothing about IPFS that fundamentally changes that, and think we'd likely see similar centralization over time.

Git, one of the inspirational technologies, is in theory distributed as well and in practice hyper-centralized to only a few major providers.


Network infrastructure is also a powerful driver of centralisation.

I'd much rather host content from my home (and, living in a very sunny Australian city, power that with PV and battery storage at a low amortised cost) - but I can't, because the network to support that isn't there.

I get about 7 Mb/s down and 1 Mb/s up; my link is highly asymmetric. When I finally get off ADSL and on to the new National Broadband Network, it'll still be asymmetric.

I can see why networks are built that way, given the current centralisation of infrastructure, but the build also reinforces centralisation.

Think back to 20 years ago when most business network connections, even for small business, were symmetric. Hosting stuff on-site (email servers, web servers, ...) was far more common.


Distributed technology keeps centralized providers honest. If GitHub got complacent, their customers could migrate their most important data in a very short time.


GitHub is complacent, but people haven't moved because it's difficult. The issue tracker is proprietary, and losing all of that plus the account references for the comments makes moving non-trivial.


> Git, one of the inspirational technologies, is in theory distributed as well and in practice hyper-centralized to only a few major providers.

Git repositories are replicated all over.

My laptop has mirrors of all my work's projects and many open source projects.

Imagine how many secure mirrors of, say, the React repository are out there. GitHub is basically just a conveniently located copy.

That's real and tangible decentralization. It's a magical property of the DVCS architecture that it's decentralized even when it's centralized, so to speak.

I agree that there are issues with central hubs though. Maybe the most significant one is that organizational structures and rules are defined in a centralized way on GitHub.

If you look at blockchains as another kind of DVCS that's totally focused on authority, legitimacy, and security, then it seems pretty likely that we'll end up using those to control commits and releases.


Git itself doesn't inherently provide the services that GitHub does: discovery, project management, and social tools.

The kinds of tools that would make those aspects of GitHub distributable are precisely what this article is advocating for!


That sounds expensive to me for a server, unless you are spending a massive amount on bandwidth or storage or something. A dedicated server on Hetzner costs around $50 a month for 32 GB of DDR4, gigabit with 30 terabytes a month (before throttling), and 4 TB of drives.

https://www.hetzner.com/dedicated-rootserver/ex41



Your critique of hypercomputation applies all the same to Turing-complete machines. A Turing machine can never be implemented and is a totally theoretical model. At best we can implement a weaker class of machine called a "linear bounded Turing machine". For these machines the halting problem is solvable (it can just take a while). You could even implement a linear bounded Turing machine as an FSM; it would just take an ungodly number of states.


> You could even implement a linear bounded Turing machine as an FSM; it would just take an ungodly number of states.

To be clear, these things have formal definitions, and this statement is not correct.

1. A linear bounded automaton[0] is a Turing machine that may only write to the portion of the tape containing its input. The definition of the automaton is still required to be finite, but it is required to operate on unboundedly large input tapes.

2. A finite state machine[1] is a model of computation where the sequence of input symbols is observed once and the machine is at all times in one of a finite number of states. It is equivalent to a TM that can only move right (and consequently cannot read anything it writes to the tape).

They are different, formally[2]. There are languages that an LBA can accept that an FSM cannot accept (famously, the strings of balanced parentheses cannot be recognized by an FSM).

[0]: https://en.wikipedia.org/wiki/Linear_bounded_automaton

[1]: https://en.wikipedia.org/wiki/Finite-state_machine

[2]: https://en.wikipedia.org/wiki/Pumping_lemma_for_regular_lang...


This is correct from the unbounded-theoretical-tape perspective; I should have expanded my point, and you are very correct. In the context of a physical (finite) realization, a tape that can be written is some form of RAM, and beyond O(1) complexity all of these machines can express each other given different amounts of RAM. Even in the theoretical model, an FSM can accept balanced parentheses of finite depth. My FSM point is basically wrong, though.
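
To make the finite-depth point concrete: a recognizer for parentheses nested at most k deep really is an FSM, because its states are just the counter values 0..k plus a reject state (sketch):

    # An FSM can accept balanced parentheses up to a FIXED depth k,
    # because the nesting counter only ever takes the values 0..k.
    def balanced_up_to(k: int, s: str) -> bool:
        depth = 0                  # current state, one of 0..k
        for ch in s:
            if ch == "(":
                depth += 1
                if depth > k:      # would need a state beyond the finite set
                    return False
            elif ch == ")":
                depth -= 1
                if depth < 0:
                    return False
        return depth == 0

    print(balanced_up_to(2, "(())"))    # True
    print(balanced_up_to(2, "((()))"))  # False: needs depth 3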


Turing machines are a useful simplification: they eliminate the (literal) edge-case that we hit the end of the tape. We can either think of this as having "infinite tape", or we can imagine a "tape factory" on either end which extends the tape faster than the TM can read it.
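
In code the "tape factory" view is quite natural; a sketch, with the blank symbol assumed to be "_":

    # An "infinite" tape as a dict that manufactures blank cells on
    # demand, so the machine never falls off an edge.
    from collections import defaultdict

    tape = defaultdict(lambda: "_")   # "_" is the blank symbol
    for i, ch in enumerate("101"):    # load an input word
        tape[i] = ch
    print(tape[-5])                   # step off the left end: "_" appears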

Mathematically, this sort of simplification is used a lot: when our model requires some sort of bound, we can work under the assumption of it being 'sufficiently large' that we can ignore edge-cases. Another example is the set of "real" numbers, which we can represent as decimals and assume a sufficiently large number of decimal places to avoid rounding. In fact, similar to the "tape factories" of a TM, we can think of each real number as having a "decimal-place factory" which produces new digits more quickly than we can read them (for example, during Cantor diagonalisation).

The infinities in hypercomputation don't seem to be providing such a simplification. Their 'sufficiently large' assumption seems to be the number of steps which can be executed in a unit of time, which avoids the edge-case of non-halting programs. I'm not sure that's a useful simplification.


"It can just take a while" is a pretty egregious understatement. As the busy beaver question shows, even an extremely simple turing machine with just a handful of states can execute for a completely unfathomable number of steps.

https://en.wikipedia.org/wiki/Busy_beaver


That is gaming the definition of "states" a bit. Your example has an official finite number of "states" combined with an unbounded tape of extra states, while only counting the FSM controlling it. That tape is part of the state; the machine can edit and read the tape in its decision procedure.

Noting this, far more states than a handful are possible. For example, with bitpacking my laptop could track a visited/not-visited flag for 64,000,000,000 states without even using disk; that's roughly a 36-bit state space.
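
A visited-flag bitmap like that is only a few lines (a sketch; the full 64-billion-bit array needs 8 GB of RAM, so shrink n_states to experiment):

    # One visited/not-visited bit per state, packed into a bytearray.
    n_states = 64_000_000_000
    visited = bytearray(n_states // 8)   # 8 GB: one bit per state

    def mark(state):
        visited[state >> 3] |= 1 << (state & 7)

    def seen(state):
        return visited[state >> 3] & (1 << (state & 7)) != 0

    mark(123_456_789)
    print(seen(123_456_789), seen(42))   # True False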

I agree, though, that linear bounded Turing machines having a solution for the halting problem is not actually that useful for real computers, considering that by the time you had iterated over the states of even a few registers, the sun would have exploded.


I think you're confusing "states" with cells on the tape. They are completely independent. All Turing machines have an unbounded tape, but a well-defined number of states that act as an automaton controlling the head on the tape.


This is unintuitive to me. Source?

(I can understand why we might not be able to build a Turing machine because we don't have infinite tape, etc, but if a program never halts but merely needs finite space, how do you prove that, in general?)


There are only finitely many states. Detect loops.


Okay. "Loop detected. It terminates conditionally on the value of `a`".

I would think that deciding if a loop halts is equivalent to the halting problem (just wrap your program in a loop). Merely detecting loops seems insufficient.


I think the answer is you can solve the halting problem if you have finite memory. You just need to consider every possible memory value. Eventually no new memory values can be introduced, so the next state must be one of the already inspected memory values.

Of course all this takes at least exponential time and so is outside our normal computability assumptions.
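
A sketch of that decision procedure, abstracted over a transition function (`step` here is hypothetical: it returns the next memory state, or None when the program halts):

    # Deciding halting for a deterministic machine with finitely many
    # possible states: if an exact state ever repeats, it loops forever;
    # the finite state space guarantees the search itself terminates.
    def halts(step, state):
        seen = set()
        while state is not None:     # None signals a normal halt
            if state in seen:
                return False         # exact state repeated: infinite loop
            seen.add(state)
            state = step(state)
        return True

    # Toy example: a 3-bit counter that halts when it wraps to zero.
    print(halts(lambda s: (s + 1) % 8 or None, 1))   # True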


Wouldn't the halting problem take at most exponential time?

We know that a program with n bits of memory has at most 2^n states, so we can maintain an n-bit counter that is incremented once on each program step. If this counter reaches (2^n)-1, then the program does not terminate. (Depending on how you define things, there is an off-by-one error here; just run the program a few steps before you start counting.)
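
The same argument as (hypothetical) code, with the off-by-one handled by a final check:

    # Pigeonhole: n bits of memory means at most 2**n distinct states,
    # so running longer than that without halting proves there's a loop.
    def halts_by_counting(step, state, n_bits):
        for _ in range(2 ** n_bits):
            if state is None:        # halted within the bound
                return True
            state = step(state)
        return state is None         # still running after 2**n steps: loop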


The general halting problem takes a potentially unbounded input.

That's why it's impossible to solve.

If it were finite (bounded), then you could enumerate the possible programs of that bounded size, and you could make a program that simply looks up the answer in that list. (Sure, the making of the list would be a bit problematic, because it'd take multiple eons; lesser gods would die, rise, and die again during that time, but it's still finite.)


That's the point, yes. A real machine has finite memory, while a proper Turing machine can always advance the tape to get more memory. You can solve the former in ridiculous but non-infinite time, but not the latter.


By "detect loops" the parent meant "detect repetition of exact state". We have a fixed state machine. If we find ourselves in the same state, at the same location in the same tape, we must not halt because we will proceed in a loop indefinitely. If we have a finite tape, there are only finitely many states the entire system can occupy, so we must be able to exhaust them (or halt first, or loop first) in finite time.

