Hacker News | tech_ken's comments

That's exactly the point of the essay though. The way that you're implicitly modeling labor and collaboration is linear and parallelizable, but reality is messier than that:

> The most important thing to know about labor substitution...is this: labor substitution is about comparative advantage, not absolute advantage. The question isn’t whether AI can do specific tasks that humans do. It’s whether the aggregate output of humans working with AI is inferior to what AI can produce alone: in other words, whether there is any way that the addition of a human to the production process can increase or improve the output of that process... AI can have an absolute advantage in every single task, but it would still make economic sense to combine AI with humans if the aggregate output is greater: that is to say, if humans have a comparative advantage in any step of the production process.


WOW, I've spent years thinking that I suck at typing on phone screens; I never even considered that the keyboard software might just be shitty...

I hear what you're saying, but I've also lately felt a lot of frustration with this framing. I definitely agree that large corporations share an outsized portion of the blame; they have misled and misdirected us past the brink of crisis in pursuit of profit. And as you point out, one of the special cruelties of their system is that it prearranges individual consumer options so that we have the illusion of choice, but ultimately wind up complicit no matter what we choose. Thus it's incumbent on us (collectively) to make a decision that's not on the menu we've been handed. But (critically) it's still going to be individuals making that choice. It's not enough to merely topple Big Oil at a social scale; we will also have to give up our F-450s and sprawling SFH tracts with four-car garages. It's not necessarily fair, and it's not necessarily our fault, but it's still our responsibility (because it's ultimately our future that's at risk).

This is basically what I think of as the "fundamental problem of conservation ecology", i.e. that 'conserving' an ecosystem inherently changes it. The pre-human state definitionally could never be recreated (or maintained) by human hands; it will always reflect the contributions of its human stewards. So if conservation is fundamentally impossible, what's the real goal of conservation ecology? Is it doomed to simply create "theme parks"? Or can we be said to meaningfully preserve something (even if it's a changed or debased form of the original)? Conservation ecology is what happens when these stupid philosophical musings bump into actual engineers willing to try to do it.

Human communities (even digital ones) ultimately have the character of an ecological system IMO (albeit in a more allegorical sense). What the author calls fungibility I might call conservation; weird Twitter accounts "migrating" to Bluesky is sort of like pandas surviving primarily in zoos. Some element of the original system is kept, but ultimately the "conservation" is more like "recreation" or "homage" rather than truly freezing the system in amber. Does that mean that it's futile to try? Or is there some sense in which we can have meaningful translation of the relationship web from one medium to the next? Maybe we don't even care about maintaining the system; if it's just the biodiversity we prize then maybe pandas in zoos forever is enough.

I don't think this is adding anything new to what the author is saying per se, but I find the parallels interesting to muse about.


Adding an additional thought to this. Is it conservation if you're just trying to recreate what once was? I think your thoughts parallel what the author was saying. You can't just recreate a community (or any ecosystem) as it once was; it will be different. I would argue conservation should be trying to prevent the unnatural end of a community/ecosystem.

Humans are a natural part of our planet, and thus an end to an ecosystem caused by us could be considered natural in some way; but we are unique in our ability to challenge and question our natural ways. Maybe we should exempt ourselves from the natural order of things and work to allow an ecosystem's natural course to play out on its own. Then we have the ability to inhabit and change without having destroyed in the process.

Idk, additional things to muse on.


> I don't get why we aren't building mixed use buildings, maybe the first floor can be retail and restaurants, the next two floors can be data centers, and then above that apartments.

I mean a DC needs a lot of infrastructure and space. I think the real estate economics in places where people want to live, shop, and eat preclude the kinds of land usage common in DC design. Keep in mind that most DCs are actually like 4 or 5 datahalls tethered together with massive fiber optic networks.

Also, people prefer to build parking in the levels you're proposing to put DCs into.


> The author uses the power capacity of the ISS's solar panels as a point of comparison, but SpaceX has already successfully deployed many times that capacity in Starlink satellites[1] without even needing to use Starship,

Your link here isn't really a fair comparison, and even then you're still short by a factor of 10. Starlink has deployed 50x the ISS's solar capacity across its entire fleet (admittedly 3 years ago); the author's calcs are 500x the ISS for one datacenter.

> and obviously the heat dissipation problem for those satellites has already been solved so there's little point in hand-wringing about that.

This reasoning doesn't make any sense to me, the heat dissipation issues seem very much unresolved. A single Starlink satellite is using power in the order of watts, a datacenter is hitting like O(1/10) of gigawatts. The heat dissipation problem is literally orders of magnitude more difficult for each DC than for their current fleet. This is like saying that your gaming PC will never overheat because NetGear already solved heat dissipation in their routers.
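The heat-rejection gap can be made concrete with a Stefan-Boltzmann back-of-envelope. This is a rough sketch only: it assumes ideal one-sided blackbody radiators at 300 K and ignores absorbed sunlight, so real radiator areas would be even larger. The 6 kW and 100 MW figures are the round numbers being argued about in this thread, not official specs:

```python
# Back-of-envelope: radiator area needed to reject heat in vacuum,
# via the Stefan-Boltzmann law P = sigma * A * T^4 (ideal blackbody).
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w: float, temp_k: float = 300.0) -> float:
    """Ideal blackbody area needed to radiate `power_w` watts at `temp_k`."""
    return power_w / (SIGMA * temp_k**4)

sat_w = 6e3   # single Starlink-class satellite, ~6 kW
dc_w = 100e6  # a 100 MW datacenter

print(f"satellite:  {radiator_area_m2(sat_w):,.0f} m^2")
print(f"datacenter: {radiator_area_m2(dc_w):,.0f} m^2")
```

On these assumptions the satellite needs on the order of 13 m^2 while the datacenter needs over 200,000 m^2 (tens of hectares of radiator), which is the "orders of magnitude harder" point in concrete terms.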

> The author also worries about ground communication bandwidth, claiming it is "difficult to get much more than about 1Gbps reliably", which seems completely ignorant of the fact that Starlink already has a capacity much greater than that.

Don't their current satellites have like 100Gbps capacity max? Do you have any idea how many 100Gbps routers go into connecting a single datacenter to the WAN? Or to each other (since intrahall model training is table stakes these days). They have at most like O(1)Pbps across their entire fleet (based on O(10K) satellites deployed and assuming they have no failover protection). They would need to entirely abandon their consumer base and use their entire fleet to support up/down + interconnections for just 2 or 3 datacenters. They would basically need to redeploy a sizeable chunk of their entire fleet every time they launched a DC.
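The fleet-aggregate figure above follows from the two round numbers in the comment (both are assumptions from this thread, not official SpaceX specs):

```python
# Rough aggregate-bandwidth estimate from the comment's round numbers.
per_sat_gbps = 100   # assumed max ground-link capacity per satellite
fleet_size = 10_000  # order-of-magnitude count of deployed satellites

fleet_gbps = per_sat_gbps * fleet_size
fleet_pbps = fleet_gbps / 1e6  # 1 Pbps = 1e6 Gbps

print(f"fleet aggregate: ~{fleet_pbps:.0f} Pbps")
```

That works out to O(1) Pbps across the whole constellation, which is the basis for the claim that a handful of datacenters would saturate it.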


> Starlink has deployed 50x the ISS's solar cap across its entire fleet (admittedly 3 years ago); the author's calcs are 500x the ISS for one datacenter.

So 3 years ago they managed to get to 10% of the power budget of one data center by accident, using satellites not explicitly designed for that purpose, using a partially reusable launch platform with 1/10th the payload capacity of Starship. My point is they've already demonstrated they can do this at the scale that's needed.

> A single Starlink satellite is using power in the order of watts

Then why does each satellite have a 6 kW solar array? Re-read that post I linked; the analysis is pretty thorough.

> Don't their current satellites have like 100Gbps capacity max?

Gen 3 is reportedly up to 1 Tbps ground link capacity, for one satellite.[1] There will be thousands.

> Do you have any idea how many 100Gbps routers go into connecting a single datacenter to the WAN? Or to each other (since intrahall model training is table stakes these days).

Inter-satellite connections use the laser links and would not consume any ground link capacity.

You're also ignoring that this is explicitly being pitched as a solution for compute-heavy workloads (AI training and inference) not bandwidth-heavy workloads.

[1]: https://starlink.com/updates/network-update


> So 3 years ago they managed to get to 10% of the power budget of one data center by accident, using satellites not explicitly designed for that purpose, using a partially reusable launch platform with 1/10th the payload capacity of Starship. My point is they've already demonstrated they can do this at scale.

How was it by accident? You make it sound like it was easy, rather than a total revolution of the space industry, and that only got them to 1/10th of what they would need for a single DC (and most industry leaders have 5 or 6). Demonstrating they could generate power at DC scale would mean actually standing up a gigawatt of orbital power generation, IMO. And again, this is across thousands of units. They either have to build this capacity all-in for a single DC, or somehow consolidate the power from thousands of satellites.

> Then why does each satellite have a 6 kW solar array? Re-read that post I linked; the analysis is pretty thorough.

You're right, my bad. So they're only short like 6 orders of magnitude instead of 9? Still seems massively disingenuous to conclude that they've solved the heat transfer issue.

> Gen 3 is reportedly up to 1 Tbps ground link capacity, for one satellite.[1] There will be thousands.

Okay, I'll concede this one; they could probably get the data up and down. What's the latency like?


> How was it by accident?

I say by accident because high power capacity wasn't a design goal of Starlink, merely a side effect of deploying a communications network.

> My bad. So they're only short like 6 orders of magnitude instead of 9?

No, they're 1 order of magnitude off. (22 MW total capacity of the constellation vs your bar of 100 MW for a single DC.) Again, 3 years ago, using an inferior launch platform, without that even being a design goal.
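The "1 order of magnitude" claim checks out on the figures in this thread (22 MW fleet-wide vs the 100 MW bar; both are the numbers being argued about, not verified specs):

```python
import math

fleet_mw = 22  # claimed total constellation solar capacity (~3 years ago)
dc_mw = 100    # bar stated upthread for a single datacenter

ratio = dc_mw / fleet_mw
orders = math.log10(ratio)
print(f"gap: {ratio:.1f}x, i.e. {orders:.1f} orders of magnitude")
```

The gap is about 4.5x, which is indeed less than one full order of magnitude.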

> What's the latency like?

Starlink latency is quite good, about 30ms round trip for real-world customers on the ground connecting through the constellation to another site on the ground. Sun synchronous orbit would add another ms or two for speed of light delay.

AFAIK nobody outside SpaceX has metrics on inter-satellite latency using the laser links, but I have no reason to think it would be materially worse than a direct fiber connection provided the satellites aren't spread out too far. (Starlink sats are very spread out, but you obviously wouldn't do that for a data center.)
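The propagation delays are easy to sanity-check from first principles (a sketch; the 550 km altitude and 100 km link spacing are illustrative round numbers, not known constellation geometry):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def light_delay_ms(distance_km: float) -> float:
    """One-way propagation delay over `distance_km` at c in vacuum."""
    return distance_km * 1e3 / C * 1e3

# ~550 km altitude (a typical LEO shell): ground <-> satellite, both ways
print(f"up/down hop: ~{2 * light_delay_ms(550):.1f} ms round trip")
# an assumed 100 km laser link between neighboring satellites
print(f"laser hop:   ~{light_delay_ms(100):.2f} ms one way")
```

Note that laser links in vacuum propagate at c, while light in fiber travels at roughly 2/3 c, which is one reason to expect the laser links to be no worse than fiber over the same distance.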


> No, they're 1 order of magnitude off. (22 MW total capacity of the constellation vs your bar of 100 MW for a single DC.)

Why on earth would you compare their entire fleet to one project? Power generation trivially parallelizes only if you can transmit power between generation sites. Unless they've figured out how to beam power between satellites, the appropriate comparison is 6 kW to 100 MW. And again, generation is the easy side; the heat dissipation absolutely does not parallelize, so that also needs to go up by 3-5 orders of magnitude.

And also: radiation. Terrestrial GPUs are going to be substantially more power and heat efficient than space-based ones (as outlined in TFA). All this for what benefits? An additional 1.4x boost in solar power availability? There's simply no way the unit economics of this work out. Satellite communications have fundamental advantages over terrestrial networks if you can get the launch economics right. Orbital DCs have only the solar availability thing; everything else is cheaper and easier on land.


Why wouldn't you compare to the entire fleet? You think they're going to deploy an entire data center in one sat? That'd be as dumb as trying to deploy an entire data center in one rack back on Earth. Of course if you frame the problem that way it seems impossible.

I already gave my thoughts on radiation and economics in my original comment. I agree those could be significant challenges, but ones SpaceX has a plausible path to solving. Starship in particular will be key on the economic side; I find it very unlikely they'll be able to make the math work with just Falcon 9. Even with Starship it might not work out.

And it's not just a 1.4x boost in solar power availability. You also eliminate the need for batteries to maintain power during the night or cloudy days (or cloudy weeks), and the need for ground infrastructure (land, permitting, buildings, fire suppression systems, parking lots, physical security, utility hook-up, etc).


> I'm trying to understand what the criticism is here, because the example seems to support the point that these are meant to be a way of learning the future, not oppose it. I thought the whole point was that yes, people with inside knowledge will bet large sums of money on things they expect to happen, and that's what makes the prediction useful. The market is meant to incentivize people who know things to act on them in a way that makes them known.

You're ignoring the critical issue of timing. It's one thing to crowd-source knowledge in a steady, homogeneous way. It's quite another for an actor with material knowledge of the situation to exploit this dramatic information asymmetry to turn a profit, revealing the new information at the last possible moment it could be used to lay a wager. Insider trading is quite different from Hayek-style price signalling, and it's the same here. In principle (and on long time-scales) these markets can incentivize important information to come to light, sure, but in infinite time we're all dead anyway. The short-time dynamics matter a lot more from a social welfare perspective.


> If we evolved to be more rational, then the phenomenon probably wouldn't exist.

I strongly disagree here; I think the phenomenon is in fact part and parcel of the evolution of our rationality. Betting (as you accurately describe) is an intrinsic part of life, and rationality (as I understand it) is our evolved response to wrangling that uncertainty. It shouldn't be surprising that the evolutionary reward mechanism for acting rationally (a big oxytocin hit when you predict something correctly) could get miscalibrated for a subset of the population; that's just biology.

I also think you're making a massive error by conflating criticism of betting with criticism of betting markets, though. Nobody is upset about people dealing with the natural risk of day-to-day life; the problem is that people are now being incentivized to take on additional risk (beyond the day-to-day) which they are poorly equipped to reason about. The systems governing the dynamics of geopolitical events (or other things you can estimate in a betting market) are staggeringly complex, and assigning credences to them is an epistemological nightmare that even the most seasoned experts struggle to manage reliably. These market brokers are using a framework of rationalism/"wisdom of the crowds" as cover to sucker non-expert consumers into what truly is gambling in the sense that you describe. In principle a person could probably lay accurate odds on a horse race too, but the average punter is never doing that, and the same is true of the majority of polymarket users.


My (admittedly a bit tinfoil) take on the recent self-hosting boom is that it's highly compatible with individualist suburban capitalism, and that while there are elements of it that offer an alternative path to techno-feudalism, by itself it doesn't really challenge the underlying ideology. It's become highly consumerist, and seems more like a way of expressing taste/aesthetics than something that's genuinely revolutionary. Cooperative services (as you describe) seem like they offer a far more legitimate challenge, but I feel like that's a big reason why they don't see as much fêting in the mainstream tech media and industry channels.

I say all this as someone who's been self-hosting services in one form or another for almost a decade at this point. The market incorporation/consumerization of the hobby has been so noticeable in the last five years. Even this AI thing seems like another step in that direction; now even non-experts can drop $350+ on consumer hardware and maybe $100 on some network gear so that they can control their $50/bulb Hue lights and manage their expansive personal media collection.


Interesting! I'm not sure how severe the consumerisation really is, but yeah I can totally see the whole home-automation thing playing into it too.

I don't think mainstream tech media is deliberately omitting co-ops in their reporting due to them challenging the status quo. I think it's rather that actually, there aren't really many initiatives in the space.

And I think that is due to a lot of tech people thinking that if only the technology becomes good enough, the problem will be solved and, finally, everyone can have their own cloud at home.

I think that's wrong, though. I think the solution in this case is that we organize the service differently, with power structured in a different way. We don't need more software to solve the problem. We know how to build cloud services, technically. We know how to do it well. It's just that if the service is run for-profit, counter to the interests of the users, it will eventually become a problem for the users. That's the problem to fix, and it's not one to fix with technology, but just with organizing it differently.

It works for housing, in some areas it also works for utilities like internet, there are also co-ops for food. Why shouldn't it also work for modern-day utilities like cloud storage and email?

As a techie, don't be content with just running your own self-hosted service. Run it for your family, run it for your friends, run it for your neighborhood! Band together!


> It's just that if the service is run for-profit, counter to the interests of the users, it will eventually become a problem for the users. That's the problem to fix, and it's not one to fix with technology, but just with organizing it differently.

100% agree with you here, and yeah I'm definitely leaning a bit too conspiratorial about it. It's probably not actually intentional, and instead just a product of the larger dynamics.

A while ago I read some interesting economic analysis about why more co-ops hadn't popped up specifically in the gig worker space, since it seems so natural to cut out the platform rent that e.g. Uber extracts as profit. I'm failing to recall the specific conclusions, but IIRC the authors seemed to feel that there were some structural obstacles preventing co-ops from growing in that space. Something something capex and unit costs. It's certainly an area I'd be interested to see further analysis of.

Also, you sound like you might get a kick out of mayfirst.coop (if you're not familiar with them already). It's not exactly what you're describing, but the spirit is there. I use them for my web-hosting needs and have been extremely satisfied.


I think this is a good idea so long as you ensure you've got a good backup going, or don't put anything super critical on there. I think the odds of Claude `rm -rf /`-ing your server are seriously long, but definitely not 0%.

