I went down the self-hosting route some years ago, but once critical problems hit I realized that beyond a simple NAS it can be a very demanding hobby.
I was in another country when there was a power outage at home. My internet went down; the server restarted but couldn't reconnect anymore, because the optical network router also had problems after the outage. I could ask my folks to restart things and turn them on and off, but nothing more than that. So I couldn't reach my Nextcloud instance and other stuff. Maybe an uninterruptible power supply could have helped, but the more I thought about it afterwards, it just didn't seem worth the hassle anymore. Add a UPS, okay. But why not add a dual-WAN failover router for extra security in case the internet goes down again? Etc. It's a bottomless pit (like most hobbies tbh)
Also (and maybe that's a me problem) I was using Tailscale, but I'm more "paranoid" about it nowadays. It's a single point of failure, the SSO login options are US-only (MS, GitHub, Apple, Google), and what if my Apple account gets locked over something like redeeming a gift card and I can't log into Tailscale anymore? I still believe in self-hosting, but I probably want something even more "self", to the extreme.
My spouse and I work at home and after the first couple multi-day power outages we invested in good UPSs and a whole house standby generator. Now when the power goes out it's down for at most 30 seconds.
This also makes self-hosting more viable, since our availability is now constrained by our internet provider rather than by power.
Yeah we did a similar thing. Same situation, spouse and I both work from home, and we got hit by a multiple day power outage due to a rare severe ice storm. So now I have an EV and a transfer switch so I can go for a week without power, and I have a Starlink upstream connection in standby mode that can be activated in minutes.
Of course that means we’ll not have another ice storm in my lifetime. My neighbors should thank me.
Well, it's an EV with a big inverter, not a generator, but I get your point. And I do periodically fire it up and run the house on it for a little while, just to exercise the connection and maintain my familiarity with it in case I need to use it late at night in the dark with an ice storm breaking all the trees around us.
Oh, I see! Genuinely curious -- what kind of EV has a battery to power a house for a week?
> maintain my familiarity with it in case I need to use it late at night in the dark with an ice storm breaking all the trees around us.
That's the way to do it. I usually did my trial runs during the day with light readily available, but underestimated how much I needed to see what I was doing. Now there's a grounding plug and a flashlight in the "oh shit kit".
> what kind of EV has a battery to power a house for a week?
Assuming their heating, cooking, and hot water are gas, a house doesn't actually consume that much. With a 50kWh battery you can draw just under 300W continuously for a week (50,000Wh / 168h ≈ 298W). I'd expect the average house to draw ~200W with lighting and a few electronics, with a lean towards the evenings for the lighting.
On paper the numbers look right, but a week off a _50kWh_ EV battery feels off.
What follows is back of the napkin calculations, so please treat it as such and correct me if I am wrong.
1. Inverters are not 100% efficient. Let's assume 90%.
2. Let's also assume the user does not want to draw the battery to 0, so as not to become stranded or have to do the "Honda generator in the trunk" trick. An extra 10%?
3. 300W continuous sounds a bit low even with gas appliances. Things like the fridge and furnace blower have spiky loads that push the daily average. Let's add 100W to the average load? I might be being too generous here, but I used 300W, not the 200W lower bound.
4. The vehicle side has some consumption of its own. If powering off the battery, it would probably need to cool the battery or keep some smarts awake to make sure it doesn't drain or overheat? Genuinely not sure how to estimate this; let's neglect it for now.
Math is (50kWh - 10% (inverter loss) - 10% (reserve)) / 0.4kW = 100 hours, ~4 days.
The above calculations assume a sane configuration (proper bidirectional wiring, not a suicide cord into the 12V outlet). A quick skim of search results for cars with bidirectional charging support for home use shows batteries between ~40kWh (Leaf) and 250kWh (Hummer).
So it looks like one should be looking for an ~80kWh battery, which most of the cars on the list actually have.
Again, very back of the napkin, would probably wanna add 20% margin of error.
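If anyone wants to poke at the assumptions, the whole back-of-the-napkin calculation fits in a few lines of Python (every input is a rough guess from above, not a measurement):

    # EV-as-house-battery runtime estimate; all inputs are guesses, not specs
    def runtime_days(pack_kwh, inverter_eff=0.90, reserve_frac=0.10, avg_load_kw=0.4):
        usable_kwh = pack_kwh * inverter_eff * (1 - reserve_frac)
        return usable_kwh / avg_load_kw / 24

    for pack in (40, 50, 80):  # Leaf-ish, the 50kWh example, the ~80kWh target
        print(f"{pack} kWh pack: ~{runtime_days(pack):.1f} days at 400W average")

That prints ~3.4, ~4.2, and ~6.8 days respectively, so "~80kWh for a week" checks out under these assumptions.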
Actually, yes, one thing I didn't consider in my calculation is the fridge (mostly because it's a spiky load that rarely comes on, and I based my number on my own apartment's instantaneous consumption at the time, which was ~100W since the fridge compressor wasn't running).
Indeed with the fridge it pushes it a bit. But to address some of your other points:
> it would probably need to cool the battery
I'd expect if you're in a storm then you probably don't need any cooling - not to mention a 300W load is nothing for an EV battery compared to actually moving the vehicle. I'd expect some computers in the vehicle to be alive but that should be a ~10-20W draw.
On the other hand, my calculation assumes ~300W continuous. I expect the consumption to lean into the evenings due to the extra lighting, and drop off during other times.
But yes 80kWh might very well be what the OP has; I intentionally picked 50kWh as the lowest option I found on a "<major ev brand> battery kwh" search.
2025 was the year of LiFePO4 power packs for me and my family. Absolute game changers: 1000Wh of capacity with a multi-socket inverter and UPS-like failover. You lose capacity compared to a gas genny, but the simplicity and lack of fumes add back a lot of value. If it's sunny you can also make your own fuel.
Generac 26kW Guardian, natural gas fueled, connected to a pair of automatic transfer switches. We have two electric meters due to having a ground source heat pump on its own meter.
I thought I was smart because I invested in UPS backup from the start.
Then 5 years later there was a power outage and the UPS lasted for about 10 seconds before the batteries failed. That's how I learned about UPS battery maintenance schedules and the importance of testing.
I have a calendar alert to test the UPS. I groan whenever it comes up, because I know there's a chance I'll discover the batteries won't hold up under load anymore, which means I not only have to deal with the server losing power, but also have to do the next round of guessing which replacement batteries come from a good brand this time. Using the same vendor doesn't even guarantee the same quality when you only buy every several years.
Backup generators have their own maintenance schedule.
I think the future situation should be better with lithium-chemistry UPSes, but every time I look, the available options are either exorbitantly expensive or cobbled together from parts in a way that kind of works but has a lot of limitations and up-front work.
My APC UPS self-tested and monitored battery status automatically, then started beeping endlessly when it noticed the battery needed replacing (it could be muted, though). Eventually I stopped using a UPS, since I rarely needed it and it was just another thing to keep and maintain.
Yea, I think my own preference for self-hosting boils down to a distrust of continuous dependencies on services controlled by companies, and a desire to minimize such dependencies. While there are FOSS and self-hostable alternatives to Tailscale (or indeed Claude Code), using those services themselves simply replaces old dependencies on externally-controlled cloud services with new ones.
You can self-host Pocket ID (or another OIDC auth service) on a tiny $1/mo box and use that as your identity provider for Tailscale. Here's a video explaining how: https://www.youtube.com/watch?v=sPUkAm7yDlU
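For the curious, the moving parts are small. A minimal docker-compose sketch; the image path, port, and env var name here are from memory and should be treated as assumptions, so check the Pocket ID docs (and IIRC Tailscale's custom-OIDC flow also wants a WebFinger endpoint on your domain, which the video covers):

    services:
      pocket-id:
        image: ghcr.io/pocket-id/pocket-id   # assumed image path
        restart: unless-stopped
        ports:
          - "1411:1411"                      # assumed default port; put HTTPS in front
        environment:
          - APP_URL=https://id.example.com   # assumed var name: your public URL
        volumes:
          - ./pocket-id-data:/app/data       # assumed data path; persists keys/users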
I really enjoy self-hosting on rented compute. It's theoretically easy to migrate to an on-prem setup, but I don't have to deal with the physical responsibilities while it's in the cloud.
Depends what you are trying to host. For many people it's either to keep their private data local, or stuff that has to be on the home network (Pi-hole / Home Assistant).
If you just want to put a service on the internet, a VPS is the way to go.
Self-hosting sounds so simple, but if you consider all the critical factors involved, it becomes a full-time job. You own your server. In every regard.
And security is only one crucial aspect. How spam filters react to your IP is another story.
In the end I cherish the dream but rely on third-party server providers.
Syncthing might be worth looking into. I've been using it more and more over the last few years for anything I use daily: KeePass, plain-text notes, calendars/contacts, RSS feeds. Everything else that I'm "self-hosting" is stuff I might only use a few times a week, so it's no big deal if I lose access.
It's so much simpler when you have the files stored locally; syncing between devices is just something that can happen whenever. Anything running on a server needs user permissions, wifi, a router, etc. etc. It's a lot of complexity for very little gain.
Although keep in mind I'm the only one using all of this stuff. If I needed to share things with other people, then Syncthing gets a bit trickier and a central server starts to make more sense.
Tailscale has passkey-only account support but requires you to sign up in a roundabout way (first use an SSO, then invite another user, throw away the original). The tailnet lock feature also protects you to some extent, arguably more so than solutions involving self-hosting a coordination server on a public cloud.
So now you need to test them regularly. And order new ones when they're not holding a charge any more. Then power down the server, unplug it, pull the UPS out, swap batteries, etc.
Then even when I think I've got the UPS automatic shutdown scripts and drivers finally working just right under linux, a routine version upgrade breaks it all for some reason and I'm spending another 30 minutes reading through obscure docs and running tests until it works again.
Not sure what to say then. I run NixOS on ~15 different VMs/mini-PCs, a total of I guess 6 physical machines. I've never had to deal with a UPS battery dying, and haven't had to do anything to address NUT breaking. I broadcast NUT via a Synology NAS though, so the only direct client of the UPS status is the NAS. I've never once had an issue in the ~5 years I've had it set up like this.
My home server doesn't need to be high availability, and the BIOS is set to restore whatever power state it was in prior to power loss. I don't have a UPS. However, we were recently hit with a telco outage while visiting family out of town. As far as I can tell there wasn't a power outage, but it took a hard reboot of the modem to get connectivity back. Frustrating, because it meant no checking home automation/security and of course no access to the servers. I'm not at a point where my homelab is important enough that I would invest in a redundant WAN though.
I've also worked in environments where the most pragmatic solution was to issue a reboot periodically and accept the minute or two of (external) downtime. Our problem is probably down to T-Mobile's lousy consumer hardware.
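If the gateway is scriptable at all, the periodic-reboot hack really is one cron line (a sketch; many consumer gateways, T-Mobile's included, only expose reboot through their app/web UI, in which case a smart plug on a timer is the usual blunt workaround):

    # root crontab: bounce the box nightly at 4am, accept the ~2 minute blip
    0 4 * * * /sbin/reboot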
As another commenter said (but got downvoted to oblivion for some reason), it's not really about uptime for the homelab, it's about graceful shutdown/restart. And there are well-defined tools for it (look up Network UPS Tools, aka NUT).
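For anyone who hasn't set it up, a single-machine NUT config is only a few lines. A minimal sketch (names and the password are placeholders; usbhid-ups covers most consumer USB units, and older NUT versions say "master" where newer ones say "primary"):

    # /etc/nut/ups.conf -- how to find the UPS
    [myups]
        driver = usbhid-ups
        port = auto

    # /etc/nut/upsd.users -- credentials for the monitor
    [upsmon]
        password = secret
        upsmon primary

    # /etc/nut/upsmon.conf -- shut down cleanly when the battery runs low
    MONITOR myups@localhost 1 upsmon secret primary
    SHUTDOWNCMD "/sbin/shutdown -h +0"

Check status with upsc myups@localhost. Other machines on the LAN can watch the same UPS by pointing their MONITOR line at this host, which is essentially the Synology-as-NUT-server arrangement mentioned above.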
Power outages here tend to last an hour or more. A UPS doesn't last forever, and depending on how much home compute you have, might not last long enough for anything more than a brief outage. A UPS doesn't magically solve things. Maybe you need a home generator to handle extended outages...
How bottomless of a pit it becomes depends on a lot of things. It CAN become a bottomless pit if you need perfect uptime.
I host a lot of stuff, but nextcloud to me is photo sync, not business. I can wait til I'm home to turn the server back on. It's not a bottomless pit for me, but I don't really care if it has downtime.
Fairly frequently, 6kVA UPSs come up for sale locally to me, for dirt cheap (<$400). Yes, they're used, and yes, they'll need ~$500 worth of batteries immediately, but they will run a "normal" homelab for multiple hours. Mine will keep my 2.5kW rack running for at least 15 minutes - if your load is more like 250W (much more "normal" imo) that'll translate to around 2 hours of runtime.
Is it perfect? No, but it's more than enough to cover most brief outages, and also more than enough to allow you to shut down everything you're running gracefully, after you used it for a couple hours.
Major caveat, you'll need a 240V supply, and these guys are 6U, so not exactly tiny. If you're willing to spend a bit more money though, a smaller UPS with external battery packs is the easy plug-and-play option.
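Sanity-checking those runtime numbers with plain energy arithmetic (the inverter idle draw is an assumption, not a spec):

    2.5 kW x 0.25 h ~= 625 Wh usable from the battery bank
    625 Wh / 250 W  ~= 2.5 h naive runtime at a light load
    minus inverter self-consumption (a 6kVA unit can burn ~50-100W just being on)

So "around 2 hours" is the right ballpark, even before lead-acid's better-at-light-load behavior claws some of that back.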
> How bottomless of a pit it becomes depends on a lot of things. It CAN become a bottomless pit if you need perfect uptime.
At the end of the day, it's very hard to argue you need perfect uptime in an extended outage (and I say this as someone with a 10kW generator and said 6kVA UPS). I need power to run my sump pumps, but that's about it - if power's been out for 12-18 hours, you better believe I'm shutting down the rack, because it's costing me a crap ton of money to keep running on fossil fuels. And in the two instances of extended power outages I've dealt with, I haven't missed it - believe it or not, there's usually more important things to worry about than your Nextcloud uptime when your power's been out for 48 hours. Like "huh, that ice-covered tree limb is really starting to get close to my roof."
I agree with you. My use case doesn't call for perfect uptime. Sounds like yours doesn't either (though you've got a pretty deep pit yourself, if the 240V and the generator weren't part of the sump plans and the rack just got to ride along; that's how it worked for me).
But that doesn't mean it's for us to say that someone else's use case is wrong. Some people self-host a Nextcloud instance and offer access to friends and family. What if someone else is hosting something important on there and my power is out? My concerns may be elsewhere, but theirs might not be.
My point was simply that different people have different use cases and different needs, and it definitely can become a bottomless pit if you let it.
For me: IPMI, PiKVM, TinyPilot, any sort of remote management interface that can power a device on/off and that comes back up automatically when power is available, so you can reasonably always access it. Having THAT on the UPS means you can power down the compute remotely and also power it back up remotely. You never have to send someone to reboot your rack while you're out of town, and you don't shred your UPS battery in minutes by having the server auto-boot the moment power is available. Eliminates reliance on other people while you're not home :tada:
So again: not quite a bottomless pit, but there are constant layers of complexity if you want to get it right.
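To make the remote-hands part concrete: with a UPS-backed BMC, the "reboot the rack from another country" problem reduces to something like this (a sketch; the address and credentials are placeholders, and it assumes the BMC is reachable over your VPN):

    # graceful ACPI shutdown, check state, then power back up later
    ipmitool -I lanplus -H 10.0.0.50 -U admin -P 'secret' power soft
    ipmitool -I lanplus -H 10.0.0.50 -U admin -P 'secret' power status
    ipmitool -I lanplus -H 10.0.0.50 -U admin -P 'secret' power on

PiKVM can do the same through its web UI with an ATX hookup, for machines without IPMI.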
This is a great example of how the homelab bottomless pit becomes normalized.
Rewiring the house for 240V supply and spending $400+500 to refurbish a second-hand UPS to keep the 2500W rack running for 15 minutes?
And then there's the electricity costs of running a 2.5kW load, and then cooling costs associated with getting that much heat out of the house constantly. That's like a space heater and a half running constantly.
I mean you're right in terms of it being a demanding hobby. The question is, is it worth the switch from other services.
I have 7 computers on my self-hosted network, and not all of them are on-prem. With a bit of careful planning, you can essentially create a system that will stay up regardless of local fluctuations etc. But it is a demanding hobby, and if you don't enjoy the IT stuff, you'll probably have a pretty bad time doing it. For most normal consumers, self-hosting is not really an option and isn't worth the cost of switching over. I justify it because it helps me understand how things work, and it tangentially improves my professional skills as well.
> I was in another country when there was a power outage at home.
If you are going to be away from home a lot, then yes, it's a bottomless pit, because you have to build a system that doesn't rely on you being there at any given time.
Hey, if Tailscale is something you're worried about, there are open-source alternatives to it as well. But if your purpose is just to forward a simple server port, wouldn't plain SSH itself be okay for you?
You can even self-host Tailscale's control plane via Headscale (I don't know how the experience is), and there is genuinely open-source software like NetBird, ZeroTier, etc. as well.
You could also, if interested, just go the plain WireGuard route. It really depends on your use case, but for you, in this case, the SSH use case seems normal.
You could even use this with Termux on Android plus SSH access via Dropbear, I think, if you want. Tailscale is mainly for convenience though, and for not having to deal with NATs and everything.
But I feel like your home server might be behind a NAT, and in that case what I recommend is probably: A) run it over Tor, or use https://gitlab.com/CGamesPlay/qtm, which uses iroh's infrastructure by default but can be self-hosted too; or B) (recommended) get an unlimited-traffic cheap VPS (I recommend UpCloud, OVH, Hetzner), which would cost around $3-4 per month, and then install something like remotemoe (https://github.com/fasmide/remotemoe) or anything similar to it, effectively as a proxy.
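Fwiw, the dumbest version of option B, if you only need one port exposed, is a plain SSH reverse tunnel from the home box to the VPS; remotemoe and friends are essentially a nicer, managed version of the same trick:

    # run on the home server (behind NAT): publish local port 8080 on the VPS
    ssh -N -R 0.0.0.0:8080:localhost:8080 you@your-vps.example
    # needs GatewayPorts yes (or clientspecified) in the VPS's sshd_config;
    # wrap it in autossh or a systemd unit so it survives connection drops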
Sorry if I went a little overkill lol. I've played with these things too much, so I may be over-architecting, but if you genuinely want self-hosting taken to the extreme, Tor .onions or I2P might benefit you. Even buying a VPS can be a good step up.
> I was in another country when there was a power outage at home. My internet went down; the server restarted but couldn't reconnect anymore, because the optical network router also had problems after the outage. I could ask my folks to restart things and turn them on and off, but nothing more than that. So I couldn't reach my Nextcloud instance and other stuff. Maybe an uninterruptible power supply could have helped, but the more I thought about it afterwards, it just didn't seem worth the hassle anymore. Add a UPS, okay. But why not add a dual-WAN failover router for extra security in case the internet goes down again? Etc. It's a bottomless pit (like most hobbies tbh)
Laptops have a built-in UPS and are cheap. Laptops and refurbished servers are a good entry point imo. Sure, it's a bottomless pit, but the benefits are well worth it; at some point you have to look at the trade-offs, and for me, laptops and refurbished or resale servers are that. In fact, I used to run a git server on an Android tablet for a while, but I've been too lazy to figure out whether I want it charging permanently or what.
Oh yeah, this is a really funny story considering what thread we're on, but I remember asking ChatGPT or Claude or Gemini (or whatever xD) to make QTM work, and none of them could figure it out.
But I think in the end what worked was that my frustration took over and I just copy-pasted the commands from the README, and if I remember correctly, they just worked.
This is really ironic considering what thread we're on, but in the end, good READMEs make self-hosting on a home server easier and fun xD
(I don't exactly remember the ChatGPT conversations; perhaps they helped a bit, perhaps not. But I'm 99% sure it was your README that ended up helping, and ChatGPT etc. in fact took an hour or more and genuinely frustrated me, from what I vaguely remember.)
I hope QTM gets more traction. It's built on solid primitives.
One thing I'd genuinely like you to look at, if possible, is building an additional piece of software, or adding functionality, to replace the careful dance currently needed to make it work (the two computers have to exchange two large pieces of data; I had to use a hacky solution like piping-server or wormhole itself for that).
So what I'm asking is: could the initial node pairing (the ticket? sorry, I forgot the name of the primitive) between A and B use wormhole itself, so that instead of the two machines having to exchange large chunks of data, they can just exchange 6 words or similar?
I even remember building my own CLI for something like this, using ChatGPT to build it xD, but in the end I gave up because I wasn't familiar with the codebase or with how to make the two work together. I hope you can add it. I sincerely hope so.
Another minor suggestion: please add an asciinema demo. I'll contribute an asciinema patch showing two computers if you want, but a working demo from zero to running would really, really have saved me a few hours.
QTM has lots of potential. Iroh is so sane: it can run directly on top of IPv4 and talk directly when possible, it can break through NATs, and you can even self-host the middle part yourself. I had thought about building such a project when I first discovered QTM, and you can just imagine my joy when I discovered QTM from one of your comments a long time ago, for what it's worth.
Wishing your project the best of luck! The idea is very fascinating. I would really appreciate a visual demo, though, and I hope we can discuss more!
Edit: I remember that the QTM docs felt complex to me personally, when all I wanted was one computer's port mapped to another computer's port. I think what helped in the end was the fourth comment, if I remember correctly. I might have used LLM assistance, and it may or may not have helped; I genuinely don't remember. It definitely took me an hour or two to figure things out. That's okay; I still feel the software is definitely a net positive, and this might have been a skill issue on my side. But I can't stress enough how much an asciinema in the docs could help an average person figure out the product.
(Then slowly move towards the complex setups, with asciinema demos for each of them, if you wish.)
Once again, good luck! I can't praise QTM enough, and I still strongly urge everyone to try it once (https://gitlab.com/CGamesPlay/qtm), since it's highly relevant to the discussion.
You aren't actually supposed to ever need to deal with tickets manually, unless you are trying to get a tunnel between two machines and neither can SSH into the other. It could be streamlined with something like Magic Wormhole, though. I'll add that to the backlog and see if there's interest. The normal way is to use SSH / docker exec / any remote shell to let QTM swap the tickets over it.
I've added an asciinema to the README now <https://asciinema.org/a/z2cdsoVDVJu0gIGn>, showing the manual connection steps. Thanks for the kind words. Hope you find it useful!
Well, my use case is connecting two servers that are both behind NAT. If I could SSH between them, I could've simply port-forwarded in the first place.
Wow, the asciinema is really good and very professional. Thanks for creating it; I found it very helpful (in the sense that if I ever repeat my experiment, I now have your asciinema to follow), and I hope more people use it.
> It could be streamlined with something like Magic Wormhole, though. I'll add that to the backlog and see if there's interest
To be really honest, it's not that big of a deal, considering one can do that on their own; I just had this idea for my own convenience when I was using QTM.
I really like QTM a lot! Thanks for building it once again. I'll try to use it more often and give you more feedback when possible from now on.
A long time ago, it was popular for ISPs to offer a small amount of space for personal websites. We might see a resurgence of this, but with cheap VPSes. Eventually.
Free static site hosting and cheap VPSs already exist. Self hosting is less about putting sites on the internet now and more about replicating cloud services locally.
VPSes are so dirt cheap that some only work economically because people don't use 100% of the resources they're allocated; thanks to economies of scale, the idle capacity effectively subsidizes the price.
Cheap VPSes with 1GB of RAM and everything can cost around $10-11 per year, and something like Hetzner is cheap as well, around $30-ish a year or $3 per month, with great reliability numbers.
If anything, people self-host because they own the servers, so upgrading becomes easier (though there are VPSes targeting a niche which people should look at, like storage VPSes, high-performance VPSes, high-memory VPSes, etc., which can sometimes provide servers dirt cheap for your specific use case).
The other reason, I feel, is the ownership aspect. I own this server; I can upgrade it without breaking the bank, and I can stack up my investment over time. One more reason: with complete ownership, you don't have someone else's T&Cs enforced on you so much. Want to provide VPSes to your friends or family, or to people on the internet? Set up a Proxmox or Incus server and do it.
Most VPS providers either outright ban reselling, or, if they allow it, might ban your whole account for something that someone else did. So some things are in jeopardy if you do this, simply because providers have to find automated ways of dealing with abuse at scale, and some cloud providers are more lenient than others in banning matters. (OVH is relaxed in this area, whereas Hetzner, for better or for worse, is strict in its enforcement.)
Self-hosting for me is important because I want to secure the data. I've got my files and photos on there, and I want the drive encrypted with my key, not just sitting on a drive I don't have any control over. Also because it plugs into my smart home devices, which requires being on the local network.
For something like a website I want on the public internet with perfect reliability, a VPS is a much better option.
I have a desktop I use, but if I had to start again, I'd build a low-power Raspberry Pi or N100-type system that can do the basic homelab tasks, powered by a mobile battery backup with solar (flow type with sub-10ms switching and good battery chemistry for long life). Plan for power outages from the get-go rather than assuming unlimited and cheap power.
For this reason I have a hybrid homelab, with most stuff hosted at home, but the critical things I need to keep running are on a VM in the cloud. Best of both worlds.
Honestly, I think there must be routers that can use unlimited 5G SIM data plans as a fallback network, or perhaps even as the primary one.
That would be cheaper than Starlink fwiw, and most connections are usually robust.
That being said, one can use Tailscale or Cloudflare Tunnels to expose the server even if it's behind NAT. You mention in your original comment that you might be against that for paranoia reasons, and that's completely fine, but there are ways to do it if you want, which I've covered in depth in my other comment here.
Some SOHO branch-office routers like Cisco ISR models can take cellular dongles and/or SIMs. Drivers for supported models are baked into ROM, and everything works through the CLI.
Man, I have this vague memory of being at a neighbour's house when we were all kids and the internet wasn't that widespread (I was really young), and they had this dongle they inserted a SIM card into for network access. That's why this idea has always persisted in my head in the first place.
I don't know the name of that kind of dongle though; it was similar to those SD-card-to-USB things, ykwim. I'd appreciate it if someone could help find this too, if possible.
But yeah, your point is fascinating as well. Y'know, another benefit of doing this is that, at least in my area, 5G (500-700Mbps) is really cheap ($10-15 per month with unlimited bandwidth), while on the wired side I get 10x less bandwidth (40-80Mbps), so much so that my brother and I genuinely considered this idea,
except that we thought that instead of buying a router like this, we'd use an old phone with the SIM in it and route access through that.
I think some people might be surprised to find one of these close to home. For example, I only discovered the Connections Museum by chance (they were manning a booth at another more well known museum).
> MonoGame is free to use on all platforms from the public repository, but the code for supporting console platforms is only accessible to authorized console developers.
> These platforms are provided as private code repositories that add integrations with the console vendor's APIs and platform-specific documentation.
> The MonoGame Foundation cannot directly give anyone access to the private console repositories without prior approval from the vendor due to NDA requirements set out by each vendor.
Blame here goes to Nintendo, Sony and Microsoft (though I'm not so sure about Microsoft)
This also applies to Godot, another open source game engine, which doesn't have any code for console support on its upstream repository.
But no one is paying MonoGame in this case? Maybe I'm just thick, but per the page, a developer pays MS/Sony/Nintendo to become authorized, and then asks permission to use MonoGame:
1. Apply to the vendor developer program (required for publishing).
2. Through the program, request access to the MonoGame console repositories.
It is perfectly understandable that people who really care about how their work was colour-graded would then suggest you turn off all the features that shit all over that work. Similarly for the other settings he mentions.
Don't get me wrong, I haven't seen the first season, so I won't watch this, but creators/artists do and should care about this stuff.
Of course, people can watch things in whatever dreaded settings they want, but lots of TVs default to bad settings, so awareness is good.
I love great graphics, but Nintendo carved a nice big niche out for themselves by recognizing that the constant drive for the best graphics is a bit of a rat race.
Nintendo has Mario, Zelda, Kirby, Donkey Kong, Star Fox, Pokémon, and a few other less internationally famous IP franchises. The core games and their spinoffs amount to more games than most children can reasonably be expected to play through childhood and early adolescence. That the machine then collects dust doesn't hurt Nintendo, because they already sold it.
Yes, Steam has a huge library (my 'want to play' list is over 100 titles at this point) full of games of all genres, qualities, and niches. But Nintendo has more than enough to keep doing what they have done for years, i.e. sit tight on their beloved IP, dole it out at varying levels of quality on strictly low-end hardware, and watch their earnings go up.
Though, to be fair, my kids steal my Steam Deck from me more often than I try to get the Switch from them. The family share features of the Switch leave a lot to be desired.
People rarely buy a platform for the platform, they buy the platform to do the thing they want to do. A game is just a genre of software.
It is far, far better to have tons of high quality software available for a platform, than to have an amazing platform, but a limited choice of software.
They have enough first party games which only release on their hardware that people are willing to buy a Switch for nintendo games, and another gaming device for everything else.
The sad part is that I would be willing to pay a substantial markup to play some of those first-party titles on my PC, but since my kids have a Switch, I just settle for using it. So even if I don't think I'd buy a console just for their games, I'm gonna end up buying it anyway, and Nintendo still wins.
I'm on a Switch and will not move, because of the "Game Key Card" bullshit where you have a card but still don't get the files you need to play the game.
However, Pokemon guarantees a certain amount of Switch 2 sales--Pokemon ZA sold about 6 million units.
However, the single digital service that hasn't killed my digital library at some point is Steam. Games that I bought many years ago are still fine. Sony, Microsoft, and Nintendo all killed digital games that I bought.
That having been said: I've transferred a lot of my purchasing to GoG. Steam doesn't get the benefit of the doubt anymore.
The elephant in the room: "will this game run on my Steam Machine?"
This is really the part a lot of people don't understand, and not a question you even have to ask when you buy/download a game for a console.
Some of the biggest games right now, like BF6, COD, Fortnite, League of Legends, and Chinese gacha games, won't run on this. That excludes a massive part of the market, many of whom would be the exact audience for a simpler, more console-like PC experience. There's also no guarantee that future AAA games will be compatible with this on day one (8GB of VRAM is very limiting already).
Yeah yeah, indies, but if people want to play X, then offering them Z is not an option.
This is also true for the Steam Deck, but it's a success anyway. COD, Fortnite, and LoL players can stay on Windows. I'm happy to play the newest indie games on my Linux machine.
Success is relative. The Steam Deck is only unsuccessful if you consider the goal of the device to be "outsell Nintendo". I would argue 4 million units is not merely a success, but a massive success.
The Steam Deck also had no marketing and is not sold in retail stores. It's also been a success in kicking off a whole product category of handheld PCs, of which most games will be bought on Steam.
You can likely install Windows on the Steam Machine if you so wish, and then it would actually be a fairly competent mini PC with great, silent cooling. However, I suppose most casual gamers aren't savvy enough to tinker and install their own OS.
My current setup (sorry, Airbus not Boeing) is more modest, and I'd probably enjoy upgrading it even more, but after a point it really does become a bottomless pit where the endgame is buying a decommissioned plane lol.
There's a convenience store in Sydney, which has a full blown A320 flight simulator in the back, behind all the shelves of snacks. The owner of the store is a migrant to Australia, who is (was?) waiting for his overseas aircraft maintenance engineer qualifications to be locally recognised. He's turned the simulator into part of his business.
Edit: Based on the street address, it might have moved out of the convenience store into its own premises since the article was written?
Further edit: Sadly, it seems as if the owner, Ahmed Abdelwahed, might have been deported. Hopefully the deportation never went ahead, or he made it back into Australia on another try.
No, not at all. I've just always loved video games, and especially simulators like flight sims, so one summer I decided to finally build a more immersive setup.
But, since the Nobel was established, China has been invaded by Germany, Japan, Russia, Britain (largely India), France, the United States, Italy, Austria-Hungary, and Japan again, and had a civil war which hasn't technically ended (plus the end of the Boxer Rebellion), a revolution, and the worst famine in human history. But probably the worst event for its Nobel chances was https://en.wikipedia.org/wiki/Cultural_Revolution. The civil wars also brought to power brutal dictatorships, including in the so-called Republic of China.
The US has been invaded zero times and had zero civil wars during that period, and in the US, the Cultural Revolution and dictatorship are just starting. Consequently many people who might have been Chinese, German, Japanese, Russian, etc., during the period in question were instead born in the US. And note that, on the page I linked above, 6 Nobel laureates from the US were actually born in China: Charles K. Kao, Daniel C. Tsui, Edmond H. Fischer, Yang, Tsung-Dao Lee, and Walter H. Brattain (!).
> But, since the Nobel was established, China has been invaded...
> The US has been invaded zero times...
The number of external invasions is not a strong indicator of the number of Nobel Prizes, if you compare all countries, beyond just China or the US.
And as you mentioned, the Cultural Revolution greatly reduces the chance of a Chinese Nobel, so internal events can play a large role. And Mao led to more deaths (not to mention the destruction of science and culture) than all the external invasions of the last century combined.
> The civil wars also brought to power brutal dictatorships...
The dictatorship arguably hasn't ended; it has just taken another, less brutal form. And to be precise, the CCP brought the civil wars and their consequences; it wasn't the civil wars that brought the dictatorship.
I don't think even the Cultural Revolution or anything else Mao did had much of an effect on Nobel prize-worthy research, simply because there wasn't much to disrupt to begin with. In terms of education, the biggest change was in secondary school enrollment, which more than doubled during the Cultural Revolution before dropping back down, which I assume represents people staying students for longer instead of graduating, rather than an expansion of access. University education remained a rarity for long after that, only surpassing 10% enrollment in 2002: https://ourworldindata.org/grapher/primary-secondary-enrollm...
I guess we'll see Chinese scientists winning Nobels at a rate commensurate with other big countries in 20–40 years or so.
> I don't think even the Cultural Revolution or anything else Mao did had much of an effect on Nobel prize-worthy research, simply because there wasn't much to disrupt to begin with.
Counterpoints:
- Intellectuals, academics, and teachers were persecuted, attacked, and killed by the youth (the Red Guards), in all schools and institutions in China.
- Search for "scholars killed during the cultural revolution", or "list of scholars abnormally died in China during the cultural revolution" (or, for a short list in Chinese, https://zh.wikipedia.org/zh-cn/中华人民共和国被迫害人士列表#科学技术人士). This includes the leader of Two Bombs, One Satellite (nuclear weapon, ICBM, artificial satellite), 赵九章. Besides, those who returned from overseas were considered traitors or spies, and within the Chinese Academy of Sciences (the top science institution) alone, 229 scholars died due to the Cultural Revolution [1]. This destroyed the environment needed to do great science. Imagine if Yang had gone back to China in the early 1950s.
> I guess we'll see Chinese scientists winning Nobels at a rate commensurate with other big countries in 20–40 years or so.
Such predictions, that Chinese scientists will win more science Nobels, have been made long ago. In 1998, "The Chinese-American Nobel Laureate Chen Ning Yang has also predicted that mainland scientists will win a prize within twenty years – even more than one, if the country's economic development continues at its current rate." [2]
But reality has shown otherwise, and it won't change until scientists and academics are respected in China. During COVID, politics overruled science, resulting in the Zero-COVID policy, which was brought down by widespread protests, not by the scientific evidence of its ineffectiveness.
Unless you are implying that you predict a regime change by that time...
Of course the number of scholars killed is large in absolute terms and relative to the size of the Chinese research ecosystem at the time, but it's also small relative to the number of researchers worldwide at the time and to the hundreds of thousands (millions?) of Chinese scientists researching all kinds of things now, which is the result of explosive growth primarily over the past few decades.
Chen-Ning Yang was technically not wrong with his 1998 prediction, since Tu Youyou got 1/3 of the 2015 Nobel in medicine, but it didn't really make sense for him to link this to continued development, since the delay between discovery and award means that most of the prizes from 1998–2018 were for work that was already done before he made his prediction.
Over the same time frame, tertiary school enrollment went from 6.3% to 53.4%, and my 20–40-year prediction is based on a guess of how long it will take for the work of all those freshly-minted scientists to enter the range of consideration for a Nobel.
> Chen-Ning Yang was technically not wrong with his 1998 prediction, since Tu Youyou got 1/3 of the 2015 Nobel in medicine, but it didn't really make sense for him to link this to continued development, since the delay between discovery and award means that most of the prizes from 1998–2018 were for work that was already done before he made his prediction.
Agreed. Though to nitpick, the part on “even more than one, if the country’s economic development continues at its current rate” is technically wrong, if we just count Chinese Nobel scientists developed in Mainland China (only Tu Youyou).
Chen-Ning Yang was bullish on Chinese science, but reality did not deliver.
Shing-Tung Yau is just as bullish on the future of Chinese mathematics, but even he admits that China is still decades behind in mathematical research, due to a systemic issue which "places too much emphasis on material rewards" and tends to encourage young researchers to work for titles instead of scientific advancements. [1]
[1]: https://archive.is/MRDlP "China has problems to solve before its mathematics research can rise above WWII levels, scholar says"
The parent poster is talking about the Younghusband expedition into Tibet of 1903 [1]. I don't think it's the Boxer Rebellion, in which British Indian troops had no direct involvement (AFAIK); besides, those events happened before the 1901 establishment of the Nobel Prize, which is the starting point of the period OP cites.
To see the Tibetan military expedition as an invasion of China means accepting the claim of the Qing dynasty and its successor states (ROC, PRC) to sovereignty, and not merely suzerainty, over Tibet. That claim was not recognized at the time by other countries, specifically Russia and Britain, nor by Tibet itself.
The refusal of the Tibetan government to accept the terms of treaties they were not directly party to (i.e. the ones Qing China signed) was the official reason the British stated for the invasion.
Either way, it is a deeply contentious topic, never legally settled by the 1907 agreement, with implications for both that era's and modern geopolitics. No one, then or now, is looking purely at the merits of the arguments.
The points end up in esoteric discussion of whether kowtowing and kneeling are the same thing, whether acknowledging supremacy is the same as sovereignty, or the differences between a vassal state, an autonomous region, a protectorate, and a suzerain.
The views of the countries/entities involved (or their successors), including the Tibetan government-in-exile, have also changed over the last 120 years.
My knowledge of history is at best that of a passing high school student; this kind of discussion requires a deep understanding of the relationships between states, and of Chinese culture and language during the Qing dynasty, i.e. professional expertise which I certainly don't have.
---
[1] My initial read was that they meant either Arunachal Pradesh (South Tibet to the Chinese), Aksai Chin, or the McMahon Line, etc., but they clarified that wasn't the case.
The Boxer Protocol wasn't signed until September 1901, and it involved permanently establishing a dozen foreign military bases inside China, so it didn't, from my point of view, end the invasion. But it's true that the actual fighting was almost completely over the previous year.
The research competition is basically a funding competition nowadays. In the 20th century, China had far less research funding for universities compared to the U.S. That's due to two facts:
1. China was poor back then.
2. China had barely any high-tech industries which could provide additional financial support to labs and cultivate talent.
Therefore, people with advanced educations who wanted to pursue a research career had less chance of getting a job in China (at least the mainland) and had to go to the U.S., the EU, or Japan to utilize their knowledge.
In fact, even now, the U.S. still offers the highest research funding to its universities. That's why there are so many Chinese students in U.S. schools.
Yang got out before Mao. China managed to birth and educate several world-class mathematicians and scientists in the short span between the beginning of Westernized education and Mao's takeover... and then it stopped for several decades. The lucky ones managed to get out.
Strange to think that revolutions, unrest, the Sino-Japanese war, and the civil war all provided better conditions for fostering top talent than Mao's China did.
India has a similar number of laureates, and it never had the kind of social upheaval or authoritarian regime that China or the Soviet Union had.
I think it is a bit more nuanced than just Mao. Pre-1935 you could do groundbreaking research in almost any field with limited to no funding at all. Since the war, you need increasingly large budgets, which Western universities enjoy with full government support, and it was not possible for India or China or even the Soviet Union to keep up.
--
The cultural changes you allude to certainly were a medium-term negative factor, but the pre-1950 setup was hardly sustainable or efficient. Pre-Mao China, and similarly British India (even until recently), was not a meritocracy: there was a privileged elite who had all the opportunity, and a few shined if they were also talented.
Today China is one of the most meritocratic economies after all. Despite all the authoritarian flaws, we are seeing only positive and rapid growth in foundational scientific research there, in contrast with the rising anti-science sentiment in so many parts of the Western and Western-influenced world.
The socio-cultural reset was important and necessary for both China and India to progress. The methods of the Mao era are questionable both for their cruelty and for how inefficient and ineffective they were; it was just bad all around. However, the need for the reset came from a valid place, I think.
---
There is a whole dimension of bias which particularly disadvantages Chinese research output today. Don't get me wrong, I am not saying there is conscious bias against Chinese researchers. The bias exists because, despite its esteem, the Nobel Prize is not a global one.
The committees sit in Scandinavian countries, working closely with the Norwegian government. The members are predominantly affiliated with Western universities, are fluent in English or other European languages, and read Nature/Science-type Western journals.
This has always put Soviet researchers, and now Chinese and (to a lesser degree) Indian ones, at a disadvantage compared to their Western peers.
The committees are not equipped to judge the research output of the whole world. Until recently this was not a problem, because Western research post-WWII was the majority of the world's output, but that is increasingly untrue in a multi-polar world.
> the methods of the Mao era are questionable both for their cruelty and also for how efficient and effective they were
Also for killing tens of millions of people, which is not only the murder of each person, but also means those millions of people, and then their families, never benefit.
Absolutely, I am in no way saying Mao era methods were justified, warranted or even effective.
They were misguided, ineffective, and directly or indirectly killed people in the millions.
I am just pointing out that the atrocities of the era don't justify seeing pre-1960s or pre-1950s China through rose-tinted glasses as a better era. It wasn't, unless you were in the elite.
It would be no different from seeing the 1970s or any earlier generation in U.S. history as a better era. Only a very small in-group perhaps had it good. Everyone else, be they Black people, women, indigenous people, various immigrants, or the religiously, neuro-, or sexually diverse, has only seen net improvements in the last 300 years.
They were awful and achieved almost nothing but ruin, so by definition they were unnecessary.
But are you saying reform and change were unnecessary? The people of China were suffering immensely; the country had been in a state of domestic violent conflict, on and off, since before 1911 (as of 1949). The Communist Party became more corrupt.
Mao's policies and politics made all that much worse, but that doesn't mean nothing needed to be done.
China was already developing economically and technologically -- especially in coastal areas and in Manchuria (there was a large migration of Chinese to the area after it came under Japanese control).
That development would have continued.
I understand the anger and the desperation that made the Communist takeover possible but doing nothing at all and keeping all the elites in charge (instead of replacing them with new ones) would have been better.
> China was already developing economically and technologically
That's an odd version of history. China just went through WWII, including the awful Japanese invasion, which interrupted a massive civil war that restarted afterward, and which followed decades without a real national government.
> there was a large migration of Chinese to the area after it came under Japanese control
Japanese control didn't work out well for Chinese people, to say the least.
> keeping all the elites in charge
The elites had led China to disaster for a century, 'the century of humiliation' as it's called (though that framing blames outside forces, which do deserve some blame).
> replacing them with new ones
Here we agree.
> would have been better
Certainly there is no source that can more than guess at that.
The better option would have been true democratic reform. It has worked superbly well in parts of China - Taiwan and Hong Kong. It was starting to work in 1989, and leaning in that direction before Xi.
Another point about Soviet scientists: it was very often a career-ending move to accept a Nobel Prize unless you were a truly untouchable cult of personality and/or a direct friend of those in power. See Andrey Sakharov, who first invented the Soviet hydrogen bomb and later dedicated himself to non-proliferation, which earned him a Nobel Peace Prize. He was, however, barred from traveling to Oslo to accept it in 1975, having already been blacklisted from classified work since 1968.
I wonder to what extent that led to the curbing of consideration of those behind the Iron Curtain.
Peace prizes are different from science prizes. The Soviet Union had no problem with its scientists getting science prizes. It did sometimes have problems with letting them leave the country to actually receive them, of course.
While your disambiguation is valid, they very much wanted to minimize the potential fallout from individuals staying in, say, a Norwegian hotel and sampling the local culture, only to return and speak fondly of said trip "beyond the curtain". Usually this was outweighed by the national prestige (and subsequent propaganda opportunity) of having a Soviet Nobel recipient, but the KGB had an extremely heavy hand in deciding who got to go, regardless of scientific breakthrough.
~2/3 of US Nobel laureates were not born in the US. This tells you that things are a bit complex to analyze. You can also take second-generation ones into account.
If you're familiar with the history of China since the Nobel prize started in 1901 it's not surprising. Five of those eight did their work outside China too.
It's also quite interesting to compare to the Soviet Union, which managed around 30 Nobel laureates in spite of also going through a communist revolution and some genocides like China did.
Russia / the USSR had a recent intellectual, scientific, and technological history in ways that China largely did not.
One data point: Sir John Barrow, Second Secretary to the Admiralty from 1804-1845, was part of a British envoy to China in the 1790s. Attempts to impress the Emperor with British science and technology left far less of an impression than was hoped, with the Emperor dismissing the demonstration. A brief account of this is in the biography Barrow's Boys by Fergus Fleming (1998).
China does have a long history of scientific and technological development, though by the 20th century this was all but forgotten / overlooked by the Chinese themselves, and it fell to an outsider, Joseph Needham (TK-chinese), to reacquaint them with this past in Science and Civilisation in China, a 30-plus-volume work begun in the 1950s and still in production.
Some people say that labeling yourself apolitical is 1) a political statement, and 2) itself a privilege which puts you in a certain socio-political position.
What's up with people being afraid of being political? It's your duty as a citizen to be political. In ancient Greece, apolitical people were called "idiots"; that's literally the origin of the word.
Very interesting, but according to this well-referenced paragraph on Wikipedia, the use of that word was a bit more complex:
> It is certainly true that the Greeks valued civic participation and criticized non-participation. Thucydides quotes Pericles' Funeral Oration as saying: "[we] regard... him who takes no part in these [public] duties not as unambitious but as useless" ... However, neither he nor any other ancient author uses the word "idiot" to describe non-participants, or in a derogatory sense; its most common use was simply a private citizen or amateur as opposed to a government official, professional, or expert. The derogatory sense came centuries later, and was unrelated to the political meaning.
I think you miss the point. On Nostr, because the technology is apolitical, you can be anything you want, political or not. A draw for some is exactly that: if they've been de-platformed elsewhere for political views, on Nostr that can't happen.
This is authoritarian regimes' bread and butter. Create barriers that only the right people can surmount, ban platforms where prominent opponents publish their media, block sources of income, or just outright ban websites, newspapers, and TG channels.
This is exactly what struck my eye. As someone from RU, I came to strongly identify "apolitical" with silent status quo supporters. Under an authoritarian regime, that means prosecuting people and lack of freedom of speech. Nostr? They'll just declare hosting a relay illegal and criminalize use, like they already did for other services.
The key is that your Nostr app connects to hundreds of relays (and can work over Tor too), so shutting down any single relay doesn't have a notable effect.
If everything is political, then nothing is political.
I believe the author(s) just don't want to participate in non-technical discussions around their creation.
I would interpret it as anyone is welcome. The only barrier to entry is an internet connection, and even that is needed for just part of your experience.
I'd guess its context is the censorship seen in the last decade on most social media platforms.
"Welcome" is in the eye of the beholder. Everyone can join, but I am sure as it is also with our real worldwide community, not everyone is welcome or is accepted equally.
But the point is, nostr does not intent to judge that. It happens automatically while communicating. Nostr is just the means to communicate.
Software is inherently apolitical. Claiming otherwise would be like saying that a hammer or drill are political, which is absurd. Political views are placed on software depending on the author's beliefs, but it's perfectly valid to release software in the public domain without any restrictions or expectations.
>Software is inherently apolitical. Claiming otherwise would be like saying that a hammer or drill are political, which is absurd.
No one sells "apolitical hammers" or "apolitical drills." If one has to specify that software is apolitical, it isn't.
No software exists in a vacuum, even the license terms are a political statement. Certainly nostr was created as an expression of fiatjaf's specific political ideals, and those ideals will tend to attract certain political demographics, and repel others.
> If one has to specify that software is apolitical, it isn't.
That conclusion doesn't track. In a time when a lot of software has become politically charged, it's perfectly reasonable to specify when that isn't the case.
> No software exists in a vacuum, even the license terms are a political statement.
Software doesn't exist in a vacuum, but not all licenses are the same. There are many licenses that don't place any restrictions on how the software is used. There are others whose authors relinquish all control or ownership. You may see these as political statements, but that would be corrupting their meaning. It's like claiming that atheism is a religion...
Whatever beliefs the authors have does not taint the software at all, unless the software itself is political or they make some political commentary. Communities created around software are political, as societies always are, but the software itself is inert.
The bottom line is that the Nostr project welcomes anyone, regardless of their politics. That is worth mentioning. If it attracts a certain type of political ideology, that has to do with the people who find the software appealing, not with the software itself.
I think the point is that nostr supports left wing, right wing, totalitarians, tankies, communists, lawyers, nazis, anarchists, javascript developers, liberals... everyone, without regards to their politics.