Hacker News | m132's comments

Almost ideal.

Consider pestering the user to log in and install the mobile app to match the experience of Instagram, Facebook, TikTok, and the like. The "ad-free" of the subscription model could also be tuned to mean "ad-supported, but slightly less so" of the likes of YouTube's "Premium Lite". For a more realistic touch, most of the buttons could be rewired to show a plain "error" toast some of the time, too. And let's not forget about dark patterns all over the GDPR pop-up!


Or (and I'm saying this as someone in the EU): It should say that this feature is not available in the EU.

Better yet, display a pop-up with a generic Varnish 403 page in an iframe saying your IP was blocked, local American media style.

> The trade-off versus gVisor is that microVMs have higher per-instance overhead but stronger, hardware-enforced isolation.

Having worked on kernel and hypervisor code, I really don't see much of a difference in terms of isolation. Could you elaborate on this?


Yeah, it's hard to strike the right balance of nuance around these, and you're spot on. What I meant to get at was the specific difference in default modes: gVisor's systrap intercepts syscalls via seccomp traps and handles them entirely in a user-space Go kernel, so there's no hardware isolation boundary in the memory/execution sense. A microVM puts the guest in a VT-x/EPT-isolated address space, which is a qualitative difference in what enforces the boundary (perhaps?)

Whereas yeah, you can run gVisor in KVM mode, where it does use hardware virtualization, and at that point the isolation boundary is much closer to a microVM's. I believe the real difference then becomes more about what's on either side of that boundary: gVisor gives you a memory-safe Go kernel making ~70 host syscalls, while a microVM gives you a full guest Linux kernel behind a minimal VMM. So at least in my mind it comes down to different trust chains, not necessarily one strictly stronger than the other.


I see this "hardware isolation" benefit of virtual machines brought up a lot, but if you look a little deeper into it, putting that label exclusively on VMs is very much unfair.

Just like containers, VMs are very loosely defined and, under the hood, composed of mechanisms that can each be used on their own (paging, trapping, and the IOMMU versus individual cgroups and namespaces). It's those mechanisms that provide the actual security benefits.

And most of them are used outside of VMs, to isolate processes on a bare kernel. The system call/software interrupt trapping and "regular" virtual memory of gVisor (or even a bare Linux kernel) are just as much of a "hardware boundary" as the hypercalls and SLAT virtual memory are in the case of VMs, just without the hacks needed to make the isolated side believe it's in control of real hardware. One traps into Sentry, the other traps into QEMU, but ultimately, both are user-space processes running on the host kernel. And they themselves are isolated, using the very same primitives, by the host kernel.

As you clarified here, the real difference lies in what's on the other side of these boundaries. gVisor will probably have some more overhead, at least in systrap mode, as every trapped call has to go through the host kernel's dispatcher before landing in Sentry. QEMU/KVM has the benefit of letting the guest's user space call the guest kernel directly, and typically only the guest kernel then exits to QEMU. The attack surface, too, differs a lot in both cases: gVisor is a niche Google project, KVM is a business-critical component of many public cloud providers.

It may sound like I'm nitpicking, but I believe it's important to understand this to make an informed decision and avoid the mistake of stacking up useless layers, a mistake that plagues today's software engineering.

Thanks for your reply and post by the way! I was looking for something like gVisor.


Indeed, the world would be a much nicer place if only firewalls and Unix permissions existed...

It's always hit or miss with Motorola, but this should up your chances:

https://github.com/zenfyrdev/bootloader-unlock-wall-of-shame...


Getting the same thing, "Failed to verify your browser. Code 11". Some noise about WebGL in the browser console, getExtension() invoked on a null reference. LibreWolf on Linux + resist fingerprinting.

Maybe opting for a better-written WAF could boost the reach?


> I'm experimenting with implementing such a sandbox that works cross-system (so no kernel-level namespace primitives) and the amount necessary for late-bound policy injection, if you want user comfort, on top of policy design and synthetic environment presented to the program is hair-pulling.

Curious, if this is cross-platform, is your design based on overriding the libc procedures, or otherwise injecting libraries into the process?

Also obligatory https://xkcd.com/2044/


I'm not interposing libc or injecting libraries. Guests run as WASM modules, so the execution substrate is constrained. The host mediates and logs effects. Changes only propagate via an explicit, policy-validated promotion step.

> not much to do with "App Sandboxes" which is a distinct macOS feature

The App Sandbox is literally Seatbelt + Cocoa "containers". secinitd translates App Sandbox entitlements into a Seatbelt profile, which is then transferred back to your process via XPC and applied by a libsystem_secinit initializer early in process initialization, shortly before main(). This is why App Sandboxed programs will crash with `forbidden-sandbox-reinit` in libsystem_secinit if you run them under sandbox-exec. macOS does no OS-level virtualization.


It is a little more direct than that even. The application's entitlements are passed into the interpretation of the sandbox profile. It is the sandbox profile itself that determines which policies should be applied in the resulting compiled sandbox policy based on entitlements and other factors.

An example from /System/Library/Sandbox/Profiles/application.sb, the profile that is used for App Sandboxed applications, on my system:

  (when (entitlement "com.apple.security.files.downloads.read-only")
        (read-only-and-issue-extensions (home-subpath "/Downloads")))
  (when (entitlement "com.apple.security.files.downloads.read-write")
        (read-write-and-issue-extensions (home-subpath "/Downloads")))
  (when (or (entitlement "com.apple.security.files.downloads.read-only")
            (entitlement "com.apple.security.files.downloads.read-write"))
        (allow process-exec (home-subpath "/Downloads")))

Most of this mythical "taste", at least as hinted by the article, can be acquired rather easily—by looking into what's already out there before jumping to creating.

Is there nothing? Great, go ahead and fill the void.

Is there so much that it becomes overwhelming to even look? If so, ask yourself: does your thing have any significant differentiators? Are you willing to maintain it? Do you want the people who come after you to see one more option in the sea, or an existing project made better thanks to your changes?

It's about respecting one another's time. If I'm looking for a to-do app, I'm looking for a good one, at least in the ways that matter to me. Not for thousands of applications with the same exact issues. And so are you. Nobody needs a million options that suck. We all want a handful, or ideally one, that does the job.


Instead of using third party apps for a todo list, I recently wrote myself a utility - a background process to reschedule iOS Reminders I don't get to, make sure every reminder I create actually gets a scheduled date/time, and to deconflict reminders from calendar entries if I get an overlap.

It took less than 90 minutes using claude code, I have a testflight I've shared with friends for feedback, and I'll probably put it out there for a dollar once I add a couple more settings.

The built in UIs, syncing, and integrations are really good. It took me a while to realize I didn't need another todo list app, just to tweak the built-ins.


It's a fairly radical idea that AI can (and should!) be doing things invisibly with existing platforms and avoid the whole nightmare of UI development.

> does your thing have any significant differentiators?

When I see a Show HN around a very popular product concept (like a habit tracker), the first thing I search for is a FAQ or comparison table against other similar apps.


> Most of this mythical "taste", at least as hinted by the article, can be acquired rather easily—by looking into what's already out there before jumping to creating.

Yes, you should do discovery, but that alone is not sufficient to develop taste. Being an also-ran is low taste even if you religiously meet market expectations by following a pattern. Just like in fashion, you need to understand the rules to know when it's okay to break them so that you appear fashion-forward; that, too, is a form of taste.


Almost like the rules for taste are made up on the fly…

Of course they are; taste is a social conversation to align, for a window of time, on a set of guidelines. Taste is a social construct, and being a social construct (or "made up") does not make it any less real or valuable.

It’s a social construct yeah, but constructed by what?

IMO most of the unpleasant truth about taste is that it is really a stalking horse for money and distinction (cf the book of the same title).


Taste isn’t a social construct, it’s a function of how your brain is structured/wired. How it’s applied is a social construct.

I disagree; taste is a very real thing, and there are multiple levels to it, from shallow and easily changed to deep and relatively constant.

Shallow taste is stuff like popular trends that come and go, and hating the taste of beer until you’ve had it a few times (not saying everyone has to like beer, that’s not the point).

Deeper taste is more like your deeply held cognitive biases. Like the current of a river or the valleys cut into a mountain, it's the shape of your cognition that determines how information flows through your brain.

Deeper taste is heavily connected to you and your identity. It’s part of who you are. I think most people would agree that parts of themselves change very slowly, and some not at all.

I know there are parts of me that feel the same as when I was a child. To deny the existence of taste is to deny the existence of a “you” that is different from others.


The problem is that people are often delusional, and AI feeds these delusions. You have to switch to objective measures to gain skill and taste. This is true for art: ask "where is the focal point?" instead of "is this good or necessary?"

There are long lists of successful programs that market themselves as little more than "like program X, but faster/distributed/higher resolution/bigger map".


I used to find this and the whole idea of "Web3" ridiculous, but with the recent saturation of low-quality slop and disinformation, perhaps it's time to reconsider.

I enjoy reading thorough publications written by actual humans who have something to say. Part of why I'm here. And I'd take micropayments over subscriptions anytime.

There's just one catch nobody seems to be eager to talk about. While I'm willing to pay that 1¢, if it's 1¢ + any identifying information, I'm out.


While I also share the sentiment with the author, I can't help but notice that the article is picturing things as more dramatic than they really are.

What's been happening to software development from the 2010s onwards is closer to what happened to manual craftsmanship as the industrial revolution took off, than to the effects high-level programming languages and abstractions had on the field. Many attempts have been made to turn software development teams into assembly lines; between ultra high-level frameworks and AI, we've had all those "new-new" formal methodologies and extreme offshoring, for example. Another factor that contributed to the status quo is the fact that programming has become well-paid, which inevitably attracted people who are in it for the money and made it an attractive target for "cost optimization".

Not all hope is lost, however. There are two significant differences that set programming apart from traditional crafts: performance and security. There's no universal recipe for either—LLMs and large bloated orgs suck equally at both. Smaller players can still largely outperform behemoths if they have the right idea, similar to what WhatsApp did to Microsoft's Skype or to what Anthropic is now doing to OpenAI, Google and Microsoft. And as for security, just look at Apple's and Google's bug bounties.

At its core, software development is still a meritocracy. This hasn't changed despite the trillions of dollars that have been poured into making it a quantifiable problem. Organizations that refuse to accept this have their projects fail. As for the influx of money-oriented programmers, it might have skewed the proportions, but it definitely did not drive out all of the passionate ones. Keep your head up.

Also, I must say I like the irony of this post making it to the front page of a website that's usually full of headlines of the likes of "How I used Claude to code a revolutionary JavaScript framework running 100% on Amazon Lambda" :)

