irae's comments | Hacker News

I've used Linux as a daily driver for 6 months and I am now back to my M1 Max for the past month.

I didn't find any reply mentioning the ease of use, the benefits, and the handy things the Mac does that Linux won't: Spotlight, the Photos app with all the face recognition and general image indexing, contact sync, etc. It takes ages to set those up on Linux, while on a Mac everything just works with an Apple account. So I wonder, if Linux had to do all this background work, whether it would still run as smoothly as Macs do these days.

For context: I was running Linux for 6 months for the first time in 10 years (during which I was daily driving Macs). My M1 Max still beats my full-tower gaming PC, which is what I was running Linux on. I've used Windows and Linux before, and Windows for gaming too. My Linux setup was very snappy, without any corporate stuff. But my office was getting warm because of the PC. My M1 barely turns on the fans, even with large DB migrations and other heavy operations during software development.


I've never had this issue. M1 Max. But I also disable some of the Spotlight indexes. Cmd+Space has no files for me; when I know I am searching for a file I use Finder search instead.

Awareness is more important than government regulation. Assuming you are a parent: we as parents should be more concerned and help our kids grow up with a healthy relationship with aggressive marketing and addictive features, by actively avoiding them, setting up time restrictions, etc. No one else can help kids besides their parents. Everything else is too slow to be effective, and has mild efficacy at best.

Government will do a terrible job at it. Society has lost the capability of creating good and simple laws that can be disputed in courts based on the law's intent. Instead, laws nowadays are full of details that are hard to understand and that attack the symptom rather than the cause.

For instance, a simple law like: "Companies should take measures, even if it lowers revenue and growth, to reduce addictive behavior. They should do so more emphatically for underage users, and even more so for users under 13." But no. Instead, they will write 40 pages of what companies should implement in their software, and then have those 40 pages quickly become outdated, partially impossible to implement, and hell for developers who try to do the right thing and comply. Total crap from standards and regulation bodies that helps nothing and slows down all innovation.

The solution will only come from social pressure, movements to delete the apps, and parents actually educating their children to avoid addictive features. It will take time. But government will solve nothing.


Dumping Gmail is a long process. I've had people sending to my old address for 5 years. Better to start sooner rather than later.

I believe the whole point is that some people inside acknowledged the issue and made leadership aware of it, yet YouTube still pushed Shorts aggressively. The documents are proof of awareness, so they can't pretend they were unaware of the issues.

Instagram has it as a tool, not as a default. You need to actively go in, find it, and set up the time frames for it to alert you during your rest period.

I hope they end up removing HDR from videos with HDR text. Recording video in sunlight etc. is OK; it can be sort of "normalized brightness" or something. But HDR text on top is always terrible.


I believe you are speculating on digital mastering and not codec conversion.

From the creator's PoV, their intent and quality are defined in post-production and mastering, color grading, and other stuff I am not an expert on. But I know a bit more about music mastering, and you might be thinking of a workflow similar to Apple's, which allows creators to master for their codec with the "Mastered for iTunes" flow: creators opt in to an extra step to increase the quality of the encoding, and can hear in their studio the final quality after Apple encodes and DRMs the content on their servers.

In video I would assume that it is much more complicated, since the video is encoded at many quality levels to allow for slower connections and buffering without interruptions. So I assume the best strategy is the one you mentioned yourself, where AV1 detects, per scene or keyframe interval, the grain level/type/characteristics and encodes so as to be accurate to the source material in that scene.

In other words: The artist/director preference for grain is already per scene and expressed in the high bitrate/low-compression format they provide to Netflix and competitors. I find it unlikely that any encoder flags would specifically benefit the encoding workflow in the way you suggested it might.
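To make the per-scene detection idea concrete, here is a minimal Python sketch (my own illustration, not libaom's or Netflix's actual code) of estimating one grain strength per scene by measuring the high-frequency residual in flat blocks, where the residual is mostly noise rather than real texture:

    import numpy as np

    def estimate_grain_sigma(frame, block=16):
        # frame: 2D luma array (floats 0-255). Flat blocks are the ones where
        # residual variance is mostly grain rather than real detail.
        sigmas = []
        h, w = frame.shape
        for y in range(0, h - block, block):
            for x in range(0, w - block, block):
                patch = frame[y:y + block, x:x + block]
                if patch.max() - patch.min() < 20:   # "flat enough" heuristic
                    sigmas.append(np.std(patch - patch.mean()))
        return float(np.median(sigmas)) if sigmas else 0.0

    def per_scene_grain(scenes):
        # scenes: {scene_id: [frames]} -> one grain strength per scene
        return {sid: float(np.median([estimate_grain_sigma(f) for f in frames]))
                for sid, frames in scenes.items()}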


"I believe you are speculating on digital mastering and not codec conversion."

That's good, since that's what I said.

"The artist/director preference for grain is already per scene and expressed in the high bitrate/low-compression format they provide to Netflix and competitors. I find it unlikely that any encoder flags would specifically benefit the encoding workflow in the way you suggested it might."

I'm not sure you absorbed the process described in the article. Netflix is analyzing the "preference for grain" as expressed by the grain detected in the footage, and then they're preparing a "grain track," as a stream of metadata that controls a grain "generator" upon delivery to the viewer. So I don't know why you think this pipeline wouldn't benefit from having the creator provide perfectly accurate grain metadata to the delivery network along with already-clean footage up front; this would eliminate the steps of analyzing the footage and (potentially lossily) removing fake grain... only to re-add an approximation of it later.

All I'm proposing is a mastering tool that lets the DIRECTOR (not an automated process) do the "grain analysis" deliberately and provide the result to the distributor.
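A rough Python sketch of what that could look like; the names and the parameter set are hypothetical, they just mirror the idea of shipping clean footage plus a director-approved grain track that the player re-synthesizes at display time:

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class GrainParams:
        scene: int
        sigma: float   # grain strength the director signed off on for this scene

    def master_export(clean_frames, grain_track):
        # Director side: no automated analysis or lossy denoising needed; the
        # footage is already clean and the grain is an explicit creative choice.
        return {"video": clean_frames, "grain_track": grain_track}

    def synthesize_grain(frame, params, rng=np.random.default_rng(0)):
        # Player side: regenerate an approximation of the grain on delivery,
        # in the spirit of AV1's film grain synthesis.
        noise = rng.normal(0.0, params.sigma, frame.shape)
        return np.clip(frame + noise, 0, 255)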


RIP Electron apps and PWAs. Need to go native, as Chromium-based stuff is so memory hungry. PWAs on Safari use way less memory, but PWA support in Safari is not great.


I, for one, would not miss a single one of the electron apps I'm forced to use.

Every single one of them makes me feel like the vendor is telling me "we can't be bothered employing half-decent developers, or giving the developers we have enough time and resources to write decent software, so we're just going to use cheap and inexperienced web developers and burn another gigabyte or two of your memory to run what could easily be a sub-100MB native app."

At least now I'll have concrete numbers to tell my boss: "Sure, we can continue to use Slack/VSCode/Teams/Figma/Postman - but each of those is going to require an additional GB or two of memory on every staff member's computer. At today's pricing that's over $500 in RAM per laptop, and the laptops are all on an 18-24 month replacement cycle. So that's maybe a million dollars a year in hardware budget to run those 5 applications across the whole team. We'll need to ensure we have sign-off on that expenditure before we renew our subscriptions for those apps."
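For what it's worth, the back-of-envelope math looks roughly like this; the per-app overhead, RAM price, and headcount are assumptions, not measurements:

    apps = ["Slack", "VSCode", "Teams", "Figma", "Postman"]
    gb_per_app = 1.5        # assumed extra RAM per Electron app
    usd_per_gb = 70.0       # assumed laptop RAM pricing
    staff = 3000            # assumed headcount
    cycle_years = 1.75      # 18-24 month replacement cycle

    extra_gb = len(apps) * gb_per_app             # 7.5 GB per laptop
    per_laptop = extra_gb * usd_per_gb            # ~$525 per laptop
    per_year = per_laptop * staff / cycle_years   # ~$900k per year
    print(f"{extra_gb:.1f} GB -> ${per_laptop:.0f}/laptop -> ${per_year:,.0f}/yr")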


Your laptops have an 18-24 month replacement cycle? What are you guys doing to the poor things?


Apps can't be 100MB on modern displays, because there are literally too many pixels involved.

Not that I know what's going on in an Electron app heap (because there are no inspection tools AFAIK), but I'm guessing much of it is compiled code and the rest is images and text-layout related.


> Apps can't be 100MB on modern displays, because there are literally too many pixels involved.

What? Are you talking about assets? You'd need a considerable amount of very high-res, uncompressed or low-compressed assets to use up 100MB. Not to mention all the software that uses vector icons, which take up a near-zero amount of space in comparison to raster images.

Electron apps always take up a massive amount of space because every separate install is a fully self-contained version of Chromium. No matter how lightweight your app is, Electron will always force a pretty large space overhead.


No, I'm talking about window buffers. This is about memory not disk space.


I was talking about RAM - in that running Chromium on its own already has a preset RAM penalty due to how complicated it must be.

But window buffers are usually in VRAM, not regular RAM, right? And I assume their size would be relatively fixed for a given system, depending on your resolution (though I don't know precisely how they work). I would think that the total memory taken up by window buffers would be relatively constant no matter what you have open - everything else is overhead that any given program incurred, which is what we're concerned about.


Well, you see, there's a popular brand of computers that don't have separate VRAM and have twice the display resolution of everyone else.

Luckily, windows aren't always fullscreen and so the memory usage is somewhat up to the user. Unluckily, you often need redundant buffers for parts of the UI tree, even if they're offscreen, eg because of blending or because we want scrolling to work without hitches.
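Back-of-envelope for a single window buffer, assuming a 5K Retina panel at 4 bytes per pixel (the numbers are illustrative, not a measurement of any particular app):

    width, height, bytes_per_px = 5120, 2880, 4
    buffer_mib = width * height * bytes_per_px / 2**20
    print(f"{buffer_mib:.0f} MiB per full-screen buffer")   # ~56 MiB
    # Double buffering plus a few offscreen layer buffers (scroll views,
    # blended panels) multiplies that, so a full-screen app can pass 100 MB
    # in buffers alone on unified memory, before any heap is counted.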


Are you 100% sure every single window needs to have 8k resolution?


The size of the window is up to the user.

