
Millions of lines of code in a decade?! One million would be 400 loc every working day. And he said millions. And that's only counting code he open sourced.

I don't believe it is possible to "carefully" write thousands of lines of code every single day.


> introduce a third party like Tailscale.

Well just use headscale and you'll have control over everything.


That just moves the problem, since headscale will require a server you manage with an open port.

Sure, tailscale is nice, but from an open-port-on-the-net perspective it's probably a bit below just opening wireguard.


Serialization/deserialization will always be needed unless you get all your programs working with the same ABI. It's just that in nushell's case you aren't serializing to human-readable text.

Not a "normal" option though. They plan to hide it away inside `gsettings` so only power user who already knows about middle-click paste will be able to find it and enable it. This completely destroys discoverability.

And a couple years later it is removed as only a minority used it.

That's not the new Atlas they are announcing.

It's interesting that so far we have not seen the new Atlas actually functioning. In the past, Boston Dynamics announcements have always been done with real hardware, but this time it's only models and CGI.

What happened?


Yeah they say "we couldn't pry the model out of our engineer's hands" which is obviously nonsense. What they mean is that it doesn't work yet.

This smells a lot like a hacker thinking that because they are exceptional in one field (cybersecurity), they are therefore exceptional in all fields. The result is that the information presented in this article is very surface-level, and quite biased.

As a much better alternative, I would recommend "Debt" by David Graeber, which is amazing.

Is your comment perhaps in reference to the comment ‘ the assumptions and estimates that go into it, I recommend Financial Intelligence by Joe Knight and Karen Berman’ and not the parent comment you’ve replied to?

lol, good guess. I must have clicked wrong - I thought I was replying to the comment "this is bad, don't read it"

Graeber is controversial. Archeologists hate how he argues by ad hominem and does not appear to understand the works he cites to make his argument.

I can't speak to his work on finance as a whole. Regarding deep time, his claims about pre-literate society from archeology are not widely supported; they use thin evidence to argue badly.

His anarcho-socialism isn't the concern. What concerns me is his lack of historicity, and his inability to bring his peers with him on radical ideas.

He's dead, he can't defend himself. So there's that.


Just in case anyone is put off by this comment, I want to second the recommendation of Debt: The First 5000 Years. It's excellent, and it has a free, chapter-by-chapter audiobook on YouTube.

As for Graeber being controversial: yes, though I vaguely recall "The Dawn of Everything" being (moreso) the trove of interesting historical anthropological hypotheses, rather than "Debt"?

Anyway, it's been a while, but my main point is that I wouldn't let Graeber's controversial-ness stop anyone from reading Debt. If anything, going in with that information makes you think harder about the topics he covers.


I totally agree. He writes well. I think The Dawn of Everything is a good read, and I will read Debt, but without wanting to give in totally to 'appeal to authority', I think you have to recognise Graeber didn't win friends.

I think you'll enjoy it. My impression is he'd have won more friends with Debt than he did with the Dawn of Everything. Perhaps not literally, but I do remember thinking Debt made stronger cases on average, and was more philosophical than creatively-antagonistic in its weaker evidence parts.

Fun fact, David Graeber had an HN account: https://news.ycombinator.com/user?id=davidgraeber

I was interested to read that. Thanks. I think aspects of his personality came out in that, but also the horrible truth that "public intellectuals" become targets for many people. I have no doubt that if some of the names I have catcalled on HN, like Malcolm Gladwell or Ray Kurzweil, were on HN, they'd be coming in for some flak from people like me (with lesser chops, but a lot of opinions about them as public intellectuals).

I saw this in the flesh at a book festival. Dale Spender, a notable feminist author who moved sideways into IT (she was involved with online learning systems), did a book talk, and the majority of questions from the audience were "gotcha" attempts about her philosophy and feminism, with nothing to do with the subject at hand.


As a fan of Graeber, I'm interested in reading counter-arguments to his writing. Could you point out where I can read up more about what archaeologists think of his writing?


The only thing it takes to be exceptional in most fields is time and effort. There is no secret sauce. There is not something innate that "finance people" have that "computer people" don't, other than a willingness to trudge through boring finance-related crap, and vice versa.

This is all spawned from insecurity that your prestigious degree or whatever can be replicated through independent learning


biased hackers are the best kind of hackers :)

Being exceptional in cybersecurity is a pretty good indicator that someone will be successful in other fields. A good cybersecurity person will understand that cybersecurity is a mix of technical mastery and the art of understanding human behaviour.

> Being exceptional in cybersecurity is a pretty good indicator that someone will be successful in other fields.

I am not so certain about this. In particular being exceptional in cybersecurity does not make you good at playing political games or having the traits that a lot of bosses want from employees (I will attempt to avoid starting a discussion whether I consider such traits to be good or bad).


Exceptional includes soft skills too.

1. You re-defined the scope of what it means to be exceptional in cybersecurity.

2. One example of a trait that many bosses desire, but that is not a social skill per se, is docility.


If a doctor excels at health science but is so terrible at client communication that their patients do not understand or follow medical advice and instructions, are they a good doctor?

I would say: medicine consists of many sub-disciplines and there are many very different positions for doctors in the medical system, including ones where you have a lot less to discuss with patients.

So, yes, such a doctor who excels at health science is very likely a really good doctor, but this does not imply that this doctor will excel in every position in the medical system for which doctors are hired.

This is very consistent with my claim that I don't think that "being exceptional in cybersecurity is a pretty good indicator that someone will be successful in other fields".


Let me guess - you work in cybersecurity?

This is XKCD #793 all over


Well, as an example, Vec::push doesn't have a way to report allocation failure. It will just panic, which is not acceptable in the kernel.

Sure, which is a perfectly acceptable default considering that most code is not in a position to observe allocation failures (because of OS-level overcommit, which is nearly always a good thing), and furthermore most code is not in a position to do anything other than crash in an OOM scenario. If you still want to have control over this without going full no_std, Rust has Vec::try_reserve to grow a Vec while checking for allocation errors, as well as several nightly functions coming down the pipe in the same vein (e.g. Vec::try_with_capacity, Vec::push_within_capacity).
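
If you do want the fallible path on stable Rust today, it looks roughly like this (just a sketch; the names are made up):

    use std::collections::TryReserveError;

    // Hypothetical work item type, purely for illustration.
    struct Item(u64);

    // Queue an item without ever touching the infallible allocation path.
    fn queue_item(queue: &mut Vec<Item>, item: Item) -> Result<(), TryReserveError> {
        // try_reserve grows the Vec fallibly: on failure we get an error back
        // instead of an abort, and the caller decides what to do with it.
        queue.try_reserve(1)?;
        // Capacity is guaranteed now, so this push cannot allocate.
        queue.push(item);
        Ok(())
    }

Whether the caller can actually do anything useful with that error is, of course, the real question.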

Speaking as a long-time C++ programmer, I really don't get this mindset.

First off, allocation failure (typically indicated by a bad_alloc exception in C++ code, or a nullptr in C-style code) does not mean that the system (or even the process) as a whole is out of memory.

It just means that this particular allocator could not satisfy the allocation request. The allocator could have a "ulimit" or similar limit that is completely independent of actual process/system limitations.

Secondly, what reason is there to make an allocation failure any different from any other resource allocation failure?

A normal structure for a program is to catch these exceptions at a higher level in the stack, close to some logical entry point such as a thread entry point, UI action handler, etc., where they can be understood and possibly shown to the user or logged or whatever. It shouldn't really matter whether the failure is about failing to allocate a socket or failing to allocate memory.

You could make the case that if the system is out of memory, the exception propagation itself is going to fail. Maybe, but IMHO on the code path that is taken when the stack is unwound due to an exception you should only release resources, not allocate more anyway.


In Rust you can use multiple allocators at the same time. Allocation failure can be handled by the allocator, converting the panic into some useful behavior. This is observable in WASM, where OOMs happen all the time and are handled transparently to application code.

So I assume there are no real blockers, as people in this thread assume; it's just not conventional behavior yet, only ad hoc, so we need to wait until well-defined, stable OOM handlers appear.


> does not mean that the system is out of memory.

> The allocator could have a "ulimit" or similar limit that is completely independent of actual process/system limitations.

Are we playing word games here? If a process has a set amount of memory, and it's out of it, then that process is OOM; if a VM is out of memory, it's OOM. Yes, OOM is typically used for OS OOM, and Linus is talking about Rust in the kernel, so that's what OOM would mean.

> Secondly, what reason is there to make an allocation failure any different from any other resource allocation failure?

Of course there is; would you treat being out of bread the same as being out of oxygen? Again, this can be explained by the context being kernel development and not application development.


"Are we playing word games here? If a process has a set amount of memory, and it's out of it, then that process is OOM, if a VM is out of memory, it's OOM. Yes, OOM is typically used for OS OOM, and Linus is talking about rust in the kernel, so that's what OOM would mean."

As I just explained, an allocator can have its own limits.

A process can have multiple allocators. There's no direct logical step that says that because some allocator failed some allocation, the process itself can never allocate again.

"Of course there is, would you treat being out of bread similar to being out of oxygen? Again this can be explained by the context being kernel development and not application development."

The parent comment is talking about overcommit and OOM as if these are situations that are completely out of the program's control. They aren't.


> Are we playing word games here?

No. A single process can have several allocators, switch between them, or use temporary low limits to enforce some kind of safety. None of that has any relation to your system running out of memory.

You won't see any of that in a desktop or a server. In fact, I haven't seen people even discuss that in decades. But it exists, and there are real reasons to use it.


I am not well-versed in this area but have a doubt: when the OS sends a SIGKILL to a process because it has run out of memory for it, how can the program catch that before it is killed and deal with it "gracefully"? Does C provide any mechanism to deal with such a scenario?

There are several levels here.

In your C++ (or C) program you have one (or more) allocators. These are just pieces of code that juggle blocks of memory into smaller chunks for the program to use. Typically the allocators get their memory from the OS in pages using some OS system call such as sbrk or mmap.

For the sake of argument, let's say I write an allocator that has a limit of 2MiB, while my system has 64GiB of RAM. The allocator can then fail some request when its internal 2MiB has been exhausted. In the C world it'd return a nullptr. In the C++ world it would normally throw bad_alloc.

If this happens does this mean the process is out of memory? Or the system is out of memory? No, it doesn't.

That being said, where things get murky is that there are allocators that, in the absence of limits, will just map more and more pages from the OS. The OS can "overcommit", which is to say it gives out more pages than can actually fit into the available physical memory (after taking into account what the OS itself uses, etc.). And when the overall system memory demand grows too high it will just kill some arbitrary process. On Linux this is the infamous OOM killer, which uses each process's "badness" score (oom_score, tweakable via oom_score_adj) to determine what to kill.

And yes, for the OOM killer there's very little you can do.

But an allocation failure (nullptr or bad_alloc) does not mean OOM condition is happening in the system.
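
In Rust terms (since that's what this thread is about), the same idea looks roughly like this; purely an illustrative sketch that ignores bookkeeping corner cases:

    use std::alloc::{GlobalAlloc, Layout, System};
    use std::sync::atomic::{AtomicUsize, Ordering};

    // Toy allocator that refuses to hand out more than CAP bytes,
    // no matter how much memory the machine actually has.
    struct CappedAlloc;

    const CAP: usize = 2 * 1024 * 1024; // the 2MiB from the example above
    static USED: AtomicUsize = AtomicUsize::new(0);

    unsafe impl GlobalAlloc for CappedAlloc {
        unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
            let prev = USED.fetch_add(layout.size(), Ordering::SeqCst);
            if prev + layout.size() > CAP {
                // Back out the bookkeeping and report failure. This is an
                // allocation failure, but the system is nowhere near OOM.
                USED.fetch_sub(layout.size(), Ordering::SeqCst);
                return std::ptr::null_mut();
            }
            unsafe { System.alloc(layout) }
        }

        unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
            unsafe { System.dealloc(ptr, layout) };
            USED.fetch_sub(layout.size(), Ordering::SeqCst);
        }
    }

    #[global_allocator]
    static ALLOC: CappedAlloc = CappedAlloc;

Any allocation beyond the cap fails even though the machine may have tens of gigabytes free.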


None of that matters: what is your application going to do if it tries to allocate 3MiB of data from your 2MiB allocator?

This is the far more meaningful part of the original comment:

> and furthermore most code is not in a position to do anything other than crash in an OOM scenario

Given that (unlike a language such as Zig) Rust doesn't use a variety of different allocator types within a given system, choosing to reliably panic with a reasonable message and stack trace is a very reasonable mindset to have.


Since we're talking about SQLite, by far the most memory it allocates is for the page cache.

If some allocation fails, the error bubbles up to a safe place, where some pages can be dropped from the cache, and the operation that failed can be tried again.

All this requires is that bubbling up this specific error condition doesn't allocate. Which SQLite purportedly tests.

I'll note that this is not entirely dissimilar to a system where an allocation that can't be immediately satisfied triggers a full garbage collection cycle before an OOM is raised (and where some data might be held through soft/weak pointers and dropped under pressure), just implemented in library code.
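
In library code the shape is roughly this (a made-up sketch in Rust, not SQLite's actual implementation):

    use std::collections::{TryReserveError, VecDeque};

    // Stand-in for a page cache: pages we are allowed to drop under pressure.
    struct PageCache {
        pages: VecDeque<Vec<u8>>,
    }

    impl PageCache {
        // Try to allocate n bytes; if that fails, evict a cached page and
        // retry, and only bubble the error up once there is nothing left.
        fn alloc_or_evict(&mut self, n: usize) -> Result<Vec<u8>, TryReserveError> {
            loop {
                let mut buf = Vec::new();
                match buf.try_reserve_exact(n) {
                    Ok(()) => return Ok(buf),
                    // Note that this error path allocates nothing itself.
                    Err(e) => match self.pages.pop_front() {
                        Some(_page) => continue, // dropped one page, try again
                        None => return Err(e),   // nothing left to evict
                    },
                }
            }
        }
    }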


Sure, and this is completely sensible to do in a library.

But that’s not the point: what can most applications do when SQLite tells them that it encountered a memory error and couldn’t complete the transaction?

Abort and report an error to the user. In a CLI this would be a panic/abort, and in a service that would usually be implemented as a panic handler (which also catches other errors) that attempts to return an error response.

In this context, who cares if it’s an OOM error or another fatal exception? The outcome is the same.

Of course that’s not universal, but it covers 99% of use cases.


The topic is whether Rust should be used to re-implement SQLite.

If SQLite fails to allocate memory for a string or blob, it bubbles up the error, frees some data, and maybe tries again.

Your app may be "hopeless" if the error bubbles up all the way to it, that's your choice, but SQLite may have already handled the error internally, retried, and given your answer without you noticing.

Or it may at least have rolled back your transaction cleanly, instead of immediately crashing at the point of the failed allocation. And although crashing should not corrupt your database, a clean rollback is much faster to recover from, even if your app then decides to crash.

Your app, e.g. an HTTP server, might decide to drop the request, maybe close that SQLite connection, and stay alive to handle other ongoing and new requests.

SQLite wants to be programmed in a language where a failed allocation doesn't crash, and unlike most other code, SQLite is actually tested for how it behaves when malloc fails.


In C++ it will throw an exception which you can catch, and then gracefully report that the operation exceeded limits and/or perform some fallback.

Historically, a lot of C code fails to handle memory allocation failure properly because checking malloc etc. for a null result is too much work; C code tends to call malloc a lot.

Bjarne Stroustrup added exceptions to C++ in part so that you could write programs that easily recover when memory allocation fails - that was the original motivation for exceptions.

In this one way, Rust is a step backwards towards C. I hope that Rust comes up with a better story around this, because in some applications it does matter.


I may be getting SIGKILL and SIGABRT mixed up, but one of them is not sent to the process, rather it's sent to the OS.

If it were any other way then processes could ignore signals and just make themselves permanent, like Maduro or Putin.


> most code is not in a position to do anything other than crash in an OOM scenario.

That's intentional; IOW the "most code" that is unable to handle OOM conditions is written that way.

You can write code that handles OOM conditions gracefully, but that way of writing code is the default only in C. In every other language you need to go off the beaten path to gracefully handle OOM conditions.


Handling OOM gracefully - i.e. doing anything other than immediately crashing and/or invoking undefined behaviour - is absolutely not the default in C.

It's possible. But very very few projects do.


I know one C++ library that caches data but never evicts. Instead, the library author expects you to restart your app every 24 hours.

> I know one C++ library that caches data but never evicts. Instead, the library author expects you to restart your app every 24 hours.

It may not be as simple as "that's our policy". I worked at one place (embedded C++ code, 2018) that simply reset the device every 24h because they never managed to track down all the leaks.

Finding memory leaks in C++ is a non-trivial and time-consuming task. It gets easier if your project doesn't use exceptions, but it's still very difficult.


Use Valgrind? Or are we talking about projects that have become far too big for their own good? Because leaks aren't hard at all to find with the right tools and a bit of profiling... Now, crossing thread boundaries and weird dynamic-programming tricks, maybe, but that's a very different case and not really a reflection on C++ itself; it would likely trip up a GC lang as well.

> Use Valgrind?

It was not available for that specific device, but even with Valgrind and similar tools, you are still going to run into weird destructor issues with inheritance.

There are many possible combinations of virtual, non-virtual, base-class, derived-class, constructors and destructors; some of them will indeed cause a memory leak, and are allowed to by the standard.


> even with Valgrind and similar tools, you are still going to run into weird destructor issues with inheritance.

I love these folklore comments. Post an example.


In my experience that is usually the result of years and years of accumulation of shit code. The result is thousands of leaks. That makes detection of incremental leaks much more difficult. If you start with clean code and use ASAN or Valgrind then leak detection is not difficult.

> Handling OOM gracefully - i.e. doing anything other than immediately crashing and/or invoking undefined behaviour - is absolutely not the default in C.

What are you talking about? Every allocation must be checked at the point of allocation, which is "the default"

If you write non-idiomatically, then sure, in other languages you can jump through a couple of hoops and check every allocation, but that's not the default.

The default in C is to return an error when allocation fails.

The default in C++, Rust, etc is to throw an exception. The idiomatic way in C++, etc is to not handle that exception.


> Every allocation must be checked at the point of allocation, which is "the default"

C doesn't force you to check the allocation at all. The default behavior is to simply invoke undefined behavior the first time you use the returned allocation if it failed.

In practice I've found most people write their own wrappers around malloc that at least crash - for example: https://docs.gtk.org/glib/memory.html

PS. The current default in Rust is to print something and then abort the program, not panic (i.e. not throw an exception), though the standard library reserves the right to change that to a panic in the future.


> C doesn't force you to check the allocation at all.

No one ever claimed it did; I said, and still say, that in C, at any rate, the default is to check the returned value from memory allocations.

And, that is true.

The default in other languages is not to recover.


> > C doesn't force you to check the allocation at all.

> No one ever claimed it did;

You specifically said

> Every allocation must be checked at the point of allocation

...

> the default is to check the returned value from memory allocations.

Default has a meaning, and it's what happens if you don't explicitly choose to do something else.

In libc - this is to invoke undefined behavior if the user uses the allocation.

In glib - the library that underpins half the linux desktop - this is to crash. This is an approach I've seen elsewhere as well to the point where I'm comfortable calling it "default" in the sense that people change their default behavior to it.

Nowhere that I've ever seen, in C, is it to make the user handle the error. I assume there are projects with sanitizers that do do that; I haven't worked on them, and they certainly don't make up the majority.


> Default has a meaning, and it's what happens if you don't explicitly choose to do something else.

It also has the meaning of doing the common thing: https://www.merriam-webster.com/dictionary/default

> : a selection made usually automatically or without active consideration

See that "without active consideration" there? The default usage of malloc includes, whether you want to acknowledge it or not, checking the returned value.

C doesn't have anything done automatically, so I am wondering why you would choose to think that by "default" one would mean that something automatically gets done.


I'm not saying "automatic", I'm including "sanitizer retursn an error" as default - that's not what happens in C (or at least any C project I've worked on). You have to actively remember and choose to check the error code. Of course things do happen automatically all the time in C, like bumping the stack pointer (another case of unhandled OOM) and decrementing it after the fact. And pushing return addresses - and returning at the end of functions. And so on.

"Ubiquitous" is a different word than default, checking the return code of malloc isn't even that. As an example - I've been having some issues with pipewire recently (unrelated) and happen to know it uses an unwrapped malloc. And it doesn't reliably check the return code. For example: https://github.com/PipeWire/pipewire/blob/6ed964546586e809f7...

And again, this isn't cherry picked, this is just "the last popular open source C code base I've looked at". This is the common case in C. Either you wrap malloc to crash, or you just accept undefined behavior if malloc fails. It is the rare project that doesn't do one of those two.


> I'm not saying "automatic", I'm including "sanitizer retursn an error" as default - that's not what happens in C (or at least any C project I've worked on). You have to actively remember and choose to check the error code.

Right. But this is what you initially responded to:

> You can write code that handles OOM conditions gracefully, but that way of writing code is the default only in C.

How did you get from "That way" to thinking I claimed that C, by default, handles allocation failures?

> As an example - I've been having some issues with pipewire recently (unrelated) and happen to know it uses an unwrapped malloc.

Correct. That does not mean that the default way of writing allocation in C is anything other than what I said.

Do programmers make mistakes? Sure. But that's not what was asked - what was asked was how to handle memory errors gracefully, and I pointed out that, in idiomatic C, handling memory errors gracefully is the default way of handling memory errors.

That is not the case for other languages.


> How did you get from "That way" to thinking I claimed that C, by default, handles allocation failures?

I think you might want to reread the line you quoted directly above this:

That way of writing code, i.e. "write[ing] code that handles OOM conditions gracefully" "is the default [...] in C".

This is what I am saying is not the case. The default in C is undefined behavior (libc) or crashing (a significant fraction of projects allocator wrappers). Not "handling OOM gracefully" - i.e. handling OOM errors.


> I think you might want to reread the line you quoted directly above this,

I am reading exactly what I said:

> You can write code that handles OOM conditions gracefully, but that way of writing code is the default only in C.

How is it possible to read that as anything other than "That Way Of Writing Code Is The Default Way In C"?

Are you saying that checking the result of malloc (and others) is not the default way of allocating memory?


> Are you saying that checking the result of malloc (and others) is not the default way of allocating memory?

In C - yes. I've said that repeatedly now...


>> Are you saying that checking the result of malloc (and others) is not the default way of allocating memory?

> In C - yes. I've said that repeatedly now...

Well, that's just not true. The instances of unchecked allocations are both few and far between, *and* treated as bugs when reported :-/

Maybe you should program in a language for a little bit before forming an opinion on it :-/


I have programmed in C plenty. Your assertion that unchecked allocations are few and far between is simply entirely incorrect. That they are treated as bugs when reported is incorrect in most C software.

For good reason. Most C software is not designed to run in a situation where malloc might fail.

I, unlike you, have provided evidence of this by pointing to major pieces of the linux desktop that do not do so.


From the parent comment:

> because of OS-level overcommit, which is nearly always a good thing

It doesn't matter about the language you are writing in, because your OS can tell you that the allocation succeeded, but when you come to use it, only then do you find out that the memory isn't there.


Of course it matters, because you (the system admin) can tell your OS not to do that. Which is only helpful if your app knows how to handle the case. Most don't, so overcommit, in general, makes sense.

You can't really on linux. There's no way to do sparse allocations then because when you turn off overcommit MAP_NORESERVE still reserves memory...

It's a place where windows legitimately is better than linux.


> You can't really on linux. There's no way to do sparse allocations then because when you turn off overcommit MAP_NORESERVE still reserves memory...

Sure, but ... what does that have to do with this thread? Using `mmap` is not the same as using `malloc` and friends.

If you turn off overcommit, malloc will return NULL on failure to allocate. If you specifically request mmap to ignore overcommit, and it does, why are you surprised?


> If you specifically request mmap to ignore overcommit, and it does, why are you surprised?

You misunderstand: you specifically request mmap to ignore overcommit, and it *doesn't*, not does.

What it has to do with this thread is it makes turning off overcommit on linux an exceptionally unpalatable option because it makes a lot of correct software incorrect in an unfixable manner.


Zig puts OOM handling much more front and center than C. In C, you can handle OOM but it's easy to ignore NULL checks on mallocs &co because they almost never happen.

In Zig you must handle it. Even if handling means "don't care, panic", you have to spell that out.


It's also really ergonomic with `errdefer` and `try`.

> That's intentional; IOW the "most code" that is unable to handle OOM conditions is written that way.

No, this is wishful thinking. While plenty of programs out there are in the business of maintaining caches that could be optimistically evicted in order to proceed in low-memory situations, the vast majority of programs are not caching anything. If they're out of memory, they just can't proceed.


I used to think this way many years ago, then I saw my own code in production hit OOM errors and manage to recover, and even log what was happening so I could read about it later.

After those experiences I agree with the sibling comment that calls your position "bullshit". I think people come to your conclusion when they haven't experienced a system that can handle it, so they're biased to think it's impossible to do. Since being able to handle it is not the default in so many languages and one very prominent OS, fewer people understand it is possible.


I think this is bullshit. If you are running out of memory, you can, for example, choose to stop accepting more work ("backpressure"). I am always advocating for Rust, but this is one thing I really disagree on. I think Zig gets this right.

Now you need to ensure that your entire error path does not allocate or you have to deal with allocation errors in the error path as well.

Trying to apply backpressure from memory allocation failures, which can appear anywhere, completely disconnected from their source, rather than capping the current in-memory set, seems like an incredibly hard path to make work reliably.


In Zig's case, the entire stdlib never allocates on failure, and most libraries follow the same pattern. The philosophy of Zig is allocation/creation can fail, but freeing/destroying must never fail. It's caused me to be really thoughtful with how I design my data structures, and often made me use better ways of representing metadata.

How do you log or tell the world about the state of the program without allocating?

Well if you've hit OOM, you're kinda screwed anyways. But, if you allocate a ring buffer at the beginning, you can always do a best attempt write.

Why screwed? It could just be that there is more load than your application can handle. Why should it necessarily crash because of that?

Maybe screwed was too strong of a term. In the scenario above, they wanted to log on resource cleanup, but that makes resource cleanup potentially fallible. The Zig philosophy is that cleanup must never fail, so having cleanup be fallible goes against that.

I was suggesting (though in retrospect not clearly) that logging should use a ring buffer instead of allocation, in order to make logging on cleanup a guaranteed best effort operation. You're right that you can recover from OOM, but logging OOM with an allocator is pretty self-defeating.
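
Roughly this kind of thing (a toy sketch, not a real logger):

    // Toy log ring: all storage is allocated once at startup, so a
    // best-effort write at OOM time never touches the allocator.
    struct LogRing {
        buf: Box<[u8]>,
        head: usize,
    }

    impl LogRing {
        fn new(capacity: usize) -> Self {
            // capacity must be non-zero; allocated exactly once, up front.
            LogRing { buf: vec![0u8; capacity].into_boxed_slice(), head: 0 }
        }

        // Append a message, overwriting the oldest bytes once the ring wraps.
        fn write(&mut self, msg: &[u8]) {
            for &b in msg {
                self.buf[self.head] = b;
                self.head = (self.head + 1) % self.buf.len();
            }
        }
    }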


You need to apply backpressure before you hit memory limits, not after.

If you’re OOM your application is in a pretty unrecoverable state. Theoretically possible, practically not.


If you allocate a relatively big chunk of memory for each unit of work, and at some point your allocation fails, you can just drop that unit of work. What is not practical?
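
E.g. something like this (a sketch; with overcommit on, the reservation may of course succeed and still blow up later):

    use std::collections::TryReserveError;

    // Hypothetical per-request scratch buffer: reserve the memory for a unit
    // of work up front, and shed that unit of work if the reservation fails.
    fn handle_request(payload_len: usize) -> Result<(), TryReserveError> {
        let mut scratch: Vec<u8> = Vec::new();
        scratch.try_reserve_exact(payload_len)?; // on failure, drop this request
        scratch.resize(payload_len, 0);          // capacity is already reserved
        // ... do the actual work with `scratch` ...
        Ok(())
    }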

I think in that case overcommit will happily say the allocation worked. Unless you also zero the entire chunk of memory and then get OOM killed on the write.

I suppose you can try to reliably target "seriously wild allocation fails" without leaving too much memory on the table.

    0: Heuristic overcommit handling. Obvious overcommits of
       address space are refused. Used for a typical system. It
       ensures a seriously wild allocation fails while allowing
       overcommit to reduce swap usage. root is allowed to
       allocate slightly more memory in this mode. This is the
       default.
https://www.kernel.org/doc/Documentation/vm/overcommit-accou...

Running in an environment without overcommit would allow you to handle it gracefully, though it brings its own zoo of nasty footguns.

See this recent discussion on what can happen when turning off overcommit:

https://news.ycombinator.com/item?id=46300411


> See this recent discussion on what can happen when turning off overcommit:

What are you referring to specifically? Overcommit is only (presumably) useful if you are using Linux as a desktop OS.


Good grief all kinds of ways. Practically all the same countless possible paths as those that require allocating.

You don't have to allocate to print to stdout if that's what you're asking.

And then whatever thing is collecting and forwarding (if applicable) the logs needs to be entirely allocation-free?

It just needs to have whatever memory it needs statically allocated.

> Now you need to ensure that your entire error path does not allocate or you have to deal with allocation errors in the error path as well

And that is usually not too difficult in C (in my experience), where allocation is explicit.

In C++, on the other hand, this quickly gets hairy IMO.


That's why you don't use std in this case. You limit yourself to things defined in core.

That seems like an enormous question. Is anyone working on it?

There's experimental/nightly support for things like `push_within_capacity()`, which is a more manual way (user-space code would have to handle the return code and then increase the capacity manually if required) of trying to handle that situation.

And of course the kernel - which doesn't even use Rust's Vec but has its own entire allocator library because it is the kernel - likewise provides

https://rust.docs.kernel.org/next/kernel/alloc/kvec/struct.V...

Vec::push_within_capacity is a nice API to confront the reality of running out of memory. "Clever" ideas that don't actually work are obviously ineffective once we see this API. We need to do something with this T, we can't just say "Somebody else should ensure I have room to store it" because it's too late now. Here's your T back, there was no more space.
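
On nightly, the shape is something like this (a sketch; `store` is a made-up helper):

    #![feature(vec_push_within_capacity)] // nightly-only at the time of writing

    // If there's no room, we get the T back and can decide what to do with it:
    // grow fallibly, spill it somewhere else, or shed the work entirely.
    fn store<T>(v: &mut Vec<T>, item: T) -> Result<(), T> {
        match v.push_within_capacity(item) {
            Ok(()) => Ok(()),
            Err(item) => {
                if v.try_reserve(1).is_ok() {
                    // Growth succeeded, so this push cannot fail.
                    v.push(item);
                    Ok(())
                } else {
                    Err(item) // here's your T back, there was no more space
                }
            }
        }
    }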


162 to 104 is not a 75% reduction; (162 - 104) / 162 is only about 36%. Who calculates reduction percentages like that?!

To be fair, they claim the adblock engine saw a 75% reduction in memory usage, and in the images they're showing the main browser process generally (I assume? I don't use Brave), of which the adblock engine is only a part but had a substantial impact on usage.

That is correct.

Thanks for the clarification. Is this 45MB reduction for the whole browser, or is it 45MB per tab?

"Brave has overhauled its Rust-based adblock engine to reduce memory consumption by 75%"

This only claims that the memory usage of the adblock engine was reduced, not the total memory consumption of the browser.


I'm guessing the same kind of people who don't understand the difference between 0.002 dollars and 0.002 cents (http://verizonmath.blogspot.com/2006/12/verizon-doesnt-know-...).

This kind of smells like LLM output, which is fine, but I do want to see the facts backed by citations.

