Incidentally, you can choose to bump in both directions. It's more complicated (you need to keep track of which end you allocated each data structure on), but in exchange, the allocator becomes sufficient for many more use cases.
The OP implies that, given a choice, you should position small-but-numerous allocations next to the top, and larger, infrequent allocations next to the bottom.
This sounds like it would make the alloc logic much more complicated and branch-y, defeating the purpose of bumping down anyway, unless you're implying some compile-time way to do this.
No, the idea is that you manually make some allocations downward from the top and some allocations upward from the bottom. The bumping code is as simple as in the unidirectional case.
The tricky part is choosing in a way that puts you noticeably ahead of the unidirectional allocator re: what problems you can solve, without putting excessive mental load on yourself. I've found a pattern of "long-lived allocations on one end, short-lived allocations on the other" to work well here (which, yes, doesn't always coincide with the numerous vs. infrequent axis mentioned in my previous comment).
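To make the "as simple as the unidirectional case" claim concrete, here's a rough sketch (untested C, alignment handling and error reporting omitted; the names are just illustrative):

    #include <stddef.h>

    typedef struct {
        char  *base;  /* start of the buffer */
        size_t lo;    /* next free byte, grows upward (long-lived data) */
        size_t hi;    /* one past the last free byte, grows downward (short-lived data) */
    } two_end_arena;  /* init: lo = 0, hi = buffer size */

    static void *alloc_lo(two_end_arena *a, size_t n) {
        if (a->hi - a->lo < n) return NULL;  /* the two ends would cross */
        void *p = a->base + a->lo;
        a->lo += n;
        return p;
    }

    static void *alloc_hi(two_end_arena *a, size_t n) {
        if (a->hi - a->lo < n) return NULL;  /* same check, seen from the other side */
        a->hi -= n;
        return a->base + a->hi;
    }

You still have to remember which end each allocation came from, but the per-allocation logic is one comparison plus one add or subtract either way, and resetting the short-lived end is a single assignment.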
> The mem array is divided into two regions that are allocated separately, but the dividing line between these two regions is not fixed; they grow together until finding their “natural” size in a particular job. Locations less than or equal to lo_mem_max are used for storing variable-length records consisting of two or more words each. […] Locations greater than or equal to hi_mem_min are used for storing one-word records…
(Different allocators are used for the two regions and neither seems to be a bump allocator, so it's probably not very relevant to this thread, but I was reminded of it so just sharing…)
Ok, I get it now. It would add an extra ptr to the struct, but that wouldn't be significant overhead.
I do wonder what benefit there is for you over just having two separate allocators, one for long-term and one for short-term allocations. I imagine there could be benefits in very memory-constrained scenarios.
BEAM (Erlang) uses something similar for process memory: a process gets a chunk of memory, the heap grows up, the stack grows down; when they meet, a GC is triggered, and if that doesn't reclaim enough, a larger chunk of memory is allocated. More details if you're interested [1]. In general, I'd think that anywhere the combined size of two allocators is a concern, having one grow up and one grow down would make a lot of sense.
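As a toy model of the resize path (definitely not BEAM's actual code; a real VM also has to fix up pointers into the moved regions, which is skipped here):

    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        char  *chunk;
        size_t size;       /* total chunk size */
        size_t heap_top;   /* heap occupies [0, heap_top), grows up */
        size_t stack_bot;  /* stack occupies [stack_bot, size), grows down */
    } proc_mem;

    /* Called when heap_top and stack_bot are about to meet and a collection
       didn't reclaim enough: move to a bigger chunk, keep the heap at the
       bottom, move the stack to the new top; the widened gap in the middle
       is the recovered headroom. */
    static int grow_chunk(proc_mem *p, size_t new_size) {
        size_t stack_len = p->size - p->stack_bot;
        assert(new_size >= p->heap_top + stack_len);
        char *bigger = malloc(new_size);
        if (bigger == NULL) return -1;
        memcpy(bigger, p->chunk, p->heap_top);
        memcpy(bigger + new_size - stack_len, p->chunk + p->stack_bot, stack_len);
        free(p->chunk);
        p->chunk     = bigger;
        p->size      = new_size;
        p->stack_bot = new_size - stack_len;
        return 0;
    }

The appeal is exactly that the two regions share their slack, so neither end has to be sized for its own worst case.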
That's an interesting idea. I'm not sure I'm sold on it vs. just having two separate allocators and growing them separately. The arena allocators I use take advantage of virtual memory to grow, which might change my perception of the tradeoffs involved, as I wouldn't typically need to resize one of my allocators (you can just aggressively over-reserve memory and then only commit what is actually used).
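For anyone unfamiliar with the trick, it looks roughly like this on Linux (sketch only; assumes 4 KiB pages, no alignment handling, and the Windows equivalent would be VirtualAlloc with MEM_RESERVE and then MEM_COMMIT):

    #include <stddef.h>
    #include <sys/mman.h>

    typedef struct {
        char  *base;       /* start of the reserved range */
        size_t reserved;   /* address space reserved up front */
        size_t committed;  /* bytes actually made accessible so far */
        size_t used;       /* bump cursor */
    } varena;

    static int varena_init(varena *a, size_t reserve) {
        /* Reserve a large range with no access rights: this consumes
           address space, not physical memory. */
        void *p = mmap(NULL, reserve, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) return -1;
        a->base = p; a->reserved = reserve; a->committed = 0; a->used = 0;
        return 0;
    }

    static void *varena_alloc(varena *a, size_t n) {
        if (a->reserved - a->used < n) return NULL;
        size_t need = a->used + n;
        if (need > a->committed) {
            /* Commit (make read/write) in page-sized steps as the cursor advances. */
            size_t new_commit = (need + 4095) & ~(size_t)4095;  /* assumes 4 KiB pages */
            if (new_commit > a->reserved) new_commit = a->reserved;
            if (mprotect(a->base + a->committed, new_commit - a->committed,
                         PROT_READ | PROT_WRITE) != 0)
                return NULL;
            a->committed = new_commit;
        }
        void *p = a->base + a->used;
        a->used = need;
        return p;
    }

Reserving costs address space rather than physical memory, so each arena can comfortably reserve gigabytes and let the committed size grow with actual use.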