nickmonad's comments | Hacker News

https://nickmonad.blog/

Trying to blog more frequently with shorter posts!


Narrowing in on background color is an extreme oversimplification of what Tailwind provides. I found it to be a great tool for working with CSS, especially for layout. Business viability can be debated, but the value is way beyond what you suggested.


I agree with the sentiment that companies should help fund open source they depend on, but I think it's a stretch to say those businesses succeeded "only" because of Tailwind. It's a great project, although I'm pretty sure they would have figured out a way to work with CSS without it.


Would love to read this, although I'm seeing some pretty horrific code formatting issues in both Firefox and Chrome.


The weirdest part is, the initial load looks fine (try refreshing, the scroll position persists).

Firefox's reader view helps too.


I saw a blog about this yesterday: disable extensions (notably 1Password) to fix formatting inside code tags.


Looks decent on iPhone Safari FWIW.


Turn off 1Password.


More context for those who haven't heard about this: https://www.1password.community/discussions/developers/1pass...


Hey matklad! Thanks for hanging out here and commenting on the post. I was hoping you guys would see this and give some feedback based on your work in TigerBeetle.

You mentioned, "E.g., in OP, memory is leaked on allocation failures." - Can you clarify a bit more about what you mean there?


In

    const recv_buffers = try ByteArrayPool.init(gpa, config.connections_max, recv_size);
    const send_buffers = try ByteArrayPool.init(gpa, config.connections_max, send_size);
if the second `try` fails, then the memory allocation created by the first `try` is leaked. Possible fixes:

A) clean up individual allocations on failure:

    const recv_buffers = try ByteArrayPool.init(gpa, config.connections_max, recv_size);
    errdefer recv_buffers.deinit(gpa);

    const send_buffers = try ByteArrayPool.init(gpa, config.connections_max, send_size);
    errdefer send_buffers.deinit(gpa);
B) ask the caller to pass in an arena instead of a gpa to do bulk cleanup (types & code stay the same, but the naming & contract change):

    const recv_buffers = try ByteArrayPool.init(arena, config.connections_max, recv_size);
    const send_buffers = try ByteArrayPool.init(arena, config.connections_max, send_size);
C) declare OOMs to be fatal errors:

    const recv_buffers = ByteArrayPool.init(gpa, config.connections_max, recv_size) catch |err| oom(err);
    const send_buffers = ByteArrayPool.init(gpa, config.connections_max, send_size) catch |err| oom(err);

    fn oom(_: error{OutOfMemory}) noreturn { @panic("oom"); }
You might also be interested in https://matklad.github.io/2025/12/23/static-allocation-compi...; it's essentially a complementary article to what @MatthiasPortzel says here https://news.ycombinator.com/item?id=46423691


Gotcha. Thanks for clarifying! I guess I wasn't super concerned about the `try` failing here since this code is squarely in the initialization path, and I want the OOM to bubble up to main() and crash. Although, to be fair: 1. It's not a great experience to be handed a stack trace; there could definitely be a nicer message there. And 2. If the ConnectionPool init() is (re)used elsewhere, outside this overall initialization path, we could run into that leak.

The allocation failure that could occur at runtime, post-init, would be here: https://github.com/nickmonad/kv/blob/53e953da752c7f49221c9c4... - and the OOM error kicks back an immediate close on the connection to the client.


This is the fundamental question which motivated the post. :)

I think there are a few different ways to approach the answer, and it kind of depends on what you mean by "draw the line between an allocation happening or not happening." At the surface level, Zig makes this relatively easy, since you can grep for all instances of `std.mem.Allocator` and see where those allocations are occurring throughout the codebase. This only gets you so far though, because some of those Allocator instances could be backed by something like a FixedBufferAllocator, which uses already allocated memory either from the stack or the heap. So the usage of the Allocator instance at the interface level doesn't actually tell you "this is for sure allocating memory from the OS." You have to consider it in the larger context of the system.
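
For illustration, here's a minimal hypothetical sketch (not from the kv codebase) where the callee only sees `std.mem.Allocator`, but the backing memory is a stack buffer, so nothing is requested from the OS at that call site:

    const std = @import("std");

    // The callee only sees the generic Allocator interface; it has no idea
    // whether the bytes come from the OS or from memory that already exists.
    fn buildGreeting(allocator: std.mem.Allocator, name: []const u8) ![]u8 {
        return std.fmt.allocPrint(allocator, "hello, {s}", .{name});
    }

    pub fn main() !void {
        // Backed by stack memory; no request goes to the OS here.
        var buf: [256]u8 = undefined;
        var fba = std.heap.FixedBufferAllocator.init(&buf);
        const greeting = try buildGreeting(fba.allocator(), "kv");
        std.debug.print("{s}\n", .{greeting});
    }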

And yes, we do still need to track vacant/occupied memory, we just do it at the application level. At that level, the OS sees it all as "occupied". For example, in kv, the connection buffer space is marked as vacant/occupied using a memory pool at runtime. But, that pool was allocated from the OS during initialization. As we use the pool we just have to do some very basic bookkeeping using a free-list. That determines if a new connection can actually be accepted or not.
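
The bookkeeping itself can be very small. As a hypothetical sketch (the names and layout don't match the actual kv code):

    // All buffers are carved out of memory allocated once at init time.
    const BufferPool = struct {
        buffers: [][]u8, // every buffer allocated up front, during init
        free: []usize,   // stack of indices of vacant buffers
        free_len: usize, // how many entries of `free` are valid

        // Null means the pool is exhausted, i.e. the connection limit is hit.
        // Otherwise the caller uses pool.buffers[index].
        fn acquire(pool: *BufferPool) ?usize {
            if (pool.free_len == 0) return null;
            pool.free_len -= 1;
            return pool.free[pool.free_len];
        }

        // The memory never goes back to the OS, only to the free-list.
        // `free` has room for every buffer index, so this can't overflow.
        fn release(pool: *BufferPool, index: usize) void {
            pool.free[pool.free_len] = index;
            pool.free_len += 1;
        }
    };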

Hopefully that helps. Ultimately, we do allocate, it just happens right away during initialization and that allocated space is reused throughout program execution. But, it doesn't have to be nearly as complicated as "reinventing garbage collection" as I've seen some other comments mention.


Nice! Will definitely take a look :)


Author here! Overcommit is definitely a thing to watch out for. I believe TigerBeetle calls this out in their documentation. I think you'd have to explicitly disable it on Linux.

For the second question, yes, we have to keep track of what's in use. The keys and values are allocated via a memory pool that uses a free-list to keep track of what's available. When a request to add a key/value pair comes in, we first check if we have space (i.e. available buffers) in both the key pool and value pool. Once those are marked as "reserved", the free-list kind of forgets about them until the buffer is released back into the pool. Hopefully that helps!
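
As a rough hypothetical sketch (assuming a `BufferPool` type with free-list-backed `acquire`/`release` methods; not the actual kv code), the reservation step might look like:

    const Reservation = struct { key: usize, value: usize };

    // Accept a SET only if both the key pool and the value pool have room.
    fn reservePair(
        key_pool: *BufferPool,
        value_pool: *BufferPool,
    ) error{OutOfMemory}!Reservation {
        const key = key_pool.acquire() orelse return error.OutOfMemory;
        // If the value pool turns out to be full, hand the key buffer back
        // before failing, so the caller can simply reject the request.
        errdefer key_pool.release(key);
        const value = value_pool.acquire() orelse return error.OutOfMemory;
        return .{ .key = key, .value = value };
    }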


I have a few cases in this (proof of concept) codebase that require knowledge about allocation strategy, even in Zig, but that's on me and the design at this point. Something I wanted to touch on more in the post was the attempt to make the components of the system work with any kind of allocation strategy. I see a common thing in Zig projects today where something like `gpa: std.mem.Allocator` or even `arena: std.mem.Allocator` is used to signal intent, even though the allocator interface is generic.
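
As a tiny illustration (hypothetical functions, not from kv): the parameter type is the same generic interface in both cases; only the name documents the ownership contract.

    const std = @import("std");

    // Caller owns the result and must free it with the same gpa.
    fn ownedLabel(gpa: std.mem.Allocator, id: u32) ![]u8 {
        return std.fmt.allocPrint(gpa, "conn-{d}", .{id});
    }

    // Caller frees everything at once when the arena is deinitialized.
    fn scratchLabel(arena: std.mem.Allocator, id: u32) ![]u8 {
        return std.fmt.allocPrint(arena, "conn-{d}", .{id});
    }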


Author here! That's totally fair. I did learn this is a common technique in the embedded world and I had a whole section in the original draft about how it's not a super well-known technique in the typical "backend web server" world, but I wanted to keep the length of the post down so I cut that out. I think there's a lot we can learn from embedded code, especially around performance.


Back in 2005, Virgil I, which targeted MCUs like AVR, had static initialization and would generate a C program with the entire heap statically allocated, which was then compiled into the binary. C programmers for AVR are used to just declaring globals, but Virgil allowed arbitrary code to run that simply initialized a heap.

Virgil II and III inherited this. It's a standard part of a Virgil program that its components and top-level initializers run at compile time, and the resulting heap is then optimized and serialized into the binary. It doesn't require passing allocators around, it's just part of how the language works.

