That would be quite a tricky thing to do. You would probably need a very specific architecture in your application to allow for something like that.
It's not clear to me that this is a better architecture than just splitting your application into two or more processes, one of which is of such lasting value and high quality that it never needs replacing. Even then, you may be better off making your gateways/endpoints not require 100% uptime in order to maintain availability, or making your client applications behave well without total availability.
For web serving (or TCP servers with short-lived connections) in particular, you could do an application-specific smooth transition. But this wouldn't be the responsibility of TFA's library.
You could have your program use SO_REUSEPORT (or SO_REUSEPORT_LB?) on the socket it listens on. When upgrading, the old version launches the new version, which begins accepting new requests on the configured addresses and ports. The old version then closes its listening socket so that all new connections are handled by the new version. The old version can simply wait for all existing clients to disconnect, or, if the server handles long-poll-style loads, perhaps send a redirect of some kind to force clients to reconnect to the new instance. When all clients have exited or some acceptable timeout has expired, the old version can delete itself and exit.
Technically, like Erlang, Go could probably do this by redirecting new I/O into the new process's channels and killing the old process when all channels are empty.
Go's channels aren't exposed outside a specific Go process's runtime, and the runtime doesn't give you any convenient way to redirect them. They're not like Erlang's mailboxes at all in that regard.
Furthermore, channels aren't the primitive used for multiplexing I/O or handling connections on a socket in Go. You typically have a goroutine per connection (e.g. 'http.ListenAndServe' spins up goroutines), and the goroutines are managed not by channels but by the Go runtime's internal scheduler and I/O implementation (which internally uses epoll).
Because of all those things, replacing a running Go process that's listening on sockets is no different from the same problem in C. You end up using SO_REUSEPORT, or passing the file descriptors to the new process and converting them back into listeners. Channels don't factor into it meaningfully.
If you're interested in what this looks like, Cloudflare wrote a library called tableflip [0] which does this. I also forked that library [1] to handle file-descriptor handoff in a more generic way, so I've ended up digging pretty deeply into the details of how this works in Go.
Hot-code reload like in Erlang isn't really a requirement in the modern world in most cases. It also goes directly against the immutable infrastructure we're all moving towards.
In the case of Rust, batteries aren't included, so each network stack will require different strategies, but in general it will look more like nginx's style of reload (or Unicorn's, if you come from the Ruby world).