
I think the overall sentiment of this post is sound, but arenas aren't the answer to Go's performance challenges. From my perspective, possibly in an effort to keep the language simple, Go's designers didn't treat performance as a priority. 'Let the GC handle it' was the philosophy, and as a result you see poor design choices all the way through the standard library. Abstracting everything through interfaces then compounds the issue, because the compiler's escape analysis can't see through an interface. The standard library is just riddled with unnecessary allocations - look at the JSON parser, for instance, and the recent work to improve it.

There are some interesting proposals on short-term allocations: being able to specify that a local allocation will not leak.

Most recently, I've been fighting with the ChaCha20-Poly1305 implementation, because someone in their 'wisdom' added a requirement for contiguous memory to the implementation, including extra space for a tag. Both ChaCha20 and Poly1305 are streaming algorithms, but the Go authors decided 'you cannot be trusted - here's a safe one-shot interface for you to use'.

Go really needs a complete overhaul of its standard library to fix this, but I can't see it ever getting traction given the focus on not breaking anything.

Go really is a great language, but performance / minimising the GC burden should have been a key design consideration for its APIs.


I agree with nearly all of this, but in my fantasy the 'unsafe' package would be the sanctioned way to break the abstraction layer and adjust things directly when a good language-level mechanism isn't provided.

JSON's just a nightmare, though. The inane legacy of UCS-2 / UTF-16 got baked into Unicode 8, and UTF-16 escapes into JSON.


I personally find AI-generated code to be pretty average. I might get AI to write a function, then rework it. I use it a lot for reviews, which helps. And also as a sounding board for research - this is by far the most valuable use case, and saves a ton of time. Or get it to write tests similar to what you already have: just tell it what you want tested and let it suggest cases.

I definitely don't trust the code it writes, especially for anything remotely complicated.


This to me just demonstrates what a house of cards email security really is. Surely with the collective brains on this forum we can come up with an alternative that solves all of this. And surely Google needs to serve these sites under a different domain name - why aren't they published under something like 'hostedbygoogle.com'?


All of the alternatives that can actually happen involve handing over full control of the system to one company, and thereby eliminating email's major remaining value, which is that it is a thing not controlled by one company.

The technical problems are challenging but solvable. The human problems are not. Nobody with the resources to truly solve the problem is willing to do it to create an open platform where they don't get effectively all the money and control, but then, the rest of the world is not terribly willing to let them have all the money.

Hence the impasse we are at.

This is the problem you have to solve, not a technical one.

(See also "why the metaverse where we do things like share avatars across all services is a stupid idea that will effectively never happen". It writes well in a novel, but in practice it requires World of Warcraft to accept that someone can run around in it as Mickey Mouse while wielding a Call-of-Duty-branded sniper rifle, and none of the relevant rights holders will ever agree to that, for all kinds of reasons. The technical problems are also formidable but they are nothing next to the fact nobody will ever agree to this.)


Announcing new Thiel-backed startup: Shadowfax

Our secure, centralized and proprietary offering with native AI and blockchain layers will replace the obsolete cruft that is email. Already secured several DoD contracts and expect to fully replace email for all internal and external communications of the federal government by 2027.


It's pretty easy in a push-based model to let the 'pusher' know that no more data is required. It's just like unsubscribing from an event, or returning a 'no more' status from the callback. The push model does feel more natural to me, but perhaps that comes from familiarity with Linux piping.


It's easy when the network is working.

If it isn't, the 'pusher' continues to fill memory buffers that can take minutes to dequeue. You need constant communication and TCP conspires against you on this. If your flow is primarily one-directional, you might be able to use keep-alives for this, but the defaults are terrible. Here's what I have used:

    SO_KEEPALIVE=1
    TCP_KEEPCNT=1+x(seconds)
    TCP_KEEPIDLE=1
    TCP_KEEPINTVL=1
    TCP_NODELAY=1
    TCP_USER_TIMEOUT=x(msec)
where 'x' is the anticipated RTT of ping+connect (usually 3-4x the measured RTT you get from TCP_INFO).

Remember: the moon is only about 1.5 seconds away, so unless you're pushing data to the moon these numbers are likely very small.

On older Windows, you've got `SIO_KEEPALIVE_VALS`, which is hardcoded at 10 ticks, so you need to divide your distance by 10, but I think newer Windows supports `TCP_USER_TIMEOUT` like Linux does.

Mac doesn't have these socket options. I think you can set net.inet.tcp.keep* with sysctl, but this affects/breaks everything, so I don't recommend xnu as a consumer to a high-volume push-stream.

I actually don't recommend any of this unless you have literally no control over higher-level parts of the protocol: TCP just isn't good enough for high-volume push-protocols, and so I haven't used this in my own stuff for decades now.


It was kind of a toy problem, but I had a lot of fun with Consul's versioning data, making requests that didn't return until the data had changed. It's like GetIfModified, but you can set it up with a delay if the data has not already been modified.

I don’t think it’s particularly good for multi-get but I didn’t get that far.

There are systems where you can report what you’ve seen so far when you reconnect, but those require a very different architecture on the sending end because it’s not just a streaming data situation at that point. And while it’s more efficient to store the data to derive the replay than to store the replay itself, it’s not infinitely moreso and a three hour network hardware problem could really ruin your entire week.


Gotta say, this comment is spot on. I had a small peek under the covers when another of the major-severity issues surfaced, and the conclusion I reached is that the software is fragile as f*k. I'll be migrating as soon as I have some spare cycles. Still no conclusion as to whether Rails, Ruby or GitLab itself is the major contributor, but the result is awful.


If you're targeting Go to cross-compile to, why would you build the language in Rust? Sticking with the Go toolchain would reduce a lot of friction for those already using Go, which I presume is a significant chunk of your target audience.


I don't work on the language, but I think building a compiler in Go (or C) is a pain in the ass compared to using a language that has discriminated union types, and based on the aesthetics of the language, the authors probably agree with me. The language is styled enough like Rust to make it seem like it's for "Rust people" more than "Go people" anyway – I think if it were targeting Go people, it would use the native Go syntax for things like import (here replaced with "use") and for adding methods (here replaced with "impl ...").


There might be an audience of disillusioned Rust people who got bored of fighting the borrow checker, but true Rust developers, like C++ devs, want control and won't want to hand it over to Go's runtime.

My guess from the README is that the author loves some of Rust's features and syntax, but the simplicity of having the Go language and runtime take care of making it all work is just too compelling. As for the language itself, there's nothing here you couldn't build with Go, and it would likely be more productive.


> true rust developers

true rust developers will program with a magnetic needle


I’ve written a compiler in Go. The language is fine for doing that kind of development.

They probably wrote the language in Rust for the same reason they wrote a new syntax for Go: they just don't particularly like Go's syntax.


Really interesting comments here. I haven't found an appealing option for scripts and CI (I think I'm allergic to YAML), and bash is just way too fragile. So I've decided to write my own scripting language, implemented in Go, similar in many ways to Lua but with first-class support for executing other commands, running API tests, and manipulating data. Currently dogfooding, with plans to share in a few months once I'm confident it's good enough to start getting feedback. So: one executable plus your scripts, cross-platform, no YAML.


Am I the only one who finds it ironic that this is Part 2 of an 'in a nutshell' document?


O’Reilly is famous for its "in a nutshell" books, some of them quite thick.

Some nuts are bigger than others.


Speaking of nuts, have they cracked HDR yet? I remember hearing some optimism about a year ago and I really look forward to it.


KDE Plasma 6 will have some basic support for HDR features when it comes out. The Wayland color management protocol needed for full support is not yet finalized although there is a working informal implementation of HDR for the Steam Deck OLED.


Well, Stephen Hawking's book uses the same term and it's both gigantic and as complex as it gets :D


Hi, I'm curious how you deal with the potential for hash collisions across a large data set - is that a post-join check?


Hi, if you're asking about the hash table itself: currently we use linear probing, i.e. k/v pairs that collide are inserted sequentially starting at the hash % capacity index.


I looked at IFTTT a while back but found Integromat to be so much better. Lots of integrations too, and you can create some quite complex flows especially if you chain them together through webhooks.

