It doesn't. This article doesn't measure that, and full screen works differently. This article also only measures GNOME.


Small self plug: https://zig.guide is ready for Zig 0.12.

Some notable changes in zig.guide since Zig 0.11 came out:

  - Changed from following master to following major releases (Zig is more stable now, so less work for me. Yay.).

  - Migrated from Hugo to Docusaurus, bringing some large frontend improvements.

  - Moved over from ziglearn.org to zig.guide.

  - Removed 'usage in production is not recommended' from the front page.

  - Support for versioning.

  - Greatly improved testing through build.zig.

  - Opened up for GitHub sponsorships.

  - Added another section for supplementary walkthroughs.
The site still isn't nearly where I want it to be, but there have been some big improvements in the last few months.


  - Migrated from Hugo to Docusaurus, bringing some large frontend improvements.
Does this mean Docusaurus has more or better features? I do not know either of them very well, but I'm looking for a solution for publishing md docs.


You could probably make a good docs site in Hugo with enough work, but Docusaurus has pretty much everything I need out of the box (and is less annoying to extend). I think it really is a lot better for documentation.


Win32 isn't stable on Linux, though.


You claim to be healthy, yet you have not died yet. Curious.


where did they claim to be healthy?


I believe where this stands is: if someone were to contribute and maintain this, it could become part of the Zig compiler; however, it is not something the Zig team themselves are planning to do. Maybe give it a shot? I personally find the project interesting and fun, but I'm not convinced that it's a good solution.


I actually saw multiple people in the Zig core team (and others) actively working on the Zig compiler in person this weekend! Development may have slowed in the last 2 weeks or so because of the SYCL conference; given the turnout, I can assure you that there is no decline in people working with or on Zig long-term. If you look at the commits, the issues, or the stage 2 meetings (normally held every Thursday in the Discord), you would also see that the project isn't in decline. You're correct that the financial reports of the ZSF aren't up to date - these are updated manually, so you can expect some delay.

I'm assuming you're the same person that made multiple new stack overflow accounts recently to ask (and answer, on a different account) this question here: https://stackoverflow.com/questions/74040564/is-the-zig-prog... (since removed by moderators).

My question to you is: Why are you trying to force this false narrative?


https://www.reddit.com/r/learnprogramming/comments/y1zuzv/is... https://www.reddit.com/r/AskProgramming/comments/y1zwan/is_t...

Assuming this is you also. There were commits to Zig less than an hour before you posted this.


You are correct that I also asked other online communities this question.

I wouldn't say I was trying to force a narrative. I was asking out of concern. I was especially unnerved after seeing the development graph on GitHub:

https://imgur.com/M09Nkx9

Also seeing there were features that haven't been implemented for the last 6 years made me even less confident about the future of the project:

https://github.com/ziglang/zig/issues/68

Even from this post:

https://andrewkelley.me/post/full-time-zig.html

Of all the Zig projects the author said he was going to work on:

- https://github.com/andrewrk/groovebasin/tree/zig

- https://github.com/andrewrk/genesis/tree/zig

- https://github.com/andrewrk/clashos

he never committed to or completed any of them.


> I was asking out of concern.

Right. Thank you for just asking questions.


[flagged]


Don't know what you're talking about. Could you share a link, please?

I did ask the same question to other online communities (specifically Reddit and Stack Overflow) to get an unbiased answer.

From Google's cache (enter this in the address bar):

- cache:https://www.reddit.com/r/learnprogramming/comments/y1zuzv/is...

- cache:https://www.reddit.com/r/AskProgramming/comments/y1zwan/is_t...

I asked on Reddit from my 3-year-old account. In comparison, your account is 30 minutes old.


maybe you should take a moment to consider why you have to give out google cache links, instead of linking the original post. maybe there's a reason they both got removed.


Author here - I'm just showing how I did it. It took a few minutes to do for myself, so I figured it would be worth the 15 minutes or so it'd take to write it down. Just sharing my process; there's nothing too serious about this post.

I happen to have written a lot of C, but there are some areas (like this one) where I'm not as confident. This way of casting and of working with types is very poor syntax, in my opinion (below in this thread you have people arguing about the spiral rule and such; it's obviously a common source of confusion).

Zig's way of expressing types and pointers is far superior. The transformations I made were done quickly, and I didn't feel like there was any ambiguity or confusion anywhere. It's just a series of simple reductions; there are no "tricks" or easy mistakes to make in the code. It feels like a trivial proof.

Obviously I am biased; this was posted to zig.news. I thought it was a neat showcase of translate-c and of how Zig does some of these things more nicely. I'm not telling everyone that they should do what I did, but this works for me and I'm happy to share.
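
To make that concrete, here's the flavor of reduction I mean (a hypothetical one-liner, not taken from the post):

    // C:   int *ip = (int *)p;   - the cast hides in prefix syntax.
    // Zig: the cast is an explicit builtin, and the type reads left to right.
    fn asIntPointer(p: *anyopaque) *i32 {
        return @ptrCast(@alignCast(p));
    }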


ptr.field works.
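
For anyone unfamiliar with Zig: pointers to structs auto-dereference on field access, so there is no separate `->` operator. A minimal sketch:

    const std = @import("std");
    const Point = struct { x: i32, y: i32 };

    test "no arrow operator needed" {
        var p = Point{ .x = 1, .y = 2 };
        const ptr = &p;
        ptr.x += 1; // auto-dereferences; C would spell this ptr->x
        try std.testing.expectEqual(@as(i32, 2), p.x);
    }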


I think Zig does cross compilation better (ignoring, for the moment, projects with C dependencies) because it comes out of the box with the libraries and headers needed to target a lot of systems. In Rust, you typically get additional toolchains via rustup, whereas Zig works on all of its targets from a single install.
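
For example (hello.zig here is a stand-in file, but these are the stock commands), targeting aarch64 Linux from any host is just:

    $ zig build-exe hello.zig -target aarch64-linux-musl

With Rust, you would first have to pull down that target's standard library via rustup.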


Absolutely, hence a toolchain thing.


Are you sure that we should be shipping all targets by default into one monster download? This doesn't seem particularly scalable.


The whole toolchain for Zig, which appears to include complete cross compilation, is less than 40MB compressed[0]. I would hardly call that a "monster download".

I think developers can spare that much space. Besides, storage space is scaling much faster than the number of available targets, so I don't see how this could be "not particularly scalable" unless you either have a weird definition of "target" or predict there's going to be a sudden explosion in the number of targets.

Go also makes cross compilation to any supported target a breeze in the default toolchain download, and it is an incredible convenience for the developer.
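
For comparison, Go's version of this is a couple of environment variables (assuming some hello.go; no extra downloads needed):

    $ GOOS=windows GOARCH=amd64 go build hello.go
    $ GOOS=linux GOARCH=arm64 go build hello.go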

[0]: https://ziglang.org/download/


The Xcode download, which is the only officially-supported way to develop native apps on macOS and iOS, is 9.8 GB and gets larger with every update. The Windows 10 SDK is 4 GB and likewise is only getting larger. Sure, you can develop some apps with only a subset of the SDK. But, as a language project, do you really want to be in the business of optimizing and subsetting every single OS SDK?

I think a lot of the issue here comes down to targeting server workloads and command-line tools vs. apps using more of the OS libraries. In practice, Go focuses on the former kinds of apps, so shipping all targets is a more viable approach for them. But when you start getting into apps that want to use a larger swath of the platform facilities, then the size of the platform SDKs starts to become an issue.


No one (that I've seen) is suggesting to include those SDKs, which you probably don't even have the rights to redistribute anyways.

I provided both Zig and Go as examples of what people are looking for, and their downloads are neither obscenely large nor do they include those massive SDKs.

So yes, you have a weird definition of "target" that no one else here is using, and it is disingenuous to the conversation. rustup can already add additional targets after installation - are you suggesting this downloads 10GB SDKs? Definitely not, so that point is irrelevant. (Even once you do this, the toolchain experience is not as good as what Zig or Go offer.)

Compare what Zig and Go do to what Rust does, not some strawman argument that would require users to download 10GB of SDK with the Rust toolchain. Alternatively, find me someone who is saying they think these multi-gigabyte SDKs should be included, because I don't see that anywhere in this conversation.

People here are complimenting what Zig currently does, which takes less than 40MB compressed. Zig can cross compile to Windows just fine. There are limitations to everything, but what Zig and Go offer is strictly better than what Rust offers in terms of the out-of-box cross compilation experience, and it isn't unduly burdensome on the developers like your proposed 10GB download.


My nightly-x86_64-unknown-linux-gnu lib directory is 148MB. This is because it includes several libraries in both rlib and so format (static vs. dynamic linking), asan/lsan/msan/tsan variants, and so forth. You could easily imagine that ballooning to 1GB if we shipped all tier 1 targets. This is what I mean by shipping all targets not being scalable: it might have seemed so in the early days when the libraries were smaller and the targets fewer, but not now.


Yes, the Zig (and Go) developers have clearly put a lot of effort into ensuring the size of their toolchain remains reasonable. I fully believe the Rust developers could achieve similar results as well, if they really wanted.

Until then, the lack of a great cross compilation experience out of the box is just a limitation of the Rust toolchain. It's an acceptable limitation in many situations, but I don't buy your repeated arguments in this HN discussion that it is unclear whether this is desirable to fix. Clearly many people in this discussion alone disagree with your position, and my own anecdotal discussions with other developers in real life align with this discussion. YMMV, obviously.


> Yes, the Zig (and Go) developers have clearly put a lot of effort into ensuring the size of their toolchain remains reasonable. I fully believe the Rust developers could achieve similar results as well, if they really wanted.

I strongly disagree. These are different languages. Rust leans heavily on generics and monomorphization. It implements most language operations, like ptr::offset, in the language itself, increasing the size of library metadata. It supports tools like the sanitizers. It has a rich serializable MIR format so that generics can be embedded at a higher level than just machine code and a lower level than source.

It is absolutely not true that nobody "really wants" smaller binaries. Rust already did a fair bit of experimentation with running crate metadata through gzip, etc. years ago (turns out there are some thorny tradeoffs around compilation time vs. on-disk storage when you go that route). I can't speak to Zig, knowing less about it, but with Go there were conscious language design decisions that favor binary size over runtime performance (e.g. hash table lookups all going through a single function instead of being specialized). This is fine! But it's contrary to the idea that Rust could achieve smaller binary sizes if we "really" wanted to.

> Until then, the lack of a great cross compilation experience out of the box is just a limitation of the Rust toolchain.

Rust has a great cross-compilation experience. It's as simple as:

    $ rustup target add arm-linux-androideabi
    $ cargo build --target=arm-linux-androideabi
That's it: one more command than Zig or Go to install the toolchain you need, and then you're off to the races.


The difference is really not just one command. I suggest you go out and actually try Zig cross compilation or at least read more about it. Andrew Kelley has written a lot about it. The work impressed me so much that it was one of the things that provoked me to become a financial supporter of Zig. (Even though I haven't written a line of Zig yet.)

Cross compilation in Rust is better than C or C++. But it's still a big pain.

I really think you are underestimating what the Zig folks are doing. Please investigate more deeply.

In particular: "Zig does not ship with any pre-compiled libraries; instead it ships with source code, and builds what it needs on-the-fly."


Do you think that Rust would be a better language if everything had to be compiled from source?


Obviously I don't. And that's totally consistent with learning from Zig about how to improve the cross compilation story for Rust. (Trivially: compile some things from source, but not other things.)

I'm honestly pretty disappointed at your participation in this thread. It seems like you're going out of your way to assume the worst possible interpretation of what folks are saying.


The Ultimate++ environment for C++ does the same thing with C++, and it seems to work just fine even for them.


If you use the build-std nightly feature you can compile the entire standard library, even for a custom JSON target file, and it goes pretty quick. With a bit of attention that feature would be just fine at filling in the gaps in rustup’s prebuilt targets.
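
Concretely, that looks something like this (nightly-only, and it requires the rust-src component; the target file name here is made up):

    $ rustup component add rust-src
    $ cargo +nightly build -Z build-std=core,std --target my-target.json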

But Zig’s approach isn’t actually about making everyone compile more things from scratch. It doesn’t compile libc from scratch any time you select a new target. Reading the article you can see the extreme lengths it goes to to avoid this.

The main thing that’s missing from Rust is target-specific libc.so/etc, and an appropriate linker for every target. If you try cross compiling anything, you will soon run into this problem; your system will not have the correct target libc or an appropriate linker. You can generally only change one part of the triple before the experience starts falling apart. I can cross compile for iOS/tvOS/etc on my Mac; that’s about it.

Zig manages to solve this for all its targets by pre-processing simplifications of the various libcs, which are then bundled into the Zig binary (as 3 very small files). It generates a useless but linkable .so file from the preprocessed files for any target on the fly. It gets you the correct libc headers. Then it uses LLD instead of making you hunt around on Ubuntu forum posts finding and downloading the correct GNU linker for your specific architecture. So you don’t link to a real libc, but you don’t have to compile one either. (Until you want to execute a binary with QEMU, but usually non-simulated target machines have a libc.so already.)
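
The upshot is that, assuming a hello.c on disk, cross compiling C this way is a single command, with the headers, the stub libc, and LLD all handled behind the scenes:

    $ zig cc -target x86_64-windows-gnu hello.c -o hello.exe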

(Aside: Golang solves this by not depending on libc at all, by reimplementing most of it including the syscalls in Go, and I believe by using its own multi-target linker. Many pros and many cons, but an approach Google is happy to sponsor.)

If you’re saying this approach would be infeasible for Rust std/core, then yes, obviously. It’s statically linked; you can’t get away with a fake .so file.

But that was never really the issue. Rustc can build std’s rlib files for any target effortlessly. And yet cross compiling is still a pretty poor experience. The state of the art for Rust developers at the moment is rust-embedded/cross, which solves the same libc+linker problem that Zig does, naively, by literally using Docker and per-target Ubuntu images to download prebuilt GCC and libc packages from Apt. Basically we can do way better than that, because Zig showed us how.

See this Dockerfile for what I mean: https://github.com/rust-embedded/cross/blob/master/docker/Do...


> That's it. One more command than Zig or Go, to install the toolchain you need, and then you're off to the races.

Except it's not the same. Zig also cross-compiles C code beautifully, and lots of Rust code depends on C libraries. That "one additional command" does not solve cross compilation to an equivalent degree as Zig.

Go code tends to avoid C because of the heavy penalty that CGo imposes, so the cross compilation experience is usually good there, but for different reasons than Zig.


100% sure? No. Mind you, I'm on MSVC, where I already need an extra ~1 GB download to get the system linker.

We could still do a few different things to make this story better for users, even if it's not literally identical.


This is silly.

If you make code that requires allocation and does not have an allocator passed into it, there is literally nothing stopping you from allocating anyway. Zig does not stop you from, for example, heap allocating without the usage of an allocator (Zig does not know what heap allocation even is). You can directly use the std page_allocator (and others) wherever you want, too.
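
A sketch of what I mean (a hypothetical function; nothing in the language prevents this, you just document that it allocates):

    const std = @import("std");

    // Allocates internally without taking an allocator parameter.
    fn makeBuffer(len: usize) ![]u8 {
        return std.heap.page_allocator.alloc(u8, len);
    }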

There are very few good reasons to do this (e.g. spawning a thread requires particular allocation; using an allocator might not be sound depending on the OS).

All you have to do is document that it allocates memory and that's fine; there is no split.


I tend to agree that the allocator param isn't that strong of an argument (e.g. one can easily just store a pointer to the allocator in a struct to hide it from method signatures, as sketched below). And it is quite valid to just create a local allocator and `defer a.free(thing)` to clean up at the end of a function (for example, if you just need a trivial std.mem.join and you know the result fits on the stack). One does not need to use the top-level GPA/arena for everything.
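
To illustrate the first pattern (a hypothetical type, just for the shape of it):

    const std = @import("std");

    // The struct owns its allocator, so method signatures stay clean.
    const ScratchBuffer = struct {
        allocator: std.mem.Allocator,

        fn dup(self: ScratchBuffer, bytes: []const u8) ![]u8 {
            return self.allocator.dupe(u8, bytes);
        }
    };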

But nonetheless I think there is a split (at least for now). Zig async function frames are different from regular function frames, so even if, syntactically, `foo()` could magically be switched between sync and async based on io_mode, in practice, if you have a recursive call tree doing fs operations, it'll work in one mode (up until a stack overflow, that is) and throw a compile error upfront in the other. Presumably this would no longer be an issue once [0] is implemented.

[0] https://github.com/ziglang/zig/issues/1006

