
Zig is a very low level language. I think the fancy type system can trip up people into thinking they are working with a high-level language.

Zig is basically C with a fancy type system, so you should not expect things like special String types, overloading of index based access etc.

I think the author was thinking that Zig was very close to Rust or C++, when in reality it is much closer to C. I had to keep reminding myself of that many times as I was learning Zig.

I had my own struggles with Zig, but not quite as many as the author. I think you will probably have a much better experience if you don't try to jump in and code right away, but instead read some articles or watch some videos to get a sense of the overall philosophy of Zig.

I am normally against having to look at source code, but with Zig that is kind of needed, though not quite as bad as it sounds. Zig's code base is not that large and it is relatively easy to search. You can look up a Zig function signature very easily. You need to do this if you are going to use any of the standard library apart from the most basic stuff.



> Zig is basically C with a fancy type system, so you should not expect things like special String types,

If I can't have nice strings, what should I be expecting from a fancy type system?


Zig's "fancy" (I don't think they're that fancy) type features that IMO make it a great C alternative are:

- non-null pointers, and distinct types for single-item pointers and multi-item pointers (multi-item pointers are rarely used except indirectly via slices, so unchecked pointer arithmetic errors are largely banished)

- builtin tagged unions (AKA algebraic data types) with very pleasant to use switch logic -- it can't be overstated how nice the "handle all the cases" logic is in Zig in general (catch, orelse, if-else/switch expressions)

- a decent proposition for errors (the error union, and try keyword), but I haven't decided if I really like it yet

There's lots of non-type stuff there too. I was writing personal projects in C without libc, but found there to be a lot of annoying work involved -- happy to do it, but it's not earned/fruitful annoyance, more like a long list of incidental historical annoyances. Zig seems to cater to the same level of the stack, but with all that boring stuff taken care of.


Many of Zig's fancy type features are in the library, not in the language, because types can be programmatically constructed at compile time.


So it sounds like Zig makes a distinction between pointers and arrays. Am I reading that right?


Yes, arrays are another distinction:

- an array [3]u8

- a single item non-nullable pointer *u8

- a single item nullable pointer ?*u8

- a multi-item non-nullable pointer [*]u8

- a multi-item nullable pointer ?[*]u8

- a slice []u8

Typically your API is just made up of slices and non-nullable single item pointers. Arrays are just the typical backing store for a slice, that you might define in main or for small scratch buffers. Here's a typical example of an OS read (where the fd has been set to non-blocking already):

    fn readUpTo32Bytes(fd: std.os.fd_t) ![]u8 {
        var array: [32]u8 = undefined;
        var data = array[0..]; // slice of entire array
        data.len = std.os.read(fd, data) catch |err| switch (err) {
            error.WouldBlock => 0,
            else => return err,
        };
        handle(data);
    }

    fn handle(data: []const u8) void { ... }

If you don't need to handle EAGAIN/error.WouldBlock, then just do `data.len = try std.os.read(fd, data);`

I didn't explain the `!` in the `![]u8` return type, but it's basically saying "an error or a []u8", where it's compile time known what the full set of errors is (in this case every error std.os.read can return, minus WouldBlock).
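For readers coming from Rust, here is a rough analogue of that signature (a sketch only, with names of my own choosing: Zig infers the exact error set at compile time, whereas Rust's `io::Error` is a runtime value, so the parallel is loose):

```rust
use std::io::{self, ErrorKind, Read};

// Rough Rust analogue of the Zig example: an explicit error case,
// with WouldBlock handled by mapping it to zero bytes read.
fn read_up_to_32_bytes(mut src: impl Read) -> io::Result<Vec<u8>> {
    let mut buf = [0u8; 32];
    let n = match src.read(&mut buf) {
        Ok(n) => n,
        Err(e) if e.kind() == ErrorKind::WouldBlock => 0,
        Err(e) => return Err(e),
    };
    Ok(buf[..n].to_vec())
}

fn main() {
    // A byte slice implements Read, so it works as a stand-in for an fd.
    let data = read_up_to_32_bytes(&b"hello"[..]).unwrap();
    assert_eq!(data, b"hello".to_vec());
}
```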


Replying to note I mistakenly left in the `![]u8` return type when it's not returning any byte slice. Should be `!void`.


Also Zig's memory alignment in the type system is great for doing low-level I/O.

Coming from C, I find that Zig's memory alignment options are easier and more powerful.


Totally agree. Having std.os.mmap enforce correct page_size alignment for the base pointer has saved me once already (using mmap and memfd for a fixed size circular buffer where you can always provide contiguous bytes for the full size available, without any memmove).


I still can't figure out how Zig proposes to prevent undefined behavior without a borrow checker (aka MLKit regions) or GC.

AFAICT the answer is "inject as many runtime checks as needed", although the docs seem to go way out of their way to avoid making this explicit, or dealing with the fact that these checks are now runtime failures rather than compile-time failures and therefore need code to handle them.

It seems like it would be the same as writing Rust code using std::cell instead of references, except that Rust would force you to insert handlers for all the new failure modes this would create (of course you could just panic!(), but at least the compiler would force you to insert those panics...).


You're conflating undefined behavior with spatial/temporal memory safety, the latter of which is what is 100% prevented by Rust's borrow checker, provided you're not interfacing with hardware, in which case I believe things change in a more binary fashion.

However, Zig treats memory safety not as an extreme to pursue at all costs but as a spectrum (there are reasons for wanting to think like this, at least when writing low-level code that wants to make full use of the hardware): it gets 100% on the spatial memory safety front and reaches 50-75% on the temporal memory safety front through the GPA. That's already an order of magnitude more safety than C, at which point memory exploits drop in the rankings and you should be more concerned about things like explicit control flow, error handling and checked arithmetic, not to mention the orthogonality of the language.

Furthermore, in the systems world, there are many safety critical systems where dynamic allocation and multi-threaded control planes are simply off the table to begin with because they're dangerous in some domains and not as safe as static allocation and single-threaded control planes, which are less dimensional and easier to reason about. And in those cases, UAFs and multi-threaded races are less of a concern (still a concern, but less).

Also, Rust won't protect you from all undefined behavior, and Zig often helps more than you think. For example, you might be surprised to hear that Rust has checked arithmetic off by default in safe builds, whereas Zig has this enabled. I've done a little security work on some large systems and the decision to disable checked arithmetic always blows my mind. Integer overflow and underflow are right up there as threat vectors when writing anything that's touching hostile data.

I'm waiting for the day when Rust changes direction on this, and I think there's a chance this will happen because the alternative status quo of not checking arithmetic (at all) is just not tenable, at least not if we care about safety and security holistically, and not only memory safety.


If dealing with potentially hostile data, Zig certainly isn't more appropriate than Rust in my opinion, try maybe WUFFS.

Suppose we have been given a 32KB data structure with some "step" bytes - in a conforming input these should always sum to less than 32768 and thus the total will easily fit in a 16-bit unsigned integer, so that's what our naive program does. Unfortunately attackers provided a structure whose step bytes sum to more than 65535...

Zig will panic here if using default arithmetic with default release builds. If the attacker wanted to cause a Denial of Service, job done already.

Rust will panic if explicitly told to enable checked arithmetic on release builds, but it also provides explicit checked, wrapping, saturating and so on variants of the arithmetic operators, so if you want them for this part of your software (perhaps anticipating the risk) you can have that without touching the behaviour of all other arithmetic in the program. `65530u16.checked_add(255u16)` is None even in a default release build of Rust; what you do with that None (silently abandon this input? log the error?) is up to you, and of course may not be adequately tested.
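To make that concrete, here's a small runnable sketch of the explicit flavours Rust offers, using values that mirror the 16-bit example above:

```rust
fn main() {
    // Default release builds wrap silently unless overflow-checks is on.
    let steps: u16 = 65530;

    // checked_add returns None on overflow instead of wrapping or panicking.
    assert_eq!(steps.checked_add(255), None);
    assert_eq!(steps.checked_add(5), Some(65535));

    // wrapping_add wraps modulo 2^16: 65530 + 255 = 65785 -> 249.
    assert_eq!(steps.wrapping_add(255), 249);

    // saturating_add clamps at the type's maximum.
    assert_eq!(steps.saturating_add(255), u16::MAX);
}
```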

However, in WUFFS we simply can't write the erroneous program. It doesn't compile because WUFFS can't see why it's safe. Because it isn't safe. WUFFS requires the programmer to spell out what's going on, and so either you have to realise what might happen ("Oh, it can overflow, I should handle that") or choose a strategy that can't suffer the problem, ("Let's not sum up those steps, I see a different way to handle valid input").


> If dealing with potentially hostile data, Zig certainly isn't more appropriate than Rust in my opinion, try maybe WUFFS.

Thanks! Great recommendation on WUFFS! And completely agreed, it's also easy to turn on checked arithmetic for Rust (if you know about it, but Rust definitely has an unsafe default there for those that don't, which is surprising to me).

At the same time, WUFFS is not always applicable, for example to writing something like a distributed system where you do still want safety, often the flip side of security. I'm sure you'll also agree it's good to balance out that security is more nuanced than just a rant about memory safety to the extreme. It's great to have positive discussions about languages, to evaluate trade-offs positively.

Counter-intuitively, I do feel also that Zig's explicitness as a language as a whole fits a security mindset well. For example, in `std/mem.zig` there's a very careful divExact assertion around underflow when calling `bytesAsSlice()`. This is just a fantastic way to prevent buffer bleeds, i.e. Heartbleed or Cloudbleed, but it's probably uncommon to see in many libraries, and something like a borrow checker wouldn't provide this aspect of memory safety automatically. You can easily get lulled into a false sense of security.
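The divExact idea translates outside Zig too. As a hedged sketch in Rust (the function name and element size here are mine, not anything from `std/mem.zig`): refuse a byte buffer whose length doesn't divide evenly into the element size, rather than silently truncating the remainder.

```rust
// divExact-style guard: reject lengths that don't divide evenly,
// instead of silently dropping trailing bytes.
fn bytes_as_u32_words(bytes: &[u8]) -> Option<Vec<u32>> {
    if bytes.len() % 4 != 0 {
        return None; // no silent truncation
    }
    Some(
        bytes
            .chunks_exact(4)
            .map(|c| u32::from_le_bytes([c[0], c[1], c[2], c[3]]))
            .collect(),
    )
}

fn main() {
    assert_eq!(bytes_as_u32_words(&[1, 0, 0, 0, 2, 0, 0, 0]), Some(vec![1, 2]));
    assert_eq!(bytes_as_u32_words(&[1, 2, 3]), None); // 3 bytes: refused
}
```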

From a security angle, I also like Zig's philosophy around very simple control flow and avoiding unnecessary abstractions, no matter if they're zero-cost. I think this is going to lead to a healthier package ecosystem when it arrives, compared to say NPM, where you get these dependency explosions that are a real headache for supply chain attacks. Attackers always go one level deeper; they attack through the basement, and there's often more low-hanging fruit at hand than a UAF (especially considering that many embedded systems that Zig targets probably do static allocation anyway, so bleeds might often be the worst that can happen). It will be interesting to see how Zig's philosophy around explicitness and avoiding bloat makes a difference here.

> Zig will panic here if using default arithmetic with default release builds. If the attacker wanted to cause a Denial of Service, job done already.

In the security world, a DoS is usually not treated as a P1. Perhaps a P3 at best (if you're lucky as a researcher!). For example, I've submitted one or two DoS MIME bomb samples that can shut down Gmail servers and got very much an "okay, we'll just not bother about it because we're Gmail and our fleet is so massive". The DoS is probably still out in the wild for Gmail. Even ProtonMail, which has experienced numerous outages, didn't classify it as a P1, although they awarded it.

However, for a read/write exploit (running with the email example, perhaps a directory traversal in Apple Mail), having checked arithmetic convert what could have been a P1 into a P3 is actually exactly what you want because it prevents the exploit from going further (these things are almost always chained).

It also surfaces the bug visibly: you get a crash, you investigate, you fix. So from an attacker's perspective, they're actually less likely to try and trigger it, because then they reveal they're in your system.


Something like WUFFS is exactly what we should be using for Wrangling Untrusted File Formats as it says in the name, even if you've decided to do that in a distributed system. Realistically you're definitely going to get this wrong, so using a language where the worst case is that it doesn't work is a massive improvement over using languages where it's all additional attack surface.

That recent Apple bug where they render PNGs incorrectly can (in principle) happen in WUFFS. The other recent Apple bug where bad guys seize control of your iPhone by sending a malicious image file cannot. One of these things is not like the other.

I think you're missing the point if you expect the borrow checker to care about buffer underflow. Rust has a runtime bounds check to check bounds, the borrow checker is, as its name suggests, checking the borrow rules. The trick (compared to arithmetic overflow) is that the optimiser can often push a bounds check outside a fast loop or eliminate it altogether, so you really can afford to do this in all or almost all your release code unlike checked arithmetic. WUFFS shows that you can do away with both of these runtime checks and be entirely safe if you're not interested in being a general purpose programming language. Which is (part of) why WUFFS gets to be both safer and faster. Both Zig and Rust are intended as general purpose languages.
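The bounds-check-hoisting point can be illustrated in Rust (a sketch; whether the per-element check is actually elided depends on the optimiser, but the iterator form gives it the best chance):

```rust
// Indexing performs a bounds check per element, though the compiler can
// often prove the index is in range and elide it; the iterator form
// makes that proof trivial by construction.
fn sum_indexed(xs: &[u32]) -> u32 {
    let mut s = 0u32;
    for i in 0..xs.len() {
        s = s.wrapping_add(xs[i]); // check provably in range; often elided
    }
    s
}

fn sum_iter(xs: &[u32]) -> u32 {
    // No index at all, so no bounds check to elide.
    xs.iter().fold(0u32, |s, &x| s.wrapping_add(x))
}

fn main() {
    let xs = [1u32, 2, 3, 4];
    assert_eq!(sum_indexed(&xs), 10);
    assert_eq!(sum_iter(&xs), 10);
}
```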

I don't buy the "surfaces the bug" thing because I have too much experience of real world systems where there's so much noise and mayhem that you are focused on stuff that's causing your real users pain. Even if the DoS means the server falls over and must be manually restarted, the ticket in my queue says "Urgent: Auto-restart server. Watchdog maybe?" not "OMG bad guys are trying to break into our system somehow, find out how ASAP"


> Something like WUFFS is exactly what we should be using for Wrangling Untrusted File Formats as it says in the name, even if you've decided to do that in a distributed system.

No, I was saying earlier that there are limits to WUFFS. The example I gave was that you can't write something like a distributed system (think consensus protocol like Viewstamped Replication, Raft or Paxos) in WUFFS, but where safety is nevertheless still critical, and where you reach that through crystal clear control flow and explicitness. In other words, safety is the other side of the coin to security. Hope it's a little more clear now.

> That recent Apple bug where they render PNGs incorrectly can (in principle) happen in WUFFS. The other recent Apple bug where bad guys seize control of your iPhone by sending a malicious image file cannot. One of these things is not like the other.

Of course.

> I think you're missing the point if you expect the borrow checker to care about buffer underflow.

No, I was stating the obvious, that it can't (or at least not always, but in some cases it can), not that it should.

> I don't buy the "surfaces the bug" thing

I was just trying to convey a little bit about how security works and how hackers (or at least red teamers) think, especially when blue teams are involved. I've found that the more I get into this, it becomes much less about preventing the breach and more about "assume breach, okay, now how do we detect it?". And a software DoS is also really just bottom-of-the-rung; you'll find almost no programs paying out for any findings, and you shouldn't worry about them. Asserts are the safe thing to do. They close semantic gaps and make your code much more secure. It's like putting in a thousand trip wires: anything off and an attacker can't get further. It completely shuts down exploit chaining.


> No, I was saying earlier that there are limits to WUFFS

Of course there are limits to WUFFS, that's why it isn't a general purpose language. You shouldn't implement these distributed protocols in it for the same reason toothpaste isn't a good engine lubricant, you deliberately can't even write "Hello, world" in WUFFS.

And yet, if you find yourself, in your distributed system, Wrangling Untrusted File Formats, you should reach for WUFFS to do that safely. Somewhere between "The device has a single button, it's green, press it" and "We process any PDF, HTML or XML documents sent to this email address" you will realise you need all the help you can get to Wrangle the data safely, and that's why WUFFS.


> Of course there are limits to WUFFS, that's why it isn't a general purpose language. You shouldn't implement these distributed protocols in it for the same reason toothpaste isn't a good engine lubricant, you deliberately can't even write "Hello, world" in WUFFS.

LOL, I would never have thought to do that till now! :)

I think we've always been on the same page regarding WUFFS and file format sanitizers. For me the question here really is, how do we improve the status quo when WUFFS is not an option? i.e. What are sane defaults for general purpose programming languages?

I still maintain that checked arithmetic should be enabled by default in general purpose programming languages, and that's because I believe in the principles behind WUFFS, having worked exactly on these kinds of tools myself.


> You're conflating undefined behavior with spatial/temporal memory safety,

You are correct; I should have written "memory safety".

> Zig treats memory safety not as an extreme-at-all-costs but as a spectrum

Zig needs to be more forthright about this.

When I first heard about Zig, I googled "zig vs rust" and found an article on the Ziglang website addressing that very topic:

https://ziglang.org/learn/why_zig_rust_d_cpp/

It completely fails to mention memory safety at all. That seems extremely dishonest, since memory safety is basically the "headline feature" of Rust (well, one of two or three at most). I wasted a lot of time digging through the Zig language manual ("so then how do they...") before concluding that something didn't add up. It definitely left a bad taste in my mouth.

> Rust won't protect you from all undefined behavior ... Rust has checked arithmetic off by default in safe builds

That didn't surprise me at all, nor will it surprise anybody who knows Java. Modular arithmetic is perfectly well-defined.

It's only C/C++ that picked the crazysauce option of decreeing that signed overflow is totally equivalent to scribbling all over random pieces of memory. It isn't overflow that's a security risk; it's languages that define overflow to be undefined in order to squeeze out a few piddly loop micro-optimizations. This becomes increasingly less beneficial in languages with iterators and no backward-compatible-with-C burden. Details (scroll to "Myth: overflow is undefined"):

https://huonw.github.io/blog/2016/04/myths-and-legends-about...


> Modular arithmetic is perfectly well-defined.

Yes (and thanks for the link!), I was in fact thinking more of this non-UB case (not signed overflow UB) as an example of where it's clearly defined as wraparound but can be chained into an exploit nevertheless: not technically UB, but a vulnerability all the same. Not all exploits bother to go as far as a UAF. Unchecked arithmetic can be low-hanging fruit.

> That didn't surprise me at all

It surprises me that Rust doesn't just enable checked arithmetic by default with an opt-out for performance, rather than enabling it by default for performance with an opt-out for safety. Zig's choice here is the safer choice from a security perspective.
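For what it's worth, the opt-in already exists per-project: Cargo exposes checked arithmetic in release builds as a profile setting.

```toml
# Cargo.toml: make release builds panic on overflow, like debug builds do
[profile.release]
overflow-checks = true
```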


Checked arithmetic is a much bigger performance hit than most people expect.

It means that every arithmetic operation is potentially a branch/jump instruction. This wrecks a lot of pipelining/out-of-order-execution schemes.

I once worked on an exotic architecture where the integer types had a "NaN" value just like floating point numbers do; it had both modular and checked arithmetic, but the checked versions would return NaN instead of branching.
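That sticky-"NaN" idea can be approximated in portable code: accumulate an overflow flag with branch-free operations and test it once per batch, rather than branching on every add. A sketch in Rust (the function name is mine):

```rust
// Deferred overflow check: OR an overflow flag per element (no branch),
// then inspect it once at the end of the batch.
fn sum_with_deferred_check(xs: &[u16]) -> Option<u16> {
    let mut total: u16 = 0;
    let mut overflowed = false;
    for &x in xs {
        let (t, o) = total.overflowing_add(x);
        total = t;
        overflowed |= o; // flag OR instead of a per-element branch
    }
    if overflowed { None } else { Some(total) }
}

fn main() {
    assert_eq!(sum_with_deferred_check(&[1, 2, 3]), Some(6));
    // 60000 + 10000 overflows u16, detected once at the end.
    assert_eq!(sum_with_deferred_check(&[60000, 10000]), None);
}
```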

It also had 37-bit integers. Yes, 37-bit. Fun times.


> Checked arithmetic is a much bigger performance hit than most people expect.

You're right about the branching cost. I believe there's a better way to solve that than disabling checked arithmetic everywhere.

This comes out of something I learned working on TigerBeetle [1], a new distributed database that can process a million financial transactions per second.

We differentiate between the control plane (where we want crystal clear control flow and literally thousands of assertions, see NASA's "The Power of 10: Rules for Developing Safety-Critical Code") and the data plane (where the loops are hot).

There are few places where we wouldn't want checked arithmetic in TigerBeetle enabled by default. However, where the branch mispredict cost relative to the amount of data being checked is too high, Zig enables us to mark the block scope as ReleaseFast to disable checked arithmetic.

> It also had 37-bit integers. Yes, 37-bit. Fun times.

Wow, fun times indeed! We just disabled 32-bit support for TigerBeetle because it was getting too hard to reason about padding. I can't imagine 37-bit, LOL!

[1] https://www.tigerbeetle.com


Zig doesn't propose to prevent UB (in fact the docs say that it takes aggressive advantage of it for optimization).

(Neither does Rust.)


Safe Rust doesn't have any Undefined Behaviour.

Your unsafe Rust is supposed to provide suitable constraint/ guarantees that you, the programmer, conclude it does not have any Undefined Behaviour. The language can't force you to do this, and at some point it becomes a social contract not a programming language feature.

I wrote the misfortunate crate to explore Rust's promise here. The crate provides legal but obviously inappropriate implementations of lots of safe Rust traits, and sure enough nothing blows up, there is no undefined behaviour.

The defined behaviour can be undesirable for example if you insist on putting a bunch of misfortunate::Maxwells in a HashSet you're going to have a bad time. Rust doesn't promise this is a good idea, it might cause infinite loops, memory leaks, all sorts of defined trouble, but it won't be Undefined Behaviour.
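The flavour of this can be sketched as follows (a hypothetical stand-in, not code from the actual misfortunate crate): an Eq implementation that is perfectly legal safe Rust yet claims nothing equals anything, so a HashSet misbehaves in fully defined ways.

```rust
use std::collections::HashSet;

// Hypothetical sketch in the spirit of misfortunate::Maxwell: legal
// safe Rust whose Eq implementation breaks the trait's intended contract.
#[derive(Hash)]
struct Maxwell;

impl PartialEq for Maxwell {
    fn eq(&self, _other: &Self) -> bool {
        false // legal, but violates the reflexivity Eq is supposed to promise
    }
}
impl Eq for Maxwell {}

fn main() {
    let mut set = HashSet::new();
    set.insert(Maxwell);
    set.insert(Maxwell);
    // A well-behaved type would dedupe; this one can't, so the set grows.
    assert_eq!(set.len(), 2);
    // And we can never find what we inserted: wrong, but fully defined.
    assert!(!set.contains(&Maxwell));
}
```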


Not "you, the programmer", any programmer. The Rust standard library can and sometimes does have UB, and that's even more true for other libraries.


"This software may have bugs" seems like a very different situation from "Our language just doesn't care whether maybe your program is nonsense and might have Undefined Behaviour".


Nice strawman and goalpost shifting. You said that it's up to "you", the programmer, whether unsafe code misbehaves, but this simply isn't true because "you" don't control all the unsafe code you execute. Again, third party libraries can have UB ... that it's wrapped in "unsafe" keywords doesn't change that, it just means that the compiler offers a tool to help reduce it. What I said is that rust doesn't propose to prevent UB, and that's true. I didn't say that it doesn't care. (And it's not true that Zig doesn't care, either, but it makes a different tradeoff between intrusive limitations and safety.)


Interesting.

I had the impression Zig proponents were praising it because it wasn't as low-level as Rust.


Quite the opposite. The very broad-brush overview is that zig aims to replace C and rust aims to replace C++... but among other issues with that analogy, that can't happen unless zig and rust have a dead-simple integration story.


I've only ever really seen it in Zig communities that Rust is closer to C++.

In Rust communities, it's often pitched as an alternative to both, but closer to C.

I suppose it's all relative, but the comparison of Rust to C++ seems external.


I think it is a C++ replacement that is _closer_ to C. It has generics, traits, RAII, etc, which are C++, not C, features but the way that Rust implements them _feels_ closer to C. I don't know if this quite makes sense but as an external observer this is my impression.


I don't really understand how Rust could be compared to C. Rust is perhaps more austere than C++, but that's like saying Neptune is smaller than Jupiter. It's still a big language with a generous standard library. You almost never do everything yourself, by hand, which is the hallmark of the C experience. Just compare the list of methods Rust has for a &str, compared to the fact C doesn't have a string type, to see the gulf of cultural difference.

Obviously, C and Rust are both low-level languages (in terms of control and overhead), but there are quite a few of those.


Yes, I also had the impression Rust is a C replacement.


From a programming language design space, it's much more like C++.

However it can occupy some niches that C can but C++ does poorly because of zero cost abstractions, I think.

(Very handwavy)


There are no niches where Rust can do better than C++. In many places Rust can, uniquely, match C++.


De gustibus non est disputandum, and thus at such a high level of abstraction you are at best trolling.


The first person to mention trolling is invariably the troll.


This is of course false.


I assure you there really are places where Rust can match C++.


Yes, but that's the sensible part of your comment, not the part I was obviously disagreeing with. (And no, the troll is not the one who first uses the word.)


Rust's purpose, its whole reason to exist, is to displace C. Rust will unavoidably fail in that, because anybody still using C is not willing to learn anything else: anybody willing to move on from C already did, long ago.

Rust is already approaching C++ in complexity, surpassing it in some places; and also in expressive power, but not surpassing it anywhere yet.

If Rust does not end up fizzling (which is still very possible!), Rust programmers will generally be drawn from the same population as C++ programmers. They will be people who want and can use a powerful language to make themselves more productive and able to manage bigger projects, without need to worry that they are taking a performance penalty, or losing control of details that matter.

Users of Zig, like of Nim and C, will be those uncomfortable with language power, disinclined to automate. Their attention is not on software and what they can build of it, but on problems where a thin veneer of software can add something useful. When there is not much for the software to do, you don't need much power to get it doing that.


Very minor correction - Nim tends to attract people specifically with its language power via its powerful Lisp-like metaprogramming facilities, static introspection, etc. These features are expressly there to automate away mundane repetition. I do not think it belongs in a list with Zig & C the way you use it here.

I also think users of C (not sure about Zig) are quite happy to automate things. Linus Torvalds is a big user of C. He wrote a little C-like compiler to check Linux kernel code called Sparse [1]. You seem to be trying to discuss maybe larger (but not very well articulated) subpopulations of "Users" than Apex Programmers like Linus. It is definitely easier to do this with C than giant languages like C++.

Why, the 1980s & 1990s were littered with maybe dozens of hacked C compilers doing "this or that" automation in a way you do not see for C++ (and will probably never see for Rust). In point of fact, C++ itself (C with classes) was an early example of such! The idea was to automate/codify the object-oriented style of Simula in C.

pjmlp's sibling & child comments are also some good color on the history/context of all this. { Of course, partly it all depends on what you meant by "language power" and "automate" - I am just going by what that seemed like. }

[1] https://sparse.docs.kernel.org/en/latest/


"Littered" is the operative word: who today uses any of them? (Maybe lex and yacc.) In the 80s and 90s, viable alternatives to C were thin on the ground, so the population demographic of C coders (which included myself) differed markedly from today. That hardly any are used today tells us about the inclinations of the long tail of remaining hold-out C coders.


Even if the only success story for Rust would be mainstream adoption of lifetime checkers across languages to some extent, that would already be a victory as it managed to change the baseline of language design across the industry.

A subject that now has become even regular presence at C++ conferences and considered a must have in static analysers roadmap by all major vendors.

Rust might fizzle out in a decade, and still leave such a mark in the industry.


I have not heard of a lifetime checker in any other language, except maybe Midori.


So here is a sample,

Chapel, HPC language mostly sponsored by Intel and HPC

https://chapel-lang.org/

D programming language,

https://dlang.org/blog/2019/07/15/ownership-and-borrowing-in...

Ada/SPARK,

https://docs.adacore.com/spark2014-docs/html/ug/en/source/la...

Swift,

https://github.com/apple/swift/blob/main/docs/OwnershipManif...

ParaSail

http://www.parasail-lang.org/

Project Verona from Microsoft Research

https://www.microsoft.com/en-us/research/project/project-ver...

Project Snowflake from Microsoft Research

https://www.microsoft.com/en-us/research/publication/project...

And finally your favourite C++

"Implementing the C++ Core Guidelines’ Lifetime Safety Profile in Clang"

https://llvm.org/devmtg/2019-04/slides/TechTalk-Horvath-Impl...

Also the "Clang Static Analyzer - A Tryst with Smart Pointers" talk at 2021 LLVM Developers Meeting.

For the Visual C++ part of the story

https://devblogs.microsoft.com/cppblog/lifetime-profile-upda...

And GCC as well, although they are late to the party

https://gcc.gnu.org/wiki/DavidMalcolm/StaticAnalyzer

Finally a couple of CppCon 2021 talks that touch on the subject in various ways,

Type-and-resource safety in modern C++

Code Analysis++

Static Analysis and Program Safety in C++: Making it Real

Finding Bugs Using Path-Sensitive Static Analysis


> Chapel, HPC language mostly sponsored by Intel and HPC

Minor correction: Intel hasn't traditionally been a sponsor of Chapel (though we'd love to see that change). Chapel was pioneered at Cray Inc. and continues on even stronger at HPE after its acquisition of Cray.

-Brad


Thanks for the correction, I thought seeing them referenced in some talks.

All the best.


At least most of these don't seem to be added language features, but separate "static analyzers" that rely on inference from existing language structures.


Sounds like a lot of "contemporary automation today" to me - automation that would often apply about as well to C code bases as well as horrifically overcomplex C++ codebases.

Also, why even draw a stark contrast between "a language" and "its tooling"? As a dev, you get to use both.

What is even the line..? Almost every compiler for anything provides options. Does gcc -fsanitize=.. not count because it's not "standardized", or only because it's not activated in "typical" deployments like Rust integer overflow checks?


The line is very, very easy: if it is in the language, you can see it represented syntactically in the source code, and programmers using the language write it there. If not, not.


Clang and Visual C++ static analysis tooling can be represented on the source code via C++ attributes and blessed library types, so...


So... not in the language, but in proprietary, non-portable extensions.


With exception of C++, they certainly aren't.

C++ type system is impossible to fix while keeping backwards compatibility, so static analysis tooling is the only possible solution.


I sort of look at it as aiming at the spot where you could write the code in C with a sprinkle of C++ features, and whether to add that sprinkle or do things like hand-rolling a vtable system where you need one is a decision that will vary depending on circumstances.

Though I believe the mozilla code it's replaced was all -very- C++.


> Feature Highlights: Small, simple language

> There is no hidden control flow, no hidden memory allocations, no preprocessor, and no macros.

https://ziglang.org/learn/overview/

It tries (achieves?) to be a C without the flaws and historic ballast.


Where did you get that impression? No one has ever done that.


Here at HN.

I never read anything about Zig outside of HN.

What I remember was mostly that Zig should be easier to use than Rust, because Zig doesn't have lifetimes.


That doesn't make Zig higher level than Rust. It doesn't have lifetimes because it's low level, on a par with C ... all memory management is manual and you can abuse pointers to your heart's content.



