I wouldn't call that careless. So far, proposals to extend the for-loop have not passed the cost/benefit test. I think supporting iterators would require something like magic methods, which so far aren't a thing in Zig (language syntax doesn't interact with userland code).
The status quo for iterators is this:
var it = getSomeIterable();
while (it.next()) |item| {
...
}
IMO it's "good but not great" (not great because it adds a var, and the variable pollutes the outer scope). But the alternative is a bit of a Pandora's box.
Honestly, I know it's minor, but when I tried Zig, the fact that I couldn't write the classic `for (int i = 0; i < n; ++i)` loop was quite annoying. The while-loop workaround pollutes the scope, so you need to add scope braces every time, and I find it much easier to forget the `++i` at the end of the while loop.
I understand where it's coming from, but something like `for (i32 i in [0,n[)` would be so much nicer in my opinion.
Given that Zig does not allow variable shadowing (a choice that makes sense on its own), I find the scope leak to be very annoying in practice.
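For reference, the status-quo idiom for a counted loop needs an extra scope to contain the counter. A minimal sketch (the function and `n` are made up for illustration):

const std = @import("std");

fn countTo(n: usize) void {
    {
        var i: usize = 0;
        // The `: (i += 1)` continue expression runs after every
        // iteration (including ones ended by `continue`), so it
        // can't be forgotten at the end of the body.
        while (i < n) : (i += 1) {
            std.debug.print("{d}\n", .{i});
        }
    } // Extra braces keep `i` from leaking into the enclosing scope.
}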
This is true. There have been proposals to alter the `while` syntax to fix the scope leak, as well as proposals to extend `for` to support ranges, but none have been accepted (and they aren't very compelling in my opinion).
Honestly, I'm not sure why people think it is such a big deal. I've written a lot of Zig, mostly gamedev stuff, and I rarely use that construction. I find that the vast majority of loops are over arrays or slices.
While I don't have much experience with Zig, I agree -- from my little bit of playing around, it also hasn't been a big deal.
(The biggest thing I've hit that I want is a way to return information with an error. If that were fixed, I think Zig's error handling would be perfect.)
Loris Cro did a great talk on error handling in Zig, which also deals with how to return information with an error if you need to, and why the common case optimizes for simpler error values: https://www.youtube.com/watch?v=TOIYyTacInM
But the answer there is basically, "if you care about the payload, don't use Zig's built-in error handling." Which, obviously, you can do, but I'm not convinced it's a great answer!
Even for something as simple as parseInt, I'd argue it would be nice if it returned, along with InvalidCharacter, which character was invalid. This would enable, e.g., very precise indications in messages to the end user of what was wrong with the input.
In general, because it's currently super-ergonomic to return bare error codes but more of a pain to return errors with payloads, what you'll get in practice is no error payload even when one would be useful. That ends up producing less-helpful-than-ideal errors for end users, which is contrary to Zig's goal of enabling the creation of perfect software. :-)
Just thinking out loud: given that Zig's errors primarily let the compiler enforce that errors are handled, and exhaustively, why can't error payloads just be passed via a pointer argument to the function call? Maybe this is a dumb idea, but it's what C programs already do, except C programs don't have the safety of compiler-checked errors that Zig offers. Does the Zig compiler need to check error payloads beyond what the type system already provides? I'm not sure.
I agree with you that having such a super-ergonomic error handling system will direct most code towards a certain style, but I don't think this is contrary to Zig's goals of perfect software or simplicity. It's already such a huge win on top of C's error handling.
> Just thinking out loud: given that Zig's errors primarily let the compiler enforce that errors are handled, and exhaustively, why can't error payloads just be passed via a pointer argument to the function call?
Yeah, that's definitely a reasonable way to do it, and actually Andrew gives an example of this:
(Andrew's example is slightly different in that rather than an explicit argument, it's an optional field in one of the arguments.)
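For concreteness, a minimal sketch of that out-parameter pattern (all names here are hypothetical, not Andrew's actual code; the `for (s, 0..)` spelling is per recent Zig):

const std = @import("std");

const Diagnostics = struct {
    invalid_index: usize = 0,
};

// The caller opts into the payload by passing a pointer; the error
// itself stays a plain error code.
fn parseU32(s: []const u8, diag: ?*Diagnostics) !u32 {
    var result: u32 = 0;
    for (s, 0..) |c, i| {
        if (c < '0' or c > '9') {
            if (diag) |d| d.invalid_index = i; // payload via out-pointer
            return error.InvalidCharacter;
        }
        result = result * 10 + (c - '0');
    }
    return result;
}

pub fn main() void {
    var diag = Diagnostics{};
    const parsed = parseU32("12x4", &diag) catch |err| {
        std.debug.print("{s} at index {d}\n", .{ @errorName(err), diag.invalid_index });
        return;
    };
    std.debug.print("parsed: {d}\n", .{parsed});
}

The nice part is that callers who don't care can pass null; the less nice part is that every function that wants diagnostics needs this extra plumbing.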
So it's definitely not like crazy bad or anything, but also definitely less ergonomic than if you could directly return a payload with the error, and this is enough friction that it feels like you'll end up not having error information when you want it (both as an API consumer and eventually as an end-user looking at an error message).
But, I'm still very new to Zig, so perhaps my instincts on that are wrong. :-)
> But, I'm still very new to Zig, so perhaps my instincts on that are wrong. :-)
Maybe you're right too! Just speaking for myself here, but as a programmer and "man with a hammer", I sometimes like to think of instincts as our mental machine-learning model: it can be really well trained and give the right black-box answer, without the explanatory backstory or rationale, yet still be worth trusting.
I'm also new to Zig, and there are ways a pointer argument as error payload could go wrong. I'm following the proposal issue, and it will be interesting to see which way it goes.
I think you're spot on with your description of instincts. Which of course means sometimes they can lead us wrong, especially if we're applying them to domains very different from those they were trained on.
Yeah, it's not that common in general, I agree. I think my impression was coloured by the fact I tried Zig to implement a binary protocol deserialiser where getting the size first then iterating happened a lot.
Both forms feature "implicit flow of control", contrary to Zig's stated goals. In "while (i < 10) : (i += 1)", it's hard to understand what the heck "(i += 1)" means. The best explanation one can come up with is that it's purposely made different from C, just to be different and confuse people.
In the second case, it's "defer", coming from Go, the language which chickened out of adding normal exceptions because they're "implicit transfer of control", and LOLishly added "defer", as if it weren't one.
> Both forms feature "implicit flow of control", contrary to Zig's stated goals.
They absolutely don't. All control paths are explicitly represented by syntax (no different from the hidden goto in a while loop -- it's explicitly recognised by the while syntax); proof: you can draw all of them by just examining the syntax of the current subroutine, while knowing nothing about others. Exceptions, however, are implicit: any call, foo(), might or might not cause some control flow change in the client without there being any explicit acknowledgement of that by the client; you cannot draw all the flow paths just by examining the syntax of the current subroutine.
Sorry, but throwing an exception in a function call is equivalent to:
res, exc = fun();
if (exc) goto exception_handler;
That's the underlying model of how exceptions behave, and how they're implemented "manually" in languages with no exception handling (C, Go). You absolutely can draw that by examining the syntax of a subroutine, and it's no more implicit than "defer".
Right, both the C style and Zig's defer are explicit, as opposed to exceptions, which are implicit; it's just that Zig's error handling is less error-prone (it forces you to handle errors) and, IMO, makes the code more readable than the C style.
But no, there's a kind of continuum, and Go/Zig "defer" is already pretty high towards the "implicit flow of control" end. (I agree that exceptions are a notch higher.)
It's only C's syntax which is truly explicit. It's literally a structured, machine-independent assembler. That's why it's a golden language which is very hard to displace (it's already perfect for what it is). But like everyone else, I'm watching with popcorn as all the contenders pop up. (My humble opinion about Zig's issues on that path I've expressed here, together with other people.)
> is already pretty high towards "implicit flow of control" end
Its implicitness is zero -- there is zero information not available in the syntax of the current block, exactly as in C -- so I don't see how it can be high compared to anything. You just don't have it in C, so you're not used to it. This is exactly like an Assembly programmer who says that C's `while` is implicit because there is no explicit jmp. In fact, the third clause in C's `for` header works almost exactly like defer: you write a piece of code that isn't executed immediately after the preceding one, but is injected to the end of the block.
A language with `while` isn't any more implicit than a language with just goto; it just has another explicit control-flow construct. Same goes for defer.
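To make the comparison concrete, here is a typical defer, sketched (the file-reading helper and its size limit are made up):

const std = @import("std");

fn readAll(allocator: std.mem.Allocator, path: []const u8) ![]u8 {
    const file = try std.fs.cwd().openFile(path, .{});
    // Written here, executed at every exit from this block,
    // including the error path of the `try` below.
    defer file.close();
    return try file.readToEndAlloc(allocator, 1 << 20);
}

Every path out of the block is visible in the block itself, which is the point being argued above.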
> It's literally a structured machine-independent assembler.
Not so much once you take the preprocessor into account.
> it's already perfect for what it is
I think that was true in 1970, not today. First, we know more. Second, we write much bigger programs. Third, "structured assembly" isn't as valuable as it once was, now that even machine code is "high level." This is not to say that C could be replaced everywhere, but I think, inertia aside, it could be replaced almost everywhere.
> Its implicitness is zero -- there is zero information not available in the syntax of the current block
Ah, so your mind's window is a single block; that's what you're banging on. You know, blocks can be long too. By the end of reading a 1000-line block, you think you know how it ends. But oops, you completely forgot about some "defer", which implicitly executes at the end of the block. That's my definition of "true explicitness": all code which executes at some point is available locally.
And if you forgo that definition of explicitness, why stop at the block? Exceptions work across blocks, i.e. at the level of the entire subroutine, and they aren't much harder to reason about than "defer".
> In fact, the third clause in C's `for` header works almost exactly like defer
Good point. It's a clause in "for", so best practice is to use it for the loop-iteration expression. It's also syntax familiar to entire generations of programmers. Unlike Zig's "2nd clause in while", which is "original design" people will "thank" you for.
> it could be replaced almost everywhere.
The question is with what. IMHO, Zig isn't good enough to replace C at all. Too much NIH, again. It does too much differently. The naive thinking is of course "so that it's better", but actually it just repeats C's mistakes (ugliness in the language) and makes its own "very original" ones.
> That's my definition of "true explicitness", where all code which executes at some point is available locally.
But no control flow construct satisfies that. Jumps are the whole point, and the unit for structured control flow is the block (what's the next instruction after `break`, `continue` or `}`?). defer is just as structured, just as local, and just as explicit as C's control flow constructs.
> It's also familiar syntax to the entire generations of programmers.
True, but learning Zig takes a day; two tops. Where it differs in syntax from C, it does so for good reason, and it's not like it's a complex language where different syntax is an additional burden. C's `for` syntax just doesn't make sense (`;`, which everywhere else in the language denotes sequential composition, means something different in the `for` header). Is an additional 2 minutes of learning, in a language that has very little syntax, not worth fixing something that's unpleasant in C?
Talk about the confusing, implicit syntax of Zig: there's no visible connection between "it.next()" and "|item|"; it's implicit and you have to infer it. Compare that to other languages: "for item in it" or "while (item = it.next())".
It isn't confusing. The meaning of the syntax isn't obvious if you didn't bother to learn Zig's syntax, but it is entirely consistent with how Zig does this sort of thing.
The for loop payload syntax, where you can also give a name to the counter, is the only weirdness with it.
I'm sorry, but I've been doing programming languages for decades and am familiar with a bunch of them. There's absolutely nothing special about Zig; it's yet another wannabe proglingo. What sets it apart is a strong desire "to replace C" while being apparently "as different as possible from it".
I understand such a strategy (among many possible), and it will be fun if it pans out. The current situation, though, is that people are asking: why did that dude purposely make it harder than needed for everyone?
That's just not true. It is not only pretty exceptional in its general partial evaluation construct combined with compile-time introspection (although D and Nim have some similar features), it is the only language AFAIK where a single general partial evaluation construct is used instead of type generics, value generics, typeclasses/concepts, constexprs, macros, and conditional compilation.
Zig might not be your cup of tea, and that's fine, but it is without a doubt unique in its design and not only special, but pretty radical.
> Talk about confusing, implicit syntax of Zig
I don't think it's any more confusing than any other syntax (and less confusing than C's `for` syntax where the ubiquitous `;` is treated differently than anywhere else), even though it might not be familiar to you before learning it, but there is absolutely nothing implicit here. You might find `in` more appealing to you or more familiar than `|` but it's certainly not more explicit.
I feel like what you're complaining about is the equivalent of Calculon's critique of a screenplay: "no, I don't like the font". I am very much not a genius (check the user name) and it took me less than a minute to understand the payload operator (which IIRC was taken from Ruby or something).
Let's say `for (x in y) {}` replaced the payload operator. The payload syntax would still exist in `if` and `while` over optionals and error unions, in `catch` and `errdefer`, and in `switch` prong captures (see the sketch below).
And any that I might be forgetting. Is your suggestion that each of these should have separate syntax, thus complicating the language with one-off syntax?
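For concreteness, the same |payload| shape across those constructs; a sketch with made-up helpers:

const std = @import("std");

fn next(i: *u32) ?u32 {
    if (i.* >= 3) return null;
    i.* += 1;
    return i.*;
}

fn mayFail() !void {
    return error.Oops;
}

const U = union(enum) { int: i32, none };

pub fn main() void {
    // `if` unwrapping an optional:
    const maybe: ?u32 = 42;
    if (maybe) |value| std.debug.print("if: {d}\n", .{value});

    // `while` driving an optional-returning call (the iterator idiom):
    var i: u32 = 0;
    while (next(&i)) |item| std.debug.print("while: {d}\n", .{item});

    // `catch` capturing the error out of an error union:
    mayFail() catch |err| std.debug.print("catch: {s}\n", .{@errorName(err)});

    // `switch` prong capture on a tagged union:
    const u = U{ .int = 7 };
    switch (u) {
        .int => |v| std.debug.print("switch: {d}\n", .{v}),
        .none => {},
    }
}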
> I feel like what you're complaining about is the equivalent of Calculon's critique of a screenplay: "no, I don't like the font".
Syntax is not irrelevant to the main use of a programming language, the way a font is irrelevant to what matters in a screenplay, so, no, I don't see a similarity there.
> it took me less than a minute to understand the payload operator (which IIRC was taken from Ruby or something).
Loosely inspired by Ruby, perhaps, but definitely not taken from. Neither the positioning nor the semantics are the same.
> I feel like what you're complaining about is the equivalent of Calculon's critique of a screenplay: "no, I don't like the font"
I'm a human, and syntax is very important to me. 't broken read can syntax I. In that regard, if you purposely (or naively) misuse Comic Sans, then yes, "I don't like the font".
> (which IIRC was taken from Ruby or something).
Bingo. Ruby is not everyone's favorite language, so please be ready for people to send you rays of some hard love in that regard. Did you consider using C syntax for a C-replacement-wannabe? (Rhetorical question.)
From what you're saying, I believe you would complain about a foreign language simply because it doesn't resemble your native language enough. Zig's syntax choices are not arbitrary and great effort is made to ensure their consistency. Your complaints about the syntax aren't that there is anything inconsistent or illogical about the syntax, just that it isn't the syntax you're used to.
That's a valid reason to not want to use the language, but I don't believe it is a valid basis for criticism and certainly doesn't excuse the hostility of your comments in this thread.
> From what you're saying, I believe you would complain about a foreign language simply because it doesn't resemble your native language enough.
I assume you mean a natural language. No, I won't complain about a real language used by people. I might criticize someone inventing their own natural language, the same as I do for programming languages. And I'm very well familiar with the "we need to invent to move forward" argument. I also assume that people taking on that thankless task are aware that they will receive criticism for it.
> just that it isn't the syntax you're used to.
Not just me, but many people. I started commenting specifically because other people's feedback resonated with my own sentiment about Zig.
> certainly doesn't excuse the hostility of your comments in this thread.
I don't think there was particular "hostility", but I do indeed try to nail the issues right on the spot. Ouch! Beyond that, I allowed myself some sour comments regarding the downvoting of other people's comments (and mine) where concerns were raised with the way Zig does some things. Such downvoting could be considered hostility, but I actually consider it funny ;-).
If you go into a conlang discussion and complain about people inventing languages when there's already natural languages, I guarantee you will get downvoted and pretty much for the same reason.
> I'm sorry, but that's total nonsense. Shows that not enough good faith went into Zig design. Even in C, that could be syntactically annotated as:
> /*pragma:noinline*/
> collectNoInline(gc);
Here you say that a built-in function call with optional flags and settings is total nonsense, while hijacking comment syntax to change compiler behavior is better. You can make an argument for that, but describing it as "total nonsense" and not in good faith is definitely hostile.
Let's look at some more examples:
> Jumping thru hoops is what Zig seems to choose with its unorthodox, different-just-to-be-different syntax.
> Zig appears to have strong NIH mentality in regard to the syntax. Trying to replace C, but go out of their way trying to invent syntax just to be different from it and most other things too.
Zig's syntax isn't that unorthodox, except compared to languages that look even more like C than it does, and it definitely isn't different just to be different. Claiming such, when there are thousands of posts in discussions about syntax, is plain insulting. Can you provide even a single post where anyone has suggested that syntax be a certain way just to be different?
Zig is a lot more complex than C, but I think most of the language complexity came from filling in gaps rather than tacking things on. For example I think they have settled on having no dynamic dispatch feature (interface, trait, virtual method, etc) at the language level. The comptime feature is flexible enough that userland approaches seem to be good enough. I bet a lot of "complexity" in the future will come from competing idioms used in various third party libraries.
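For instance, the usual userland stand-in for dynamic dispatch is a hand-rolled fat pointer, which is roughly the pattern std.mem.Allocator itself uses. A sketch with made-up names (the @ptrCast/@alignCast spellings are per recent Zig versions):

const std = @import("std");

const Shape = struct {
    ptr: *anyopaque,
    areaFn: *const fn (*anyopaque) f32,

    pub fn area(self: Shape) f32 {
        return self.areaFn(self.ptr);
    }
};

const Circle = struct {
    radius: f32,

    fn areaImpl(ptr: *anyopaque) f32 {
        const self: *Circle = @ptrCast(@alignCast(ptr));
        return std.math.pi * self.radius * self.radius;
    }

    // Erase the concrete type into the two-word "interface" value.
    pub fn shape(self: *Circle) Shape {
        return .{ .ptr = self, .areaFn = &areaImpl };
    }
};

pub fn main() void {
    var c = Circle{ .radius = 2 };
    std.debug.print("area = {d}\n", .{c.shape().area()});
}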
> Zig is a lot more complex than C, but I think most of the language complexity came from filling in gaps rather than tacking things on.
I think it's important to keep in mind that with C you might end up using a bunch of extensions, complex pre-processor macros, or pragma magic to achieve essential things.
C in itself is relatively simple, but using it in practice can end up becoming quite complex.
Zig ends up being simpler in some ways by having less magic and fewer special ways of achieving things. "printf" is very magical in most C compilers, but the equivalent in Zig is nothing particularly special from either the language's or the compiler's side.
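A sketch of what that looks like in practice (std.debug.print formats via std.fmt, which parses the format string at compile time):

const std = @import("std");

pub fn main() void {
    // The format string is parsed at compile time by ordinary Zig
    // code in std.fmt, so a bad specifier or a wrong argument count
    // is a plain compile error; no -Wformat-style special casing.
    std.debug.print("x = {d}, s = {s}\n", .{ 42, "zig" });
}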
I think the only thing that truly adds complexity without being strictly necessary is the async stuff. But you can argue that async IO is becoming a really essential thing for systems programming, and using it without explicit language support is a nightmare.
"printf" isn't magical, it's a standard va_args function. There's plenty of ugliness in the implementation [1] (so many macros!) but that's to be expected from something that has to simultaneously parse a format string and format output. The only magic is around format string checking.
Also the compiler is likely to optimise most calls to printf, transforming them into calls to other functions like puts [0], or apparently [1] even to fwrite. Probably doesn't count as magic, but almost.
Having just learned it, I'd say async is not really complex so much as "a thing that needs to be explained to you", kind of like how pointers need to be explained to you. Sadly I don't see any good resources on that yet.
Disagree, async (edit: in .NET at least) is pretty complex. For instance, from my experience it seems few C# programmers know that it's possible to deadlock if you get it wrong. [0][1] Plenty of people get confused by the basics, too. [2][3]
Zig is not .NET. async in Zig is simple; you just need to understand a few things: what a frame is, what the difference is between a stack frame and a heap frame, what it means (at the machine level) to jump between frames, and the fact that you can't do it in C. That's it!
It really is just a control-flow structure in Zig (which is why it's a keyword).
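A minimal sketch of those moving parts, using the async syntax of the 0.8/0.9-era compiler (compute() is made up; treat the exact spelling as version-dependent):

const std = @import("std");

fn compute() i32 {
    // A real async function would `suspend` somewhere in here.
    return 42;
}

pub fn main() void {
    // `async` reserves a frame for compute(); this one lives on
    // main's stack, but a frame can just as well be heap-allocated.
    var frame = async compute();
    // compute() never actually suspends, so we can assert that
    // awaiting it won't suspend either:
    const result = nosuspend await frame;
    std.debug.print("{d}\n", .{result});
}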
C is unusual though in that it's a minefield of undefined behaviour. It's very easy to think you truly understand the language but to have no real understanding of its many curious rules around undefined behaviour. You can't take a try it and see attitude, you need to be a language-lawyer.
Of course, even if you understand the rules you'll still accidentally write code that invokes undefined behaviour, which is much of the reason languages like Zig exist.
The difference is that Zig tries to check for them at compile time and, if you compile with safety checks, also at runtime; however, if you compile with ReleaseFast (which I assume most people will use in production), those runtime checks are turned off and the undefined behavior still exists.
Having your account hacked is a disastrous consequence though. And it's likely that the need for performance is not on the CPU side of things but on the GPU, so ReleaseSafe should be good enough.
Right, I hadn't meant to imply Zig is free of UB. It aims to improve on C's wild-west UB rules not by having no UB, but by having only a manageable dose of it, and supporting good optional runtime checks.
Zig's approach is essentially that of Ada. You can ask the Ada compiler for runtime checks, or promise the compiler that your code is free of undefined behaviour and have your code run at the speed of light (C), or go haywire if you got it wrong. Sadly, C is less suited to runtime checks: arrays are dealt with through raw pointers, so range checks aren't easy for the compiler to add automatically. You can, though, ask GCC to generate checks (trap-on-failure) for things like dereferencing NULL, or signed arithmetic overflow.
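To make the build-mode distinction concrete, a Zig sketch (std.math.add is the standard library's checked-addition helper; big() exists only to keep the value runtime-known):

const std = @import("std");

fn big() i32 {
    return std.math.maxInt(i32);
}

pub fn main() void {
    const a = big(); // runtime value: maxInt(i32)

    // `a + 1` here would panic with "integer overflow" in Debug and
    // ReleaseSafe builds, and be undefined behaviour in ReleaseFast,
    // exactly like C.

    // Wrapping arithmetic is defined in every build mode:
    std.debug.print("wrapped: {d}\n", .{a +% 1});

    // Checked arithmetic returns an error instead of trapping:
    if (std.math.add(i32, a, 1)) |sum| {
        std.debug.print("sum: {d}\n", .{sum});
    } else |err| {
        std.debug.print("error: {s}\n", .{@errorName(err)});
    }
}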
The most convenient way is probably to use a compiler builtin https://gcc.gnu.org/onlinedocs/gcc/Integer-Overflow-Builtins...; if you want to be portable, the next easiest way is to use a wide enough type (e.g. add or multiply two 32-bit numbers into a 64-bit one and verify the result is inside [INT_MIN, INT_MAX]). Otherwise, you can either do a pre-condition check (for addition, overflow occurs if a > 0 && b > 0 && a > INT_MAX-b || a < 0 && b < 0 && a < INT_MIN-b) or work with unsigned integers and check for wraparound after the operation. Finally, both clang and gcc have options to check for signed integer overflow at runtime (-fsanitize=signed-integer-overflow).
Of course, in practice this is too much effort for most people most of the time, so actual deployed C and C++ code is full of undefined behavior due to integer overflow. This paper has a great overview:
The overflow happens in the computation; you don't typically detect it by checking the variable into which the result is saved (although that approach can work if you're using an unsigned integer type, which wraps around on overflow). You'd normally perform the check before the addition (or whichever operation risks overflow). This can be fiddly to get right, but it's possible.
Also, the maximum value that can be represented in a variable of type uint32_t is (2^32) - 1, not 2^32.
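A sketch of that pre-condition check, transcribed into Zig to match the other examples in this thread (the logic is identical in C):

const std = @import("std");

// Overflow would occur iff the operands have the same sign and
// their magnitude exceeds the remaining headroom.
fn addWouldOverflow(a: i32, b: i32) bool {
    const max = std.math.maxInt(i32);
    const min = std.math.minInt(i32);
    return (a > 0 and b > 0 and a > max - b) or
        (a < 0 and b < 0 and a < min - b);
}

pub fn main() void {
    std.debug.print("{}\n", .{addWouldOverflow(std.math.maxInt(i32), 1)}); // true
    std.debug.print("{}\n", .{addWouldOverflow(2, 3)}); // false
}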
Chromium, OpenSSL, the Linux kernel, the Windows NT kernel, have all suffered from security vulnerabilities due to undefined behaviour. We can bet they will continue to suffer from such issues. It's not something you can avoid simply by being competent and careful.
edit: As lmm says, it's likely you have UB issues in your code you aren't aware of. That's not quite the same thing as having issues in your code due to not being a good enough language-lawyer. I've resolved some very subtle issues that found their way into a 'serious' C++ codebase, and I didn't spend that long in the C++ world. In most languages those issues simply couldn't have happened in the first place.
> the more bugchecked and field tested the code is, the more obscure any bug that surface is?
Right. A battle-tested codebase only has subtle errors, as the obvious ones will all have been fixed. An immature codebase has subtle errors and more obvious ones.
> GCC compiles to alot of architectures. I have a hard time imagining any modern language compiling to all those platforms without quirks in practice.
Compiler bugs are a separate issue from undefined behaviour and surprising language subtleties. With mature compilers they're pretty rare, but they do happen.
JavaScript is a good example. There's no undefined behaviour in JavaScript. That's vitally important given that JavaScript engines have to be able to run untrusted code. If JavaScript code is able to cause undefined behaviour, that's a serious security issue in the engine. Such bugs do happen, of course, but they aren't all that common. Generally, JavaScript runs fine regardless of whether you're running on x86, AMD64, or AArch64. Same goes for Java.
(I admit I'm ignoring the possibility of a constrained/contained kind of undefined behaviour where the JavaScript context might see things go haywire but the process containing the JavaScript environment is unaffected.)
How do you know? One of the reasons they're so insidious is that code that hits them tends to work fine until it gets compiled with a newer version of the compiler. E.g. signed integer overflow did exactly what you expect in most compilers until fairly recently.
> E.g. signed integer overflow did exactly what you expect in most compilers until fairly recently.
How recently? Both gcc 4.1.2 (2007) and clang 3.0.0 (2011) optimize `x+1 > x` to true for a signed int `x` on -O1. And it probably goes way back; these are just the oldest compilers I found on godbolt.
Ah, point taken, but that's within the bounds of what many people expect; propagating the fact that the overflow is "impossible" to rearrange earlier control flow is more surprising and more recent.
> Ah, point taken, but that's within the bounds of what many people expect;
The thing is it's very hard to draw the line once you go that route. Different people expect different things from undefined behavior. The best thing is to not expect anything sane. And if you are unhappy about certain undefined behaviors in the standard, then it's better to push the standard to define more behavior. Certain unnecessary undefined behaviors get resolved with newer standards, although I would expect significant pushback on defining the behavior of signed integer overflow.
I understand there's a good chance the next standard will specify that signed integer overflow results in an unspecified value, which would match the behaviour of older compilers and what (IME) most programmers tend to expect.
Yeah, that is true. C compilers have some strange gotchas that you need to memorize, but my main point is that those problems, at least in my projects, are minuscule compared to off-by-one out-of-bounds array accesses or dereferencing null pointers.
I agree with you here, but even these two categories of runtime errors are much more painful in C/C++ than in most other languages.
As I mentioned elsewhere in the thread, you can ask gcc to trap if your code is about to dereference NULL, but the compiler can't easily detect all instances of out-of-bounds array access, due to the way arrays and pointers work in C. I believe Valgrind can help detect out-of-bounds errors at runtime, but in most languages you don't need a sledgehammer like Valgrind to find these common errors.
We're agreed. I use it for my personal C++ projects. It's really quite awful (I've ranted about it on HN more than once [0][1]) but it's still the least bad choice.
It's good to see the project soldier forward! Some things I am excited about for the future:
- Continued progress on the new "stage2" compiler, which is written in Zig itself (long promised, work started in earnest I think about six months ago). There will be an optional non-LLVM backend this time, which is good news for build times. Even in debug mode, most of build time currently is spent waiting for LLVM.
- Incremental compilation with live reloading (in-place binary patching). The stage2 compiler is being written with this feature from the start.
- I can't find the link, but I believe the plan for stage1 (the current compiler written in C++, with all its warts) is to simply delete it. Since the new compiler will also support compiling to C, the devs will simply have stage2 compile its own source into C at some point, and that will be the stage1 from that point forward.
So did I understand correctly? The "bootstrap" stage, which is best kept in C (for the targets which don't have LLVM), will just be the output of the compiler compiling itself to C, and that will be regularly produced and pushed as the compiler grows? And that's one of the steps to make it LLVM-independent?
P.S. Amazing work up to now! Kudos! I'm a fan just for the fact that it does everything other language designers ignore, like designing the language to both compile and link fast. Always-available cross-compiling is a masterpiece. I guess that feature wouldn't have to be present on the platforms which don't have LLVM, if the language is to work on those too?
It's pretty amazing to see this project still going. I remember Sauerbraten more because of its unique engine. Maps are basically a big octree with something like marching-cubes applied, and can be edited in realtime (and in multiplayer).
The developers have many notable side projects in the gaming space, too. Aardappel has a programming language, Lobster[0]. And eihrul created the ENet library for UDP networking[1]. I think Sauerbraten itself uses ENet.
I don’t know game engine tech, but I watched this[0] (which was awesome), and was immediately reminded of BZFlag[1][2], though the world looks more visually sophisticated.
The graphics haven't changed significantly. It looks like the biggest addition is new maps, which the community still seems to be quite involved in. There is a fork of the Cube 2 engine intended to introduce modern graphics, called Tesseract.
It's common to take map geometry from a level editor and process the polygon soup into some kind of space partitioning tree structure, but here the octree is manipulated more directly. The idea is to accept some limitations on what shapes can be created, for the sake of better real-time editing performance.
The original Cube engine is almost as old as Quake III: Team Arena, which can do more free-form geometry and has pretty light-maps, but ‘compiling’ these maps was a lengthy, compute-intensive task at the time, which made online multiplayer in-game level editing impossible.
The Quake 1 and 2 model formats were very similar; the interpolation was a rendering feature. After the Quake 1 engine source was released in late 1999, the interpolation was quickly added by fans (although it wasn't as easy as it sounds: IIRC the original Quake 1 source (not QuakeWorld) didn't track entities across frames on the client side, so that had to be added first).
The main difference between the two model formats was how they encoded vertex coordinates. They both stored X, Y, Z coords as one byte each. But MDL (Quake 1's format) had a uniform scale/offset for transforming these into the final coordinate space, whereas in MD2, each animation frame had its own scale and offset. This seems like an upgrade, but when combined with interpolation it could also result in a pretty ugly "vertex swimming" (jiggling) effect when you tried to portray subtle movements, like the idle anims for the player's weapons.
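Roughly, decoding one compressed vertex looks like this; a sketch with made-up struct and parameter names (@floatFromInt is the spelling in recent Zig):

const std = @import("std");

const CompressedVertex = struct { x: u8, y: u8, z: u8 };

// MDL has one global scale/translate pair; MD2 stores a pair per
// animation frame, which is where the "swimming" comes from.
fn decode(v: CompressedVertex, scale: [3]f32, translate: [3]f32) [3]f32 {
    return .{
        @as(f32, @floatFromInt(v.x)) * scale[0] + translate[0],
        @as(f32, @floatFromInt(v.y)) * scale[1] + translate[1],
        @as(f32, @floatFromInt(v.z)) * scale[2] + translate[2],
    };
}

pub fn main() void {
    const v = CompressedVertex{ .x = 255, .y = 0, .z = 128 };
    std.debug.print("{any}\n", .{decode(v, .{ 0.1, 0.1, 0.1 }, .{ -16, -16, 0 })});
}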
One of the many things I admired about Quake is that there was a pretty uniform scale of detail to everything. Nothing had higher polygon detail, texture resolution, or animation rate compared to anything else in the world. Everything looked very solid and consistent because of that. Quantized vertex coords were one of those tricks that seem restrictive, but it didn't hurt them with the game they designed.
While we're talking about clever quantizing, we should mention the vertex normal encoding. In MD2 (IIRC; not sure about Quake 1's MDL), each vertex normal was stored as a byte which indexed into a pre-established array of unit vectors distributed more or less uniformly around a sphere. It was a creative way to have good-enough per-frame normals in a tiny amount of space, without forcing the engine to do any painfully slow per-frame normal generation (with the floating-point division and square root that entailed).
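The mechanism, sketched with made-up names (the real MD2 table, anorms.h, has 162 roughly uniform unit vectors; this 6-entry table only illustrates the idea):

const std = @import("std");

const normals = [_][3]f32{
    .{ 1, 0, 0 }, .{ -1, 0, 0 },
    .{ 0, 1, 0 }, .{ 0, -1, 0 },
    .{ 0, 0, 1 }, .{ 0, 0, -1 },
};

fn decodeNormal(index: u8) [3]f32 {
    // One byte per vertex per frame, and no sqrt/divide at load time.
    return normals[index % normals.len];
}

pub fn main() void {
    std.debug.print("{any}\n", .{decodeNormal(3)});
}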
This is fun to mess around with. It would be nice if you could select "climate" (like daily weather or even season) as an additional input. It keeps adding puffy vests and jackets over an already warm shirt. Maybe loosen up the top/outerwear distinction? e.g. a flannel shirt could also serve as outerwear.
You can set your weather if you click the gear icon on the top right.
If you set it to 'use current' it will update the weather every time you're on the site.
We're working on the layering aspect which will have flannels and cardigans as outerwear layers.
Cool, I missed that :) The temperature setting does seem to make it stop picking puffy jackets, although it's still coming up with some pretty heavy outfits. I'm sure that's just a matter of simple tweaking.
Will keep an eye on this, it's nice even if it picks some crazy outfits. At least it gets you thinking outside the box.
This is a really nice system. Quake used it back in 1996 to fit the game in 8MB. They called it a "hunk"[0] and it worked pretty much exactly as you said.
The main drawback I find is that you can't use some generic data structure implementations that expect to be able to free memory out of order, unless you're fine with leaking before the final cleanup (if integrated into a generic allocator interface, the "free" method will probably be a no-op). For example, maybe you need a temporary hashmap to help with parsing a text file. It can be interesting to come up with implementations that work without freeing out of order.
Of course, you can always opt to use another, more general purpose allocator, on a block of memory retrieved from the hunk (see Quake's "zones").
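Zig's standard library ships this pattern directly; a minimal sketch (allocator-API spelling per recent Zig; older versions pass `&fba.allocator` instead):

const std = @import("std");

pub fn main() !void {
    // Hunk-style allocation: hand out memory from one big block,
    // treat individual frees as (mostly) no-ops, reclaim at once.
    var block: [64 * 1024]u8 = undefined;
    var fba = std.heap.FixedBufferAllocator.init(&block);
    const allocator = fba.allocator();

    var map = std.StringHashMap(u32).init(allocator);
    try map.put("temp", 1); // interior frees just "leak" into the block

    // ... use the map while parsing ...

    fba.reset(); // final cleanup; invalidates everything allocated above
}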
First: I see my pregnant sister nearby, I go back to work and then go to sleep. I wake up, and she is standing over my bed. I ask her about the baby, and she says that it's a boy, six months old, and it has the same name as me. I ask if the baby is me, and she says yes. She says she's taking him home. I say goodbye but she doesn't leave. The game then starts just echoing back what I type, until I type "stab sister", to which she replies "---- on me" (censored by the game). I keep trying to type things (scream, rip hair out) but the game starts just giving me fragments of sentences like " on the wall". Then eventually just " !" and then " ". I type "run out of house" and it says "Get out".
Second: A young girl approaches me, and walks past. I get a few lines about how empty the place is, perhaps it's been this way forever, "it would be strange if someone had come here before you". I lie down on the ground. The girl comes back and stares at me. She asks me if I'm a ghost. I look at my hands and they're glowing red hot. "This must be what ghosts feel like". I start to sweat and shake. I dunk my hands in a nearby stream of water, which causes them to become white hot. I scream in pain, and the girl simply smiles at me. She pulls me to her. I push her away. She runs away and comes back singing happily. My hands start to burn again, the girl laughs as I scream. I close my eyes and everything goes black. Then I open my eyes and she's there with a pitchfork. She starts to pull weeds out of my arms(!). She throws me to the ground and laughs. I type a command to fight back and it says that I overpower her, she falls unconscious and I throw her on a horse and "ride off into the night".
I think this is a great idea. I maintain a personal extension as well. I've experimented with a few ideas for augmenting browser experience, but so far I'm mostly just hiding obtrusive elements on various websites (mostly by CSS, with some JS where sites obfuscate class names).
Even that would have taken multiple third-party extensions to accomplish, and probably would have required giving very broad permissions to them. Worth noting that Stylish, one of the extensions I might have used, was compromised with spyware a few years ago.
As for publishing, I don't publish mine because it forms a kind of personal fingerprint. The more I add to it the more personal it gets. I rather support the idea of everyone having their own custom extension. You can really improve your online quality of life and with the WebExtension API it's pretty painless.
I use Stylus to add custom CSS to sites and Violentmonkey (https://violentmonkey.github.io/) to add custom JS. They both make it fairly easy to start writing code for a new site. However, there is no easy way to set up both custom CSS and custom JS for a single site – a custom browser extension like you made could potentially support that better.