Yeah, this seems to be a common thing nowadays, although often with the value cited as "simplicity". I've always found it a bit odd, because it seems to me like there are tradeoffs: making things at one level of granularity more clear or simple (or whatever you want to call it) comes at the cost of making things less clear and simple if you zoom in or out a bit on what the code is doing. Assembly is more "clear" in terms of what the processor is doing, but it makes the overall control flow and logic of a program less clear than a higher-level language. Explicitly defining when memory is allocated and freed makes the performance characteristics of a program more clear, but it's "ceremony" compared to a garbage-collected language that doesn't require handling that manually by default.
I think my fundamental issue with this sort of prioritization is that there's a lot of value in being able to jump between different mental models of a program, and whether something is clear or absolutely ridden with "ceremony" can be drastically different depending on those models. By optimizing for exactly one model, you're making programs written in that language harder to think about in pretty much every other model, while quickly hitting diminishing returns on how useful it is to make that one level of granularity even clearer. This is especially problematic when trying to debug or optimize programs after the initial work to write them is complete; having it be super clear what each individual line of code is doing in isolation might not be enough to help me ensure that my overall architecture isn't flawed, and similarly, having a bunch of great high-level abstractions won't necessarily help me notice bugs that can live entirely in one line of code.
I don't think these are specific use cases that a language can just consider to be outside of the scope in the same way they might choose not to support systems programming or DSLs or whatever; programmers need to be able to translate the ideas of how the program works into code and then diff between them to identify issues at both a macro and micro level regardless of what types of programs they're working on.
[author here] That’s a very good point - "not ceremony" was poorly phrased.
ROX does introduce more explicitness, and with it, more ceremony. The goal isn't to reduce keystrokes; it's to reduce hidden behaviour.
A better framing would be: ROX prioritizes clarity over convenience. Explicitness may cost more keystrokes, but it eliminates hidden behavior. [README updated]
My whole CAD experience is with OpenSCAD, which apparently uses the same kernel. Honest question -- what is the alternative? I would assume the vast majority of 3D model processing relies on triangle meshes...
Meshes are an output, not a source, and not a good internal representation.
Meshes are only a lossy approximation of the actual geometry, and the errors accumulate as later features reference earlier ones.
A simple example is a line tangent to an arc. The line will only be the correct distance from the center of the arc at one point (one angle) per segment.
In OpenSCAD you don't notice the problem because, since it's code and you have variables and programmatic ways of expressing relationships, you would usually not try to derive one thing from another geometrically; instead, both things would reference a common ancestor variable. But that's a technical implementation detail that works around the problem by just not doing the thing that doesn't work. The problem is still there: a valid geometric constraint would produce invalid results.
But then the author used the source directly, as noted in the GitHub footnote:
>I ended up directly using solvespace's solver instead of the suggested wrapper code since it didn't expose all of the features I needed. I also had to patch the solver to make it sufficiently fast for the kinds of equations I was generating by symbolically solving equations where applicable.
I've built something very similar (also based on triangle meshes, but in TS), and while it wouldn't work for say 3D printing, my target is game object modeling. I guess people have specific use cases in mind when referring to "CAD".
One example of common terminology that disagrees with that somewhat is "CAD/CAM", where the design component is clearly distinct from the manufacturing component.
I do agree that historically, software aimed at building 3d models for games/animations and other digital use was usually called modeling and not cad. I'm thinking of software like 3D Studio Max back in the 90s here.
I notice though that the Wikipedia article for CAD says: "This software is used to increase the productivity of the designer, improve the quality of design, improve communications through documentation, and to create a database for manufacturing."
The personal distinction I use is about measurements. While you may model to a specific scale for use in 3D gfx (game engine/animation/vfx), you cross over from "modeling" to "CAD" as soon as you are creating geometry with specific real-world measurements (probably for manufacturing or engineering reasons, because that's when it matters most).
Like, I can model a table that is the right size and looks like it will not tip over for my game, but I am going to CAD that table to run a stress sim and make the plans for building it for real.
Though I'd still call the action of doing the building in the CAD software "modeling"... so I dunno, language is weird.
So: software that lets you work accurately with measurements and real units == CAD (Fusion 360); software that just makes geometry == modeling (Blender).
But if you wanna get really confused, look at Plasticity, an app targeted at "modeling" that uses a CAD engine and sells itself as "CAD for artists". It has real-scale measurements and everything too.
I find this type of science is infuriating. Monotropism as a theory of autism or adhd is equivalent to saying that “tendency to focus on a few things” is a theory of autism or adhd. You’re describing the symptoms, which explains almost nothing. A theory of autism would explain the underlying physical causes of the behavior.
Eh, Newton's theory of universal gravitation is still a theory, even if he had no access to particle physics, and even if the theory of general relativity (among others) later largely supplanted it. It's also still useful (and used!) in most real-life engineering situations. It's rare that time dilation matters in, say, civil engineering.
That folks with Autism and ADHD may have large portions of their symptoms occur because they focus too much on some specific things, to the detriment of others - like emotional well being/regulation - can still be falsifiable (better than most psychiatric theories for sure!) and useful clinically.
Personally, it lines up with what I’ve seen and experienced.
That there is another (perhaps chemical, or brain structure) theory too doesn’t necessarily change that!
I don’t think it’s falsifiable until there are autism diagnostics that aren’t behavioral. Right now, they’re 100% behavioral, which means that any theory that tries to cluster autism symptoms is hopelessly tainted by a recursive definition -> diagnosis -> definition cycle.
If intervention x results in y behavior changing in z% of the population (or not), would that count? I think it would.
Just because it doesn’t describe 100% of situations or work 100% of the time doesn’t mean it couldn’t be shown to produce useful predictions and work or not.
And yes, that isn’t particle-physics-type falsifiable, but it is medical-treatment falsifiable. Kinda. Newton’s theory never correctly predicted Mercury’s behavior, but was still super useful, eh?
They're Newton's laws of universal gravitation, not a theory. The difference matters because Newton's laws describe what happens in nature without positing a mechanism. Einstein provided the mechanism via general relativity, which is a theory.
I've noticed this too. The RLHF seems to lock the models into one kind of personality (which is kind of the point, of course). They behave better, but the raw GPTs can be much more creative.
Poetically, GPT-1 gave the more compelling answer to every question. Just more enjoyable and stimulating to read; far more enjoyable than the GPT-4/5 wall of bullet points, anyway.
I must be missing something here. How would this help predict interpreter dispatch? Those won’t be a function of previous branch history or pc, which may very well be independent of the next opcode. They’d be a function of state in memory or registers.
Interpreters are just like normal programs, but splatted out a bit. In particular, they have branches and loops just like normal programs. The challenge for processors is that these high level constructs are far apart and dispatched through an interpreter loop, which obfuscates them. Being able to reach further back in history lets you recover this kind of information "through" the intervening bits.
If your interpreter is interpreting a program with unpredictable branches, of course no predictor will magically make your interpreter get branches better predicted than an equivalent compiled program will.
The question here is about all the other branching the interpreter will do. I.e., even if you have an unpredictable `if (a+b < 0)`, there's still the dispatching to the "load-variable" and "add" and "load-constant" and "less-than" and "do-branch" opcodes. Those still benefit from being predicted, and they very well can be if the sequence is repeated in a loop (despite still containing the single unpredictable branch), or potentially even if you just have a common pattern in the language (e.g. comparison opcodes being followed by a branch opcode).
In a hot loop, the next opcode can be predicted quite well from the history of previous opcodes executed, especially once you have a couple of iterations available in your history. And the opcodes executed in an interpreter are generally equivalent to the dispatch branch targets.
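A toy example (mine, not from the thread) makes that repetition concrete. In this countdown interpreter, every loop iteration dispatches the exact same PUSH, SUB, DUP, JNZ sequence, so the dispatch branch's target is almost perfectly correlated with recent history; only the final, loop-exiting JNZ is hard to predict:

```cpp
#include <cstddef>
#include <vector>

enum Op { PUSH, SUB, DUP, JNZ, HALT };

// Count down from n to 0. Bytecode layout:
//   0: PUSH n   2: PUSH 1   4: SUB   5: DUP   6: JNZ -> 2   8: HALT
// Every iteration dispatches the same PUSH,SUB,DUP,JNZ pattern, so a
// predictor with enough history nails the dispatch branch's target.
int run(int n, long& dispatches) {
    std::vector<int> code = { PUSH, n, PUSH, 1, SUB, DUP, JNZ, 2, HALT };
    int stack[8]; int sp = 0; std::size_t pc = 0;
    for (;;) {
        ++dispatches;
        switch (code[pc]) {  // the dispatch branch the thread is about
        case PUSH: stack[sp++] = code[pc + 1]; pc += 2; break;
        case SUB:  --sp; stack[sp - 1] -= stack[sp]; ++pc; break;
        case DUP:  stack[sp] = stack[sp - 1]; ++sp; ++pc; break;
        case JNZ:  pc = stack[--sp] ? (std::size_t)code[pc + 1] : pc + 2; break;
        case HALT: return stack[sp - 1];
        }
    }
}
```

Each iteration costs 4 dispatches (4n + 2 in total, including the setup PUSH and HALT), and the target pattern repeats exactly; that regularity is what a long-history predictor can exploit.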
"very well may be" but oftentimes isn't. Branch history does in practice do a very good job of predicting what target you're going to take for an indirect branch.
This generates seed_seq with 19968 bits of random data, which is enough for the 19937 bits of Mersenne Twister internal state.
Note that 19968 bits of random data is overkill; something like 128 or 256 bits would probably be enough for practical purposes. But I believe there is no real need to limit the amount of data extracted from a random source: modern operating systems are pretty good at generating large amounts of random data quickly. If it is a concern anyway, just change 624 to 4/8/16/32 for 128/256/512/1024 bits of entropy. In practice, I don't think you'll notice a difference in either randomness or initialization speed.
edit: also, if performance is a concern, consider changing mt19937 to mt19937_64, which is the 64-bit variant of mt19937 that is incompatible (generates different numbers) but is almost twice as fast on 64-bit platforms (i.e. most platforms today).
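For reference, fully seeding the state as described above might look like this (a sketch, and the function name is my own; note that `std::seed_seq` applies its own mixing, so the engine state isn't literally these raw bits):

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>
#include <random>

// Seed all ~19937 bits of mt19937 state with 624 x 32-bit words from the OS,
// instead of the single 32-bit word the usual one-liner provides.
std::mt19937 make_seeded_mt19937() {
    std::uint32_t data[624];   // shrink to 8 for ~256 bits, if extraction cost worries you
    std::random_device rd;
    std::generate(std::begin(data), std::end(data), std::ref(rd));
    std::seed_seq seq(std::begin(data), std::end(data));
    return std::mt19937(seq);  // std::mt19937_64 can be seeded the same way
}
```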
If you're aware/concerned about seeding, you probably aren't using the C++ std prng (mt19937) anyway -- other prngs have desirable properties like vastly smaller state, better performance, or cryptographic security.
The Mersenne Twister (MT) was one of the best engines and was the default in many other languages/packages too. See the "Applications" section on Wikipedia - https://en.wikipedia.org/wiki/Mersenne_Twister
The author identified distribution problems with the 32-bit versions of MT (I am not sure whether similar problems exist with its 64-bit versions) and proposed a different one named the "permuted congruential generator" (PCG), which has now been adopted as the default by many languages/packages - https://en.wikipedia.org/wiki/Permuted_congruential_generato...
As you can now appreciate, the subject is mathematically complicated, and the defaults chosen by the language/package implementer become the "most commonly used" and hence the reference case. While this is good enough for most "normal" applications, if you are doing any special simulations (Monte Carlo or otherwise) and/or specific numerical computations, it is your responsibility to understand what it is that you need and program accordingly, using the various options (if available) or rolling your own.
> (i am not sure whether similar problems exist with its 64-bit versions) and proposed a different one named "Permuted Congruential Generator (PCG)"
The 64-bit version might be a bit faster (for certain workloads, on 64-bit hardware) than the 32-bit version, but still wastes the same space and has the same mathematical flaws.
PCG is still not perfect (128-bit math hurts, though the new DXSM variant at least reduces that to 128x64), but its mathematical properties are nicer than the xor* family (its main competitor), and both families are miles ahead of any other RNG out there.
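For the curious, the 32-bit PCG generator being discussed is tiny. This is a sketch following O'Neill's minimal reference implementation (the XSH-RR output permutation applied to a 64-bit LCG); the struct name is my own:

```cpp
#include <cstdint>

// Minimal PCG32 (XSH-RR): a 64-bit LCG step plus an output permutation.
// `inc` selects the stream; it must be odd, which the constructor enforces.
struct Pcg32 {
    std::uint64_t state, inc;
    Pcg32(std::uint64_t seed, std::uint64_t stream)
        : state(0), inc((stream << 1u) | 1u) {
        next();          // reference init sequence: step, add seed, step
        state += seed;
        next();
    }
    std::uint32_t next() {
        std::uint64_t old = state;
        state = old * 6364136223846793005ULL + inc;  // 64-bit LCG transition
        std::uint32_t xorshifted = (std::uint32_t)(((old >> 18u) ^ old) >> 27u);
        std::uint32_t rot = (std::uint32_t)(old >> 59u);
        return (xorshifted >> rot) | (xorshifted << ((32u - rot) & 31u));  // random rotate
    }
};
```

The whole generator is one 64-bit multiply-add plus an output permutation, with 64 bits of evolving state and a per-stream increment.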
It is understanding the non-trivial statistical properties (even at a simple conceptual level) that is of paramount importance. PRNG is one of the most difficult subjects in Numerical Computation and has nothing whatever to do with any language/package/library etc.
Even today, caution is sometimes required, as illustrated by the following warning in the International Encyclopedia of Statistical Science (2010).
> The list of widely used generators that should be discarded is much longer [than the list of good generators]. Do not trust blindly the software vendors. Check the default RNG of your favorite software and be ready to replace it if needed. This last recommendation has been made over and over again over the past 40 years. Perhaps amazingly, it remains as relevant today as it was 40 years ago.
It was better than the bad, C interface LCG rand(), I guess. (There are LCG parameters that make for ~objectively better PRNGs than MT, but rand()'s parameters aren't great and its state is too small.)
I've found it easier to write my own PRNG than to use the std one. Code using the std PRNG ends up about as buggy as my own implementation, so the trade-off is reasonable. I usually need non-cryptographically-strong PRNGs, so xorshift128+ is sufficient.
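For reference, xorshift128+ really is only a few lines. A sketch after Vigna's reference code (struct name mine; the two state words must not both be zero):

```cpp
#include <cstdint>

// xorshift128+ (Vigna): 128 bits of state, very fast, fine for
// non-cryptographic use. Never initialize the state to all zeros.
struct Xorshift128Plus {
    std::uint64_t s[2];
    Xorshift128Plus(std::uint64_t a, std::uint64_t b) : s{a, b} {}
    std::uint64_t next() {
        std::uint64_t x = s[0];
        const std::uint64_t y = s[1];
        s[0] = y;
        x ^= x << 23;                              // shift-xor scramble
        s[1] = x ^ y ^ (x >> 17) ^ (y >> 26);      // new second word
        return s[1] + y;                           // the "+" in xorshift128+
    }
};
```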
I used it because it was recommended by Stephan T. Lavavej, maintainer of Visual Studio's C++ Standard Library, in his "rand() Considered Harmful" talk, back when <random> was introduced. See 11m30s. https://youtu.be/LDPMpc-ENqY?t=10m50s
There's std::linear_congruential_engine, but it doesn't provide you with any (good) default parameters; only the bad ones from historical rand() implementations (minstd_rand0 / minstd_rand).
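You can plug better parameters into the engine yourself, though. For example (my choice of constants, not anything the standard names for you), Knuth's MMIX multiplier and increment with modulus 2^64, which `linear_congruential_engine` spells as `0`:

```cpp
#include <cstdint>
#include <random>

// 64-bit LCG with Knuth's MMIX constants; a modulus template argument of 0
// means 2^w for a w-bit unsigned type, i.e. 2^64 here.
using mmix_rand = std::linear_congruential_engine<std::uint64_t,
    6364136223846793005ULL,    // multiplier a
    1442695040888963407ULL,    // increment  c
    0ULL>;                     // modulus    m = 2^64
```

Usage is just `mmix_rand g(12345); g();`. It's still an LCG (the low bits are weak), but with full 64-bit state it's far better than the minstd defaults.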
This is cool for sure. I think you’ll ultimately find that this can’t really be faster than modern OoO cores because python instructions are so complex. To execute them OoO or even at a reasonable frequency (e.g. to reduce combinatorial latency), you’ll need to emit type-specialized microcode on the fly, but you can’t do that until the types are known — which is only the case once all the inputs are known for python.
You're right that dynamic typing makes high-frequency execution tricky, and modern OoO cores are incredibly good at hiding latencies.
But PyXL isn't trying to replace general-purpose CPUs — it's designed for efficient, predictable execution in embedded and real-time systems, where simplicity and determinism matter more than absolute throughput.
Most embedded cores (like ARM Cortex-M and simple RISC-V) are in-order too — and deliver huge value by focusing on predictability and power efficiency.
That said, there’s room for smart optimizations even in a simple core — like limited lookahead on types, hazard detection, and other techniques to smooth execution paths.
I think embedded and real-time represent the purest core of the architecture — and once that's solid, there's a lot of room to iterate upward for higher-end acceleration later.
Java is statically typed and a lot saner than Python, and JavaCard is a fairly restricted subset. Apparently real cards don't typically support garbage collection.
IMO JavaCard doesn't really make sense either. There's clearly space for another language here, though I suspect most people would much rather just use Rust than learn a new language.
Sure, but for embedded use cases (which this is targeting), the goal isn't raw speed so much as being fast enough for specific use cases while minimizing power usage / die area / cost.
> The language forces clarity — not ceremony.
I find this statement curious, because a language like this, without the ability to build abstractions, forces exactly the opposite.