Hey, thank you for the viewpoint. I'm a career JS/TS programmer myself, and I do appreciate that the lived reality is quite varied.
The partial resolving and haphazardness of JSON data usage shouldn't matter too much. I don't mean to make JSON-parsed objects into some special class, per se, or to have the memory layout depend on access patterns on that data. Rather, I force data that was created together to be close together in memory (this is what real production engines already do, but only when possible) and force that data to stay together (again, production engines do this, but only as far as is reasonable; I force the issue). So I explicitly choose temporal coherence. Beyond that, I use interface inheritance / removal of structural inheritance to reduce memory usage. E.g. plain Arrays (used in the common way) I can push down to 9 bytes, or even 8 bytes if I accept that Arrays with a length larger than 2^24 are always pessimised. ECS / Struct-of-Arrays data storage then further allows me to choose to move some data onto separate cache lines.
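To make the idea concrete, here's a minimal Rust sketch of what I mean by Struct-of-Arrays heap storage with a compact array record; the names and exact layout here are illustrative, not Nova's actual internals. The record is 8 bytes (two `u32`s), which is what makes the 2^24 length cap show up: one of those fields can't spend all 32 bits on length without growing the record.

```rust
// A compact array record: a 32-bit index into a shared elements arena plus a
// length capped at 2^24 - 1. Arrays longer than that would take a pessimised
// overflow path, which this sketch omits.
#[derive(Clone, Copy)]
struct ArrayRecord {
    // Index of the first element in the shared backing store.
    elements_index: u32,
    // Length; capped at 2^24 - 1 in this sketch.
    len: u32,
}

struct Heap {
    // Struct-of-Arrays: all array records live contiguously, so arrays
    // created together land on the same cache lines and stay there.
    arrays: Vec<ArrayRecord>,
    // Shared element storage; each ArrayRecord points into this.
    elements: Vec<f64>,
}

impl Heap {
    fn new() -> Self {
        Heap { arrays: Vec::new(), elements: Vec::new() }
    }

    // Allocate a new array; the returned handle is just an index, which is
    // the indirection that replaces a pointer to a heap object.
    fn alloc_array(&mut self, values: &[f64]) -> u32 {
        assert!(values.len() < (1 << 24), "overflow path not sketched");
        let elements_index = self.elements.len() as u32;
        self.elements.extend_from_slice(values);
        let handle = self.arrays.len() as u32;
        self.arrays.push(ArrayRecord {
            elements_index,
            len: values.len() as u32,
        });
        handle
    }

    fn get(&self, handle: u32, i: usize) -> f64 {
        let rec = self.arrays[handle as usize];
        assert!((i as u32) < rec.len, "out of bounds");
        self.elements[rec.elements_index as usize + i]
    }
}

fn main() {
    let mut heap = Heap::new();
    // Arrays allocated together sit adjacently in both `arrays` and `elements`.
    let a = heap.alloc_array(&[1.0, 2.0, 3.0]);
    let b = heap.alloc_array(&[4.0, 5.0]);
    assert_eq!(heap.get(a, 2), 3.0);
    assert_eq!(heap.get(b, 0), 4.0);
    println!("ok");
}
```

The point of the handle-as-index shape is that hot data (the records) is dense and uniformly sized, while rarely-touched data can be moved out to separate side tables on their own cache lines.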
But it's definitely true that some programs will just ruin all reasonable access patterns and do everything willy-nilly and mixed up. I expect Nova to perform worse in those cases: as I add indirection to uncommon cases and split data across multiple cache lines to improve the common access patterns, I pessimise the uncommon cases further and further down the drain. I guess I just want to see what happens if I kick those uncommon cases to the curb and say "you want to be slow? feel free." :) I expect I will pay for that arrogance, and I look forward to that day <3
Thank you for your response! I’ve been loosely following the project already, and now my interest is piqued even more. Your explanation and approach make a lot of sense to me; I’m curious to see how it plays out!