Hi Terry! Sorry, didn't know you had a new account.
I read some of your post history - can you explain how what you were doing with Ticketmaster was systems programming? It sounds like what would now be considered application programming, but I have no idea how intensive your work was at the time. High-performance code can often cross the line.
TempleOS is simply fascinating. I'm sure you know I can't agree on the 'divine' part, but it's certainly an impressive OS. Writing an OS is a tough task by any means, let alone a reasonably fast one with a novel presentation. That's a lot of work, even for a productive OS programmer.
Have you taken a look at MenuetOS, Plan9, Oberon, or other non-traditional OSs? I'm curious what you think of their unique features.
I applaud your goal of taking it back to basics. I think you're largely right about the number of abstractions between you and the hardware. Really really fast code needs to be written close to the metal.
I worked on the code that installed language modules into kernel space. I wrote a check-disk utility. I wrote a file-compression archiver. There were serial port drivers, and I made a command to reset a stuck port.
I worked in VAX assembly language.
I did firmware for a bar code reader networking device with keypad and display. I worked on image processing for an actual bar code reader itself.
I worked there 1990-1996. About half was user space. The other half was kernel or firmware on bare metal.
-----
Pat and I started at the same time. He was in school with me, in fact in my differential equations class. He dropped out of school. I stayed in and got my master's in electrical engineering while working part time at Ticketmaster. I graduated with my master's in 1994 and stayed with Ticketmaster for a year.
I returned to Ticketmaster for 3 weeks in 2002. Pat was a boss.
When I was a young programmer, all the code I looked at was awful. If the indentation format was different, it distracted me. All I could think about was code formatting and superficials.
I worked at Ticketmaster 1990-1996.
I went back in 2003. I remembered my first assignment. In 1990, the Linker at Ticketmaster that Troy wrote ran out of room because the MAP file grew too big.
In 2003, I went back and found a terrible kludge that I did in 1990 because I was stupid. In 2003, I fixed it correctly. I looked at the code and thought how beautiful it was. In 1990, I thought the code was bad.
As you get more experience, it is a little easier to read code. Take my word that all code is bad, LOL.
Personally, I don't like code that is made of 5-line functions with no rhyme or reason, spread seven levels deep.
Somebody said the smaller your functions the better. Bad advice.
I think most of us know who Terry is, but it's the first time I've seen him mention Ticketmaster. That said, the website does mention 1990 for TM, so that answers part of my question. Thanks.
Do you have any near-term or long-term ideas on implementing basic GPU support in TempleOS? Virtualization has gotten to the point where GPU passthrough is beginning to be feasible.
PS. I think this is an amazing project. I'm grateful you accepted the challenge in writing it. :D
Because God said so. That and the aim of the system is to create something akin to the Commodore 64 in which the entire system is open to the user. http://www.templeos.org/Wb/Doc/Charter.html#l1
"Graphics operations should be transparent, not hidden in
a GPU."
Is GPU programming opaque? I've not done any low-level stuff. I imagine it's a whole bunch of different API calls, but based on triangles rather than points and lines?
Could one write a simple GPU in software, for example, I wonder?
You'd be surprised just how many layers of abstraction there are between getting something done 'outside' the context of the GPU, across the CPU/GPU bridge, and getting it done on the GPU in a modern 3D stack. You can do things any one of a number of different ways: pass off a blob of data for rendering, or write shader programs that get compiled for the GPU when the app requires it. Those compilers are not open (shader compilers are an arcane and highly contentious realm of IP-rights holders in a very competitive and volatile industry), and often the hard work of a 3D developer is spent moving existing assets (code/resources) from one 3D-pipeline fashion runway du jour to the next.
It is pretty arcane.
That said, of course you can write a software renderer and simulate a fair amount of the work that the GPU would usually offload from the CPU. In many cases this has been applied successfully, e.g. in the emulation world, to the task of maintaining legacy binary assets in lieu of having source code to port. The emu guys have amazing stats in that regard.
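On the "write a simple GPU in code" question above: here's a minimal sketch, purely my own illustration in C (hard-coded triangle, ASCII output standing in for a framebuffer), of the edge-function test at the heart of triangle rasterization, which is the work a GPU normally does in parallel hardware:

    #include <stdio.h>

    #define W 64
    #define H 32

    /* Signed area of the parallelogram spanned by (b-a) and (c-a):
       its sign says which side of the edge a->b the point c is on. */
    static int edge(int ax, int ay, int bx, int by, int cx, int cy) {
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    }

    int main(void) {
        char fb[H][W];
        /* one hard-coded, counter-clockwise triangle */
        int x0 = 5, y0 = 3, x1 = 58, y1 = 10, x2 = 20, y2 = 28;

        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++) {
                /* a pixel is inside if it lies on the same side
                   of all three edges */
                int w0 = edge(x1, y1, x2, y2, x, y);
                int w1 = edge(x2, y2, x0, y0, x, y);
                int w2 = edge(x0, y0, x1, y1, x, y);
                fb[y][x] = (w0 >= 0 && w1 >= 0 && w2 >= 0) ? '#' : '.';
            }

        for (int y = 0; y < H; y++) {
            fwrite(fb[y], 1, W, stdout);
            putchar('\n');
        }
        return 0;
    }

Depth testing, attribute interpolation, and texturing all hang off those same three edge values (they are unnormalized barycentric coordinates), which is part of why emulator software renderers can reproduce so much of the pipeline, just more slowly.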
Well, with how modern GPUs, their drivers, and their APIs work, you write code to do something and then the driver "decides" on the "best" way of doing it.
Display drivers can replace entire shaders and modify pretty much any instruction, and they often do, if for no other reason than to make the code actually work. Game developers have not only shifted a lot of the "performance" optimization burden onto the GPU vendors, they also quite often ship completely non-compliant (and nonfunctional) graphics code. There have been plenty of gem posts on gamedev.net, including multiple AAA studios that botched their code so badly on multiple titles that, had a generic fix not already been in the driver, nothing would have displayed at all: not calling D3D::Create or D3D::BeginFrame/D3D::EndFrame properly, or at all, in many cases.
Overall, the majority of the driver codebase today is abstraction and fixes; only about a third of it is actual API implementation.
So this is one reason why GPU driver code is necessarily closed: to save face. :(
There are some software-based graphics cores out there, and one or two VHDL/FPGA efforts, but the performance-per-watt gap between those and mainstream GPUs is laughable.
Here's hoping AMD's vision of being more open in the future really works out. Also that Vulkan is at least mildly sane with openness.
Because the current state of things reinforces ideas like "the GPU is opaque". Architecturally, graphics processing is not a magic box, and while it would take a long time to fully understand, it's not technically insurmountable. But the current status quo with drivers makes it so.
How much performance would a pure and thus leaner driver gain, I wonder.
Also, a fair bit of backwards compatibility will be involved, like with x86 CPUs. And even then, the same kind of fixup is required to run OpenGL on DirectX cards, I believe.
Considering that even with APIs that have somewhat good internal compliance testing, like DirectX, developers still ship utterly broken and un-optimized code: none.
This is also why Vulkan will probably not succeed (at least not how people think it will).
The last thing that, say, Nvidia wants is to maintain a code base of 3-4M LOC, 50-60% of which exists just to get games running at all, or running well.
With how the current market works, the driver is the "secret sauce" that GPU makers use to compete, and it is just as important as (or in some cases even more important than) the hardware itself.
All my code is position independent and my compiler only makes position-independent code.
My OS is ring-zero-only. Just-in-time compilers do not need a linker in a single address space -- they just insert the address of the function call.
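For the curious, here's what "just insert the address of the function call" can look like in practice. This is my own minimal sketch, not TempleOS/HolyC code: it assumes x86-64 Linux with mmap, and that the emitted buffer lands within +/-2 GB of the callee (a single shared address space makes that kind of guarantee much easier):

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>
    #include <sys/mman.h>

    /* The callee whose address the "JIT" resolves on the spot. */
    static int add_one(int x) { return x + 1; }

    int main(void) {
        /* A writable+executable buffer for the emitted code.
           (Some hardened systems forbid W+X mappings; a real JIT
           would mprotect() from W to X after emitting.) */
        uint8_t *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) return 1;

        uint8_t *p = buf;
        int32_t arg = 41, rel;

        /* sub rsp, 8 -- keep the SysV stack 16-byte aligned */
        *p++ = 0x48; *p++ = 0x83; *p++ = 0xEC; *p++ = 0x08;

        /* mov edi, imm32 -- first argument register */
        *p++ = 0xBF;
        memcpy(p, &arg, 4); p += 4;

        /* call rel32: caller and callee share one address space,
           so the displacement is computable right now -- no
           relocation records, no separate link step */
        *p++ = 0xE8;
        rel = (int32_t)((uintptr_t)add_one - (uintptr_t)(p + 4));
        memcpy(p, &rel, 4); p += 4;

        /* add rsp, 8 ; ret */
        *p++ = 0x48; *p++ = 0x83; *p++ = 0xC4; *p++ = 0x08;
        *p++ = 0xC3;

        int (*fn)(void) = (int (*)(void))buf;
        printf("%d\n", fn());   /* prints 42 */
        return 0;
    }

The point is that symbol resolution collapses to computing one displacement at emit time; there is no relocation table and no link step at all.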