Hacker News | tyfighter's comments

I think the lack of a real, usable emulator for SGIs is holding back any kind of homebrew. I say this as one of the developers that got SGI Indy emulation working in MAME. Yes, it works, but it's too slow and too old to be usable. I spent some time after the MAME effort working on a custom high-performance emulator for Crimson/Onyx/RealityEngine, but I've kind of burned out again. Maybe some day, if I'm really driven again and have help. I've done most of the reverse engineering already; it's just a lot of code.

I think that if a high-performance, usable emulator for some of the big systems existed, some of the old software might be rediscovered and show up on the internet.


I think the problem was that the machines were always very expensive, even used.

My Fuel has an SSD and I'd use it daily except:

- It's loud

- It's single core

- It's a furnace

- It's very very loud

It has a fairly modern Emacs, ssh, and a non-distracting UX. The browser is the only real thing that is too old to be useful, feature- and performance-wise, but that's just bonus points productivity-wise (besides, rdesktop into a modern machine and you can watch YouTube).

If I had a 900 MHz O2 loaded with RAM, and an SSD (SCSI SSD, ha!) it'd probably be my daily driver.


I have a 600MHz RM7k O2, and my 700MHz R16k Fuel blows it out of the water. The O2 isn't that quiet or that quick even with upgrades!

What SSD are you running? I'm still on 10k SCSI drives selected for the quietness of their bearings.


Right now I can't get to the machine (it's off, in the basement), but it's some run-of-the-mill SATA drives on a SATA expansion card.

It works great, but I just use it for /opt since I ran out of time to move more of the machine onto it.

You can't boot off the SSD, so I still use a SCSI drive, but you can replace that too if you boot the SGI off the network.

Silent SGI:

Having gotten rid of the SCSI drives completely with the network boot, you can put in a modern, quieter PSU [1] (but hurry, ones with enough current on 5V (?) are rare), and then replace the GPU and CPU fans and turn off environmental monitoring.

[1] I had to replace mine; the 500 MHz Fuel is notorious for bad PSUs.


Do not turn off environmental monitoring. That's for debugging only, and it's how people are cooking the video cards. Please get your Fuel /properly/ repaired by, say, weblacky on irixnet. The reason why? With env monitoring off, the system won't respond to overheating on the graphics card and will cook it alive. The Fuel has notoriously bad airflow (air doesn't move at right angles).

You could use the NVMe driver to load the filesystem and boot the kernel diskless...

Yes, I have a restored Indigo 2. I fixed a lot of things in the PSU.

And yes:

- It's loud

- It's single core

- It's a furnace

- It's very very loud

:-D


I keep my Challenge S running 24/7 :)

Your contributions to the Indy, alongside Ryan's, were truly neat. You plowed the road so others can navigate it. There's a rumor about a faster Indy emulator... but don't hold your breath yet. (Not a project I'm part of, but I've been told snippets.)

The OS/hardware, though, has serious limitations that, while no problem for me, definitely piss people off. Examples:

No atomics/thread-local support. It doesn't matter that someone ported GCC 15 -- you can't make use of many useful newer language features.
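
To make that concrete, here's a minimal, hypothetical C++11 snippet (nothing IRIX-specific about the names, just the kind of thing people expect to build and run) that falls over without atomics and thread-local storage support underneath:

    #include <atomic>
    #include <thread>

    // An atomic counter bumped from two threads plus a thread_local
    // scratch value -- both need runtime/ABI support (not just a newer
    // compiler), which is the piece IRIX reportedly never grew.
    std::atomic<int> counter{0};
    thread_local int scratch = 0;

    int main() {
        auto bump = [] {
            for (int i = 0; i < 1000; ++i) { scratch += i; counter.fetch_add(1); }
        };
        std::thread t1(bump), t2(bump);
        t1.join();
        t2.join();
        return counter.load() == 2000 ? 0 : 1;
    }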

Immediate-mode OpenGL only. There's no direct hardware access. Not a problem for me, but every SGI out there is fixed function only. I've had people bitch to high hell that we don't have shaders.
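
For anyone who hasn't touched it, this is the level the hardware and the IRIX GL stack work at: classic OpenGL 1.x immediate mode with fixed-function lighting. A minimal sketch (assumes a GL context already exists; there's no glCreateShader to call anywhere at this GL version):

    #include <GL/gl.h>

    // Push one lit triangle through the fixed-function pipeline with
    // glBegin/glEnd. State-machine calls only: no buffers, no shaders.
    void draw_triangle(void) {
        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);

        glBegin(GL_TRIANGLES);
        glNormal3f(0.0f, 0.0f, 1.0f);
        glVertex3f(-1.0f, -1.0f, 0.0f);
        glVertex3f( 1.0f, -1.0f, 0.0f);
        glVertex3f( 0.0f,  1.0f, 0.0f);
        glEnd();
    }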

and in general, some people just think the OS is janky. I love it, but not everyone is me.


> Not a problem for me, but every SGI out there is fixed function only.

Is that true? I remember SGI had a shader library for modeling light, aimed at the automotive market. All the demos and examples were showing off car paint colours in different environments.


SGIs that matter (MIPS, etc)

IRIX only supports roughly OpenGL 1.2. It does have a fragment lighting extension, though:

https://tech-pubs.net/reputable-archive/fragment_lighting.tx...


They got the N64 running on the MiSTer, so an Indy should be possible; they're closely related systems.

I'd love an Onyx/RE on an FPGA someday. Next to my FPGA Cray.


The CPUs are close, but the Indy is otherwise pretty different from the N64. Totally different graphics architecture, and - relevant to getting it on MiSTer - it’s a workstation rather than a video game console, necessitating quite a bit more complexity. I’d be really surprised if it could be squeezed on.

(Though, full disclosure, I said the same thing about the N64 before the core for it came out - the folks working on MiSTer are incredible.)


Huh. I had thought the N64 was basically an Indy with XZ graphics. What was the RCP closest to?

I was always confused why SGI didn't throw the RCP on a PCI card and dominate the PC graphics market.


To my knowledge - and I'm not an expert here - the N64 hardware is pretty unique and doesn't really resemble any of SGI's other chipsets. Not in precise capabilities - the XZ, for instance, didn't even support hardware texture mapping - and not in overall technical design.

It does seem a little bit like an ultra-simplified, integrated version of the RealityEngine [0]. The RealityEngine had "6, 8, or 12 Geometry Engines" split out across three to six boards, each powered by an Intel i860XP, that then passed their work along to Fragment Generators. This roughly corresponds to the RSP, which was just another MIPS core (with vector/matrix extensions), passing its work along to the RDP on the N64. I'm not sure how programmable the RealityEngine's pipeline was compared to the surprisingly flexible RSP.

Remember, the constraints for a graphics workstation are really different than for a game console - especially on the low-end, totally different corners are going to be cut. An Indy still needed to be able to generate a high resolution display and allow modelling complex scenes for film and TV; but while some degree of real-time 3D was important, it was expected that artists could be modelling using wireframe or simplified displays. A game console was displaying low-resolution and relatively low-detail scenes, but they still wanted them to look aesthetically "complete" - shading, textures, fog, lighting, particles - while running at real-time speeds. SGI used their expertise and built something custom-fit for the job at hand, rather than just reusing an existing solution.

[0] https://cseweb.ucsd.edu/~ravir/274/15/papers/p109-akeley.pdf


I would have loved to have that paper when I was learning 3D and OpenGL.

Nay, the N64 is pretty unique hardware-wise. Conceptually it's vaguely similar to the O2; the RCP is an R4000 fixed-function CPU with some extra graphics instructions, IIRC.

Another amazing piece of gaming and art by an incredible, dedicated community. I'm thoroughly enjoying it so far, and I've still got a long way to go.


I haven't made a website of any kind since a C&C: Red Alert fan site somewhere on GeoCities in the late 90s.

I work on graphics drivers. They're hard to write and even harder to debug. You have to be a huge nerd about graphics to get very far. It's a relatively rare skill set, but new, younger, nerdier people keep on coming. Most people in graphics are quiet and are just keeping the industry functioning (me). It's applied computer architecture, a combination of continuous learning and intuition from experience.


That is interesting. Do you ever find bugs in the hardware itself?

Is there some big spec document or ISA that you follow when implementing the driver?

Also, I'm curious: is it easier to write a driver for the modern "lower level" APIs like Vulkan/DX12?


Hardware bugs can be found during chip bring-up within the first couple of months back from the fab, but since I've worked in this area I've never actually seen a bug that couldn't be worked around. They happen, but they're rare and I've never experienced a chip needing a respin because of a bug.

There is documentation, but it's not as well organized as you might imagine. Documentation is usually only necessary when implementing new features, and the resulting code doesn't change often. There are also multiple instruction sets as there are a bunch of little processors you need to control.

Vulkan/DX12 aren't really "low-level" APIs. They're "low overhead", and honestly, no. Their code base is just as large and complicated as OpenGL/DX11's, if not more so.


I wish I was doing your job. How do I do so?


This is something I heard through the grapevine years ago, but when you're a very large corporation negotiating CPU purchasing contracts in quantities of millions, you can get customizations that aren't possible outside of gigantic data centers. Things like enabling custom microcode (and development support) for adding new instructions for the benefit of your custom JITed server infrastructure. The corporate entity here is likely a hyperscaler that everyone knows.


Some of the public x86 ISA extensions were things that hyperscalers specifically requested.


Such as?


Most of the Intel cache partitioning things were driven primarily by Google. The holy grail was to colocate latency-sensitive tasks with bulk background tasks to increase cluster utilization.


I guess technically CAT and RDT are not ISA extensions because they are managed by MSRs. I was thinking of aspects of BMI, but I am sure that large-scale buyers had input into things like vector extensions, PMU features, and the things you mentioned as well.
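
For the curious, here's roughly what "managed by MSRs" looks like for L3 CAT on Linux via the raw msr driver. This is a sketch, not production code; the register numbers (IA32_L3_MASK_0 = 0xC90, IA32_PQR_ASSOC = 0xC8F) are from my reading of the SDM, so double-check them, and in practice you'd use the resctrl filesystem instead:

    #include <cstdint>
    #include <cstdio>
    #include <fcntl.h>
    #include <unistd.h>

    // Write one MSR through /dev/cpu/0/msr (needs the msr module + root).
    // The msr driver uses the file offset as the MSR number.
    static bool wrmsr(int fd, uint32_t msr, uint64_t value) {
        return pwrite(fd, &value, sizeof(value), msr) == (ssize_t)sizeof(value);
    }

    int main() {
        int fd = open("/dev/cpu/0/msr", O_WRONLY);
        if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }

        // Limit class-of-service 0 to the low 4 ways of the L3
        // (0xC90 assumed to be IA32_L3_MASK_0 -- verify against the SDM).
        wrmsr(fd, 0xC90, 0x0F);

        // Associate this logical CPU with CLOS 0; the CLOS id sits in
        // the upper 32 bits of IA32_PQR_ASSOC (assumed 0xC8F).
        wrmsr(fd, 0xC8F, (uint64_t)0 << 32);

        close(fd);
        return 0;
    }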


Historically, the large buyer that could do this was the NSA. Men in black would show up and tell you to add a bit population count instruction to your CPU.


I think it's doubtful that around the time that POPCNT was added to CPUs the NSA was all that influential. Their big scary data center, which is actually tiny, wasn't built until 2014, while players like Google and Meta had much larger data centers years earlier and were undoubtedly larger buyers of AMD Barcelona / Intel Westmere where POPCNT first emerged.


Here is an article about the popcnt instruction:

https://vaibhavsagar.com/blog/2019/09/08/popcount/

The author of the article believes that while popcnt was indeed used for cryptographic analysis in the 60s, the fact that it later disappeared from instruction sets is evidence that this usage became a lot less important over time. So the author considers the reason for popcnt's reappearance to be that there simply exist lots of other useful applications of it that became evident over these decades.
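
For anyone who hasn't run into it: popcount just counts the set bits in a word, and the "other useful applications" are things like Hamming distance, bitset cardinality, and bitboard tricks. A trivial C++20 illustration:

    #include <bit>       // std::popcount (C++20); older code uses __builtin_popcountll
    #include <cstdint>
    #include <cstdio>

    // Hamming distance: XOR to find the differing bits, then count them.
    // With a hardware POPCNT this is a couple of instructions; without
    // one it's a loop or a lookup table.
    static int hamming_distance(uint64_t a, uint64_t b) {
        return std::popcount(a ^ b);
    }

    int main() {
        std::printf("%d\n", hamming_distance(0b1011, 0b0010));  // prints 2
        return 0;
    }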

A German article about the same topic:

https://nickyreinert.medium.com/ne-49-popcount-ea62aa304f88


Oh, when I saw this happen first-hand, it was probably 1986.

Note that the first "data center" I know of was built at Bletchley Park in the 1940s.


eg. "custom Intel Xeon 6 processors, available only on AWS." https://aws.amazon.com/blogs/aws/best-performance-and-fastes...


Oracle Cloud used to boast this as something they had. Tuned for OracleDB with more cache, different core count.

And every homelabber has had one of the 7B13 or 9654-variant processors.


Actually, the reason Transmeta CPUs were so slow was that they didn't have an x86 instruction hardware decoder. Every miss in the code cache (IIRC it was only 32 MB) resulted in a micro-architectural trap that translated x86 instructions to the underlying uops in software.
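
My mental model of it (purely illustrative C++ pseudocode; the names and structure are mine, not Transmeta's actual internals) is a big software-managed translation cache with an expensive miss path:

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    // Translations are keyed by x86 address. A hit runs cached native
    // code; a miss falls into the software translator -- the "trap" --
    // which is where the latency spikes came from.
    struct Translation { std::vector<uint32_t> native_ops; };

    static std::unordered_map<uint64_t, Translation> code_cache;  // bounded in real life

    // Stand-in for the real (slow) software translation pass.
    static Translation translate_x86_block(uint64_t x86_pc) {
        (void)x86_pc;
        return Translation{};  // would hold the generated native ops
    }

    static const Translation& fetch_translation(uint64_t x86_pc) {
        auto it = code_cache.find(x86_pc);
        if (it != code_cache.end())
            return it->second;  // hit: cheap
        // miss: translate in software, cache the result, then run it
        return code_cache.emplace(x86_pc, translate_x86_block(x86_pc)).first->second;
    }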


How is anyone just supposed to know that? It's not hard to find vim, but no one says, "You need to be running this extra special vim development branch where people are pushing vim to the limits!" Yes, it's fragmented, and changing fast, but it's not reasonable to expect people just wanting a tool to be following the cutting edge.


I agree that it might not be reasonable to expect people to keep up with the latest.

For this specific thing (LLM-assisted coding), we are still in nerd territory where there are tremendous gains to be had from keeping up and tinkering.

There are billions of dollars being invested to give devs who don't want to do this the right tools. We aren't quite there yet, largely because the frontier is moving so fast.

I made my original comment because it was so far from my experience, and I assumed it was because I am using a totally different set of tools.

If somebody really doesn't want to be left behind, the solution is to do the unreasonable: read Hacker News every day and tinker.

Personally, I enjoy that labor. But it's certainly not for everybody.


I find that X and Discord are more useful than HN for trying to keep up to date. Which is a shame, I think, but it is what it is.


I agree with your comment, but I also chuckled a bit, because Neovim _is_ a fast changing ecosystem with plugins coming out to replace previous plugins all the time, and tons of config tweakers pushing things to the limit. That said… one does not have to replace their working Neovim setup just because new stuff came out. (And of course, minimalist vim users don't use any plugins!)


That's what people always seemed to say about emacs, that you haven't used it unless you've learned 300 incredibly complicated key bindings and have replaced half the program with a mail reader.


You're (they're?) not alone. This mirrors every experience I've had trying to give them a chance. I worry that I'm just speaking another language at this point.

EDIT: Just to add context seeing other comments, I almost exclusively work in C++ on GPU drivers.


Same - I work on a C++ GPU compiler. All the LLMs are worthless. Ironically, the compiler I work on is used heavily for LLM workloads.


It really only works for problem domains saturated with Medium blogspam and YouTube tutorials.


That's a bingo! Christoph Waltz is just a great actor.

I'm building an app in my stack with fairly common requirements. There are a few code examples that cover the requirements, but none that cover our specific scenario. After searching the web myself, I asked 3 different AI models. All they did was regurgitate the closest public GitHub example, which lacked the use case I was trying to implement. Solving this problem can only be done by understanding the abstraction behind the alteration in design.

These things can't actually think. And now they're allowed to be agentic.

In some ways they're just glorified search engines but there's a geopolitical sprint to see who can get them to mock "thinking" enough to fool everybody.

Out of ego and greed, everything will be turned over to this machine, and that will be the end of humanity; not humans...humanity.


There's a market out there for a consultancy that will fine-tune an LLM for your unique platform, stack, and coding considerations of choice – especially with proprietary platforms. (IBM's probably doing it right now for their legacy mainframe systems.) No doubt Apple is trying to figure out how to get whatever frameworks they have cooking into OpenAI et al.'s models ASAP.


I can't imagine there is a lot of GPU driver code in the training data.


Modern x86 implementations don't even do the XOR. They just rename the register to "zero".
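
For anyone missing the context: compilers materialize a zero with "xor reg, reg" because it's shorter than "mov reg, 0" and breaks dependencies, and recent cores spot the idiom at rename time and just map the register to zero instead of executing an ALU op. Roughly:

    #include <cstdint>

    // With most x86 compilers at -O2 this compiles to:
    //     xor eax, eax
    //     ret
    // The rename stage recognizes "xor eax, eax" as a zeroing idiom and
    // handles it without dispatching a real XOR to an execution unit.
    uint32_t zero() {
        return 0;
    }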


I don't know which strains credulity more:

1.) In the 1960s, the Air Force developed a top-secret device powerful enough to simulate the EMP of a nuclear blast, i.e. some form of non-nuclear electromagnetic pulse device that wouldn't be developed into a weapon for a few more decades. Rather than running a controlled scientific test, they then decided to secretly drive it up to an operating missile base. They proceeded to set it up on a 60-foot-tall portable stand without anyone at the site noticing. I guess people at the site had their AirPods in for hours while giant generators ran to charge the banks of capacitors necessary to run something that huge. No one noticed any of this happening until it was hovering over the gate, they had their rifles pointed at it, and the mad scientists behind this plan were justified in their fear that our missile launch facilities were vulnerable to EMP.

2.) In the 1960s, a UFO disabled a missile at a launch facility.


Exactly. The laws of physics teach us that the likelihood of being visited by extraterrestrial spacecraft is nearly zero. The likelihood that the government (Pentagon/USAF) is lying to themselves and to us to cover up their own inanities: very high.


I must not be the target audience for this kind of thing, but I really don't get it. The few times I've opened HN today I've seen this at the top, and the number of points has been higher each time. I've opened it 3 times and clicked the button, and some other icons showed up below. The first time, I didn't even bother to mouse over the icons, so I didn't know you'd be buying things; I closed before I got to 50 stimulations. The second time, I did hover over the icons showing up and clicked a couple, and nothing seemed to happen other than spending stimulation, so I closed before 100. I just did this a third time as it's now over 1200 points, and I really just don't understand what is going on. What am I missing?


I think this is a scenario where if you have to ask, you'll never know. Perhaps, ironically, there just wasn't enough immediate stimulation for you to continue...


I don't even know what a clicker game is or why one would be stimulating.

