sanxiyn's comments

Yes, but in principle it isn't that different from running on Trainium or Inferentia (it's a matter of degree), and plenty of non-AI organizations adopted Trainium/Inferentia.

What's new is that HN discovered it. It wasn't posted in September 2025.

100%

People forget this is also a place of discussion, and the comment section is usually where the real value is, as opposed to the article itself.


Looking at their blog, they in fact ran gpt-oss-120b: https://furiosa.ai/blog/serving-gpt-oss-120b-at-5-8-ms-tpot-...

I think Llama 3 focus mostly reflects demand. It may be hard to believe, but many people aren't even aware gpt-oss exists.


Many are aware; they just can’t offload it onto their hardware.

The 8B models are easier to run on an RTX card, which makes for a direct comparison with local inference. What llama does on an RTX 5080 at 40 t/s, Furiosa should do at 40,000 t/s or whatever… it’s an easy way to get a flat comparison across all the different hardware llama.cpp runs on.


> we demonstrated running gpt-oss-120b on two RNGD chips [snip] at 5.8 ms per output token

That's 86 tokens/second/chip

By comparison, an H100 will do 2390 tokens/second/GPU [1]

Am I comparing the wrong things somehow?

[1] https://inferencemax.semianalysis.com/


I think you are comparing latency with throughput. You can't take the inverse of latency to get throughput because the concurrency is unknown. But then, the RNGD result is probably with concurrency=1.
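To spell out the arithmetic (the 5.8 ms figure is from the blog post; everything about concurrency below is a made-up illustration, not a measured number):

    # Back-of-the-envelope sketch: why 1/latency is not throughput.
    latency_s = 5.8e-3                        # reported time per output token, two RNGD chips
    chips = 2

    single_stream_tps = 1 / latency_s         # ~172 tokens/s for a single request
    per_chip_tps = single_stream_tps / chips  # ~86 tokens/s/chip, the figure above

    # Throughput benchmarks batch many concurrent requests. Aggregate throughput is
    # roughly concurrency * per-request token rate, and the per-request rate drops
    # as the batch grows, so single-stream latency alone doesn't determine it.
    concurrency = 64                          # hypothetical batch size
    per_request_tps_batched = 40              # hypothetical, slower than single-stream
    aggregate_tps = concurrency * per_request_tps_batched
    print(f"{per_chip_tps:.0f} tok/s/chip single-stream, "
          f"{aggregate_tps} tok/s aggregate at concurrency {concurrency}")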

I thought they were saying it was more efficient, as in tokens per watt. I didn’t see a direct comparison on that metric but maybe I didn’t look well enough.

Probably. Companies sell on efficiency when they know they lose on performance.

If you have an efficient chip you can just have more of them and come out ahead. This isn't a CPU, where single-core performance is all that important.

Only if the price is right...

Eh, if there's a human on the other side, single-stream performance is going to matter to them.

Right, but datacenters also very much operate on electrical cost, so it’s not without merit.

Now I'm interested ...

It still kind of makes the point that you are stuck with a very limited range of models that they are hand-implementing. But at least it's a model I would actually use. Give me that in a box I can put in a standard data center with a normal power supply and I'm definitely interested.

But I want to know the cost :-)


In my opinion, Groq's technical decisions are unsound in a normal world. But being HBM-free may have some merit in a world where HBM supply is constrained.


Ruby also used to use Bison; it uses its own https://github.com/ruby/lrama these days.


Yeah. There are other fully open models like Hugging Face SmolLM but they are not common.


With cargo --offline, Rust has better-than-average support for offline builds.
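For example, the typical workflow is to pre-fetch dependencies while online, then build with no network access. A rough sketch (the cargo commands are real; wrapping them in Python and the ./mycrate path are just for illustration):

    import subprocess

    CRATE_DIR = "./mycrate"  # hypothetical crate directory

    # While the network is available: download all dependencies listed in
    # Cargo.lock into the local cargo cache.
    subprocess.run(["cargo", "fetch"], cwd=CRATE_DIR, check=True)

    # Later, with no network: build using only the cached dependencies.
    subprocess.run(["cargo", "build", "--offline"], cwd=CRATE_DIR, check=True)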


An interesting bit of history: for a long time Rust maintained first-party support for Windows XP, after other parts of the ecosystem had generally given up. This was because Firefox needed it.

https://github.com/rust-lang/compiler-team/issues/378 (major change proposal to drop Windows XP support) notes this history and links to other relevant pages.


If you look at the actual code, it runs ping -c 5. I agree ping without options doesn't terminate.


Yes, but in fact compensating for bad questions is a skill, and in my experience it is a skill that Claude excels at and Gemini does poorly.

In other words, the better you are at prompting (e.g. you write half a page of prompt even for casual uses -- believe it or not, such people do exist -- prompt length is in practice a good proxy for prompting skill), the more you will like (or at least get better results with) Gemini over Claude.

This isn't necessarily good for Gemini because being easy to use is actually quite important, but it does mean Gemini is considerably underrated for what it can do.

