Yes, but in principle it isn't that different from running on Trainium or Inferentia (it's a matter of degree), and plenty of non-AI organizations adopted Trainium/Inferentia.
Many are aware; they just can’t offload it onto their hardware.
The 8B models are easy to run on an RTX, which makes them a convenient baseline against local inference. What llama.cpp does on an RTX 5080 at 40 t/s, Furiosa should do at 40,000 t/s or whatever… it’s an easy way to get a flat comparison across all the different hardware llama.cpp runs on, as in the sketch below.
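To make the "flat comparison" idea concrete, here is a trivial sketch: same model, same metric, different hardware. Only the 40 t/s figure comes from the comment above; the accelerator entry is a placeholder, not a measured result:

    # Same model (an 8B Llama), same metric (decode tokens/s), different hardware.
    # Only the RTX 5080 figure is from the comment above; the rest is made up.
    results = {
        "RTX 5080 (llama.cpp)": 40,
        "hypothetical accelerator": 40_000,
    }
    baseline = results["RTX 5080 (llama.cpp)"]
    for hw, tps in results.items():
        print(f"{hw:28} {tps:>8,} t/s  ({tps / baseline:>8,.0f}x baseline)")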
I think you are comparing latency with throughput. You can't take the inverse of latency to get throughput, because the concurrency is unknown. But then, the RNGD result is probably at concurrency=1.
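A minimal sketch of the distinction, with made-up numbers (neither the latency nor the concurrency values here are RNGD measurements):

    # Inverting one request's per-token latency gives per-request throughput,
    # which equals aggregate throughput only at concurrency=1.
    per_token_latency_s = 1 / 40   # hypothetical: a single request sees 40 t/s

    for concurrency in (1, 8, 64):
        # Idealized: aggregate throughput = concurrency / per-request latency.
        # Real hardware saturates, so treat this as an upper bound.
        tokens_per_s = concurrency / per_token_latency_s
        print(f"concurrency={concurrency:>2}: ~{tokens_per_s:,.0f} t/s aggregate")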
I thought they were saying it was more efficient, as in tokens per watt. I didn’t see a direct comparison on that metric, but maybe I didn’t look hard enough.
It still kind of makes the point that you are stuck with a very limited range of models that they are hand-implementing. But at least it's a model I would actually use. Give me that in a box I can put in a standard data center with a normal power supply and I'm definitely interested.
In my opinion, Groq's technical decisions are unsound in a normal world. But their HBM-free design (weights live in on-chip SRAM) may have some merit in a world where HBM supply is constrained.
An interesting bit of history: for a long time Rust maintained first-party support for Windows XP, after other parts of the ecosystem had generally given up on it. This was because Firefox needed it.
Yes, but in fact compensating for bad questions is a skill in itself, and in my experience it is one Claude excels at and Gemini handles poorly.
In other words, the better you are at prompting (e.g. you write half a page of prompt even for casual use -- believe it or not, such people do exist; prompt length is in practice a good proxy for prompting skill), the more you will like (or at least the better results you will get from) Gemini relative to Claude.
This isn't necessarily good for Gemini because being easy to use is actually quite important, but it does mean Gemini is considerably underrated for what it can do.