The README lacks the most important thing: how many more tokens/sec at the same quantization, compared to llama.cpp / MLX? It is only worth switching default platforms if there is a major improvement.


I ran R1-8B for both anemll[0] and mlx[1][2] models on an M4 Max.

Prompt: "Tell me a long story about the origins of 42 being the answer."

anemll: 9.3 tok/sec, ~500MB of memory used.

mlx 8bit: 31.33 tok/sec, ~8.5GB of memory used.

mlx bf16: 27.17 tok/sec, ~15.7GB of memory used.

Memory results are from Activity Monitor across any potentially involved processes, but I feel like I might be missing something here...

[0] https://huggingface.co/anemll/anemll-DeepSeekR1-8B-ctx1024_0...

[1] https://huggingface.co/mlx-community/DeepSeek-R1-Distill-Lla...

[2] https://huggingface.co/mlx-community/DeepSeek-R1-Distill-Lla...
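For anyone reproducing numbers like these: throughput is just generated tokens divided by wall-clock generation time. A minimal sketch (the fake generator and its timings are purely illustrative, not the actual benchmark harness):

```python
import time

def tokens_per_sec(generate, prompt):
    """Time a generate(prompt) call that returns a list of tokens."""
    start = time.perf_counter()
    tokens = generate(prompt)
    elapsed = time.perf_counter() - start
    return len(tokens) / elapsed

# Illustrative stand-in for a real model call: 10 tokens in ~0.1s.
def fake_generate(prompt):
    time.sleep(0.1)
    return ["tok"] * 10

rate = tokens_per_sec(fake_generate, "Tell me a long story about the origins of 42 being the answer.")
```

The real runs above used each framework's own reported stats, but the arithmetic is the same.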


Thank you. Strange. If the memory numbers are accurate, it's probably so slow because layers are loaded from disk before each layer's inference, or something like that; otherwise it couldn't run a model of this size in 500MB. But if that's what it's doing, even 33% of the speed would likely be too fast.


Interesting. Does this mean larger models could be run on less memory? It looks like it uses 15-20x less memory. Could a 671B DeepSeek R1 be run in just ~40-50GB of memory? It sounds like it'd be 1/3 as fast though (<1 tok/sec).
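Extrapolating from the 8B numbers above (back-of-the-envelope only; naive linear scaling ignores architecture differences, KV cache, activations, etc.):

```python
# 8B model: ~500 MB under anemll vs ~8.5 GB under mlx 8-bit (numbers from above).
anemll_gb = 0.5
mlx_8bit_gb = 8.5
ratio = mlx_8bit_gb / anemll_gb  # 17x less memory

# Naive linear scaling of the anemll footprint from 8B to 671B parameters.
params_ratio = 671 / 8           # ~83.9x more parameters
est_671b_gb = anemll_gb * params_ratio  # ~41.9 GB
```

Which is where the ~40-50GB guess lands, under those (big) assumptions.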


What hardware are you on? Most models are memory bandwidth limited. The ANE was limited to 64GB/s prior to the M3 Max and M4 Pro. If you are on an M1, the GPU will be significantly faster for 3-8B models due to memory bandwidth rather than ANE capabilities.


M4 Max with 128GB of memory.


The M4 Max should do ~120GB/s for the ANE and 500+GB/s for the GPU. So the GPU will be 3-4x faster for anything over 1-3B. The ANE is likely as fast for prefill due to higher FLOPs.


Can you add a recent build of llama.cpp (arm64) to the results pool? I'm really interested in comparing mlx to llama.cpp, but setting up mlx seems too difficult for me to do by myself.


I ran them again several times to make sure the results were fair. My previous runs also had a different 30B model loaded in the background that I forgot about.

LM Studio is an easy way to use both mlx and llama.cpp.

anemll [0]: ~9.3 tok/sec

mlx [1]: ~50 tok/sec

gguf (llama.cpp b5219) [2]: ~41 tok/sec

[0] https://huggingface.co/anemll/anemll-DeepSeekR1-8B-ctx1024_0...

[1] https://huggingface.co/mlx-community/DeepSeek-R1-Distill-Lla...

[2] (8bit) https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-...


Thank you very much.


Yeah, I looked all over for a comparison and couldn't find anything in the repo, on their social media, etc. I saw some other comments here that said it's supposed to be "15.8 fp16 ops compared to 14.7 fp32 ops" but that isn't really enough to go on. Maybe when I have the time I'll install their TestFlight app and do some comparisons myself.


In my testing, the ANE's tokens per sec is half the speed of the GPU, but the power usage is 10x lower — 2 watts ANE vs 20 watts GPU on my M4 Pro.
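That works out to roughly a 5x win in tokens per joule (a rough calculation from the numbers in this comment; the GPU rate is an illustrative placeholder and real wattage varies by workload):

```python
# Assume the GPU does some rate R tok/sec at 20 W; the ANE does R/2 at 2 W.
gpu_rate, gpu_watts = 50.0, 20.0        # illustrative rate, measured wattage
ane_rate, ane_watts = gpu_rate / 2, 2.0

gpu_tok_per_joule = gpu_rate / gpu_watts  # 2.5
ane_tok_per_joule = ane_rate / ane_watts  # 12.5
advantage = ane_tok_per_joule / gpu_tok_per_joule  # 5.0
```

The advantage is independent of the absolute rate, since it's (1/2) / (2/20) = 5.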



