SteelPh0enix on Nov 30, 2024 | on: Llama.cpp guide – Running LLMs locally on any hard...
I have spent unreasonable amounts of time building llama.cpp for my hardware setup (AMD GPU) on both Windows and Linux. That was one of the main reasons I wrote that blog post. Lmao.