
I think we agree on almost all points, but I thought ollama-gui was an official GUI, so I'm even more baffled as to what the draw is. Running llama.cpp as a service/API endpoint is trivial (I do just that). Maybe you can outline for me what the value proposition of ollama is, so I can better understand what it does that plain llama.cpp doesn't.
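For what it's worth, by "trivial" I mean something on the order of the following (a minimal sketch; the model path and port are just examples from a recent llama.cpp build):

    # serve a local GGUF model over an OpenAI-compatible HTTP API
    ./llama-server -m ./models/llama-3-8b-instruct.Q4_K_M.gguf --host 127.0.0.1 --port 8080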


Ollama allows me to use a single podman command that pulls the latest version of ollama, downloads a model of my choosing, and starts a local HTTP endpoint that is widely supported by different clients. I can just run that one command to chat with a local model through a web interface, get code completions in VSCode, or ask about the content of my local Markdown notes.
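Concretely, it boils down to something like this (a sketch based on the official container image; the exact model and ports will vary):

    # start the ollama server in a container, with models cached in a named volume
    podman run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama docker.io/ollama/ollama
    # pull a model and chat with it; the same HTTP endpoint on :11434 serves VSCode, note tools, etc.
    podman exec -it ollama ollama run llama3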

Now, I don't use AI that much; I could totally live without this. But if it weren't for that robust one-liner, I probably wouldn't use local LLMs at all.


My experience of ollama is that it makes it super easy to pull various models and use them locally. Sure, I could do this myself, but it's helpful not to have to.
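To illustrate, the whole workflow is roughly (the model name is just an example):

    ollama pull mistral   # download the model weights once
    ollama run mistral    # chat interactively; the local API listens on port 11434 by default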



