Hacker News
zhobbs | 5 months ago | on: Cerebras launches Qwen3-235B, achieving 1.5k token...
Possible I'm misunderstanding what you're trying to do, but Ollama works well for me for local inference with Qwen on my MacBook Pro (32GB).
nateb2022 | 5 months ago
Yup, also using Ollama on a MacBook Pro. Ollama is #1.
p0w3n3d | 4 months ago
But isn't Ollama only local chat? Or am I missing something? I'd like to set it up as a server for my usage on another laptop (use it as my local AI hub) and would love to integrate it with some IDE using MCP.
grosswait | 4 months ago
No, it can listen on 0.0.0.0, or you can serve it through a proxy.
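A minimal sketch of what this reply describes, assuming a stock Ollama install; the LAN address and model tag below are placeholders, not values from the thread:

```shell
# On the "hub" laptop: bind Ollama to all interfaces instead of the
# default 127.0.0.1. OLLAMA_HOST is Ollama's documented environment
# variable; 11434 is the default port.
OLLAMA_HOST=0.0.0.0 ollama serve

# From the other laptop: call the hub's HTTP API over the LAN.
# 192.168.1.50 is a placeholder for the hub's address, and qwen3:32b
# is a placeholder model tag that must already be pulled on the hub.
curl http://192.168.1.50:11434/api/generate \
  -d '{"model": "qwen3:32b", "prompt": "Hello", "stream": false}'
```

An IDE or other client can be pointed at `http://<hub-ip>:11434` the same way; Ollama also exposes an OpenAI-compatible endpoint under `/v1` for tools that expect that API shape.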