Hacker News

My issue with using LLMs for this use case is that they can be wrong, and when they are, I'm doing the research myself anyway.


The rate at which it's wrong has become vanishingly small, at least for the things I use it for (mostly technical). With ChatGPT using extended thinking, and feeding it the docs URL or a PDF or three to start, you'll very rarely get an error, especially compared to Google / Stack Exchange.



