
LLMs alone can't, but agents can. They can read documentation into context, verify code, compile it, run static-analysis tools, and execute tests.

Hallucinations still occur, but they're becoming rarer (especially if you prompt to the strengths of the model and provide context), and tests catch them.
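The loop described above can be sketched in a few lines. This is a minimal illustration, not any real agent framework: `generate_candidate` is a hypothetical stand-in for a model call, and the retry/verify structure is the point — generate, compile, test, feed errors back.

```python
def generate_candidate(prompt: str, feedback: str) -> str:
    # Hypothetical stand-in for an LLM call. A real agent would send
    # the prompt plus any error feedback to a model and get revised code.
    return "def add(a, b):\n    return a + b\n"

def verify(code: str, test) -> tuple[bool, str]:
    # Compile the candidate and run a test against it; return
    # (ok, error_message) so failures can be fed back to the model.
    try:
        namespace = {}
        exec(compile(code, "<candidate>", "exec"), namespace)
        test(namespace)
        return True, ""
    except Exception as e:
        return False, f"{type(e).__name__}: {e}"

def agent_loop(prompt: str, test, max_tries: int = 3):
    feedback = ""
    for _ in range(max_tries):
        code = generate_candidate(prompt, feedback)
        ok, feedback = verify(code, test)
        if ok:
            return code  # verified: compiles and passes the test
    return None  # hallucination survived every retry; escalate to a human

def check_add(ns):
    assert ns["add"](2, 3) == 5

result = agent_loop("write add(a, b)", check_add)
```

The key property is that a hallucinated or broken candidate never escapes the loop unflagged: it either passes the compile-and-test gate or comes back with a concrete error message for the next attempt.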


