LLMs might be useful for churning out vaguely correct-looking code quickly, but they're just regurgitating the contents of their training corpus. There's no guarantee of correctness, and it's only a matter of time before someone dies because of an LLM-generated bug.

Human programmers aren't going anywhere. (You can't even call what LLMs do programming, because there's no intent or understanding behind it.)


