Hacker News

Nitpick: I'd ask for NITPICK at the end of the output instead of the start. By that point the model has already generated its analysis, so it's in a better position to make that call.
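As a sketch of the idea (the NITPICK/BLOCKER labels and prompt wording here are hypothetical illustrations, not any particular tool's format): putting the verdict after the explanation means the verdict tokens are generated with the full reasoning already in context.

```python
# Two ways to structure a review prompt. The NITPICK/BLOCKER markers
# are hypothetical, not a real tool's format.

# Label-first: the model must commit to a severity before it has
# written any analysis of the issue.
label_first = (
    "For each issue, output NITPICK or BLOCKER first, "
    "then explain the problem."
)

# Label-last: the model explains first, so the severity tokens are
# generated after (and conditioned on) the full description.
label_last = (
    "For each issue, explain the problem, "
    "then output NITPICK or BLOCKER on the final line."
)

print(label_last)
```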


I find it ironic that LLMs have become so capable yet still struggle with things like this. A great reminder that it's still text prediction at its core.


I find it more helpful to keep the autoregressive nature in mind rather than the prediction. 'Text prediction' suggests it is merely guessing which word follows another, but it is doing much more than that. 'Autoregressive' highlights that it feeds its previously generated output back in each time it produces another token. Seen that way, you immediately understand why it has to make the severity determination after it has generated the description of the issue.
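A minimal sketch of that loop (a toy stand-in for a model, not a real LLM): each step passes the entire output so far back in, so anything emitted late in the sequence can depend on everything generated before it.

```python
def autoregressive_generate(model, prompt, max_tokens):
    """Toy autoregressive loop: the next token is always a function
    of the prompt plus *all* previously generated tokens."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        next_token = model(tokens)  # conditions on the full history
        if next_token is None:      # model signals end of sequence
            break
        tokens.append(next_token)
    return tokens

# Toy "model": emits a severity verdict only after the description
# tokens exist, mimicking a label-at-the-end output format.
def toy_model(tokens):
    if "description-done" not in tokens:
        return "description-done" if len(tokens) >= 3 else "describing"
    if "NITPICK" not in tokens:
        return "NITPICK"  # verdict conditioned on the description
    return None

out = autoregressive_generate(toy_model, ["prompt"], max_tokens=10)
print(out)
```

Because the verdict token is produced last, it is conditioned on the whole description; a label-first format would force the toy model to emit it with no description in context.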


I mean, think of LLM output as unfiltered thinking. If you were to make that determination, would you make it before you had thought it through?



