
A basic rule of MLE is to have guardrails on your model output; you don't want some high-leverage training data point to trigger problems in prod. These guardrails should be deterministic and separate from the inference system: basically a stack of user-defined policies. LLMs are ultimately just interpolated surfaces, and the same rules apply as if it were LOESS.
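
A minimal sketch of what such a deterministic, model-independent policy stack could look like. All names here (Policy, max_length_policy, blocklist_policy, apply_guardrails) are hypothetical illustrations, not any particular library's API:

    from typing import Callable, Optional

    # A policy inspects the raw model output and either passes it through
    # (possibly transformed) or returns None to reject it deterministically.
    Policy = Callable[[str], Optional[str]]

    def max_length_policy(limit: int) -> Policy:
        def check(output: str) -> Optional[str]:
            # Reject outputs that exceed a hard length cap.
            return None if len(output) > limit else output
        return check

    def blocklist_policy(banned: set[str]) -> Policy:
        def check(output: str) -> Optional[str]:
            # Reject outputs containing any banned term.
            return None if any(term in output.lower() for term in banned) else output
        return check

    def apply_guardrails(output: str, policies: list[Policy], fallback: str) -> str:
        # Policies are plain functions evaluated in order, entirely outside
        # the inference system: the same output always yields the same decision.
        for policy in policies:
            result = policy(output)
            if result is None:
                return fallback
            output = result
        return output

    guardrails = [max_length_policy(2000), blocklist_policy({"rm -rf"})]
    safe = apply_guardrails("some LLM response", guardrails, fallback="[blocked]")
    print(safe)

The point of keeping the stack this dumb is that each policy is auditable and testable on its own, regardless of what the model upstream is doing.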

