
The question of bias reduces to bias in factual answers and bias in suggestions, both of which come from the same training data. Maybe they shouldn't.

If the model is trained on data showing, e.g., that Black workers earn less, then it can factually report this. But it may also suggest that this should be the case when acting in an HR role. Every solution I can think of comes with another disadvantage.
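To make the point concrete, here is a minimal sketch (with entirely hypothetical data and group labels) of how the same learned statistic serves both roles: a naive salary "suggester" that learns group means from historical data gives factually accurate descriptions, yet reproduces the pay gap the moment those descriptions are used as recommendations.

```python
# Hypothetical historical pay records: (group, salary).
historical = [
    ("group_a", 60000), ("group_a", 62000),
    ("group_b", 50000), ("group_b", 52000),
]

def group_mean(data, group):
    """Average salary observed for a group in the data."""
    vals = [pay for g, pay in data if g == group]
    return sum(vals) / len(vals)

def suggest_salary(group):
    # The same statistic backs both uses: reporting the mean is an
    # accurate factual answer; offering it as a salary suggestion
    # bakes the historical disparity into future decisions.
    return group_mean(historical, group)

print(suggest_salary("group_a"))  # 61000.0
print(suggest_salary("group_b"))  # 51000.0
```

The description and the suggestion are the same number; the bias appears only when the model's role changes from reporting to deciding.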


