We use plenty of models to calculate credit risk, but we never let the model sign the contract.
An algorithm can't go to court, and it can't apologize to a bankrupt family.
"Human in the Loop" isn't just about code quality. It's about liability.
If production breaks, we need to know exactly which human put their reputation on the line to merge it.
Accountability is still the one thing you can't automate.
They have one post describing themselves as not a programmer, and another as "engineers". It's got all the hallmarks (lists, "not just... but", bolding where you can't). But what really got me was this conversation, literally about why they're not AI! It's insanity, and I'm now convinced it's at least a few accounts working in tandem, if not more.
This subthread really doesn't add value to the discussion IMO and isn't a fit for HN. The only likely outcome is that a real human gets attacked based on pure speculation. Let the mods decide whether a user is breaking any policy regarding AI comment submissions. Litigating it here is cringe.
I would go even further and say AI witch hunts aren't productive, period. In this case, where the person is ostensibly writing in a second language, it's even sillier.