
I take an interest in plane crashes and human factors in digital systems. Complacency is a very human factor that appears again and again in reports of true disasters, usually well after it has crept deep into an organization.

When you put something on autopilot, you also massively accelerate the process of becoming complacent about it -- which is normal; it is the process of building trust.

When that trust is granted but not deserved, problems develop. A system affected by complacency often drifts: nobody is looking closely enough to notice the problems until they become proto-disasters. When the human is finally put back in control, it may be to discover that the system is approaching catastrophe too rapidly for a human to catch up with the situation and intervene appropriately. It is for this reason that many aircraft accidents occur in the seconds and minutes following an autopilot cutoff. Similarly, every Tesla that ever slammed into the back of an ambulance parked at the side of the road was a) being driven by an AI, b) one the driver had learned to trust, and c) operated by a driver who - though theoretically responsible - had become complacent.



Sure, but not every application has consequences as dramatic as a plane or car crash. I mean, we are talking about theoretical physics here.


Theoretical? I don't see any reason complacency is fine in science, either. If it's a high school science project and you don't actually care about the results, sure.


Half-Life told a plausible story of how high-energy physics could have unforeseen consequences.



