> The real danger isn't that "too smart" unaligned AI might execute some kind of deliberate plan for taking over the world, it's that there are massive numbers of people so stupid that they will blindly follow the instructions of stochastic parrot bullshit generators.

"They're the same picture".

Imagine someone who wants to rule the world simply asking an AI for such a plan. The AI doesn't need to want anything itself; it just needs to be given free rein. I'm sure Boris Johnson would have taken this option if it had existed, though I doubt it would have made any difference to the character flaws that were his ultimate downfall.

Even the "paperclip maximiser" scenario is just someone in the business of making paperclips asking an AI for help, and not looking too closely until it's too late.

Myself, I'm… relatively optimistic, in that I think these systems are likely to be fragile to the distribution shifts they themselves create, in ways that mean the rest of us can probably stop them from becoming existential threats if they start down a dangerous path. (This requires that we never figure out how to make them learn from as few examples as humans need, which may be wishful thinking, but for now it seems an acceptable guess.)

For some reason this reminded me of the old joke about how an engineer, a physicist, and a mathematician each respond to noticing a fire; I hope humanity doesn't take the mathematician's approach: https://jcdverha.home.xs4all.nl/scijokes/6_2.html


