
>An open AI is safe inherently, since it means that it can be easily ripped apart and thoroughly studied for exploitable points, unlike some closed black box system. Having Open AI be some closed system does nothing to reduce the number of bad actors - they will all choose to exploit Open AI's system once given the opportunity.

Replace the word "AI" with "ultra deadly and contagious bioweapon" and it becomes clearer why being "open" is itself a danger, for those who aren't able to zero-shot understand it.



That's precisely the point of it being open - it makes it easier to understand its points of failure rather than treating them as hidden features of a black box. An open source bioweapon (if one existed) would not be as dangerous as a secretive one, simply because once it was out in the open, its points of failure would already have been studied.


You can't "study points of failure" if you're dead.



