I like the idea, but it is too black-and-white in some places. Never exploiting security vulnerabilities? There are certainly cases where that is justifiable -- the Allied attack on the Enigma machine, for example (speaking of Turing...). If World War II were to happen today, there would almost certainly be attacks on computer systems, and we would want our military to attack the enemy's systems.
There is also the matter of law enforcement. It is better for law enforcement agencies to exploit vulnerabilities (with a warrant) than to require special back doors in software. Few reasonable people would take issue with the police exploiting a vulnerability to catch a mafia hit man, a criminal arms dealer, and so on. Some hacker has to be willing to write the software that exploits those vulnerabilities. I would say that writing software with back doors is a much more serious ethical problem than exploiting unintentional vulnerabilities.
Utilitarianism comes to mind: seek to maximize the benefit to society. Sometimes that means attacking, sometimes it means defending, sometimes it means refusing to create the software you are told to create.