This is very interesting work. Thanks for publishing it. One thought experiment for you (which perhaps you've discussed already): could an attacker influence and predict the state of patched software on the target system, introducing vulnerabilities that did not exist prior to patching? Along the same lines, have you attempted to fuzz the input fields in scenarios such as your shellshock example?
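For concreteness, the kind of input-field fuzzing I have in mind might look like the sketch below: generating mutated variants of a shellshock-style header value and feeding them to the protected application. Everything here is illustrative, not drawn from the paper; the mutation operators and trigger string are my own assumptions.

```python
import random

# Shellshock-style probe: a bash function-definition prefix followed by a payload.
# This trigger string and the mutation operators below are illustrative only.
BASE = "() { :; }; "

def mutate(payload: str, rng: random.Random) -> str:
    """Apply one randomly chosen mutation to the payload (hypothetical helper)."""
    ops = [
        lambda s: s + rng.choice([";", " ", "#", "\t"]),         # append a delimiter
        lambda s: s.replace(":;", rng.choice([":;", "a=1;"])),   # vary the function body
        lambda s: s[: rng.randrange(1, len(s) + 1)],             # truncate at a random point
    ]
    return rng.choice(ops)(payload)

def generate_cases(n: int, seed: int = 0) -> list[str]:
    """Generate n fuzzed header values around the shellshock trigger string."""
    rng = random.Random(seed)
    payload = BASE + "echo vulnerable"
    cases = []
    for _ in range(n):
        payload = mutate(payload, rng)
        cases.append(payload)
        if len(payload) < len(BASE):  # reset if truncation destroyed the prefix
            payload = BASE + "echo vulnerable"
    return cases

if __name__ == "__main__":
    # In a real harness, each case would be sent as e.g. a User-Agent header
    # to a CGI endpoint; here we just print the generated inputs.
    for case in generate_cases(5):
        print(repr(case))
```

In a real campaign one would of course use coverage-guided tooling rather than blind mutation, but even this sort of cheap variation around a known trigger seems relevant to the question of what states patching can be driven into.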
That thought experiment falls under the umbrella of adversarial machine learning, which is something we are aware of but which has not been a focus for us thus far; getting the correct adaptation in the first place was the primary goal. To trigger adaptation/patching, an attacker needs to drive the protected application to an undesirable state (exploit it, in other words), so an insidious attack that predicted and triggered multiple patches in order to create some ultimate vulnerability is a pretty high bar to clear. I would not claim it is impossible, but I do not know under what conditions that path would ultimately be the easiest one for the attacker.