All the people in the comments are blaming the user for supposedly running with `--dangerously-skip-permissions`, but there is genuinely no way for the Claude CLI to determine with certainty that a command it runs won't touch the home directory.
People are really ignorant when it comes to the safeguards you can put in place for AI. If it's running on your computer and can run arbitrary commands, it can wipe your disk. That's it.
There is, in fact, a harness built into the Claude Code CLI tool that determines what can and cannot be run automatically. `rm` is on the "can't run this unless the user has approved it" list. So, it's entirely the user's fault here.
Surely you don't think everything that's happening in Claude Code is purely LLMs running in a loop? There's tons of real code that runs to correctly route commands, enable MCP, etc.
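As a concrete illustration of that harness: you can also tighten it yourself with project-level permission rules. The pattern syntax below is from memory, so treat it as a sketch and check the Claude Code settings docs before relying on it.

```sh
# Sketch: a project-level deny rule so shell commands matching "rm" are
# refused outright instead of prompting. The pattern syntax is an
# assumption here; verify against the Claude Code documentation.
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "permissions": {
    "deny": ["Bash(rm:*)"]
  }
}
EOF
```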
That's true, but something I've seen happen (not recently) is Claude Code getting around its own restrictions by running a Python script to do the thing it wasn't allowed to do directly.
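To make that concrete, here's a hypothetical example of the kind of sidestep I mean: a rule that pattern-matches on `rm` never sees the string `rm`, but the directory is gone all the same (the path is made up).

```sh
# Hypothetical illustration: no "rm" anywhere on the command line,
# yet it recursively deletes the directory.
python3 -c "import shutil; shutil.rmtree('/home/me/project')"
```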
For what it's worth, the author does acknowledge using "yolo mode," which I take to mean `--dangerously-skip-permissions`. So `--dangerously-skip-permissions` is the correct proximate cause. But I agree that it isn't the root cause.
I mean, it's hard to tell if this story is even real, but on a serious note, I do think Anthropic should only allow `--dangerously-skip-permissions` when it's running inside a container.
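Even without Anthropic enforcing that, you can impose it on yourself today. A rough sketch, assuming the CLI ships as the npm package `@anthropic-ai/claude-code` (check the install docs): run yolo mode in a throwaway container with only the project directory mounted, so the worst case is losing that directory rather than `$HOME`.

```sh
# Sketch: disposable container, only $PWD mounted, API key passed through.
# Package name and base image are assumptions; adjust to your setup.
docker run --rm -it \
  -e ANTHROPIC_API_KEY \
  -v "$PWD":/work \
  -w /work \
  node:20 \
  bash -lc 'npm install -g @anthropic-ai/claude-code && claude --dangerously-skip-permissions'
```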
Oof, you are bringing out the big philosophical question there. Many people have wondered whether we are running in a simulation or not. So far it's inconclusive and, unfortunately, not answerable.
I asked Claude and it had a few good ideas… Not bulletproof, but if the main point is to keep average users from shooting themselves in the foot, anything is better than nothing.
I'm not sure how much you should do to stop people who enabled `--dangerously-skip-permissions` from shooting themselves in the foot. They're literally asking us to let them shoot themselves in the foot. Ultimately we have to trust that if we make good information and tools available to our users, they will exercise good judgment.
I think it would be better to focus on providing good sandboxing tools and a good UX for those tools so that people don't feel the need to enable footgun mode.
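For example, even something as thin as a bubblewrap wrapper would go a long way: a read-only view of the system, `$HOME` hidden behind a tmpfs, and only the project directory writable. This is a sketch (flags from memory, and it assumes `claude` is installed system-wide rather than under `$HOME`):

```sh
# Sketch of a minimal sandbox: root filesystem read-only, $HOME masked,
# current project writable. Verify the bwrap flags before relying on this.
bwrap --ro-bind / / \
      --tmpfs "$HOME" \
      --bind "$PWD" "$PWD" \
      --dev /dev \
      --proc /proc \
      claude --dangerously-skip-permissions
```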