The difference between "stop AI" and "stop cryptography" is that those of us who want to stop AI aim to prevent AI models from becoming more powerful by halting future mathematical discoveries in the field. In contrast, the people trying to stop cryptography were trying to stop the dissemination of math that had already been discovered and understood well enough to have been productized in the form of software.
Western society made a decision in the 1970s to stop human germ-line engineering and cloning of humans, and so far those things have indeed been stopped not only in the West, but worldwide. They've been stopped because no one currently knows of an effective way to, e.g., add a new gene to a human embryo. I mean that (unlike the situation in cryptography) there is no "readily-available solution" that enables it to be done without a lengthy and expensive research effort. And the reason no such solution exists is that no young scientists or apprentice scientists have been working on one -- because every scientist and apprentice scientist understands that spending any significant time on it would be a bad career move.
Those of us who want to stop AI don't care if you run Llama on your 4090 at home. We don't even care if ChatGPT, etc., remain available to everyone. We don't care because Llama and ChatGPT have been deployed long enough and in enough diverse situations that if any of them were dangerous, the harm would have occurred by now. We do want to stop people from devoting their careers to looking for new insights that would enable more powerful AI models.
You're making several assumptions. First, that sufficient pressure to stop AI will build up before drastic harms occur rather than after, at which point stopping the math will be exactly as futile as stopping cryptography was.
Second, that if a technology shows no obvious short-term harms, it can have no long-term harms. I don't think it's self-evident that all the harms would've already occurred. Surely humanity has not yet reached every possible type and degree of integration with current technology.
Well, in my book that is called obscurantism, and it has never worked for long. It would be a first in human history if something like this worked forever. I think once the genie is out of the bottle, you cannot put it back.
If I take the science-fiction route, I would say that humans in your position should think about moving to another planet and creating military defenses against AI.