This is false. You are dismissing the many concrete concerns people have expressed. Whether you agree with those concerns is immaterial. Feel free to argue against those concerns, but claiming there are no concerns is a false and unsupported assertion.
> but for some reason the discourse around this issue has been more influenced by Terminator than OSHA.
1) Claiming that concerns about AGI are in any way about "Terminator" is dismissive rhetoric that doesn't take the actual concerns seriously.
2) There are also, separately, risks of using models and automation unthinkingly in ways that harm people. Those risks should also be addressed. But those efforts shouldn't subvert or co-opt the efforts to prevent models from getting out of control, which was the point of this bill.
OK, so based on another comment in this thread, your concrete concern is something like: the math that happens during inference could do some side-channel shenanigans that exploits a hardware-level vulnerability to do something, where that something leads to an existential threat to humanity. To me, there's a lot of hand-waving in the "something".
It's really hard to argue for or against the merits of a claimed risk when the leap from what we know today (matrix multiplication on a GPU is generally considered safe) to the hypothetical risk (actually it's not, and it will end civilization) is so wide. I'd really need to see a plausible path from GPU vulnerability to "we're all gonna die" to take a concern like this seriously. Without that, all I see is a sci-fi boogeyman serving only to spook governments into facilitating regulatory capture.
My concern is that people are rapidly attempting to build AGI, while applying lower standards of care and safeguards than we would expect to be applied to "team of humans thinking incredibly quickly", which is a bare minimum necessary-but-not-sufficient lower bound that should be applied to superintelligence.
Among the many ways that could go wrong is the possibility of exploitable security vulnerabilities in literally any surface area handed to an AI, up to and including hardware side channels. That said, given the current state of affairs, I expect that's a less likely path than an AI that was given carte blanche (e.g. "please autonomously write and submit pull requests for me" or "please run shell commands for me"), because so many AIs are being given carte blanche that it isn't necessary to break out of stronger isolation.
But that statement should not be taken as "so the only problem is with whatever AI is hooked to". The fundamental problem is building something smarter than us and expecting that we have the slightest hope of controlling it in the absence of extreme care to have proven it safe.
We currently hold frontier AI development to lower standards than we do airplane avionics systems or automotive control systems.
This is not "regulatory capture"; the AI companies are the ones fighting this. The people advocating regulation here are the myriad AI experts saying that this is a critical problem.