I don't see this as a worse idea than an AI code reviewer. If it spits out irrelevant advice and only gets 1 out of 10 points right, I consider it a win, since the cost is so low and many humans can't catch subtle issues in code.
As someone who has had to deal with the output of absolutely stupid "AI code reviewers", I can safely say that the cost of being flooded with useless advice is real. I simply ignore them, unless I want a reminder that my job will not be automated away by anyone who wants real quality. I don't care if it's right 1 time in 10; the other 9 times are more than enough to make it a net negative.
Ditto for those flooding GitHub with LLM-generated "fix" PRs.
> and many humans can't catch subtle issues in code.
That itself is a problem, but pushing the responsibility onto an unaccountable AI is not a solution. The humans are going to get even worse that way.
You’re missing the bit where humans can be held responsible and improve over time with specific feedback.
AI models only improve through training, and good luck convincing any given LLM provider to improve their model for your specific use case unless you have deep pockets…