Yeah, interestingly, being bad at lawyering isn't a typical reason for discipline. Discipline tends to be integrity- and process-based: things like stealing client funds or failing to communicate promptly and reasonably with the client.
Arguably the leading goof was using a technology the lawyer didn't understand and failing to inform the client of the risks of using it. Between that and citing garbage precedent, half the bar might be eligible for discipline on any given day. The judge might issue sanctions, but bar discipline is a different ball of wax.
The way I see it, AI cannot claim legal ownership of the output it produces; that output belongs to the human who generated it (e.g., someone who uses generative AI to produce a painting can claim the copyright themselves).
Which makes me think this case should be treated just as if the lawyer had written it on his own. If the lawyer entered ChatGPT output he generated into court records as if it were his own, it was his own. The lawyer wrote those made-up cases into the documents, and the entire matter should be treated as such.
It blows my mind that the lawyer didn't double-check the fake cases cited by ChatGPT, but did have the idea to ask ChatGPT whether those citations were legit (and then considered his concern settled by a simple "yes").
I don't think this is indicative of a wider issue, or likely to be repeated substantially.
In terms of severity it's painfully foolish, but then again ChatGPT is a totally new tool and a lot of people will be caught off guard.
I am stunned the lawyer didn't at least look up the case notes, or even prepare a pocket brief if he believed they were real but hard to find.