The only reason anyone considers it a problem is that people have been complaining about racial bias in AI for a while now. Complaining is how you raise awareness and create the social pressure for companies to not take the path of least resistance when creating these systems.
Feed in a bunch of pictures of CEOs who are predominantly white men in suits, and you get an algorithm that, when asked to imagine a "CEO", produces pictures of white men in suits.
Unsurprising? Yes.
Problematic? Also yes, depending on how the tool is used.
Worth paying attention to in the design and use of such tools? Yes definitely.
The problem is that the AI-constructed images are based on PR-department-constructed images of CEOs: the sort of image that is easily obtained from a Google image search for "CEO".
But to a first approximation, actual CEOs run small businesses and don't wear tailored suits.
Or to put it another way, there are more YC company CEOs than Fortune 500 CEOs.
The problem is that AI ideology doesn't see bad results as a serious issue.
So data gets classified by Amazon Mechanical Turk workers paid less than minimum wage.
Is it a problem? Yes. But you don't fix it by complaining about an algorithm that merely makes these problems more visible.