I get their point, but at the end of the day it's politics and marketing getting their way.
With a 32GB card well below $1,000 it would sell like hotcakes to anybody doing anything AI-related that isn't training (you can easily run inference and fine tuning on such a card).
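To put rough numbers on that claim (my own back-of-the-envelope estimates, not anything AMD has published), here's a quick sketch of what fits in 32GB for inference and LoRA-style fine tuning:

    # Rough VRAM budgets in GB; all numbers are ballpark assumptions, not measurements.
    def inference_vram_gb(params_b, bytes_per_param, overhead_gb=2.0):
        # weights + flat allowance for KV cache / activations / runtime
        return params_b * bytes_per_param + overhead_gb

    def lora_vram_gb(params_b, bytes_per_param, trainable_frac=0.01):
        # frozen base weights + LoRA adapters + their grads and Adam states
        base = params_b * bytes_per_param
        adapters = params_b * trainable_frac * 2   # fp16 adapter weights
        optimizer = adapters * 4                   # grads + two Adam moments, roughly
        return base + adapters + optimizer + 2.0   # + activations allowance

    print(f"32B model, 4-bit inference: ~{inference_vram_gb(32, 0.6):.0f} GB")  # ~21 GB
    print(f"13B model, fp16 + LoRA:     ~{lora_vram_gb(13, 2.0):.0f} GB")       # ~29 GB

Both land under 32GB, which is exactly the kind of workload a sub-$1,000 card at that capacity would soak up.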
But it would eat massively into their data center sales, which is what executives and investors want to see.
It's a tragedy, because such a card would get a lot of love and support from hobbyists to make it work great for ML/AI, which would improve their data center offerings in the long term.
So this is gonna end up the way AMD launches usually do: the card will disappoint or be ignored by most gamers because of weaker brand power and no DLSS, and AMD will still disappoint at the data center level.
I think it could work out if the card is a weak GPU (or has a high TDP). You want the consumer card to have a worse TCO in the data center, and if you make it a 3-slot, 400W card that's 2x slower than your server GPUs, I think that works out. Once you've spent $10k on the rest of the server (CPU + RAM + networking), if your options are adding 2 of these 9070 AIs or 3 MI300-whatevers, the server GPUs win for a server.
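To make that concrete, here's a toy cost-per-performance comparison; the prices, speeds, and card counts are all made-up assumptions on my part, just to show why the server GPUs come out ahead once the base server cost is sunk:

    # Toy cost-per-perf comparison with illustrative numbers:
    # a fixed ~$10k server base, 3-slot/400W consumer cards at half the speed
    # (so only 2 fit per box) vs. 3 denser server GPUs at a much higher price.
    def cost_per_perf(base_cost, gpu_price, gpu_perf, gpu_count):
        return (base_cost + gpu_price * gpu_count) / (gpu_perf * gpu_count)

    BASE = 10_000  # CPU + RAM + networking

    consumer = cost_per_perf(BASE, gpu_price=1_000, gpu_perf=1.0, gpu_count=2)
    server   = cost_per_perf(BASE, gpu_price=8_000, gpu_perf=2.0, gpu_count=3)

    print(f"2x consumer cards: ${consumer:,.0f} per perf unit")  # (10k + 2k) / 2  = 6,000
    print(f"3x server GPUs:    ${server:,.0f} per perf unit")    # (10k + 24k) / 6 ~ 5,667

Even with each server GPU costing 8x more, the fixed platform cost and the density/speed gap tip the per-performance math toward the server parts, so the cheap consumer card doesn't cannibalize data center sales.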
If you created a 32GB card that was great at AI workloads and cheap, it wouldn't matter what you set the MSRP to: the street price would rise to the level of other 32GB cards with similar performance.
And AMD already said a 32GB 9070 XT is not coming. Which I read as: "we're building a 32GB card on this chip, but it's not coming before Christmas and it will cost you a kidney."
I really would like to upgrade from my 2070 Super but I'm not getting a 16GB card now just to buy another one with 32GB later on.