Wild, probably inaccurate speculation warning: It wouldn't entirely surprise me. AMD claims RDNA2 is 50% faster per watt than RDNA. They've apparently fitted the thing with two 8-pin power connectors, which suggests a minimum TDP of 300W (much less and it would presumably have 8+6 pin connectors).
Edit: I made a bad assumption here: the Titan RTX, for instance, is rated at 280W yet has two 8-pin connectors.
The 5700 XT is rated at 225W. The RTX 3080 Founders Edition, according to TechPowerUp benchmarks, is almost exactly twice the speed of the 5700 XT on average. At a TDP of 300W, the 6900 XT would also be expected to be exactly twice as fast as the 5700 XT (1.5*300/225=2), which would put it on par with the RTX 3080. But that's the minimum. It could theoretically be anywhere up to 375W, although practically there is going to be a limit of maybe 350W or so.
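The back-of-envelope arithmetic above can be sketched like this (all inputs are the marketing and spec figures quoted above, so treat the result with the same scepticism):

```python
# Rough perf-per-watt scaling estimate, using only the numbers quoted above.
rdna2_perf_per_watt_gain = 1.5   # AMD's claimed 50% uplift over RDNA
tdp_5700xt = 225                 # watts, RX 5700 XT rated TDP
tdp_6900xt_min = 300             # watts, minimum implied by two 8-pin connectors

# Relative performance ~= efficiency gain * power ratio
# (assumes performance scales linearly with power, which it often doesn't)
relative_perf = rdna2_perf_per_watt_gain * tdp_6900xt_min / tdp_5700xt
print(relative_perf)  # 2.0 -> roughly twice a 5700 XT, i.e. RTX 3080 territory
```

At the theoretical 375W ceiling the same formula gives 2.5x, which is why the 300W figure is the interesting lower bound here.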
Then there is the big caveat that watts don't actually translate linearly into performance in many cases. But I assume here that the 6900 XT is well within a design envelope that meets the stated efficiency target, and not some hastily overclocked beast.
Obviously this is all massively simplified and abstracted, and depends on marketing numbers that may or may not hold water in reality, or may mean something different to how I've interpreted them.
Not necessarily either way. Aside from the standard caveats about graphics cards being more complicated than just their shader performance, they are fairly different beasts.
For example: the RX 5700 XT is about on par with, or a hair faster than, the RTX 2070. Where the RX 5700 XT has 2,560 SPs, the RTX 2070 has 2,304 CUDA cores. The difference between the 3080 and the 2070 appears much larger, but it's worth bearing in mind that Ampere shaders are different to Turing shaders.
So depending on workload, the RTX 3080 may, in the worst-case scenario (equal or more integer operations than floating-point operations), act like it 'only' has 4,352 Turing-equivalent CUDA cores. Presumably most workloads are floating-point-heavy enough that this design trade-off makes sense.
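A quick sketch of that worst-case count, assuming the usual reading of the Ampere SM layout (half of each SM's 128 FP32 lanes sit on shared FP32/INT32 datapaths, whereas Turing had dedicated FP32 and INT32 units):

```python
# Worst-case Turing-equivalence estimate for the RTX 3080's shader count.
ampere_cuda_cores = 8704   # RTX 3080's advertised CUDA core count
shared_fraction = 0.5      # half the FP32 lanes share a datapath with INT32

# If integer work saturates every shared lane, only the dedicated
# FP32 lanes are left doing floating-point work.
worst_case_fp32 = int(ampere_cuda_cores * (1 - shared_fraction))
print(worst_case_fp32)  # 4352 Turing-equivalent FP32 cores
```

In practice real shaders mix FP and INT, so the effective number lands somewhere between 4,352 and 8,704 depending on the workload.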