That's not in debate. What is in debate is whether their two latest architectures are relatively poor at compute. That appears to be especially true of Maxwell, even more than Kepler, which wasn't particularly good either.
On the contrary, Kepler (GK110/GK210) is extremely good for compute.
Its DGEMM efficiency is around 93%, which is very high. Even Maxwell should be quite good for compute, though only in FP32.
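To make the 93% figure concrete, here is a minimal sketch of how DGEMM efficiency is usually computed: measured throughput divided by theoretical peak, where peak is core count × clock × 2 (an FMA counts as two FLOPs). The specific numbers below are illustrative assumptions roughly matching a GK110-class Tesla, not measured values from this thread.

```python
def peak_fp64_gflops(sm_count, fp64_cores_per_sm, clock_ghz):
    """Theoretical peak FP64 throughput in GFLOPS.
    cores x clock x 2, since one FMA = 2 floating-point ops."""
    return sm_count * fp64_cores_per_sm * clock_ghz * 2

# Assumed GK110-like figures: 15 SMX units, 64 FP64 units per SMX,
# 0.745 GHz base clock -> ~1430 GFLOPS peak FP64.
peak = peak_fp64_gflops(15, 64, 0.745)

# Assumed DGEMM benchmark result (hypothetical number for illustration).
measured_dgemm_gflops = 1330.0
efficiency = measured_dgemm_gflops / peak

print(f"peak: {peak:.0f} GFLOPS, efficiency: {efficiency:.1%}")
```

With these assumed inputs the ratio lands at roughly 93%, which is the kind of calculation behind efficiency claims like the one above.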
Meanwhile, as Cookie has already noted, AMD has the higher performance on paper, but several factors stand between paper specs and real-world performance. While AMD is no slouch, for whatever reason it seems to be harder to get peak performance out of GCN.
Sorry. To get it a bit back on topic: people are talking as if it's a conscious decision on nVidia's part to neglect compute in favor of gaming with their latest designs. People even said that about Kepler because it wasn't as powerful as GCN. I find this questionable, though. Is it by design? (I personally doubt it, considering the importance of compute to nVidia's business.) Or is it something they've had to do? (Is this the only way they can get the efficiency increase they are going for?)
Can't it be both? They already have GK210, which at 550 mm² is quite large. GM200 stripped FP64 capability to make room for more graphics resources; that option isn't really on the table for a compute part.
Plus, Pascal will be here next year anyhow, so a Tesla Maxwell would be very short-lived compared to the 3-4 years of Tesla Kepler.