And 225W power delivery for a 120W GTX 1660 Ti is explained perfectly well by its power draw?

AMD are not being upfront with the information, to say the least, and it certainly does not explain an 8+6-pin OEM 5700 XT when three quarters of a 40 CU Vega at sane clocks should be a ~150W card, if even that.
You see that "sane clocks" caveat you included? AMD's track record with that in shipping cards has been pretty awful recently.
and it certainly does not explain an 8+6-pin OEM 5700 XT when three quarters of a 40 CU Vega at sane clocks should be a ~150W card, if even that.

The size and number of PCIe power connectors has long been a staple of marketing fiction.
And 225W power delivery for a 120W GTX 1660 Ti is explained perfectly well by its power draw?
The GTX 1660 Ti can perfectly well work with just a 6-pin connector. And yet NO GTX 1660 Ti has anything less than an 8-pin connector and 225W power delivery.
It's also funny that you claim I have said anything, when I just point to what AMD has said - in the context of everybody claiming, before we have seen a single review of a Navi GPU, that it is worse than the RTX 2070 in efficiency. It might be, but it also might be better in efficiency than the RTX 2070. It's as simple as that.
The 40 CU Navi GPU was 14% faster than Vega 64 with 64 CUs. And at the same clocks, the 40 CU Navi chip used 23% less power than a Vega 64 cut down to 40 CUs.
You misread the meaning in that - all it means is that a high-end compute part to replace the current Vega line will not be forthcoming anytime soon.

Here you have it folks, straight from the top. Navi is, in the main, the gaming half, with some light compute, of AMD's graphics lines. Should settle some arguments here.
PCWatch: Instinct is based on 7nm, so how about changing to the Navi architecture and performance?
Forrest Norrod: There's going to be some overlap between the two. I think Lisa alluded to this earlier, where GCN and Vega will stick around for some parts and some applications, but Navi is really our new gaming architecture so I don't want to go beyond that. You'll see us have parts for both gaming applications and non-gaming applications.
https://www.anandtech.com/show/14568/an-interview-with-amds-forrest-norrod-naples-rome-milan-genoa
You do realize that the GTX 1660 Ti uses LESS power than the GTX 1060? And still uses an 8-pin connector, versus 6-pin connectors on ALL GTX 1060 GPUs?

A 225W power delivery that is also used by the GTX 1080/1070, RX 580/480, and RTX 2060/2070 - and yet AMD thinks it isn't enough for the 5700 XT, and decides that sharing power delivery with the 1080 Ti and 2080 is necessary out of the box on a reference card. Maybe they wanted an absolute shedload of headroom, but this would be the very first time a reference card is built to such overkill levels out of the box (i.e. 300W of delivery for a card that, by your claims, may draw half that).
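For reference, the "power delivery" figures being thrown around in this thread follow from the PCIe spec limits: 75W from the slot, 75W per 6-pin connector, and 150W per 8-pin connector. A minimal sketch of that arithmetic (the `board_budget` helper name is mine, not anything official):

```python
# PCIe power budget per source, per the PCI-SIG CEM spec (watts)
SLOT = 75      # PCIe x16 slot
PIN6 = 75      # 6-pin auxiliary connector
PIN8 = 150     # 8-pin auxiliary connector

def board_budget(*connectors):
    """Total rated power delivery: slot plus auxiliary connectors."""
    return SLOT + sum(connectors)

print(board_budget(PIN8))        # single 8-pin: 225 W (GTX 1660 Ti, reference 5700 XT)
print(board_budget(PIN8, PIN6))  # 8+6-pin: 300 W (the OEM 5700 XT discussed above)
```

So "225W power delivery" and "300W delivery" in the posts above are connector ratings, not measured draw.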
You said
When frankly AMD themselves don't say it, and their own 225W TBP spec for the 5700 XT (same as the RX 590) is very much at odds with that claim. I could end up eating crow, and AMD may actually have built an RTX 2070-killer that draws power at Polaris 10 levels, but nothing being shown now implies this is likely.
A 225W power delivery that is also used by: GTX 1080/1070, RX 580/480, RTX 2060/2070,
Edit: The GTX 1660, all of them seem to be 8 pins

They do not "seem". They ARE 8-pin connector GPUs. ALL of the GTX 1660 and 1660 Ti cards.
AMD does not say what I have said? And you think I made it up, or took it straight from the ********* footnotes of AMD's presentation?
Using the following graphics cards: Radeon 5700 XT(!) with 40 compute units, versus Vega 64 with 64 compute units. The Radeon 5700 XT was on average 14% faster across various games at 3 resolutions: 1080p, 1440p, and 4K.

Using the following graphics cards: Navi 10 with 40 compute units, versus Vega 64 with 40 CUs enabled, the Navi GPU was 23% more efficient.

Performance may vary.
The 40 CU Navi GPU was 14% faster than Vega 64 with 64 CUs. And at the same time, the 40 CU Navi chip used 23% less power than a Vega 64 cut down to 40 CUs.
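Taken at face value, the second footnote (23% less power at equal performance, same CU count and clocks) implies roughly a 30% perf/watt uplift over that Vega configuration. A minimal sketch, using only the slide numbers quoted above as inputs:

```python
# AMD footnote numbers, taken at face value
power_ratio = 1 - 0.23  # Navi 10 drew 23% less power than a 40 CU Vega 64
perf_ratio = 1.0        # at equal performance (same CU count and clocks)

perf_per_watt_gain = perf_ratio / power_ratio
print(f"{perf_per_watt_gain:.2f}x perf/watt vs the 40 CU Vega config")  # 1.30x
```

Whether that 40 CU Vega configuration is a fair baseline is exactly what the rest of the thread argues about.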
Most of the RTX 2070s have 8+6-pin power delivery, and a few are 8+8.
Edit: The GTX 1660, all of them seem to be 8 pins
I am sure that a 40 CU Navi GPU can be clocked to be 14% faster than a fully enabled Vega 64, or that it can be clocked to use 23% less power than a 40 CU Vega 64 (and be significantly more efficient than Vega at that clock). But you are saying it can do both at the same clocks - that's a very different statement.
Guys, read the footnotes again. Slowly. Especially this one: RX-358. It talks about Vega 64 running 40 CUs, in relation to the power comparison between Vega and Navi.

Not sure how much I would trust that - surely the Vega 10 chip was power-optimised for two specific configurations, the 64 and 56 CU SKUs, during design, and in the BIOS since.
I think it's weird that they would test any Vega product with only 40 CUs enabled.
RX-358 is about performance per mm².
RX-358 is about performance per mm².

The same slide is talking about power draw compared to Vega 64. Where did AMD, in the same slide, say that it is about performance per mm²?
Guys, read the footnotes again. Slowly. Especially this one: RX-358. It talks about Vega 64 running 40 CUs, in relation to the power comparison between Vega and Navi.
Do you KNOW that 225W is the actual power draw of the 5700 XT? What if it is 215W? What if it is 217W?

Why should I care about that contrived nonsense, involving a Vega SKU that never existed? I prefer to base my analysis on the hard numbers given elsewhere in the presentation. The "typical board power" for the RX 5700 XT is specifically listed as 225W. And if we take the performance numbers provided at face value, the RX 5700 XT averages about 11% higher performance than the RTX 2070. We know from independent testing that (regardless of what Nvidia claims) the actual TBP of the RTX 2070 Founders Edition is 200W. That means that if the perf/watt of Navi were equivalent to Turing, the RX 5700 XT should be a 222W card. Since it's actually a few watts higher, Navi has slightly worse perf/watt than Turing. If they were on the same node, that negligible difference would be fine, but Navi can barely keep up in perf/watt despite a full node advantage. That's the problem.
