AMD Vega (FE and RX) Benchmarks [Updated Aug 10 - RX Vega 64 Unboxing]


french toast

Senior member
Feb 22, 2017
You do realize it is the AMD fanboys that call the 1080 a "mid-range card", because they like to complain that Nvidia is charging so much for a "mid-range card". Performance be damned; according to some, mid-range is defined by die size and GPU codename...
Er, no.
Certain people - AMD fanboys or not - would probably be referring to mid-range in that context to highlight the high margins Nvidia makes on a moderately sized die, not claiming that a $500 GPU is mid-range in the normal understanding of the term.
The GTX 1060, or even the 1070, is what most people would consider mid-range - roughly the $180-350 price bracket, I would suggest.
 

Elixer

Lifer
May 7, 2002
Are the HBM2 yields so bad that they are forced to pump 1.35 V into them, instead of the 1.2 V that SK Hynix specifies? https://www.skhynix.com/eng/product/dramHBM.jsp

This was found here: http://imgur.com/a/ImPz1
Also, HBM stock voltage is 1.35 V when SK Hynix specs 1.2 V, so the HBM seems to be shipped at its limits. Any more voltage will probably cause pretty rapid degradation.
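To put rough numbers on 1.35 V versus the 1.2 V spec, here is a minimal Python sketch, assuming the usual CMOS dynamic-power relation P ∝ f·V² and ignoring leakage:

```python
# Minimal sketch: extra dynamic power implied by running the HBM2 at the
# reported 1.35 V instead of SK Hynix's 1.2 V spec, assuming the usual
# CMOS relation P ~ f * V^2 (leakage ignored, same memory clock).
spec_v = 1.20
reported_v = 1.35

scale = (reported_v / spec_v) ** 2
print(f"Power at 1.35 V vs 1.2 V: {scale:.2f}x (~{(scale - 1) * 100:.0f}% more)")
# -> ~1.27x, i.e. roughly 27% more power dissipated in the HBM2 stacks
```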
 

CatMerc

Golden Member
Jul 16, 2016
That would also explain why, in applications that rely on memory bandwidth, the GPU actually sees a decrease in performance per clock.

Somehow, in the current state of the software (both drivers and applications), it performs just like Fiji.
That's the thing. Its effective bandwidth, thanks to compression, cache changes, and TBR (if it's working), is the same as Fury X's. But that's not enough; it needs to increase effective bandwidth by quite a bit more, and working from a base of just 250 GB/s is too pitiful.
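To sketch what that means in rough numbers (a minimal Python sketch; the raw-bandwidth figures are the cards' specs, while the target multipliers are illustrative, not measured values):

```python
# Minimal sketch of the "effective bandwidth" point: what the shaders see
# is raw bandwidth times whatever compression / cache changes / TBR
# recover. The target multipliers below are illustrative, not measured.
FURY_X_RAW = 512   # GB/s, 4096-bit HBM1 (known spec)
VEGA_FE_RAW = 484  # GB/s, 2048-bit HBM2 at ~1.89 Gbps (from this thread)

def multiplier_needed(target_effective_gbs: float, raw_gbs: float) -> float:
    return target_effective_gbs / raw_gbs

# Just matching Fury X's raw figure already requires a small gain:
print(f"To match Fury X: {multiplier_needed(FURY_X_RAW, VEGA_FE_RAW):.2f}x")
# Beating it by, say, 30% (illustrative) requires a lot more recovery:
print(f"To beat it by 30%: {multiplier_needed(FURY_X_RAW * 1.3, VEGA_FE_RAW):.2f}x")
```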
 

Veradun

Senior member
Jul 29, 2016
Exactly. Also, it's quite clear that a person buying the FE for its intended use would want the professional workload performance to be up to snuff, and it is. That buyer isn't disappointed.



Very interesting.
Also: the buyer will have 16 GB.
 

Azix

Golden Member
Apr 18, 2014
400 W... :eek: When they say "through the cable", does that mean it's actually drawing even more (since the PCI-E slot can supply quite a bit too)?

They're going to have a hard time hitting that 1600 MHz... They should advertise minimum boost clocks instead, before some people get really disappointed.

I wouldn't put too much stock in the current power measurements.
 

Snarf Snarf

Senior member
Feb 19, 2015
My 980 Ti, pulling 300 W+ right now in the middle of summer in an upstairs bedroom, already gets my ambient room temp over 80°F after a few hours of gaming. I can't imagine any more power than that; I would literally melt. This is of course in south Texas, where we have summers well over 100°F for weeks at a time.

I was okay with higher power draw at 1080 Ti-level performance because it would offset the perf/watt of my 980 Ti, but if RX Vega is going to hit 400 W at 1600 MHz, it's a no sale now.
 

Karnak

Senior member
Jan 5, 2017
Why is that?
Are you in the camp that thinks there is new silicon for RX?
Because of AVFS (Adaptive Voltage & Frequency Scaling, new with Polaris) and ACG (Advanced Clock Gating, new with Vega). ACG is completely disabled as of now and AVFS isn't working properly.

That's coming from a guy who works at AMD.
https://www.forum-3dcenter.org/vbulletin/showthread.php?p=11421717#post11421717

Translation:
https://translate.google.com/transl...lletin/showthread.php?p=11421717#post11421717
 

OatisCampbell

Senior member
Jun 26, 2013
So your definition of tier is "SKU". Ok.
What's your definition?
Shader count, ROPs, and memory interface have been the defining characteristics of a tier for as long as I can remember.

You seem to think the GTX 1070 is mid-range and the 1080 is high-end. What makes the 1080 high-end compared to the 1070? The 1070 has the same GP104 chip, same memory bus, and same ROPs, just a lower shader count.

Shouldn't the 1070 also be high-end by your definition?

In any case, Vega currently has three faster products to compete with in its price range, and it loses to all three (plus remaining stock of last year's Titan).

AMD can flip this around with sub-$500 pricing on RX Vega, but anything $500 and over won't sell in the current market.
 

tviceman

Diamond Member
Mar 25, 2008
Vega Nano nowhere to be seen.

Edit: http://www.pcgameshardware.de/Vega-...ase-AMD-Radeon-Frontier-Edition-1232684/3/#a1

On average, Vega Frontier is 35% faster than Fury X while having a 55% higher core clock.

It makes zero sense at all.

The sheer clock speed difference should put it between the GTX 1080 and Titan X (much closer to Titan X), and that is even excluding the architecture improvements.

GTX 480: 480 cores, 700 MHz clock speed
GTX 285: 240 cores, 648 MHz clock speed

The GTX 480 was only 43% faster despite double the core count and a 9% higher frequency. Even the GTX 580, which had 2.13x the cores and a 19% higher clock speed, was only 66% faster. Theoretically, the GTX 480 should have been at least 2x faster and the GTX 580 2.5x faster than the GTX 285.

Decisions are made, tests are run, production takes place, reality sets in, and things don't always translate.
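Putting that scaling argument into numbers (a quick Python sketch using the figures from the post above, plus the fact that Vega FE and Fury X both carry 4096 shaders):

```python
# Quick check of the scaling argument above: naive throughput scales with
# cores * clock, but measured performance rarely follows.
def theoretical_speedup(cores_ratio: float, clock_ratio: float) -> float:
    return cores_ratio * clock_ratio

# Numbers from the post above (core counts and MHz clocks).
print(f"GTX 480 vs 285: {theoretical_speedup(480 / 240, 700 / 648):.2f}x theoretical, ~1.43x actual")
print(f"GTX 580 vs 285: {theoretical_speedup(2.13, 1.19):.2f}x theoretical, ~1.66x actual")
# Vega FE and Fury X both have 4096 shaders, so only the clock scales:
print(f"Vega FE vs Fury X: {theoretical_speedup(1.0, 1.55):.2f}x theoretical, ~1.35x actual")
```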
 

beginner99

Diamond Member
Jun 2, 2009
Why is that?
Are you in the camp that thinks there is new silicon for RX?

AFAIK, in this or the other Vega thread it was mentioned that the voltage was locked at something ridiculously high even at stock, with no way to lower it. And high means higher than RX 580 high. The theory was that AMD had trouble with binning and finding stable defaults, so for FE they just upped the voltages to ridiculous levels to ensure 100% stability. Chances are the voltage on RX Vega could be a lot lower at the same clocks, and hence power draw would also be a lot lower. If we get GTX 1080 performance at 200 W, that would be far more acceptable than the current situation.

One thing that is a fact is the high voltage. Whether it can actually be lowered while keeping 1600 MHz is unknown. This is just one of many theories that could make RX Vega not completely suck, and I hope at least some aspects of them are true. But one really has to wonder why AMD released the FE at all. Are the legal repercussions for missing deadlines really that heavy? Can anyone shine a light on this? Or is this whole thing a money-making plot? Release a sucky FE, the stock drops, buy stock/options, release a fixed RX Vega and the stock goes "through the roof". I don't know. It really is very puzzling.

Or they know it's kind of bad, so they release an FE that really, really sucks and then add some minor improvements so they can say, "Hey, it isn't that bad, we improved over FE."

What I can't believe yet is that it really is this bad, and that RX might only gain +10% and nothing else. That would be an utter failure. It needs to be at least +30% at this power use, or much lower power.
 

Elixer

Lifer
May 7, 2002
One thing that is a fact is the high voltage. Whether it can actually be lowered while keeping 1600 MHz is unknown. This is just one of many theories that could make RX Vega not completely suck, and I hope at least some aspects of them are true. But one really has to wonder why AMD released the FE at all. Are the legal repercussions for missing deadlines really that heavy? Can anyone shine a light on this? Or is this whole thing a money-making plot? Release a sucky FE, the stock drops, buy stock/options, release a fixed RX Vega and the stock goes "through the roof". I don't know. It really is very puzzling.

This is getting a bit OT, but that would be lying to investors.
Investors can and do win these kinds of cases, which is why companies need to be very careful.

Obviously something went wrong; we just have no idea what the issue is/was.
The next investor meeting is coming up soon, so we should hear more details then.
AMD can't talk about things before then either; they are in their quiet period.

Why Vega FE's performance is all over the map can't really be answered by anyone outside of AMD. All we know right now is that it runs hot, eats power like there's no tomorrow, and seems to be bandwidth-starved while pumping above-spec voltage into the HBM2.
We also know it took LN2 to keep Vega FE from downclocking from 1600 MHz, and even then the power draw was still there.

So, here we wait for more info.
 

JDG1980

Golden Member
Jul 18, 2013
Are the HBM2 yields so bad that they are forced to pump 1.35 V into them, instead of the 1.2 V that SK Hynix specifies?

We know that AMD originally wanted to run the HBM2 at 2.0 Gbps, to get the same 512 GB/sec bandwidth as Fiji despite only half the bus width. But they failed, and were forced to settle for just under 1.9 Gbps. Perhaps getting even that far required overclocking/overvolting? It's also another potential reason for the delays. If the design was made with the assumption that they would have 512 GB/sec, having only 480 GB/sec (and also perhaps higher latency than expected) could have caused substantial performance regressions compared to what was anticipated. I wonder if the RX Vega (and the real professional cards) will be able to achieve 2.0 Gbps HBM2 speeds at a normal voltage. Maybe it will be easier on 4-Hi stacks?
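A quick sanity check on those figures (a minimal Python sketch, assuming Vega's 2048-bit bus across two HBM2 stacks):

```python
# Sanity check on the figures above: Vega uses two 1024-bit HBM2 stacks,
# so bandwidth = bus_width_bits / 8 * transfer_rate (Gbps per pin).
def hbm_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits / 8 * gbps_per_pin

print(f"Target (2.0 Gbps):   {hbm_bandwidth_gbs(2048, 2.0):.0f} GB/s")   # 512 GB/s, Fiji parity
print(f"Shipped (1.89 Gbps): {hbm_bandwidth_gbs(2048, 1.89):.0f} GB/s")  # ~484 GB/s
```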
 

Glo.

Diamond Member
Apr 25, 2015
We know that AMD originally wanted to run the HBM2 at 2.0 Gbps, to get the same 512 GB/sec bandwidth as Fiji despite only half the bus width. But they failed, and were forced to settle for just under 1.9 Gbps. Perhaps getting even that far required overclocking/overvolting? It's also another potential reason for the delays. If the design was made with the assumption that they would have 512 GB/sec, having only 480 GB/sec (and also perhaps higher latency than expected) could have caused substantial performance regressions compared to what was anticipated. I wonder if the RX Vega (and the real professional cards) will be able to achieve 2.0 Gbps HBM2 speeds at a normal voltage. Maybe it will be easier on 4-Hi stacks?
There is also the question of whether Vega FE has a final, stable BIOS with proper GPU voltage settings.

If not, what we see is not that strange.
 

Ancalagon44

Diamond Member
Feb 17, 2010
AFAIK, in this or the other Vega thread it was mentioned that the voltage was locked at something ridiculously high even at stock, with no way to lower it. And high means higher than RX 580 high. The theory was that AMD had trouble with binning and finding stable defaults, so for FE they just upped the voltages to ridiculous levels to ensure 100% stability. Chances are the voltage on RX Vega could be a lot lower at the same clocks, and hence power draw would also be a lot lower. If we get GTX 1080 performance at 200 W, that would be far more acceptable than the current situation.

One thing that is a fact is the high voltage. Whether it can actually be lowered while keeping 1600 MHz is unknown. This is just one of many theories that could make RX Vega not completely suck, and I hope at least some aspects of them are true. But one really has to wonder why AMD released the FE at all. Are the legal repercussions for missing deadlines really that heavy? Can anyone shine a light on this? Or is this whole thing a money-making plot? Release a sucky FE, the stock drops, buy stock/options, release a fixed RX Vega and the stock goes "through the roof". I don't know. It really is very puzzling.

Or they know it's kind of bad, so they release an FE that really, really sucks and then add some minor improvements so they can say, "Hey, it isn't that bad, we improved over FE."

What I can't believe yet is that it really is this bad, and that RX might only gain +10% and nothing else. That would be an utter failure. It needs to be at least +30% at this power use, or much lower power.

When Polaris first came out, its voltages were also much higher, and this was part of the reason for its high power draw.

Could be the same thing with Vega - new silicon, new drivers, new BIOS and whatnot - perhaps that is causing higher-than-normal voltages to be required for stability?

If reviews of Vega have it drawing large amounts of power, then I'll wait at least six months before I consider buying. I don't think I'd ever buy Nvidia, but if I buy AMD, I don't want a space heater.
 

Tup3x

Senior member
Dec 31, 2016
We know that AMD originally wanted to run the HBM2 at 2.0 Gbps, to get the same 512 GB/sec bandwidth as Fiji despite only half the bus width. But they failed, and were forced to settle for just under 1.9 Gbps. Perhaps getting even that far required overclocking/overvolting? It's also another potential reason for the delays. If the design was made with the assumption that they would have 512 GB/sec, having only 480 GB/sec (and also perhaps higher latency than expected) could have caused substantial performance regressions compared to what was anticipated. I wonder if the RX Vega (and the real professional cards) will be able to achieve 2.0 Gbps HBM2 speeds at a normal voltage. Maybe it will be easier on 4-Hi stacks?
I guess there's a reason why NVIDIA doesn't clock its HBM2 anywhere near as high as AMD. It seems one stack can do about 180 GB/s without pushing the voltage through the roof. Also, since NVIDIA uses four stacks of 4 GB HBM2, I wouldn't have high hopes of RX Vega having more bandwidth than FE. Maybe a bit, but they seem to be running the stacks at the edge of what's possible.
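A rough per-stack comparison (a Python sketch; the P100 total of 720 GB/s over four stacks is an assumption from NVIDIA's published specs, and the Vega FE figure comes from earlier in the thread):

```python
# Rough per-stack bandwidth comparison. The P100 figures are assumptions
# taken from NVIDIA's published specs (720 GB/s over four HBM2 stacks);
# the Vega FE figure comes from earlier in the thread (~484 GB/s, two stacks).
cards = {
    "Tesla P100 (assumed)": (720, 4),
    "Vega FE": (484, 2),
}
for name, (total_gbs, stacks) in cards.items():
    print(f"{name}: {total_gbs / stacks:.0f} GB/s per stack")
# -> 180 GB/s per stack vs ~242 GB/s per stack: AMD pushes each stack
#    roughly a third harder than NVIDIA does.
```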
 

Head1985

Golden Member
Jul 8, 2014
What's your definition?
Shader count, ROPs, and memory interface have been the defining characteristics of a tier for as long as I can remember.

You seem to think the GTX 1070 is mid-range and the 1080 is high-end. What makes the 1080 high-end compared to the 1070? The 1070 has the same GP104 chip, same memory bus, and same ROPs, just a lower shader count.
Nope, the GTX 1070 has one of its four GPCs disabled. The ROPs are not connected to the GPCs, so the GTX 1070 acts like a 48-ROP GPU anyway.

BTW, Vega is a BIG mess right now. Those Beyond3D results are bad: geometry is still slow (pretty much Polaris level) and the bandwidth is super bad. I'm not sure what AMD has been doing since December. If there is a hardware bug, they should have been able to do a respin since December.
 

Guru

Senior member
May 5, 2017
My 980 Ti, pulling 300 W+ right now in the middle of summer in an upstairs bedroom, already gets my ambient room temp over 80°F after a few hours of gaming. I can't imagine any more power than that; I would literally melt. This is of course in south Texas, where we have summers well over 100°F for weeks at a time.

I was okay with higher power draw at 1080 Ti-level performance because it would offset the perf/watt of my 980 Ti, but if RX Vega is going to hit 400 W at 1600 MHz, it's a no sale now.

That is just silly hyperbole. You could have a GPU that consumes 1000 W and stay cool if you have beefy enough cooling.

Sure, more hot air is produced, but again, it's not going to make any real difference to the ambient temperature of the room; 350 to 400 W is nothing, especially with effective cooling. You need a 1.5 kW heater to heat a 5x5 room properly in the winter, and that's a freaking heater, designed to heat.

Wattage only matters when you're looking at the electric bill. 100 W isn't even a big difference at all, especially for a GPU that won't be at 99-100% utilization most of the time.

So a difference of 100 W is insignificant in every single way, especially when we are talking about, say, 300 W vs 400 W. If it's 100 W vs 200 W, you have to take the PSU and cables into account - whether it has a 6-pin or an 8-pin connector, and so on...
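For what it's worth, the heater comparison in numbers (a minimal Python sketch; essentially all GPU board power ends up as heat in the room, and 1 W = 3.412 BTU/h is a standard conversion):

```python
# The heater comparison in numbers: essentially all GPU board power ends
# up as heat in the room, and 1 W = 3.412 BTU/h (standard conversion).
for watts in (300, 400, 1500):
    print(f"{watts:>5} W -> {watts * 3.412:>6.0f} BTU/h")
# A 400 W card dumps about 27% of the heat of the 1.5 kW space heater
# mentioned above -- not nothing, but not a dedicated heater either.
```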
 

Tee9000

Junior Member
Jul 2, 2017
Edit: http://www.pcgameshardware.de/Vega-...ase-AMD-Radeon-Frontier-Edition-1232684/3/#a1

On average, Vega Frontier is 35% faster than Fury X while having a 55% higher core clock.

Actually, the difference is 22%.

PCGH said:
The provisional value in the PCGH performance index is 69.6 (free boost) and 75.1 with the maximum boost. For comparison, the Fury X is 56.8, the GTX 1070 is 64.8, and the GTX 1080 (10 Gbps) is 79.2, while the Titan Xp is the 100 percent benchmark.

PCGH performance index, which includes 19 games at three resolutions:

Fury X --------------------- 100% (56.8 rating, baseline)
GTX 1070 ------------------- 114% (64.8 rating)
Vega FE -------------------- 122% (69.6 rating)
Vega FE @ 1600 MHz --------- 132% (75.1 rating)
GTX 1080 (10 Gbps) --------- 139% (79.2 rating)
Titan Xp ------------------- 176% (100.0 rating)

Using the same 19-game, three-resolution index and excluding the Fury X reference, we have the following:

GTX 1070 - 7% gap - Vega FE - 14% gap - GTX 1080 (10 Gbps) - 26% gap - Titan Xp.
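Those percentages fall straight out of the PCGH ratings (a minimal Python sketch):

```python
# Reproducing the relative percentages from the PCGH index ratings above.
ratings = {
    "Fury X": 56.8,
    "GTX 1070": 64.8,
    "Vega FE": 69.6,
    "Vega FE @ 1600 MHz": 75.1,
    "GTX 1080 (10 Gbps)": 79.2,
    "Titan Xp": 100.0,
}
base = ratings["Fury X"]
for card, rating in ratings.items():
    print(f"{card:<20} {rating / base * 100:>4.0f}%")
```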
 