AMD Vega (FE and RX) Benchmarks [Updated Aug 10 - RX Vega 64 Unboxing]


Guru

Senior member
May 5, 2017
Obviously, nobody wants to throw away good dies, but the last time AMD even tried to divert leaky parts from the top SKU was with Tahiti and the 7870LE.
Pretty sure the Vega Hybrid that Gamers Nexus built, with the OC, is the best-case scenario for RX Vega.

That should be a good indication, I think. Go check out their results.
No, it's not, because they just changed the cooler and nothing else.

Do I think RX Vega will be competitive with the 1080 Ti? Absolutely not; that card is up to 20% faster in some titles. But I do think a more stable 1630 MHz clock can bring a 5% performance improvement, drivers another 5%, and say some feature gets enabled or some bottleneck gets fixed in the meantime, and we are looking at up to 15% more performance. That would put it within striking distance of the 1080 Ti, maybe even beating it in some DX12/Vulkan titles like BF1, Sniper Elite 4 and Doom.
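
Back-of-the-envelope, if those three ~5% gains were real and stacked multiplicatively (just an assumption to show the math), the combined uplift lands right around that 15% figure:

```
#include <cstdio>

int main() {
    // Assumed, purely illustrative gains: sustained clocks, drivers, misc fixes
    double combined = 1.05 * 1.05 * 1.05;
    std::printf("combined uplift: %.1f%%\n", (combined - 1.0) * 100.0); // ~15.8%
    return 0;
}
```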

BTW, does anyone think that the Vega FE edition is just a modified Fiji done on 14nm, hence the much higher clocks of up to 1600 MHz, and that it doesn't have the new features, which is why it performs substantially weaker than people expected?

While AMD are sitting on the REAL VEGA and will release it for desktop gaming?
 

CatMerc

Golden Member
Jul 16, 2016
Or they had to meet a Q2 deadline, but were absolutely not ready with either the BIOS or the drivers, so they pulled together a stable branch that is not very performant and not very feature-packed.

Any number of explanations that wouldn't count as gimped but are not optimal.
 

tential

Diamond Member
May 13, 2008
Or AMD's marketing department is rubbish to the degree that they deliberately released a crippled GPU, with inactive features and the wrong BIOS that doesn't enable full control over voltages, to lower expectations and destroy the hype train, and then show the "real" performance of the architecture with the RX Vega release.

When in business history can you (or anyone, for that matter) cite a company doing this?
 

Cloudfire777

Golden Member
Mar 24, 2013
Or AMD's marketing department is rubbish to the degree that they deliberately released a crippled GPU, with inactive features and the wrong BIOS that doesn't enable full control over voltages, to lower expectations and destroy the hype train, and then show the "real" performance of the architecture with the RX Vega release.
AMD can't do that. It goes against everything AMD is about, which is transparency. Nor would Vega FE owners be too happy about AMD selling a premium card with fewer features than the card that will cost a lot less.
Then AMD would be in trouble with investors, shareholders and board members for deliberately hiding the truth.

No, this is AMD knowing that their Vega architecture didn't turn out the way they wanted, i.e. it's crap compared to the competition. So they make a last-minute adjustment, sell a card called Vega FE that does OK in professional applications, and hope it can recoup some of the money they poured into making this poor architecture.
I bet you Vega RX was originally planned to release first, many months ago, but it was delayed because they are in panic mode, trying everything they've got to squeeze as much as possible out of game optimizations before releasing the mess onto the market.
Everyone knows you can only extract so much through drivers, and it will come back to bite them once new games are released and they haven't gotten around to optimizing them for Vega RX.
 

exquisitechar

Senior member
Apr 18, 2017
Vega FE is just bizarre...never seen anything like it.

I recall multiple people saying that many key features, such as the draw stream binning rasterizer, are disabled in the drivers. If this is the case, why is everyone getting worked up over FE gaming performance? Or was this proven to be false? And why is AMD being so vague about all of this? o_O
 

thecoolnessrune

Diamond Member
Jun 8, 2005
When in business history can you (or anyone, for that matter) cite a company doing this?

When it comes to releasing products with crippled features or features pulled entirely at a later time, it's not completely unheard of. Intel pulling TSX from a bunch of CPUs that were sold on the basis of supporting it comes to mind for instance, and that wasn't all that long ago.

EDIT: Oh, and speaking of pre-release software, I think of the several ASUS motherboards being sold when AM4 first launched that shipped with pre-Ryzen BIOS versions, with ASUS support telling people they should acquire a "supported" CPU (an OEM-only AM4 Bristol Ridge CPU) to flash an updated BIOS. Lulz.
 

PeterScott

Platinum Member
Jul 7, 2017
No, this is AMD knowing that their Vega architecture didn't turn out the way they wanted, i.e. it's crap compared to the competition. So they make a last-minute adjustment, sell a card called Vega FE that does OK in professional applications, and hope it can recoup some of the money they poured into making this poor architecture.

A more likely scenario is that driver development for advanced gaming features, like the Draw Stream Binning Rasterizer (DSBR), proved a lot harder than anticipated.

AMD execs promised a Q2 Vega delivery (and a chunk of bonuses was tied to it), so they delivered a "workstation" Vega card, without the advanced features mainly needed for gaming, to meet that date.

This gives them more breathing room to finish the drivers, and deliver working DSBR in Vega RX.

Not that this is a good thing. They are still late, and Vega FE was really just a stopgap aimed at meeting the letter of the delivery date, but not its spirit.

But there is some hope of seeing improved gaming performance in Vega RX from the newer drivers. Though don't expect miracles.
 

Elixer

Lifer
May 7, 2002
A more likely scenario is that driver development for advanced gaming features, like the Draw Stream Binning Rasterizer (DSBR), proved a lot harder than anticipated.
That is done in hardware, not software; it is supposed to be seamless and kick in automatically, with nothing needing to be done on the game/app side.
If Vega RX were that much faster than FE, they would have released the card already.
Or have shown direct comparisons with the 1080 Ti in several games, or let sites do a preview of its performance.

TDP over Vega FE is basically unchanged.
Not necessarily so; they might have very good reasons for holding it off.
They are pumping more voltage to the HBM2, above the max specified by JEDEC, so it could be a yield issue here.
They are also giving more juice to Vega itself, which does explain the TDP, so unless they lower the voltages way down (at a cost of performance), we will still see high TDP numbers.
 

Glo.

Diamond Member
Apr 25, 2015
When in business history can you(or anyone for this matter) cite a company doing this?
Let me give you an example.

AMD claims that the Raven Ridge APU's GPU is 40% faster than the 7th generation APU's GPU while fitting in a 50% lower TDP.

The A12-9800 has 8 CUs at a 1108 MHz core clock, around 1.134 TFLOPS of compute power, in a 65W TDP thermal envelope.
The Raven Ridge APU has 11 CUs at 800 MHz, around 1.126 TFLOPS of compute power, in a 35W TDP.

It means that per clock Vega has to be 30% faster than GCN4(!), because Bristol Ridge and Stoney Ridge are the same family of GPUs, and both are in the same family as Polaris.
https://videocardz.com/62250/amd-vega10-and-vega11-gpus-spotted-in-opencl-driver
GFX81: AMUR, STONEY, ELLESMERE, DERECHO
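
Quick sanity check on those TFLOPS numbers (assuming the usual GCN math of 64 shaders per CU and 2 FLOPs per shader per clock):

```
#include <cstdio>

// FP32 TFLOPS = CUs * 64 shaders/CU * 2 FLOPs/clock * clock(MHz) / 1e6
static double tflops(int cus, double mhz) { return cus * 64 * 2 * mhz / 1e6; }

int main() {
    double a12 = tflops(8, 1108.0);  // A12-9800 (Bristol Ridge)
    double rr  = tflops(11, 800.0);  // rumored Raven Ridge engineering sample
    std::printf("A12-9800:    %.3f TFLOPS\n", a12);  // ~1.134
    std::printf("Raven Ridge: %.3f TFLOPS\n", rr);   // ~1.126
    std::printf("difference:  %.1f%%\n", (rr / a12 - 1.0) * 100.0);  // ~ -0.7%, basically a wash
    return 0;
}
```

So the raw throughput is essentially unchanged; any claimed 40% gain would have to come from per-clock/per-FLOP efficiency, which is the point being made.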
AMD can't do that. It goes against everything AMD is about, which is transparency. Nor would Vega FE owners be too happy about AMD selling a premium card with fewer features than the card that will cost a lot less.
Then AMD would be in trouble with investors, shareholders and board members for deliberately hiding the truth.

No, this is AMD knowing that their Vega architecture didn't turn out the way they wanted, i.e. it's crap compared to the competition. So they make a last-minute adjustment, sell a card called Vega FE that does OK in professional applications, and hope it can recoup some of the money they poured into making this poor architecture.
I bet you Vega RX was originally planned to release first, many months ago, but it was delayed because they are in panic mode, trying everything they've got to squeeze as much as possible out of game optimizations before releasing the mess onto the market.
Everyone knows you can only extract so much through drivers, and it will come back to bite them once new games are released and they haven't gotten around to optimizing them for Vega RX.
Vega should be 40% faster per clock than Fiji in graphics, and on top of that there should be a big difference from higher clock speeds and better utilization of memory bandwidth, resulting in almost twice the graphics performance of Fiji.

We do not see this so far. Is it drivers? Is it software? Is it hardware? At the hardware level there is NOTHING that would bottleneck Vega so much. So my guess is that Vega FE is bottlenecked by software.
 

Glo.

Diamond Member
Apr 25, 2015
That is done in hardware, not software; it is supposed to be seamless and kick in automatically, with nothing needing to be done on the game/app side.
The driver's job is to expose which features are present on the hardware and, in the DX11 case, what the application can do with those features.

In DX12 it is the application's job to decide what to do with the hardware's features.
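
For what it's worth, this is roughly what that looks like from the application side in DX12: the app asks the runtime/driver what the hardware supports via ID3D12Device::CheckFeatureSupport and then decides what to do with it. A minimal sketch (Windows only, links against d3d12.lib; the particular fields printed are just examples, nothing Vega-specific):

```
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    // Default adapter, minimum feature level
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12 device available\n");
        return 1;
    }

    // Ask the driver which optional features/tiers this GPU exposes
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts)))) {
        std::printf("Resource binding tier:           %d\n", opts.ResourceBindingTier);
        std::printf("Conservative rasterization tier: %d\n", opts.ConservativeRasterizationTier);
        std::printf("Rasterizer ordered views:        %s\n", opts.ROVsSupported ? "yes" : "no");
    }
    return 0;
}
```

Notably, nothing like the DSBR shows up in that list; per the posts below, it is supposed to be transparent to the API.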
 

PeterScott

Platinum Member
Jul 7, 2017
Let me give you an example.

AMD claims that the Raven Ridge APU's GPU is 40% faster than the 7th generation APU's GPU while fitting in a 50% lower TDP.

The A12-9800 has 8 CUs at a 1108 MHz core clock, around 1.134 TFLOPS of compute power, in a 65W TDP thermal envelope.
The Raven Ridge APU has 11 CUs at 800 MHz, around 1.126 TFLOPS of compute power, in a 35W TDP.

It means that per clock Vega has to be 30% faster than GCN4(!), because Bristol Ridge and Stoney Ridge are the same family of GPUs, and both are in the same family as Polaris.
https://videocardz.com/62250/amd-vega10-and-vega11-gpus-spotted-in-opencl-driver


Vega should be 40% faster per clock than Fiji in graphics, and on top of that there should be a big difference from higher clock speeds and better utilization of memory bandwidth, resulting in almost twice the graphics performance of Fiji.

We do not see this so far. Is it drivers? Is it software? Is it hardware? At the hardware level there is NOTHING that would bottleneck Vega so much. So my guess is that Vega FE is bottlenecked by software.

That is just a bunch of assumptions about a product we have much less knowledge about, and which is even farther from the market, used to jump to conclusions about one we have much more information about and which is much closer. It's kind of backwards.

Usually when manufacturers claim 40% faster, it means up to 40% faster; lower-power claims are similarly limited, often not achieved at the same time, and we DON'T know the final clock speed.

In reality Raven Ridge is supposed to have ~40% more SPs and be about ~40% faster; we don't know the clock-speed differences in those comparisons.

So just like Vega vs Fiji, IPC could essentially be the same.
 

Glo.

Diamond Member
Apr 25, 2015
That is just a bunch of assumptions about a product we have much less knowledge about, and which is even farther from the market, used to jump to conclusions about one we have much more information about and which is much closer. It's kind of backwards.

Usually when manufacturers claim 40% faster, it means up to 40% faster; lower-power claims are similarly limited, often not achieved at the same time, and we DON'T know the final clock speed.

In reality Raven Ridge is supposed to have ~40% more SPs and be about ~40% faster; we don't know the clock-speed differences in those comparisons.

So just like Vega vs Fiji, IPC could essentially be the same.
Assumptions? AMD based the power numbers on a simple comparison between engineering samples of Raven Ridge APUs, which have a 35W TDP, and the A12-9800, which has 65W, and rounded that so it could theoretically fit the 50% lower power envelope. Raven Ridge has 704 cores vs 512. That is not 40% more, but 37.5% more. The A12-9800 runs at 1108 MHz, Raven Ridge at 800 MHz, so the A12-9800 has a 38.5% higher core clock. Something is not right here if AMD claims the Raven Ridge GPU is still 40% faster, despite no change in theoretical maximum performance between those two.
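
The ratios in that paragraph check out (rough arithmetic, using the figures quoted above):

```
#include <cstdio>

int main() {
    // Rumored Raven Ridge ES vs. A12-9800
    std::printf("shaders: 704 vs 512      -> +%.1f%%\n", (704.0 / 512.0 - 1.0) * 100.0);   // +37.5%
    std::printf("clock:   1108 vs 800 MHz -> +%.1f%%\n", (1108.0 / 800.0 - 1.0) * 100.0);  // +38.5%
    return 0;
}
```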

Vega has higher IPC than Fiji, which has been shown by compute benchmarks at Gamers Nexus. In terms of graphics IPC, it has to be higher as well, because right now that is directly related to features of the hardware, which already put it on par with the latest Nvidia tech.
 

PeterScott

Platinum Member
Jul 7, 2017
Assumptions? AMD based the power numbers on a simple comparison between engineering samples of Raven Ridge APUs, which have a 35W TDP, and the A12-9800, which has 65W, and rounded that so it could theoretically fit the 50% lower power envelope. Raven Ridge has 704 cores vs 512. That is not 40% more, but 37.5% more. The A12-9800 runs at 1108 MHz, Raven Ridge at 800 MHz, so the A12-9800 has a 38.5% higher core clock. Something is not right here if AMD claims the Raven Ridge GPU is still 40% faster, despite no change in theoretical maximum performance between those two.

Where has AMD given us the final specs of Raven Ridge? Yeah, that's right, nowhere. So it is just assumptions on your part.

AMD only really provided this slide:
http://i.imgur.com/ASqU61X.jpg

There is nothing about clock speed or SP counts for Raven Ridge, nor the 7th gen APU they are comparing with.

Vega has higher IPC than Fiji, which has been shown by compute benchmarks at Gamers Nexus. In terms of graphics IPC, it has to be higher as well, because right now that is directly related to features of the hardware, which already put it on par with the latest Nvidia tech.

Vega doesn't have any better IPC for gaming, and that is what the vast majority of users following this care about.
 

tential

Diamond Member
May 13, 2008
Or AMD's marketing department is rubbish to the degree that they deliberately released a crippled GPU, with inactive features and the wrong BIOS that doesn't enable full control over voltages, to lower expectations and destroy the hype train, and then show the "real" performance of the architecture with the RX Vega release.
When in business history can you (or anyone, for that matter) cite a company doing this?

Let me give you an example.

AMD claims that the Raven Ridge APU's GPU is 40% faster than the 7th generation APU's GPU while fitting in a 50% lower TDP.

The A12-9800 has 8 CUs at a 1108 MHz core clock, around 1.134 TFLOPS of compute power, in a 65W TDP thermal envelope.
The Raven Ridge APU has 11 CUs at 800 MHz, around 1.126 TFLOPS of compute power, in a 35W TDP.

It means that per clock Vega has to be 30% faster than GCN4(!), because Bristol Ridge and Stoney Ridge are the same family of GPUs, and both are in the same family as Polaris.
https://videocardz.com/62250/amd-vega10-and-vega11-gpus-spotted-in-opencl-driver
What does your post have to do with business history? I gave you the FULL length of business history, and instead of drawing upon that, you went with a hypothetical, unreleased product.

Can you see how that would lead people to be skeptical of the original assumption?
 

3DVagabond

Lifer
Aug 10, 2009
Or AMD's marketing department is rubbish to the degree that they deliberately released a crippled GPU, with inactive features and the wrong BIOS that doesn't enable full control over voltages, to lower expectations and destroy the hype train, and then show the "real" performance of the architecture with the RX Vega release.

Marketing has nothing to do with the state of the product or drivers.

Or they had to meet a Q2 deadline, but were absolutely not ready with either the BIOS or the drivers, so they pulled together a stable branch that is not very performant and not very feature-packed.

Any number of explanations that wouldn't count as gimped but are not optimal.

Something like this is what I believe, too.

A more likely scenario is that driver development for advanced gaming features, like the Draw Stream Binning Rasterizer (DSBR), proved a lot harder than anticipated.

AMD execs promised a Q2 Vega delivery (and a chunk of bonuses was tied to it), so they delivered a "workstation" Vega card, without the advanced features mainly needed for gaming, to meet that date.

This gives them more breathing room to finish the drivers, and deliver working DSBR in Vega RX.

Not that this is a good thing. They are still late, and Vega FE was really just a stopgap aimed at meeting the letter of the delivery date, but not its spirit.

But there is some hope of seeing improved gaming performance in Vega RX from the newer drivers. Though don't expect miracles.

Yes. Deliver what they have to in order to get paid / keep their jobs.

Where has AMD given us the final specs of Raven Ridge? Yeah, that's right, nowhere. So it is just assumptions on your part.

AMD only really provided this slide:
http://i.imgur.com/ASqU61X.jpg

There is nothing about clock speed or SP counts for Raven Ridge, nor the 7th gen APU they are comparing with.



Vega doesn't have any better IPC for gaming, and that is what the vast majority of users following this care about.


Vega FE doesn't. AMD said if you want to play games this isn't the card for you. Then when it doesn't perform in games people get all butthurt.

If RX is in the same condition performance-wise, that will be a problem.
 

Glo.

Diamond Member
Apr 25, 2015
Where has AMD given us the final specs of Raven Ridge? Yeah, that's right, nowhere. So it is just assumptions on your part.

AMD only really provided this slide:
http://i.imgur.com/ASqU61X.jpg

There is nothing about clock speed or SP counts for Raven Ridge, nor the 7th gen APU they are comparing with.



Vega doesn't have any better IPC for gaming, and that is what the vast majority of users following this care about.
http://www.guru3d.com/news-story/am...-zen-cpu-cores-and-704-shader-processors.html
Up to 11 CUs means 704 GCN cores. The clock speeds are based on engineering samples that have already leaked, and they all have 35W TDPs because they are slated for the mobile platform.

As for IPC, I have already stated my point on this matter.
What does your post have to do with business history? I gave you the FULL length of business history, and instead of drawing upon that, you went with a hypothetical, unreleased product.

Can you see how that would lead people to be skeptical of the original assumption?
It has never been done in the history of business. That still doesn't rule out the possibility of what I have posted.
 

PeterScott

Platinum Member
Jul 7, 2017
http://www.guru3d.com/news-story/am...-zen-cpu-cores-and-704-shader-processors.html
Up to 11 CUs means 704 GCN cores. The clock speeds are based on engineering samples that have already leaked, and they all have 35W TDPs because they are slated for the mobile platform.

Engineering samples of Ryzen were clocked lower than the final chips, so why would these very early Raven Ridge leaks be at the final clock speeds? And which specific Bristol Ridge part was it compared with?

Assumptions like this, and leaping to conclusions, are how Vega got overhyped.
 

Elixer

Lifer
May 7, 2002
The driver's job is to expose which features are present on the hardware and, in the DX11 case, what the application can do with those features.

In DX12 it is the application's job to decide what to do with the hardware's features.
DSBR isn't an API call; it happens automatically in Vega and is meant to reduce the amount of work Vega must do.

In other words, the game sends geometry to the DX/Vulkan/OpenGL API. The driver takes that data, does whatever tweaking on it, and sends it on to the hardware.

DSBR works at the pixel level, and it may have zero (0) impact on the workload, depending on whether the game has already culled the geometry down to the pixel level. Hint: most modern AAA game engines already do this.
That means that if the game/app is already doing pixel-level culling, DSBR won't have any work to do and you won't see any savings (which is why this feature has zero impact in some of the benchmarks we have seen).
That also means that in really old games where culling wasn't done down to the pixel level, DSBR should show some nice gains, but again, this is workload-specific (some engines are better than others).
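
To illustrate the culling point (a toy sketch only, not how Vega's binning rasterizer is actually implemented): if shading is deferred until visibility is resolved, overdrawn fragments are never shaded, but an engine that already renders front-to-back gets most of that saving on its own.

```
#include <cstdio>
#include <limits>
#include <vector>

struct Quad { int x0, y0, x1, y1; float depth; };

int main() {
    const int W = 64, H = 64;
    const float kFar = std::numeric_limits<float>::infinity();

    // Submitted back-to-front (worst case for overdraw)
    std::vector<Quad> quads = {
        {0, 0, 63, 63, 0.9f},   // background
        {8, 8, 55, 55, 0.5f},   // middle
        {16, 16, 47, 47, 0.1f}, // front
    };

    std::vector<float> zbuf(W * H, kFar);
    long shadedImmediate = 0;
    for (const Quad& q : quads)
        for (int y = q.y0; y <= q.y1; ++y)
            for (int x = q.x0; x <= q.x1; ++x)
                if (q.depth < zbuf[y * W + x]) {  // depth test passes
                    zbuf[y * W + x] = q.depth;
                    ++shadedImmediate;            // fragment shaded right away
                }

    // Deferred: visibility resolved first, then each covered pixel shaded once
    long shadedDeferred = 0;
    for (float z : zbuf)
        if (z != kFar) ++shadedDeferred;

    std::printf("fragments shaded, immediate: %ld\n", shadedImmediate); // 7424 here
    std::printf("fragments shaded, deferred:  %ld\n", shadedDeferred);  // 4096
    return 0;
}
```

Reorder those quads front-to-back and both numbers come out to 4096, which is the "modern engines already cull" case where there is nothing left to save.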

*Edit: is there any AT'er that got a Vega FE? If so, and they have an Unreal 4 based game, they could turn off occlusion via the TOGGLEOCCLUSION console command, and DSBR should keep the framerates the same as before, well, in theory. I don't have an Unreal 4 based game to see what that actually does.

DSBR definitely needs driver support. You can see it in the Linux patches.
Which patches do you speak of? I did a quick glance, and didn't see anything relevant.
 

Azix

Golden Member
Apr 18, 2014
No, it's not, because they just changed the cooler and nothing else.

Do I think RX Vega will be competitive with the 1080 Ti? Absolutely not; that card is up to 20% faster in some titles. But I do think a more stable 1630 MHz clock can bring a 5% performance improvement, drivers another 5%, and say some feature gets enabled or some bottleneck gets fixed in the meantime, and we are looking at up to 15% more performance. That would put it within striking distance of the 1080 Ti, maybe even beating it in some DX12/Vulkan titles like BF1, Sniper Elite 4 and Doom.

BTW, does anyone think that the Vega FE edition is just a modified Fiji done on 14nm, hence the much higher clocks of up to 1600 MHz, and that it doesn't have the new features, which is why it performs substantially weaker than people expected?

While AMD are sitting on the REAL VEGA and will release it for desktop gaming?


That's too close, well within reach. From PCPer it should be 30-40% faster, but that WAS with lower clocks. Hmmmm...
 

Veradun

Senior member
Jul 29, 2016
The driver's job is to expose which features are present on the hardware and, in the DX11 case, what the application can do with those features.

In DX12 it is the application's job to decide what to do with the hardware's features.

Doesn't DX12 still need to know what's there through drivers?
 

CatMerc

Golden Member
Jul 16, 2016
https://www.pcper.com/reviews/Graph...ga-Frontier-Edition-16GB-Liquid-Cooled-Review

Liquid cooler review is out.
tl;dr
Between 13% and 17% faster than the air-cooled version at 4K at stock. So ~15% more performance for ~16% more power, with the liquid cooling reducing leakage. Sounds about right.
[Attached charts: power-heaven2.png, vegafewcclocks.png]
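
Quick check on what that does to efficiency (assuming those ~15% / ~16% figures):

```
#include <cstdio>

int main() {
    double perf = 1.15, power = 1.16;  // ~15% more performance for ~16% more power
    std::printf("perf/W vs air-cooled FE: %+.1f%%\n", (perf / power - 1.0) * 100.0);  // about -0.9%
    return 0;
}
```

i.e. perf/W is essentially flat, consistent with "sounds about right": the liquid cooler buys clocks, not efficiency.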
 