AMD Vega (FE and RX) Benchmarks [Updated Aug 10 - RX Vega 64 Unboxing]


tential

Diamond Member
May 13, 2008
Vega better be able to do 4k or else....

[attached image: rkwou70jut8z.jpg]

Remove this if it's off topic, but I found it a funny Vega image to lighten the mood.
 

3DVagabond

Lifer
Aug 10, 2009
They are probably just refactoring their GPU into something... That approach may or may not pay off later, but currently it looks like Vega is just a testbed for Infinity Fabric. Probably the whole point of Vega is to make a GPU arch for their APUs. Or so it seems...
As far as the desktop market goes, I think so too. Long term they are going to have APUs and GPGPUs.
 

railven

Diamond Member
Mar 25, 2010
Curious, since we've seen it happen to Nvidia (though they had a more legitimate fault).

If there is a possible manufacturing issue, how is the current Vega FE not considered some kind of false advertising? If a bunch of features are advertised, and it turns out they are disabled, does that open them up to some kind of legal risk?

Even more so if a new product is required, i.e. the current one can't be "fixed"; that seems worse.

Note, I'm not accusing AMD of anything, I'm still giving them the benefit of the doubt (what little I can), but if some of these current theories hold any water, I'd feel that would be worse for AMD long term than short term.
 

Elixer

Lifer
May 7, 2002
I know a re-spin of Vega would take between 16 and 24 weeks, so there isn't enough time to do that between Vega FE & RX, unless it was done before FE.

They are also working on the Vega APU for fall, so I don't think they have that many people to manage all these different launches, unless the CPU guys are in charge of that and it isn't a joint venture between the CPU & GPU teams?

BTW, the lack of response from AMD for pretty much everything is because they are in their quiet period.
Q2 2017 AMD Earnings Call
Tuesday, July 25, 2017 2:00 p.m. PT

I am betting we will see more responses once that is done.
 

Magic Hate Ball

Senior member
Feb 2, 2017
Yes, you can limit the maximum amount of power through the drivers, at the cost of performance, but all power gating should already be enabled at the hardware level.

You know, I wonder if that's part of the delay with RX Vega.

Everything regarding Vega FE makes sense if you look at it as a failed spin and/or a "safe mode for stability" when it comes to power consumption.

If you look at the (launch) Polaris 10, you see stock voltages of at most around 1150mV with stock settings. My RX 470 kicks to 1106mV by default when I switch to manual settings.

I have a feeling they had some problems, or didn't have time to figure out production-ready voltages for mass production. As some people have stated, 1200mV to 1250mV is generally considered the maximum safe, non-degrading voltage on Polaris 10 silicon.

Now what are we seeing on Vega 10? 1200mV max out of the box! I think they just max-volted the damn thing and turned off the clock gating/power-saving features because of stability issues during testing, just so the damn things won't crash on clients until they can (hopefully) sort shit out.
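For a rough sense of how much that extra voltage costs, dynamic power scales roughly with frequency times voltage squared. A minimal sketch; the clocks and voltages are illustrative guesses, not measured Vega figures:

```python
# Rough dynamic-power scaling: P ~ f * V^2 (the capacitance term cancels in the ratio).
# All numbers here are illustrative guesses, not measured Vega figures.

def relative_power(freq_mhz: float, volt_mv: float,
                   ref_freq_mhz: float, ref_volt_mv: float) -> float:
    """Power relative to a reference operating point, assuming P is proportional to f * V^2."""
    return (freq_mhz / ref_freq_mhz) * (volt_mv / ref_volt_mv) ** 2

# Hypothetical: same 1600 MHz clock, but 1100 mV instead of a max-volted 1200 mV.
print(relative_power(1600, 1100, 1600, 1200))  # ~0.84 -> roughly 16% less dynamic power
```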

EDIT: It could also be this... but I hope not
 

Jackie60

Member
Aug 11, 2006
Since when has the GTX 1080 been midrange?? It's still Nvidia's 2017 high-end gaming card; not its top enthusiast card, but high-end premium.
And by competing it should be beating it, outside of power consumption.
I would consider the 1080 mid-range; there's the 1080 Ti and the Titan above it, so that's two tiers down from the top.
 

Glo.

Diamond Member
Apr 25, 2015
Even a W7100, with half the execution units of Fiji, bests it on these "Pro" benchmarks. That makes it blatantly obvious that drivers can unlock massive performance gains in these particular benchmarks.

It is MUCH more likely that all we are seeing here is the difference between Fiji consumer and [semi]Pro drivers on Vega RX.


This is actually just more evidence that the Vega FE is getting higher workstation scores than Fiji because of drivers, not HW advances.
Vega does NOT have professional, signed drivers. Those will come with the Radeon Pro WX 9100. What we see with Vega is just the difference in throughput between Fiji's and Vega's cores.
On paper it's actually really close to Fiji; just the clock speed is higher. Which leads to another question: how was that achieved? It does have a few improvements that should give some amount of improvement. I think those seem to work well in some professional workloads, but they do not necessarily translate to games (although in this case it's kind of pointless to compare Vega FE to gaming cards in professional workloads, because of the drivers). Also, I think AMD is focusing way too much on compute.

So, did they reduce the IPC to get the clocks up, with the new improvements offsetting the lost IPC so that clock for clock it is more or less the same as Fiji? That could work IF you can get those clock speeds high enough. However, it looks like AMD has issues clocking it high enough. That's likely the problem they are facing. It can't be solely the process's fault, since the GT 1030 runs surprisingly high clocks out of the box while being really energy efficient (I have a passively cooled version in our HTPC and it boosts way over 1700 MHz at stock). It uses a comparable process (14nm Samsung).

When RX comes out we'll find out for sure.
In its high-level layout, sure, it is Fiji at first glance.

I am talking about low-level differences. The small changes that make a big difference in the high-level layout.
 

Veradun

Senior member
Jul 29, 2016
I would consider the 1080 mid-range; there's the 1080 Ti and the Titan above it, so that's two tiers down from the top.

Titan is based on the same GP102 as the 1080 Ti. Unless every SKU counts as a tier for you, that's one tier down from the top.

And if you count every SKU and the 1080 is midrange, then what's the 1070? And the 1060? Rinse and repeat... and the 1030? :>
 

Head1985

Golden Member
Jul 8, 2014
On paper it's actually really close to Fiji; just the clock speed is higher. Which leads to another question: how was that achieved? It does have a few improvements that should give some amount of improvement. I think those seem to work well in some professional workloads, but they do not necessarily translate to games (although in this case it's kind of pointless to compare Vega FE to gaming cards in professional workloads, because of the drivers). Also, I think AMD is focusing way too much on compute.

So, did they reduce the IPC to get the clocks up, with the new improvements offsetting the lost IPC so that clock for clock it is more or less the same as Fiji? That could work IF you can get those clock speeds high enough. However, it looks like AMD has issues clocking it high enough. That's likely the problem they are facing. It can't be solely the process's fault, since the GT 1030 runs surprisingly high clocks out of the box while being really energy efficient (I have a passively cooled version in our HTPC and it boosts way over 1700 MHz at stock). It uses a comparable process (14nm Samsung).

When RX comes out we'll find out for sure.
1600/1050 is a 52% increase. That should be enough to beat the GTX 1080 by 20%. The problem is that Vega is memory-bandwidth bottlenecked, and it's not even close to 52% faster than the Fury X.

Also, it's still GCN, so it eats power like two tanks. I don't get why they went with 2048-bit. This is another huge mistake by the design team.
If they increase shader power by 50%, they need to increase memory bandwidth by the same amount, not cut it to less than the Fury X has.

It's like NV making a GP102 Titan Xp with a 256-bit bus and slow 7 GHz memory. That's what Vega is.

Titan Xp: 548 GB/s
980 Ti: 336 GB/s
That's +63%

Fury X: 512 GB/s
Vega: 484 GB/s
That's -6%
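A quick way to sanity-check that arithmetic, plus a rough bandwidth-per-TFLOP comparison (shader counts and clocks are the commonly quoted spec figures; the TFLOP math is back-of-the-envelope, not a measurement):

```python
# Quick check of the deltas quoted above, plus a rough bandwidth-per-TFLOP comparison.
def pct_change(new: float, old: float) -> float:
    return (new - old) / old * 100

print(pct_change(1600, 1050))  # ~+52%   (Vega FE boost clock vs Fury X)
print(pct_change(548, 336))    # ~+63%   (Titan Xp vs 980 Ti bandwidth)
print(pct_change(484, 512))    # ~-5.5%  (Vega FE vs Fury X bandwidth)

# Back-of-the-envelope FP32 throughput: shaders * 2 ops/clock * clock.
fury_x_tflops = 4096 * 2 * 1.05e9 / 1e12   # ~8.6 TFLOPs
vega_fe_tflops = 4096 * 2 * 1.6e9 / 1e12   # ~13.1 TFLOPs
print(512 / fury_x_tflops)    # ~60 GB/s per TFLOP (Fury X)
print(484 / vega_fe_tflops)   # ~37 GB/s per TFLOP (Vega FE)
```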
 

msi2

Junior Member
Oct 23, 2012
1600/1050 is a 52% increase. That should be enough to beat the GTX 1080 by 20%. The problem is that Vega is memory-bandwidth bottlenecked, and it's not even close to 52% faster than the Fury X.

Also, it's still GCN, so it eats power like two tanks. I don't get why they went with 2048-bit. This is another huge mistake by the design team.
If they increase shader power by 50%, they need to increase memory bandwidth by the same amount, not cut it to less than the Fury X has.

It's like NV making a GP102 Titan Xp with a 256-bit bus and slow 7 GHz memory. That's what Vega is.

Titan Xp: 548 GB/s
980 Ti: 336 GB/s
That's +63%

Fury X: 512 GB/s
Vega: 484 GB/s
That's -6%

This is obvious... The chip would have been horrifically expensive to produce.
 

beginner99

Diamond Member
Jun 2, 2009
1600/1050 is a 52% increase. That should be enough to beat the GTX 1080 by 20%. The problem is that Vega is memory-bandwidth bottlenecked, and it's not even close to 52% faster than the Fury X.

Also, it's still GCN, so it eats power like two tanks. I don't get why they went with 2048-bit. This is another huge mistake by the design team.
If they increase shader power by 50%, they need to increase memory bandwidth by the same amount, not cut it to less than the Fury X has.

It's like NV making a GP102 Titan Xp with a 256-bit bus and slow 7 GHz memory. That's what Vega is.

Titan Xp: 548 GB/s
980 Ti: 336 GB/s
That's +63%

Fury X: 512 GB/s
Vega: 484 GB/s
That's -6%

I partially agree, but Vega theoretically (and probably also in practice) needs less bandwidth because of the changes to the uArch. One of them is the bigger caches and how they are organized, which should reduce memory bandwidth requirements. The second is better color compression compared to the Fury X (already present in Polaris), and the last is the tile-based rasterizer, which seemingly is either not yet enabled or completely broken.
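As a toy illustration of how those uArch changes could offset a smaller raw-bandwidth figure; the compression ratio and cache hit rate below are invented purely to show the shape of the argument, not measured Vega numbers:

```python
# Toy model: effective bandwidth = raw bandwidth / (fraction of traffic that still hits DRAM).
# The extra compression ratio and extra cache hit rate are hypothetical; Fury X is simply
# normalized as the baseline to show how uArch changes could offset less raw bandwidth.

def effective_bandwidth(raw_gbs: float, extra_compression: float, extra_cache_hits: float) -> float:
    traffic_fraction = (1.0 - extra_cache_hits) / extra_compression
    return raw_gbs / traffic_fraction

print(effective_bandwidth(512, 1.0, 0.00))  # Fury X baseline: 512 GB/s
print(effective_bandwidth(484, 1.3, 0.15))  # hypothetical Vega: ~740 GB/s effective
```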
 

leoneazzurro

Senior member
Jul 26, 2016
1600/1050 is a 52% increase. That should be enough to beat the GTX 1080 by 20%. The problem is that Vega is memory-bandwidth bottlenecked, and it's not even close to 52% faster than the Fury X.

Also, it's still GCN, so it eats power like two tanks. I don't get why they went with 2048-bit. This is another huge mistake by the design team.
If they increase shader power by 50%, they need to increase memory bandwidth by the same amount, not cut it to less than the Fury X has.

It's like NV making a GP102 Titan Xp with a 256-bit bus and slow 7 GHz memory. That's what Vega is.

Titan Xp: 548 GB/s
980 Ti: 336 GB/s
That's +63%

Fury X: 512 GB/s
Vega: 484 GB/s
That's -6%

You are assuming here that the Fury X uses every ounce of its bandwidth and is bottlenecked by it in every situation. This is generally not the case, and anyway more recent AMD GPUs (Polaris) perform within striking distance of Fiji with far less bandwidth.
 

Aenra

Member
Jun 24, 2017
I only see two problems thus far: AMD's poor choice of marketing, and people having lost track.. vision.. sight.. you name it.

Is it better than the Titan X..P..Pp..XPp.. whatever in pro scores? It is.
Is it worse than that same model in gaming? It is.

- If I mostly use my GPU for professional reasons, not for gaming, the Vega FE is a clear win. It is significantly better than the opposition's offering for anything work-related, even though they cost the same. And I'm not missing out on anything, not really; I can still use it for anything entertainment-related.
Ergo? If I was, really was, the kind of person the card is targeting, I'd have nothing to complain about. I get a tool to do my job that is better than Nvidia's similarly priced products; I get said tool without having to fork out 5 or 6 grand, which obviously I cannot afford.. if I could have, I wouldn't even be looking for "prosumer" cards in the first place.

This is who the Vega FE is targeted at. And all is good.

- If I mostly purchase stuff just so I can throw my money away for an insignificant, e-peen-only performance increase
And/or
- If I mostly game but occasionally pretend I do coding and "work" (aka I have a blog, ergo instantly I am a ""professional"" journalist), then the Titan XP..p..Pp..? is a clear win. There is only one issue, the financial one, but already it's out the window as it does not concern me; I am not that kind of customer, I'm not in that kind of mindset.

This is not who the Vega FE is targeted at. Anyway. So again, all is good.

Ergo, zero issues as far as I am concerned. Yes, bad marketing, but if you're into true "work", you already know what the card can offer you, so that's out. Also yes, people have lost vision, wanting a 'Titan' from AMD, failing to grasp how in reality the FE is a better solution; failing because their mindset has been twisted by years of green hyping. But again, if you're into true "work", this wasn't what you were after in the first place, now was it. So both are surmountable, both out of the equation for wannabe/actual semi-professionals, or proper pros that just can't afford the proper tools.

I see no issue with this card. None, considering who it's targeting. The end, period.

So this only leaves the future. And how the RX version will perform. And how "gamers" use other cards to judge how a future, unreleased product may or may not perform. And how most "gamers" focus on that based on what? A card that was never meant to be an 'x' killer....
You folks may have the data, but you lack the objectivity. Myself, I will hold off until it's out; no rush.
 

richaron

Golden Member
Mar 27, 2012
You folks may have the data, but you lack the objectivity. Myself, I will hold off until it's out; no rush.

I don't see any RX data. Obviously there's FE data, which we can use to guess at RX performance, but you're correct that the problem is objectivity. There are still way too many unknowns to sort out before anyone can talk with confidence about RX Vega.

The biggest mouths around here clearly have an agenda to make 'their' company look best. I can only guess as to why they spend so much time and energy on this task. They'll take any data they can find and twist how it's presented to suit their own purposes.
 

OatisCampbell

Senior member
Jun 26, 2013
Titan is based on the same GP102 as the 1080 Ti. Unless every SKU counts as a tier for you, that's one tier down from the top.

And if you count every SKU and the 1080 is midrange, then what's the 1070? And the 1060? Rinse and repeat... and the 1030? :>

The 1080 is the third card down the stack any way you look at it.

http://www.tomshardware.com/reviews/nvidia-titan-xp,5066.html

Titan Xp (2017) is a full GP102 chip; the 1080 Ti is also GP102 but has fewer shaders and ROPs and a narrower memory interface.
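For reference, the GP102 configurations being compared differ roughly like this (figures from memory, so double-check against the official spec sheets):

```python
# Rough GP102-based SKU comparison (figures from memory; verify against NVIDIA's spec sheets).
gp102_skus = {
    #                    shaders  ROPs   bus        mem speed     bandwidth
    "Titan Xp (2017)":  (3840,    96,    "384-bit", "11.4 Gbps",  "~548 GB/s"),
    "GTX 1080 Ti":      (3584,    88,    "352-bit", "11 Gbps",    "~484 GB/s"),
    "Titan X (2016)":   (3584,    96,    "384-bit", "10 Gbps",    "~480 GB/s"),
}

for name, (shaders, rops, bus, speed, bw) in gp102_skus.items():
    print(f"{name:16} {shaders} shaders, {rops} ROPs, {bus} @ {speed} ({bw})")
```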

If all those differences don't make for a tier in the product stack, you might as well say every Pascal chip is the same level, or that 1080/1070, 980/970, 680/670, Fury X/Fury, 290X/290, 7970/7950 are all the same tier.

That has never been how products are viewed or marketed.

Technically it could almost be said that RX Vega has four NV SKUs to compete with, if they're still selling last year's Titan X, as that has more ROPs and memory bandwidth than the 1080 Ti and fewer shaders than this year's Titan.

If RX Vega launches with several chips at different price points, performance levels, memory interfaces, ROPs, and shader counts, AMD will definitely be marketing them as separate product tiers, not "RX Vega - all of them the same thing!".
 

PeterScott

Platinum Member
Jul 7, 2017
Vega does NOT have professional, signed drivers. Those will come with the Radeon Pro WX 9100. What we see with Vega is just the difference in throughput between Fiji's and Vega's cores.

That really doesn't add up.

The performance deltas between Fiji and Vega FE in workstation benchmarks are exactly like the performance deltas between normal and "Pro" AMD drivers.

There are two possibilities.

A) Vega FE effectively has Pro drivers, even if they are unsigned.
B) Vega FE has such radically different HW that it behaves just as if it has Pro drivers, without them.


Occam's razor says A) is the most likely explanation (being simple and straightforward).

B) is unlikely in the extreme. What do you think will happen when Vega FE gets "real" Pro drivers? Will it get the same kind of boost that Pro drivers usually give, on top of the ones we see here? So the SNX-02 benchmark will go from 5 times faster, get another 5x boost, and end up 25 times faster?

No. AMD is selling this card as a workstation card, and they gave it workstation drivers that boost those workstation benchmarks, just like their Pro drivers do.

Thinking that the workstation benchmarks represent an enormous HW boost over Fiji is just wishful thinking, leading you away from the obvious answer: it's just workstation drivers.
 

Aenra

Member
Jun 24, 2017
The biggest mouths around here clearly have an agenda to make 'their' company look best

No no, it's beside that :)
(I do not let fanboys of any side influence me, so they may as well type whatever they want)

For me at least, it's mostly just the lack of reason; we don't know the pricing (ergo we cannot 'compare' or gauge its price/performance value), we don't know whether certain functions are or are not disabled, we don't know whether the PCB will be identical (and more to the point, if not, what the changes will be), we don't know the "tuning cost" of said pro performance (at the expense of other functions? It's a brand new architecture [supposedly], I cannot presume to know how it can do so much better in pro stuff while failing in gaming), and we don't know what the performance target is in the first place.

Now of course, forums are forums, we talk; that's O.K. ^^
But I notice most people neglect to take the above into serious consideration. On top of that, I notice how the vast majority take the FE as a measure/frame of comparison, subconsciously allowing its irrelevant gaming performance to become some sort of basis for their arguments. Tell them 'it's not red' and there you go, you mentioned red, maybe it is red, can't you see it? Maybe the wrong angle, or the light, or something. Besides, "gamers" have """tested""" the FE, so why not? I'm telling you dude, it's, like, red. Seriously.

Edit: and because ADHD is the plague of our century, for those seeking context, read my previous post before reaching conclusions :p
 

AtenRa

Lifer
Feb 2, 2009
I didn't think a 1080 could ever be called "mid-range". I always thought that was the $250-400 range...but maybe nV knows better :D

Mid-range because of the die size, not because of the price; but anyway, this is pointless here.

On topic:
I wouldn't compare Vega FE against RX Vega in games using the current Vega FE gaming performance.
 

Samwell

Senior member
May 10, 2015
225
47
101
PCGH tested the card and now we at least know where the problem lies:
http://www.pcgameshardware.de/Vega-...elease-AMD-Radeon-Frontier-Edition-1232684/3/
They tested the B3D suite, and you can clearly see that the effective bandwidth at the moment is terrible. Black-texture bandwidth (with color compression) is only at Fury level, and random-texture bandwidth is even way lower than Fury. Whatever the reason for this low bandwidth might be, it's the reason for the slow performance. It's just starving for bandwidth.
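For context on how such numbers are usually read: the test fills a buffer and derives an effective-bandwidth figure from pixels written per second; a black texture compresses almost perfectly (showing DCC at work), while a random texture defeats compression. A rough sketch of that arithmetic, with placeholder fill rates rather than PCGH's actual measurements:

```python
# How an "effective bandwidth" figure falls out of a fill-rate test.
# The fill rates below are placeholders, not PCGH's measured numbers.

def effective_bandwidth_gbs(gpixels_per_s: float, bytes_per_pixel: int = 4) -> float:
    """Bandwidth implied by a measured fill rate (32-bit color = 4 bytes per pixel written)."""
    return gpixels_per_s * bytes_per_pixel  # Gpixels/s * bytes/pixel = GB/s

print(effective_bandwidth_gbs(110))  # ~440 GB/s apparent when compression (black texture) helps
print(effective_bandwidth_gbs(70))   # ~280 GB/s when a random texture defeats compression
```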
 

Karnak

Senior member
Jan 5, 2017
PCGH tested the card and now we at least know where the problem lies:
http://www.pcgameshardware.de/Vega-...elease-AMD-Radeon-Frontier-Edition-1232684/3/
They tested the B3D suite, and you can clearly see that the effective bandwidth at the moment is terrible. Black-texture bandwidth (with color compression) is only at Fury level, and random-texture bandwidth is even way lower than Fury. Whatever the reason for this low bandwidth might be, it's the reason for the slow performance. It's just starving for bandwidth.
Probably the two 8-Hi stacks. That doesn't matter much for professional workloads, but for gaming the results are extremely bad.
 