AMD Radeon RX Vega 64 and 56 Reviews [*UPDATED* Aug 28]

Status
Not open for further replies.

Kallogan

Senior member
Aug 2, 2010
340
5
76
wow, it's basically on the same level as Fury X. What a huge failure, one could think it's just a rebadge!

it's beyond understanding!
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Let me explain to you what I see going on.

Hardware review sites, especially in this day and age (shift to mobile, YouTube, and Facebook gobbling up all the ad revenue), are struggling to remain profitable/viable.

AMD articles tend to get by far the most clicks/views of any component maker's products, and in this industry having reviews ready for launch day is absolutely critical.

Thanks, that makes sense. News sites ride on controversy, or anything out of the ordinary. A small company that has been behind for years but seems to have potential provides exactly that.

I did notice this "bias", or whatever you want to call it. But they don't say it outright; it probably has to do with the current fad of needing to be politically correct or something.

That doesn't make it right, though; money shouldn't factor into this.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
Let me explain to you what I see going on.

Hardware review sites, especially in this day and age (shift to mobile, YouTube, and Facebook gobbling up all the ad revenue), are struggling to remain profitable/viable.

AMD articles tend to get by far the most clicks/views of any component maker's products, and in this industry having reviews ready for launch day is absolutely critical.

Getting to do a launch-day review means that you NEED cooperation from the IHV. Intel will sample sites pretty much no matter what as long as they're big enough, ditto NVIDIA (unless it's some halo product like Titan Xp, but let's be honest -- the people buying this stuff aren't messing around with reviews).

AMD, on the other hand, seems to play nice with you if you agree to play nice back. If you want the hardware, you do what AMD says, you kiss the ring, and you don't get too negative about their products.

Remember when Kyle at HardOCP was harshly critical of AMD/AMD products and then AMD started cutting off his review samples? All of a sudden, once GTX 1060 came out to do battle with Polaris, his site's test suite became narrow/AMD-friendly (see: https://www.hardocp.com/article/2016/07/19/nvidia_geforce_gtx_1060_founders_edition_review).

Then, of course, you saw Kyle recently doing the "blind taste test" with RX Vega ahead of the launch, after going on stage at an AMD event.

Don't think that the other hardware reviewers/review sites haven't noticed what's going on here.

The review sites want the clicks and viewership to feed their businesses, and AMD has a lot of power to help them out. Pure and simple.
Or to say this in far fewer words:
Reviewers don't want to lose their early-access review samples.

Considering you could purchase all the relevant GPUs and CPUs for $5-10k a year, I just don't get it.

It's why I wanted to review GPUs, but I'm just not the person for it. I do want to get into YouTube reviews, though. With Vega no longer being an option, I may get the RX100 Mark 5 (or wait for the Mark 6) and start my YouTube adventure.
 

FatherMurphy

Senior member
Mar 27, 2014
229
18
81
Of the many questions I have, I keep returning to a design choice AMD made. AMD has said that its transistor budget was spent mostly on ramping up clock speeds. In the reviews, it is clear that AMD has pushed the clock speeds on Vega 64 further up the voltage/frequency curve than the architecture/foundry-process combination would like to go.

Did AMD expect more from GF's process (e.g. lower voltages at a given clock)?
Was AMD not able to ramp up the clock speeds as high as they projected?
Why not build a lower-clocked but wider GPU?

Knowing the electrical properties of the 14LPP process and GCN's stubbornness at higher clock speeds, it just seems that AMD was perhaps too ambitious in seeking clock speeds >50% over Fiji.

Makes me wonder if there was (1) a miscalculation in the design/simulations at the beginning of the project, or (2) a failure in implementation, or both.

I'd love a retrospective story on AT similar to what's been done (a long time) before.
 

Mopetar

Diamond Member
Jan 31, 2011
8,529
7,795
136
Well, I guess we wait to see what the AIBs can do, but those aren't even expected until freaking September. Fury didn't do this badly, and AMD still ditched that name. I can only imagine how they'll try to distance themselves from Vega.

Perhaps they chose to call it Vega so they can can it. They knew it wasn't going to meet expectations, so they used the code name as the official branding so it wouldn't tarnish the Radeon brand. AMD may be ahead of the curve on this one.
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,143
136
The review sites want the clicks and viewership to feed their businesses, and AMD has a lot of power to help them out. Pure and simple.

Hardware.fr changed most of their gaming benchmark suite in favor of Gaming Evolved titles and Vega 64 still didn't beat GTX 1080, though Vega 56 had a minor lead over GTX 1070. Basically the same performance per watt as Fiji as well, despite being a 14nm FinFET product - in line with TPU's review.

- GTX 1080 Ti Review

Battlefield 4
Crysis 3
DiRT Rally
DOOM
Dying Light
Fallout 4
Far Cry Primal
Ghost Recon Wildlands
Grand Theft Auto V
Hitman
Project Cars
Rise of the Tomb Raider
Star Wars Battlefront
The Division
The Witcher 3 Wild Hunt

- Vega Review

Battlefield 1
Hitman
Rise of the Tomb Raider
Ashes of the Singularity
Deus Ex: Mankind Divided
DOOM
Sniper Elite 4
Civilization VI
Total War: Warhammer
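The perf/watt claim is just a ratio of ratios; here's a quick sketch with invented placeholder numbers (not measured results from any review):

```python
def relative_perf_per_watt(fps_a, watts_a, fps_b, watts_b):
    """How card A's perf/watt compares to card B's (1.0 = equal)."""
    return (fps_a / watts_a) / (fps_b / watts_b)

# Placeholder numbers, NOT measured data: a card ~30% faster than
# another while drawing ~30% more power lands at the same perf/watt,
# which is how a 14nm part can tie a 28nm part on this metric.
print(relative_perf_per_watt(fps_a=78, watts_a=390, fps_b=60, watts_b=300))
# → 1.0
```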
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Of the many questions I have, I keep returning to a design choice AMD made. AMD has said that its transistor budget was spent mostly on ramping up clock speeds. In the reviews, it is clear that AMD has pushed the clock speeds on Vega 64 further up the voltage/frequency curve than the architecture/foundry-process combination would like to go.

Did AMD expect more from GF's process (e.g. lower voltages at a given clock)?
Was AMD not able to ramp up the clock speeds as high as they projected?
Why not build a lower-clocked but wider GPU?

I think it's much simpler than that. Halo products bring in the money. If pushing for the last 5% takes it from 2nd place to 1st, they will do it. Most people are saying it's OK because power consumption doesn't matter and Vega 56 provides the performance at a low enough cost.

AMD likely does the same with its CPUs. That's why their products have no overclocking headroom. Plus, with a smaller market share, it's more likely you can afford to do so. Less volume means you can spec the part closer to its maximum without getting a bunch of RMA calls.
 
  • Like
Reactions: FatherMurphy

Reinvented

Senior member
Oct 5, 2005
489
77
91
I stopped picking up Nvidia cards when I couldn't afford them and they'd show very little gain over the previous generation. Kinda sad how AMD finally has a fantastic card, but it's already out of stock. I guess it's good, because then I can save money. :( On the other hand, I want shiny new tech to go with my Ryzen setup. I should probably wait for the partners to come out with their versions anyway.
 

PhonakV30

Senior member
Oct 26, 2009
987
378
136
I bet someone at AMD got fired for backing themselves into this ridiculous HBM2 corner. What a worthless implementation. Whatever the 1070/1080 cost right now, these Vega cards should cost $75-$100 less. That should be AMD's pricing rule for these cards.
Let them float $100 below the competition, because these cards are hot, slow, and hungry. For some reason Vega reminds me of Krusty the Clown. Slow, hot, power-hungry; people waited forever and finally got trolled with a joke product at a joke price. That's clown shoes. God, AMD blew it.

:D
I really want to know who is responsible. The China team?
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,730
136
Hardware.fr changed most of their gaming benchmark suite in favor of Gaming Evolved titles and Vega 64 still didn't beat GTX 1080, though Vega 56 had a minor lead over GTX 1070. Basically the same performance per watt as Fiji as well, despite being a 14nm FinFET product - in line with TPU's review.

- GTX 1080 Ti Review

Battlefield 4
Crysis 3
DiRT Rally
DOOM
Dying Light
Fallout 4
Far Cry Primal
Ghost Recon Wildlands
Grand Theft Auto V
Hitman
Project Cars
Rise of the Tomb Raider
Star Wars Battlefront
The Division
The Witcher 3 Wild Hunt

- Vega Review

Battlefield 1
Hitman
Rise of the Tomb Raider
Ashes of the Singularity
Deus Ex: Mankind Divided
DOOM
Sniper Elite 4
Civilization VI
Total War: Warhammer
You mean that same hardware.fr who, along with the rest of Europe, didn't get sampled with Skylake-X by Intel?
Hardware.fr changed most of their gaming benchmark suite in favor of Gaming Evolved titles (which I believe was to show AMD in a better light, though it's something I cannot prove), and Vega 64 still didn't beat GTX 1080, though Vega 56 had a minor lead over GTX 1070.
 

FatherMurphy

Senior member
Mar 27, 2014
229
18
81
I think its much simpler than that. Halo products provide the money. If pushing for the last 5% makes it go from 2nd place to 1st, they will do it. Most people are saying its ok because power consumption doesn't matter and Vega 56 provides performance at low enough cost.

AMD likely does it for CPUs. That's why their products have no headroom for overclocking. Plus with smaller market share its more likely you can afford to do so. Less volume means you can spec it closer to maximum without getting bunch of calls for RMAs.

Yeah, I agree there is value in pushing the cards hard for the halo effect. But (1) I don't think AMD targeted 1080-level performance when they designed Vega; put another way, I don't think AMD planned for Vega to be merely 30% faster than two-year-old Fiji. And (2) I don't think AMD wanted to bring out a product for which they had no choice but to go balls-to-the-wall on the power limit right out of the gate.

These factors lead me to believe that AMD targeted a more efficient, higher-performing part but fell short by 20-30% and 50 to 100W.
 
Mar 10, 2006
11,715
2,012
126
Or to say this in far fewer words:
Reviewers don't want to lose their early-access review samples.

Considering you could purchase all the relevant GPUs and CPUs for $5-10k a year, I just don't get it.

It's why I wanted to review GPUs, but I'm just not the person for it. I do want to get into YouTube reviews, though. With Vega no longer being an option, I may get the RX100 Mark 5 (or wait for the Mark 6) and start my YouTube adventure.

It's not about the $ so much as it is about launch day reviews.

If you don't get your review out on Day 1, nobody's going to read it.
 
  • Like
Reactions: Phynaz and tential

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Why not build a lower clocked but wider GPU?

That doesn't really work, especially nowadays.

First, everyone else would have thought of that idea, including your competitor. Second, CPUs and GPUs can't scale down much further in voltage anymore. You can lower it, but it results in an exponential degradation in clock speed. So not only might you not gain anything from a wider design with lower clocks and voltage, you may even lose efficiency by doing so. The key is always to strike the right balance.
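The balance argument can be sketched with the usual dynamic-power model. To be clear, the capacitance, voltage, and frequency figures below are invented for illustration, not real Vega numbers:

```python
# Toy model of the wide-and-slow vs narrow-and-fast tradeoff.
# Dynamic power scales roughly as P = C * V^2 * f: a wider chip raises
# switched capacitance C, a faster one raises both V and f.
# All numbers below are made up for illustration.

def dynamic_power(c_norm, volts, freq_ghz):
    """Relative dynamic power, P ∝ C * V^2 * f (arbitrary units)."""
    return c_norm * volts**2 * freq_ghz

# Baseline: a narrow die pushed up the V/f curve.
narrow = dynamic_power(c_norm=1.0, volts=1.20, freq_ghz=1.55)

# Hypothetical 30% wider die at lower clock and voltage delivering
# the same throughput (1.3 * 1.19 GHz ≈ 1.0 * 1.55 GHz of "work").
wide = dynamic_power(c_norm=1.3, volts=1.00, freq_ghz=1.19)

print(f"narrow: {narrow:.2f}   wide: {wide:.2f}")

# The wide config wins on paper, but only if voltage really can drop
# that far at that clock. If the process has a voltage floor (say
# 1.05 V, below which frequency collapses), the savings shrink while
# the extra area and leakage of the wider die remain:
wide_floor = dynamic_power(c_norm=1.3, volts=1.05, freq_ghz=1.19)
print(f"wide at the voltage floor: {wide_floor:.2f}")
```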
 
  • Like
Reactions: FatherMurphy

vissarix

Senior member
Jun 12, 2015
297
96
101
Asking $699 for a GTX 1080 performance equivalent that draws 200W more power is just insane... Not even $450 would cut it. It should be $400; then someone might pick it over a GTX 1080, which costs $499.
 
  • Like
Reactions: Sweepr

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
These factors lead me to believe that AMD targeted a more efficient, higher performing part but fell short by 20-30% and 50 to 100W.

Simply put, they messed up. Everything else is detail we may never know. When that happens, though, many undesirable things like this follow. They might have found out a year ago that performance wasn't up to par, which would have pushed the team to put out silicon revisions to get higher clocks, resulting in much higher power consumption.

But saying that Vega can be clocked lower to save a lot of power is a crutch. Polaris can do that. Pascal can do that. Fury can do that. They released the non-X Fury with minimal performance loss and the Nano with a small performance loss. Clearly it's because they felt they needed that extra bit.

Actually, if they had only Vega 56, or if they had clocked Vega 64 lower, they might have ended up with the perception that Vega is a GTX 1070 competitor. The extra 10% lets them price it $100 higher, and have a model that costs a further $100 more and say "look at us, we have WC cards!".

hopefully vega will get better with time, it's usually the case with amd

As a business that absolutely does not make sense. Video cards have a short life span; revenue is highest when you front-load sales. It would only make sense for the #1 company, because then you can say that even the competitor's next-gen cards are barely at the level of your previous-gen cards, since "Fine Wine".
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,730
136
In my opinion AMD needs to revert to the small-die strategy that made the RV770 possible back in the day and make perf/watt their biggest priority. GCN blown up to the size of Fiji and Vega scales very poorly, as we have seen from their respective performance. The problem is that the people who built the RV770, Carrell Killebrew et al., have been laid off. I'm not hopeful that the current RTG leadership can execute on these parameters in a timely fashion.
 

Kallogan

Senior member
Aug 2, 2010
340
5
76
Seems like the future is more about FreeSync/G-Sync and better lows than brute average fps, though.

Except for virtual reality.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
In my opinion AMD needs to revert to the small-die strategy that made the RV770 possible back in the day and make perf/watt their biggest priority. GCN blown up to the size of Fiji and Vega scales very poorly, as we have seen from their respective performance. The problem is that the people who built the RV770, Carrell Killebrew et al., have been laid off. I'm not hopeful that the current RTG leadership can execute on these parameters in a timely fashion.

AMD have to do what they did with Zen. Design a highly efficient architecture and die and use multi die with Infinity Fabric to scale up the product stack. The key here is that the basic die (like Zeppelin) must be very competitive in terms of perf/watt and perf/sq mm.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,730
136
AMD have to do what they did with Zen. Design a highly efficient architecture and die and use multi die with Infinity Fabric to scale up the product stack. The key here is that the basic die (like Zeppelin) must be very competitive in terms of perf/watt and perf/sq mm.
There are a lot of issues with making a NUMA-like GPU work. AMD needs to set specific performance targets and build at a specific die size to ensure the best harvesting capability while aiming for GTX x80 level of performance. 4K@60FPS isn't a target. Read Anand's articles on RV770 and see for yourself what execution means.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
AMD have to do what they did with Zen. Design a highly efficient architecture and die and use multi die with Infinity Fabric to scale up the product stack. The key here is that the basic die (like Zeppelin) must be very competitive in terms of perf/watt and perf/sq mm.

Breaking a GPU into multiple chips really only benefits chip yield; it would actually increase power consumption and decrease performance. So this isn't a performance solution, it's a chip-yield solution, and the problem here is NOT chip yield.
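For what it's worth, the yield side is easy to illustrate with the classic Poisson defect-density model. The defect density and die areas below are assumptions for illustration, not foundry data:

```python
import math

def poisson_yield(die_area_cm2, defects_per_cm2):
    """Fraction of defect-free dies under the Poisson model: Y = exp(-A * D0)."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

D0 = 0.2  # defects per cm^2 -- an assumed value

big = poisson_yield(5.0, D0)     # one monolithic ~500 mm^2 die
small = poisson_yield(1.25, D0)  # one of four ~125 mm^2 dies

print(f"monolithic yield:  {big:.2%}")    # ~36.8%
print(f"single small die:  {small:.2%}")  # ~77.9%

# Note: requiring all four small dies to be good gives exactly the
# monolithic yield (exp(-0.25)^4 == exp(-1)). The real win is that bad
# small dies can be discarded or salvaged BEFORE packaging, instead of
# one defect scrapping an entire 500 mm^2 chip.
print(f"naive 4-die package: {small**4:.2%}")
```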
 