AMD Vega (FE and RX) Benchmarks [Updated Aug 10 - RX Vega 64 Unboxing]


daxzy

Senior member
Dec 22, 2013
393
77
101
That's not remotely true. 1:1 shrinking doesn't occur because different parts of the chip have different densities, and architectures are typically designed around a node, so they wouldn't have been able to just shrink Fiji and realize all the perf/W benefits of a new node.

If we look purely at density and area calculations, a 14nm Fiji would be around 300-350 mm^2. Improve the clock speed by a conservative 30% and give it 8GB HBM2, and that should compete against the GTX 1080. What do you think Pascal is compared to Maxwell? Except for a few minor tweaks, they're the same core.
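Just to show the back-of-envelope math behind that 300-350 mm^2 figure (a rough sketch only; Fiji's 596 mm^2 die is the known starting point, but the ~1.8x effective density gain from 28nm to 14nm is my assumption):

```python
# Rough die-shrink estimate; the density factor is an assumed rule of thumb,
# not an official number, and real scaling differs per block (SRAM, I/O, analog).
fiji_28nm_mm2 = 596                 # Fiji die area on 28nm
assumed_density_gain = 1.8          # assumed effective area scaling to 14nm
fiji_14nm_mm2 = fiji_28nm_mm2 / assumed_density_gain
print(f"Estimated 14nm Fiji: ~{fiji_14nm_mm2:.0f} mm^2")  # ~331 mm^2, inside that 300-350 range
```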

AMD now has a 550 mm^2 Vega with 16GB HBM2, and at least the FE barely holds up against the GTX 1080. So you can see where the expectations don't meet reality. It may well be that they just over-engineered the thing. Sometimes simplicity manages to escape many engineers.
 

beginner99

Diamond Member
Jun 2, 2009
5,315
1,760
136
how'd you swindle someone into paying that much for a 290X? I bought mine for less two years ago and could only manage a $150 sale out of it... AND it had a full cover swift water block!

oops, nvm... forgot about these mining yahoos

Mining craze, but also a water block might cost more when you buy it, yet when selling it's actually a hindrance because now you're limited to enthusiasts with water cooling systems. People who water cool usually don't buy old GPUs. But yeah, $350 for a 290X is crazy.

GPUs aren't good for mining at all.
You need a gazillion of them to even get a decent profit. In addition to that, you can't buy GPUs for mining now and think that in 6-10 months you can break even when difficulty is skyrocketing.

The people that mine are either big players, or the same kind of people who treat it like buying stocks. Running 1 GPU 24/7 and making like $50 a month. Woohoo, you are awesome lol

Mining is too damn painful unless you rent a room in a warehouse. Running a leafblower, or like 10 of these loud GPUs, 24/7 in your house, with thousands of watts spewing into your house, and maybe you break even in like half a year or even a year. Yeah, no thank you.
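To put rough numbers on that break-even point (purely illustrative; the card price, revenue, power draw and electricity rate below are all my assumptions, and rising difficulty only makes it worse):

```python
# Illustrative single-GPU mining break-even estimate; every input is an assumption.
card_cost = 400.0         # USD, assumed GPU purchase price
revenue_per_month = 75.0  # USD, assumed gross mining income per card
power_watts = 200.0       # assumed average draw at the wall
kwh_price = 0.12          # USD per kWh, assumed electricity rate

electricity_per_month = power_watts / 1000 * 24 * 30 * kwh_price  # ~$17
net_per_month = revenue_per_month - electricity_per_month         # ~$58
months_to_break_even = card_cost / net_per_month                  # ~7 months

print(f"Net/month: ${net_per_month:.0f}, break-even in ~{months_to_break_even:.1f} months")
# And that assumes flat difficulty; if difficulty keeps rising, net income shrinks every month.
```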

haha, fully agree. That's why I bought ETH with cash a year ago. That worked out rather well. ;)
 

EXCellR8

Diamond Member
Sep 1, 2010
4,039
887
136
Mining craze, but also a water block might cost more when you buy it, yet when selling it's actually a hindrance because now you're limited to enthusiasts with water cooling systems. People who water cool usually don't buy old GPUs. But yeah, $350 for a 290X is crazy.

well, the one I sold was an ASUS reference with the OE cooler included, but the water block was installed. I def should have held on to that thing, but oh well. Even some of the reference models are being sold off for between three and four hundred bucks. damn.

I do have a pair of higher-end RX 480s, but if Vega ends up a dud I won't be able to find replacements. One's a NITRO+ and the other a Gaming X... both 8GB, and they seem to be selling at north of 400 bucks.
 
Last edited:

exquisitechar

Senior member
Apr 18, 2017
722
1,019
136
AMD now has a 550 mm^2 Vega with 16GB HBM2, and at least the FE barely holds up against the GTX 1080. So you can see where the expectations don't meet reality. It may well be that they just over-engineered the thing. Sometimes simplicity manages to escape many engineers.

550 mm^2 was a massive fail by PCPer. Raja Koduri himself recently confirmed the die to be 484 mm^2 or something like that.
 

Malogeek

Golden Member
Mar 5, 2017
1,390
778
136
yaktribe.org
And it's exciting because when AMD says they can fully execute a feature set at a given D3D level, they mean it!
Yep, and it's even higher than Pascal now. The main thing is that the feature levels are almost identical now, so hopefully developers have more incentive to utilize some of the newer features.
 

Peicy

Member
Feb 19, 2017
28
14
81
PCPer uploaded their latest podcast with a Vega FE discussion:


Basically, they all seem to be quite concerned about it, reiterating that AMD has to price this right to have a successful product, and also talking about the (lack of) TBR not being a be-all end-all solution that will catapult Vega from the 1080 towards the 1080 Ti.
It's an interesting discussion.
 

Guru

Senior member
May 5, 2017
830
361
106
AMD's biggest mistake is that it is developing two separate architectures with fewer resources than Nvidia: GCN 4 for the RX 400 and 500 series, and a different engine for Vega.

Nvidia develops one big product and then cuts it down into smaller pieces constituting all the cards we see now, from the Titan Xp to the 1050. It's one architecture, so they can develop and optimize it to the max and then just cut it down and provide the cheaper versions of it.

I think if they had worked on a big RX 480/580, increased the die area, added more shaders, and put in the optimizations they've seen in the RX 400/500 series, like better memory compression, tile rasterization and more pixel processing power, they could have had a true powerhouse, especially since custom RX 580's OC'd to 1470+ MHz compete against the GTX 1070 in some DX12 titles; it even comes very close to the 1070 in Doom, a Vulkan-based game.
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
It's been said several times that AMD should have long ago realized Hawaii was the optimal GCN configuration.

2816:64:512-bit in Polaris would provide 15-20% IPC improvements (GCN 4 vs 1 or 2), 20%+ higher clocks, and potentially even more gains from a more favourable bandwidth-to-FLOP ratio (512-bit 6 Gbps in the 390X, vs nearly double that if they went GDDR5X).
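For reference, the bandwidth math behind that parenthetical (a quick sketch; the ~11 Gbps GDDR5X data rate is my assumption):

```python
# Peak memory bandwidth in GB/s = bus width (bits) / 8 * per-pin data rate (Gbps)
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(512, 6))   # 384.0 GB/s -> 390X: 512-bit GDDR5 at 6 Gbps
print(bandwidth_gbs(512, 11))  # 704.0 GB/s -> hypothetical 512-bit GDDR5X at ~11 Gbps, nearly double
```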

I don't think they have the money for that many chips anymore. But AMD still seems to think they can defeat their bottlenecks, yet they keep creating unbalanced chips. Polaris 10 is an improvement, but 32 ROPs and 256-bit GDDR5 still hold it back. And it remains to be seen if Vega fixed Fiji's problems. Maybe AMD doesn't actually realize Hawaii's ratio is the ideal GCN configuration...

But Xbox One X (Scorpio) is 2560 SP and 384-bit, so even if it is still 32 ROPs it could be a decisive 1060 killer, and possibly a 1070 competitor if it is 64 ROPs. It's odd that this isn't a dGPU.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
AMD's biggest mistake is that it is developing two separate architectures with fewer resources than Nvidia: GCN 4 for the RX 400 and 500 series, and a different engine for Vega.

Nvidia develops one big product and then cuts it down into smaller pieces constituting all the cards we see now, from the Titan Xp to the 1050. It's one architecture, so they can develop and optimize it to the max and then just cut it down and provide the cheaper versions of it.

I think if they had worked on a big RX 480/580, increased the die area, added more shaders, and put in the optimizations they've seen in the RX 400/500 series, like better memory compression, tile rasterization and more pixel processing power, they could have had a true powerhouse, especially since custom RX 580's OC'd to 1470+ MHz compete against the GTX 1070 in some DX12 titles; it even comes very close to the 1070 in Doom, a Vulkan-based game.
Vega is a stepping stone for future generations. We would not see any per-clock improvement over previous generations of GCN, like we see with Vega, if what you describe were the case.
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
It's been said several times that AMD should have long ago realized Hawaii was the optimal GCN configuration.

2816:64:512-bit in Polaris would provide 15-20% IPC improvements (GCN 4 vs 1 or 2), 20%+ higher clocks, and potentially even more gains from a more favourable bandwidth-to-FLOP ratio (512-bit 6 Gbps in the 390X, vs nearly double that if they went GDDR5X).

I don't think they have the money for that many chips anymore. But AMD still seems to think they can defeat their bottlenecks, yet they keep creating unbalanced chips. Polaris 10 is an improvement, but 32 ROPs and 256-bit GDDR5 still hold it back. And it remains to be seen if Vega fixed Fiji's problems. Maybe AMD doesn't actually realize Hawaii's ratio is the ideal GCN configuration...

But Xbox One X (Scorpio) is 2560 SP and 384-bit, so even if it is still 32 ROPs it could be a decisive 1060 killer, and possibly a 1070 competitor if it is 64 ROPs. It's odd that this isn't a dGPU.

Yes, I think everyone knows that, but AMD cannot do this as they have to focus on Navi. That's where the limited resources and more out-of-the-box thinking can pay off. The Vega chip is step one of many.

This is what people don't understand: it's a business decision, and while all the focus is on the CPU, the graphics division can slide a little. The stock is doing great because BUSINESSES focus on business decisions. No one goes into a meeting, looks at the CEO or CIO, and giggles about gaming benchmarks. The CPU is about cores, power, and overall footprint. The GPU is about stability, price, and features.

If you all want to discuss a terrible decision, it's AMD having shitty enterprise products and support. It's about time they put priorities first.
 

PhonakV30

Senior member
Oct 26, 2009
987
378
136
If we look purely at density and area calculations, a 14nm Fiji would be around 300-350 mm^2. Improve the clock speed by a conservative 30% and give it 8GB HBM2, and that should compete against the GTX 1080. What do you think Pascal is compared to Maxwell? Except for a few minor tweaks, they're the same core.

AMD now has a 550 mm^2 Vega with 16GB HBM2, and at least the FE barely holds up against the GTX 1080. So you can see where the expectations don't meet reality. It may well be that they just over-engineered the thing. Sometimes simplicity manages to escape many engineers.

484, not 550. Please update your info.
 

Elixer

Lifer
May 7, 2002
10,371
762
126
This is what people don't understand: it's a business decision, and while all the focus is on the CPU, the graphics division can slide a little. The stock is doing great because BUSINESSES focus on business decisions. No one goes into a meeting, looks at the CEO or CIO, and giggles about gaming benchmarks. The CPU is about cores, power, and overall footprint. The GPU is about stability, price, and features.
Sure, it is always a business decision, but that doesn't mean it is the correct decision.
There is no possible way AMD was thinking that HBM2 was going to be cheaper than GDDR5(X), yet they seem to have bet the farm on HBM2. Sure, it is supposed to be cheaper than HBM, but I doubt it is a substantial savings. I also realize that they most likely couldn't afford to do a dual-memory-design Vega chip.
That also seems to be one of the reasons why the chip is so massive, to support the HBM2 tech, but that isn't 100% clear yet. We need to see a die shot to see what is eating up all the space.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
Sure, it is always a business decision, but that doesn't mean it is the correct decision.
There is no possible way AMD was thinking that HBM2 was going to be cheaper than GDDR5(X), yet they seem to have bet the farm on HBM2. Sure, it is supposed to be cheaper than HBM, but I doubt it is a substantial savings. I also realize that they most likely couldn't afford to do a dual-memory-design Vega chip.
That also seems to be one of the reasons why the chip is so massive, to support the HBM2 tech, but that isn't 100% clear yet. We need to see a die shot to see what is eating up all the space.
In the long run it is the cheaper technology and offers benefits compared to GDDR5(X) tech: lower power consumption, smaller package size, cost reduction, and higher bandwidth.

Also, it enables powerful APUs with memory on package. True SoCs, which will deliver higher performance for the masses and will phase out low-end GPUs.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Yes, I think everyone knows that, but AMD cannot do this as they have to focus on Navi. That's where the limited resources and more out-of-the-box thinking can pay off. The Vega chip is step one of many.

This is what people don't understand: it's a business decision, and while all the focus is on the CPU, the graphics division can slide a little. The stock is doing great because BUSINESSES focus on business decisions. No one goes into a meeting, looks at the CEO or CIO, and giggles about gaming benchmarks. The CPU is about cores, power, and overall footprint. The GPU is about stability, price, and features.

If you all want to discuss a terrible decision, it's AMD having shitty enterprise products and support. It's about time they put priorities first.
Let's not forget that this Vega is a brand new architecture from the ground up.
So it's extremely poor that this is the best they could do.

Power efficiency is poor.
Chip usage is poor.
IPC over Fiji is poor.

What exactly is good about Vega? It's hard to find anything worth praising after AMD spent so many years making this architecture.
 

PhonakV30

Senior member
Oct 26, 2009
987
378
136
Ryzen gives life to AMD, but Vega only gives hope; that's why AMD doesn't care so much about the performance of Vega. Ryzen is everywhere: tablet, mobile, desktop, server, datacenter, etc.
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
Sure, it is always a business decision, but that doesn't mean it is the correct decision.
There is no possible way AMD was thinking that HBM2 was going to be cheaper than GDDR5(X), yet they seem to have bet the farm on HBM2. Sure, it is supposed to be cheaper than HBM, but I doubt it is a substantial savings. I also realize that they most likely couldn't afford to do a dual-memory-design Vega chip.
That also seems to be one of the reasons why the chip is so massive, to support the HBM2 tech, but that isn't 100% clear yet. We need to see a die shot to see what is eating up all the space.

What the heck is a good decision? You mean the insane rise in stock price and market value? Yeah, total dud
 

Elixer

Lifer
May 7, 2002
10,371
762
126
In the long run it is the cheaper technology and offers benefits compared to GDDR5(X) tech: lower power consumption, smaller package size, cost reduction, and higher bandwidth.
The same was said for HBM, but that didn't work out so well.
The only way it will be cheaper tech in the long run is via economies of scale, and with only one fab cranking out HBM2, it is going to be a very long time before we see any cost savings.

Also, it enables powerful APUs with memory on package. True SoCs, which will deliver higher performance for the masses and will phase out low-end GPUs.
Assuming another tech doesn't compete against it, yeah, but this is still going to be really expensive, and as we can see today, almost all OEMs just want the cheapest option, not the best option.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
If you all want to discuss a terrible decision, it's AMD having shitty enterprise products and support. It's about time they put priorities first.

If they want to get into the enterprise GPU market, they need to support CUDA. Period.
 