
Is Vega going to be DOA!?


Qwertilot

Golden Member
Nov 28, 2013
I do wish AMD would stop feeling the need to ramp the stock clocks on reference models so far past the most efficient point. You can see why, I suppose, but I really don't think it helps them overall, given what it does to power consumption.
 

guskline

Diamond Member
Apr 17, 2006
I do wish AMD would stop feeling the need to ramp the stock clocks on reference models so far past the most efficient point. You can see why, I suppose, but I really don't think it helps them overall, given what it does to power consumption.

Perhaps the year since the original Polaris release has resulted in refined production methods, etc., and the clock speeds can be increased for performance without affecting power draw.
 

[DHT]Osiris

Lifer
Dec 15, 2015
Here's a pretty awesome set of charts I just found on the googles, looks like it's everything from geforce 3 series through 5xx, ati radeon 9xxx series or so, through 6970.

For reference, 1080gtx power usage averages 175w (under benchmarking or whatever), full bore has the occasional spike above 250w.

Massive amount of data there, but power consumption spiked noticeably around the geforce 8800gtx era, from 57w on the 7800gtx to 132w on the 8800gtx (193 on the ultra). This rose to 265w with the 9800, 463w with the 295, before finally tapering a little with the 480 at 310w. 580 was at 310w as well, and the trend has gone downward from there.

Similar things can be pulled from ATI, starting with the x850 (70w), x1900 (120w), x2900 (230w), x3870 (180w), 4890 (240w), 5970 (470w), and finally 6870 (254w). Generally speaking it looks like ati kept a lower power usage ceiling than NV historically, but they've got a few standouts as well, like that 600w oc'd 5970.
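
If you want to eyeball those jumps, here's a quick back-of-the-envelope sketch; the wattages are just the ones quoted above, nothing freshly measured:

Code:
/* NVIDIA power figures (watts) as quoted above; print the
   generation-over-generation change for each step. */
#include <stdio.h>

int main(void) {
    const char *card[] = {"7800 GTX", "8800 GTX", "8800 Ultra",
                          "9800", "295", "480", "580"};
    const double watts[] = {57, 132, 193, 265, 463, 310, 310};
    for (int i = 0; i < 7; i++) {
        if (i == 0)
            printf("%s: %.0f W\n", card[i], watts[i]);
        else
            printf("%s: %.0f W (%+.0f%%)\n", card[i], watts[i],
                   (watts[i] / watts[i - 1] - 1) * 100);
    }
    return 0;
}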

[Attached chart: gfxpowerchartbybrandgen.png]

[Attached chart: gfxpowerchartbygen.png]
 

tamz_msc

Diamond Member
Jan 5, 2017
Here's a pretty awesome set of charts I just found on the googles, looks like it's everything from geforce 3 series through 5xx, ati radeon 9xxx series or so, through 6970.

For reference, 1080gtx power usage averages 175w (under benchmarking or whatever), full bore has the occasional spike above 250w.

Massive amount of data there, but power consumption spiked noticeably around the geforce 8800gtx era, from 57w on the 7800gtx to 132w on the 8800gtx (193 on the ultra). This rose to 265w with the 9800, 463w with the 295, before finally tapering a little with the 480 at 310w. 580 was at 310w as well, and the trend has gone downward from there.

Similar things can be pulled from ATI, starting with the x850 (70w), x1900 (120w), x2900 (230w), x3870 (180w), 4890 (240w), 5970 (470w), and finally 6870 (254w). Generally speaking it looks like ati kept a lower power usage ceiling than NV historically, but they've got a few standouts as well, like that 600w oc'd 5970.
That GTX 480 quad-SLI result made me chuckle.
 

[DHT]Osiris

Lifer
Dec 15, 2015
That GTX 480 quad-SLI result made me chuckle.


Haha, I know. I thought the chart maker was daft at first for having the power axis go out to 1,000w instead of capping at something more 'reasonable', until I saw all the 3x/4x SLI and CrossFire results.
 

OatisCampbell

Senior member
Jun 26, 2013
If Vega doesn't pan out as a real threat, then I could easily see another repeat of Pascal, but even worse.

1190FE $850
Titan Vista $1400
1 year later
1190Ti $850
Titan Vista xp $1400

Edit: How did Nvidia fans not riot in the streets over the Founders Edition? An extra $100 for a worse card!? It was a blatant and appalling ripoff, yet people were buying them as if they were collectibles.

You underestimate the power of pent-up demand in the PC gamer market. NVIDIA, or AMD, could have released a bare BYO-cooler model last summer and sold out.

Also, reference coolers have been on the decline for a long time; it's the bone tossed to AIBs so they can make their cards the big sellers.
 

[DHT]Osiris

Lifer
Dec 15, 2015
You underestimate the power of pent-up demand in the PC gamer market. NVIDIA, or AMD, could have released a bare BYO-cooler model last summer and sold out.

Also, reference coolers have been on the decline for a long time; it's the bone tossed to AIBs so they can make their cards the big sellers.

I actually think a universal socket/locking mechanism for video cards would be badass; it would let you drop in air/water cooling plates and fans as you see fit.

Or even as you said, a 'white box' card with no cooling/heat sinks attached, let people roll their own cooling solution. Yank $50 off the cost of the card or whatever, yank them off the production line early, and call it a day.
 

Borealis7

Platinum Member
Oct 19, 2006
I really hope AMD has something in the 1080 range for ~$400; that would be perfect to replace my "champion of 2012-13" GTX 680.
But history tends to repeat itself, so I predict it's going to go down like this:

AMD releases Vega and does not dethrone the GTX 1080 Ti -> Nvidia price-cuts the 10 series -> a few months later, Nvidia releases Volta to leave AMD in the dust for 2 more years.
 

unseenmorbidity

Golden Member
Nov 27, 2016
I really hope AMD has something in the 1080 range for ~$400; that would be perfect to replace my "champion of 2012-13" GTX 680.
But history tends to repeat itself, so I predict it's going to go down like this:

AMD releases Vega and does not dethrone the GTX 1080 Ti -> Nvidia price-cuts the 10 series -> a few months later, Nvidia releases Volta to leave AMD in the dust for 2 more years.
That's not really what has happened, historically speaking.
 

SolMiester

Diamond Member
Dec 19, 2004
Just a reminder of how AMD has in fact destroyed the competition in the past:

[Attached chart: perfwatt_2560.gif]


When you have a card like that at the top end, you know the cut down version is going to be a really great value, as the HD 5850 was.
You need to show FPS, not percentages. That chart is useless when 100% might be unplayable, and barely 0.01% of gamers ran 2560x1600, which only favored AMD because of the larger VRAM its cards carried at the time...
 

DisEnchantment

Golden Member
Mar 3, 2017
Vega 10
Code:
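/* pci_device_id fields: vendor (0x1002 = AMD), device ID, subvendor,
 * subdevice, class, class mask, driver_data (chip family) */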
{0x1002, 0x6860, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
{0x1002, 0x6861, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
{0x1002, 0x6862, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
{0x1002, 0x6863, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
{0x1002, 0x6867, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
{0x1002, 0x686c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
{0x1002, 0x687f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},

Based on AMD's convention of higher IDs denoting lower performance, would this indicate that the chip from CES is the slower Vega part?
The one from CES is the 687F. If the fastest is the 687F, then six slots below would be RX 460 grade? :D:D

6861/2/3/4 could be close to each other in performance, given such adjacent IDs, or they could simply be different feature configurations?
6867 and 686C seem ...
 

tviceman

Diamond Member
Mar 25, 2008
Just a reminder of how AMD has in fact destroyed the competition in the past:

[Attached chart: perfwatt_2560.gif]


When you have a card like that at the top end, you know the cut down version is going to be a really great value, as the HD 5850 was.

That was AMD's pinnacle. It's been downhill ever since. Their follow-up flagship on the same node couldn't match the efficiency of the HD 5870, and Barts couldn't separate itself from Cayman either.


[Attached chart: perfwatt_2560.gif]



Back on topic: Vega will likely be more hard-pressed than Fiji was. The competition has been out far longer, and new competition will likely arrive earlier in Vega's shelf life than it did in Fiji's.
 

guskline

Diamond Member
Apr 17, 2006
I suspect Vega will be fine. There will be some growing pains, as with Polaris, but the Radeon Group needs a new high-end GPU; Fury is getting old.
 

CatMerc

Golden Member
Jul 16, 2016
Vega 10
Code:
{0x1002, 0x6860, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
{0x1002, 0x6861, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
{0x1002, 0x6862, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
{0x1002, 0x6863, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
{0x1002, 0x6867, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
{0x1002, 0x686c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
{0x1002, 0x687f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},

Based on AMD's convention of higher IDs denoting lower performance, would this indicate that the chip from CES is the slower Vega part?
The one from CES is the 687F. If the fastest is the 687F, then six slots below would be RX 460 grade? :D:D

6861/2/3/4 could be close to each other in performance, given such adjacent IDs, or they could simply be different feature configurations?
6867 and 686C seem ...
Assuming you are correct that a higher ID denotes lower performance, this might just be server/machine-learning configurations in play. Some features might be enabled or disabled depending on the market, and the professional parts will use different, higher-density HBM2 stacks than the consumer ones.

What consumers are getting might very well be the slowest Vega, but it could still be just as fast in gaming as its big brothers.
 

alcoholbob

Diamond Member
May 24, 2005
I suspect Vega will be fine. There will be some growing pains, as with Polaris, but the Radeon Group needs a new high-end GPU; Fury is getting old.

Vega only has double the core count of RX 480. Even assuming 100% scaling, they need to increase the core clock by about 20% to reach the 1080 Ti (Vega needs to hit at least around 1525MHz), and to reach an overclocked 1080 Ti it needs closer to a 35% speed bump (around 1700 MHz).
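
For reference, those clock targets line up with the RX 480's 1266 MHz reference boost; a minimal sketch of the arithmetic, taking the perfect-2x-scaling premise above at face value:

Code:
/* RX 480 reference boost clock; the perfect 2x core scaling is assumed
   above, so only the required clock bump is computed here. */
#include <stdio.h>

int main(void) {
    const double rx480_boost_mhz = 1266.0;
    printf("~20%% bump (stock 1080 Ti target): %.0f MHz\n",
           rx480_boost_mhz * 1.20);  /* ~1519 */
    printf("~35%% bump (OC 1080 Ti target):    %.0f MHz\n",
           rx480_boost_mhz * 1.35);  /* ~1709 */
    return 0;
}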

We'll see if AMD has managed that significant of a bump. They didn't manage that much from R9 290 -> R9 390 -> Fury.

Most of the evidence points to big Vega being more along the lines of a $450 card and a 1080 competitor.
 

mohit9206

Golden Member
Jul 2, 2013
Vega only has double the core count of RX 480. Even assuming 100% scaling, they need to increase the core clock by about 20% to reach the 1080 Ti (Vega needs to hit at least around 1525MHz), and to reach an overclocked 1080 Ti it needs closer to a 35% speed bump (around 1700 MHz).

We'll see if AMD has managed that significant of a bump. They didn't manage that much from R9 290 -> R9 390 -> Fury.

Most of the evidence points to big Vega being more along the lines of a $450 card and a 1080 competitor.
Yeah, I don't see Vega being able to touch the 1080 Ti either. I'm guessing we'll get a 1080 competitor at $450 and a 1070 competitor at $330.
Maybe another card that's slightly faster than the 1080 for $550.
They'll be slightly cheaper than Nvidia's cards but end up at the same price outside the USA, because AMD doesn't have direct control over its retail channels the way Nvidia does.
 

unseenmorbidity

Golden Member
Nov 27, 2016
Vega only has double the core count of RX 480. Even assuming 100% scaling, they need to increase the core clock by about 20% to reach the 1080 Ti (Vega needs to hit at least around 1525MHz), and to reach an overclocked 1080 Ti it needs closer to a 35% speed bump (around 1700 MHz).

We'll see if AMD has managed that significant of a bump. They didn't manage that much from R9 290 -> R9 390 -> Fury.

Most of the evidence points to big Vega being more along the lines of a $450 card and a 1080 competitor.
You cannot compare them like that. They are totally different architectures; how TFLOPS translate into gaming performance might be totally different.

Die sizes:
RX 480 - 232 mm²
GTX 1080 - 314 mm²
Vega 10 - ~530 mm² (with HBM2)
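
For what it's worth, paper TFLOPS are just 2 x shaders x clock. A quick sketch using the public reference boost clocks; the catch, as noted above, is that gaming performance per paper TFLOP differs between these architectures:

Code:
/* Paper FP32 throughput: 2 FLOPs (one fused multiply-add) per shader
   per clock. Shader counts and boost clocks are the public specs. */
#include <stdio.h>

static double tflops(int shaders, double boost_ghz) {
    return 2.0 * shaders * boost_ghz / 1000.0;
}

int main(void) {
    printf("RX 480:   %.2f TFLOPS\n", tflops(2304, 1.266)); /* ~5.8 */
    printf("GTX 1080: %.2f TFLOPS\n", tflops(2560, 1.733)); /* ~8.9 */
    return 0;
}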
 

alcoholbob

Diamond Member
May 24, 2005
Except we do have some evidence: the Doom demo they showed, running Vulkan (the absolute best light for Vega), and it was only about 10% faster than the 1080. That seems to point to general DX11 performance possibly even lower than the GTX 1080's.
 

3DVagabond

Lifer
Aug 10, 2009
June. Some think it will be sooner.



Looks like projection to me.

I am waiting for Vega, but I am not a fanboy. I try to support AMD when possible to ensure we still have some semblance of competition.

Any fool should be able to see what a market without AMD looks like, because we have gotten a good glimpse of it over the years:

$1800 CPUs and $1200 GPUs, with 4 flagship GPUs in one generation.

You have to ask yourself why people would root for this type of market. Then, when you exhaust all of the possibilities, you'll see that arguing with them only feeds their agenda. It gives them a stage to perform on.

I'm not saying not to present your opinions. Just don't waste time on the never-ending back-and-forth of repetitive rhetoric.
 

3DVagabond

Lifer
Aug 10, 2009
Actually, with the density improvements of HBM2 in the PHY department, they can turn GP102 into a 430 mm² die.

If they go full guns blazing and create a 600 mm² GP202 or something of the sort, they have quite a bit of headroom left.
To maintain their margins, what would they have to sell something like that for?
 

JDG1980

Golden Member
Jul 18, 2013
There are three reasons why GCN, until now, has been unable to compete with Maxwell and Pascal.

(1) Maxwell implemented tile-based rendering, which was a huge leap forward. This provided roughly 30% better DX11 performance per TFLOP compared to Kepler. As of now, GCN remains the only serious (i.e., non-Intel) GPU architecture, desktop or mobile, that doesn't implement tiled rendering. Vega will add it, which should be a substantial performance improvement (a toy sketch of the tile-binning idea follows these three points).

(2) Nvidia has had the edge on clock speeds. Maxwell had higher clock speeds than GCN 1.1/1.2, and Pascal has higher clock speeds than Polaris. This has allowed Nvidia to get the same amount of performance out of less silicon by cranking up the clocks. But Vega is said by AMD to be optimized for higher clock speeds - even if it doesn't completely close the gap with Pascal, it should at least narrow it substantially.

(3) All versions of GCN so far have had a limitation of 4 shader engines. This was the primary reason why the Fury cards were so underwhelming; they were badly unbalanced designs, so in many cases they couldn't provide much better performance than Hawaii despite all the extra shaders. IMO, this is why there was never a Polaris card bigger than P10 - the gains would have been so marginal, it wouldn't have been worth it. Vega will remove this limitation and offer better load balancing.
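
To make point (1) concrete, tile-based rendering boils down to binning geometry into small screen tiles and shading one tile at a time out of on-chip memory, which is what cuts framebuffer traffic (and power). A toy illustration only; the tile size and data layout here are made up, not how any real GPU is built:

Code:
/* Toy tile binning: count how many triangles touch each screen tile
   (by bounding box). A tiler then shades one tile at a time out of
   on-chip memory; an immediate-mode GPU instead walks each triangle
   across the whole render target. */
#include <stdio.h>

#define TILE 32                 /* pixels per tile edge (varies in HW) */
#define W 256
#define H 256
#define TILES_X (W / TILE)
#define TILES_Y (H / TILE)

typedef struct { int x[3], y[3]; } Tri;

static int clampi(int v, int lo, int hi) {
    return v < lo ? lo : (v > hi ? hi : v);
}

int main(void) {
    Tri tris[] = {
        {{10, 100, 50}, {10, 20, 90}},     /* two test triangles */
        {{200, 240, 220}, {40, 40, 80}},
    };
    int bins[TILES_Y][TILES_X] = {0};

    for (int t = 0; t < 2; t++) {
        int x0 = tris[t].x[0], x1 = x0, y0 = tris[t].y[0], y1 = y0;
        for (int v = 1; v < 3; v++) {      /* triangle bounding box */
            if (tris[t].x[v] < x0) x0 = tris[t].x[v];
            if (tris[t].x[v] > x1) x1 = tris[t].x[v];
            if (tris[t].y[v] < y0) y0 = tris[t].y[v];
            if (tris[t].y[v] > y1) y1 = tris[t].y[v];
        }
        int tx0 = clampi(x0 / TILE, 0, TILES_X - 1);
        int tx1 = clampi(x1 / TILE, 0, TILES_X - 1);
        int ty0 = clampi(y0 / TILE, 0, TILES_Y - 1);
        int ty1 = clampi(y1 / TILE, 0, TILES_Y - 1);
        for (int ty = ty0; ty <= ty1; ty++)   /* mark overlapped tiles */
            for (int tx = tx0; tx <= tx1; tx++)
                bins[ty][tx]++;
    }

    for (int ty = 0; ty < TILES_Y; ty++) {    /* dump per-tile counts */
        for (int tx = 0; tx < TILES_X; tx++)
            printf("%d ", bins[ty][tx]);
        printf("\n");
    }
    return 0;
}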

In other words, all of the bottlenecks currently holding back GCN should be removed by Vega. That's why I am optimistic about its performance. Could AMD screw this up? Sure, they've done so in the past. But all things considered, Ryzen was a smashing success, and I think Vega will be as well.