
Is Vega going to be DOA!?


unseenmorbidity

Golden Member
Nov 27, 2016
You have to ask yourself why people would root for this type of market. Then when you exhaust all of the possibilities you'll see that arguing with them only feeds their agenda. It gives them a stage to perform on.

I'm not saying not to present your opinions. Just don't waste your time on the never-ending back-and-forth of repetitive rhetoric.
True. I have even seen people go so far as to try to argue that monopolies are good.

There are three reasons why GCN, until now, has been unable to compete with Maxwell and Pascal.

(1) Maxwell implemented tile-based rendering, which was a huge leap forward. This provided roughly 30% better DX11 performance per TFLOP compared to Kepler. As of now, GCN remains the only serious (i.e. non-Intel) GPU architecture in either desktop or mobile that doesn't implement tiled rendering. Vega will add it, which should be a substantial performance improvement.

(2) Nvidia has had the edge on clock speeds. Maxwell had higher clock speeds than GCN 1.1/1.2, and Pascal has higher clock speeds than Polaris. This has allowed Nvidia to get the same amount of performance out of less silicon by cranking up the clocks. But Vega is said by AMD to be optimized for higher clock speeds - even if it doesn't completely close the gap with Pascal, it should at least narrow it substantially.

(3) All versions of GCN so far have had a limitation of 4 shader engines. This was the primary reason why the Fury cards were so underwhelming; they were badly unbalanced designs, so in many cases they couldn't provide much better performance than Hawaii despite all the extra shaders. IMO, this is why there was never a Polaris card bigger than P10 - the gains would have been so marginal, it wouldn't have been worth it. Vega will remove this limitation and offer better load balancing.

In other words, all of the bottlenecks currently holding back GCN should be removed by Vega. That's why I am optimistic about its performance. Could AMD screw this up? Sure, they've done so in the past. But all things considered, Ryzen was a smashing success, and I think Vega will be as well.

That sounds promising.
 

Mopetar

Diamond Member
Jan 31, 2011
I'm guessing we'll get a 1080 competitor at $450 and a 1070 competitor at $330.
Maybe another card that's slightly faster than the 1080 for $550.

Vega is around 500 mm^2 and uses HBM2, so there's no way it sells for as low as $330. If it's that much of an absolute dog for gaming, AMD probably wouldn't bother and would just sell it to the professional market.
 

TerionX6

Junior Member
Jun 29, 2015
From the announced feature changes, we can compare Fiji/Polaris to Vega much as we could compare Kepler to Maxwell. Plus there's the unknown factor of the HBCC unit: specifically what it does, how it works, and in technical detail how it improves performance in VRAM-constrained situations.

The 980 brought feature changes similar to Vega's and was a massive, massive improvement over the 780/680 in perf/watt and clock ceiling. We already know Vega's clock ceiling has increased over Polaris by roughly the same ratio as the 680 > 980 jump. Given the similar changes (tiled rasterization, L2-linked ROPs, improved DCC, better-balanced RBEs), we should expect Vega to see somewhat similar improvements in overall performance.
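As a rough sanity check on that ratio (the reference boost clocks below are quoted from memory, so treat the exact figures as approximate, illustrative numbers), the arithmetic looks something like this:

Code:
/* Rough clock-ratio extrapolation. The boost clocks here are quoted
 * from memory and are approximate, illustrative figures only. */
#include <stdio.h>

int main(void)
{
    double gtx680_boost = 1058.0; /* MHz, Kepler */
    double gtx980_boost = 1216.0; /* MHz, Maxwell */
    double rx480_boost  = 1266.0; /* MHz, Polaris 10 */

    double ratio = gtx980_boost / gtx680_boost;
    printf("680 -> 980 boost ratio: %.2fx\n", ratio);      /* ~1.15x */
    printf("Polaris scaled by the same ratio: ~%.0f MHz\n",
           rx480_boost * ratio);                           /* ~1455 MHz */
    return 0;
}

That lands loosely in the neighborhood of the ~1.5 GHz clocks that have been rumored for Vega.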

People talk about hype and failure, and that's easy to do given the last three years of AMD's history. But look at the facts already given; the conclusions are rather easy to draw.
 

alcoholbob

Diamond Member
May 24, 2005
You don't know the state of the card or drivers. It means almost nothing without more information.

Obviously I don't know, since I'm not clairvoyant. It's called a prediction based on current evidence. The thing is, some people *do* get these predictions right, and that reaffirms the usefulness of empirical data and deductive reasoning.

I find the stance of those who always say "you are wrong because you don't know exactly what the future entails" an interesting one, because all kinds of scientific and industrial endeavors rely on prior data points (complexity theory) to make predictions, precisely because forming a model of human behavior from first principles is almost impossible. Which is to say, that seems like a pretty defeatist outlook on life, and not one suited to cutting-edge scientific research.
 

unseenmorbidity

Golden Member
Nov 27, 2016
ex·trap·o·late
[ikˈstrapəˌlāt]
VERB
  1. extend the application of (a method or conclusion, especially one based on statistics) to an unknown situation by assuming that existing trends will continue or similar methods will be applicable:
I just don't think we know enough to make educated guesses, as there are far too many unknown variables.

I make predictions too, and I typically do a decent job of it, but I can't pin down Vega. I'm not saying you're wrong or right; I don't know.
 

Dave2150

Senior member
Jan 20, 2015
I'm surprised we haven't seen any leaks since Vega was first demonstrated in December. Surely this can only mean that it wasn't final silicon and it's being respun, etc.? Or perhaps they're waiting for yields to improve because it can't compete with the 1080 Ti's price/performance ratio, hmm.
 

CatMerc

Golden Member
Jul 16, 2016
Except we do have some evidence: the Doom demo they showed, running Vulkan (the absolute best light), and it was only about 10% faster than the 1080. That seems to point to general DX11 performance possibly even lower than the GTX 1080's.
Vulkan is only the absolute best light for it if developers coded for it. Vega is a new architecture; it's doubtful it got the optimizations that the rest of GCN received. I would hazard a guess that its performance there was no different from its general DX11 performance.

And that's on a throttling card not hitting its clock-speed targets, on very early drivers.

So no, that DOOM demo doesn't tell us enough.
 

CatMerc

Golden Member
Jul 16, 2016
There are three reasons why GCN, until now, has been unable to compete with Maxwell and Pascal.

(1) Maxwell implemented tile-based rendering, which was a huge leap forward. This provided roughly 30% better DX11 performance per TFLOP compared to Kepler. As of now, GCN remains the only serious (i.e. non-Intel) GPU architecture in either desktop or mobile that doesn't implement tiled rendering. Vega will add it, which should be a substantial performance improvement.
Tile based rendering saves bandwidth and increases energy efficiency. It also reduces cache misses, though not enough to account for the performance uplift. The better DX11 performance per TFLOP wasn't due to that. It was due to reducing CUDA cores per SM from 192 to 128, better balancing the entire chip in terms of resources, and having better load balancing and scheduling.
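To make the bandwidth point concrete, here is a minimal sketch of the binning idea behind tile-based rasterization (this is the general technique, not Nvidia's actual implementation; the tile size and data structures are made up for illustration):

Code:
/* Minimal sketch of tile binning: triangles are sorted into
 * screen-space tiles up front, so each tile's color/depth working set
 * can stay in on-chip cache instead of thrashing DRAM. */
#include <stdio.h>

#define SCREEN_W  1920
#define SCREEN_H  1080
#define TILE      32                               /* hypothetical tile edge */
#define TILES_X   ((SCREEN_W + TILE - 1) / TILE)
#define TILES_Y   ((SCREEN_H + TILE - 1) / TILE)

struct tri { float min_x, min_y, max_x, max_y; };  /* screen-space bbox */

static int bin_count[TILES_Y][TILES_X];

/* Mark every tile the triangle's bounding box touches. */
static void bin_triangle(const struct tri *t)
{
    for (int ty = (int)(t->min_y / TILE); ty <= (int)(t->max_y / TILE) && ty < TILES_Y; ty++)
        for (int tx = (int)(t->min_x / TILE); tx <= (int)(t->max_x / TILE) && tx < TILES_X; tx++)
            bin_count[ty][tx]++;
}

int main(void)
{
    struct tri tris[] = {
        {   0.0f,   0.0f, 100.0f,  80.0f },
        { 500.0f, 300.0f, 900.0f, 700.0f },
    };
    for (unsigned i = 0; i < sizeof(tris) / sizeof(tris[0]); i++)
        bin_triangle(&tris[i]);

    /* A tiled GPU would now rasterize tile by tile, shading all the
     * triangles that touch a tile while its render target stays on-chip. */
    printf("tile (0,0) holds %d triangle(s)\n", bin_count[0][0]);
    return 0;
}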

(2) Nvidia has had the edge on clock speeds. Maxwell had higher clock speeds than GCN 1.1/1.2, and Pascal has higher clock speeds than Polaris. This has allowed Nvidia to get the same amount of performance out of less silicon by cranking up the clocks. But Vega is said by AMD to be optimized for higher clock speeds - even if it doesn't completely close the gap with Pascal, it should at least narrow it substantially.
No problem here.
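For a rough feel of the "same performance from less silicon" point, compare Polaris 10 against GP106 (spec figures quoted from memory, so approximate): the RX 480 and GTX 1060 trade blows in games despite a sizeable raw-throughput gap:

Code:
/* Back-of-the-envelope FP32 throughput: shaders x 2 FLOPs (FMA) x clock.
 * Spec figures are quoted from memory and approximate. */
#include <stdio.h>

static double tflops(int shaders, double clock_mhz)
{
    return shaders * 2.0 * clock_mhz * 1e6 / 1e12;
}

int main(void)
{
    /* RX 480:   2304 SPs   @ ~1266 MHz boost, ~232 mm^2 die */
    printf("RX 480:   %.2f TFLOPs\n", tflops(2304, 1266.0));  /* ~5.83 */
    /* GTX 1060: 1280 cores @ ~1709 MHz boost, ~200 mm^2 die */
    printf("GTX 1060: %.2f TFLOPs\n", tflops(1280, 1709.0));  /* ~4.38 */
    return 0;
}

Similar gaming performance out of roughly 25% fewer TFLOPs on a smaller die is exactly the efficiency edge being described, and higher clocks are a big part of how Pascal gets there.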

(3) All versions of GCN so far have had a limitation of 4 shader engines. This was the primary reason why the Fury cards were so underwhelming; they were badly unbalanced designs, so in many cases they couldn't provide much better performance than Hawaii despite all the extra shaders. IMO, this is why there was never a Polaris card bigger than P10 - the gains would have been so marginal, it wouldn't have been worth it. Vega will remove this limitation and offer better load balancing.
While true, Linux patches showing Vega's configuration reveal that it's still 4 shader engines:
https://lists.freedesktop.org/archives/amd-gfx/2017-March/006570.html
Code:
+ case CHIP_VEGA10:
+ adev->gfx.config.max_shader_engines = 4;
+ adev->gfx.config.max_tile_pipes = 8; //??
+ adev->gfx.config.max_cu_per_sh = 16;
+ adev->gfx.config.max_sh_per_se = 1;
+ adev->gfx.config.max_backends_per_se = 4;
+ adev->gfx.config.max_texture_channel_caches = 16;
+ adev->gfx.config.max_gprs = 256;
+ adev->gfx.config.max_gs_threads = 32;
+ adev->gfx.config.max_hw_contexts = 8;
+
+ adev->gfx.config.sc_prim_fifo_size_frontend = 0x20;
+ adev->gfx.config.sc_prim_fifo_size_backend = 0x100;
+ adev->gfx.config.sc_hiz_tile_fifo_size = 0x30;
+ adev->gfx.config.sc_earlyz_tile_fifo_size = 0x4C0;
+ gb_addr_config = VEGA10_GB_ADDR_CONFIG_GOLDEN;
+ break;
Still a max of 4 shader engines, and still a max of 4 render back ends per SE.
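Multiplying out the limits in that block (and assuming the standard GCN organization of 64 stream processors per CU) gives the familiar Fiji-class ceiling:

Code:
/* Max shader count implied by the CHIP_VEGA10 config above, assuming
 * the usual GCN width of 64 stream processors per CU. */
#include <stdio.h>

int main(void)
{
    int max_shader_engines = 4;   /* from the patch */
    int max_sh_per_se      = 1;   /* shader arrays per engine */
    int max_cu_per_sh      = 16;  /* CUs per shader array */
    int sp_per_cu          = 64;  /* standard GCN CU width */

    int max_cus = max_shader_engines * max_sh_per_se * max_cu_per_sh;
    printf("max CUs:     %d\n", max_cus);                 /* 64 */
    printf("max shaders: %d\n", max_cus * sp_per_cu);     /* 4096 */
    return 0;
}

In other words, the same 64 CU / 4096 shader ceiling Fiji had.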

In other words, all of the bottlenecks currently holding back GCN should be removed by Vega. That's why I am optimistic about its performance. Could AMD screw this up? Sure, they've done so in the past. But all things considered, Ryzen was a smashing success, and I think Vega will be as well.
Nothing to add.
 

KompuKare

Golden Member
Jul 28, 2009
I'm surprised we haven't seen any leaks since Vega was first demonstrated in December. Surely this can only mean that it wasn't final silicon and it's being respun, etc.? Or perhaps they're waiting for yields to improve because it can't compete with the 1080 Ti's price/performance ratio, hmm.

Silicon respins are a possibility, but I would imagine that if Vega is as big a departure from previous GCN architectures as it appears to be, then there will be quite a bit of driver work to do. The usual AMD approach is to get the product out and optimise the drivers later, but that keeps getting them bad reviews and bad sales. So maybe this time they are polishing the drivers first?

Obviously Ryzen is by far their most important launch in years, and while Radeon engineers might not have much to do with that, there are other things that go into a launch, and some of those resources must be shared: packaging expertise, marketing, validation and QA, etc. Those shared resources naturally have Ryzen as priority #1 (and probably #2 and #3 too).

Other factors might be the Scorpio work, plus of course the APUs, where there actually is overlap between CPU and Radeon engineers.
 