News AMD Announces Radeon VII


x3sphere

Senior member
Jul 22, 2009
722
24
81
www.exophase.com
Kinda interested in this card. I don't care about RTX features, they seem to be mostly unusable at higher resolutions at this time. It'll also be bundled with one of the games I'm interested in (DmC 5) so that's a plus too.

It also seems a lot of people are still experiencing RTX card failures. Of course you always run the chance of getting a lemon, but I have never seen so many reports of failures before.

The downside is that it doesn't seem to be much more powerful than a 1080 Ti, which offered around this performance 2 years ago. Not a great value at all. But I got a 34GK950F FreeSync monitor recently, so this would pair nicely with it. The only reason Nvidia is even an option is that they recently announced compatibility with non-G-Sync monitors, but I am not seeing much reason to get an RTX 2080 over this.
 

Thrashard

Member
Oct 6, 2016
140
28
71
I'm shocked Jensen said those things, but considering what our CIA-run news has been doing the last 2 years, this all looks staged. And they're related too, which I did not know. That's a nice zinger.
 

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
That was a great way to handle it, and it made Jensen look pretty unprofessional IMO.

But he's probably right. Matching a 2080 / 1080 Ti on 14/12nm while being on 7nm and using 300W isn't exactly great. Probably the same story all over again: clocked to the limits. That it uses a lot of power can be seen from the cooling solution and the 2x 8-pin connectors.

Anyone who bashes this new AMD card is smoking crack. There are so many fake news stories and sock puppet accounts that you can't trust anyone.

Hello? There are people like myself who have a 5-year-old system, and regardless of what the news says, we are still living in a recession.

As others have said: same performance as the 1080 Ti, which released 2 years ago for the same price. Stagnating performance/$ over 2 years is just bad.

No way is AMD bringing this to market at a loss. They are not that profitable yet; they don't have the luxury to make such moves.

Of course they make money with it, but I suspect they can't lower the price much more, and they only made it a product once they realized NV hiked prices again while not offering higher performance.
 
  • Like
Reactions: happy medium

PhonakV30

Senior member
Oct 26, 2009
987
378
136
He also said Freesync doesn't work!
Perhaps he was referring to a very specific workload where FreeSync won't work.
makes sense as it seems to suffer from same problems of poor balance and performance/watt.
Vega 64's memory bandwidth is half of Radeon VII's: 484 GB/s vs 1 TB/s.

https://cdn.wccftech.com/wp-content...a-VII-GPU-Official-Presentation_2-740x416.jpg

On Vega 64 there are only 2 HBM packages for 4 graphics pipelines; Radeon VII has 1 HBM stack for each graphics pipeline, so each graphics pipeline in Radeon VII accesses its own HBM memory (no more sharing).
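As a rough back-of-envelope on what the extra stacks buy (using the commonly cited spec numbers of ~484 GB/s over two HBM2 stacks for Vega 64 and ~1 TB/s over four stacks for Radeon VII; treat them as approximations):

```python
# Back-of-envelope: aggregate and per-stack HBM2 bandwidth for the two cards.
# Figures are the commonly cited spec-sheet numbers, not measured values.

cards = {
    "Vega 64":    {"stacks": 2, "total_bw_gbs": 484},   # 2048-bit HBM2 interface
    "Radeon VII": {"stacks": 4, "total_bw_gbs": 1024},  # 4096-bit HBM2 interface, ~1 TB/s
}

for name, c in cards.items():
    per_stack = c["total_bw_gbs"] / c["stacks"]
    print(f"{name}: {c['total_bw_gbs']} GB/s over {c['stacks']} stacks "
          f"-> ~{per_stack:.0f} GB/s per stack")
```

Per stack the bandwidth is roughly the same (~242 GB/s vs ~256 GB/s); the doubling comes from having twice as many stacks on a 4096-bit bus, which is exactly the "one stack per graphics pipeline" point above.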
 
Last edited:

krumme

Diamond Member
Oct 9, 2009
5,952
1,585
136
makes sense as it seems to suffer from same problems of poor balance and performance/watt.
ROPs don't matter. It's just an old, pre-Maxwell type of gaming GPU architecture. The ROPs are doing tons of work that isn't necessary. The new arch was meant to alleviate that and other problems, but it was borked. So we are stuck with a prehistoric GPU built out of high-end stuff.

Now as for the performance: it remains to be seen if the perf is where AMD says it is, but looking at the ~1.16x frequency uplift vs. Vega, I wouldn't think this GPU is a 300W part. If you ran it at the same frequency as Vega you would get approximately half the power usage. I think AnandTech is wrong about the power usage. But I don't think this is quite 2080 performance either. Unless some of those few new transistors are used to fix the borked fast path, and that's highly unlikely. On the other hand, Navi without that fix seems a total waste.
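For anyone who wants to sanity-check that scaling argument, here is a minimal sketch using the usual dynamic-power relation (P roughly proportional to f·V²). The clock ratio comes from the announced ~1800 MHz boost versus Vega 64's ~1545 MHz; the assumption that voltage tracks frequency near the top of the V/f curve is mine, not something AMD has stated:

```python
# Rough sanity check of the "run it at Vega clocks" argument.
# Dynamic power scales roughly as P ~ f * V^2; everything below is an estimate.

radeon_vii_boost_mhz = 1800   # announced boost clock
vega64_boost_mhz = 1545       # Vega 64 boost clock (the ~1.16x uplift quoted above)
board_power_w = 300           # the figure being debated

f_ratio = vega64_boost_mhz / radeon_vii_boost_mhz  # ~0.86
# Assumption: near the top of the V/f curve voltage scales roughly with frequency,
# so power falls approximately with the cube of the clock ratio.
power_at_vega_clocks = board_power_w * f_ratio ** 3

print(f"Clock ratio: {f_ratio:.2f}")
print(f"Estimated power at Vega 64 clocks: ~{power_at_vega_clocks:.0f} W")
# -> roughly 190 W from clocks/voltage alone; getting all the way to "about half"
#    would need 7nm to also allow lower voltage at the same frequency.
```

Take the exact numbers loosely; the point is just that a ~16% clock cut plus the accompanying voltage drop compounds quickly, and the node would have to supply the rest of any "half power" claim.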
 

coercitiv

Diamond Member
Jan 24, 2014
6,182
11,831
136
Now as for the performance: it remains to be seen if the perf is where AMD says it is, but looking at the ~1.16x frequency uplift vs. Vega, I wouldn't think this GPU is a 300W part. If you ran it at the same frequency as Vega you would get approximately half the power usage.
The card is specifically clocked to match the RTX 2080. Maybe AMD finally did the right thing and made a ~250W card with decent overclocking headroom, but I really doubt AMD marketing would give up the opportunity to show a 5%+ advantage over the RTX 2080 at CES in exchange for a card with decent power usage in reviews. They're just not wired that way.

Meanwhile, the Radeon Instinct MI50 and MI60 clock to 1746 MHz and 1800 MHz respectively and come with a 300W TDP. The Instinct MI50 actually matches the Radeon VII in shader count and memory configuration, and comes with a 300W TDP at a 1746 MHz boost.
 
  • Like
Reactions: lightmanek

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Well, I was definitely wrong in my assumptions. I'm baffled. AMD outright said they kept Vega 64's rendering hardware the same for Vega 20, so where the doubled ROPs would come from, I don't know. I saw one of the AnandTech guys saying that AMD had told him the ROPs were doubled, but that also makes little sense to me, since Vega 20 was supposedly enterprise focused. Then again, they're apparently using these in some of the game-streaming enterprise solutions, so I guess that makes sense (though it doesn't make sense that this was basically the only way they addressed that). To me it would make more sense to buy up the Vega 64 cards that were made for mining.

Granted, a lot of my takes (Navi being ready now) were based on rumor (and I tried to make that clear), so I'm not really disappointed that it didn't pan out like I thought. But this card seems to directly contradict things AMD has said, so now even what they say has to be taken with a grain of salt.

The silliest part is that it's not like this card is really upsetting Nvidia's market strategy, so there was little to no reason to keep it under wraps and claim Vega 20 would not be for gamers (they didn't make this a Frontier Edition card, so either they're taking the opposite track from Nvidia and treating everything above this as prosumer, or this is a gaming card). I don't think they would have lost many Vega 64 sales (if anything, they should've announced this thing ASAP to steal some thunder from Nvidia's RTX launch). But then I wonder if the only reason we're seeing this is that the RTX cards didn't go over as well - largely because of the pricing - which made AMD feel like they could offer this as a competitive product. That, and either the yields on 7nm are already that good, or it's flopping hard in enterprise (in which case I'd think they'd just drop the price there, or shift it to prosumer where they could probably sell it for double, or at least $1000; maybe offer special pricing packages for Threadripper workstations that use it).

I really hope this isn't an indication that Navi is that far off.

Did other performance figures get leaked? (I just saw that there are some being discussed.) The only thing I've seen touted is AMD's own performance comparison against the 2080 (which is likely a best-case scenario), and while it's definitely a step up over Vega 64, it's honestly not that impressive. Considering the price - and I have serious questions about whether AMD is making money selling the card at that price (given 7nm and more HBM channels and stacks) - I just have to wonder why they're doing this. It's in line with what I was expecting, and to me that isn't good. I'm pleasantly surprised that they apparently can get it down to that price, as I was thinking it'd cost them even more, but even at that price I'm not impressed.

Not only that, but I'm not at all impressed with the efficiency. That tells me the issues with Vega were not mostly down to GF's process (and the fact that Zen 2 is seeing such a big boost in efficiency tells me 7nm is already living up to its potential). Same with the clock speeds. Vega seems like an inherently flawed architecture, one that seemingly prioritized compute capability. That would be fine, but its rendering/rasterizing capability suffered mightily for it, and I'm not sure its compute outstripped simply doubling Polaris and adding support for further reduced-precision math. We can be fairly certain that gaming performance didn't benefit versus that, and if they had gone with 96 ROPs, it would probably be more balanced. On top of that, just doubling the memory channels would probably have made it cheaper, and it would have had more bandwidth than the HBM2 solution offered.

Guess we'll see. People have speculated a lot that Vega was bottlenecked by ROPs and/or memory bandwidth. But if things were that severe, I'd expect a bigger improvement from doubling both. It seems like it will still be unbalanced, just in the opposite direction now or in some other manner (unless it's still ROP- and/or bandwidth-bottlenecked, in which case... WTH).

I really hope AMD had access to some of the data that Microsoft used to decide on the One X setup, as that seems like a well-balanced and efficient design. That would seem to support the idea that AMD's dGPUs tend to be somewhat bandwidth limited, though I believe it was actually the TMUs that were boosted relative to Polaris, while the ROPs stayed the same. And I believe AMD has said in the past that they don't consider ROPs a limiting factor (so I wouldn't be surprised if most of Vega 20's performance increase comes simply from the memory).

It's a shame that GPU IP has been so staunchly locked up, as I feel this market could use a boost from companies trying similar ideas with different ratios or other tweaks. I also feel that AMD and Nvidia are both trying to figure out their next steps, weighing specialization versus integration, while also dealing with major engineering challenges and balancing product portfolios. They're trying to design GPUs for such a wide breadth of markets that I can't help but feel we're getting a lot of products that aren't well suited for their markets.

You may be correct that the previously reported ROP count makes no sense:

https://www.extremetech.com/gaming/...-viis-core-configuration-has-been-misreported
"According to an AMD spokesperson we’ve since spoken to, the number of reported ROPs for the Radeon VII is incorrect. The 128 figure, while widely and credibly stated, is wrong. “Radeon VII is 64 ROPs,” the AMD spokesperson stated to us. The GPU’s actual configuration is therefore 3840:240:64."

Edit: from the AnandTech Twitter page:
 
Last edited:
  • Like
Reactions: lightmanek

GodisanAtheist

Diamond Member
Nov 16, 2006
6,780
7,107
136
Happy/not Happy that the ROPs thing got sorted, that was keeping me up at night :p.

So really we're looking at the second shrink of Fiji. No one can say AMD didn't get its money's worth out of GCN...
 

linkgoron

Platinum Member
Mar 9, 2005
2,293
814
136
Happy/not Happy that the ROPs thing got sorted, that was keeping me up at night :p.

So really we're looking at the second shrink of Fiji. No one can say AMD didn't get its money's worth out of GCN...

Vega 64 wasn't a straight shrink of Fiji. However, it does look like Vega 20 is essentially unchanged from Vega 10. I believe the performance levels of Vega 20 at 7nm are essentially what they originally expected from Vega 10 at 14nm. I'll wait for benchmarks, as I might be surprised, but for gaming I don't see why anyone would buy this over a (similarly priced) 2080 or a cheaper 1080 Ti.
 

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,328
4,913
136
1:2-rate FP64, doubled memory controllers, >2x memory bandwidth.

It won't set any records in gaming versus the RTX 2080 Ti, but if the DP compute rate in Ryan's tweet is accurate, it has the potential to be a DP compute monster. The Radeon HD 7970 was the last consumer card with good bang for the buck on the DP front (1:4).
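As a quick illustration of why the rate matters, here is a rough peak-throughput calculation. It assumes the corrected 3840-shader configuration quoted earlier and an ~1800 MHz boost clock; the 1:2 FP64 rate itself is the unconfirmed part of the tweet:

```python
# Rough peak-throughput math for Radeon VII, assuming the corrected 3840-shader
# configuration and an ~1800 MHz boost clock. The 1:2 FP64 rate is unconfirmed.

shaders = 3840
boost_clock_ghz = 1.8
flops_per_clock = 2            # one FMA per shader per clock = 2 FLOPs

fp32_tflops = shaders * flops_per_clock * boost_clock_ghz / 1000
fp64_half_rate = fp32_tflops / 2     # 1:2 rate (what the tweet would imply)
fp64_quarter_rate = fp32_tflops / 4  # 1:4 rate, for comparison (HD 7970-style)

print(f"FP32 peak:   ~{fp32_tflops:.1f} TFLOPS")
print(f"FP64 at 1:2: ~{fp64_half_rate:.1f} TFLOPS")
print(f"FP64 at 1:4: ~{fp64_quarter_rate:.1f} TFLOPS")
```

At 1:2 that works out to roughly 7 TFLOPS of FP64 on a consumer card, which is why it would be a big deal; at a more typical consumer rate it would be far less interesting.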
 

krumme

Diamond Member
Oct 9, 2009
5,952
1,585
136
Vega 64 wasn't a straight shrink of Fiji. However, it does look like Vega 20 is essentially unchanged from Vega 10. I believe the performance levels of Vega 20 at 7nm are essentially what they originally expected from Vega 10 at 14nm. I'll wait for benchmarks, as I might be surprised, but for gaming I don't see why anyone would buy this over a (similarly priced) 2080 or a cheaper 1080 Ti.
There are still approximately 1B extra transistors in there. What do they do?
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
makes sense as it seems to suffer from same problems of poor balance and performance/watt.

The ROP count has ZERO impact on its Perf:Watt. Polaris and Vega have low Perf:Watt only when clocked to the sky to be desktop GPUs. They were designed for low power usage, and when run in that configuration they are extremely power efficient. But since AMD wanted to sell them as desktop GPUs as well, they clocked them way up, which leads to power consumption skyrocketing.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
There are still approximately 1B extra transistors in there. What do they do?

Perhaps the same thing the first extra set of transistors did for Vega coming from Fiji: they helped increase clock speeds.

https://www.anandtech.com/show/1168...-vega-64-399-rx-vega-56-launching-in-august/3
Talking to AMD’s engineers, what especially surprised me is where the bulk of those transistors went; the single largest consumer of the additional 3.9B transistors was spent on designing the chip to clock much higher than Fiji. Vega 10 can reach 1.7GHz, whereas Fiji couldn’t do much more than 1.05GHz. Additional transistors are needed to add pipeline stages at various points or build in latency hiding mechanisms, as electrons can only move so far on a single clock cycle; this is something we’ve seen in NVIDIA’s Pascal, not to mention countless CPU designs. Still, what it means is that those 3.9B transistors are serving a very important performance purpose: allowing AMD to clock the card high enough to see significant performance gains over Fiji.

I get a sneaking suspicion this thing is gonna suck down power. It seems AMD is left juicing their current products with each node shrink to keep them competitive. I hope Navi turns that ship around, because I don't see Intel making a huge initial splash, so NV is left to do as they please.
 

garagisti

Senior member
Aug 7, 2007
592
7
81
You may be correct that the previously reported ROP count makes no sense:

https://www.extremetech.com/gaming/...-viis-core-configuration-has-been-misreported
"According to an AMD spokesperson we’ve since spoken to, the number of reported ROPs for the Radeon VII is incorrect. The 128 figure, while widely and credibly stated, is wrong. “Radeon VII is 64 ROPs,” the AMD spokesperson stated to us. The GPU’s actual configuration is therefore 3840:240:64."

Edit: from the AnandTech Twitter page:
From the ExtremeTech link you posted, the provided benchmark info is 4K at the highest settings for those games, so how well or poorly does it compare with a 1080 Ti or RTX 2080 at ultra settings?
 

ozzy702

Golden Member
Nov 1, 2011
1,151
530
136
The ROP count has ZERO impact on its Perf:Watt. Polaris and Vega have low Perf:Watt only when clocked to the sky to be desktop GPUs. They were designed for low power usage, and when run in that configuration they are extremely power efficient. But since AMD wanted to sell them as desktop GPUs as well, they clocked them way up, which leads to power consumption skyrocketing.

Gotta disagree. NVIDIA is still more power efficient when clocked low. Yes, AMD is pushed so hard that its performance/watt is terrible, but even at performance parity NVIDIA is still consuming less juice. NVIDIA has done a great job of designing chips for high clocks while remaining efficient.

Navi can't come soon enough.
 

mohit9206

Golden Member
Jul 2, 2013
1,381
511
136
The ROP count has ZERO impact on its Perf:Watt. Polaris and Vega have low Perf:Watt only when clocked to the sky to be desktop GPUs. They were designed for low power usage, and when run in that configuration they are extremely power efficient. But since AMD wanted to sell them as desktop GPUs as well, they clocked them way up, which leads to power consumption skyrocketing.

I remember the hype in forums about Polaris offering 970-class performance at 75W, lol. Back then nobody took a step back and thought about how that would even be possible. If Polaris and Vega weren't meant to be great desktop GPUs, then why have enthusiasts been excited about their possibilities for the last 3 years?
 

Mopetar

Diamond Member
Jan 31, 2011
7,830
5,976
136
If Polaris and Vega weren't meant to be great desktop GPUs, then why have enthusiasts been excited about their possibilities for the last 3 years?

Hope and hype. Enthusiasts always hope there's something new, great, and exciting just around the corner and that naturally builds up huge amounts of hype over time.

In the case of AMD, I suspect that there's a little bit of the rooting for the underdog mentality thrown in there as well, and not just from the longtime fans of the company.

Sometimes the hopes are realized and the product lives up to the hype as was the case with Ryzen. Other times, well other times you get Polaris and Vega. Even if it doesn't seem good initially, there's still holdout hope that driver updates or something magical will unleash some latent potential, and Vega had that in spades with all of these different technologies that were talked about, but not always well understood.

I think Vega is enough of a known quantity that there isn't much sense in getting excited about it in a general sense, or in expecting it to ever touch the competing Ti card, unless maybe it's five years down the road and there's severe driver rot.

If there's anything to be excited about, it's that AMD has managed to learn a lot from their Vega missteps and that Navi will be a significantly improved product. If nothing else, hopefully it will at least represent an improvement in performance per dollar, as we haven't gotten that from AMD with the 590 or this newest Vega.