News AMD Announces Radeon VII


Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
Just curious, but why no support for Windows 8.1, but there is for 7?!

There are still some people who run 7 because they don't want to move to 10. Windows 8 is a little bit like Vista, in that it got skipped because people either stayed on XP or moved to 7 as soon as they could.

I recall that early in Windows 10's life, XP still had more people using it than Windows 8, so it's not surprising that 8.1 doesn't get support. It's already outdated, but unlike Windows 7 it doesn't have a massive number of users.

I did some more digging for updated stats and it seems that this is still the case. One site that tracks marketshare for the different versions of Windows has stats for 2019:

Windows 10: 52%
Windows 7: 36%
Windows 8.1: 7%

Funny enough, XP has more share than 8. I'm kind of floored that people are still running an OS that's over 15 years old at this point.
 

Muhammed

Senior member
Jul 8, 2009
453
199
116
Yeaa this is some weird situation and I agree they need something extra to reach 2080..
There is nothing extra, mostly just a speed bump (with some enhancements and tuning here and there), and it won't reach the 2080. It will still be slightly slower.
 
Last edited:

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
Yeaa this is some weird situation and I agree they need something extra to reach 2080..
If we take Lisa's explanation that this GPU was also intended for gaming at face value, and I think we can do that

I think she just meant that they were always going to release this gaming version of it. The design changes in this chip are very obviously compute focused - quite rightly too as that’s where Fury/Vega etc actually compete OK.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
Just curious, but why no support for Windows 8.1, but there is for 7?!

Fwiw, I've yet to find a device that I can't get a driver working for across W7, W8/8.1, and W10, though on rare occasions you have to force it.

A frequent example I run into is older motherboards that have an onboard Nvidia 6150 or 7100 IGP. There are no official drivers on Nvidia's site post-W7, and Windows 10's automatic install doesn't find a driver by itself. Installing the final W7 package works with no problems, though.

In fact, I find older Nvidia cards way easier to get working with a modern OS compared to ATI/AMD. Perhaps GCN will eventually prove as long-lived (I hope). I realize this is irrelevant to most, but I clean and refurb donated PCs of all types for a resale shop whose sales support an excellent community food bank.

ATI X800/etc launched in 2004, same as the Nvidia 6xxx. With Nvidia 6xxx and newer, I never have any notable issues getting them to work. With X1xxx and older, I have had a ton of issues getting them to work with Windows 8 and 10. On W7 I can kind of force it, but it's still never all that stable or satisfactory. When you think about it, it's kind of impressive: 2004-2019, 15 years on, and those Nvidia cards still work just fine.

The oldest AMD/ATI GPUs that work easily in W10 in my experience seem to be the 3xxx series. Then again, I haven't run into any used 2900XT/etc to mess with recently. Those cards seem to be kind of hard to come by. Ages ago, in 2006ish, I had a 2600XT 256MB in a PC, but it was gone well over a decade ago.

Anyway :) I bet the Radeon VII will work in W7/W8/W8.1/W10. I'd be really surprised if that weren't the case.
 

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
Yea that's why I said I don't think Navi is coming this year, it will be Vega on 7nm.
It looks like Navi vs the Nvidia 3000 series both on 7nm at the end of 2020 at best.

Navi could start at lower segments than Vega II ($699); for example, a 200mm2 Navi at 7nm could be the foundation for sub-$499 cards within 2019.
 

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
Yea that's why I said I don't think Navi is coming this year, it will be Vega on 7nm.
It looks like Navi vs the Nvidia 3000 series both on 7nm at the end of 2020 at best.

That seems terribly unlikely. We know for a fact that Vega is not a good architecture when it comes to gaming. You think they'll go another 18+ months (until the end of 2020) limping along with Vega?

There also aren't even any rumored Vega products to suggest that's what they'll do. The only thing that hasn't been accounted for is Vega 12, which cropped up early last year, but even then it was thought to be a 12nm part. Some driver code that people found seemed to indicate that it was more likely to be a low-end part and a possible Polaris replacement, but with the release of Polaris 30 (RX 590) that may be an indication that Vega 12 got shelved.

Meanwhile we have references to multiple Navi models showing up in beta versions of MacOS. While that's no guarantee of a release this year, it gives us a glimpse at what AMD's plans are. I think they're content to just ride things out until they can start shipping 7nm Navi products towards mid-year. Turing prices have been high and AMD likely feels confident that consumers won't rush to upgrade. Similarly, they know Vega doesn't offer good value for money, so they won't convince anyone outside of AMD loyalists to buy them.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
That seems terribly unlikely. We know for a fact that Vega is not a good architecture when it comes to gaming. You think they'll go another 18+ months (until the end of 2020) limping along with Vega?
It is not unreasonable to assume most of the GPU devs are working on next-gen consoles right now. If that is the case, then they will make something new based on the console GPUs, which won't happen until after they have finished them, which means at least 2020.
That's not to say that something called Navi won't come out this year, but if it does, it's unlikely to be the revolutionary new card AMD needs.
 
Last edited:

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
FP64 at 2:1 and doubled memory controller?

Doubled memory controller. You mean just like Fury X? Adding a wider data bus doesn't say "compute focused", and FP64 at 1:2 is likely an easy change that has been in the lineup before; the Hawaii GPU had it.
https://arrayfire.com/explaining-fp64-performance-on-gpus/

In reality AMD is designing one architecture to cover everything, and if anything is foremost it's gaming, since that is where AMD makes its GPU money.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
More compute things: INT4 support & the ability to glue a bunch of them together. Realistically the 16GB of memory too.

It isn't on the scale of what NV did with Volta, but they definitely put a little effort into tweaking this for data centre markets.

I don't think there's anything similar known for gaming?
 

DrMrLordX

Lifer
Apr 27, 2000
21,583
10,785
136
What Design changes?

The increased FP64 rate was not present in the MI25, so that's a compute-focused change internal to the Vega family. Also included is improved low-precision performance, which is useful in AI/deep learning:

AMD is supporting new low precision data types as well. These INT8 and INT4 instructions are especially useful for machine learning inferencing, where high precision isn’t necessary, with AMD able to get up to 4x the perf of an FP16/INT16 data type when using the smallest INT4 data type. However it’s not clear from AMD’s presentation how flexible these new data types are – and with what instructions they can be used – which will be important for understanding the full capabilities of the new GPU. All told, AMD is claiming a peak throughput of 7.4 TFLOPS FP64, 14.7 TFLOPS FP32, and 118 TOPS for INT4.
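
Those quoted peaks line up with simple operand-width scaling, if you assume FP64 runs at half the FP32 rate and INT4 at 8x it (a rough check, not an official breakdown from AMD):

```python
# Rough sanity check of the quoted peak-throughput figures; the 1:2 FP64
# and 8:1 INT4 scaling factors are assumptions, not AMD-confirmed internals.
fp32_tflops = 14.7
print("FP64 TFLOPS:", fp32_tflops / 2)   # 7.35  (AMD quotes 7.4)
print("FP16 TFLOPS:", fp32_tflops * 2)   # 29.4
print("INT4 TOPS:  ", fp32_tflops * 8)   # 117.6 (AMD quotes 118)
```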

That's not a "gaming" feature unless you plan on supporting some kind of real-time raytracing.

And there's more: IF over PCIe4.0 (probably part of the CCIX standard):

On the PCIe front, AMD has revealed that the GPU supports the recently finalized PCIe 4 standard, which doubles the amount of memory bandwidth per x16 slot to 31.5GB/sec. However AMD isn’t stopping there. The new GPU also includes a pair of off-chip Infinity Fabric links, allowing for the Radeon Instinct cards to be directly connected to each other via the coherent links. I’m still waiting for a confirmed breakdown on the numbers, but it looks like each link supports 50GB/sec down and 50GB/sec up in bandwidth.

https://www.anandtech.com/show/1356...ct-mi60-mi50-accelerators-powered-by-7nm-vega
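
For reference, the ~31.5GB/sec per x16 slot figure in that quote falls straight out of the PCIe 4.0 signalling rate (16 GT/s per lane with 128b/130b encoding, 16 lanes per direction); quick sketch:

```python
# PCIe 4.0 x16 usable bandwidth, one direction. Signalling numbers are from
# the PCIe spec; this ignores protocol overhead beyond the line encoding.
transfers_per_s = 16e9        # 16 GT/s per lane
encoding = 128 / 130          # 128b/130b line encoding
lanes = 16
gbytes_per_s = transfers_per_s * encoding * lanes / 8 / 1e9
print(round(gbytes_per_s, 1))  # ~31.5
```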
 

Guru

Senior member
May 5, 2017
830
361
106
I am really curious to see how these reviews pan out. Where is this supposed 25-30% performance increase coming from?

Maybe it's just sustained boost clocks, maybe AMD got some of their secret sauce working... We'll find out soon enough.
A big part of it will come from fully unleashing the HBM2 memory; it comes fully clocked and at full power. We know that AMD's Vega design is really dependent on memory speed, so having a full-blown 16GB of HBM2 at full capacity and speed is probably where they got at least 10% of their performance boost, maybe even 15%. Couple that with roughly a 200MHz increase in core clock speed and probably a few optimizations in the architecture, and you've got your 30% performance increase.
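
Back-of-the-envelope version of that argument, using the publicly quoted specs (approximate boost clocks; the 70/30 split between clock-limited and bandwidth-limited time is purely an assumption for illustration):

```python
# Rough composition of the claimed 25-30% uplift from clocks + bandwidth.
vega64_clock, vii_clock = 1546, 1750   # approx. boost clocks, MHz
vega64_bw, vii_bw = 484, 1024          # memory bandwidth, GB/s
clock_speedup = vii_clock / vega64_clock   # ~1.13x
bw_speedup = vii_bw / vega64_bw            # ~2.12x
# Amdahl-style blend: assume 70% of frame time scales with clock, 30% with bandwidth.
new_time = 0.7 / clock_speedup + 0.3 / bw_speedup
print(f"blended uplift: {1 / new_time - 1:.0%}")   # ~32%, in the right ballpark
```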
 
Mar 11, 2004
23,031
5,495
146
That seems terribly unlikely. We know for a fact that Vega is not a good architecture when it comes to gaming. You think they'll go another 18+ months (until the end of 2020) limping along with Vega?

There also aren't even any rumored Vega products to suggest that's what they'll do. The only thing that hasn't been accounted for is Vega 12, which cropped up early last year, but even then it was thought to be a 12nm part. Some driver code that people found seemed to indicate that it was more likely to be a low-end part and a possible Polaris replacement, but with the release of Polaris 30 (RX 590) that may be an indication that Vega 12 got shelved.

Meanwhile we have references to multiple Navi models showing up in beta versions of MacOS. While that's no guarantee of a release this year, it gives us a glimpse at what AMD's plans are. I think they're content to just ride things out until they can start shipping 7nm Navi products towards mid-year. Turing prices have been high and AMD likely feels confident that consumers won't rush to upgrade. Similarly, they know Vega doesn't offer good value for money, so they won't convince anyone outside of AMD loyalists to buy them.

Yeah, no idea why he thinks Navi won't be until the end of 2020 now (or even that Nvidia won't have a consumer 7nm GPU until then either). AMD's roadmap says that the next chip after Navi is due by the end of 2020.

https://images.anandtech.com/doci/12233/gpu_to_2020.jpg
With a planned sampling in Q4 2018, we might expect volume production to be nearer Q2 2019. This means that the next generation of consumer-focused graphics, perhaps using the newer Navi architecture, will be in the mid-2019 timeframe. According to AMD’s roadmaps, it is committed to demonstrating Vega on 7nm, Navi on 7nm, and a ‘next-gen’ design on 7+ before the end of 2020. Obviously there was no clarification on whether that final design is consumer or enterprise focused for 2020. In our recent interview with AMD’s CEO, when asked if the GPU market will at some point have to bifurcate between gaming focused and compute focused designs, Dr. Lisa Su stated that ‘it must be the case’.

Wasn't Vega 12 mobile (going into MacBooks as Vega Pro 20 and 16)?
https://www.anandtech.com/show/13532/amds-vega-mobile-lives-vega-pro-20-16-in-november

Still trying to find the one investor call transcript that I remember reading where they said Vega 20 was not for gamers, and that Navi would be their first 7nm GPU for the gaming market. So far I just saw that they said the Instinct cards were releasing in Q4/before the end of last year and that those were for the enterprise market.

We'll find out soon enough I guess, but I'm expecting that Radeon VII is going to just be a limited-run thing, and despite what they're now saying, I don't think Vega 20 was meant to be a gaming card. If this were a long-term product, I'd have expected it as a Frontier Edition card, and for them to offer something for the CAD/render/etc crowd and professionals who want the compute capability but don't have the budget for Instinct cards, both groups that would have more money (and thus higher margins). Those are people that could benefit from it more as well (due to the memory, mostly).

Something weird is going on though. Either Instinct sales are not doing nearly as well as they expected, or maybe there was a flaw in some chips that made them unusable for that market (like it disrupted the end-to-end ECC, or the deep learning stuff, if it's totally separate, had more flaws than expected) but left them fine for gamers. The best scenario is probably that they just ponied up for some extra wafers as a way to commemorate the one guy retiring, and that costs are low enough that they could do a run of them for gamers and not lose money.

What Design changes?

As far as I know, AMD has not actually given us information beyond saying that most (actually I think they said all, but seemingly we can't take their previous comments at face value any more) of the extra transistors in Vega 20 versus Vega 64 were for deep learning compute tasks. They seem to be taking a page out of Nvidia's playbook and not giving us a lot of details about what they've actually done so far. We'll see if that changes at launch (I kinda doubt it, since most of that isn't really for consumers/gamers).

As far as the block diagram goes, it is exactly the same between Vega 64 and Vega 20, so they either substantially reworked individual block pieces in ways that wouldn't show up in the block diagram (they reworked the NCUs, maybe?), or they added something that isn't part of the traditional GPU block and haven't shown us what that is (honestly, at this point I expect it's just AMD's version of Tensor Cores).

Vega 64/10: https://images.anandtech.com/doci/11717/vega10_block_diagram.png
Vega 20: https://images.anandtech.com/doci/13547/1541527170830461274558.jpg

It is not unreasonable to assume most of the GPU devs are working on next-gen consoles right now. If that is the case, then they will make something new based on the console GPUs, which won't happen until after they have finished them, which means at least 2020.
That's not to say that something called Navi won't come out this year, but if it does, it's unlikely to be the revolutionary new card AMD needs.

Except that's not how things worked previously; they had dGPU chips/cards out before the consoles that used their designs. I'm fairly sure the GCN version the One/PS4 were based on came out prior to them, Polaris predated the PS4 Pro, and Vega predated the One X. There were some differences (the One X wasn't really Vega, etc.), but there hasn't been anything stopping AMD from releasing dGPUs based on the engineering/design work that went into consoles, prior to those consoles launching. I think some of the

No idea what you're saying in that last part. I really hope you weren't expecting some major upheaval in GPU design from Navi. Undoubtedly AMD needs improvements to the architecture (or to bolster their software side to actually get the potential out of the stuff they make), but they don't even need that drastic a change to see some good improvements. Just a doubling and shrink of Polaris combined with GDDR6 would be a big step up and quite feasible. That should be able to top Vega 64 and still be quite low power, compact, and cheap. And it sounds like they've reworked things (reworked the block so that things aren't stuck at the fixed ratios GCN had) so that they can balance the chip more. I think there was a patent application or something that would seem to indicate that geometry throughput should be up like 50% (from 4 per clock to 6 per clock). Granted, they'll probably use that to achieve the same throughput with fewer CUs. Unless the compute stuff

FP64 at 2:1 and doubled memory controller?

I don't think those things are big enough changes (heck, isn't the FP64 stuff not even actually a hardware limitation, and it's just AMD and Nvidia gimping GPUs for market segmentation? Meaning it's not that they added a bunch to bolster FP64 performance, rather they're just not gimping it as much as they tend to do on gaming cards these days) to account for the increased size. But I don't think AMD has said much beyond the extra being targeted at deep learning stuff. Kinda wonder if it's their own version of Tensor Cores (or something achieving basically the same). I got the feeling that AMD was integrating a lot of the lower-precision math stuff into the traditional graphics pipeline in the past; I'd be curious whether they're still doing that (and so kept that the same as Vega 64 so that they wouldn't screw anything up by integrating new stuff in with it), or whether they just kept the whole Vega 10 block the same so they could gauge 7nm performance (how much of a shrink it was enabling, clock speeds, efficiency, etc.).
 
  • Like
Reactions: Mopetar
Mar 11, 2004
23,031
5,495
146
A big part of it will come from fully unleashing the HBM2 memory; it comes fully clocked and at full power. We know that AMD's Vega design is really dependent on memory speed, so having a full-blown 16GB of HBM2 at full capacity and speed is probably where they got at least 10% of their performance boost, maybe even 15%. Couple that with roughly a 200MHz increase in core clock speed and probably a few optimizations in the architecture, and you've got your 30% performance increase.

Yeah, it likely is a full 25% from the memory bandwidth alone (I think there was a thread on here about it, where someone tested and found that most GPU architectures - not just Vega - get a good 25% boost just from doubling memory bandwidth). And then maybe an extra 5-10% from the clock speeds, with how much overall benefit it gets depending on where the bottleneck is (hence why some games see big improvements but it averages out to around 25-30%).

That's also not the full clock of HBM2 either (it goes up to 307GB/s per stack, with 4 stacks offering ~1.2TB/s, or about 20% higher than the HBM2 speed that AMD has Vega 20 rated for - possibly so they can source the normal spec, as I think only one company is making the higher-spec stuff so far; although maybe there's some other issue preventing it from hitting those speeds, like how Vega 10 didn't).
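
Quick sketch of where those stack numbers come from (assuming 2.4Gb/s-per-pin HBM2 on a 1024-bit interface per stack; Vega 20's rated 1TB/s corresponds to 2.0Gb/s per pin):

```python
# Per-stack and total HBM2 bandwidth at the faster pin speed vs. Vega 20's rating.
pins_per_stack = 1024
fast_pin_rate = 2.4                               # Gb/s per pin (higher-spec HBM2, assumed)
per_stack = pins_per_stack * fast_pin_rate / 8    # 307.2 GB/s
four_stacks = per_stack * 4                       # 1228.8 GB/s, ~1.2 TB/s
vega20_rated = 1024                               # GB/s (the quoted 1 TB/s)
print(per_stack, four_stacks, round(four_stacks / vega20_rated - 1, 2))  # 0.2 -> ~20% higher
```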

The increased FP64 rate was not present in the MI25, so that's a compute-focused change internal to the Vega family. Also included is improved low-precision performance, which is useful in AI/deep learning:



That's not a "gaming" feature unless you plan on supporting some kind of real-time raytracing.

And there's more: IF over PCIe4.0 (probably part of the CCIX standard):



https://www.anandtech.com/show/1356...ct-mi60-mi50-accelerators-powered-by-7nm-vega

What extra would be needed for that level of FP64 support? I didn't think it really required a fundamental change in the hardware; it was more that AMD was only enabling it for enterprise customers while gimping it elsewhere, because it requires validation (end-to-end ECC) that only the enterprise customers could justify paying for.

Can it actually be used for that? Wouldn't that just be in line with Tensor Cores, which as far as I know are not used for ray tracing (otherwise I'd think Nvidia would be using them for that with Turing)? I don't know enough about how this new raytracing API stuff is being processed (so I'm not trying to be glib).

The thing is, I thought even Vega 10 had IF, it just wasn't enabled, but it was considered an integral part of their plan moving forward (so they either only enabled it for some key customers, or maybe just for internal testing prior to planning to enable it in the future sometime). I seem to recall Raja maybe talking about it at some non-consumer show before he left the company.

I also swear I saw some AMD thing saying they could apply IF between the CPU and GPU over PCIe. I was also baffled when they said "without bridges" (for GPU-GPU on Vega 20) but then showed what appeared to be bridges connecting GPUs? But maybe I'm thinking of SLI/Crossfire-style bridges when "without bridge or switch" means something else (like the old SLI bridge thing some non-Nvidia chipset boards had for a time)?
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
I don't think those things are big enough changes (heck, isn't the FP64 stuff not even actually a hardware limitation, and it's just AMD and Nvidia gimping GPUs for market segmentation? Meaning it's not that they added a bunch to bolster FP64 performance, rather they're just not gimping it as much as they tend to do on gaming cards these days) to account for the increased size. But I don't think AMD has said much beyond the extra being targeted at deep learning stuff. Kinda wonder if it's their own version of Tensor Cores (or something achieving basically the same). I got the feeling that AMD was integrating a lot of the lower-precision math stuff into the traditional graphics pipeline in the past; I'd be curious whether they're still doing that (and so kept that the same as Vega 64 so that they wouldn't screw anything up by integrating new stuff in with it), or whether they just kept the whole Vega 10 block the same so they could gauge 7nm performance (how much of a shrink it was enabling, clock speeds, efficiency, etc.).

I don't see any major rework, nor anything like Tensor cores.

What we have is improved flexibility in register usage. You can pack 4 INT8 or 2 INT16 values into a 32-bit register for math ops; similarly, you can treat two FP32 registers as one FP64 register for FP math. This might be at most a couple of control lines and some extra microcode.
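
Toy illustration of the packing idea (plain Python, not AMD's ISA; the helper names are made up): four signed 8-bit values share one 32-bit word, and a single packed operation does four multiply-accumulates.

```python
# Illustrative only: SWAR-style packing of four int8 values into a 32-bit
# word, the way packed math reuses existing 32-bit registers.
def pack_int8x4(vals):
    word = 0
    for i, v in enumerate(vals):
        word |= (v & 0xFF) << (8 * i)
    return word

def unpack_int8x4(word):
    out = []
    for i in range(4):
        byte = (word >> (8 * i)) & 0xFF
        out.append(byte - 256 if byte & 0x80 else byte)  # sign-extend
    return out

def dot_int8x4(a, b, acc=0):
    # One packed op's worth of work: four int8 multiplies plus accumulate.
    return acc + sum(x * y for x, y in zip(unpack_int8x4(a), unpack_int8x4(b)))

a = pack_int8x4([1, -2, 3, -4])
b = pack_int8x4([5, 6, -7, 8])
print(dot_int8x4(a, b))  # 1*5 + (-2)*6 + 3*(-7) + (-4)*8 = -60
```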

This stuff is trivial, not a major update.

It will be interesting to see how long it takes AMD to incorporate Tensor cores, which are now part of the Nvidia lineup and part of most new high-end smartphone SoCs.

The real hard part is what AMD can do to improve the efficiency of its GPUs for gaming. That requires a major rework.
 

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
It will be interesting to see how long it takes AMD to incorporate Tensor cores

Tensor cores are fixed-function hardware only usable for specific calculations. Personally I wouldn't want AMD to include any fixed-function hardware inside their GPUs unless it is designed for gaming.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Tensor cores are fixed-function hardware only usable for specific calculations. Personally I wouldn't want AMD to include any fixed-function hardware inside their GPUs unless it is designed for gaming.

Fixed-function HW, with a wide and increasing array of uses. IMO, it's only a question of when, not if, AMD includes them, when even smartphones are including them and one of the hottest things in the data center is machine learning.
 
  • Like
Reactions: godihatework

AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
Fixed-function HW, with a wide and increasing array of uses.

Not that much for gaming. I'd prefer they invest more transistors in additional cores/TMUs and ROPs rather than Tensor Cores.

IMO, it's only a question of when, not if, AMD includes them, when even smartphones are including them and one of the hottest things in the data center is machine learning.

For server products I don't mind if they include fixed-function hardware; for gaming it's a waste of transistors.