[Rumor, Tweaktown] AMD to launch next-gen Navi graphics cards at E3


Stuka87

Diamond Member
Dec 10, 2010
AMD's graphics division really, really needs to stop overvolting all of their cards by default. All the expertise they've built up in the CPU division about maximizing the efficiency of their products has been completely ignored on the GPU side.
They do it so that every last chip can be used. If they wanted to bin their chips more aggressively, they could lower the voltages, but then they'd end up with chips that might not be usable.
 

amenx

Platinum Member
Dec 17, 2004
AMD's graphics division really, really needs to stop overvolting all of their cards by default. All the expertise they've built up in the CPU division about maximizing the efficiency of their products has been completely ignored on the GPU side.
Pretty sure they are very familiar with the voltage tolerances of their GPUs but have to look at the bigger picture of mass production. ie, 80% of those GPUs may work fine with lower voltages while 20% may not. Do they want to risk a 20% RMA rate? I think not.
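The yield argument above can be made concrete with a toy calculation. Every number here (unit count, failure rate, per-RMA cost) is an invented assumption for illustration, not AMD's actual data:

```python
# Hypothetical numbers illustrating why a vendor might ship a conservative
# default voltage rather than undervolt aggressively at the factory.

def rma_cost(units: int, fail_rate: float, cost_per_rma: float) -> float:
    """Expected warranty cost if `fail_rate` of shipped units turn out unstable."""
    return units * fail_rate * cost_per_rma

UNITS = 100_000
COST_PER_RMA = 150.0  # assumed handling + replacement cost per card

# Conservative voltage: essentially every die is stable.
safe = rma_cost(UNITS, 0.001, COST_PER_RMA)

# Aggressive undervolt: suppose 20% of dies can't hold the lower voltage.
aggressive = rma_cost(UNITS, 0.20, COST_PER_RMA)

print(f"conservative voltage: ${safe:,.0f} expected RMA cost")   # $15,000
print(f"aggressive undervolt: ${aggressive:,.0f} expected RMA cost")  # $3,000,000
```

Even if the 20% figure is far too pessimistic, the asymmetry is the point: a shipped-but-unstable card costs vastly more than the few watts saved per good die.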
 
Mar 11, 2004
AMD's graphics division really, really needs to stop overvolting all of their cards by default. All the expertise they've built up in the CPU division about maximizing the efficiency of their products has been completely ignored on the GPU side.
They're going to have to if they're going to make chiplet APUs. Although I'm starting to wonder if we're not going to end up in the same place as we have been with GCN, where they make GPUs for a certain market (in this instance it'd be for APU chiplets), and then we get some castoff developed version that has to be pushed outside of its optimal perf/efficiency range just to be able to be sold as a dGPU.

If that's the case, AMD should just drop dGPUs and make an entire gaming-focused box, where they could do something much more interesting. Think console-like design (leveraging their advantage of being in the consoles), but less constrained than the consoles by pricing and power/thermals, and more flexible.

Give us a single 8c/16t CPU chiplet, a GPU chiplet, and between them an I/O chiplet on an interposer with 16GB of HBM2 (two 8-hi stacks at ~300GB/s per stack), plus 128GB of NAND offering ridiculous bandwidth. The NAND would hold the OS and could swap in a full game install for maximum performance (when you're done playing, the game could be swapped back to slower storage like a SATA or even USB3 SSD or HDD). Sell that for $1000. Then have tiers: say a $1500 one with 12 cores from two CPU chiplets, a larger GPU or a second GPU chiplet if they get that worked out, 24GB from three stacks of HBM2, and 256GB of NAND; then a $2000 one with 16 cores, more GPU, 32GB from four stacks of HBM2, and 512GB of NAND.

Put it in a small box with self-contained liquid cooling, so you could make some interesting designs while keeping noise low and thermals in check. Work with Microsoft to tailor Windows for it (especially gaming performance). Partner with Valve to develop a gaming-focused Linux distro and sell it as a Steam Box (maybe with a special version for VR). They could even sell higher-end versions as workstations, and it would make a great cloud-gaming box when that becomes more the norm. With the I/O on an interposer being the main difference, it could scale almost directly with the market it's intended for, with the much higher-end workstation versions on larger boards carrying more CPU and GPU chiplets (plus the related IF links, a larger interposer, and more NAND). I think it would make their hardware shine by limiting the bottlenecks as much as possible.

The cost for consumers should be about the same as buying a complete system, but with higher performance in smaller packaging. I'm not sure it'd cost more for AMD, and they'd gain more control over the whole platform, which means they could make a system that's more appealing to consumers and then let OEMs just glitz it up with whatever packaging and/or peripherals they like (Dell pairs it with one of their monitors, Razer with their peripherals, others with VR headsets, etc.).

I think that could potentially stave off the move to cloud gaming while the rest of that infrastructure develops. Plus, console development would transfer over to PC (something Nvidia and Intel couldn't take advantage of), so AMD should gain a big advantage in optimization. I also think it'd push overall gaming performance: the memory bandwidth and NAND would allow for ridiculously high-quality textures and other pre-rendered assets. Paired with pre-calculated path tracing, that could offer ray-tracing quality without the real-time performance hit: you pre-process what the lighting would look like using high-quality ray tracing, save the results as image textures, and swap them in based on the lighting situation. You could even store a base texture layer plus overlay masks carrying the reflections and other highlights/shadows, so you store less data overall, and perhaps warp the mask layer for perspective, a bit like bump mapping giving the illusion of depth.
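The pre-baked lighting idea in the paragraph above can be sketched in a few lines. This is a toy illustration with invented values (pure NumPy, CPU-side), not how any shipping engine implements it:

```python
# Toy version of "pre-baked lighting as overlay masks": store one base
# albedo texture, plus a small precomputed mask per lighting condition,
# and composite at runtime instead of ray tracing in real time.
import numpy as np

def composite(albedo: np.ndarray, light_mask: np.ndarray) -> np.ndarray:
    """Multiply a base texture by a precomputed lighting mask
    (mask values > 1 brighten for highlights, < 1 darken for shadows)."""
    return np.clip(albedo * light_mask, 0.0, 1.0)

# 4x4 mid-grey base texture, one "evening" mask darkening the left half
albedo = np.full((4, 4, 3), 0.5)
mask = np.ones((4, 4, 1))
mask[:, :2] = 0.25          # precomputed shadowed region
lit = composite(albedo, mask)
print(lit[0, 0, 0], lit[0, 3, 0])   # 0.125 (shadowed) vs 0.5 (unlit)
```

The storage win the post is describing: the albedo is stored once, and each lighting condition only adds a single-channel mask rather than a full re-baked texture.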

So the consoles would move to be more PC-like but with some optimizations the PC space hasn't realized, while PC gaming hardware becomes more like high-end versions of the consoles: you get the console advantages, paired with the higher-end parts the PC space has traditionally offered. The current situation actually makes things more wonky than that.

The console companies shouldn't be too mad, as they'd have the most cost-effective version of the hardware and an easy path to improve on it should they decide to offer newer systems sooner. They'd still have their own lock-ins, and they could potentially stop funding major hardware development (cutting their R&D costs) and reach more users by porting outside their install base. Sony could turn PlayStation into a service that runs on more than just the PlayStation install base; Microsoft is already moving that direction, and Sony arguably is too, with their development and porting of PS1-era games to Android and with PlayStation Vue. They wouldn't really lose out, either: they could still publish games, but now sell more copies than they would with the games locked to just their platform. And they shouldn't lose money on the console hardware; they might not make much profit from it, but they'd be treating it like any other electronic device (VCR, DVD player, HTPC, etc.) that both companies have sold repeatedly over the years.

I don’t think they’ve done that to the same degree as with Polaris or Vega. The 5700 seems to have good settings; the main problem is a poor cooling solution on the reference card, due to using a cheap blower again.

We’ve seen people able to push these cards quite a bit harder, which wasn’t possible in the past because the cards were already at their limits.

So just wait for the third party cards if you want better cooling and noise levels and aren’t willing to jump through the hoops of putting on a better cooler yourself.
People were able to push Polaris quite a bit higher using similar methods to how they're pushing the 5700s higher, so I'm not sure I agree. I don't recall Vega having much headroom, but I didn't pay close attention; I heard the HBM memory made things tougher. (I seem to recall it could be pushed a lot with a special power profile/BIOS, but at a much bigger cost in power and heat than Polaris or Navi, so it's not a like-for-like comparison.)

That was true of their previous cards as well. Nothing has changed except they apparently put a better blower on the reference card this time around.

They do it so that every last chip can be used. If they wanted to bin their chips more aggressively, they could lower the voltages, but then they'd end up with chips that might not be usable.
The chips would be usable, they'd just need a different spec to be so. The smart thing would be to bin chips and sell them as different tiers, and we (seemingly) see them doing exactly that with the 50th Anniversary Edition. I'd guess they could do more, as probably not even 20% of production is going toward that (and people are already finding cards with higher capabilities without much extra power draw). They should've made an XT PE with the VII's cooler, clocked for 2GHz (they could call it the 2GHz Edition as a callback to the 7970 series), which maybe they'll do later, or that's their plan for a revision/refresh. And then sell the worse-quality stuff as a 5700T or something.

Hopefully they're binning the most efficient ones to put in laptops where they can charge more for them and look better.

Pretty sure they are very familiar with the voltage tolerances of their GPUs but have to look at the bigger picture of mass production. ie, 80% of those GPUs may work fine with lower voltages while 20% may not. Do they want to risk a 20% RMA rate? I think not.
If it's anywhere close to 80%, then they're shooting themselves in the foot, as that 20% is making them look worse and giving them a reputation that hurts their business. People have been touting that since the start of GCN, and it has destroyed AMD's rep in dGPUs. It's why people say, "I want new, better video cards from AMD to force Nvidia to lower prices."

Heck, I find it outright frustrating that I basically need to tweak my RX 480 myself just to make it non-irritating, since otherwise it pumps out a lot of extra heat and is noisy. And I have to do this perpetually, as new drivers make me go back and reset that stuff (I can save a profile, which helps with the voltage part, but I still have to go back and reset things like FRTC).
 

arandomguy

Senior member
Sep 3, 2013
Is anyone considering that AMD's validation and stability testing is not the same as what users are doing?

Look at how lax user stability testing for GPUs is compared to CPUs. Users basically consider a GPU stable if they can't visibly notice any graphical errors in an arbitrary game (and games fluctuate wildly in the workload they put on a GPU) over a single play session.

For CPU clock testing, by contrast, users will run dedicated CPU stress-test apps (often several) for hours, and those apps can flag a fault on even a single error. GPU testing isn't held even to the standard of simply loading Windows and playing games without crashing, whereas CPU errors have far less tolerance in how they're handled.
 

lifeblood

Senior member
Oct 17, 2001
I'm curious as to whether Navi 14 will be able to support 4GB or 8GB of VRAM (256-bit bus), or whether they'll limit it to 4GB with a 128-bit bus. Navi 10 is targeted at 1440p, where 8GB of VRAM is, or soon will be, the minimum. As Navi 14 will be targeted at 1080p and lower, they may "cheap out" and only do a 128-bit bus. That would be unfortunate, as the 470/480 were optimized for 1080p and supported 4GB or 8GB of VRAM. Stepping down to a 128-bit bus would be a definite backslide.

Why would they go 128-bit vs 256-bit? How much would it really save them? They've already designed and are using a 256-bit bus so no savings there. How much could the physical die space and traces and all the other things actually cost? The more I think about it the more I doubt they'll go for a 128-bit bus.
 

Glo.

Diamond Member
Apr 25, 2015
I'm curious as to whether Navi 14 will be able to support 4GB or 8GB of VRAM (256-bit bus), or whether they'll limit it to 4GB with a 128-bit bus. Navi 10 is targeted at 1440p, where 8GB of VRAM is, or soon will be, the minimum. As Navi 14 will be targeted at 1080p and lower, they may "cheap out" and only do a 128-bit bus. That would be unfortunate, as the 470/480 were optimized for 1080p and supported 4GB or 8GB of VRAM. Stepping down to a 128-bit bus would be a definite backslide.

Why would they go 128-bit vs 256-bit? How much would it really save them? They've already designed and are using a 256-bit bus so no savings there. How much could the physical die space and traces and all the other things actually cost? The more I think about it the more I doubt they'll go for a 128-bit bus.
If Navi 14 has GDDR6, it can have 8 GB on a 128-bit bus. GDDR6 comes in two variants: 1 GB chips and 2 GB chips. So you may see 8 GB on a 128-bit bus, and 16 GB on a 256-bit bus, on GPUs that use GDDR6.

Effectively, small Navi will have the same amount of VRAM and the same bandwidth as RX 470.

And you forgot the biggest cost of a 256-bit memory bus: power draw. The eight GDDR5 memory chips on the RX 480 were drawing 37-40W alone. Small GPUs designed for mobile solutions can't afford that kind of power envelope.
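The capacity and bandwidth arithmetic behind this is simple: each GDDR5/GDDR6 chip presents a 32-bit interface, so the bus width fixes the chip count and the chip density fixes total capacity. A quick sketch (the per-pin data rates are assumed typical values, not confirmed Navi 14 specs):

```python
# Back-of-the-envelope VRAM math for the bus-width discussion above.
# GDDR5/GDDR6 devices each have a 32-bit interface, so:
#   chips     = bus_bits / 32
#   capacity  = chips * density
#   bandwidth = bus_bits * per-pin rate / 8 (bits -> bytes)

def vram_config(bus_bits: int, chip_gb: int, gbps_per_pin: float):
    chips = bus_bits // 32                        # one 32-bit channel per chip
    capacity_gb = chips * chip_gb
    bandwidth_gbs = bus_bits * gbps_per_pin / 8   # GB/s
    return chips, capacity_gb, bandwidth_gbs

# 128-bit bus with 2 GB GDDR6 at an assumed 14 Gbps/pin:
print(vram_config(128, 2, 14.0))   # 4 chips, 8 GB, 224 GB/s

# RX 470 for comparison: 256-bit GDDR5, 1 GB chips, 6.6 Gbps/pin:
print(vram_config(256, 1, 6.6))    # 8 chips, 8 GB, ~211 GB/s
```

Which is the point made above: 128-bit GDDR6 with 2 GB chips lands at roughly the same 8 GB and ~210-220 GB/s as the RX 470's 256-bit GDDR5, with half the memory chips to power.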
 

lifeblood

Senior member
Oct 17, 2001
If Navi 14 has GDDR6, it can have 8 GB on a 128-bit bus. GDDR6 comes in two variants: 1 GB chips and 2 GB chips. So you may see 8 GB on a 128-bit bus, and 16 GB on a 256-bit bus, on GPUs that use GDDR6.

Effectively, small Navi will have the same amount of VRAM and the same bandwidth as RX 470.

And you forgot the biggest cost of a 256-bit memory bus: power draw. The eight GDDR5 memory chips on the RX 480 were drawing 37-40W alone. Small GPUs designed for mobile solutions can't afford that kind of power envelope.
Oops, you're right. I forgot the whole GDDR6 vs GDDR5 thing. That does change the equation dramatically.
 

Veradun

Senior member
Jul 29, 2016
Building on that VRAM thread:

2G chips will be too expensive for a mid to low card, and probably also useless since 4G v 8G is a major selling point for the higher ASP/margin SKUs

btw a lineup with 4G 5600, 8G 5700 and 16G/12G 5800 looks wonderfully spaced into FHD, QHD and UHD segments
 

psolord

Golden Member
Sep 16, 2009
Why does it consume 17W during media playback, while the vanilla 5700 XT consumes 10W? OK, it won't break the electricity bill; I just ask as general knowledge.
 

beginner99

Diamond Member
Jun 2, 2009
Why does it consume 17W during media playback, while the vanilla 5700 XT consumes 10W? OK, it won't break the electricity bill; I just ask as general knowledge.
Board power use or higher voltage.

All in all it looks like a pretty bad card: barely any gain for a huge power increase.
 

Glo.

Diamond Member
Apr 25, 2015
I'm not even bothering with those reviews.

GIVE ME RX 5600 XT! Come on AMD! Spill the beans on this GPU!
 

mohit9206

Golden Member
Jul 2, 2013
Building on that VRAM thread:

2G chips will be too expensive for a mid to low card, and probably also useless since 4G v 8G is a major selling point for the higher ASP/margin SKUs

btw a lineup with 4G 5600, 8G 5700 and 16G/12G 5800 looks wonderfully spaced into FHD, QHD and UHD segments
Full HD needs more than 4GB these days; 6GB should be the standard for 1080p going forward for sub-$200 cards.
 

joesiv

Junior Member
Mar 21, 2019
Why does it consume 17W during media playback, while the vanilla 5700 XT consumes 10W? OK, it won't break the electricity bill; I just ask as general knowledge.
possibly higher clocks, but most of it is likely the RGB and more fans.
 

nurturedhate

Golden Member
Aug 27, 2011
Actually scrap that, 8GB should be the standard for sub-$200 cards going forward.
Yep, I'd feel bad recommending a 4gb card for gaming in mid 2019 going forward. People buying under $200 usually sit on that card for some time and a 4gb card is going to die on ps5/xbox2 ports in late 2020/early 2021. This isn't an issue of owning a 4gb card today, it's about buying a new card today to use over the next 3+ years.
 

fleshconsumed

Diamond Member
Feb 21, 2002
Asus, as usual, botching their AMD cards. The Asus aftermarket 5700 XT runs hotter and louder than the Sapphire Pulse, and it'll undoubtedly cost more as well. I'd be curious to see Sapphire 5700 XT Pulse pricing around the holidays; assuming there are decent sales, I may pick one up after all.
 
Apr 27, 2000
It's the logical equivalent of: "Actually scrap that, the Nissan GT-R should be the standard sub-$20,000 car going forward."
With the C8 being out there at $55,000 you might not be too far off from the truth.
 

