[Rumor, Tweaktown] AMD to launch next-gen Navi graphics cards at E3


mohit9206

Golden Member
Jul 2, 2013
It's the logical equivalent of: "Actually, scrap that, the Nissan GT-R should be the standard sub-$20,000 car going forward."
You can already buy sub-$200 RX 570 and RX 580 8 GB models, so why would you want to go backwards? I don't think you can buy a GT-R for under $20k.
 

Glo.

Diamond Member
Apr 25, 2015
You can already buy sub-$200 RX 570 and RX 580 8 GB models, so why would you want to go backwards? I don't think you can buy a GT-R for under $20k.
Manufacturing costs. GDDR6 memory and HBM2 are significantly more expensive than dirt-cheap GDDR5.
 

soresu

Platinum Member
Dec 19, 2014
Sooner or later stacked memory will be the only way to economically increase the density of any memory type, as shrinking process tech becomes more and more expensive.

Already with the enhanced HBM2 specs you can far surpass the GDDR6 memory density of a reasonably sized board - 96 GB for 4 stacks, and there are 6-stack chips out there.

Obviously for gaming purposes you will only ever need so much memory, even for 4K gaming, but for offline ray/path-traced rendering there will always be a demand for more onboard memory.

With the newly announced generations of flash at the Flash Memory Summit (FMS) last month, I wonder if AMD has another SSG card planned with a 4 TB onboard buffer....
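A quick back-of-the-envelope sketch of that capacity math, assuming the 24 GB-per-stack HBM2E parts implied by the "96 GB for 4 stacks" figure (Python, illustrative only):

```python
# Back-of-the-envelope HBM capacity math.
# Assumes 24 GB per stack - the HBM2E density implied by "96 GB for 4 stacks".
GB_PER_STACK = 24

for stacks in (1, 2, 4, 6):
    print(f"{stacks} stack(s): {stacks * GB_PER_STACK} GB on-package")
# 4 stacks -> 96 GB, 6 stacks -> 144 GB
```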
 
Feb 4, 2009
Sooner or later stacked memory will be the only way to economically increase the density of any memory type, as shrinking process tech becomes more and more expensive.

Already with the enhanced HBM2 specs you can far surpass the GDDR6 memory density of a reasonably sized board - 96 GB for 4 stacks, and there are 6-stack chips out there.

Obviously for gaming purposes you will only ever need so much memory, even for 4K gaming, but for offline ray/path-traced rendering there will always be a demand for more onboard memory.

With the newly announced generations of flash at the Flash Memory Summit (FMS) last month, I wonder if AMD has another SSG card planned with a 4 TB onboard buffer....

In a scenario like the above you'd think graphics card makers could do something different, like add some sort of AI assist for game developers to use. Maybe the graphics card has a large pool of hyper-fast memory it could share.
Interesting times we're in. I don't think there will be much excitement for anything past 4K monitors, VR seems to be stalled due to the glasses and such, and ray tracing will probably be the next big thing, but we could easily end up with entry-level cards being good for 90% of what anyone could ever need. It doesn't appear there will be any further versions of DirectX, either.
 

soresu

Platinum Member
Dec 19, 2014
In a scenario like the above you'd think graphics card makers could do something different, like add some sort of AI assist for game developers to use. Maybe the graphics card has a large pool of hyper-fast memory it could share.
Interesting times we're in. I don't think there will be much excitement for anything past 4K monitors, VR seems to be stalled due to the glasses and such, and ray tracing will probably be the next big thing, but we could easily end up with entry-level cards being good for 90% of what anyone could ever need. It doesn't appear there will be any further versions of DirectX, either.
Even DX12 has seen significant revisions since its release - an entirely new major version of the Shader Model (6.0) shipped after that initial release, and further revisions since, not to mention DXR.

DXR itself is effectively a rollback to fixed-function hardware for RT specifically; I would expect future revisions to be more programmable in nature as vendor implementations gain said programmability, or at least more of it if they are already partially programmable.

What exactly do you mean by AI assist?

While AI/ML can potentially improve many areas as far as power consumption goes (like Zen's perceptron branch predictor), it isn't necessarily applicable to every problem, especially due to its often inherently biased nature.
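For anyone who hasn't run into the perceptron predictor idea before, here is a minimal toy sketch in the spirit of Jiménez & Lin's design (illustrative only, not AMD's actual Zen implementation): each branch keeps a small weight vector over recent global branch outcomes, predicts "taken" when the weighted sum is non-negative, and trains on mispredictions or weak predictions.

```python
# Toy perceptron branch predictor - illustrative only, not Zen's real design.
# History entries are +1 (taken) / -1 (not taken); one weight vector per branch address.

HISTORY_LEN = 8
THRESHOLD = 29            # training threshold; the original paper uses roughly 1.93*HISTORY_LEN + 14
weights = {}              # branch_pc -> [bias, w1, ..., wH]
history = [1] * HISTORY_LEN

def predict(pc):
    w = weights.setdefault(pc, [0] * (HISTORY_LEN + 1))
    y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
    return y, y >= 0      # predict "taken" when the weighted sum is non-negative

def update(pc, taken):
    y, predicted_taken = predict(pc)
    t = 1 if taken else -1
    # Train on a misprediction, or when the prediction was only weakly confident.
    if predicted_taken != taken or abs(y) <= THRESHOLD:
        w = weights[pc]
        w[0] += t
        for i, hi in enumerate(history, start=1):
            w[i] += t * hi
    # Shift the new outcome into the global history.
    history.pop(0)
    history.append(t)
```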
 
Feb 4, 2009
Even DX12 has seen significant revisions since its release - an entirely new major version of the Shader Model (6.0) shipped after that initial release, and further revisions since, not to mention DXR.

DXR itself is effectively a rollback to fixed-function hardware for RT specifically; I would expect future revisions to be more programmable in nature as vendor implementations gain said programmability, or at least more of it if they are already partially programmable.

What exactly do you mean by AI assist?

While AI/ML can potentially improve many areas as far as power consumption goes (like Zen's perceptron branch predictor), it isn't necessarily applicable to every problem, especially due to its often inherently biased nature.

Kind of went over my head but I get the idea.

What I'm saying is: are we approaching the end of continuous video card progress? Will video become the next audio? 10-12 years ago people were all about add-in sound cards, but then suddenly motherboard audio was good enough for nearly everyone.
Are we close to that moment for video? Will near-future cards look like this:
Entry-level unbranded cards that do everything well enough, including games
Mid-range cards that do everything great
High-end cards that do everything great plus something extra, like the previously mentioned AI assist, support for more multiplayer instances, or something else.
 

NostaSeronx

Diamond Member
Sep 18, 2011
GDDR6 256-bit (8×32 or 16×16) => 448 GB/s - production start 2018.
HBM2 => 204 GB/s - production start 2016 - 4 GB
HBM2 => 224 GB/s - production start 2016 - 4 GB
HBM2 => 256 GB/s - production start 2017 - 4 GB
HBM2 => 307 GB/s - production start 2018 - 8 GB
HBM2E => 410 GB/s - production start 2019 - 16 GB <- Death of GDDR6 begins here; one stack is enough to compete with GDDR6.
HBM2E => 512 GB/s - production start in 2020? - 24 GB <- sampling begins in 2019.
HBM3 => 512 GB/s - production start in 2021? - 32 GB <- sampling begins in 2020.

If I remember my numbers correctly, GDDR6 128-bit costs (which the AIB pays) equal one stack of HBM2E (which AMD pays).
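All of those bandwidth figures fall out of the same formula: peak bandwidth = per-pin data rate × bus width ÷ 8. A quick sketch, where the per-pin rates are assumed values chosen to reproduce the numbers above:

```python
# Peak memory bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8 bits-per-byte.
# The data rates below are assumed values chosen to match the figures quoted above.

def bandwidth_gb_s(pin_rate_gbps, bus_width_bits):
    return pin_rate_gbps * bus_width_bits / 8  # GB/s

configs = [
    ("GDDR6 256-bit @ 14 Gbps", 14.0, 256),   # ~448 GB/s
    ("HBM2 stack @ 1.6 Gbps",    1.6, 1024),  # ~205 GB/s
    ("HBM2 stack @ 2.4 Gbps",    2.4, 1024),  # ~307 GB/s
    ("HBM2E stack @ 3.2 Gbps",   3.2, 1024),  # ~410 GB/s
    ("HBM2E stack @ 4.0 Gbps",   4.0, 1024),  # ~512 GB/s
]

for name, rate, width in configs:
    print(f"{name}: {bandwidth_gb_s(rate, width):.0f} GB/s")
```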
 

soresu

Platinum Member
Dec 19, 2014
GDDR6 256-bit (8×32 or 16×16) => 448 GB/s - production start 2018.
HBM2 => 204 GB/s - production start 2016 - 4 GB
HBM2 => 224 GB/s - production start 2016 - 4 GB
HBM2 => 256 GB/s - production start 2017 - 4 GB
HBM2 => 307 GB/s - production start 2018 - 8 GB
HBM2E => 410 GB/s - production start 2019 - 16 GB <- Death of GDDR6 begins here; one stack is enough to compete with GDDR6.
HBM2E => 512 GB/s - production start in 2020? - 24 GB <- sampling begins in 2019.
HBM3 => 512 GB/s - production start in 2021? - 32 GB <- sampling begins in 2020.

If I remember my numbers correctly, GDDR6 128-bit costs (which the AIB pays) equal one stack of HBM2E (which AMD pays).
Actually, with HBM not only is the AIB partner not paying for the RAM itself, they are also not paying for the copper traces and the more complicated PCB needed to route them to the GPU, not to mention the PCB itself is theoretically significantly smaller (assuming they don't add extra to support the HSF weight safely).

It's a shame we never saw a Vega 12-based gfx card; I'd be interested to know how big that PCB would be.
 

Glo.

Diamond Member
Apr 25, 2015
If I remember my numbers correctly, GDDR6 128-bit costs (which the AIB pays) equal one stack of HBM2E (which AMD pays).
GDDR6 currently costs between $8 and $10.

A 4 GB HBM2 stack costs around $30. So the stack alone may be cheaper than a 128-bit 4 GB GDDR6 memory subsystem. The problem here is the TSVs and interposer, and the manufacturability of the design. Yes, HBM2 is the future, and will replace every other form factor of GDDR memory.
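A rough sketch of that comparison using the prices quoted above; note that the assumption that "$8-10" buys one 1 GB GDDR6 chip and the interposer/packaging figure are guesses on my part, and the latter is exactly the open question:

```python
# Rough BOM comparison for a 4 GB configuration, using the prices quoted above.
GDDR6_PER_CHIP = (8, 10)     # $ per 1 GB (8 Gb) chip - assumed granularity for the "$8-10" figure
HBM2_4GB_STACK = 30          # $ per 4 GB HBM2 stack, as quoted above
INTERPOSER_PACKAGING = 25    # $ placeholder for TSV/interposer/assembly - the real open question

gddr6_low, gddr6_high = (4 * p for p in GDDR6_PER_CHIP)
print(f"4 GB GDDR6 (4 chips):          ${gddr6_low}-{gddr6_high}")
print(f"4 GB HBM2 stack alone:         ${HBM2_4GB_STACK}")
print(f"HBM2 stack + packaging guess:  ${HBM2_4GB_STACK + INTERPOSER_PACKAGING}")
```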

Whether that future is sooner or later, I think the availability of Vega 12 for the mainstream market will show. And we have seen that the Vega 12 GPU has been tested by AIBs for desktop. So who knows?

I can tell you one thing: if Navi 14 has HBM2, I don't care whether it costs $200 or $250, I'm buying one.
 

NostaSeronx

Diamond Member
Sep 18, 2011
A 4 GB HBM2 stack costs around $30.
Not anymore: 24 GB is $30, 16 GB is $20, 8 GB is $15, and 4 GB is $10. HBM has never been as expensive as GDDR5/GDDR6. It has always been relatively easy to manufacture and produce in high quantities. But who cares, DRAM cartel wooooo~ selling inferior DRAM at absurdly high prices and manipulating supply to make people pay more for modern DRAM.
The problem here is the TSVs and interposer, and the manufacturability of the design. Yes, HBM2 is the future, and will replace every other form factor of GDDR memory.
We are way past 2013 here.
 

NostaSeronx

Diamond Member
Sep 18, 2011
It's neither cheap nor easy to fab; TSVs are ass.
TSVs are easy, what are you talking about? Flash (done): it is in HVM. Memory (done): it is in HVM. Logic (ramping): it isn't in HVM yet. Once all is said and done, it's a modified Wright's law from then on.
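For reference, Wright's law says unit cost falls by a fixed percentage every time cumulative production doubles. A minimal sketch of that cost curve, with a made-up learning rate and starting cost purely for illustration (not real HBM data):

```python
# Wright's law: cost(n) = cost(1) * n ** (-b), where b = -log2(1 - learning_rate).
# The 20% learning rate and $100 first-unit cost are illustrative assumptions only.
import math

first_unit_cost = 100.0      # $ for the first unit (assumed)
learning_rate = 0.20         # 20% cost drop per doubling of cumulative volume (assumed)
b = -math.log2(1 - learning_rate)

for cumulative_units in (1, 10, 100, 1_000, 10_000):
    cost = first_unit_cost * cumulative_units ** (-b)
    print(f"after {cumulative_units:>6} units: ${cost:6.2f} per unit")
```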
 

Glo.

Diamond Member
Apr 25, 2015
It's neither cheap nor easy to fab; TSVs are ass.
TSVs are easy, what are you talking about? Flash (done): it is in HVM. Memory (done): it is in HVM. Logic (ramping): it isn't in HVM yet. Once all is said and done, it's a modified Wright's law from then on.
If what Seronx says is true, we will see whether Vega 12 lands on the desktop DIY market as a GTX 1650/1650 Ti competitor, and whether Navi 14 will have HBM2.

If Seronx is correct, it might be more feasible to use a single HBM2 stack for small chips, for a number of reasons. If both GPUs land with HBM2 specs on the desktop market, this might be correct.
 

Yotsugi

Golden Member
Oct 16, 2017
If what Seronx says is true, we will see whether Vega 12 lands on the desktop DIY market as a GTX 1650/1650 Ti competitor, and whether Navi 14 will have HBM2.

If Seronx is correct, it might be more feasible to use a single HBM2 stack for small chips, for a number of reasons. If both GPUs land with HBM2 specs on the desktop market, this might be correct.
Vega 12 is deprecated and Navi 14 uses GDDR6.
That's it.
 

lifeblood

Senior member
Oct 17, 2001
Vega 12 is deprecated and Navi 14 uses GDDR6.
That's it.
Navi 14 is the little brother to the 5700, so it would be ridiculous to think it will have HBM2. Navi 20 may have HBM; that's yet to be seen.

Where have you heard Navi 12 is deprecated? Link? I'm not saying you're wrong, as we've seen leaks for Navi 14 while word about Navi 12 has been nonexistent (as far as I know); I'm just curious if you've seen something I haven't.
 

lifeblood

Senior member
Oct 17, 2001
Which one?
Well now, that's kinda the question, isn't it? I've heard lots of rumors about how many dies will be released this generation. It's safe to assume three dies: Navi 10 (5700 XT & 5700), Navi 14 (small Navi), and then what I thought was Navi 20 (big Navi). Rumors also imply a fourth die (bigger Navi?) to take on the 2080 Ti, but that's just an unsubstantiated rumor. I would assume that if the rumored fourth die is real, it could very well have HBM2.

Of course, if you can add clarity to these rumors and give out some hard model numbers and so forth, that would be very enlightening.
 

Glo.

Diamond Member
Apr 25, 2015
What he's saying is that AMD is planning Navi 14 (the small one), Navi 12 (a big one with 4096 shaders), a bigger one (Navi 21), and I think Navi 23, if I remember correctly. About that last one I do not even dare to speculate ;).
 