Consumer 7nm GPUs from AMD are late 2019 to early 2020


NeoLuxembourg

Senior member
Oct 10, 2013
672
10
106
#77
False alarm, folks. The code names are just something we will be using in the open source driver code, so we can publish code supporting new chips for upstream inclusion before launch without getting tangled up in marketing name secrecy.

Trying to avoid the confusion we have today where we publish new chip support as "VEGA10" and then the product gets marketed as "Vega 56" or "Vega 64". In future the new chip support might get published as something like "MOOSE" and then be marketed as something completely different.

The code names are per-chip, not per-generation.
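As a toy illustration of that decoupling (everything except the VEGA10 -> Vega 56/64 mapping and the MOOSE example from the post is made up): the upstream driver keys everything off a per-chip codename, and marketing names get attached only once they're public.

```python
# Toy sketch of per-chip codenames vs. marketing names. Only the
# VEGA10 -> Vega 56/64 mapping is real; everything else is hypothetical.

# Per-chip codenames, known at upstream-submission time.
SUPPORTED_CHIPS = {"VEGA10", "MOOSE"}

# Marketing names, filled in only after launch.
MARKETING_NAMES = {
    "VEGA10": ["Vega 56", "Vega 64"],  # one chip, several SKUs
    # "MOOSE": [...]  # unknown until the product is announced
}

def describe(chip: str) -> str:
    """Return what the driver can say about a chip pre- and post-launch."""
    if chip not in SUPPORTED_CHIPS:
        return f"{chip}: unsupported"
    skus = MARKETING_NAMES.get(chip)
    return f"{chip}: {', '.join(skus)}" if skus else f"{chip}: (not yet announced)"

print(describe("VEGA10"))  # VEGA10: Vega 56, Vega 64
print(describe("MOOSE"))   # MOOSE: (not yet announced)
```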
 
Jul 12, 2006
92,854
1,181
136
#78
I feel like if AMD had anything remotely competitive, remotely close to the Turing release, we would have seen intentional "leaks" to take a little wind out of the sails, as we have seen many times before. That we see no credible rumors, or really any rumors at all, about a Turing-competitive AMD GPU tells me they have nothing close to release.
Right, though it doesn't really mean they have nothing to release, just nothing that is Turing competitive. Does anyone expect AMD to have a Turing competitive card that is either Polaris, Vega, or Navi? We already know Navi is more than a year away.

Something could be released...maybe...but probably quietly because it isn't going to compete with Turing in performance. It could offer a price/performance option compared to 2 year-old Pascal which is currently Turing's only competitor.
 

Headfoot

Diamond Member
Feb 28, 2008
4,388
40
126
#79
Right, though it doesn't really mean they have nothing to release, just nothing that is Turing competitive. Does anyone expect AMD to have a Turing competitive card that is either Polaris, Vega, or Navi? We already know Navi is more than a year away.

Something could be released...maybe...but probably quietly because it isn't going to compete with Turing in performance. It could offer a price/performance option compared to 2 year-old Pascal which is currently Turing's only competitor.
Yeah, that's where I'm thinking the current rumor volume leads us. I think we see some kind of stopgap until Navi, but nothing that is a game changer the way Ryzen is.
 

Dribble

Golden Member
Aug 9, 2005
1,612
64
106
#80
Yeah, that's where I'm thinking the current rumor volume leads us. I think we see some kind of stopgap until Navi, but nothing that is a game changer the way Ryzen is.
Too much is being bet on Navi here; there's an assumption that they'll just be able to add all the stuff Nvidia has added and then release a competitive card in late 2019 to early 2020. That seems very unlikely. DLSS alone will be very hard to duplicate: it's not enough to just put some AI cores on your card, you also need the AI supercomputers to generate the code to run, and that in turn requires all the software that runs on those supercomputers. The AI cores on the card, the design of the AI supercomputers themselves, and all the code are Nvidia proprietary. Nvidia even own the supercomputers and provide all the time on them required to generate DLSS algorithms. AMD need all of those things to duplicate it.

In addition, by the time AMD have Navi you can assume Nvidia will have the 3-series with top-to-bottom DLSS support, most games will support DLSS, and it will give something like a free 50% performance boost over not using it. So without a DLSS competitor, how do you compete?

That's not even considering ray tracing, which might be an MS standard but that doesn't make it easy to do, and it also uses DLSS to de-noise the image, so you really need your own DLSS equivalent first.

Obviously it's not impossible for AMD to achieve this, but seeing as they haven't really released a fundamentally new architecture since GCN...
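To make the offline/online split the post describes concrete, here's a toy sketch: the expensive part (training an upscaling model per game) happens off-card, and the GPU only runs cheap inference with the shipped result. The "model" here is a stand-in (a plain nearest-neighbour upscale), not anything NVIDIA actually ships; all names are made up for illustration.

```python
# Toy sketch of the DLSS-style split: train off-card, infer on-card.
# The "weights" are just a scale factor here; real DLSS training needs
# per-game high-resolution reference frames and a real network.

def train_upscaler_offline(training_frames):
    """Stand-in for the supercomputer step: returns 'weights'."""
    return {"scale": 2}  # pretend the training result is "upscale by 2x"

def upscale_on_card(frame, weights):
    """Stand-in for on-card inference: nearest-neighbour upscale."""
    s = weights["scale"]
    return [[px for px in row for _ in range(s)] for row in frame for _ in range(s)]

weights = train_upscaler_offline(training_frames=[])
low_res = [[1, 2],
           [3, 4]]                       # a 2x2 "frame"
high_res = upscale_on_card(low_res, weights)
print(len(high_res), len(high_res[0]))   # 4 4
```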
 

Headfoot

Diamond Member
Feb 28, 2008
4,388
40
126
#81
I'm not betting anything on Navi, nor did I ever say so. I'm simply pointing out that a shrunk Vega definitely won't close the gap, so the only possible thing that could is Navi. Which may or may not; I wouldn't bet money on it.
 
Jul 11, 2016
106
0
71
boostclock.com
#82
AMD need all of those things to duplicate it.
The RTG / Radeon team has to step up. It is far from impossible to get their own version of DLSS and other cool stuff, even without dedicated hardware elements. For that they have to recruit top talent and pay top money, just as NVIDIA does. Check out how many of the ray-tracing gurus of the past now work for NVIDIA. In the AI ecosystem, CUDA is light-years ahead as well.
 
Jul 12, 2006
92,854
1,181
136
#83
Too much is being bet on Navi here; there's an assumption that they'll just be able to add all the stuff Nvidia has added and then release a competitive card in late 2019 to early 2020. That seems very unlikely. DLSS alone will be very hard to duplicate: it's not enough to just put some AI cores on your card, you also need the AI supercomputers to generate the code to run, and that in turn requires all the software that runs on those supercomputers. The AI cores on the card, the design of the AI supercomputers themselves, and all the code are Nvidia proprietary. Nvidia even own the supercomputers and provide all the time on them required to generate DLSS algorithms. AMD need all of those things to duplicate it.

In addition, by the time AMD have Navi you can assume Nvidia will have the 3-series with top-to-bottom DLSS support, most games will support DLSS, and it will give something like a free 50% performance boost over not using it. So without a DLSS competitor, how do you compete?

That's not even considering ray tracing, which might be an MS standard but that doesn't make it easy to do, and it also uses DLSS to de-noise the image, so you really need your own DLSS equivalent first.

Obviously it's not impossible for AMD to achieve this, but seeing as they haven't really released a fundamentally new architecture since GCN...
Not that I understand all of it, but Vega remains only ~60-70% "functional" considering "all of the stuff" that remains inactive on those (1st gen?) chips. Is it not plausible that a more competent team can work this design into whatever Navi is, refine it, put together better software and actually score some developer interest in making it work? It would be interesting to see what a supposed full-performance Vega would actually look like, if such a unicorn could materialize.

Though I'm not sure if that matters much or is in any way relevant to whatever hard limits are determined by GCN design.

At the same time, they pretty much pulled that miracle "from nowhere" with Zen, so I dunno, we'll see. I still maintain that Zen is their real focus right now because it remains a far greater market than the one Nvidia is determined to dominate.
 
Mar 11, 2004
17,750
278
126
#85
Right, though it doesn't really mean they have nothing to release, just nothing that is Turing competitive. Does anyone expect AMD to have a Turing competitive card that is either Polaris, Vega, or Navi? We already know Navi is more than a year away.

Something could be released...maybe...but probably quietly because it isn't going to compete with Turing in performance. It could offer a price/performance option compared to 2 year-old Pascal which is currently Turing's only competitor.
We do?

Too much is being bet on Navi here; there's an assumption that they'll just be able to add all the stuff Nvidia has added and then release a competitive card in late 2019 to early 2020. That seems very unlikely. DLSS alone will be very hard to duplicate: it's not enough to just put some AI cores on your card, you also need the AI supercomputers to generate the code to run, and that in turn requires all the software that runs on those supercomputers. The AI cores on the card, the design of the AI supercomputers themselves, and all the code are Nvidia proprietary. Nvidia even own the supercomputers and provide all the time on them required to generate DLSS algorithms. AMD need all of those things to duplicate it.

In addition, by the time AMD have Navi you can assume Nvidia will have the 3-series with top-to-bottom DLSS support, most games will support DLSS, and it will give something like a free 50% performance boost over not using it. So without a DLSS competitor, how do you compete?

That's not even considering ray tracing, which might be an MS standard but that doesn't make it easy to do, and it also uses DLSS to de-noise the image, so you really need your own DLSS equivalent first.

Obviously it's not impossible for AMD to achieve this, but seeing as they haven't really released a fundamentally new architecture since GCN...
I'm not sure where you're getting that from? Maybe other people are expecting more, but most of what I read is clear in indicating that Navi is a mainstream class GPU, so I'm not expecting a big ray-tracing/AI aspect to it.

That's assuming DLSS actually lives up to the hype. AMD could release their own thing and leave it up to the game developers. Plus, I'd imagine that Microsoft would probably play a role in that (seeing how it would benefit them: it'd benefit Xbox and it'd benefit PC), so it's not like there's nothing for AMD to leverage there. In fact, because of their work with Microsoft, I have a hunch that AMD will have plenty of insight into some of this (ray-tracing especially), since Microsoft can dictate it more than anyone. And I'd expect Microsoft would want it to be fairly standard (meaning they want anyone making a GPU to be able to meet compatibility).

Why do you say that? We have no idea what Nvidia's plan is. We don't know when they'll be on 7nm. And we don't know what chips they'll make when they are on there. Where are you getting that from (that DLSS is free 50% performance boost)?

You're assuming that Nvidia's ray-tracing + DLSS will be the best option. Personally, I'm not at all sold on it. There's potential, but that doesn't mean it'll be the end-all, be-all. Honestly, right now DLSS just looks like another kind of cheat, and there are almost always alternatives for achieving the same thing. To me it also means it will inherently be flawed versus just rendering natively. I'm not saying Navi is going to brute-force the RTX cards' rendering capabilities (in fact, I don't even know why people are comparing them: RTX cards are large pro-level GPUs for which Nvidia is leveraging Microsoft's new ray-tracing API and their own AI image analysis to sell them to gamers, even though they didn't do much to bolster their traditional rendering capabilities; Navi has been touted as a mainstream-class GPU, so it doesn't need to chase all those features, it just needs to be a solid card for rendering games as they are now at an affordable price).

What? GCN has changed quite a bit over time, and people keep refusing to see that AMD had to approach multiple markets with basically a single GPU design, so while they boosted the rendering capabilities, more of the GPU seems tailored for pro-level markets on their higher-end products. The fundamental stuff has little to nothing to do with what you're talking about, so I'm not even sure why you're commenting on that aspect. If they kept the traditional graphics pipeline of Vega intact, that would be a big boost for a mainstream GPU, and there are likely some tweaks they can make to improve it further without drastically overhauling it. So just maximize the traditional rendering capability, limit bottlenecks (memory bandwidth), push clock speeds (but keep them out of the inefficient range), and they could make a GPU that could match if not outdo Vega at much reduced size, power, and price. Gamers would buy it, just like they bought Polaris.

Not that I understand all of it, but Vega remains only ~60-70% "functional" considering "all of the stuff" that remains inactive on those (1st gen?) chips. Is it not plausible that a more competent team can work this design into whatever Navi is, refine it, put together better software and actually score some developer interest in making it work? It would be interesting to see what a supposed full-performance Vega would actually look like, if such a unicorn could materialize.

Though I'm not sure if that matters much or is in any way relevant to whatever hard limits are determined by GCN design.

At the same time, they pretty much pulled that miracle "from nowhere" with Zen, so I dunno, we'll see. I still maintain that Zen is their real focus right now because it remains a far greater market than the one Nvidia is determined to dominate.
I'm not sure where you're getting that. Some of the big features (NGG fast path, or whatever their advanced discard thing was) aren't there, but I don't think that was really hardware so much as software: the Vega pipeline was very programmable, and it could discard early enough that it wouldn't stall the pipeline; it could cull a good amount of geometry and then process the surviving geometry before final shading, all in a single cycle, where before they'd cull geometry and then process what remained, taking two cycles. There might have been another feature or two, but I got the impression a lot of that was more about how AMD's driver handles things (and Raja was building it to be more like Nvidia, where they add features through software, then work to implement them better in hardware).
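The early-discard idea reads roughly like this toy sketch (purely illustrative, not AMD's actual implementation): cull triangles that face away from the viewer before the expensive shading step, so the rest of the pipeline only touches surviving geometry.

```python
# Toy back-face culling: screen-space triangles with counter-clockwise
# winding are treated as front-facing; the rest are discarded early.

def signed_area(tri):
    """Twice the signed area of a 2D triangle; positive = counter-clockwise."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    return (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)

def cull_backfaces(triangles):
    """Keep only front-facing (CCW) triangles for the shading stage."""
    return [t for t in triangles if signed_area(t) > 0]

triangles = [
    [(0, 0), (1, 0), (0, 1)],  # CCW -> front-facing, kept
    [(0, 0), (0, 1), (1, 0)],  # CW  -> back-facing, culled
]
survivors = cull_backfaces(triangles)
print(len(survivors))  # 1
```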

Which Glo (?) has linked to patents that make me think of something like that, where they can process more geometry than the base GPU would seem to indicate. So there'd be no need for the complex software setup (although if they could get that working along with the hardware geometry stuff, it could be a very substantial improvement; the hardware aspect might also mean it's easier to implement a version of the other as well), or potentially even major changes to the setup of the GPU.

Which, I get people going "yeah, well AMD has to prove it before you should take that seriously", which is very true. I mean, they talked up the NGG fast path stuff and then ditched it. As always, it comes down to how the product performs and at what price. Navi doesn't have to be some RTX-killing GPU; it just needs to be solid. AMD could make a 4096 stream processor GPU quite similar to Vega, push the clocks to around 2.0 GHz, and try to get about 500 GB/s (or higher) memory bandwidth, and it should be a solid mainstream-level card without any particularly fantastic work necessary. Couple that with improvements to things like color compression, and Navi should easily be a solid gamer's GPU.
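For what it's worth, the napkin math on that speculated config (the 4096 SP and 2.0 GHz figures are the post's guess, not a real spec), counting a fused multiply-add as 2 FLOPs per clock, the usual convention:

```python
# Back-of-envelope peak FP32 throughput:
# stream processors x 2 FLOPs per clock (one FMA) x clock rate.

def peak_tflops(stream_processors, clock_ghz, flops_per_clock=2):
    return stream_processors * flops_per_clock * clock_ghz / 1000

navi_guess = peak_tflops(4096, 2.0)    # speculated Navi config
vega64 = peak_tflops(4096, 1.545)      # Vega 64 boost clock, for comparison
print(f"{navi_guess:.2f} vs {vega64:.2f} TFLOPS")  # 16.38 vs 12.66 TFLOPS
```

So the same shader count at 2.0 GHz would be roughly 30% more raw throughput than Vega 64, before any architectural improvements.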

The thing is, pro-market uses actually do tend to scale up (meaning they could address that by adding GPUs, making dual-GPU cards and/or adding cards, whereas CrossFire/mGPU for games requires a lot more work for less return). And AMD's GPUs (Vega, for instance) are already pretty great at some of those tasks (so no heavily specialized hardware needed), so they don't necessarily need to be doing what Nvidia is. Even for things like ray tracing and AI inferencing there are other options: maybe AMD does CrossFire/multi-GPU there as a means of sort of brute-forcing it, where even though graphical rendering might not scale up, other aspects might, so if you want ray tracing or inferencing you add a second card. That might bring a renewed push for improving multi-GPU rendering; there are rumors about per-eye rendering for VR, so there are some potential uses that might call for it.

There was an article I read (I think it was when I was checking some console-related thing; it was TweakTown or ExtremeTech, I'd have to see if I can find it again) where their source said AMD was aiming for a Q1 Navi release, but that 1H (so Q2) would be more realistic. Just like I think the PS5 is closer than people think, I think Navi is as well. AMD is looking to leverage their 7nm jump.
 

beginner99

Diamond Member
Jun 2, 2009
3,957
102
126
#86
I still maintain that Zen is their real focus right now because it remains a far greater market than the one Nvidia is determined to dominate.
Agreed, because the CPU dies are smaller (better yield) and have higher profit (margin) per die area than a GPU. If you are spending big money on 7nm, you produce the product with the biggest margin/profit, and that is clearly CPU/Zen 2. A second point in favor of the CPU is that you don't need to invest huge sums into software to make it work and catch up to your competitor.
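A rough sketch of why smaller dies yield better, using the simple Poisson defect model (yield = e^(-defect_density x area)); the defect density and die areas below are illustrative guesses, not TSMC numbers:

```python
# Poisson yield model: the probability a die has zero defects falls
# exponentially with die area, so small CPU chiplets yield far better
# than large GPU dies on a young process. All figures are assumptions.
import math

def poisson_yield(area_mm2, defects_per_mm2):
    return math.exp(-defects_per_mm2 * area_mm2)

d = 0.002                              # assumed defects/mm^2 for early 7nm
cpu_chiplet = poisson_yield(80, d)     # small Zen 2 style die (guess)
gpu_die = poisson_yield(360, d)        # Vega 20 class die (~360 mm^2)
print(f"CPU chiplet {cpu_chiplet:.0%}, GPU die {gpu_die:.0%}")  # 85% vs 49%
```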

Compute-wise they would be very competitive, if you invest the money to get it up and running:

http://blog.gpueater.com/en/2018/04/23/00011_tech_cifar10_bench_on_tf13/

E.g. the Vega Frontier Edition with TensorFlow on ROCm is almost as fast as a Tesla V100.

The problem is the getting up and running, and being several versions behind; it's usually not worth it just to save a couple of thousand.
 

Dribble

Golden Member
Aug 9, 2005
1,612
64
106
#87
That's assuming DLSS actually lives up to the hype. AMD could release their own thing and leave it up to the game developers.
This is the sort of thing I hear a lot of, and what my post was getting at: you say it like it's just a trivial matter of adding some AI cores to the card and everything else will magically happen, the devs will do the rest. See my previous post as to why this isn't the case.
 
Mar 11, 2004
17,750
278
126
#88
This is the sort of thing I hear a lot of, and what my post was getting at: you say it like it's just a trivial matter of adding some AI cores to the card and everything else will magically happen, the devs will do the rest. See my previous post as to why this isn't the case.
Read my post again, as you're misunderstanding what I was saying. They'll leave it up to devs to decide how to address that. Meaning, they could choose to work with Microsoft (whose Azure stuff is already being used for Xbox-related processing, and that is going to grow substantially as Microsoft rolls out their game streaming service proper; I can almost guarantee they'll leverage that for PC as well as they work to unify and turn Xbox into a service). Sony will probably have something of their own. Amazon will likely start doing something similar (they've started getting big in gaming and they have huge infrastructure). Or, if they're big enough or have the capability (think someone like id, DICE, etc.), they might build their own internal solution. Plus, the major publishers may want to keep this kind of thing in house just to protect their own IP. And of course Nvidia will have theirs. But you act like Nvidia would be the only game in town for this stuff, and I just disagree.

Personally, I don't think DLSS (from what I've seen) is very impressive, but that could definitely change. Right now I can't fathom putting all that processing power towards this, where on top of that they have to pipe the results back to the end user so their video card can process the game in real time based on the algorithms the deep-learning analysis comes up with. So I'm not sure how valuable that will even be. I'd much rather they put those AI cores to work on game AI, for instance.
 

SpaceBeer

Senior member
Apr 2, 2016
284
2
71
#89
As I wrote before, just because some technology is not implemented in a consumer product doesn't mean it doesn't exist on paper or even in the lab. There is no doubt that nVidia is the leader in this area, but I doubt AMD (or Intel, Qualcomm, Imagination, Samsung, Apple...) haven't been looking at similar solutions and techniques in the last few years. I mean, there is neural-net prediction included in the Zen arch, so there is no reason something similar wasn't at least considered as part of some GPU block. It's not like AMD only started with AI/ML yesterday, even though their ecosystem is far behind the green one's. Google had been using their TPU in-house before Volta (the Tensor core "revolution") was presented. Imagination added a ray-tracing block to their chips 3 years ago, and so on. What I want to say is, nVidia for sure knows how to make and sell new tech to consumers, but in many of those cases they were not the pioneers who actually turned an idea into a working product.

So, even though I'm sure AMD doesn't like what they see in RTX, it's not something they didn't expect
 
Jul 12, 2006
92,854
1,181
136
#90
Yeah, I believe AMD's most recent slide showing their product road map updated Navi release to ~late Q4 19 / Q1 20. AFAIK, Navi is only going to be 7nm, and this makes a lot of sense when considering their upcoming year. I'm not sure if the delay in release has anything to do with issues at RTG or Navi as much as it is related to a focus on directing all of their 7nm orders for Zen.

Intel is publicly terrified that AMD is going to claw back that 20-30% market share within 2? years. I still think that's a bold prediction, but if it can happen, the only way is for AMD to use up all of their 7nm allotment for Zen cores.
 
Mar 11, 2004
17,750
278
126
#91
Yeah, I believe AMD's most recent slide showing their product road map updated Navi release to ~late Q4 19 / Q1 20. AFAIK, Navi is only going to be 7nm, and this makes a lot of sense when considering their upcoming year. I'm not sure if the delay in release has anything to do with issues at RTG or Navi as much as it is related to a focus on directing all of their 7nm orders for Zen.

Intel is publicly terrified that AMD is going to claw back that 20-30% market share within 2? years. I still think that's a bold prediction, but if it can happen, the only way is for AMD to use up all of their 7nm allotment for Zen cores.
Can you find it? AMD has been pretty nebulous on GPU timetables. Even earlier this year they talked like Vega 20 would only be sampling in Q4 of this year, but that is already happening (and has been for months; they announced it back at Computex, which means it's like half a year ahead of schedule, and I don't know that I've seen that updated on any slides). It sounds like AMD has shifted a lot of their timetables up (because 7nm is going well).

The more recent rumors I've seen say that AMD has told their partners they're aiming for a Q1 2019 release (but the partners think 1H 2019 is more likely). There's also talk that AMD pivoted resources to Navi in order to speed up its development (partly because Sony requested it, but I think their experience with 7nm has made them more aggressive; it sounds like they want to capitalize on an advantage they'll have over both Intel and Nvidia).

You're basing that on not knowing whether Intel is having more problems than has been let on so far. It sounds like they're going to be on 14nm for a while (they just announced they're investing another billion in 14nm). Even on an equal process, AMD's chips would have an advantage in the server space: more cores, more memory bandwidth, more I/O, similar if not better perf/W, and all at cheaper prices. If AMD is going to have a process advantage on top of that, and Intel knows it could potentially last a year or more (Intel 10nm has been said to be late 2019, which could mean server chips in 2020, while Epyc Zen 2 should be ready early 2019), then yeah, AMD could gain a lot of market share in that time, so Intel would be worried.

Also, we don't know how much 7nm capacity TSMC has (it is quite a bit, though; they put a lot into development and built several new fabs in advance of it, spending something like $20B), or what deals AMD and TSMC have and whether those have changed since the GF announcement (I get the impression AMD had already worked out deals to compensate for GF, and I still think that 7nm part that was supposed to tape out late 2018 at GF was Navi, moved to TSMC, where AMD already has experience with GPUs and 7nm seems to be ahead of schedule for AMD).

Oh, and don't forget that Navi will be a fairly small chip too (Vega 20 is ~360mm², and while Navi would still probably be 2-2.5x the size of the alleged 8-core modules of Ryzen 2, that's not a huge chip, so they should have pretty good yields). Especially if early on they focus on the higher-margin products (the $200-400 range), while maybe that rumored 12nm Polaris fills the $200-and-below dGPU market.
 
Mar 11, 2004
17,750
278
126
#92
As I wrote before, just because some technology is not implemented in a consumer product doesn't mean it doesn't exist on paper or even in the lab. There is no doubt that nVidia is the leader in this area, but I doubt AMD (or Intel, Qualcomm, Imagination, Samsung, Apple...) haven't been looking at similar solutions and techniques in the last few years. I mean, there is neural-net prediction included in the Zen arch, so there is no reason something similar wasn't at least considered as part of some GPU block. It's not like AMD only started with AI/ML yesterday, even though their ecosystem is far behind the green one's. Google had been using their TPU in-house before Volta (the Tensor core "revolution") was presented. Imagination added a ray-tracing block to their chips 3 years ago, and so on. What I want to say is, nVidia for sure knows how to make and sell new tech to consumers, but in many of those cases they were not the pioneers who actually turned an idea into a working product.

So, even though I'm sure AMD doesn't like what they see in RTX, it's not something they didn't expect
Yeah. This is just how AMD has operated. They came out with Radeon Rays as their accelerated ray-tracing platform.

I'm skeptical that Nvidia's ray-tracing hardware is actually that specialized (I think that's partly why they haven't really detailed it much so far; they don't want people to realize it's probably heavily a software layer, kind of like CUDA or some of their compute methods). I think it was hardware they just repurposed for that after finding out it performed well while they were seeing what they'd do for Microsoft's new ray-tracing API. That's not to say it's bad (if anything, I think that is a very smart move by Nvidia, and they can tweak it to further improve it for that use going forward; plus, if it let them get a jump, then that's an even smarter move). And it seems quite strong.

I'm not going to say AMD will be better or anything either, but they're not as out of the loop as some people on here seem to want to suggest. I've pointed it out before: considering Microsoft is the one dictating it, and considering their work in consoles, I have a hunch AMD has an idea of what type of hardware Microsoft wants for implementing this stuff. I doubt they were caught off guard by Nvidia's RTX (even if it might have happened sooner than AMD probably expected, since they'd figure Nvidia would push it in the pro stuff first; I have a hunch Nvidia wasn't planning on making consumer gaming RTX cards yet, but their plans got derailed some and they felt they needed to offer new gamer cards, since they never did that with Volta and Pascal was already 2 years old, even though they released the 1080 Ti more recently).
 
Jul 12, 2006
92,854
1,181
136
#93
Can you find it? AMD has been pretty nebulous on GPU timetables. Even earlier this year they talked like Vega 20 would only be sampling in Q4 of this year, but that is already happening (and has been for months; they announced it back at Computex, which means it's like half a year ahead of schedule, and I don't know that I've seen that updated on any slides). It sounds like AMD has shifted a lot of their timetables up (because 7nm is going well).

The more recent rumors I've seen say that AMD has told their partners they're aiming for a Q1 2019 release (but the partners think 1H 2019 is more likely). There's also talk that AMD pivoted resources to Navi in order to speed up its development (partly because Sony requested it, but I think their experience with 7nm has made them more aggressive; it sounds like they want to capitalize on an advantage they'll have over both Intel and Nvidia).

You're basing that on not knowing whether Intel is having more problems than has been let on so far. It sounds like they're going to be on 14nm for a while (they just announced they're investing another billion in 14nm). Even on an equal process, AMD's chips would have an advantage in the server space: more cores, more memory bandwidth, more I/O, similar if not better perf/W, and all at cheaper prices. If AMD is going to have a process advantage on top of that, and Intel knows it could potentially last a year or more (Intel 10nm has been said to be late 2019, which could mean server chips in 2020, while Epyc Zen 2 should be ready early 2019), then yeah, AMD could gain a lot of market share in that time, so Intel would be worried.

Also, we don't know how much 7nm capacity TSMC has (it is quite a bit, though; they put a lot into development and built several new fabs in advance of it, spending something like $20B), or what deals AMD and TSMC have and whether those have changed since the GF announcement (I get the impression AMD had already worked out deals to compensate for GF, and I still think that 7nm part that was supposed to tape out late 2018 at GF was Navi, moved to TSMC, where AMD already has experience with GPUs and 7nm seems to be ahead of schedule for AMD).

Oh, and don't forget that Navi will be a fairly small chip too (Vega 20 is ~360mm², and while Navi would still probably be 2-2.5x the size of the alleged 8-core modules of Ryzen 2, that's not a huge chip, so they should have pretty good yields). Especially if early on they focus on the higher-margin products (the $200-400 range), while maybe that rumored 12nm Polaris fills the $200-and-below dGPU market.
Hmm, this was the first one that I found, so obviously don't even begin to trust it because it's wccf, but it does rehash some of the info that was announced at the latest earnings report (I think)

https://wccftech.com/exclusive-amd-navi-gpu-roadmap-cost-zen/

Basically, 7nm Vega is not a consumer chip (unless a version will appear in PS4/Xbox consoles?). That could be Navi, but with the Navi release it would certainly make more sense for the first version to appear in consoles, with a later consumer GPU card release. So it's likely that Navi exists next year, but not as a discrete GPU until the very end of the year or into 2020.

The point is that there isn't a lot of evidence that AMD is having problems with production or design--it is just industry-wide forces that might see them re-shifting priorities in order to capitalize on some unexpected advantages. I just don't see them wasting 7nm space on low margin, probably not-all-that-competitive GPU chips when high-margin Epyc cores are going to be $$$$$ for them
 

Paul98

Diamond Member
Jan 31, 2010
3,638
5
91
#94
I am actually quite happy AMD isn't just trying to push stuff out and chase. Instead, just release some mid-range stuff that works fine. Then, hopefully, similar to what they did with their CPUs, they are working on a good, solid new foundation so that moving forward they can really compete top to bottom again.
 
Mar 11, 2004
17,750
278
126
#95
Hmm, this was the first one that I found, so obviously don't even begin to trust it because it's wccf, but it does rehash some of the info that was announced at the latest earnings report (I think)

https://wccftech.com/exclusive-amd-navi-gpu-roadmap-cost-zen/

Basically, 7nm Vega is not a consumer chip (unless a version will appear in PS4/Xbox consoles?). That could be Navi, but with the Navi release it would certainly make more sense for the first version to appear in consoles, with a later consumer GPU card release. So it's likely that Navi exists next year, but not as a discrete GPU until the very end of the year or into 2020.

The point is that there isn't a lot of evidence that AMD is having problems with production or design--it is just industry-wide forces that might see them re-shifting priorities in order to capitalize on some unexpected advantages. I just don't see them wasting 7nm space on low margin, probably not-all-that-competitive GPU chips when high-margin Epyc cores are going to be $$$$$ for them
I think from the start AMD made it clear that Vega 20 is for the HPC market. Navi is developed with the PS5 in mind; we'll see what the next Xbox has. Why would that make more sense? The dGPUs usually come out before the console chips. I don't know if there's any particular reason for that other than that the dGPUs tend to be spring, summer, or early fall, while consoles tend to be late fall. I think the last time that didn't happen was the Xbox 360, and even that is only partially true, as its GPU was based on the X1000 series (which was already out) but was unified like the later TeraScale parts.

I never said they were (the total opposite, actually), but you might be remarking on what others are saying. I'm not really seeing that. I don't think they'll be production constrained with Zen 2 EPYC. And I believe they won't be doing new consumer PC APUs on 7nm till 2020, so they won't have that taking production. I think Vega 20 being so far ahead of schedule is going to let them produce Navi much earlier than people expect, so it won't be taking capacity from Zen 2, just taking over when Vega 20 production tapers off. And it's possible that TSMC has a lot of capacity (they put a lot into 7nm; I think they built something like 3 new fabs for it and spent around $20 billion), and/or that AMD, probably having a heads-up on GF's move, worked deals to buy up as much production as possible. With Apple tapering off early in the year, that'll mean a good amount of open production. The other thing is that I think GPUs are integral to AMD's future (and to the success of their CPUs, including EPYC). They need to get it on track, and AMD has said gaming is a particular aspect they're wanting to focus on (both CPU and of course GPU).

Oh, and AMD/Lisa Su is doing one of the 2019 CES keynotes, specifically talking about 7nm products, and they also specifically mentioned moving gaming forward. I think they announce (but don't launch; I will be curious if they have working silicon, though) Navi and Zen 2 there. Maybe it'll be Zen 2 focused, and that's certainly big enough to warrant the keynote, but I have a hunch people are going to be surprised.
 

DooKey

Golden Member
Nov 9, 2005
1,452
5
106
#96
I think from the start AMD made it clear that Vega 20 is for the HPC market. Navi is developed with the PS5 in mind; we'll see what the next Xbox has. Why would that make more sense? The dGPUs usually come out before the console chips. I don't know if there's any particular reason for that other than that the dGPUs tend to be spring, summer, or early fall, while consoles tend to be late fall. I think the last time that didn't happen was the Xbox 360, and even that is only partially true, as its GPU was based on the X1000 series (which was already out) but was unified like the later TeraScale parts.

I never said they were (the total opposite, actually), but you might be remarking on what others are saying. I'm not really seeing that. I don't think they'll be production constrained with Zen 2 EPYC. And I believe they won't be doing new consumer PC APUs on 7nm till 2020, so they won't have that taking production. I think Vega 20 being so far ahead of schedule is going to let them produce Navi much earlier than people expect, so it won't be taking capacity from Zen 2, just taking over when Vega 20 production tapers off. And it's possible that TSMC has a lot of capacity (they put a lot into 7nm; I think they built something like 3 new fabs for it and spent around $20 billion), and/or that AMD, probably having a heads-up on GF's move, worked deals to buy up as much production as possible. With Apple tapering off early in the year, that'll mean a good amount of open production. The other thing is that I think GPUs are integral to AMD's future (and to the success of their CPUs, including EPYC). They need to get it on track, and AMD has said gaming is a particular aspect they're wanting to focus on (both CPU and of course GPU).

Oh, and AMD/Lisa Su is doing one of the 2019 CES keynotes, specifically talking about 7nm products, and they also specifically mentioned moving gaming forward. I think they announce (but don't launch; I will be curious if they have working silicon, though) Navi and Zen 2 there. Maybe it'll be Zen 2 focused, and that's certainly big enough to warrant the keynote, but I have a hunch people are going to be surprised.
If I was a betting man I would bet she's going to talk about Zen 2. At least I hope so because I want to upgrade my 2700X ASAP.
 
Jun 4, 2004
12,456
450
146
#98
New Navi rumor:

https://translate.google.com/translate?hl=en&sl=auto&tl=en&u=https://www.chiphell.com/thread-1937354-1-1.html

Translated:
1. No ray tracing
2. Navi 10 performance target is the RTX 2080
3. 7nm die size decreases vastly, so cost would go down a lot
4. Power consumption is surprising
5. Q2-Q3 2019 release
I dropped some more Navi rumors in the Sony PS5 thread:

So there have been some rumors going around (AdoredTV) that Navi will form the basis of a chiplet that will be used by Sony, as the basis for the GPU in the G series of Ryzen (Zen 2) 3000 APUs, and as the basis for three lower-end GPUs:
  • 3060 Navi12 equal to RX580 /1060/2050 75W (no power connector) $129
  • 3070 Navi12 = RX56/1070/2060 120W $199
  • 3080 Navi10 = RX64+15% / 1080+ / 2070 150W $249

That's a pretty solid low-end to mid-range lineup across power, price, and performance. If true, of course.
It looks pretty good for the low to mid end.
 

Mopetar

Diamond Member
Jan 31, 2011
4,376
327
126
#99
There's a lot of interesting aspects to that rumor.

First, the naming change is odd, but AMD did the same thing to Intel with their motherboard lineup by co-opting Intel's naming scheme. This is pretty much spitting in NVidia's face, since it grabs their presumed next-generation card names. They have done this before, so it's believable that they would do it again.

The power consumption might seem low at first, but it could be a sign that AMD is going to stop cranking the stock voltages so high. Perhaps they've realized that it's unreasonable to push voltages that hard just to hit whatever performance target the closest NVidia card has, when they just get pilloried for the insane power draw and many consumers end up having to undervolt in order to overclock their card.

The price is probably the most surprising part. I understand that they're not competing at the high end and probably can't afford to, but even then the prices seem oddly low for a 7nm part. I could understand Navi12 being cheap if it's small enough to function like a chiplet, similar to the Zen dies, but Navi10 is replacing the 486 mm^2 Vega die, and even with a good shrink to 7nm it doesn't seem like it would be small enough to sell for $250. An RTX 2070 is going for $500 (whether you think that's overpriced is another argument), so it's hard to believe that AMD is going to leave that much space between the cards unless they really don't care about anything other than gaining market share.
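For a rough sense of why die size dominates cost, here's a back-of-the-envelope dies-per-wafer and yield sketch. All numbers are illustrative assumptions (a 300 mm wafer, a guessed defect density, a hypothetical ~250 mm^2 7nm die), not actual AMD or TSMC figures, and the formulas are the common textbook approximations rather than a real reticle/scribe model:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Approximate dies per wafer with the usual edge-loss correction:
    wafer area / die area, minus dies lost around the circumference."""
    radius = wafer_diameter_mm / 2
    wafer_area = math.pi * radius ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def yielded_dies(die_area_mm2, defect_density_per_cm2=0.2):
    """Simple Poisson yield model, Y = exp(-D * A): smaller dies lose
    proportionally fewer candidates to random defects."""
    die_area_cm2 = die_area_mm2 / 100  # convert mm^2 to cm^2
    yield_fraction = math.exp(-defect_density_per_cm2 * die_area_cm2)
    return int(dies_per_wafer(die_area_mm2) * yield_fraction)

# Hypothetical comparison: a 486 mm^2 Vega-class die vs. a ~250 mm^2 shrink.
big = yielded_dies(486)
small = yielded_dies(250)
```

With these made-up inputs, the smaller die yields roughly three times as many good chips per wafer, which is the shape of the argument for 7nm costs dropping — though it says nothing about 7nm wafer prices, which were reportedly much higher.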

Call me skeptical, but if that 3080 did come out for $250 in the middle of next year, I'd jump on it.
 

Dribble

Golden Member
Aug 9, 2005
1,612
64
106
One thing I'd be pretty confident about is that the actual price will be as high as AMD can get away with - AMD are a company who badly need to make money and grow their margins. So if it's a $250 GPU then it's going to have $250 performance. If it's got 2080 performance it won't cost $250.

Anyone claiming otherwise is just stoking the AMD hype train based around the mythology that AMD somehow care about their users more than profits.
 
