Lisa Su: 20 nm GPUs coming in the "next few quarters".


RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
SperglordActual, no doubt. When has NV's next-generation flagship card ever beaten the last-gen flagship by only 7-10%? Never. NV, though, fully acknowledges that the 980 is midrange Maxwell, since in all of their marketing slides they position it as an upgrade to a 680
- a midrange Kepler card. It leaves me with a sour taste in my mouth when I am asked to pay $550 for midrange next-gen performance just because it's shiny and new. What leaves me more disappointed are the fall PC games. Not discussing gameplay, but Mordor was not demanding and The Evil Within is locked to 30 fps. Right now people are upgrading based on hype, IMO, and because they are bored. No next-gen game has come out yet that warrants the upgrade, yet the 980 is still not fast enough for 4K. That's why I think NV is waiting on GM200: they see that PC gamers will buy a 980 for $550 with a tiny increase in performance over the 780Ti, so why release the main attraction? They will milk the market until the 390X beats the 980, then release GM200 to claim the fastest crown. That's when there will be price adjustments and heated competition, hopefully.
 
SperglordActual

Sep 27, 2014
92
0
0
SperglordActual, no doubt. When has NV's next-generation flagship card ever beaten the last-gen flagship by only 7-10%? Never. NV, though, fully acknowledges that the 980 is midrange Maxwell, since in all of their marketing slides they position it as an upgrade to a 680
- a midrange Kepler card. It leaves me with a sour taste in my mouth when I am asked to pay $550 for midrange next-gen performance just because it's shiny and new. What leaves me more disappointed are the fall PC games. Not discussing gameplay, but Mordor was not demanding and The Evil Within is locked to 30 fps. Right now people are upgrading based on hype, IMO, and because they are bored. No next-gen game has come out yet that warrants the upgrade, yet the 980 is still not fast enough for 4K. That's why I think NV is waiting on GM200: they see that PC gamers will buy a 980 for $550 with a tiny increase in performance over the 780Ti, so why release the main attraction? They will milk the market until the 390X beats the 980, then release GM200 to claim the fastest crown. That's when there will be price adjustments and heated competition, hopefully.

Let's hope for their sake that doesn't backfire on them - that they wait and wait, and then the 390X outperforms GM200. Of course, I know next to nothing about what GM200 can/will do. I would rather they just release GM200 ASAP and market it like the flagship it is.

Looking at overclocking and benchmarks, it really is starting to look like the 980 was... gimped? (that may not be the right word). It looks severely voltage-locked. Why would Nvidia release a card that doesn't perform at the best it can?
 

TrulyUncouth

Senior member
Jul 16, 2013
213
0
76
As an avid NVIDIA fan, I agree with your statement that this whole perf/watt thing was just silly. As an enthusiast I don't give a damn about power usage; give me all the performance I can get. My initial excitement at the 980 launch has dwindled steadily as I looked at benchmarks and whatnot; it really does seem like just an overpriced mid-range card (albeit high mid-range) but certainly no flagship. I really would like to see what GM200 can do, and I am cautiously optimistic to see what AMD could do with 20nm and HBM.

So that all being said, the first company that offers an excellent single card 4k solution is getting my money, hands down.

Couldn't agree more. While I am very impressed by perf/watt, the lack of real movement at the top end is kinda sad. The most likely use for me of a 970 is as a reasonable price/temp HTPC gaming card. I do a ton of TV gaming and it seems perfect for that. Otherwise, it's the same slow progress we've seen the last 3+ years.

Once they move maxwell to lower-end mobile and tablets... that is gonna be amazing.
 

Wild Thing

Member
Apr 9, 2014
155
0
0
Perf/$ is always the most important metric with consumer video cards.
Perf/watt only rises above that when NVidia GPUs are more efficient.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,601
2
81
Perf/W is important because it is what allows GPUs to get faster. Think a bit further ahead than just one gen. If perf/W were not to increase (much), GPUs would become ever more power hungry. More than 250W is okay for some people, but what about more than 350, 450, 550... watts? Where does it end? THAT is the essential question.
Nevertheless, I would not be against a new performance class at 300-350W. But you still have one thing to consider: performance is also limited by die size and cost. You cannot make GPUs much larger than GK110. Hence you would have to increase frequency, but that becomes inefficient quite quickly. If perf/W scaled linearly, no problem, but it doesn't if you go for the highest possible performance.

Do you think 100W more for 10% more performance (just an example) is a good idea? I don't. Diminishing returns...
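The diminishing-returns point above can be put into rough numbers. A minimal sketch, assuming dynamic power scales with frequency times voltage squared and that voltage must rise with clocks past the efficiency sweet spot (all constants here are illustrative, not measured GPU data):

```python
# Toy model: dynamic power ~ f * V^2, and V must rise with f past the
# sweet spot, so board power grows superlinearly with clock speed.
# base_power and v_slope are made-up illustrative constants.

def board_power(freq_ghz, base_freq=1.0, base_power=250.0, v_slope=0.3):
    """Estimated power for a hypothetical 250 W GPU clocked to freq_ghz."""
    volts = 1.0 + v_slope * (freq_ghz - base_freq)  # voltage rises with clock
    return base_power * (freq_ghz / base_freq) * volts ** 2

for f in (1.0, 1.1, 1.2, 1.3):
    p = board_power(f)
    print(f"{f:.1f} GHz -> ~{p:.0f} W, relative perf/W = {f / p * 250:.2f}")
```

In this toy model a 30% clock bump costs roughly 54% more power, which is exactly the diminishing-returns trade-off being described.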
 

Grooveriding

Diamond Member
Dec 25, 2008
9,108
1,260
126
Perf/W is important because it is what allows GPUs to get faster. Think a bit further ahead than just one gen. If perf/W were not to increase (much), GPUs would become ever more power hungry. More than 250W is okay for some people, but what about more than 350, 450, 550... watts? Where does it end? THAT is the essential question.
Nevertheless, I would not be against a new performance class at 300-350W. But you still have one thing to consider: performance is also limited by die size and cost. You cannot make GPUs much larger than GK110. Hence you would have to increase frequency, but that becomes inefficient quite quickly. If perf/W scaled linearly, no problem, but it doesn't if you go for the highest possible performance.

Do you think 100W more for 10% more performance (just an example) is a good idea? I don't. Diminishing returns...

I agree. GK110 is huge and I doubt they could make it much bigger. Even at its size, at stock it still only consumes about 250W. Once you start overclocking it and getting into 1300+ clocks with voltage increases, you can see power consumption in the 350-400W range. I don't think nvidia should have shipped a 350W GK110 clocked to 1300, so even with its big size the chip is still pretty efficient at stock, but it does allow a user like me to get another 25% performance out of it for a 50% increase in power consumption.

What I want them to do is stop releasing mid range dies as flagships and taking years to replace big dies with smaller dies that don't bring any more performance, just efficiency gains. It's not going to happen though because it's working and people buy the cards. The 980 is the absolute worst flagship performance increase ever, even simple refreshes like 480 to 580 or 280 to 285 brought bigger performance increases. Regardless of that the card still reviewed well because of how efficient it is and how much they improved price/perf with the 970 and to a lesser degree the 980.

I've adjusted how I buy GPUs to compensate is all. Skip the mid-range 'flagship' and wait for the big die seems to be the best bet. Buying the new flagship every release will just amount to sidegrade-upgrade-sidegrade-upgrade etc. Might as well just wait for the upgrade as I don't care about power consumption and the new features never gain much traction because they are proprietary so they don't affect my purchasing decisions.
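Grooveriding's overclocking figures make the efficiency trade-off concrete. A quick back-of-the-envelope check (his numbers, not mine):

```python
# +25% performance for +50% power, per the post above: perf/W at the
# overclocked operating point is only about 83% of stock.
stock_perf, stock_power = 1.00, 250.0        # GK110 at stock (approx.)
oc_perf, oc_power = 1.25, 250.0 * 1.5        # ~1300 MHz with extra voltage

stock_eff = stock_perf / stock_power
oc_eff = oc_perf / oc_power
print(f"efficiency retained when overclocked: {oc_eff / stock_eff:.0%}")
```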
 

NTMBK

Lifer
Nov 14, 2011
10,264
5,116
136
This story is garbage. Where in their sources does Lisa Su say that 20nm graphics are coming? Actually go and read the VentureBeat article, which says nothing of the sort - and then read the Fudzilla article. The closest the FZ one comes is:

Lisa Su has to deal with a number of upcoming products, such as K11 and K12 chips, the Seattle ARM server series, new Radeon 20nm graphics and many other projects that should see the light of day over the next few quarters.

That isn't a quote from LS- it's very unclear whether FZ is paraphrasing Lisa Su, or whether they are just adding context to her current situation. I believe that it is the latter. For a start, "K11"? There is no K11. There is K12, and there are lots of other products- but none of them are K11.

This is just standard internet echo-chamber nonsense, and WCCFtech intentionally misinterpreting things in order to get a few hundred clicks. Again.
 

uribag

Member
Nov 15, 2007
41
0
61
I think the transition from 28nm to 20nm is a bit overrated and makes more sense economically (smaller die size) than in perf/watt.

Didn't Nvidia just prove (with Maxwell) that a change in uarch can be as effective as a change to a smaller node, or more (perf/watt wise)?
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I think the transition from 28nm to 20nm is a bit overrated and makes more sense economically (smaller die size) than in perf/watt.

Didn't Nvidia just prove (with Maxwell) that a change in uarch can be as effective as a change to a smaller node, or more (perf/watt wise)?

AMD's GCN is 3 years old. Maxwell was in development for 3-4 years as a brand-new architecture to succeed Kepler. AMD is too cash-strapped to keep developing such advanced architectures from the ground up every 3 years. GCN is here for another 3 years, and it will continue improving with node shrinks and small advancements in IPC. Node shrinks will take care of power consumption and transistor density, allowing AMD to increase performance. If all you care about is performance/watt, then NV is the go-to brand for another 2-3 years until AMD revamps GCN entirely. With AMD's limited cash flow and R&D budget, they cannot afford to do what NV does - which is design brand-new architectures every 2 years. That makes sense, since NV is primarily a graphics company but AMD isn't.

Discrete Desktop GPUs for AMD are less than 10% of their stock price. AMD cannot go ahead and spend 90% of their R&D budget on graphics like NV can. For this reason AMD needs to rely on node shrinks and advanced memory designs like HBM to compete with Maxwell's architecture.

If you look at GCN 1.2 in Tonga, memory bandwidth efficiency is up 40%, color fill-rate is up 50%, tessellation performance is up 90% over Tahiti. The reason these advantages hardly show up in 285 is because in most games it's constrained by other factors (shader performance for example). Once AMD incorporates these advancements into GCN 2.0 and adds more shaders and memory bandwidth, these advantages in Tonga will suddenly show up in a larger GPU. Basically if your major bottleneck is not geometry performance or memory bandwidth, then these advancements in Tonga are just wasted for now. AMD is simply using Tonga as a test bed for 300 series.
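The bottleneck argument above can be sketched as a toy model: frame rate is gated by the slowest pipeline stage, so Tonga's front-end gains stay invisible while shading remains the limiter. All numbers are invented for illustration:

```python
# Frame rate limited by the slowest pipeline stage (toy numbers).
def fps(shader, geometry, bandwidth):
    return min(shader, geometry, bandwidth)  # slowest stage wins

tahiti = fps(shader=60, geometry=70, bandwidth=80)
tonga = fps(shader=60, geometry=70 * 1.9, bandwidth=80 * 1.4)  # +90%, +40%
print(tahiti, tonga)  # same result: both are shader-bound
```

Only once the shader stage is widened (a bigger GPU) would the geometry and bandwidth headroom start paying off, which is the point being made about the 300 series.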
 

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
Actually, Kepler was a derivative of Fermi (with some major changes, like the move to software scheduling), and Maxwell is rejigged Kepler with greater control granularity when it comes to voltage, clock speed, and switching parts of the GPU off.

The TH review actually showed this, although that thread was lost in the internet brand wars.

The following chart is from the Mullins/Beema launch in April and covers AMD GPUs and CPUs:

http://images.anandtech.com/doci/7974/Screen-Shot-2014-04-29-at-1.08.08-AM_575px.jpg




Look at the power saving features in Maxwell and look at what is in that chart.

AMD will enable tech similar to Maxwell's, but they seem to be behind Nvidia in time to market.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Look at the power saving features in Maxwell and look at what is in that chart.

AMD will enable tech similar to Maxwell's, but they seem to be behind Nvidia in time to market.

What happens if there is no magic slide dust to be found? And that Maxwell is simply a better uarch in all ways.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
What happens if there is no magic slide dust to be found? And that Maxwell is simply a better uarch in all ways.

Then AMD's best card outperforms the 980 by only 10-20% from now until 2016, GM200 crushes AMD's best card by 20-30%, and we all enjoy new mid-range deals in price/performance due to AMD having to lower prices to compete in the sub-$350 market with the 390 series, while NV rakes in major profits with GM200 derivatives. However, considering the 970 matches an after-market 290 and the 980 only manages to beat a 290X by 20-21% at $550, the bar has not been set very high. When was the last time AMD did not have a response to NV's GPUs? NV has tended to lead by 15-18% at the flagship level with the 480/580/780Ti, but AMD has hung in there. The more critical aspect for AMD is getting wins in the notebook sector. This is where the 970M and 980M are killer chips.

One major advantage AMD has over NV is that their 2nd-best card is barely neutered compared to NV's. There will likely be a big difference in performance between GM200's top part and the 2nd card from the top. For AMD though, their 2nd-best part will probably offer 95-97% of the performance of the top part once both are overclocked, and be priced so low that you could buy almost 2 of them for what the top flagship Maxwell will cost - basically a repeat of 4870s vs. 280, 6950s vs. 580, 7950s vs. 680, R9 290s vs. 780Ti.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
So all in all, you agree with what you quoted from him. :thumbsup:

Right, but in many of his posts he is panicking, implying that he doesn't think AMD even has a response beyond the next 2 quarters. I don't think the best AMD can deliver is just 10-20% faster than the 980 from now until 2016. I think over the next 2 years AMD will squeeze out 30-40% more performance over the 980, first with the 390X and then its follow-up. And his statement about Maxwell being better in all ways is already not true, because it's worse in compute per mm2 (Ryse: Son of Rome) and of course the DP performance of consumer Maxwell is atrocious. And I expect AMD to have 6-8GB card options, unlike the 970/980 right now.

The 7970Ghz is 77% faster than the 6970.
The R9 290X is 29% faster than the 7970Ghz.
The move from HD5870 (September 2009) to an HD7970Ghz (June 2012) is a 2.3x increase in performance in 2.5 years.

Since R9 290X came out October 2013, I fully expect AMD to have a card 50% faster than R9 290X by December 2015 based on historical data.
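As a sanity check on the extrapolation above, the historical figures can be turned into a compound annual growth rate. A quick sketch using the poster's own numbers (HD 5870, Sep 2009, to HD 7970 GHz, Jun 2012, is roughly 2.75 years; R9 290X, Oct 2013, to Dec 2015 is roughly 2.2 years):

```python
# ~2.3x over ~2.75 years implies roughly 35%/year; compounding that
# forward from the R9 290X launch suggests the quoted 50% figure is
# conservative relative to the 2009-2012 pace.
cagr = 2.3 ** (1 / 2.75)          # implied yearly performance growth
projected = cagr ** 2.2           # Oct 2013 -> Dec 2015
print(f"yearly gain: {cagr - 1:.0%}, projected gain by Dec 2015: {projected - 1:.0%}")
```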
 

Abwx

Lifer
Apr 2, 2011
11,143
3,840
136
What happens if there is no magic slide dust to be found? And that Maxwell is simply a better uarch in all ways.

The crude numbers say that it's about the same as the previous gen: fewer execution units, but clocks cranked up by 30% and 23% respectively for the 970 and 980 compared to the previous gen.
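Those ratios can be cross-checked against the published core counts and base clocks (exact percentages depend on which SKUs are paired and whether base or boost clocks are compared, so treat this as a rough check rather than a confirmation of the figures above):

```python
# Published core counts and base clocks (MHz) for the cards in question.
cards = {
    "GTX 780":    (2304, 863),
    "GTX 780 Ti": (2880, 875),
    "GTX 970":    (1664, 1050),
    "GTX 980":    (2048, 1126),
}

def raw_throughput(name):
    cores, mhz = cards[name]
    return cores * mhz  # ignores Maxwell's per-core IPC gains

clock_970 = cards["GTX 970"][1] / cards["GTX 780"][1] - 1       # ~ +22%
clock_980 = cards["GTX 980"][1] / cards["GTX 780 Ti"][1] - 1    # ~ +29%
ratio = raw_throughput("GTX 980") / raw_throughput("GTX 780 Ti")
print(f"980 vs 780 Ti: clock +{clock_980:.0%}, raw cores*clock {ratio:.2f}x")
```

Raw cores x clock actually lands slightly below the 780 Ti, which supports the point that Maxwell's gains come from per-core efficiency rather than brute width.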
 

dacostafilipe

Senior member
Oct 10, 2013
772
244
116
What happens if there is no magic slide dust to be found? And that Maxwell is simply a better uarch in all ways.

Because there's no magic fairy dust inside Maxwell, it will not stay the "best uarch" forever... it's not like the Maxwell arch is a revolution compared to Kepler. It's just a well-made, efficiency-trimmed evolution of Kepler.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Right, but in many of his posts he is panicking, implying that he doesn't think AMD even has a response beyond the next 2 quarters. I don't think the best AMD can deliver is just 10-20% faster than the 980 from now until 2016. I think over the next 2 years AMD will squeeze out 30-40% more performance over the 980, first with the 390X and then its follow-up. And his statement about Maxwell being better in all ways is already not true, because it's worse in compute per mm2 (Ryse: Son of Rome) and of course the DP performance of consumer Maxwell is atrocious. And I expect AMD to have 6-8GB card options, unlike the 970/980 right now.

The 7970Ghz is 77% faster than the 6970.
The R9 290X is 29% faster than the 7970Ghz.
The move from HD5870 (September 2009) to an HD7970Ghz (June 2012) is a 2.3x increase in performance in 2.5 years.

Since R9 290X came out October 2013, I fully expect AMD to have a card 50% faster than R9 290X by December 2015 based on historical data.

What you've failed to point out is that since the 7xxx series arrived in Jan 2012, AMD have mostly upped performance by burning even more watts. They haven't produced a more efficient architecture. They can't keep doing that - the 290X already burns silly watts. Nvidia, on the other hand, have produced more efficient architectures, with Kepler significantly better than Fermi, and Maxwell significantly better than Kepler.

You can't say "oh, AMD will just move to a smaller node to be competitive", because Nvidia will also move to that node, and then their more efficient architecture means they stay just as far in front as they are now. AMD's not going to fix anything by going to a smaller node; they need a new architecture comparable to Maxwell, and they needed it yesterday. Without that they can't compete.
 

rtsurfer

Senior member
Oct 14, 2013
733
15
76
What you've failed to point out is that since the 7xxx series arrived in Jan 2012, AMD have mostly upped performance by burning even more watts. They haven't produced a more efficient architecture. They can't keep doing that - the 290X already burns silly watts. Nvidia, on the other hand, have produced more efficient architectures, with Kepler significantly better than Fermi, and Maxwell significantly better than Kepler.

You can't say "oh, AMD will just move to a smaller node to be competitive", because Nvidia will also move to that node, and then their more efficient architecture means they stay just as far in front as they are now. AMD's not going to fix anything by going to a smaller node; they need a new architecture comparable to Maxwell, and they needed it yesterday. Without that they can't compete.
Maybe some of us just care about performance.
For us as long as AMD has the performance, they are still competing.
 

Alatar

Member
Aug 3, 2013
167
1
81
This story is garbage. Where in their sources does Lisa Su say that 20nm graphics are coming? Actually go and read the VentureBeat article, which says nothing of the sort - and then read the Fudzilla article. The closest the FZ one comes is:



That isn't a quote from LS- it's very unclear whether FZ is paraphrasing Lisa Su, or whether they are just adding context to her current situation. I believe that it is the latter. For a start, "K11"? There is no K11. There is K12, and there are lots of other products- but none of them are K11.

This is just standard internet echo-chamber nonsense, and WCCFtech intentionally misinterpreting things in order to get a few hundred clicks. Again.

Came here to post this but since it has been said I'll just quote it.

There's nothing in the articles wccf linked to that even implies that Su said that AMD was going to release 20nm radeons in the next few quarters.
 

SolMiester

Diamond Member
Dec 19, 2004
5,331
17
76
You gotta give AMD a chance since they were already class leading for 1 year at 4K.

Huh, on one hand you say we still need 2 cards for 4K, and yet here you are singing AMD's praises once again over class-leading 4K?... If one card isn't gonna do it, then neither is "class-leading"!
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,001
126
If that were true AMD would be the sales leader.


You have to know how fanboyism and emotions play into purchasing decisions for many people. There are a bunch of people that post here that I couldn't imagine ever seeing an AMD part in their rig.



On topic: I've been using my 7970 since it launched in January 2012. The 290's, 780's, and 9xx Maxwell parts are fine and dandy. But I'm waiting for the real next gen (though the closer R9 290's get to $200, the more tempting they become). I'm using water, I'm ready to add a 20nm GPU to the loop. :)
 

Grooveriding

Diamond Member
Dec 25, 2008
9,108
1,260
126
AMD is always first to new nodes. We'll most likely get GPUs on a new node from them before nvidia. I'd like to see a 20nm AMD flagship GPU go against nvidia using a huge GM200 28nm chip.

We are due for a real absolute performance increase. We have not gotten one since the Titan released. Just a few percent with 780ti, then a few more percent from 980.

It's time for the new process with 20nm AMD and/or GM200 on 28nm and some solid performance gains to start showing up.