14nm r9 nano question

Hugo Drax

Diamond Member
Nov 20, 2011
5,647
47
91
If AMD comes out with the same R9 Nano but at 14nm, I would imagine a significant drop in power requirements. How much do you think the drop would be?

The die would be about half the size, so maybe 120W for the same level of performance as the current R9 Nano?
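
Quick back-of-envelope (all the multipliers here are just my guesses, nothing official), treating the shrink purely as a performance-per-watt improvement:

```python
# Back-of-envelope guess at 14nm power for today's R9 Nano performance.
# The perf/watt multipliers below are assumptions, not official figures.

nano_tdp_w = 175  # current 28nm R9 Nano peak TDP

for perf_per_watt_gain in (1.5, 2.0):
    same_perf_tdp = nano_tdp_w / perf_per_watt_gain
    print(f"{perf_per_watt_gain:.1f}x perf/watt -> ~{same_perf_tdp:.0f}W "
          f"for current Nano-level performance")

# 1.5x gives ~117W (close to my 120W guess); 2.0x gives ~88W.
```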
 

Hugo Drax

Diamond Member
Nov 20, 2011
5,647
47
91
That would be insane. I am replacing an R9 285, which has a peak TDP of 190W, with the R9 Nano, which has a peak TDP of 175W. I suspect HBM has a lot to do with this. You are talking about a card with double the performance of the R9 285 but a lower TDP. Pretty impressive.

Half of this in 2016 would be insane. I would love to see a video card with this level of performance running at a TDP of 80W.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Is that true? That would make the Nano even more attractive.

But it wouldn't be "current" Nano performance. More like Tonga performance in relation to 14/16FF products. You could then get a Nano 2.0 with much better performance at ~175W
 

tential

Diamond Member
May 13, 2008
7,348
642
121
But it wouldn't be "current" Nano performance. More like Tonga performance in relation to 14/16FF products. You could then get a Nano 2.0 with much better performance at ~175W
This....
Seriously. A node shrink like this is a huge jump in performance, but devs will still target the average level of performance. So it's the best time to own a high-end card, as it will handle everything at max settings.
 

Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
This....
Seriously. A node shrink like this is a huge jump in performance, but devs will still target the average level of performance. So it's the best time to own a high-end card, as it will handle everything at max settings.

Not necessarily. If AMD chooses to invest the benefits from two node shrinks into energy efficiency, then they can make a GPU with current Nano performance at half the TDP. They could also do a Nano 2.0 with the same TDP but higher performance. One wouldn't exclude the other (although making the Nano 2.0 is probably easier with the same TDP but higher performance).

But the new Nano 1.0 would not have Tonga-like performance; it would be like this generation's Nano but with a much lower TDP. And that level of performance is most likely more than enough for most people in 2016 and beyond at 1080p.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
Not necessarily. If AMD chooses to invest the benefits from two node shrinks into energy efficiency, then they can make a GPU with current Nano performance at half the TDP. They could also do a Nano 2.0 with the same TDP but higher performance. One wouldn't exclude the other (although making the Nano 2.0 is probably easier with the same TDP but higher performance).

But the new Nano 1.0 would not have Tonga-like performance; it would be like this generation's Nano but with a much lower TDP. And that level of performance is most likely more than enough for most people in 2016 and beyond at 1080p.
I'm saying I'd rather get the faster GPU, which will be very fast, over an energy-efficient GPU that's only as good as the current GPUs out now.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Tonga performance was in relation to the current series compared with the future series. In other words, the current Nano would be some $200-400 lower/middle-class product on 14/16FF, while the Nano 2.0 would be the $700-750 or whatever high-end product.
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
I worry because it looks like Intel gained so very little these past couple of nodes. If Intel isn't getting much, then why would TSMC?

Intel's 22nm jump might have just been a fluke, but now their 14nm?

The gains are nothing like they used to be. When Intel went from 45nm to 32nm, that was huge. There was improvement in every single way. It is really scary.

I think the path moving forward is architectural improvements > node shrinks.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
GPUs should have big gains due to their nature. CPUs are entirely different.

Intel's 22 to 14nm also goes from 18 to 28 cores on the server side.
 

Hugo Drax

Diamond Member
Nov 20, 2011
5,647
47
91
I worry because it looks like Intel gained so very little these past couple of nodes. If Intel isn't getting much, then why would TSMC?

Intel's 22nm jump might have just been a fluke, but now their 14nm?

The gains are nothing like they used to be. When Intel went from 45nm to 32nm, that was huge. There was improvement in every single way. It is really scary.

I think the path moving forward is architectural improvements > node shrinks.

Intel sticking a GPU on the chip probably does not help. I bet if they deleted the GPU, you would have more room for CPU transistors, which means more CPU performance.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
I worry because it looks like Intel gained so very little these past couple of nodes. If Intel isn't getting much, then why would TSMC?

Intel's 22nm jump might have just been a fluke, but now their 14nm?

The gains are nothing like they used to be. When Intel went from 45nm to 32nm, that was huge. There was improvement in every single way. It is really scary.

Intel gained quite a bit from newer process nodes. It's just that in the desktop segment most of us here care about, they used it to pad profit margins instead of providing a significantly better product.

If you look at the high-margin server segment and the very competitive mobile segment, gains are substantially bigger. The core counts on Intel's large-die server parts have gone way up - this is a better harbinger of what is in store for GPUs, because GPUs are all massively parallel in nature. On the mobile side, the focus with new process nodes has been improved performance/watt.

The stagnation on desktop CPUs is due to the fact that there seems to be an inherent clockspeed wall somewhere in the 4-5 GHz range; above that, power consumption starts to shoot up exponentially for small clock gains. This applies to all vendors and all process nodes. IPC improvements are becoming harder and harder for Intel, with all the low-hanging fruit plucked long ago. And Intel is reluctant to give mainstream users more than 4 physical cores. Hopefully there will be enough competition from AMD's Zen to change this...
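
To put some toy numbers on that wall: dynamic power goes roughly as C·V²·f, and past the efficient part of the curve you have to keep raising the voltage to hold higher clocks, so power climbs much faster than frequency. A rough sketch with a made-up voltage curve (not real silicon data):

```python
# Toy model of why desktop clocks stall around 4-5 GHz.
# Dynamic power ~ C * V^2 * f; the voltage needed at each frequency below
# is invented purely for illustration, not measured from any real CPU.

def required_voltage(freq_ghz):
    # Assume ~1.0V holds 3 GHz, then voltage rises steeply with clock.
    return 1.0 + 0.12 * max(0.0, freq_ghz - 3.0) ** 1.5

base_power = required_voltage(3.0) ** 2 * 3.0  # arbitrary units, C folded in

for f in (3.0, 4.0, 4.5, 5.0, 5.5):
    v = required_voltage(f)
    power = v ** 2 * f
    print(f"{f:.1f} GHz: {f / 3.0:.2f}x clock, ~{power / base_power:.2f}x power")
```

With this made-up curve, 5.5 GHz is only ~1.8x the clock of 3 GHz but roughly 4x the power, which is the general shape of the problem even if the exact numbers vary per chip.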
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
The problem is that CPUs don't scale with more cores unless those cores are needed. GPUs, with die shrinks, can simply add more cores and gain more performance. CPUs, on the other hand, cannot add more cores and expect more performance unless the application can take advantage of it. Games, which most of us here are talking about, won't see significant gains past 4-6 cores.

Maybe in time that will change, but for now, that is where we are at. Adding transistors only helps if there is something those extra transistors can actually help with.
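
Amdahl's law is the usual way to put numbers on that; the 60% parallel fraction below is just an assumed figure for a game-like workload, not a measurement:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n)
# p = fraction of the work that can run in parallel (assumed, not measured).

def amdahl_speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

p = 0.60  # assume 60% of a game's frame time parallelizes cleanly
for cores in (2, 4, 6, 8, 16):
    print(f"{cores:2d} cores: {amdahl_speedup(p, cores):.2f}x speedup")

# 2 -> 1.43x, 4 -> 1.82x, 6 -> 2.00x, 8 -> 2.11x, 16 -> 2.29x: flattens fast.
```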
 

boozzer

Golden Member
Jan 12, 2012
1,549
18
81
I'm saying I'd rather get the faster GPU, which will be very fast, over an energy-efficient GPU that's only as good as the current GPUs out now.
Fury performance in a laptop has me salivating :biggrin: My 980M is like a 770; a Fury would be a huuuuuuuuge performance boost.
 
Feb 19, 2009
10,457
10
76
Lisa Su has said that next-gen GCN on the new FinFET node will double power efficiency compared to the current stuff.

Given it's a huge node jump, that's expected.

We are also focused on delivering our next generation GPUs in 2016 which is going to improve performance per watt by two times compared to our current offerings, based on design and architectural enhancements as well as advanced FinFET process technology.

– Lisa Su, AMD (latest conference call)
 

tential

Diamond Member
May 13, 2008
7,348
642
121
Can't wait to quote this when a chip that performs as well as the Fury comes out for laptops once 16nm laptop GPUs are launched.
I was told this same thing by raghu, that I'd see HBM mobile chips that can do 4K...

As always with AMD at this point, I'll believe it when I see it. There are so many hurdles in mobile that AMD could be giving away $1,000 with every chip and still have a hard time selling...
 

NTMBK

Lifer
Nov 14, 2011
10,401
5,638
136
Why the heck would anyone even want 4K in mobile? On a 17" display, can you really tell the difference from 1080p?
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
Can't wait to quote this when a chip that performs as well as the Fury comes out for laptops once 16nm laptop GPUs are launched.

Quote all you want. I agree you will see a Nano 2 on the 16nm process, but not likely in a laptop next year, because Nvidia will have a lower-powered GTX 980 Ti laptop chip that most laptop companies will choose first. AMD is dead in the laptop market unless they lower power consumption by 2x.
 

Hugo Drax

Diamond Member
Nov 20, 2011
5,647
47
91
Not necessarily. If AMD chooses to invest the benefits from two node shrinks into energy efficiency, then they can make a GPU with current Nano performance at half the TDP. They could also do a Nano 2.0 with the same TDP but higher performance. One wouldn't exclude the other (although making the Nano 2.0 is probably easier with the same TDP but higher performance).

But the new Nano 1.0 would not have Tonga-like performance; it would be like this generation's Nano but with a much lower TDP. And that level of performance is most likely more than enough for most people in 2016 and beyond at 1080p.

This will be a game changer for PC gaming: 80W boards that run cool and silent, with the performance of today's R9 Nano, just a few months from now.

AMD has squeezed all they can out of 28nm with the R9 Nano, so 14nm cannot come soon enough. Costs will plummet as well.