What does AMD have to fight the 780?


SiliconWars

Platinum Member
Dec 29, 2012
2,346
0
0
AMD has nothing to fight the GTX 780 with.
They can just sit there, watch, and run game bundle promos like they have done lately, and hope people will buy their 2-year-old 7970.

Tahiti (7950/7970) is too inefficient, so they can't make a bigger core based on it to match the GTX 780. If they did, it would mean too much heat for the silicon and too much heat for the fan to remove. The GTX 680 was 195W; the GTX 780 has increased that to 250W.
The 7970 is already at 250W. A 7980 (example name) would be a minimum 310W+ TDP on one single die. Good luck with that one.

Let me be blunt - you don't know what you're talking about. Big dies do not have to mean higher TDPs; in fact, the more area you use, the lower the clocks you can get away with, and the lower the TDP can be, relatively speaking.

The reason Titan and the 780 have a higher TDP is that they are *massively* bigger than the 680: an 88% bigger die for a 20% higher TDP and 25-40% higher performance. Does it make sense now, or do you need more schooling on why the 8970 could beat the 780 with a ~550 mm2 die?
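To put rough numbers on that ratio argument, here is a quick sanity check in Python that simply takes the figures quoted above (88% bigger die, 20% higher TDP, 25-40% higher performance) at face value rather than measured data:

```python
# Quick sanity check of the big-die trade, using the ratios claimed above.

die_ratio = 1.88           # GK110 vs GK104 die area, as claimed above
tdp_ratio = 1.20           # TDP increase, as claimed above
perf_ratios = (1.25, 1.40) # low and high end of the quoted performance gain

for perf in perf_ratios:
    perf_per_watt = perf / tdp_ratio
    perf_per_mm2 = perf / die_ratio
    print(f"perf x{perf:.2f}: perf/W x{perf_per_watt:.2f}, perf/area x{perf_per_mm2:.2f}")

# perf/W comes out *ahead* (x1.04 to x1.17) even though absolute TDP rises;
# what the big die gives up is perf/area (x0.66 to x0.74), i.e. cost, not heat.
```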
 
Last edited:

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
You are clueless.
Tell me how they are gonna do it when GK110 is over 20% more efficient than Tahiti, yet saw a 55W TDP increase?
They can't just add more area and more cores to a really inefficient architecture.

It won't work. Stop dreaming and face reality. Why do you think AMD themselves said the 7970 is gonna be their greatest high-end single GPU? Because they can't go any further.

At least get your facts straight before trying to "school" people. The 8000 series will be a new architecture, maybe on 20nm, maybe on 28nm
 
Last edited:

wand3r3r

Diamond Member
May 16, 2008
3,180
0
0
You are clueless.
Tell me how they are gonna do it when GK110 is over 20% more efficient than Tahiti, yet saw a 55W TDP increase?
They can't just add more area and more cores to a really inefficient architecture.

It won't work. Stop dreaming and face reality. Why do you think AMD themselves said the 7970 is gonna be their greatest high-end single GPU? Because they can't go any further.

At least get your facts straight before trying to "school" people. The 8000 series will be a new architecture, maybe on 20nm, maybe on 28nm

They said it'll be their leader for the remainder of the year. Later there have been rumors of 20nm this year, but that could mean just the lower end on 20nm. You think it'll be their "greatest" GPU?
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
They said it'll be their leader for the remainder of the year. Later there have been rumors of 20nm this year, but that could mean just the lower end on 20nm. You think it'll be their "greatest" GPU?

Sorry, I meant single GPU on GCN. :)
A new architecture and/or a die shrink to 20nm (8970) will be much better, of course. But Q4 at the earliest.
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
You are clueless.
Tell me how they are gonna do it when GK110 is over 20% more efficient than Tahiti, yet saw a 55W TDP increase?
They can't just add more area and more cores to a really inefficient architecture.

So I guess Kepler is an equally inefficient architecture? Too bad both companies made extremely inefficient architectures this generation :(

perfwatt_1920.gif
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
How many of you think there will be 20nm GPUs by Q4? I'm just not sure we will be seeing them in 2013 at all.
 

Granseth

Senior member
May 6, 2009
258
0
71
How many of you think there will be 20nm GPUs by Q4? I'm just not sure we will be seeing them in 2013 at all.

I think you will be right, though I expect a paper launch of some parts before the end of the year and a real launch in early 2014.
 

rgallant

Golden Member
Apr 14, 2007
1,361
11
81
"i just am not sure we will be seeing them in 2013 at all."

Don't know, but they might be better off letting NV come out first and targeting the space between NV's small dies and the big pro-line dies.
- Win the gaming card market with a medium-size die, since NV can't push its gaming parts too close to its pro chips or it loses the big-bucks sales in the ultra range, which cover some of the R&D for the low-volume parts.
 

SiliconWars

Platinum Member
Dec 29, 2012
2,346
0
0
You are clueless.
Tell me how they are gonna do it when GK110 is over 20% more efficient than Tahiti, yet saw a 55W TDP increase?
They can't just add more area and more cores to a really inefficient architecture.

Wow, you still don't get it? Tahiti was already heavy on bandwidth, so the only thing AMD needs to increase is shaders. That will take up *much* less area and TDP than increasing the memory bus by 50% does.

Also...

54906.png


Looks like Tahiti is in between the 680 and 780 in power draw. Now add some shaders to that (area), drop the clock speeds, and you'll get... something quite similar. Here's the clincher - the 7970 is a year and a half old. AMD isn't even trying, dig?
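As a rough illustration of the "add shaders, drop clocks" trade being argued here (all of the shader counts, clocks and voltages below are invented for the example, not leaked 8970 specs): dynamic power scales roughly with units x frequency x voltage^2, while throughput scales with units x frequency, so a wider die can claw back TDP by running slower.

```python
# Toy model only: dynamic power ~ units * frequency * voltage^2,
# throughput ~ units * frequency. All figures below are invented.

def rel_power(units, ghz, volts):
    return units * ghz * volts ** 2   # arbitrary units

def rel_perf(units, ghz):
    return units * ghz                # assumes ideal shader scaling

# Baseline: a Tahiti-like part (2048 shaders @ 1.00 GHz, 1.17 V assumed)
base_power = rel_power(2048, 1.00, 1.17)
base_perf = rel_perf(2048, 1.00)

# Hypothetical wider part: +37.5% shaders, clocks and voltage dialed back
wide_power = rel_power(2816, 0.90, 1.10)
wide_perf = rel_perf(2816, 0.90)

print(f"perf:  x{wide_perf / base_perf:.2f}")    # ~x1.24
print(f"power: x{wide_power / base_power:.2f}")  # ~x1.09

# In this toy model the wider, slower chip gains ~24% throughput for ~9% more
# power - the shape of the argument, not a prediction of real silicon.
```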

It won't work. Stop dreaming and face reality. Why do you think AMD themselves said the 7970 is gonna be their greatest high-end single GPU? Because they can't go any further.
I'm going to tag this and come back and laugh in your face in 6 months' time.

At least get your facts straight before trying to "school" people. The 8000 series will be a new architecture, maybe on 20nm, maybe on 28nm
:D

Maybe if you had a clue you'd be worth responding to further. That's a negative on both counts.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
So I guess Kepler is an equally inefficient architecture? Too bad both companies made extremely inefficient architectures this generation :(

perfwatt_1920.gif

At least post the relevant chart.

GK110 (GTX 780) is at 100%, the 7970 GHz at 79%. That's 27% better performance/watt for GK110 than Tahiti. Now think about AMD further increasing the core count on the same architecture. GK110 saw 55W more TDP; how high a TDP would AMD end up with? Yeah, it can't be done with the current architecture.
Dual GPU is their only option to fight Titan or the GTX 780. Spread the cores across 2 dies and therefore spread the heat around as well. One big die can't take this sort of heat.

perfwatt.gif
 
Last edited:

Will Robinson

Golden Member
Dec 19, 2009
1,408
0
0
Wow, nice thread crap Cloudfire... you are very good at that.
Didn't you see the mod message telling you to STFU for a while?
 

Granseth

Senior member
May 6, 2009
258
0
71
At least post the relevant chart.

GK110 (GTX 780) is at 100%, the 7970 GHz at 79%. That's 27% better performance/watt for GK110 than Tahiti. Now think about AMD further increasing the core count on the same architecture. GK110 saw 55W more TDP; how high a TDP would AMD end up with? Yeah, it can't be done with the current architecture.
Dual GPU is their only option to fight Titan or the GTX 780. Spread the cores across 2 dies and therefore spread the heat around as well. One big die can't take this sort of heat.
Are you sure you are reading these charts? To me it looks like Tahiti can be much more efficient than GK110.

About 6% more efficient than GK110, actually.

So maybe there are a few things one can do to make a power-efficient chip?

Please, can you stop making things up and only participate if you have something smart to say.
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
Don't know, but they might be better off letting NV come out first and targeting the space between NV's small dies and the big pro-line dies.
- Win the gaming card market with a medium-size die, since NV can't push its gaming parts too close to its pro chips or it loses the big-bucks sales in the ultra range, which cover some of the R&D for the low-volume parts.

Honestly, I think Nvidia would wait and wait, hoping AMD launches first. Unless they know for sure that AMD can't touch it, I think Nvidia would be afraid to go first. Perhaps this is a lesson learned back when AMD forced them to drop the price of their GTX 280 by hundreds of dollars overnight.

We all know the GK104 was gonna be the 670 Ti; to me it seemed they had everything ready (even box designs) but they held off until AMD showed their hand. Since Nvidia had the performance crown with the 580, the ball was in their court. They could wait it out as long as it took with no pressure. I know a lot of people say Nvidia was late, but I think this is because they were not confident and were way more concerned with AMD than they ever let on.

After Tahiti launched we heard whispers from Nvidia saying, "we expected more from AMD". Some people took that as talking smack, but I honestly think that statement tells the whole story. Nvidia was in a bind because there was no way they could launch their big die in the foreseeable future, and this had them afraid and insecure. The GK104 was all they had to work with, so they reluctantly put together a GTX 670 Ti, but they were hesitant, too unsure of what AMD's lineup would be like or how badly the GK104 would do against it. Nvidia had no clue what AMD was coming out with, but obviously they were truly concerned. This was most likely because AMD had been executing flawlessly lately; since the 4000 series they had been on a roll. I believe Nvidia held out on purpose this time - they held out to let AMD launch first. Once they saw Tahiti, they were completely surprised, because they had feared the situation they were in. They were afraid that Tahiti was gonna be much more powerful.

See, AMD's flagships the past few generations were great. Nvidia's big dies barely edged them out. So I think when Nvidia said they expected more, they meant "we could finally breathe again". Once they absorbed the 7970's performance they set out to position their GK104 against it. Quickly they discovered that not only could they keep up with the 7970, with the right clocks they could surpass its performance. The 670 Ti was scrapped entirely and they worked the GK104 into becoming the GTX 680. Things really worked out well for Nvidia.

Was it luck? Tahiti was a transition to a much more Fermi-like GPU. AMD had little choice but to go this route if they were ever to keep up. Things had to change. Tahiti wasn't bad at all if you consider how badly the original Fermi went. Actually, Tahiti on a hardware level was a perfect transition. Look how much their software engineers have been able to squeeze out of it since launch. At launch, though, it was a different story - one that could have turned out very, very differently.

This is why I believe that Nvidia will not launch first at all. Unless they have an architecture that they are extremely confident can't be touched by AMD, I don't see them doing it. As long as they have the fastest GPUs out, there is little pressure on Nvidia to do so.

my take
 
Last edited:

Will Robinson

Golden Member
Dec 19, 2009
1,408
0
0
That's an interesting and probably fairly accurate account of what happened.
I personally would prefer AMD stick to the sub-$500 market for single GPUs and keep their strategy of using dual chips to compete in the ultra-performance bracket, rather than serve up $600-$800 single-GPU cards.
Staying under $500 means most enthusiasts can upgrade every generation (which is techie heaven), but who wants to drop $1000 on a card that gets superseded every refresh?
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
At least post the relevant chart.

GK110 (GTX 780) is at 100%, the 7970 GHz at 79%. That's 27% better performance/watt for GK110 than Tahiti. Now think about AMD further increasing the core count on the same architecture. GK110 saw 55W more TDP; how high a TDP would AMD end up with? Yeah, it can't be done with the current architecture.
Dual GPU is their only option to fight Titan or the GTX 780. Spread the cores across 2 dies and therefore spread the heat around as well. One big die can't take this sort of heat.

It is relevant. You said that GCN was inefficient as an architecture as a whole, and I showed you that there is a more power-efficient card on the GCN architecture than there is on Kepler. Use some logic and extrapolation. You called a whole architecture inefficient, yet you fixated on comparing specific SKUs.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
It is relevant. You said that GCN was inefficient as an architecture as a whole, and I showed you that there is a more power-efficient card on the GCN architecture than there is on Kepler. Use some logic and extrapolation. You called a whole architecture inefficient, yet you fixated on comparing specific SKUs.

As you may have noticed with the chart, the higher end cards are less efficient. The larger you go, the less efficient they become.
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
As you may have noticed with the chart, the higher end cards are less efficient. The larger you go, the less efficient they become.

That's probably why Titan is more efficient than the 660Ti at 1920x1200, and with high-performance cards you should use high resolutions.
At a resolution that's suitable for a card like Titan, it's more efficient than any small-die Kepler card except for the dual-GPU GTX 690, which uses binned GK104 chips.
perfwatt_2560.gif


So the veracity of your claim is busted.
 
Last edited:

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
That's probably why Titan is more efficient than the 660Ti at 1920x1200, and with high-performance cards you should use high resolutions.
At a resolution that's suitable for a card like Titan, it's more efficient than any small-die Kepler card except for the dual-GPU GTX 690, which uses binned GK104 chips.
perfwatt_2560.gif


So the veracity of your claim is busted.

No, what I said was completely accurate. I said that the chart showed that the higher end cards are less efficient, which the chart did show.

But as you can see by this chart, the Titan is still more efficient. This chart shows clock speed may have a lot to do with efficiency as well. The lower-clocked versions of almost every card are more efficient, and Titans are clocked low.
 
Last edited:

Elfear

Diamond Member
May 30, 2004
7,163
819
126
No, what I said was completely accurate. I said that the chart showed that the higher end cards are less efficient, which the chart did show.

But as you can see by this chart, the Titan is still more efficient. This chart shows clock speed may have a lot to do with efficiency as well. The lower-clocked versions of almost every card are more efficient, and Titans are clocked low.

I think what Lepton was trying to counter was the statement that GCN is a "much less efficient" architecture compared to Kepler. The 7970 and 680 are almost dead even in performance per watt and straight performance. The GHz card is less efficient because of the clocks, but that doesn't mean the architecture itself is inherently inefficient compared to Kepler.

I think that what some people are getting at is if AMD had the motivation and possibly the cash to do so, they could come out with something at the same TDP as the 7970 but with 10-20% more performance. They've had a year and a half to optimize GCN and we see some of those fruits with the 7790. Their high-end refresh wouldn't have to be some 310W inefficient monster.

Of course that's all wishful thinking right now because AMD has stated they won't bring anything out until the end of 2013 or beginning of 2014.
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
No, what I said was completely accurate. I said that the chart showed that the higher end cards are less efficient, which the chart did show.

But as you can see by this chart, the Titan is still more efficient. This chart shows clock speed may have a lot to do with efficiency as well. The lower-clocked versions of almost every card are more efficient, and Titans are clocked low.

OK, continue with your false idea and try to find things to defend that idiocy.

PS: Titan's average clock in games is 976MHz, which is higher than both the 7970 and the 7850. Enough said.
BTW, I'm still waiting for a chart comparing performance per watt of different cards at 640x480. That chart would confirm that higher-end cards have less performance per watt, CPU bottleneck be damned.
 
Last edited:

parvadomus

Senior member
Dec 11, 2012
685
14
81
GK110 (GTX 780) is at 100%, the 7970 GHz at 79%. That's 27% better performance/watt for GK110 than Tahiti. Now think about AMD further increasing the core count on the same architecture. GK110 saw 55W more TDP; how high a TDP would AMD end up with? Yeah, it can't be done with the current architecture.
Dual GPU is their only option to fight Titan or the GTX 780. Spread the cores across 2 dies and therefore spread the heat around as well. One big die can't take this sort of heat.

You just can't talk about architecture efficiency by comparing a first-gen die (Tahiti) against a second-gen die (GK110 is just a refined GK100, which was never released, btw).
First of all, when AMD released Tahiti the process was very immature, so Tahiti is probably engineered to cope with a very leaky process - I mean it must have things like otherwise-unnecessary clock routing and signaling just to guarantee a high rate of functional dies per wafer.

There has also been a lot of discussion about the inherent inefficiency caused by choosing a suboptimal ratio of the different units inside a Tahiti die. For example, there are too many shaders for only 32 ROPs, and there is also a strange ROP <--> memory-channel configuration: the 32 ROPs cannot be mapped directly onto the six 64-bit memory channels, so some kind of crossbar is needed between them (which adds latency and decreases efficiency).
If you look at any review covering Cape Verde, Pitcairn and Tahiti, you will see that performance scales linearly from Cape Verde to Pitcairn, but not to Tahiti (going by shader count, Tahiti should be around 60% faster than Pitcairn, but it's only about 35%).

perfrel_2560.gif


Look how the 7870 is exactly 2x the 7770 (all functional units double and both run at 1GHz). Then you have the VTX3D 7970: it runs at 1050MHz, and it's still not 60% faster than Pitcairn (despite having 60% more shaders)...
So we have about 15-20% efficiency per watt lost just from choosing an incorrect ratio of functional units.
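To make the scaling point concrete, here is a rough check using the public shader counts (Cape Verde 640, Pitcairn 1280, Tahiti 2048) against the approximate relative performance described above (a 7870 at ~2x a 7770, a 7970 at ~35% over a 7870); the exact percentages will vary by review and game suite:

```python
# Public shader counts for the three GCN 1.0 dies
shaders = {"HD 7770 (Cape Verde)": 640,
           "HD 7870 (Pitcairn)":   1280,
           "HD 7970 (Tahiti)":     2048}

# Rough relative gaming performance implied by the post above
perf = {"HD 7770 (Cape Verde)": 1.0,
        "HD 7870 (Pitcairn)":   2.0,
        "HD 7970 (Tahiti)":     2.7}

base = "HD 7770 (Cape Verde)"
for card in shaders:
    shader_scale = shaders[card] / shaders[base]
    perf_scale = perf[card] / perf[base]
    print(f"{card}: x{shader_scale:.1f} shaders -> x{perf_scale:.1f} perf "
          f"({perf_scale / shader_scale:.0%} of linear)")

# Pitcairn lands at ~100% of linear scaling, Tahiti at only ~84% -
# the ~35% (rather than ~60%) gain over Pitcairn described above.
```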

On top of that, the 7970 GHz is only an overclocked 7970, probably with a higher stock vcore to guarantee stability. That, plus everything I wrote above, just kills efficiency across the board, and it's by no means because of the architecture.

I bet it would be very easy for AMD to release a GTX 780 contender with the same or better efficiency. The latest rumoured specs for the HD 8970 were 2560 shaders / 48 ROPs / 160 TMUs, and I think it should fit very well into a 250-watt TDP; more than that, it should consume less power than the 7970 GE.
Why would AMD not release this? Maybe they're busy with the PS4/Xbox, Kabini, Richland etc., or there's still too much stock of Tahiti-based video cards, or they're just close enough to the next process node that refreshing the current high-end is a waste of resources. I really don't know.
 
Last edited:

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Let's take this one more time:

AMD does not do big dies. They tried before. Failed big time.
Nvidia has ditched the power-hungry FP64 cores to make its chips more efficient for gaming. AMD has not. That results in higher power consumption and heat output. You can see that in the 680 vs the 7970.

As the chart shows, the higher the core count, the lower the efficiency. Add more voltage and higher clocks and you get an even less efficient GPU.

The same applies to Nvidia. Why do you think they built a new chip, aka GK110?
Take the GTX 680 vs the 7970, for example. The GTX 680 offers -5% performance but 22% less TDP, 195W vs 250W. That means AMD has to push up the thermal envelope by roughly 30% to match Kepler. It's that inefficient.

That continues with bigger chips as well. Say AMD built a ~500mm2 chip too. The GTX 780 is 250W. For GCN to match that performance they would need to raise the thermal envelope by 30% again: 250W * 1.3 = 325W.

When was the last time you guys saw 325W on a single die? It can't be done. Too much heat.
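Spelling that extrapolation out in Python, taking Cloudfire777's own figures (195W vs 250W, -5% performance, a 250W GTX 780 target) at face value; whether those are the right figures to use is exactly what the rest of the thread disputes:

```python
# Cloudfire777's extrapolation, written out with his own numbers.

tdp_680, tdp_7970 = 195.0, 250.0   # W, as stated above
perf_680_vs_7970 = 0.95            # "GTX 680 offers -5% performance"

# Implied perf/W advantage of GK104 over Tahiti under those assumptions
perf_per_watt_gap = (perf_680_vs_7970 / tdp_680) / (1.0 / tdp_7970)
print(f"GK104 perf/W vs Tahiti: x{perf_per_watt_gap:.2f}")        # ~x1.22

# If that gap carried over unchanged to a big GCN die chasing a 250W GTX 780:
tdp_needed = 250.0 * perf_per_watt_gap
print(f"TDP needed to match a 250W GK110: ~{tdp_needed:.0f}W")    # ~305W

# Rounding the gap up to 30% is how the 250W * 1.3 = 325W figure falls out;
# the counter-argument in the thread is that the perf/W gap itself is disputed.
```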
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
First of all: TDP does NOT equal power consumption.

The 7970 X Turbo is NOT a 348W TDP GPU.
The GTX 580 is NOT a 326W TDP GPU. It is 244W.

Second: That is from a heavily overclocked 7970 with a custom-built PCB and power phases. You won't see AMD building something like this and guaranteeing that it will function at those sorts of clocks.

Third: The maximum chart you posted does not reflect real-world use of this card, so the power consumption gets blown way out of proportion - which you can see if you compare it with this normal chart: http://tpucdn.com/reviews/HIS/HD_7970_X_Turbo/images/power_average.gif

"Maximum: Furmark Stability Test at 1280x1024, 0xAA. This results in a very high non-game power-consumption that can typically be reached only with stress-testing applications. The Card was left running the stress test until power draw converged to a stable value. We disabled the power-limiting system on cards with power-limiting systems or configured it to the highest available setting - if possible. We also used the highest single reading from a Furmark run that was obtained by taking measurements faster than the power limit could kick in."
 
Last edited:

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
First of all: TDP does NOT equal power consumption.

The 7970 X Turbo is NOT a 348W TDP GPU.
The GTX 580 is NOT a 326W TDP GPU. It is 244W.

Second: That is from a heavily overclocked 7970 with a custom-built PCB and power phases. You won't see AMD building something like this and guaranteeing that it will function at those sorts of clocks.
So, is it possible to have a GPU core operating at 300+ watts? As you can see, it is.
Those are your words:
When was the last time you guys saw 325W on a single die? It can't be done. Too much heat.
temp.gif

The TDP of this HIS card seems to be way above 350 watts.
Why would AMD have any problem making a cooler like that and ditching the usual blower design? How is the PCB a limiting factor? They made the 6990, which uses 400 watts.

Third: The maximum chart you posted does not reflect real-world use of this card.
Ask one of them bit gold prospectors.
Delve deeper into the mud...