kit guru 8970/50 in JUNE ???


boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
RS:
Please stop using peak values; they are irrelevant. What matters is average values. May I (again) refer to the analysis at 3DC:

http://www.3dcenter.org/artikel/launch-analyse-nvidia-geforce-gtx-660
http://www.3dcenter.org/artikel/eine-neubetrachtung-des-grafikkarten-stromverbrauchs

Multiple games, no peak values, card only. This is a good pool of data - what you're always quoting is not.
TDP is thermal design power. Thermal energy transfer is very slow compared to electrical energy transfer. A card may have a TDP of 250W, draw 250W on average, and still spike to 280W or so. Those 280W, however, say nothing about the actual average power use across relevant workloads, i.e. games.
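
To illustrate the distinction with a trivial sketch (the sample values are made up, purely to show how one momentary spike differs from the averaged draw):

```python
# Toy example: per-second power readings (watts) for a hypothetical card during a game run.
samples = [238, 245, 252, 249, 280, 244, 251, 247, 255, 241]

peak = max(samples)                      # the single highest spike
average = sum(samples) / len(samples)    # what the card actually averages over the run

print(f"peak:    {peak} W")
print(f"average: {average:.1f} W")
```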

When prices are cut by a manufacturer, rebates are given to the partners for prior purchases to balance stocks. Now, I'm not saying they make up for everything they've sold them, but overall the manufacturer eats the price drop.

Thanks for the info, didn't know that.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Peak power consumption is what even HardOCP uses. You can go ahead and compare averages: the 480 drew about 230-250W, still way more than the 460 did. It also changes little about the fact that the power consumption difference between the 460 and the 580 is still huge! GTX680 uses way more power than the 460/560Ti did, and that means there is less room for increasing performance than the 580 had over the 460/560Ti, because there is less power headroom to start with. Can the 780 be 40-50% faster than the 680? Maybe. 75-100% faster? I don't believe that for a second.

It also changes nothing about the fact that NV is not immune to the same 28nm problems AMD has faced, with the added complexity of a compute + gaming chip. If AMD is pushing > 210-220W on a 365mm^2 die in the 7970 GE at 1.05GHz, a GTX780 GK110 part at 520-600mm^2 with 2880 SPs and 240 TMUs at 1GHz will be Fermi 2.0. Do you think NV can just go 520-600mm^2 @ 1GHz and somehow be immune to the 28nm leakage and power consumption issues? Do they have a magical 28nm node now?

You guys are very optimistic about the 780 and giving no credit at all to the 8970. Even if the 8970 is 20% faster than the 7970 GE, why would NV launch a part 50-100% faster than the 8970, when never in the history of ATi vs. NV or AMD vs. NV has NV beaten the competing flagship by such an enormous amount? Interesting how this is suddenly reasonable, since GCN Tahiti XT is not a butchered architecture like the 2900XT R600 was. I remember the same wild stories from Team Red about the HD6970 being the next R300 for AMD, or the 768 SP GTX580 rumors, or the GTX680 blowing the 7970 out of the water, or a 7970 with 3072 SPs - none of them came true.

If NV can deliver a 1GHz, 2880 SP, 240 TMU, 384-bit card at 250W or less, I'll be super impressed. For now, I will remain sceptical, since the hyped specs for the 480/580/6970/7970 never came true. Each of those cards shipped with lower specs than people generally predicted on the forums. I believe this is because of human nature ---> the tendency to exaggerate GPU specs from one generation to the next, and it has happened nearly every generation in the last 5 years.

And as far as GTX780 beating 8970 by 50-100% on average......ya about that. That's worthy of a signature.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
So if HardOCP does it, it's the right way to do it?

I'm not saying what you're saying is wrong, I'm only saying that your way of looking at power consumption is flawed. What you can read at 3DC is much better for the reasons I stated.

Btw, no one expects GK110 to hit 1 GHz, that is just ridiculous. Think 800 MHz, that is way more realistic and will help a lot to keep voltages and power consumption down compared to GK110@1GHz. And I said 20% over the 8970 sounds reasonable. I would also point out that GK104 is quite bandwidth limited in many cases. I'm not sure but I think that increasing bandwidth would be a very energy-efficient way of improving performance.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
You are comparing GTX460 --> GTX580 and GTX680 --> GTX780 and ignoring this massive power consumption difference between 460 and 680.....

Not at all. Mid-tier GPU on a new node process, high-end GPU on a far more mature node process. The power difference between the 460 and 580 is ~90 watts. That would put them a bit over 250, but not much. You keep wanting to compare mature node to mature node; that isn't what we are looking at. This is very different.

In other words, GTX680 is nothing like GTX460 was to 480/580. NV is already running up against 190W of power use on the 680. With 460 they had 110-150W of room to play with before getting to 580 / 480 power consumption.

The 480 was well over 250 watts; under certain tests it was hitting close to 150 watts more than the 460. This 250 watt barrier is.... interesting. It isn't like nVidia has hesitated to blow it out of the water in the past.

Your entire projection that 100% is not unreasonable is falling apart, and fast, unless NV goes to 275-300W.

You think that is unrealistic?

Never in the entire history of ATI vs. NV or AMD vs. NV did NV have a 50-100% faster high-end flagship.

That's very wrong unless you are very young. Want to compare the RageIIc to the Riva128? The difference was probably closer to 1000% than 100% (that gap held for years until the Rage128 launched). In recent times it hasn't happened - but then, in recent times a mid-tier nV GPU didn't show up and beat AMD's highest-end part either (yes, I know AMD has responded now; I'm talking about at launch).

When did NV's GPU beat AMD's part by 50-100% on average? Ya, that also never happened.

To get to more recent times, both on 90nm-

http://www.anandtech.com/show/2116/24

The 2900XT used 80nm; it wasn't on the same fabrication process, and it did come almost six months later, so that shouldn't be surprising. You speak as if what I'm talking about is something that has never come close to happening before, when it really wasn't even that long ago. What's more, that wasn't with nV pushing the power envelope: on the same node, a monolithic nV GPU demolishing the high-end AMD part on the same build process.

Ummm...no. AMD launched 3-6 months earlier, raised prices and didn't lose market share = sounds like a much better generation for the firm than 4800/5800/6900 series were.

I didn't say it was bad from a business perspective. It was *terrible* from an engineering perspective for both AMD and nV. If you honestly look at the situation, you can't really come to any other conclusion. From a business perspective they were *both* saved because the other one screwed up so bad. The problem moving forward is that nV never brought out their big guns this round.

This excess fat Tahiti XT has will translate to wasteful transistors that GK110 has to deal with too. That means increased power consumption on GK110

Which is why I'm comparing the 780 to the 580.

You are telling me that GK110 was easily manufacturable this year at 520-600mm^2 die at 1ghz? Yup, NV has alien 28nm technology stashed just for them.

nVidia screwed up this generation
nVidia screwed up this generation
nVidia screwed up this generation
nVidia screwed up this generation

When you point out the failings of both companies and someone rabidly defends one of them, it's hard to view them as impartial. GK110 was too ambitious for a new 28nm node; they would have ended up with an insanely expensive and extremely poorly yielding part that would further have significantly reduced the very limited availability of wafers. From a business perspective they made the right choice and in fact did far better than they would have if they had shipped all their parts (GK110 consumer parts won't have the margins the 680 does).
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I'm not saying what you're saying is wrong, I'm only saying that your way of looking at power consumption is flawed. What you can read at 3DC is much better for the reasons I stated.

Even if you use averages, GTX480/580 vs. 460/560Ti represents more power consumption headroom than 680 vs. 780 has to begin with. The 680 does not have the power consumption of a mid-range part like the 460/560Ti did. Also, NV and AMD have to account for peak power consumption.

Btw, no one expects GK110 to hit 1 GHz, that is just ridiculous. Think 800 MHz, that is way more realistic and will help a lot to keep voltages and power consumption down compared to GK110@1GHz.

I agree with you, but there have been at least 2-3 posters now who think GTX780 will be a full-blown 1GHz part 75-100% faster than the 680.

And I said 20% over the 8970 sounds reasonable. I would also point out that GK104 is quite bandwidth limited in many cases.

Same. 15-20%, believable. 50-100% faster than the 8970 on average? No chance, unless the 8970 is only 5% faster than a 7970 GE.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
"All that performance requires a certain amount of power, and Tesla K20 requires a single 8-pin and 6-pin power for a grand total of 300 Watts." ~ BSN

Ok, so who now thinks NV will actually throw performance/watt entirely out the window and launch a 300W GeForce card?

GTX480 = 90-94°C at load, jet-engine cooler, 250W power consumption in games on average.

GTX780 = up to 300W, no problem. Sounds believable: NV going back to a 275-300W part after being criticized by gamers who wanted them to focus on performance/watt.

Ben, you linked 8800GTX vs. 7950/X1950XT series. There is no 2900XT card in those benchmarks. First of all 8800GTX is less than 50% faster on average than 2900XT was. Second of all, the 2900XT had broken AA, whereas the 7970 has superior AA performance to the 680; the 2900XT had terrible performance per clock, whereas the 7970 has better performance per clock than the 680.

The HD7970 GE is already 5-12% faster than the 680 at 1080P/1600P. Out of the gate, NV needs to gain 5-12% more just to pull even with the 7970 GE, then gain all the performance advantage the 8970 will have over the 7970 GE, and then on top of that gain another 50-100%, you are saying. Are you serious?

Here is another flaw in your logic: the 7970 is faster per clock than the 680 is. If a 1-1.1GHz 8970 grows power from 210W to 230-240W, the GTX780 needs to extract 50-100% more performance from just 60-70 remaining watts before it runs into the 300W wall. How in the world is that possible? The HD7970 and 680 use about the same power, but the 680 has no dynamic scheduler, no 384-bit bus, no 3GB of VRAM, no high-DP transistor count. Adding all of those will cause a huge power penalty, like they did on the 7970.
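
Rough back-of-the-envelope math for that argument (every number below is a rumored or quoted figure from this thread, not a measurement, and perf/W obviously doesn't scale linearly in reality):

```python
# All figures are the rumored/quoted numbers from this thread, not measured data.
gtx680_perf = 1.00                        # baseline
hd7970ge_perf = gtx680_perf * 1.10        # ~5-12% faster than the 680; take ~10%
hd8970_perf = hd7970ge_perf * 1.20        # assume the 8970 ends up ~20% over the 7970 GE

for uplift in (1.5, 2.0):                 # the claimed 50-100% over the 8970
    print(f"+{(uplift - 1) * 100:.0f}% over 8970 -> {hd8970_perf * uplift:.2f}x a GTX 680")

# Power side of the argument: ~190W (680 in games) against a ~300W ceiling.
gtx680_power, ceiling = 190.0, 300.0
headroom = ceiling - gtx680_power
print(f"extra board power available: ~{headroom:.0f} W "
      f"(~{headroom / gtx680_power * 100:.0f}% more than the 680)")
```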
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Ok, so who now thinks NV will actually throw the entire performance/watt out the window and launch a 300W GeForce card?

I think it is absolutely within the realm of possibility. I'm not saying it will happen, but would it shock me? Not in the slightest.

BTW, I can see the 780 being only marginally faster than the 8970 as a possibility too; everyone just seems to focus on the upper limit of what I think is within the realm of reason. If the 8970 comes out much better than the 7970 did, and the 780 runs into 480-style problems, then we could be looking at a relatively good matchup. Ideally, I want both companies to do well; at this point I'm rather hoping AMD does a lot better than what this round showed, as we need some real competition across the board.

Hmm, I'd better expand on that. I know from a marketplace perspective AMD is quite competitive, but from an engineering standpoint they don't look good this generation. nV's mid-range-class GPU is running with their high-end part, and that is a larger disparity than we had when looking at NV30 vs R300.
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
If NV releases a 300W GeForce now, it is safe to assume it will beat the 680 by 50-60% on average, and the lead could be even larger depending on the situation. But I highly doubt K20 will debut as a GeForce in the same incarnation.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Missed your edit RS-

Ben, you linked 8800GTX vs. 7950/X1950XT series. There is no 2900XT card in those benchmarks. First of all 8800GTX is less than 50% faster on average than 2900XT was.

The 2900XT wasn't on the same build process as the 8800GTX; the 1950XT was. I think you aren't remembering what the market looked like at the time.

Second of all, the 2900XT had broken AA, whereas the 7970 has superior AA performance to the 680; the 2900XT had terrible performance per clock, whereas the 7970 has better performance per clock than the 680.

You are comparing high end parts to other high end parts. That isn't what this generation has been so far. Step outside the consumer mindset for a minute here :)

AMD's high-end part has been going up against nV's mid range. I know as a consumer that means absolutely nothing, but when looking at it from an engineering perspective that is of *staggering* importance. If the 5870 had barely edged out the 460, would you not agree AMD would have had major issues? That *is* what we are seeing right now.

The HD7970 and 680 use about the same power, but the 680 has no dynamic scheduler, no 384-bit bus, no 3GB of VRAM, no high-DP transistor count. Adding all of those will cause a huge power penalty, like they did on the 7970.

7.1 billion transistors and 300 watts for K20. You talk like what I say is insane, but those numbers actually fall in line with nVidia's already announced specifications for their Tesla parts. A general FYI: Tesla parts have traditionally used *less* power than their GeForce counterparts. The question is whether AMD can fix the performance of their architecture enough to be competitive.
 

biostud

Lifer
Feb 27, 2003
20,128
7,250
136
Couldn't nvidia just increase the TMUs, ROPs and SPs of the GK104 by 50% and run a 384-bit memory controller? Wouldn't that just increase the transistor count ~50-60%, making it a ~5.5B chip instead of the 7.1B K20 chip?
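
For what it's worth, GK104 is a 3.54B-transistor chip, so the rough arithmetic behind that estimate looks like this (the 50-60% growth factor is the assumption in the question, not a known spec):

```python
# Rough scaling check: GK104's published transistor count, scaled by the assumed 50-60%.
gk104_transistors = 3.54e9            # GK104 (GTX 680)
gk110_transistors = 7.1e9             # K20 figure quoted in this thread

for growth in (1.50, 1.60):
    scaled = gk104_transistors * growth
    print(f"+{(growth - 1) * 100:.0f}% units -> ~{scaled / 1e9:.2f}B transistors "
          f"(vs. {gk110_transistors / 1e9:.1f}B for GK110)")
```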
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
Even if you use averages, GTX480/580 vs. 460/560Ti represents more power consumption headroom than 680 vs. 780 has to begin with. The 680 does not have the power consumption of a mid-range part like the 460/560Ti did. Also, NV and AMD have to account for peak power consumption.

I was not talking specifics for that extrapolation; I was speaking generally. It's just common sense to use average numbers and measurements that only include the card itself, nothing else. How do NV and AMD have to account for peak power? How is that relevant to the general discussion about how much a certain card uses in a real-world scenario? Both companies have protection circuits now that safeguard against excessively high spikes.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Couldn't nvidia just increase the TMUs, ROPs and SPs of the GK104 by 50% and run a 384-bit memory controller? Wouldn't that just increase the transistor count ~50-60%, making it a ~5.5B chip instead of the 7.1B K20 chip?

Yes, but it is an economies-of-scale issue. By using the same design for both the Tesla and GeForce lines you minimize your R&D costs, which are an enormous portion of your operating overhead. It is also the reason why only AMD and nVidia are alive in the workstation market in any serious fashion. Because they can share R&D costs with their consumer lines, the specialty makers couldn't hope to match either their price point or their performance levels.

You can look at it as Tesla paying for R&D and making GeForce a huge profit generator, or you can look at it the way most people do: the GeForce line does OK, while Tesla makes nVidia truckloads of cash (tens of thousands of parts sold in the $3K-$5K range that, on a cost basis, are close to the same as the $500 consumer graphics cards we buy).

If nV were to split it into two different platforms, the increase in R&D would either price Tesla much higher than they would like, or reduce the overall performance they could offer due to dilution of engineering talent.

AMD does things in the same fashion; honestly, it is just smart business. Bringing economies of scale into market segments that normally couldn't possibly benefit from them sets you up for big margins and happy customers (normally those don't go hand in hand).
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Couldn't nvidia just increase the TMUs, ROPs and SPs of the GK104 by 50% and run a 384-bit memory controller? Wouldn't that just increase the transistor count ~50-60%, making it a ~5.5B chip instead of the 7.1B K20 chip?

GK110 is just a bigger GK104 with a few more tweaks and DP compute units.
It makes no sense for them to design another GeForce-only part when they have GK110.
 

biostud

Lifer
Feb 27, 2003
20,128
7,250
136
Yes, but it is an economies-of-scale issue. By using the same design for both the Tesla and GeForce lines you minimize your R&D costs, which are an enormous portion of your operating overhead. It is also the reason why only AMD and nVidia are alive in the workstation market in any serious fashion. Because they can share R&D costs with their consumer lines, the specialty makers couldn't hope to match either their price point or their performance levels.

You can look at it as Tesla paying for R&D and making GeForce a huge profit generator, or you can look at it the way most people do: the GeForce line does OK, while Tesla makes nVidia truckloads of cash (tens of thousands of parts sold in the $3K-$5K range that, on a cost basis, are close to the same as the $500 consumer graphics cards we buy).

But I can't imagine that using current tech and scaling it would cost much in R&D. Of course they will need to develop new technology over time, but they could then choose to focus on compute power in Tesla cards and on graphics power in GTX cards (if it makes sense, e.g. by saving a lot of die space).
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
But I can't imagine that using current tech and scaling it would cost much in R&D. Of course they will need to develop new technology over time, but they could then choose to focus on compute power in Tesla cards and on graphics power in GTX cards (if it makes sense, e.g. by saving a lot of die space).

To some extent yes, but there are other factors involved (R&D is simply the largest, the way things are done currently). Say nVidia orders a wafer of K20 parts and 75% of them have one bad cluster. If it is a compute-only solution, they have yields of 25%. By sharing the same die, they have a healthy batch of 770s to sell for ~$400 and another 25% of the wafer they can sell for ~$5K.
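
As a toy model of that wafer math (die count per wafer is a made-up assumption; only the 75%/25% split and the rough ~$400 / ~$5K price points come from the post above):

```python
# Toy wafer-economics model for the salvage-die argument above. Illustrative numbers only.
dies_per_wafer = 100                  # assumption, not a real figure
full_die_fraction = 0.25              # fully functional -> Tesla-class part at ~$5K
salvage_fraction = 0.75               # one bad cluster -> cut-down GeForce at ~$400

tesla_price, geforce_price = 5000, 400

compute_only = dies_per_wafer * full_die_fraction * tesla_price
shared_die = compute_only + dies_per_wafer * salvage_fraction * geforce_price

print(f"compute-only design: ${compute_only:,.0f} per wafer (salvage dies scrapped)")
print(f"shared die:          ${shared_die:,.0f} per wafer (salvage sold as GeForce)")
```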
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
But I can't imagine that using current tech and scaling it would cost much in R&D. Of course they will need to develop new technology over time, but they could then choose to focus on compute power in Tesla cards and on graphics power in GTX cards (if it makes sense, e.g. by saving a lot of die space).

You are talking about designing another chip. It's actually cheaper to just use the GK110 in both markets.
 

Olikan

Platinum Member
Sep 23, 2011
2,023
275
126
At first I hated AMD raising prices and was even very vocal against the HD7900 series; then I realized why AMD did it - because for 3 generations in a row, when they offered insane price/performance, NV users didn't switch anyway. Welcome to the new strategy then - no more price/performance unless the competitor forces your hand. This is why I think AMD will launch first again and go for the $500 range with the 8970. If the GTX780 smokes the 8970, AMD can always revert back to price/performance.

Well, AMD has to realize that they can't change the market that much in just 3 generations...

...well, for people who care about perf/money, one generation is enough (and IMO that seems to be ~65% of the whole market).

The other 35% need more than that; they need a better overall product, and that means better features too.

Then you realize that most of those features only came with the 7000 series: HD3D came last year, DX11 SSAA this year, tessellation and compute this year...
...and there are still some features missing.

Sure, AMD has its own features, but they are the underdog...
they need to copy nvidia, and copy better.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Way to completely dodge every question that was posed to you. :thumbsup: BTW, here's something you may find shocking: you are the market, same as me. Same as everyone. The market is composed of all of us. That is who decides: you, me, everyone. You are not excluded; the market is not some abstract entity. So stop parroting your nonsensical "let the market decide" already.

It isn't nonsensical at all because the market does indeed decide, sorry!
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Also, it hasn't been answered why AMD should go back to $299/369 for 8950/8970 voluntarily?

Imho,

Why would they do this? Who is saying this? Your argument for the new strategy was that AMD didn't make any money or garner enough market share using the sweet-spot strategy, and that obviously wasn't the case based on many quarterly results - AMD actually retook the overall discrete share from nVidia in 2010.

Personally, I have no real problem with price points - if a product is priced too high, the market usually adjusts. What I ideally desire is strong competition, so there may be more value and innovation, because without it their aggressive predator teeth may show.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
I really think we will have a Big Kepler GTX 780 before the end of the year. In that case I think the January rumors of an 8970 release make more sense.
 

Granseth

Senior member
May 6, 2009
258
0
71
Originally Posted by RussianSensation:

There is no mention at all what Kepler compute parts were shipped, K10 or K20. K10 has been shipping and we know that. When K10 officially launched, it was on NV's website --> See link. Where is K20? It hasn't officially launched yet. Even if from that article they did get K20s, the article says they only got 32 of them for early development testing. 32 but 14,592 still not delivered. Again, that article does not explicitly state that they installed 32 K20 Tesla parts.

Expands on the info:
http://www.hpcwire.com/hpcwire/2012-...rcomputer.html

It would be great if you looked at your source's sources:
http://blogs.knoxnews.com/munger/2012/09/the-big-computing-change-is-ta.html
because the original source only names the chip as Kepler, not K20.

Looks like hpcwire.com are just guessing.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
You are talking about designing another chip. It's actually cheaper to just use the GK110 in both markets.

I don't disagree, but the real comparison may be not how the GK110 compares to the GK104 but how the GK110 may compare to the potential GK114 SKU. What will the GK114, the refresh of the GK104, be?
 