kit guru 8970/50 in JUNE ???


SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Keep in mind, despite revenue falling, AMD made more $ using the new strategy:
Q1 2011 Revenue = 413M
Q1 2012 Revenue = 382M (~7.5% drop)

vs.

Q1 2011 Operating Income = 19M
Q1 2012 Operating Income = 34M (79% higher)
http://www.anandtech.com/show/5764/amd-q112-earnings-report-158b-revenue-590m-net-loss

^ I can look for Q2 and Q3 data if you want.
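
For anyone who wants to sanity-check the math on those figures, here's a quick sketch in Python (just re-running the percentages from the numbers quoted above):

```python
# Year-over-year change on the graphics-segment figures quoted above (in $M).
q1_2011_rev, q1_2012_rev = 413, 382
q1_2011_op, q1_2012_op = 19, 34

rev_change = (q1_2012_rev - q1_2011_rev) / q1_2011_rev * 100
op_change = (q1_2012_op - q1_2011_op) / q1_2011_op * 100

print(f"Revenue change: {rev_change:.1f}%")            # ~ -7.5%
print(f"Operating income change: {op_change:+.1f}%")   # ~ +78.9%
```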

The old strategy:

Q4 2010 -- 424 million revenue -- operating income for the graphics segment was $68 million.

http://seekingalpha.com/article/247...usses-q4-2010-earnings-call-transcript?page=3

Q2 2010 -- 440 million revenue -- operating income for the graphics segment was $33 million.

http://seekingalpha.com/article/214...s-inc-q2-2010-earnings-conference-call?page=3

Q1 2010 -- 409 million revenue -- operating income for the graphics segment was $47 million.

http://seekingalpha.com/article/199...s-inc-q1-2010-earnings-call-transcript?page=3

Q4 2009 -- 427 million revenue -- operating income for the graphics segment was $53 million.

http://seekingalpha.com/article/183...s-inc-q4-2009-earnings-call-transcript?page=3

It's not like they haven't made profits.
 

iCyborg

Golden Member
Aug 8, 2008
1,385
92
91
Considering AMD had a much more complete 28nm family, with no vocal problems about yields or constraints, I wouldn't consider going down only 4 percent a solid result; they may have felt disappointment and pressure to improve revenue and sales through price adjustments, imho.
I still cannot see how one can reasonably expect that the only major competition introducing two big-time products would have next to 0 impact on revenue. Even if nV didn't introduce anything, I would expect the pace to come down a bit - e.g. I don't have numbers, but I could bet my left hand that iPhone sales in the first 3 months upon launch would be better than in the next 3 months.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Possibly, but nVidia did offer strong guidance for the next quarter -- we'll see how it plays out!
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
It's not like they haven't made profits.

Yes and 28nm wafer prices didn't go up 20% in 2012? That's an interesting detail you missed. You just assumed AMD could have easily afforded to maintain or have a chance to grow profits by keeping HD4800/5800/6900 price levels.

But here is the best part: you expect AMD cards to cost less. Ok, now tell me why the entire HD7000 series line should magically cost less than the GTX600 line after <$300 Kepler products were 6+ months late and after their arrival we have this:

http://www.techpowerup.com/reviews/Powercolor/HD_7870_PCS_Plus_Vortex_II/27.html

HD7970 GE > GTX680
HD7970 > GTX670
HD7950 V2 > GTX660Ti
HD7870 > GTX660
HD7850 = no competitor
HD7770 > GTX650
HD7750 = no competitor

Pretty interesting how you think AMD's line should have been at $299 for HD7950 and $369 for HD7970 from the start but GTX670 / 680 SKUs are perfectly fine at $399/499. That's a good one. Sorry, not everyone wants to pay $100+ for PhysX or believes AMD drivers don't even work half the time or compares the quality of drivers by looking at NV's control panel vs. how CCC looks. :thumbsup:

Did you buy

1) $199/$299 4850/4870?
2) $269/369 HD5850/5870?
3) $299/369 HD6950/6970?

Didn't think so. When AMD offered superior price/performance for 3 generations, you didn't buy. When AMD offered 2 unlocked 6950s for almost the same price as a single 580, you didn't buy. When AMD currently offers superior performance this generation for less, you still don't recommend AMD cards and keep implying they are overpriced since you think the premium strategy is not justified. When people are vocally saying you can make $ with AMD cards, NV loyalists still don't buy. So say again why should AMD bend over and drop prices to $299/$369 again? With bitcoin mining, HD7970 series is free this generation in North America. You know this but you still didn't buy one. Please do explain why AMD should go back to $299/369 price levels if NV's GTX700 series won't clobber them and deservingly force them to drop prices. I am curious.

If HD5850/5870 at $269/369 having the market all to themselves for 6 months didn't get NV users to switch, that's a sure sign the price/performance strategy was a total failure for the firm. BMW and Audi do not price their cars 30-50% less than Mercedes. AMD has managed to erode the ATI graphics brand name to budget territory. It's about time they throw the small die strategy out the window and go all in on flagship GPUs. If you don't think a similarly performing AMD card is worth $450+, that's your personal choice, but I think most gamers are a lot happier that there are 2 high-end flagships to choose from instead of just 1. You seem to have a huge problem with AMD going back to $500+ price levels but nothing against NV for selling $500-600 flagships for 10+ years. It's not like you are going to buy an AMD card anyway.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Doesn't matter what I buy; it matters what the market as a whole buys overall.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
90% of the market buys LCD and LED TVs despite them being inferior in image quality to plasma. Lackluster driver's cars such as the Honda Civic and Accord or the Toyota Camry and Corolla are among the best selling in their class. Does that make them good driver's cars? What's your point, that the most popular products are better? Often the opposite is true -- the average market participant is easily swayed by marketing. I tend to think the average market buyer is actually very uninformed and, more often than not, makes purchasing decisions based on emotions, not logic or factual data.

Secondly, you are saying HD7000 series was overpriced but so was GTX600 series and you didn't mention that. Since GT640/GTX650/660/660Ti/GTX670/680 cannot beat HD7750/7770/7870/7950/7970/7970GE convincingly, if you think AMD's line was overpriced, so is 600 series.

Going back to HD8950/8970, you are saying AMD would be better off going back to $299/369 price levels?
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
Doesn't matter what I buy; it matters what the market as a whole buys overall.
Way to completely dodge every question that was posed to you. :thumbsup: BTW, here's something you may find shocking: you are the market, same as me, same as everyone. The market is composed of all of us. That is who decides: you, me, everyone. You are not excluded; the market is not some abstract entity. So stop parroting your nonsensical "let the market decide" already.
 

SolMiester

Diamond Member
Dec 19, 2004
5,330
17
76
HD7970 GE > GTX680
HD7970 > GTX670
HD7950 V2 > GTX660Ti
HD7870 > GTX660
HD7850 = no competitor
HD7770 > GTX650
HD7750 = no competitor

Correct me if I'm wrong, but the lineup didn't look like that at all when Kepler was released. AMD dropped their prices first, then the drivers got better, before it looked like the above....
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Yup, you are 100% correct, which means from January to Kepler launch HD7950/7970 competed with themselves, and after June 2012, AMD once again had better price/performance and single-GPU performance.

HD7750-7870 went uncontested for 6+ months.

So, AMD's 7950/7970 were overpriced for 1 quarter. What about the 2.5 months before Kepler, when the 580 was $430-450, and the 4 months since June 2012?

No one denies that the 670/680 were the better value for 1 quarter. The problem is a year has 12 months in it, not 3. Generally we would expect price wars and new SKUs as competition heats up. If you buy a $500 GPU and in 6 months something faster comes out, well, that's how the GPU market worked for years and years until ATI went to the small die strategy and purposely buried their performance, since they thought the small die + price/performance strategy was better. Here is another way to look at it: how much are HD7950/7970 going for now, 9 months after launch? $280-350 and $380-450.

Had AMD launched them at $299/369, the company would have lost ALL the extra profits they made by starting at $450/550 and slowly dropping prices as competition arrived. So why exactly was it better for AMD (not for the consumer) to have launched the 7950/7970 at $299/369 in January 2012, knowing how little of an increase NV brought this generation over the 580? That's what I want to know and it hasn't been answered. Even back then GTX680 was just 7-9% faster overall and cost $500. AMD would have missed a lot of profits if they had launched HD7970 for $369 in January. Why would you do that when, for 3 generations in a row, this strategy did not increase market share for you?

Also, it hasn't been answered why AMD should voluntarily go back to $299/369 for the 8950/8970.

Why was it perfectly fine that ATI launched the X850Pro / X850XT / X850 XT PE for $399 / $499 / $549, but if AMD does it, it's totally unacceptable and ruining mind-share? Remember, the $399 6800GT was better than the $399 X850Pro.

[Image: X850 Pro vs. 6800 GT benchmark comparison]


When did NV use the same strategy? Jeez, remember GTX280/260? It launched for $649/$399 on June 16, 2008. Market leader -> High prices. Sound familiar?
http://www1.anandtech.com/show/2549

HD4870 for $299 > $399 GTX260, and it took a GTX260 216 and a price drop to $279 to match the 4870. Sound familiar? Isn't that similar to what happened with HD7950 --> HD7950 V2 vs. 660Ti, or HD7970 --> HD7970 GE vs. 680?
http://www1.anandtech.com/show/2619

Funny how history repeats itself, but when NV used the first-mover-advantage premium pricing strategy, it was perfectly fine, and NV was forced to drop prices months afterward.....but when AMD did it, OMG! They are ruining mind-share and ripping everybody off!

By April 1, 2009 (9.5 months later), HD4890 = GTX280 for $269.
http://www.gpureview.com/show_cards.php?card1=567&card2=608

That's a $380 erosion of value for a GTX280 in less than 10 months. Should NV have launched GTX280 at $269 then? I don't understand this thought process: let's launch the world's fastest flagship card for way less $ than the competitor? When you have the market leader / first mover advantage, you deliver faster technology and demand a higher price. This is exactly what NV did with GTX260/280, and the subsequent price drops as competition arrived are the exact expected outcome of this strategy. The fact that AMD didn't do this with HD5850/5870 just shows their management incompetence and why RR got rid of that entire mgmt team.

Why would AMD have launched HD7970 for $369 when GTX580 was going for $430-450 and HD7970 was at least 20% faster, and 50% faster with overclocking? NV being late by almost 1 quarter with the GTX670/680 series allowed AMD to price the 7970 at $550, since generally speaking people who want the fastest single-GPU have had no problem paying these prices - just ask ATI and NV, who launched their flagship cards for $500+ for years and years. This is why I am saying that HD5870 launching at $369 was an outlier, not the norm. Normally such a card under ATI's old mgmt or NV's current mgmt would have launched at $499-549, easy.

At first I hated AMD raising prices and was even very vocal against the HD7900 series; then I realized why AMD did it - because for 3 generations in a row when they offered insane price/performance, NV users didn't switch anyway. Welcome to the new strategy then - no more price/performance unless the competitor forces your hand. This is why I think AMD will launch first again and go for the $500 range with the 8970. If GTX780 smokes the 8970, AMD could always revert back to price/performance.
 

SolMiester

Diamond Member
Dec 19, 2004
5,330
17
76
I thought the 78xx series was the equal of the 580 in price and performance? What do you mean, uncontested?
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
My point is, these volumes are too low to imply that NV can launch GK110 by January or even February 2013. I seriously doubt that NV will be able to launch GK110 before 8970 unless AMD flops or NV pulls a surprise. We don't even know if GK110 will be a full 15 SMX part or a cut-down 12-13 SMX part. If it's the latter, NV would need to build up a bunch of failed GK110 chips and that would take them 3-4 months of K20 production I bet. If it's a full 15 SMX part, the issue becomes yields and fulfilling all the pre-orders for K20 parts first.

You note that if nV decides to release a cut-down GK110 it would take them three to four months of production on K20 to build up that level of inventory. You acknowledge that GK110 is in production now (actually, already shipping), and you think a January/February launch isn't reasonable?

I'm not saying when it will or won't launch, but there isn't any way everything you are saying can be accurate.

On the AMD counterparts, I think they have to wait for nVidia in terms of where they are going to place themselves. So far nV has only released a mid-tier GPU in terms of die size this generation; no one outside of nV knows where it is going to stack up. If AMD comes out with a high end part for $500 and nV launches a week later with double the performance, the damage AMD would take long term would be immense. I think until AMD gets a ballpark idea of where nV is going to land there is some wisdom in holding off on launching as early as possible, particularly if the current lineup is maintaining healthy margins.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I thought the 78xx series was the equal of the 580 in price and performance? What do you mean, uncontested?

Definitely not. HD7870 was $349; GTX580 was still $380-400 when the 7870 launched. When the 7850/7870 launched, the GTX560Ti 448 1.2ghz / GTX570 / 580 became instantly irrelevant. With the AMD cards you got better overclocking, better performance/watt and reasonable prices vs. the 40nm Fermis. Not sure how this is news, as the HD7850 even overclocked to 580 speeds for just $250. The main reason the HD7850/7870 were overpriced in the eyes of many AMD users was HD5850/6950 overclocking/unlocking. I mean, an HD6950 @ 6970 speeds was $250 a year before the 7850 launched, making the 7850 seem like a rather poor value. However, as AMD quickly stopped 6950 unlocking and put the 6950/6970 line to EOL, the 7850/7870 found themselves competing against 40nm Fermis.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
If AMD comes out with a high end part for $500 and nV launches a week later with double the performance, the damage AMD would take long term would be immense. I think until AMD gets a ballpark idea of where nV is going to land there is some wisdom in holding off on launching as early as possible, particularly if the current lineup is maintaining healthy margins.

Double performance? I wish ;)
I predict the GTX780 would be at most (on average) 20% faster than the 8970. Nvidia and AMD roughly know where the competing product will land, and making a chip "too fast" only costs money and robs you of the possibility of a refresh (that raises MSRP again).
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
You note that if nV decides to release a cut-down GK110 it would take them three to four months of production on K20 to build up that level of inventory. You acknowledge that GK110 is in production now (actually, already shipping), and you think a January/February launch isn't reasonable?
I'm not saying when it will or won't launch, but there isn't any way everything you are saying can be accurate.

I don't work for NV, but if volume production on K20 won't kick in until November-December 2012 based on their management's discussion, 3-4 months from that point would be around March 2013 to me. Right now, a very limited number of K20 parts are being sampled for "development testing" based on the Oak Ridge article linked in this thread. Therefore, it appears to me that K20 production indeed won't go mass volume until the November-December 2012 time-frame. This also coincides with Oak Ridge planning to have the full 14,500+ K20 chips ready by March 2013.

I could be 100% wrong, but that's my thought process.

On the AMD counterparts, I think they have to wait for nVidia in terms of where they are going to place themselves. So far nV has only released a mid-tier GPU in terms of die size this generation; no one outside of nV knows where it is going to stack up. If AMD comes out with a high end part for $500 and nV launches a week later with double the performance, the damage AMD would take long term would be immense.

If NV has a part that's 50-100% faster than HD8970, they could launch it for $800-1000. :p

How do you think ATI and NV operated in the past? ATI didn't just wait for NV every time to see what they have. That's a risk in business you have to take when you launch first and 1 firm has to take it. What if GK110 is only 10% faster than 8970? Then AMD would have missed 2-3 months of sales by not launching earlier. What if NV never launches the full 15 SMX GK110?

It goes both ways. If AMD waits, they could lose the market-leading position should NV not beat them by a lot. If AMD launches at $500 and GTX780 happens to be way faster for $500, AMD would drop prices. The GTX280 fell from $650 to $500 in 1 month.

Let's think about it realistically. NV doesn't have magical 28nm technology. If they blow up the die size for a 520-600mm^2 15 SMX part and clock the GPU at 1Ghz, don't you think they'll exceed 250W TDP? They aren't free from these power consumption constraints either. GK110 has added 'fat' such as the dynamic scheduler and double precision, and that will eat quickly into the die size. That means the 500mm^2+ die isn't just going to be purely TMUs, ROPs and CUDA cores. Realistically speaking, if AMD goes to 410-420mm^2, can GTX780 really beat it by 50-100% on the same 28nm node? In theory I agree fully with your logic that if GK110 beats the 8970 by a lot, AMD would be embarrassed, but will NV actually be able to launch a 2880 SP, 240 TMU, 1 Ghz GTX780? AMD probably has an idea of what the power consumption on 28nm is for a 410-420mm^2 die. I am sure they have some ballpark estimate of what GPU clock speeds a 500-600mm^2 28nm die could get away with and stay under 250W of power.

GTX680 already uses 180-190W of power at peak. There isn't a lot of room to increase performance by more than 50% and still stay under 250W TDP on 28nm I think.

HD7970 GE is about 5% faster at 1080P and 9-12% faster at 1600P than the 680.
http://www.computerbase.de/artikel/grafikkarten/2012/test-amd-radeon-hd-7950-mit-925-mhz/3/

If HD8970 is 25% faster than 7970 GE => it would be 31% faster than 680 at 1080P, 40% faster than 680 at 1600P. If GK110 is 20% faster than this version of the 8970, it would need to be 57% faster than 680 at 1080P and 68% faster than 680 at 1600P. Those are humongous performance increases in just 9 months from the time the 680 launched, if you think GK110 is ready to be launched in January 2013.
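
Here's that compounding math spelled out as a quick Python sketch -- the 5%/12% deltas are from the computerbase numbers above, while the 25% and 20% uplifts are just the hypothetical assumptions in this post:

```python
# All numbers are relative to GTX680 = 1.0; the uplifts are hypothetical assumptions.
hd7970ge_1080p = 1.05          # 7970 GE ~5% faster than 680 at 1080P
hd7970ge_1600p = 1.12          # 7970 GE ~12% faster than 680 at 1600P

hd8970_1080p = hd7970ge_1080p * 1.25   # assume HD8970 is 25% faster than 7970 GE
hd8970_1600p = hd7970ge_1600p * 1.25

gk110_1080p = hd8970_1080p * 1.20      # assume GK110 is 20% faster than that 8970
gk110_1600p = hd8970_1600p * 1.20

print(f"8970 vs 680:  +{(hd8970_1080p - 1) * 100:.1f}% (1080P), +{(hd8970_1600p - 1) * 100:.1f}% (1600P)")
print(f"GK110 vs 680: +{(gk110_1080p - 1) * 100:.1f}% (1080P), +{(gk110_1600p - 1) * 100:.1f}% (1600P)")
# -> ~31%/40% for the 8970 and ~57%/68% for GK110 (the figures quoted above)
```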

Also, you can see based on this simple calculation that GTX780 being 50-100% faster than an 8970 is just not even remotely realistic. What are your thoughts on this?
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
Btw if you look at the specs from the table linked in the OP, you can only come to the conclusion it has to be fake. 2133 SPs for the 8950? What kind of number is that? Mars Pro has 1.7 bn transistors, but Mars XT has 2.0 bn??? 32 ROPs on 8930 and 8950 but 48 ROPs on 8970 - when has that happened within a sub-family before?

I still think that the 8970/8950 specs are all only a re-hash of what 3DCenter predicted three months ago:

http://www.3dcenter.org/news/tape-out-von-amds-sea-islands-launch-im-ersten-quartal-2013

Venus XT:
410 mm2, 32/48 ROPs, 160 TMUs, 2560 SP, 5.1 bn transistors? Sound familiar?
This is a 1:1 copy of an educated prognosis, nothing more.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
All these specs are rumors until days before the launch date anyway. To me the most interesting / new information in the entire article was:

"We&#8217;ve learned that, right now, partners are planning to show off the Radeon HD 8970 XT at CeBIT in March &#8211; with a full launch at Computex in June."

Notice how KitGuru says "we've learned," as if someone passed this information to them. Since they are posting with this much confidence, they probably think this source is credible (or they are just generating page hits).

vs.

Videocards.com article today:

"I managed to confirm that AMD Radeon HD 8970 will be launched with 2560 Stream Processors, that&#8217;s unless AMD would change plans for this card. What I also know is that AMD is currently working on board layouts. First engineering samples will be available in November/December. Current release date is February 2013."

These 2 sources are 100% confident that they received accurate information from their inside sources and yet both have totally different launch dates.

February 2013 vs. June 2013. Big difference.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
Maybe the release date mentioned by Videocards.com is a paper launch like with Tahiti in December 2011 and the full computex launch may refer to good availability with all partner models out as well?

I also think it is unlikely that they launch so late. If you have something ready and in quantities, you launch it so you can raise the MSRP again. Unless...you still have much stock of the previous gen and don't want to anger your AIBs by making their stock obsolete too quickly.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Paper launch 4-5 months before actual launch though? When Tahiti XT paper launched, it was Dec 22, with full availability of 7970 on Jan 9. That's less than 3 weeks. At least one of these websites got fed incorrect information.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
GTX680 already uses 180-190W of power at peak. There isn't a lot of room to increase performance by more than 50% and still stay under 250W TDP on 28nm I think.

You think nV won't shatter the 250W barrier if they think it benefits them? If we compare the 460 to the 580, they even came close to the 250 Watt figure (not quite, but certainly ballpark). I'm not saying it will be great on a performance/watt basis, but I thought this entire subforum had abandoned that metric once AMD's parts were so bad this generation in comparison?

Also, you can see based on this simple calculation that GTX780 being 50-100% faster than an 8970 is just not even remotely realistic. What are your thoughts on this?

Compare the 460 to the 580: it is frequently approaching 100% (exceeding it rarely, normally ~75%). That is the type of performance scaling we can reasonably expect from nV. How the 8970 performs is a big question mark. I wouldn't have believed the 7970 could possibly be as slow as it was prior to it launching; it was shockingly bad for a full node drop (this entire generation has followed suit from both parties).

Given that we have a direct historical comparison to make on the nV side, an increase of 100% over the 680 isn't close to being unreasonable. First release on a new node for a mid-tier die versus a refresh high-tier die (using the 460 vs 580 just because it is the most recent).

Do I think the 780 will end up 50%-100% faster than the 8970? I think that possibility is in play. I don't see it as unreasonable either, we have the historical performance characteristics of each company at relative die sizes to compare. Is it possible they will step out of form? Absolutely, but if I have to wager on who is going to come out ahead in designing a monolithic monster GPU at this point in the tech industry I'd be a fool to bet against nVidia.

All sorts of things can easily go wrong for nV and better than expected for AMD to change this in a profound manner, but since we are talking about the refresh, the fact that the 680 was supposed to be a 660 becomes a very real factor. AMD screwed up *badly* this generation - the fact that they did so allowed nVidia to completely hide the fact that *they did too*, because they didn't need to ship anything high end to compete.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I think most people in this thread put too much stock into the idea that each company is releasing products based on the other company. I think both companies release products for the market and to get more customers more than to compete with each other.

If anyone here thinks that a company should never release a new product, or should delay in order to see what the competition might bring, that approach would fail hard. If you have a new product that is ready to go, you release. Newer products can always command higher prices, and that is the best thing for a company. Waiting until there is competition in the market to release a product gives the customer choices BESIDES your product, AND a premium can't be charged.


++1 :thumbsup:

It costs money (lots of it) to develop a new product. It makes no sense to shelve it because there's no competition for it. As you've said, that's the perfect time to release it.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
I think you forgot to take into account that GK110 is a full compute part, thus likely less efficient in perf/W in games than GK104. Rumors I've heard are talking GK104+50% which I find believable considering the specs. But anything more than that is wishful thinking.

Also consider what I said about only doing what is necessary. Going all in with GK110 would mean very high cost, immensely high TDP, low yields and low supply.

What is more believable and economically wise?


  • Beat the competitor by 20%, earn $100 (random number) per chip and sell 50k pieces
  • Beat the competitor by 50%, earn $10 per chip and sell 5k pieces
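
A quick back-of-the-envelope calc with those (admittedly made-up) numbers shows how lopsided the trade-off would be:

```python
# Gross profit using the made-up per-chip margins and volumes from the two scenarios above.
beat_by_20 = 100 * 50_000   # $100 per chip x 50k chips
beat_by_50 = 10 * 5_000     # $10 per chip x 5k chips
print(beat_by_20, beat_by_50)   # $5,000,000 vs $50,000
```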
 

biostud

Lifer
Feb 27, 2003
20,125
7,240
136
Is it possible that both Nvidia and AMD will improve power efficiency before launching next-gen chips? Could Nvidia do an amputated version of GK110, where they slash some of the compute performance to get a smaller, more power-efficient chip for the consumer market?
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
If we compare the 460 to the 580, they even came close to the 250 Watt figure (not quite, but certainly ballpark).

GTX460 peaks at 119W.
GTX680 peaks at 186W.

You are comparing GTX460 --> GTX580 and GTX680 --> GTX780 and ignoring this massive power consumption difference between 460 and 680.....

GTX480 was 48% faster on average at 1080P vs. the 460 and peaked at 273W.

You are also ignoring that GTX480/580, like GK110, had double precision and a dynamic compute scheduler, and both parts were 520-530mm^2 with power consumption going to 230-270W. Those are 2 major areas you totally missed in your comparison. You cannot increase the functional units by 100% in GK110 since some of that die space is eaten up by compute-related fat.

I'm not saying it will be great on a performance/watt basis, but I thought this entire subforum had abandoned that metric once AMD's parts were so bad this generation in comparison?

There is very little difference in power consumption this generation despite the hoopla.

HD7950 = 144W
GTX670 = 152W
and
GTX680 = 180-190W
HD7970 = 190W
HD7970 GE after-market cards = 210-220W
HD7970 1.175V @ 1150mhz = 225-238W.

A stock GTX580 still draws more than after-market 7970 GEs.
http://static.techspot.com/articles-info/555/bench/Power.png

In other words, GTX680 is nothing like GTX460 was to 480/580. NV is already running up against 190W of power use on the 680. With 460 they had 110-150W of room to play with before getting to 580 / 480 power consumption. There is only 60W left from 680 to 250W. Apples vs. oranges.

Compare the 460 to the 580: it is frequently approaching 100% (exceeding it rarely, normally ~75%). That is the type of performance scaling we can reasonably expect from nV.

No you cannot. GTX460 peaked at 120W. GTX580 peaks at 229W. 110W increase in power.

GTX680 peaks at 186W. To get to 250W, that's just 64W more, or almost half the headroom 580 had over 460.
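
To put that headroom comparison in plain numbers (a quick sketch using the peak power figures quoted above):

```python
# Peak power draw figures quoted above, in Watts.
gtx460, gtx580, gtx680, tdp_cap = 120, 229, 186, 250

fermi_headroom = gtx580 - gtx460    # how much extra power the 580 used over the 460
kepler_headroom = tdp_cap - gtx680  # room left before a bigger Kepler hits 250W
print(fermi_headroom, kepler_headroom)   # 109W vs 64W -- roughly half the room to scale up
```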

Given that we have a direct historical comparison to make on the nV side, an increase of 100% over the 680 isn't close to being unreasonable.

No, just no. GTX680 = 186W.

100% faster on the 28nm node in a GTX780 is not unreasonable? Okkkk. :thumbsup: I don't think you even remotely grasp the relationship between modern 28nm manufacturing and die size, or that GK110 has excess transistors used for the dynamic scheduler and DP. 100% faster means 100% more functional units/memory bandwidth at 1058mhz GPU clocks on the same 28nm node, when the 680 already uses 186W. Ya, good luck with that.

The HD7970 GE at 1050mhz is 365mm^2 and pushes 210-220W in after-market form, 238W in reference blower / stock VRM form. You think NV has some magical 28nm node that allows it to grow the die size to 520-600mm^2, keep the 1.06ghz clocks of the 680, increase functional units 75-100%, and not go to 275-300W?

Do I think the 780 will end up 50%-100% faster than the 8970? I think that possibility is in play.

When did a 520-600mm^2 die size GPU beat a competing 410-420mm^2 die size GPU by 50-100%? Ya, that never happened. When did NV's flagship GPU beat AMD/ATi's flagship part by 50-100% on average? Ya, that also never happened. But this generation it will happen, you are saying, especially since the 680 is already trailing the 7970 GE, which means NV needs to increase performance 5-12% to begin with just to match a 7970 GE. But you are saying GTX780 will be 50-100% faster than the 8970? Makes sense how the 8970 may use 230-240W of power on a 410-420mm^2 die, but GTX780 will beat that by 50-100% and still have double precision and the dynamic scheduler, widen the bus to 384-bit, all that stuff, and not blow way past 250W of power use. Based on your projection, GTX780 = Fermi 2.0, here we go! :eek:

I don't see it as unreasonable either, we have the historical performance characteristics of each company at relative die sizes to compare.

Yes, we do, but you ignored power consumption. You only compared die sizes, which is just half the story.

GTX560 Ti = 68% - Peak 159W
GTX580 = 96% (41% faster) - Peak 229W

GTX680 - peak 186W......Your entire projection that 100% is not unreasonable is falling apart, and fast, unless NV goes to 275-300W, or NV builds an entirely different gaming chip and doesn't start with GK110 to begin with.

Is it possible they will step out of form? Absolutely, but if I have to wager on who is going to come out ahead in designing a monolithic monster GPU at this point in the tech industry I'd be a fool to bet against nVidia.

No one said anything about GTX780 not being able to beat HD8970, but these wild estimates of 50-100% faster than the 8970 being thrown around are 100% wrong. Never in the entire history of ATI vs. NV or AMD vs. NV did NV have a 50-100% faster high-end flagship. Not even the 8800GTX beat the 2900XT by 50% on average, and the 2900XT was a total disaster since AA was broken on it.

All sorts of things can easily go wrong for nV and better than expected for AMD to change this in a profound manner, but since we are talking about the refresh, the fact that the 680 was supposed to be a 660 becomes a very real factor. AMD screwed up *badly* this generation - the fact that they did so allowed nVidia to completely hide the fact that *they did too*, because they didn't need to ship anything high end to compete.

Ummm...no. AMD launched 3-6 months earlier, raised prices and didn't lose market share = sounds like a much better generation for the firm than the 4800/5800/6900 series were. HD7970 GE > GTX680, and GCN has class-leading compute architecture per mm^2. In 1 generation, AMD managed to hold price/performance and the single-GPU performance crown at the same time. The 365mm^2 die has the dynamic scheduler + DP. Tahiti XT is faster clock for clock than GK104 as well. Kepler has its advantages, GCN has its advantages. If AMD had screwed up this generation badly, NV wouldn't have lost the single-GPU performance crown and price/performance at the same time, nor lost 2.5% market share last quarter, etc.

Remove DP and you have Pitcairn, with superior performance/watt to GTX670/680 parts. The excess fat Tahiti XT has will translate to wasteful transistors that GK110 has to deal with too. That means increased power consumption on GK110, like the 7970 had to suffer this generation due to this excess fat. There is no real evidence now that NV held back GK110 because they could; it seems they held back GK110 because they couldn't launch it. At first, many of us thought it was reasonable that NV held back GK110 purposely, but since K20 is just starting to ship, GK110 appears to have been completely unmanageable in large volumes for most of this year at reasonable profit margins and yields to be able to have been sold at $500-600 in the consumer market. That's actually the more reasonable story of why NV had to use GK104: GK110 would have ended up too hot and too expensive, and yield and wafer capacity constraints forced NV to go to plan B --> GK104, which turned out to be a lot better than they expected.

Again: I'll let this sink in for you --> HD7970 GE peaks at 210-220W for after-market cards (238W for reference blower cards @ 1.25V) on a 365mm^2 die using 1st generation 28nm node at 1.05ghz. You are telling me that GK110 was easily manufacturable this year at 520-600mm^2 die at 1ghz? Yup, NV has alien 28nm technology stashed just for them.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Maybe the release date mentioned by Videocards.com is a paper launch like with Tahiti in December 2011 and the full computex launch may refer to good availability with all partner models out as well?

I also think it is unlikely that they launch so late. If you have something ready and in quantities, you launch it so you can raise the MSRP again. Unless...you still have much stock of the previous gen and don't want to anger your AIBs by making their stock obsolete too quickly.

When prices are cut by a manufacturer, rebates are given to the partners for prior purchases to balance stocks. Now, I'm not saying they make up for everything they've sold them, but overall the manufacturer eats the price drop.