
Nvidia's 600 series pricing

Page 5 - AnandTech Forums
I think TSMC gave very conservative estimates and the initial batches weren't so good on 28nm, which is why even AMD underclocked their stock 7970 specs. Even if NV's yields aren't quite as good as AMD's (which we don't know for sure, especially since the 680 die is smaller), NV can sell their chips for a far higher price as Tesla, Quadro, etc., and even the GTX 680 is priced above AMD's best. Add to that the dinky PCBs and other cost-cutting measures such as locked voltage (allowing less robust components on the PCB) and the 256-bit memory bus.

I was actually not comparing AMD vs. nVidia in my explanation. I realize they are in the same marketplace, but that has nothing to do with what I was talking about.

Suppose Tahiti had gotten scrapped and Pitcairn was the top AMD chip. In order to compete at all in the high end they would theoretically have to release Pitcairn at 1300MHz. What percentage of chips do you think can produce those clocks? Not anywhere near as many as can run at 1000MHz. It would raise the cost of the 7870 pretty dramatically and force it into a higher ($500?) price bracket. Even then AMD might very well make less money on it than they do now.
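A rough sketch of that binning arithmetic, with entirely hypothetical wafer costs and yield fractions (none of these figures are public):

```python
# Rough illustration of how binning for higher clocks raises per-chip cost.
# All numbers are hypothetical -- actual yields and wafer costs are unknown.

wafer_cost = 5000.0        # assumed cost per 28nm wafer, USD
dies_per_wafer = 250       # assumed gross dies for a Pitcairn-sized chip
functional_fraction = 0.80 # dies that work at all (assumed)

def cost_per_sellable_die(bin_fraction):
    """Cost per die that also meets the target clock bin."""
    sellable = dies_per_wafer * functional_fraction * bin_fraction
    return wafer_cost / sellable

print(cost_per_sellable_die(0.90))  # most dies reach 1000 MHz: ~$28/die
print(cost_per_sellable_die(0.25))  # few dies reach 1300 MHz: $100/die
```

Under these assumed numbers, demanding the 1300MHz bin more than triples the effective cost per sellable chip, which is the point being made about forcing a 7870 into a $500 bracket.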

 
Interesting thread, and it popped one simple question in my head that could honestly make or break the OP's point (from my interpretation.)

Would anyone here pay $500 for a GTX 660 if it still beat an HD 7970?
 
Interesting thread, and it popped one simple question in my head that could honestly make or break the OP's point (from my interpretation.)

Would anyone here pay $500 for a GTX 660 if it still beat an HD 7970?
no matter what it's called, Nvidia originally designed it as a mid-range gpu, which carries with it mid-range costs for them. Why would I give them $500 for it just because AMD did not release a very competitive high-end card in that price range?
 
no matter what it's called, Nvidia originally designed it as a mid-range gpu, which carries with it mid-range costs for them. Why would I give them $500 for it just because AMD did not release a very competitive high-end card in that price range?

Ninja EDIT

EDIT: Sorry, like four people came to my cubicle as I was responding.

So you wouldn't pay $500 for a GTX 660 or a GTX 680. How about others who did buy the GTX 680: would you pay $500 for a GTX 660?

I bought a GTX 680, and no I wouldn't pay $500 if it were called the GTX 660. I guess I'm one of those guys.
 
Interesting thread, and it popped one simple question in my head that could honestly make or break the OP's point (from my interpretation.)

Would anyone here pay $500 for a GTX 660 if it still beat an HD 7970?
I would :thumbsup:
 
I'm very happy with my "mid range" GTX 680, moondog. What "mid range" gpu are you running?

At least toyota shows his gpu in his sig as I do.

I would NOT classify a GTX 570 as "mid range". It is still a very powerful GPU.

I don't want to appear overly sensitive, but you started this post by asserting the GTX 670/680 were only "mid range" GPUs. If so, what do you base that on at the present time?

Besides the GTX 690, a niche card in the vein of the AMD 6990, the GTX 680 is the fastest consumer card for Nvidia at present and for the near future.

So I would like you to tell me what you base this on and what video card/s you have.

I'm waiting for your gpu spec, moondog.
 
Interesting thread, and it popped one simple question in my head that could honestly make or break the OP's point (from my interpretation.)

Would anyone here pay $500 for a GTX 660 if it still beat an HD 7970?


Of course they wouldn't, no matter how it performed. People pay money for psychological reasons. That's what marketing is for. That's what fancy GPU coolers with lots of shiny heat pipes and colors are for. This is also what putting a high-end GPU SKU on a mid-range GPU is for.
 
Of course they wouldn't, no matter how it performed. People pay money for psychological reasons. That's what marketing is for. That's what fancy GPU coolers with lots of shiny heat pipes and colors are for. This is also what putting a high-end GPU SKU on a mid-range GPU is for.
Well fps numbers are anything but psychological 😎
 
no matter what it's called, Nvidia originally designed it as a mid-range gpu, which carries with it mid-range costs for them. Why would I give them $500 for it just because AMD did not release a very competitive high-end card in that price range?

I posted earlier that I believe the GK104 is actually costing nVidia more now due to them having to bin chips that'll run @ ~1GHz, where the chip was originally intended to run @ ~700MHz (affecting yields). Do you consider this plausible?
 
I posted earlier that I believe the GK104 is actually costing nVidia more now due to them having to bin chips that'll run @ ~1GHz, where the chip was originally intended to run @ ~700MHz (affecting yields). Do you consider this plausible?

I consider it plausible, although with a smaller manufacturing process I would think that 1GHz should be pretty manageable.
 
I posted earlier that I believe the GK104 is actually costing nVidia more now due to them having to bin chips that'll run @ ~1GHz, where the chip was originally intended to run @ ~700MHz (affecting yields). Do you consider this plausible?

Your statement says that GK104 costs NV more, but more than what?

GTX480/580 were ~520-530mm^2 die-size chips, with beefed-up VRMs, more expensive heatsink designs, and 384-bit memory buses. This chip is almost half the size. This is a GF114 successor, so selling these chips for $400-500 a piece is money in the bank, even with wafer costs and binning involved. Also, JHH said that 28nm yields at TSMC are better than 40nm yields were with the original Fermi (although not as good as he would have liked). I expect profit margins are insane on these cards, > 50%. 😎
 
Your statement says that GK104 costs NV more, but more than what?

GTX480/580 were ~520-530mm^2 die-size chips, with beefed-up VRMs, more expensive heatsink designs, and 384-bit memory buses. This chip is almost half the size. This is a GF114 successor, so selling these chips for $400-500 a piece is money in the bank, even with wafer costs and binning involved. Also, JHH said that 28nm yields at TSMC are better than 40nm yields were with the original Fermi (although not as good as he would have liked). I expect profit margins are insane on these cards, > 50%. 😎

Man, you're making it difficult to have a conversation about this. I asked a simple question: do you think the GK104 is more expensive to market at ~1GHz than it would be at ~700MHz? Obviously, it would yield better at a lower frequency. So this is not the $300 mid-range chip everyone wants to make it out to be.

If I recall correctly, the statement about yields was that 28nm is yielding better than 40nm did at the same stage in its maturity. Please keep in mind that 40nm was around for a long time, about a year IIRC, before the first Fermi chip was ever offered for sale. The fact that nVidia has any cards out at this point in 28nm's development already beats 40nm. We had the 4770, then all of the 5000s, and then, months later, we had the first Fermi, which had to be cut down just to be marketed at all. So yes, 28nm is doing better for nVidia. It doesn't mean that GK104 costs less than GF100, because at this point in 40nm's maturity there was no GF100.

Now, back on track. I'm just saying that GK104 is very likely costing nVidia more than they originally thought it would when they targeted it at $299. How much more, I have no idea; I doubt you or any of us do either. I never compared it to the cost of GF100 at all. That's a whole different path where the variables are too unknown to calculate.
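The die-size side of this argument can be sketched with the textbook Poisson yield model, Y = exp(-D*A). The die areas below are the widely reported figures; the defect density is an assumed value, not TSMC's actual number:

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Fraction of dies free of killer defects, Y = exp(-D * A)."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-defects_per_cm2 * area_cm2)

# Hypothetical defect density for an immature 28nm process
d0 = 0.5  # defects per cm^2 (assumed)

gk104 = poisson_yield(294, d0)   # GK104 is ~294 mm^2
gf100 = poisson_yield(529, d0)   # GF100 was ~529 mm^2

print(f"GK104-sized die ~{gk104:.0%}, GF100-sized die ~{gf100:.0%}")
```

Under this assumed defect density, a GK104-sized die yields roughly three times the fraction of good chips per wafer that a GF100-sized die would, which is the structural reason a half-size chip is cheaper per good die even on an expensive node.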
 
All ya have to do is look at nVidia's margins overall. If the yields were crappy and poor, they would affect the bottom line much more. They were above guidance, and the guidance for next quarter is for even higher margins.
 
Man, you're making it difficult to have a conversation about this. I asked a simple question: do you think the GK104 is more expensive to market at ~1GHz than it would be at ~700MHz? Obviously, it would yield better at a lower frequency. So this is not the $300 mid-range chip everyone wants to make it out to be.

Who said it would run at 700MHz?

Why do you think GK104 got GPU Boost mode, and why does it kick in basically all the time? Because they can't put out enough chips at that speed?

GTX680 and GTX670 sell like mad here. But nobody wants a HD7970/HD7950.
 
There is a lot of evidence pointing to 680s originally intended as 670s, hell, even 660 Tis. They're at 1006MHz because 28nm overclocks extremely well all around, obviously. Nvidia's site still tells me my 680's default clock is 705MHz when I let it detect my GPU. Early drivers limited clock speed to 705, too. We just got good clockers 🙄
 
Now, back on track. I'm just saying that GK104 is very likely costing nVidia more than they originally thought it would when they targeted it at $299. How much more, I have no idea; I doubt you or any of us do either. I never compared it to the cost of GF100 at all. That's a whole different path where the variables are too unknown to calculate.

GK104 sells for $400-500. It's far less complex in terms of die size and heatsink/PCB/VRM costs than GF100/110 were. The additional cost increases from 28nm wafer pricing at TSMC and/or binning for GK104 are easily offset by the price premium that NV now charges for this GF114 successor. Whether NV binning or overclocking GK104 at the factory makes it more expensive may or may not be true, but either way, how does that prove or disprove that it's still a mid-range chip?

All it takes is a look at NV's historical succession from one generation to the next, going back to GeForce 2.

In the last 10 years, every single next generation mid-range NV chip has either equalled or outperformed the previous flagship:

GeForce 2 Ultra < GeForce 3 Ti 200 (mid-range)
GeForce 3 Ti500 < GeForce 4 Ti4200 (mid-range)
GeForce 4 Ti 4600/4800 < GeForce 5700U (mid-range)
GeForce 5800U/5950U < 6600 GT (mid-range)
GeForce 6800U < 7600GT (mid-range)
7800GTX 256mb/7900GTX 512mb < 8800GT (mid-range)
8800GTX Ultra/9800GTX < GTS250/GTX260 (mid-range)
GTX280/285 < GTX460 1GB (mid-range)
GTX480/580 < Some new Kepler (mid-range) chip.

There is nothing unusual at all about next generation NV mid-range chip outperforming the previous generation high-end chip.

As some pointed out, just look at NV's gross margins and guidance for confirmation:

- Gross margin narrowed to 50.1% from 50.4% this quarter
- For Q2 2013, GAAP gross margin is expected to be 51.2% (51.5% non-GAAP)

If GK104 was much more expensive for NV to produce than chips in the past, we would have expected a substantial hit to Gross Margins.

Actually the gross margins during Fermi era were lower than 50-51%, which only supports the view that NV is making larger profits from Kepler chips.
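For reference, the margin figures quoted above are just (revenue - COGS) / revenue. A toy sensitivity check with hypothetical per-card numbers (NV's actual unit economics are not public) shows why a meaningfully more expensive chip would be visible in the reported margins:

```python
def gross_margin(revenue, cogs):
    """Gross margin as a fraction of revenue."""
    return (revenue - cogs) / revenue

# Hypothetical unit economics for a $500 card; NV actually sells chips/kits
# to board partners, so this only illustrates the arithmetic.
rev = 500.0
for cogs in (245.0, 250.0, 260.0):   # assumed unit costs
    print(cogs, f"{gross_margin(rev, cogs):.1%}")
```

Under these made-up numbers, even a $15 swing in unit cost moves gross margin by three points. A chip that was substantially more expensive to produce than planned would therefore show up clearly in the ~50% corporate figures being discussed.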

We already know that GK110 is going to launch in Q4 2012, and that chip is a natural successor to GF100/110, but for the first time in the history of NV, as far as I know, they are not launching the highest-end GPU chip for both consumer and professional markets.

GK104 can be sold as a high-end chip because AMD underdelivered. There is no way to sugar-coat it. The performance increase of the HD7970 over the GTX580 was only 20-25%, which is the lowest performance increase AMD/ATI has had over the previous-generation NV card (go back through all the generations to GeForce 2 and you'll see AMD/ATI's next-generation chips outperformed the previous NV high-end by miles).

This also explains why the GTX660 is nowhere to be found: the 660Ti/GTX670 was meant to be GK104, not GK106. Since GK104 was meant to be mid-range to upper-mid-range, it was never meant to be cut into smaller parts (just like GF104/GTX460 was never meant to be cut down into 5-6 SKUs).

NV is now rumored to launch GTX660 (GK106) by the end of the summer, late July at best.

I already highlighted for you, going back all the way to GeForce 2, that NV's next-generation mid-range chip has always either outperformed or equalled NV's previous-generation high-end chip. GK106 is rumored to have just 768 SPs. Do you think that chip has any shot of beating the GTX480/580? Doubtful, which again shows GK104 was the intended next-gen mid-range, since GK106 falls outside NV's 10-year historical succession record.

At the end of the day, GTX670/680 is the fastest consumer NV gaming chip right now, so for all intents and purposes, sure, you can consider it the highest-end Kepler chip for graphics at the moment. But it is in no way, shape, or form a flagship successor to the GTX480/580: it has gimped GPGPU compute, and the performance increase is the lowest from ANY generation of NV GPUs since GeForce 2.

NV won't ever admit that GK104 is not the flagship since that would destroy all of their credibility.

Some other points:

- Memory bandwidth has not increased (not characteristic of next gen flagship going back to GeForce 2)
- Gimped GPGPU compute (not a characteristic of flagship compute chips since G80 compute / CUDA strategy was put into place)

At the end of the day, whether GK104 is mid-range or flagship may not even matter anymore, since it will continue to sell for $400-500 in the near term. What we can say without any doubt is that this is the smallest performance increase from one generation to the next in the NV camp, and also AMD/ATI's worst performance increase over NV's previous flagship card ever. Based on that, this entire 28nm generation has not lived up to the performance increase or the price/performance curve implied by Moore's Law and the historical GPU succession of the last 10 years from either camp.

Whether or not this is the slowing down of Moore's Law in GPUs or an outlier bump in the road will become clearer in the next 2-3 years.
 
Wow, this debate is starting to sound like the 'Big Oil' debate where people try to rationalize incredibly high oil prices amid record profits by the oil companies.

Bottom line, yes, the prices for discrete graphics cards are definitely on the high side, but if anyone is to blame, it's the consumers. Discrete graphics cards are not one of life's necessities. We can opt out of paying those prices. For any publicly traded company the focus has to be on profits, otherwise what are the shareholders going to think?

So to quote Shakespeare, "The fault, dear Brutus, is not in our stars, but in ourselves, that we are underlings."
 
To folks who still cling to the belief that NV puts gamers first, dream on.

GTX 670 is the gaming card--using a cut down GK104.

Most uncut GK104 chips go into Tesla K10 dual-GPU professional boards that sell for multiple times what a GTX 690 would sell for. http://www.amazon.com/PNY-DisplayPor.../dp/B0044XUD1U (That's a Quadro board but still gives you an idea of how much a Fermi-based Tesla board might cost.) According to vr-zone, Tesla K10 and K20 have more than 150,000 pre-orders, meaning 300,000 GPUs that won't be going into GeForce cards.

http://vr-zone.com/articles/how-the...-prime-example-of-nvidia-reshaped-/15786.html

The leftover GK104 chips go into GeForce GTX 680. This number will go up as Tesla demand is satisfied, freeing up GK104s for use in GeForce cards.
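The supply and revenue arithmetic behind this allocation, using the pre-order figure cited in the post and hypothetical round-number prices (the Tesla K10 price here is an assumption, not an official figure):

```python
# Supply math from the post: each Tesla K10 board carries two GK104 GPUs.
preorders = 150_000              # K10/K20 pre-order figure cited from vr-zone
gpus_per_board = 2
tesla_gpus = preorders * gpus_per_board
print(tesla_gpus)                # 300000 GPUs not available for GeForce

# Why Tesla gets priority: revenue per GK104 die at assumed selling prices.
tesla_k10_price = 3500.0         # hypothetical; pro boards sell for several $K
gtx680_price = 500.0
revenue_ratio = (tesla_k10_price / gpus_per_board) / gtx680_price
print(revenue_ratio)             # 3.5x the revenue per die
```

Even with these rough assumptions, each die routed to Tesla earns a multiple of what it would as a GeForce card, which is why a capacity-constrained NV would fill professional orders first.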

Similarly, the first batches of GK110 are already claimed by Tesla pre-orders:

http://techreport.com/articles.x/22989/3

http://vr-zone.com/articles/nvidia-...-k20-2013-geforce-and-quadro-cards/15884.html

Nvidia is prioritizing profits over gross revenue, and this is absolutely the right thing to do when capacity-constrained. I always thought it was so stupid of NV to release Fermi to gamers first, but in hindsight maybe we helped debug things for them or whatever, giving them more time to get drivers together for Quadro and Tesla. In any case, I would rather that gamers continue to kvetch about not getting GK110 in GeForce cards this year, and instead having those GPUs go into HPC cards that solve real-world problems like drilling/seismic/financial/medical imaging problems.

If it's any consolation, Quadro is in the same boat we are: they aren't getting any Kepler-based Quadros until Kepler-based Tesla cards ship first, either.

Expect to see cut-down GK110 chips for GeForce GTX gaming-grade cards sometime next year, after some of the initial Tesla demand is met.
 
To folks who still cling to the belief that NV puts gamers first, dream on.

Actually NV needs Consumer GPU Business or it would be impossible to sustain R&D spend for Tesla and Quadro lines. JHH has stated so himself during the GPU Technology Conference just this month.

Not sure why this is news to anyone since NV has 3 distinct businesses under 1 company umbrella:

1) Consumer Products Business - Tegra / mobile solutions
2) Consumer GPU Business (Notebook and desktop discrete GPUs)
3) Professional Solutions Business - Tesla and Quadro

#2 brings in the most revenue, but #3 has the highest profit margins. Currently there is declining demand for business #2 around the world, and it is also constrained by the 28nm production ramp:

The consumer GPU business was down 6.7% year-over-year (fiscal Q1 2013 vs. Q1 2012).

To say that NV doesn't care about gamers is somewhat misleading since without that business, they would be out of business based on their current company structure.

GTX 670 is the gaming card--using a cut down GK104.

GK110 is probably not making its way to the consumer GPU market because it would be too expensive to manufacture on the current immature 28nm node, and it is far more profitable in the professional segment. Further, NV doesn't have any pressure to release such a large and expensive chip when GK104 is good enough given the current market competition. It's unfortunate for us, but it is what it is.

Considering NV brought more price/performance this generation while AMD raised prices across all of its products, it's interesting that you think NV doesn't care about gamers. It was AMD that succeeded the $180 HD6850 with the $250 HD7850 and the $240 HD6870 with the $350 HD7870, while raising the HD6950's $299 price to $450 with the HD7950. NV has had high prices in the past too, so I am not defending them either, but this generation AMD has dropped the ball like never before: they abandoned class-leading performance and price/performance all in one generation. So what does their GPU division stand for now? Overpriced cards? Looks like it. Outside of the HD7850, not a single card in their desktop line-up is worth buying for 90% of gamers. Thankfully their HD7950M and 7970M are excellent.

Also, what has AMD done to improve graphics and fidelity in games?

Besides Eyefinity, not much in the last 5 years. NV was first out of the gate with SM3.0 (which jump-started the HDR lighting revolution with Far Cry), first with SuperSampling in DX9/10 games, first with SLI, and first to get the whole industry to move to GPGPU compute with G80, while AMD just waited on the sidelines until NV took all the business risks by creating a new business segment that didn't exist before NV put serious financial muscle behind it.

NV has spent a lot of marketing dollars trying to get tessellation into modern games, while AMD introduced tessellation with the Radeon 8500 as TruForm but did absolutely nothing with it until NV pushed the feature forward.

Besides Eyefinity, AMD keeps following, not leading. NV has done a lot more for PC gaming in the last 5-6 years than AMD. So really your view that NV doesn't care about gamers contradicts many innovations which NV has brought into the gaming market.

I don't even view ATI and AMD as the same company anymore. There are hardly any of the key ATI veterans left at AMD graphics. AMD graphics has almost nothing in common with ATI anymore because most of the influential people who made ATI what it was have left, and with Eric Demers going to Qualcomm, they lost one of the most respected ATI graphics guys.

From Radeon 9700 to X1950XTX, ATI stood for top-of-the line/class leading performance.
From HD3800 to HD6900, AMD stood for price/performance and performance/watt.

What does AMD's graphics division stand for now? Double precision compute, and not much else. Rory Read doesn't care about gamers, he only cares about making $, which is great for shareholders, not gamers.

When was the last time Rory Read made a presentation to gamers about GPUs and videogames and products he actually sells? JHH is like Steve Jobs of the GPU industry. He made a presentation for GTX690's launch. He is in the spotlight many times throughout the year.

Rory Read has been the CEO of AMD for a while now and so far there is no evidence at all that he is passionate about CPUs or GPUs, especially after he said that CPUs are fast enough for anybody today. There you go.
 
Nowhere did I say NV doesn't care about gamers. Please actually read my post and the links contained therein before commenting. I am just saying it's clear that GeForce does not come first anymore, so don't hold your breath on GTX 680s becoming more abundant or dropping in price anytime soon. Tesla is getting priority this time around. Quadro and GeForce come later. GeForce means the mass orders required for greater economies of scale and for splitting R&D costs, but the majority of profits comes from Quadro and Tesla.

The news is that for the first time since I can remember, GeForce is NOT getting first dibs on the top GPU in the stack. Tesla is. I believe this is a good thing going forward as it advances more serious work first even if gamers have to wait a little longer for GK110 to filter down to GeForce and to deal with scarce uncut GK104 supplies.

As for the rest of your post, I have no idea why you are ranting about AMD. AMD has nothing directly to do with this except that they fouled up HD7xxx enough that NV doesn't feel pressure to release GK110 even in limited quantities, though NV is ready to do so if AMD tries to pull a fast one. This is according to the articles I linked to.

You mention JHH. You also did something very interesting recently. I posted a video link to JHH's speech in this thread: http://forums.anandtech.com/showthread.php?t=2246432 You commented and then hastily edited part of the comment out, but I saw what you deleted. You said that JHH in the video mentioned leveraging the gaming industry's scale and the R&D funding it provides to HPC. Why did you edit it out, because you thought it would make JHH/NV look bad or something? 😛

There was no need to edit out that comment. It is plain as day that as NV's chipset business and low-end graphics business die off thanks to Intel's shenanigans and APUs/embedded CPU graphics, NV must find new sources of revenue, and it appears they are seeking those in mobile and HPC. (NV has a de facto monopoly on pro graphics already.) And JHH was pandering to the HPC crowd anyway. The truth is that GeForce/Quadro/Tesla (and probably to some extent Tegra) all contribute to R&D, so one could just as easily say that the growing demand for Tesla may help subsidize GeForce cards' R&D as vice versa.

Btw anyone who knows anything about HPC knows that NVIDIA absolutely smokes AMD when it comes to "serious" uses of graphics chips, whether in pro graphics or HPC. NVIDIA's major league HPC conferences make AMD's HPC conferences look like Little League.

If NV had an x86 license it'd be all over for AMD as NV's APUs would undoubtedly beat up AMD's offerings. As it is, NV is stuck with ARM, which isn't a bad place to be stuck in, though there appears to be a large contingent of AT CPU subforumers who believe Intel will crush ARM soon enough with Intel's vaunted process advantage. (I don't believe that will happen.)

But enough talk about AMD. This is not about AMD or Rory Read (wtf? bringing Rory Read up? Seriously?). This is about explaining to people why there is an ongoing shortage of GeForce GTX 680s despite the relative abundance of GTX 670s. For those seeking GTX 680s, you are probably better off getting one of the nice custom GTX 670s and overclocking it, as you save ~$100 and don't have to wait for something in such scarce supply.

 
I am just saying it's clear that GeForce does not come first anymore, so don't hold your breath on GTX 680s becoming more abundant or dropping in price anytime soon.

Technically speaking, the GeForce product line is the most important at the company (see my vanilla ice cream store analogy). Kepler is a graphics architecture, just like Fermi and G80, etc. There are no Tesla or Quadro products as such; those are all GeForce chips with a new marketing label: Toyota (GeForce) and Lexus (Tesla), or Honda and Acura.

Even from the links you posted (which I read), the main reason GK110 is not coming to the consumer graphics space first this round appears to be that NV can make more $$ selling it to professionals, while their mid-range/mid-size Kepler chip ended up being good enough for the high-end graphics market. However, it's probably also because NV couldn't manufacture it on time (hence why it's not coming until Q4 at the earliest). GK110 is still a GeForce Kepler chip, though. It just happens that AMD dropped the ball so badly that, for the first time in the history of NV, they will sell all of those high-end GeForce graphics chips as Tesla cards to professionals while using GK104 as the flagship stand-in. I don't think NV had any idea they'd be able to pull off GK104 at $500. You would do the same if you knew your $200-250 GF104 successor was good enough to sell at $400-500: take what was intended as the $500 GTX680 and sell it for $3.5K, re-badge your GTX660Ti as the GeForce 680, and make bank! Why do you think all those comments about GK104 being a mid-range chip started? Why was NV so "relieved" when they saw HD7970 scores? They had panicked because they knew they couldn't get the flagship out on time.

The most important product at NV is GeForce, always has been. Just because NV is capitalizing on profits with GK110 first, doesn't mean GeForce is not the most important brand at the firm. The only reason Tesla is even possible is because NV reuses GeForce GPUs for that product line.

Tesla is getting priority this time around. Quadro and GeForce come later.

Obviously because AMD dropped the ball. NV is shifting product demand from a market segment where sales are slowing, consumer GPUs, to a segment with substantial growth and healthy profit margins, Tesla / GPGPU compute. However, that has nothing to do with Tesla being the most important product at NV. All they are doing is selling graphics cards as Tesla cards because gamers won't pay $3.5-5K for graphics cards. At the end of the day, Tesla is just a GeForce graphics card with full double-precision compute enabled and dynamic scheduling, exactly like GF100 was a graphics card rebadged as Tesla.

GeForce means the mass orders required for greater economies of scale and for splitting R&D costs, but the majority of profits comes from Quadro and Tesla.

As a single business segment, professional graphics is more profitable than desktop or notebook GPUs. However, I think you are misunderstanding NV's business model. NV allocates its R&D expenses to the Graphics division. This is why the Tesla and Quadro lines are so profitable.

In other words, if you were to physically split NV into 2 companies, the Graphics division could exist as a standalone business, but the Tesla and Quadro division would never be able to exist on its own. GeForce is the most important brand at the company today, even if it's not the most profitable, because it's the business that absorbs the R&D cost. GeForce is the sole reason the company exists. Tesla is GeForce.

Without GeForce business, there is no Nvidia. Without Tesla and Quadro, there is Nvidia.

The GeForce business brings in about $2.4B in revenue per year. Kepler cost $3-4B to develop. It would be impossible to make Tesla if NV didn't sell graphics cards, but Kepler would easily still exist if NV never sold a single Tesla card.

As for the rest of your post, I have no idea why you are ranting about AMD. AMD has nothing directly to do with this except that they fouled up HD7xxx enough that NV doesn't feel pressure to release GK110 even in limited quantities, though NV is ready to do so if AMD tries to pull a fast one. This is according to the articles I linked to.

So you are saying AMD has nothing to do with this, and then you are saying that NV is in no rush to launch GK110 because the HD7900 series isn't faster than GK104. So AMD actually has a lot to do with this. NV is now able to rebadge GeForce cards and sell more Tesla cards. All it amounts to is re-branding a graphics card into a more profitable SKU segment.

You said that JHH in the video mentioned leveraging the gaming industry's scale and the R&D funding it provides to HPC. Why did you edit it out, because you thought it would make JHH/NV look bad or something?

Why would it make JHH/NV look bad? The only reason Tesla even exists is because of GeForce. I removed it because it had nothing to do with the ray-tracing discussion in that thread; I noted it earlier since I wanted to point out that the ONLY reason Tesla exists is because of GeForce, and I thought it was interesting that JHH reiterated it. In hindsight, I should have left that statement in, since it was just a confirmation that GeForce is the most important product at NV: without GeForce, Tesla would not be possible.

"Kepler GPUs are accelerating our business," said Jen-Hsun Huang, president and chief executive officer of NVIDIA. "Our newly launched desktop products are winning some of the best reviews we've ever had. Notebook GPUs had a record quarter. Graphics is more important than ever." - MarketWatch

the growing demand for Tesla may help subsidize GeForce cards' R&D and vice versa.

That's one way to look at it. Here is my view:

Vanilla Ice Cream Store

Growing demand for Tesla doesn't really subsidize GeForce R&D. Tesla is GeForce. Imagine you owned a vanilla ice cream store. Suppose you realized that if you covered your vanilla ice cream in chocolate, you could sell it as Haagen-Dazs bars for 10x more money. As a business owner, you'd take all your vanilla ice cream and sell as many chocolate-covered vanilla bars as possible.


But you cannot do that forever because the only reason your ice cream store exists is because customers come inside to buy vanilla ice cream cones ($600 million a quarter) and vanilla ice cream in buckets.


If you cannot sell enough vanilla ice cream buckets (GeForce desktop GPUs) and vanilla ice cream cones (GeForce Notebook GPUs), you cannot afford to pay the lease on that store (Admin and Wages), the property taxes (Income taxes), or pay the employees to advance the vanilla extract necessary for next generation vanilla ice cream that's going to be needed to stay competitive (R&D).

GeForce is the vanilla ice cream that is absolutely required for NV's ice cream store to exist. Tesla is a chocolate-covered Haagen-Dazs version of the exact same premium vanilla ice cream that normally sells in cones/buckets, etc. at the main store, but NV puts it in a nice wrapper and markets it to chocolate lovers who will pay 10x more for that feature (GPU compute). The extra VRAM and double precision on Tesla are the extras, like the caramel or almonds added to the vanilla ice cream later. It's still just GeForce with shiny trinkets.

Looks like NV realized the demand for Haagen-Dazs chocolate bars is so great this time that they will sell 100% of all their premium vanilla ice cream as Haagen-Dazs bars and sell mid-grade vanilla ice cream cones and buckets (GK104) in their main store. However, after extensive competitive research, NV realized that this time their mid-grade vanilla ice cream is as good as their competitor's premium vanilla ice cream. Suddenly NV is selling mid-grade vanilla ice cream cones as premium vanilla cones in their main store because consumers are buying it anyway since the competition's best product now is only as good as NV's mid-grade vanilla ice cream.

GeForce = premium vanilla ice cream
Tesla = chocolate covered Haagen-Dazs vanilla bars

Without GeForce, there is no Tesla, but NV can sell ice cream cones and buckets even without Haagen-Dazs bars and stay in business. Haagen-Dazs is simply a reallocation of their premium vanilla ice cream to a niche segment that just happens to be more lucrative and growing because people just love chocolate covered vanilla ice cream bars more than ever. 😉
 
Computer chips as ice cream. This thread has officially taken a turn for the weird. 😛

Btw, the reason the ice cream store makes the profit it does is those chocolate-covered vanilla bars or whatnot; the profit margins wouldn't be nearly as high if all they sold was standard ice cream. Can you imagine what would happen if NV suddenly got legally barred from making anything other than GeForce cards? Their stock price would instantly drop like a rock.
 
There is a lot of evidence pointing to 680s originally being intended as 670s, or even 660 Tis. They're at 1006MHz because 28nm overclocks extremely well all around, obviously. Nvidia's site still tells me my 680's default clock is 705MHz when I let it detect my GPU. Early drivers limited the clock speed to 705 too. We just got good clockers 🙄

Even if there were intentions of a GTX 670 Ti -- why would it be clocked lower than the GTX 560 Ti? Not just lower, but substantially lower?

Maybe nVidia knew they had a behemoth, gargantuan, incredible juggernaut of an architecture and wanted to keep the real performance close to the vest, artificially capping the clocks in drivers so that any leaks would confuse the public and their competitors?

The little chip that could.
 
I don't think nVidia has forsaken gamers with the GK-110; it simply doesn't make economic sense to release it as GeForce given the immaturity and potential supply constraints of 28nm early on. Since the margins are much higher, while the volume is not, it makes much more sense to target the professional sector first.

With process maturity, fewer constraints, and strong competition from AMD, I don't see why nVidia wouldn't eventually release the GK-110 to GeForce, especially if it can bring in more revenue, margins, and profit.
 