[3dcenter] GK104 specs

Page 17 - AnandTech Forums

Arzachel

Senior member
Apr 7, 2011
903
76
91
But I thought smaller die sizes translated into lower costs for consumers?
I remember this being the cry against Nvidia's big die strategy. Now that AMD is charging bigger bucks for its product, suddenly R&D comes into play.
Funny stuff.

Larger dies are harder to manufacture, leak more current (if more transistors are used, of course), and are better at heat dissipation. I think the GTX 480 brought more ill will than any pricing concerns.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
Hmm, why not :) I'm in I guess.

Good man! I think you have the better odds, since you can win if only one of the two conditions fails, whereas I need both conditions to be true to win. Nevertheless, I think the end results will be pretty close to both of our conditions regardless of who wins. Let's try to remember that this is all for "fun", although one of us gets to reward the other with a prize when the end of the rainbow is reached.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Good man! I think you have the better odds, since you can win if only one of the two conditions fails, whereas I need both conditions to be true to win. Nevertheless, I think the end results will be pretty close to both of our conditions regardless of who wins. Let's try to remember that this is all for "fun", although one of us gets to reward the other with a prize when the end of the rainbow is reached.

Agreed! Trust me, I'm as anxious as you are to find out how good the GK104 is. :)
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
But I thought smaller die sizes translated into lower costs for consumers?
I remember this being the cry against Nvidia's big die strategy. Now that AMD is charging bigger bucks for its product, suddenly R&D comes into play.
Funny stuff.

A small die is cheaper once the process has matured enough to provide good yields. A smaller die means more chips per wafer, which ultimately means lower cost once production scales up. However, 28nm is still very new, so the process has not yet matured to a level where yields are high. If you can only harvest half the chips on a given wafer, that ultimately affects TSMC's ability to make money. So they charge much more per wafer until they can get to a point where losses are low.
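As a rough sketch of the yield economics described above (all figures are hypothetical illustrations, not actual TSMC numbers):

```python
# Rough sketch of die-size/yield economics: wafer cost spread over good dies.
# All numbers below are hypothetical, not actual TSMC figures.
import math

def gross_dies(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Approximate dies per wafer: area ratio minus an edge-loss correction."""
    r = wafer_diameter_mm / 2
    wafer_area = math.pi * r * r
    # Partial dies lost at the wafer edge (standard approximation).
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def cost_per_good_die(wafer_cost_usd: float, die_area_mm2: float,
                      defect_density_per_cm2: float) -> float:
    """Wafer cost divided among the dies that actually work.

    Uses the classic Poisson yield model: yield = exp(-D * A).
    """
    die_area_cm2 = die_area_mm2 / 100.0
    yield_frac = math.exp(-defect_density_per_cm2 * die_area_cm2)
    good_dies = gross_dies(300, die_area_mm2) * yield_frac
    return wafer_cost_usd / good_dies

# Same ~365 mm^2 die on an immature process vs. after yields improve:
immature = cost_per_good_die(5000, 365, 0.5)
mature = cost_per_good_die(5000, 365, 0.1)
print(f"immature: ${immature:.0f} per good die, mature: ${mature:.0f}")
```

With a high defect density the effective cost per working die is several times what it is once the process matures, which is the mechanism the post describes.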
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
A small die is cheaper once the process has matured enough to provide good yields. A smaller die means more chips per wafer, which ultimately means lower cost once production scales up. However, 28nm is still very new, so the process has not yet matured to a level where yields are high. If you can only harvest half the chips on a given wafer, that ultimately affects TSMC's ability to make money. So they charge much more per wafer until they can get to a point where losses are low.

So was there no R&D to recoup from prior launches? In other words, does the point you're trying to make only apply to 7970?
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
I will admit I paid full MSRP on a GTX 580 once. Believe me, there were cuss words uttered about Nvidia when I bought it. I really didn't think about price/performance or any of that; I just wanted the top performer. Turned out I had buyer's remorse about 2-3 months in and sold it. So while yes, I have been on the paying-premium side, I am no longer. Just like anything else, you learn from your mistakes. So yes, with my new outlook on GPU buying, I'm giving the 7970 a failing grade.

That's not an unreasonable position at all. I think both companies are overpricing; high-end GPU pricing has far outpaced inflation, I do believe. I remember getting a Voodoo2 way back in the day for $299 at Best Buy of all places :eek: I don't think we've had 100% inflation over the past few years.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Fun (and fairly useless) facts:

The 5870 was approximately 40% faster than the 4890 at release.
The 7970 was approximately 40% faster than the 6970 at release.

The 5870 was 110% more expensive than the 4890 at release (380 USD versus 180 USD)
The 7970 was 60% more expensive than the 6970 at release (550 USD versus 350 USD)

In absolute terms, both the 5870 and 7970 increased the price by 200 USD (which is less in the 7970's case due to inflation)

So all in all the 7970 came with a smaller price hike than the 5870 in both absolute and relative terms, but people still complain, even though no one really complained all that much with the 5870.

And I guess all this just goes to show that talking about the size or extremity of a price increase is pointless, all that really matters is the given price, at a given performance level.

This was a good post, thanks.
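For what it's worth, the arithmetic in the quoted post checks out (using the launch prices as given above):

```python
# Quick check of the launch-price comparisons quoted above.
prices = {"4890": 180, "5870": 380, "6970": 350, "7970": 550}

def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old price to new price."""
    return (new - old) / old * 100

print(f"5870 over 4890: +{pct_increase(prices['4890'], prices['5870']):.0f}%")
print(f"7970 over 6970: +{pct_increase(prices['6970'], prices['7970']):.0f}%")
print(f"absolute hikes: {prices['5870'] - prices['4890']} and "
      f"{prices['7970'] - prices['6970']} USD")
```

That comes out to roughly +111% and +57%, i.e. approximately the 110% and 60% figures in the post, with an identical 200 USD absolute hike in both cases.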
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
I will admit I paid full MSRP on a GTX 580 once. Believe me, there were cuss words uttered about Nvidia when I bought it. I really didn't think about price/performance or any of that; I just wanted the top performer. Turned out I had buyer's remorse about 2-3 months in and sold it. So while yes, I have been on the paying-premium side, I am no longer. Just like anything else, you learn from your mistakes. So yes, with my new outlook on GPU buying, I'm giving the 7970 a failing grade.


So with your own words you state this product is of no interest to you and have exited this price bracket completely.

Yet you belittle the people who buy it, and do nothing but complain about the price? Want some cheese to go with that whine? Do you also sit there and point out how store-brand mac and cheese is cheaper than Kraft Mac n Cheese?

Outside of crapping on people's parade, have you added anything of value to your points outside of waaaaaah? Seriously?
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
So was there no R&D to recoup from prior launches? In other words, does the point you're trying to make only apply to 7970?

It applies to prior releases as well. As mentioned above, the 5870 was 110% more money than the 4890 when it released. The 6000 series was basically just a rehash of the same cards, so there was no big price jump over the 5000 series.
 
May 13, 2009
12,333
612
126
So with your own words you state this product is of no interest to you and have exited this price bracket completely.

Yet you belittle the people who buy it, and do nothing but complain about the price? Want some cheese to go with that whine? Do you also sit there and point out how store-brand mac and cheese is cheaper than Kraft Mac n Cheese?

Outside of crapping on people's parade, have you added anything of value to your points outside of waaaaaah? Seriously?

I'm not ruling out a high-end card entirely, though the chances are slim. I didn't know I had to buy one to have an opinion on them. I don't buy $1000 Intel CPUs, although I'm sure I've commented on them. Thanks for your concern.
 

lavaheadache

Diamond Member
Jan 28, 2005
6,893
14
81
I stayed away from this thread for an entire day to let the smoke clear and hopefully move on to the OP.

Nope that didn't happen

There still seems to be plenty of bickering over the price of the current high-end, uncontested, performance-leading card.
 
Feb 19, 2009
10,457
10
76
A .08% increase in share prices kills the argument? :confused:

The chart's not that confusing.

AMD shares are up ~50% since October, probably in part due to their huge APU sales.

Up ~30% since the launch of the 79xx series with no competition in sight.

By the time gk104 arrives, they can readjust prices if need be and the 7990 will be king, unmatched for a long time until gk110. Trinity APU due soon as well. This could be a great year for AMD shareholders.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
If one desires to play that game: nVidia was at 11.81 in early October and is at 16.24 now, and they don't have any APUs and don't have their new generation out. What does it mean? Some place too much merit in some things.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
What do you think Jen Hsun Huang sees you as? His best friend? :rolleyes:

A potential bar code number on a GeForce brand. However it was nice of him to take the time for this:


Q: With AMD's acquisition of ATI and Intel becoming more involved in graphics, what will NVIDIA do to remain competitive in the years to come?


Jen-Hsun Huang, CEO and founder of NVIDIA: The central question is whether computer graphics is maturing or entering a period of rapid innovation. If you believe computer graphics is maturing, then slowing investment and “integration” is the right strategy. But if you believe graphics can still experience revolutionary advancement, then innovation and specialization is the best strategy.

We believe we are in the midst of a giant leap in computer graphics, and that the GPU will revolutionize computing by making parallel computing mainstream. This is the time to innovate, not integrate.

The last discontinuity in our field occurred eight years ago with the introduction of programmable shading and led to the transformation of the GPU from a fixed-pipeline ASIC to a programmable processor. This required GPU design methodology to include the best of general-purpose processors and special-purpose accelerators. Graphics drivers added the complexity of shader compilers for Cg, HLSL, and GLSL shading languages.

We are now in the midst of a major discontinuity that started three years ago with the introduction of CUDA. We call this the era of GPU computing. We will advance graphics beyond “programmable shading” to add even more artistic flexibility and ever more power to simulate photo-realistic worlds. Combining highly specialized graphics pipelines, programmable shading, and GPU computing, “computational graphics” will make possible stunning new looks with ray tracing, global illumination, and other computational techniques that look incredible. “Computational graphics” requires the GPU to have two personalities – one that is highly specialized for graphics, and the other a completely general purpose parallel processor with massive computational power.

While the parallel processing architecture can simulate light rays and photons, it is also great at physics simulation. Our vision is to enable games that can simulate the interaction between game characters and the physical world, and then render the images with film-like realism. This is surely in the future since films like Harry Potter and Transformers already use GPUs to simulate many of the special effects. Games will once again be surprising and magical, in a way that is simply not possible with pre-canned art.

To enable game developers to create the next generation of amazing games, we’ve created compilers for CUDA, OpenCL, and DirectCompute so that developers can choose any GPU computing approach. We’ve created a tool platform called Nexus, which integrates into Visual Studio and is the world’s first unified programming environment for a heterogeneous computing architecture with the CPU and GPU in a “co-processing” configuration. And we’ve encapsulated our algorithm expertise into engines, such as the Optix ray-tracing engine and the PhysX physics engine, so that developers can easily integrate these capabilities into their applications. And finally, we have a team of 300 world class graphics and parallel computing experts in our Content Technology whose passion is to inspire and collaborate with developers to make their games and applications better.

Some have argued that diversifying from visual computing is a growth strategy. I happen to believe that focusing on the right thing is the best growth strategy.

NVIDIA’s growth strategy is simple and singular: be the absolute best in the world in visual computing – to expand the reach of GPUs to transform our computing experience. We believe that the GPU will be incorporated into all kinds of computing platforms beyond PCs. By focusing our significant R&D budget to advance visual computing, we are creating breakthrough solutions to address some of the most important challenges in computing today. We build Geforce for gamers and enthusiasts; Quadro for digital designers and artists; Tesla for researchers and engineers needing supercomputing performance; and Tegra for mobile users who want a great computing experience anywhere. A simple view of our business is that we build Geforce for PCs, Quadro for workstations, Tesla for servers and cloud computing, and Tegra for mobile devices. Each of these targets different users, and thus each requires a very different solution, but all are visual computing focused.

For all of the gamers, there should be no doubt: You can count on the thousands of visual computing engineers at NVIDIA to create the absolute best graphics technology for you. Because of their passion, focus, and craftsmanship, the NVIDIA GPU will be state-of-the-art and exquisitely engineered. And you should be delighted to know that the GPU, a technology that was created for you, is also able to help discover new sources of clean energy and help detect cancer early, or to just make your computer interaction lively. It surely gives me great joy to know what started out as “the essential gear of gamers for universal domination” is now off to really save the world.

Keep in touch.

Jensen
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
So you're saying AMD couldn't sell the 7970 at $399 and still make a ton of profit? I think they could. So why are they $550-$600? Hmm..

Wake up, man. Getting fleeced for your hard-earned dollars is not cool. I can't for the life of me figure out how getting hard-earned dollars taken out of your pocket is a good thing? :confused: Then to come into a thread and defend it is really the puzzling thing.
Wonder if these same guys drive by the gas station and get all giddy when gas goes up by a dollar. "Oh look honey, gas is $1.00 higher than yesterday! Yippee!!
Go Exxon!! A big nameless, faceless corporation needs this money so much more than me. Here you go, sir. I don't need it. Just so happy to pay premiums!"


They priced them in line with what the competition priced their parts at (the GTX 580 is ~$500; the 7970 is faster and costs more because of that).

But even more than that, the cards are priced where they are because they will sell at that amount. If they know that every single chip made, every single card manufactured will sell at $550, why would they price it at $399? They'd only be hurting their own company. If the price was too high, they wouldn't sell, and AMD and their partners would be forced to lower the price.

Look at it like this: if you were selling a used car that you know is worth, and will sell for, $10,000, why on earth would you list it at $6,000?
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Jason Paul, Product Manager, GeForce: Fermi has dedicated hardware for tessellation (sorry Rys :p). We’ll share more details when we introduce Fermi’s graphics architecture shortly!

Date: November 1, 2009. Made me giggle.
 