Originally posted by: bryanW1995
Originally posted by: Idontcare
Originally posted by: SlowSpyder
Cliffs:
AMD GPUs are smaller than Nvidia GPUs, but performance is similar.
It seems to me that when you make comparisons involving things like die size, you are assuming both architectures have been equally optimized in their respective implementations.
Let me use an absurd example to show what I mean.
Suppose NV's decision makers decided they were going to fund GT200 development but gave the project manager the following constraints: (1) development budget is $1m, (2) timeline budget is 3 months, and (3) performance requirements were that it be on-par with anticipated competition at time of release.
Now suppose AMD's decision makers decided they were going to fund RV770 development but gave the project manager the following constraints: (1) development budget is $10m, (2) timeline budget is 30 months, (3) performance requirements were that it be on-par with anticipated competition at time of release, and (4) make it fit into a small die so as to reduce production costs.
Now in this absurd example the AMD decision makers are expecting a product that meets the stated objectives, and having resourced it 10x more than NV did their comparable project, one would expect the final product to be more optimized (fewer xtors, higher xtor density, smaller die, etc.) than NV's.
In industry jargon the concepts I am referring to here are called R&D Efficiency and Entitlement.
Now of course we don't know whether NV resourced GT200 any less than AMD resourced RV770, and likewise for Fermi vs. Cypress. But we can't conclude from die-size or xtor-density comparisons that one should be superior to the other in those metrics without access to the budgetary information that factored into the project-management decision making and tradeoff downselection.
This is no different than comparing, say, AMD's PhII X4 against the nearly identical-in-die-size Bloomfield. You could argue that Bloomfield shows AMD should/could have implemented PhII X4 as a smaller die, or should/could have made PhII X4's performance higher (given that Intel did)... or you could argue that AMD managed to deliver 90% of the performance while spending only 25% of the coin.
It's all in how you want to evaluate the metrics of success in terms of entitlement or R&D efficiency (spend 25% of the budget and you aren't entitled to expect your engineers to deliver 100% of the performance; 90% of the performance is pretty damn good).
So we will never know how much of GT200's die size is attributable to GPGPU constraints versus simply being the result of timeline and budgetary tradeoffs made at NV's project-management level, versus how similar tradeoff decisions were made at AMD's project-management level.
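To make the entitlement-vs-efficiency arithmetic concrete, here's a minimal sketch using only the illustrative 90%-performance / 25%-budget figures from the PhII X4 example above (hypothetical numbers, not real budgets):

```python
# R&D-efficiency framing: performance delivered per unit of R&D spend,
# normalized so the competitor scores 1.0 on both axes.
# All numbers are the hypothetical ones from the post above.

def rd_efficiency(relative_perf: float, relative_budget: float) -> float:
    return relative_perf / relative_budget

phii = rd_efficiency(relative_perf=0.90, relative_budget=0.25)        # PhII X4
bloomfield = rd_efficiency(relative_perf=1.00, relative_budget=1.00)  # baseline

print(f"PhII X4:    {phii:.1f}x perf per R&D dollar")      # 3.6x
print(f"Bloomfield: {bloomfield:.1f}x perf per R&D dollar")  # 1.0x

# Under the "entitlement" framing, only absolute performance counts (0.90 < 1.00);
# under the "R&D efficiency" framing, the cheaper project wins by ~3.6x.
```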
Good point, but if anything Nvidia is the one with 4x the R&D budget here. How bad would it be to come in 6 months late, be larger, AND cost 4x the R&D budget? That could be the ultimate GPU trifecta.
Right... I completely understand your post IDC, but it's been tossed around here that Nvidia spends more on R&D than AMD is worth as a company. With Nvidia being under the impression before the GTX2x0/48x0 launch that they were going to own the high end, I would think they knew they'd be selling a LOT of chips, so any savings per chip would add up to a very significant amount.
Selling GTX 280s by the tens of thousands, maybe hundreds of thousands, it would make much more sense for them to make the die as small as they can, say to save an average of $20/die. Just pulling numbers out of my ass, but a few bucks times thousands and thousands of GPUs adds to the bottom line.
I can't find it right now, but do we know how many transistors are in an RV770? Isn't it about 250mm2? We know the GTX285 is about 1.4 billion transistors and 460mm2 (sorry for mixing internal code names and product names; I don't know what the 55nm shrink of the GT200 is called... GT200b?). Do their sizes scale as we'd expect as the transistor count increases?
I guess what I'm getting at is that we hear the GT200 was aimed at GPGPU from the beginning, but we still don't know how, or whether it really was, or whether that was just marketing by Nvidia to explain why their large chip performs close to a much smaller chip. To the end user it doesn't matter, but if they're trying to explain to shareholders why their cost is higher, it might.
Thanks for the 20% number Keys, I just wish there were more details than that, I guess.
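A quick back-of-envelope on both questions in the post above; a sketch using the thread's own rough figures plus the commonly cited RV770 transistor count (~956 million), so treat every number as approximate:

```python
# 1) Does a $20/die saving matter at volume?
savings_per_die = 20     # dollars -- the made-up figure from the post above
units_sold = 100_000     # "maybe hundreds of thousands"
print(f"Total savings: ${savings_per_die * units_sold:,}")  # $2,000,000

# 2) Do die sizes scale with transistor count? Compare transistor density.
rv770_xtors, rv770_area = 956e6, 250     # ~956M transistors, ~250 mm^2 (55nm)
gt200b_xtors, gt200b_area = 1.4e9, 460   # GTX 285 figures from the post (55nm)

rv770_density = rv770_xtors / rv770_area      # ~3.8M xtors/mm^2
gt200b_density = gt200b_xtors / gt200b_area   # ~3.0M xtors/mm^2
print(f"RV770:  {rv770_density / 1e6:.2f}M xtors/mm^2")
print(f"GT200b: {gt200b_density / 1e6:.2f}M xtors/mm^2")

# On the same 55nm process, RV770 packs roughly 25% more transistors per mm^2,
# so the dies do NOT scale linearly with transistor count -- consistent with
# IDC's point that density is a design tradeoff, not an entitlement.
```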
Originally posted by: Keysplayr
Originally posted by: HurleyBird
Rule 1:
Don't feed the troll.
No matter how stupid his comment, or how brilliant your response, Wreckage wins the moment you decide to reply to one of his inflammatory posts.
Ignoring a troll has never caused a thread to derail, guys.
Seconded. Ignore from this point on?
Agreed by me. I will completely ignore his posts in this thread unless they are relevant to the conversation.