Originally posted by: Zstream
Originally posted by: Idontcare
This is really like three articles rolled up into one.
First let me point out the obvious:
If you do the math, the shrink from 65nm to 55nm ((55 * 55) / (65 * 65) ~= 0.72) saves you about 1/4 of the area; that is, 55nm is 0.72 of the area of 65nm for the same transistor count. 55nm shrunk to 40nm gives you 0.53 of the area, and 65nm shrunk to 40nm gives you 0.38 of the area.
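(If anyone wants to sanity-check that arithmetic, here is a quick Python sketch of exactly that naive calculation; naive because it treats the node label as if it were a real linear dimension, which is the point I get to below.)

# Naive "do the math" area scaling: treat the node label as a linear
# dimension and square the ratio. The numbers here are just the
# marketing labels, nothing measured.

def naive_area_ratio(new_node_nm, old_node_nm):
    return (new_node_nm ** 2) / (old_node_nm ** 2)

for old, new in [(65, 55), (55, 40), (65, 40)]:
    print(f"{old}nm -> {new}nm: {naive_area_ratio(new, old):.2f} of the area")

# Prints 0.72, 0.53, and 0.38 respectively.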
This is simply an embarrassment. I'm embarrassed for Charlie because he doesn't even know why he should feel embarrassed for publishing this "do the math".
Node labels are nothing more than that: labels. They could call it the Apple node, the Pear node, and the Banana node for all the node label means anymore. A "55nm" node has nothing to do with the number "55" or the units "nm". Go to Home Depot and buy yourself a 2x4 and measure it: does it measure 2 inches by 4 inches? No. The label "2x4" isn't meant to mean anything mathematical or numeric beyond the basic logic that a 2x4 is smaller than a 4x4, which is smaller than a 4x6, and so on.
Umm, a 2x4 is two inches by four inches, or replace inches with whatever units you like. All of the die sizes listed above serve as an average number.
I have no doubt Charlie thinks his "do the math" is absolutely correct, just as I am sure you are absolutely convinced a "2x4" physically measures 2 inches by 4 inches.
But you've never measured a 2x4 (have you?), and Charlie has never been involved in any of the hands-on aspects of the chip business, hence he has no clue that he has no clue about the things he thinks he is expertly talking about.
I see BenSkywalker proved that out as well regarding the DX11 stuff. This guy Charlie is a legend in his own mind, the once and forever king.
It's only natural to be ignorant of things in life; when we are first born we are ignorant of everything. But Charlie's arrogance prevents him from extinguishing his ignorance in the very technological field in which he endeavors to be a technical writer... another thing Ben aptly hits on with his comment about whether or not Charlie actually understands the insider info his sources are trying to pass on to him. We've seen it time and time again from this guy; he just keeps dancing in the aisles.
The bump shear problem was perhaps one of the best trade journal documentations of a legitimate stress-fatigue failure mechanism in modern ICs that I have seen outside of work. Charlie really did team up with the right insiders with the right background to nail that problem right down to the root cause. How much of it was Charlie and how much of it was ATI engineers doing everything they could to help Charlie publicize NV's problems? (Heck, I'd do it if I worked for ATI :laugh: )
Originally posted by: SlowSpyder
IDC, I don't suppose you have any insider info, or could give us an educated guess, on the cost to build some of the current cards, could you?
When the 8800GT launched it usually sold for around $200 (a little more, if I remember right), and I have no doubt Nvidia made a ton of money on them. The 4870 has a similarly sized GPU and uses a 256-bit memory connection, and with both being considered mid-to-lower parts of the high-end lineup, I would imagine they use similar quality components.
Won't talk about the insider stuff for obvious reasons, but educated guesses on costs are actually something we can ballpark based on conference-call admissions on gross margins. It is a crude estimation because GM comments apply to the entire product mix, whereas we know different SKUs will have differing GMs. But it's better than nothing.
Keep in mind that making a ton of money on one SKU is not the same as selling enough product across the board at high-enough gross margins to generate a self-sustaining business going forward. There is "cost to manufacture" and then there is "cost to produce". Producing a part includes R&D investments, admin, sales overhead, distribution, marketing, etc. Manufacturing the part merely covers the bill of materials; gross margins usually only speak to the bill-of-materials aspects of the business (not exactly, but close enough for this level of detail in a forum conversation).
The incremental cost of producing one more successive unit is pretty low provided the production capacity already exists.
The BoM to manufacture the HD4770, for example, is somewhere around $60: slightly higher if GPU yields are worse than I currently expect, or slightly lower if yields are coming up as TSMC has said they would. The gross margins for this SKU have got to be fairly low (maybe 30-35%?) once the resellers take their profits from the markup.
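To put rough numbers behind that, here's a back-of-envelope sketch; the ASP figure is purely my guess for illustration, only the ~$60 BoM is the number from above:

# Back-of-envelope gross margin for an HD4770-class SKU.
# bom is the ~$60 estimate above; asp (what AMD collects per unit from
# its partners) is a made-up number just to show the arithmetic.

bom = 60.0   # assumed bill of materials, $
asp = 90.0   # assumed average selling price, $ (my guess, not a real figure)

gross_margin = (asp - bom) / asp
print(f"Gross margin: {gross_margin:.0%}")   # ~33% with these guesses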
Keep in mind I am not saying these guys need to double their prices to stay in business. A 10% higher price at retail can translate into 20-30% higher GMs at AMD's and NV's end of the business.
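Same toy numbers, showing how a small retail bump moves the GM needle (again, every dollar figure is an illustrative assumption, including how much of the retail increase actually flows back upstream):

# Toy model: a 10% higher retail price lifting the gross margin by
# roughly 20-30% in relative terms. All figures are illustrative guesses.

bom = 60.0       # assumed BoM, $
asp = 90.0       # assumed ASP to partners, $
retail = 110.0   # assumed retail price, $

extra = 0.10 * retail        # $11 from a 10% retail bump
asp_new = asp + extra        # assume all of it flows back to the ASP

gm_old = (asp - bom) / asp
gm_new = (asp_new - bom) / asp_new
print(f"GM goes from {gm_old:.0%} to {gm_new:.0%}, "
      f"about {gm_new / gm_old - 1:.0%} higher in relative terms")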
There, did I sufficiently dodge the question?
Originally posted by: Just learning
Do you think Nvidia having CUDA implemented in its hardware is significantly affecting their performance/$ ratio? (with respect to Games)
If this is true....then I hope someone is able to write some legitimately useful non-gaming software for CUDA.
It's a fair question. Presumably NV's gaming driver engineers actually program the drivers to take advantage of the general-purpose parts of the GPU to their maximum benefit. I would shudder at the idea that the GP part of the GPGPU truly sits idle while gaming; I doubt that is happening.
But a different question might be "had NV's chip architecture engineers been able to use the same xtor budget to craft the GT300 sans any and all CUDA general purpose support, would the resultant beast of a GPU deliver significantly better performance/$ in games?"
That answer has to be a yes; dedicated hardware will always trump general-purpose hardware when it comes specifically to the tasks the dedicated hardware is designed to do. If given the choice between computing the square root of pi on an x86 CPU versus building a dedicated IC solely capable of computing SQRT(PI), the dedicated hardware will always win on performance... (not performance/$ in this case though, as the volume of the general processor drives down the cost per unit, which is the entire reason that market exists).
Look at Nehalem vs. Penryn for gaming... your performance/$ when it comes to just gaming is better maximized with the tired old FSB and older architecture than with the new CPU (not talking about the extreme corners where tri-SLI edges it out, etc.; performance/$ is the subject here).
