What happens to nVidia?


Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Can we now please drop this non-subject? TSMC has offered double via technology since their 130 nm process (or perhaps even sooner, can't recall). Why is everyone talking as if it is a new thing for 40 nm, and nVidia would somehow not know about it?
Heck, look at this document from 2003(!) from TSMC, discussing it on page 4:
http://www.tsmc.com/download/english/a05_literature/September_2003.pdf

The notion of discrete vias, isolated and defined in layout but used to connect the same two metal wires, was a construct created by the advent of the dual-damascene copper back-end of the line (BEOL), first implemented in production at 180nm by Motorola and AMD.

Before the copper process you had aluminum (which is not a dual-damascene process) and the vias were literally made by wet-etching big wide areas of dielectric (as large as you wanted) and then laying down your aluminum layer.

Because of the dimensions involved, making single large aluminum contacts (if needed/desired) did not create a reliability issue that could otherwise be mitigated by creating two or more isolated vias between the same two metal lines. (via voiding is the reason we do this with copper BEOL)

Nice and crisp via definition: (notice how they get larger as you go from bottom to top, increasing metal levels)
[image: cross-section of the via/metal stack]


Here is an example of the use of both single via (on the left) and double vias (on the right) to connect metal levels:
[image: single via (left) and double vias (right) connecting metal levels]


Here is an example of the use of both single and double contacts in the same logic circuit:
[image: SEM of a logic circuit with single and double contacts]
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
But I heard that the GT200 cost up to $60.00 per GPU when it launched, so how much would the GF100 cost at launch? The difference probably isn't much, especially since both chips are similar in size. So assuming the GF100 costs $50 per chip, plus memory costs and PCB items like wiring and routing, voltage regulators, capacitors and resistors, a GTX 480/470 should cost at least $135 as a guesstimate. And if we account for relative size compared to Cypress, which is almost half the size of GF100, with a cheaper and less complex PCB thanks to the 256-bit bus, less complicated power circuitry, similar components, and more yield per wafer, a single HD 5870 GPU should cost less than $20.00, and the video card itself shouldn't exceed $65.00 as a guesstimate.
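(The guesstimate above is just a bill-of-materials addition. A minimal sketch of that arithmetic, using only the hypothetical figures from the quote; none of these are confirmed costs:)

```python
# Back-of-the-envelope board cost build-up, using the guesstimate figures
# from the quoted post. All numbers are hypothetical, not confirmed costs.

def board_cost(gpu_die, memory, pcb_power_misc):
    """Total the major cost buckets for a video card."""
    return gpu_die + memory + pcb_power_misc

# GTX 480/470 guesstimate: ~$50 die plus a complex 384-bit PCB and beefy VRMs
gf100 = board_cost(gpu_die=50, memory=50, pcb_power_misc=35)    # $135 total

# HD 5870 guesstimate: <$20 die plus a simpler 256-bit PCB
cypress = board_cost(gpu_die=20, memory=30, pcb_power_misc=15)  # $65 total

print(f"GF100 board guesstimate: ${gf100}, Cypress board guesstimate: ${cypress}")
```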

I haven't seen cost estimates on the 5870, but I know that the 4xxx series was considerably more expensive than that. In fact, even with the smaller die, AMD didn't have much of a price advantage over nVidia. I can't find the graph; maybe somebody else saved it.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
You should point out that Keys also said they were using single vias too. There was no clear distinction of which products got single and which got doubles.

The misunderstanding is coming from the misperception that an IC must be entirely single or double via...the reality is that every IC made since the advent of copper metal integration (in production circa 2000) has used a mixture of both single and double vias (and even more, where needed) as a yield enhancement and reliability improvement engineering tool.

In the industry we call this "designing for manufacturability" (DFM) and it basically means we build in a little bit of redundancy to hedge our bets against the likelihood of defects forming no matter the quality of the fab and process.

DFM is a balancing act, the balance is between increasing production cost (including IC design cost which itself includes the cost to implement DFM) and decreasing net sellable product. Do you want 300 chips per wafer that are likely to yield 99% or do you want 600 chips per wafer that are likely to yield 90%?
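(To make that tradeoff concrete, here is a minimal sketch using those illustrative numbers; the die counts and yields are purely for the sake of the example:)

```python
# Net sellable dies per wafer under the two DFM scenarios from the post.
# Numbers are illustrative only.

scenarios = {
    "more DFM redundancy (bigger dies)": (300, 0.99),
    "less DFM redundancy (smaller dies)": (600, 0.90),
}

for name, (dies_per_wafer, yield_fraction) in scenarios.items():
    good_dies = dies_per_wafer * yield_fraction
    print(f"{name}: {good_dies:.0f} good dies per wafer")

# 297 vs 540 good dies: raw count favors the smaller dies here, which is
# why the real decision also weighs selling price, reliability risk, and
# the design cost of implementing the redundancy in the first place.
```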

So, you see, there isn't really such a thing as a "single via" product to be distinguished from a "double via" product. It is an admixture of single, double, triple, etc. via placements, all done to maximize gross profits.

The specific ratio of vias is one of those things that can actually change with a mere stepping.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
But that's the FUD spread by AMD, making it sound as if they invented double vias.
nVidia started building 40 nm DX10.1 parts at about the same time as AMD started with the 4770, the same testbed approach. Fermi had been using double vias from the beginning. nVidia just never made a big deal about it, unlike AMD (to insiders, AMD's remark about moving to double vias sounds pretty amateurish, actually; double vias have been around for many years).

riiiiiight, so remarking about moving to double vias is amateurish, but releasing a high-end GPU family for a new DirectX version 7 MONTHS after your competition is "professional"? Did Dirk Meyer steal your girlfriend in sixth grade or something?


Personal attacks and insults are not acceptable...I know you know this :(

Moderator Idontcare
 
Last edited by a moderator:

Paratus

Lifer
Jun 4, 2004
17,638
15,826
146
Idontcare,

I understand that some amount of transistor placement happens by hand in CPU design. Is the same true in GPU design?
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
As far as I know IDC is the only one with engineering experience on ICs on these forums...

To break down what they did or didn't do to three factors in something with a billion parts is ridiculous. I have a hard time believing that the media reports on how the GPUs are put together (something I don't fully understand) are any more accurate to reality than those reporting how laser physics (something I do understand) will change the world. I.e., highly complicated things are almost always reported misleadingly or flat-out incorrectly.

Fun question: put up your hand if you actually know what a via is for, why you would need two instead of one, and what difference it makes other than "two is better than one". ATI likely learned some tricks with the 4770 that they did not know before... perhaps one of them was exactly where to use double vias... Nvidia likely learned the same things. Given that the 4770 was within a few months of the 240-ish cards, we can likely assume both learned about the same things at about the same time, and if anyone learned more than another, it is as far above our heads as the moon.

So for us to argue about it is really akin to arguing how well that tractor beam posted in OT will work...

If we knew how much it costs to make each GPU, how much they sell for, and how much the R&D cost, we could make educated guesses about who is better off (who will make or lose money, and for how long... of course, we don't know any of these numbers except from wildly differing rumours). We can comment on which card is the best product. We can even guess which company will explode... But we really have no basis for talking about which GPU was built using the soundest engineering.

I only mention this because it suddenly came to me the other day that "Gee, the couple of people who actually are experts on GPUs and ICs probably find the talk on it just as enlightening as I find some physics reports... I've become what I hate!"

Deep breath everyone, that is all.

Pretty sure that AMD's GPU used the soundest engineering, since they actually produced something on time, and that something wasn't used as a grill in its spare time. nVidia's THEORY could very well end up better, and in the long run they could glean some useful info from GF100, but for now Fermi has sucked big time.

Look at it this way: if nVidia had released the 5xxx series and AMD had released Fermi, would they still be close in market share, or would nVidia be pushing 70%+ in discrete graphics right now?


I thought he was on the rude side himself... trolling and telling someone to "chill out", "relax", and other things.
Was actually doubting whether I should report him.
You've made up my mind for me now.

Good idea: bait somebody else into an action, then "report" him for it. If you made even a modicum of effort to be civil, your knowledge would be much easier to accept and appreciate.
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Idontcare,

I understand that some amount of transistor placement happens by hand in CPU design. Is the same true in GPU design?

Yes, absolutely, for the discrete GPU markets that NV and AMD create products for.

The embedded VR stuff that gets used alongside Cortex and so on will be entirely synthesized from a pre-existing optimized library. This too might not be explicitly true for 100% of the cases out there, but if you were going to find a 100% synthesized graphics product out there, it would be the embedded stuff, where cost is king.

I haven't had the pleasure of knowing anyone in Intel's GMA design team, so I can't say for certain whether they are purely synthesized designs, but going by the die maps it looks like they do a combination of both as well.

You know, the perfect guy to ask these questions of is Ctho9305. You've got to PM him, though, as he doesn't spend a whole lot of time in the forums, but he does IC design for a living ;)
 

Scali

Banned
Dec 3, 2004
2,495
0
0
riiiiiight, so remarking about moving to double vias is amateurish, but releasing a high-end GPU family for a new DirectX version 7 MONTHS after your competition is "professional"?

It's not all black-and-white.
Even professionals can fail in a high-risk business. Is a Formula 1 driver not a professional when he crashes his car? I think you'll be hard-pressed to find any Formula 1 world champions who never crashed.
Likewise, you'll be hard-pressed to find any chip manufacturers who've never made a product that was late, underperforming, or whatnot. That even includes Intel, a company much larger than AMD and nVidia combined, employing a much larger team of professionals, including many of the brightest minds in the industry. And STILL they get it wrong sometimes? Yes, it happens. But not because they 'forgot' to use double vias.

Thing is, a remark like "we moved to double vias" sounds a bit like "Oh gee, we just discovered that the earth is not flat!". As in: everyone has known about this for years, and has been doing this for ages. It's not worth mentioning (which is why nobody ever did until now). Just like you never hear someone say "For our latest chip, we are using silicon!"
Of course they do; that's what they've been doing for years, like everyone else.

So I don't see why you try to compare these things.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Pretty sure that AMD's GPU used the soundest engineering, since they actually produced something on time, and that something wasn't used as a grill in its spare time.

This GPU was designed by pretty much the same team that designed the HD2900 a few years earlier.
See what I mean?
 

Scali

Banned
Dec 3, 2004
2,495
0
0
In the industry we call this "designing for manufacturability" (DFM) and it basically means we build in a little bit of redundancy to hedge our bets against the likelihood of defects forming no matter the quality of the fab and process.

To get back to the TSMC problems...
These vias are etched into the metal (or how should I say that?), and as can be seen from your X-ray pictures, the process is not 'perfect'; it's a bit inconsistent.
So if a via is etched poorly, it may not make enough of a contact to pass enough current through, which is why adding more vias is a good way to compensate for that.
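(A rough sketch of why the redundancy helps so much, assuming each via independently fails open with some probability p; both p and the via count below are made-up illustrative values, not TSMC data:)

```python
# Chip-level yield impact of single vs. double vias, assuming each via
# fails open independently with probability p. All values are made up
# purely for illustration.

def chip_yield(p_via_fail, vias_per_connection, n_connections):
    # A connection fails only if ALL of its redundant vias fail;
    # the chip works only if every connection works.
    p_conn_fail = p_via_fail ** vias_per_connection
    return (1.0 - p_conn_fail) ** n_connections

p = 1e-7          # per-via open-failure probability (hypothetical)
n = 1_000_000     # via connections on the die (hypothetical)

print(f"all single vias: {chip_yield(p, 1, n):.3f}")    # ~0.905
print(f"all double vias: {chip_yield(p, 2, n):.10f}")   # ~0.9999999900
```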

Apparently with the 40 nm process at TSMC, the vias were of such poor quality that the likelihood of defects was much larger than what TSMC had originally estimated and communicated to their customers... and also larger than TSMC's historical norm. So a lot of extra redundancy had to be built into the chip in order to compensate for TSMC's poor manufacturing.
And *this* is what nVidia spoke out against in the media.

TSMC later reported that they had fixed these process issues, but as far as I know, it's still not quite up there with competing foundries.
 
Sep 9, 2010
86
0
0
I haven't seen cost estimates on the 5870, but I know that the 4xxx series was considerably more expensive than that. In fact, even with the smaller die, AMD didn't have much of a price advantage over nVidia. I can't find the graph; maybe somebody else saved it.

But how's that possible? The HD 5870 consumes pretty much the same power as the HD 4890 at full load, which means they didn't need to beef up the power circuitry by much; the HD 5870 uses faster GDDR5 memory and its die is slightly larger. Why would the HD 4x00 series be more expensive than the current HD 5x00 series? Or, even worse, the GTX 2x0 series, which had a far more complex PCB with more sophisticated power circuitry and a die twice the size? I doubt that AMD was selling the HD 4800 series at a loss; otherwise they couldn't have survived.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Pretty sure that AMD's GPU used the soundest engineering, since they actually produced something on time, and that something wasn't used as a grill in its spare time.

So Tegra2, by your standards, was vastly superior engineering to the 5xxx parts since it was early and wasn't close to the power pig that the 5xxx parts are.

It's all relative.

http://www.anandtech.com/show/2977/...tx-470-6-months-late-was-it-worth-the-wait-/6

According to those numbers the 5870 was slower than the previous-generation GTX 285, so you could say it was a year late and underperforming. A person with any sort of sense would realize that not every product has the same engineering goals, and much as the GF100 may not look great when you use AMD's design goals, the 5870 looks pretty pathetic when you use nV's.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
So Tegra2, by your standards, was vastly superior engineering to the 5xxx parts since it was early and wasn't close to the power pig that the 5xxx parts are.

It's all relative.

http://www.anandtech.com/show/2977/...tx-470-6-months-late-was-it-worth-the-wait-/6

According to those numbers the 5870 was slower than the previous-generation GTX 285, so you could say it was a year late and underperforming. A person with any sort of sense would realize that not every product has the same engineering goals, and much as the GF100 may not look great when you use AMD's design goals, the 5870 looks pretty pathetic when you use nV's.


So are you saying that NVIDIA GeForce GPUs shouldn't be reviewed primarily as something to allow games to function?
 

Paratus

Lifer
Jun 4, 2004
17,638
15,826
146
So are you saying that NVIDIA GeForce GPUs shouldn't be reviewed primarily as something to allow games to function?

We've also been told we can't compare them on price either, as it's impossible to tell how much each design costs.

So it's probably just better to take on faith that what we are seeing is "a can of whoop'ass".

I'll leave it to the reader to decide who's doing the whooping and who is the ass. ;)
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
But how's that possible? The HD 5870 consumes pretty much the same power as the HD 4890 at full load, which means they didn't need to beef up the power circuitry by much; the HD 5870 uses faster GDDR5 memory and its die is slightly larger. Why would the HD 4x00 series be more expensive than the current HD 5x00 series? Or, even worse, the GTX 2x0 series, which had a far more complex PCB with more sophisticated power circuitry and a die twice the size? I doubt that AMD was selling the HD 4800 series at a loss; otherwise they couldn't have survived.

I'm not an engineer so I don't know how it's possible, but I do know what I read. I spent 30 minutes looking for it last night but was foiled by digitimes and their large signup fees. I'll do some more digging today so we can discuss actual numbers on a chart instead of theory.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
So are you saying that NVIDIA GeForce GPUs shouldn't be reviewed primarily as something to allow games to function?

Last I checked the GF100 was the fastest gaming GPU available on the market.

What people are talking about is what is secondary. For the 5870, that was power consumption. For the GF100, that was GPGPU. It seems to me that the GF100 throttles the 5870 based on that criterion, much as the 5870 throttles the GF100 in power consumption terms. Saying that nV's engineers failed in any way really doesn't make a lot of sense to me, since it seems like they delivered what they were aiming for, much as ATi's engineers did. The reality is that they weren't aiming for the same thing.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Ben, do you know where to get the "cost to manufacture gpu" info? I've looked all over, in the archives, google, froogle, yahoo, ask.com, and under my bed, all without success.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Ben, do you know where to get the "cost to manufacture gpu" info?

iSuppli is what most people quote, but their information is based on guesses: highly educated guesses, but still guesses. Unless a company does a full breakdown of components that lists per-item costs, which I'm not aware of anyone doing, iSuppli gathers as much info as they can and comes up with numbers they think are right (sometimes they are way off; they were claiming that a Sony BRD drive cost Sony $15 more than I could buy it for off of NewEgg, as an example).
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
Last I checked the GF100 was the fastest gaming GPU available on the market.

What people are talking about is what is secondary. For the 5870, that was power consumption. For the GF100, that was GPGPU. It seems to me that the GF100 throttles the 5870 based on that criterion, much as the 5870 throttles the GF100 in power consumption terms. Saying that nV's engineers failed in any way really doesn't make a lot of sense to me, since it seems like they delivered what they were aiming for, much as ATi's engineers did. The reality is that they weren't aiming for the same thing.

And I guess the sales reflect which criterion is more important for the current market.

So the design didn't fail, only the market evaluation.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
iSuppli is what most people quote, but their information is based on guesses: highly educated guesses, but still guesses. Unless a company does a full breakdown of components that lists per-item costs, which I'm not aware of anyone doing, iSuppli gathers as much info as they can and comes up with numbers they think are right (sometimes they are way off; they were claiming that a Sony BRD drive cost Sony $15 more than I could buy it for off of NewEgg, as an example).

That sounds right, but they don't have that info in the archives, either. Hmmm, I'm sure we'll see it again somewhere soon.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
Last I checked the GF100 was the fastest gaming GPU available on the market.

What people are talking about is what is secondary. For the 5870, that was power consumption. For the GF100, that was GPGPU. It seems to me that the GF100 throttles the 5870 based on that criterion, much as the 5870 throttles the GF100 in power consumption terms. Saying that nV's engineers failed in any way really doesn't make a lot of sense to me, since it seems like they delivered what they were aiming for, much as ATi's engineers did. The reality is that they weren't aiming for the same thing.

Yes, it's the fastest GPU. But consumers can't buy GPUs; they buy video cards.

The 5970 is the fastest video card. If two GF100 GPUs were on a single video card, that would be the fastest video card, but it can't be done; they are too hot and consume too much power.

So that lends some credence to BryanW's statement about AMD's superior engineering. Regardless of what a chip on its own can do, it needs to be put on a video card to be sold and used for its purpose. And the GF100 turned out in a state that made it unrealistic to put on a card in a pair.

AMD produced a chip with characteristics that allowed them to create the fastest video card on the market whereas nvidia created a chip that did not allow them to attain that goal.

I'm really looking forward to seeing the power and temperature numbers for AMD's coming 6-series single-GPU flagship card. The 6970 or 6870, whatever they call it. Considering it is going to be faster than a GTX 480, what is going to make the card extremely impressive is if it runs cooler and consumes less power while being faster than the 480.

Not that I don't think nVidia can deliver a card that's faster, cooler, and less power-hungry than the 480, but I don't think we have a chance of seeing a card like that from them for a good eight months, whereas AMD's is just around the corner.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
And I guess the sales reflect which criterion is more important for the current market.

How many 5xxx parts did ATi sell for $5,000? People constantly confuse what they want to see with what is good for a company. When this generation is over, we can look at how much total revenue was generated and then discuss who failed, if you want to look at it from that criterion. If I sell 10 cars for $5 million each and you sell 10 million cars for $5, we both hit the same sales numbers. Which car was better? I'd be willing to bet mine was ;) :p

The 5970 is the fastest video card.

And the fastest possible solution is 480 SLI. You can keep trying to spin it; the reality is that they had different goals, and both companies seemed to match the goals they shot for.

So that lends some credence to BryanW's statement about AMD's superior engineering.

Because they have the second-fastest graphics setup, because they have the second-fastest single GPU, or because they have the ~6th-fastest GPGPU setup? I'm not saying one group of engineers is better than the other; I'm pointing out that they had different design goals. I don't know why people have trouble wrapping their heads around that.

I'm really looking forward to seeing the power and temperature numbers for AMD's coming 6-series single-GPU flagship card.

I'm really interested in seeing some GPGPU numbers; perhaps the 6 series will unseat the 285 for the fastest GPGPU of 2009 ;)
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Because they have the second-fastest graphics setup, because they have the second-fastest single GPU, or because they have the ~6th-fastest GPGPU setup? I'm not saying one group of engineers is better than the other; I'm pointing out that they had different design goals. I don't know why people have trouble wrapping their heads around that.

And that really is the nut of it. Not every engineering team is handed the exact same objectives and resources as their competition, as you well know, but it is a concept that escapes even engineers in school.

I remember well my ignorance then (in college) versus what I learned about from being an engineer once I started working in industry.

It is never about being able to design the absolute best-performing widget; it is always about designing the best widget within a bevy of time/design-budget/production-cost/etc. constraints...and if those constraints aren't identical to the constraints placed on the engineers of a competing product, you really can't claim to divine "engineering superiority" based on end-user experience with the product.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
How many 5xxx parts did ATi sell for $5,000? People constantly confuse what they want to see with what is good for a company. When this generation is over, we can look at how much total revenue was generated and then discuss who failed, if you want to look at it from that criterion. If I sell 10 cars for $5 million each and you sell 10 million cars for $5, we both hit the same sales numbers. Which car was better? I'd be willing to bet mine was ;) :p



And the fastest possible solution is 480 SLI. You can keep trying to spin it; the reality is that they had different goals, and both companies seemed to match the goals they shot for.



Because they have the second-fastest graphics setup, because they have the second-fastest single GPU, or because they have the ~6th-fastest GPGPU setup? I'm not saying one group of engineers is better than the other; I'm pointing out that they had different design goals. I don't know why people have trouble wrapping their heads around that.



I'm really interested in seeing some GPGPU numbers; perhaps the 6 series will unseat the 285 for the fastest GPGPU of 2009 ;)

Your post is rife with spin. AMD is not focused on GPGPU. Yes, it's relevant to this forum, but if you want to compare the two companies' cards, then feel free to compare their workstation products on the GPGPU aspect and their consumer-based cards on the gaming aspect.

This is how they market the cards themselves, so it makes sense to compare them as such, no?

Two physical video cards is not a relevant comparison to one physical video card. For a lot of users out there, buying two video cards is a hurdle they won't jump over, but buying one is comfortable for them. Trying to claim NV's goal was to come out with the fastest multi-card solution for gaming is a real stretch, especially given their history of producing multi-GPU single cards and the complete vacuum of any such part in their current generation. Even the 460 looks to be too power-hungry and would break through the 300W power envelope in a dual-GPU single-card configuration.
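(A quick sanity check on that 300W point. The cap comes from the PCIe board power limits: 75W from the slot plus 75W from a 6-pin plus 150W from an 8-pin connector. Treat the TDPs below as the approximate board powers commonly cited at the time:)

```python
# Does a hypothetical dual-GPU card fit the 300 W PCIe board power cap?
# TDPs are approximate, commonly cited board powers of the era.

PCIE_BOARD_LIMIT_W = 75 + 75 + 150   # slot + 6-pin + 8-pin = 300 W

approx_tdp_w = {"GTX 480": 250, "GTX 460": 160, "HD 5870": 188}

for gpu, tdp in approx_tdp_w.items():
    dual = 2 * tdp
    verdict = "fits" if dual <= PCIE_BOARD_LIMIT_W else "exceeds"
    print(f"2x {gpu}: {dual} W -> {verdict} the 300 W envelope")

# Even 2x GTX 460 lands at ~320 W; the HD 5970 only got under the cap by
# using binned, downclocked Cypress chips.
```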

I couldn't care less who sells what or makes what profit. That is not relevant to me or anyone else who is interested in getting a good-performing, consumer-oriented card for gaming. This is what GTX 480s, 5970s and 5870s are for: gaming.

Fact is, it's going to be a huge win and a massive slam dunk if AMD's 6870 flagship comes in with lower power draw and heat output than a GTX 480 along with superior performance.

I don't work in this industry, but I would think, from the perspective of those engineers, that quality engineering goes beyond just performance and touches on things like heat output and power consumption, especially as those qualities will inhibit performance levels. They certainly have inhibited what the GF100 has been able to offer the consumer.

We can check back in on the power draw, heat output, and performance numbers of AMD's 6 series next month, and their flagship part in a couple of months.

As for NV's changes in their next generation, well, we don't know when we'll see those numbers; sometime mid to late next year most likely, as they're sort of behind AMD's release schedule currently. Again, it is not my field, so I don't know if AMD being ahead there counts as superiority or not :)
 
Last edited:

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
How many 5xxx parts did ATi sell for $5,000? People constantly confuse what they want to see with what is good for a company. When this generation is over, we can look at how much total revenue was generated and then discuss who failed, if you want to look at it from that criterion. If I sell 10 cars for $5 million each and you sell 10 million cars for $5, we both hit the same sales numbers. Which car was better? I'd be willing to bet mine was ;) :p

How many GeForces did NVIDIA sell for $5000?

If NVIDIA can sell them for $5000, why the hell are they selling them for $400 or $250?