Nvidia 40nm update.

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: SlowSpyder
We obviously don't have hard numbers. But I think the fact that AMD chose to build the 4770 with a 128-bit memory configuration using GDDR5 to hit their bandwidth goal, rather than a 256-bit configuration with GDDR3, is somewhat telling.
And nvidia chose the exact opposite. Why do you assume that AMD's cost distribution choice was more correct while nvidia's was less so? Reading between the lines, I would say the two options must have been REALLY close in price, and it was anyone's guess how it would have gone if each company had chosen the opposite.

If I remember correctly, AnandTech suggested that nvidia chose the cheaper route for the short term (bigger die with cheaper RAM) at the time the choice was made, but that AMD's solution was expected to become cheaper in the long term (by which point nvidia PLANNED to have made modifications, but whether they stick to the plan and whether it all pans out is anyone's guess).

The bottom line is that we simply do not know, and for some reason you guys assume that AMD chose the cheap method of doing things, while nvidia chose the expensive method of doing things.
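
For anyone following the bandwidth arithmetic behind both designs, here is a minimal sketch: peak bandwidth is just bus width times effective data rate, so a narrow bus of fast memory can match a wide bus of slower memory. The transfer rates below are illustrative assumptions, not any card's actual specs.

```python
# Peak memory bandwidth (GB/s) = bus width in bytes * effective data rate (GT/s).
# The transfer rates below are illustrative assumptions, not actual card specs.

def bandwidth_gb_s(bus_width_bits: int, effective_gt_s: float) -> float:
    """Peak theoretical bandwidth in GB/s."""
    return (bus_width_bits / 8) * effective_gt_s

# Narrow bus, fast memory: 128-bit GDDR5 at an assumed 3.2 GT/s effective.
narrow_fast = bandwidth_gb_s(128, 3.2)   # 51.2 GB/s
# Wide bus, slower memory: 256-bit GDDR3 at an assumed 1.6 GT/s effective.
wide_slow = bandwidth_gb_s(256, 1.6)     # 51.2 GB/s

print(f"128-bit GDDR5 @ 3.2 GT/s: {narrow_fast:.1f} GB/s")
print(f"256-bit GDDR3 @ 1.6 GT/s: {wide_slow:.1f} GB/s")
```

Same bandwidth target, two different cost structures: narrow-and-fast (pricier memory, smaller memory controller, simpler PCB) versus wide-and-slow (cheaper memory, bigger die, more board traces), which is exactly the trade-off being argued over here.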
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Originally posted by: taltamir
Originally posted by: SlowSpyder
We obviously don't have hard numbers. But I think the fact that AMD chose to build the 4770 with a 128-bit memory configuration using GDDR5 to hit their bandwidth goal, rather than a 256-bit configuration with GDDR3, is somewhat telling.
And nvidia chose the exact opposite. Why do you assume that AMD's cost distribution choice was more correct while nvidia's was less so? Reading between the lines, I would say the two options must have been REALLY close in price, and it was anyone's guess how it would have gone if each company had chosen the opposite.

If I remember correctly, AnandTech suggested that nvidia chose the cheaper route for the short term (bigger die with cheaper RAM) at the time the choice was made, but that AMD's solution was expected to become cheaper in the long term (by which point nvidia PLANNED to have made modifications, but whether they stick to the plan and whether it all pans out is anyone's guess).

The bottom line is that we simply do not know, and for some reason you guys assume that AMD chose the cheap method of doing things, while nvidia chose the expensive method of doing things.

You're right, we don't know for sure; we're speculating here. I never said that this is the absolute truth, but to me that's what the 'evidence' points towards. Nvidia has a recent history of taking the safer, more conservative route with new technologies... 65nm vs. 55nm. DDR3 vs. DDR5. My guess is that's why they chose the route they did; it's proven and reached the performance goals they wanted.

Remember, Nvidia was expecting no competition from AMD in this market segment, and was planning on pricing their cards much higher than they are currently selling for. They had no reason to care whether their cards cost more to manufacture than AMD's; they weren't planning on having competition above the 9800GTX level.

Also, how long is 'long term'? Maybe GDDR5 is cheaper now; rumors are that Nvidia's next cards will be using GDDR5 as well. I'm sure there is a reason they are going that route (assuming the rumors I read turn out to be true) vs. using GDDR3 or GDDR4 over a 448/512-bit connection.

 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
TSMC confirms 40-nm yield issues, gives predictions

Silicon foundry giant Taiwan Semiconductor Manufacturing Co. Ltd. (TSMC) posted better-than-expected results in the first quarter of 2009.

During a conference call with analysts, Rick Tsai, president and chief executive of TSMC (Hsinchu, Taiwan), acknowledged that the company had some "yield" issues with its new 40-nm process. He also provided some predictions for 2009.

Last year, TSMC rolled out its 40-nm process. In Q1 of 2009, the company's 40-nm process represented about 1 percent of its overall sales, which was better than expected. In Q2, TSMC expects 2 percent of its overall sales to come from the 40-nm arena.

When an analyst asked about yield problems with the company's 40-nm process, Tsai said: "There have been difficulties with the yields. 40-nm is a difficult technology to manufacture. We understand the root of the problem."

The TSMC CEO said the company has fixed, or is fixing, the problem, but he did not elaborate. He also said that TSMC has demonstrated a functional SRAM cell based on its upcoming 28-nm process, which includes high-k and metal gates for the gate stack.

The 28-nm process will also include a second gate-stack option, based on more conventional silicon dioxide. As previously reported, TSMC is expected to move into 28-nm production in the first part of 2010.

http://www.eetimes.com/news/se...l;?articleID=217201043

The 40nm stuff should surprise no one, but the 28nm comments really are aggressive to say the least.

Still stumbling around to get 40nm into HVM and they are boldly predicting 28nm will be ready in 12 months? Wouldn't that be awesome if it comes true?
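
For context on why an immature process punishes big dies in particular, a common back-of-the-envelope is the Poisson yield model, Y = e^(-A*D). The sketch below uses made-up numbers; the wafer cost, defect density, and die areas are illustrative assumptions, not TSMC figures.

```python
import math

def poisson_yield(die_area_cm2: float, defects_per_cm2: float) -> float:
    """Poisson yield model: fraction of dies with zero defects."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

def cost_per_good_die(wafer_cost: float, wafer_area_cm2: float,
                      die_area_cm2: float, defects_per_cm2: float) -> float:
    """Rough cost per working die, ignoring edge loss and test cost."""
    dies_per_wafer = wafer_area_cm2 / die_area_cm2  # idealized packing
    good_dies = dies_per_wafer * poisson_yield(die_area_cm2, defects_per_cm2)
    return wafer_cost / good_dies

# Illustrative assumptions: $5000 per 300 mm wafer (~706 cm^2), and a
# defect density of 0.5 defects/cm^2 on an immature process.
WAFER_COST, WAFER_AREA, D0 = 5000.0, 706.0, 0.5

small_die = cost_per_good_die(WAFER_COST, WAFER_AREA, 1.4, D0)  # ~140 mm^2 die
large_die = cost_per_good_die(WAFER_COST, WAFER_AREA, 4.7, D0)  # ~470 mm^2 die

print(f"~140 mm^2 die: ${small_die:.0f} per good die")   # ~$20
print(f"~470 mm^2 die: ${large_die:.0f} per good die")   # ~$349
```

The nonlinearity is the point: at a high defect density, a die roughly 3x larger doesn't cost 3x more per good chip, it can cost an order of magnitude more, which also feeds back into the die-size-versus-memory-cost debate above.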
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: Creig

I've looked, but was unable to find a specific instance mentioning pricing models of GDDR3 vs GDDR5. But I would be highly surprised if the price difference between the two was enough to overcome the GPU disparity in size and complexity between the GT200 and 4870. AFAIK, the GPU's cost in engineering, development and production makes it, by far, the most expensive component in a modern-day video card.


I'm pretty sure that GDDR5 over 128-bit is noticeably cheaper to manufacture than DDR3 on a 256-bit bus; otherwise, why switch to GDDR5 at all? Halving the memory bus makes for a cooler, less power-hungry card with fewer PCB layers, as it is easier to trace fewer memory bits on the card.

The 512-bit bus was a total failure for ATI (2900XT). That was the card that invented the 8-pin PCI-e connector!

There's no question in my mind that Nvidia will switch to GDDR5 in the next generation. They probably were worried about yield issues initially, so they stuck with the safe GDDR3 and GDDR4.
 

yacoub

Golden Member
May 24, 2005
1,991
14
81
Originally posted by: jiffylube1024
Originally posted by: Creig

I've looked, but was unable to find a specific instance mentioning pricing models of GDDR3 vs GDDR5. But I would be highly surprised if the price difference between the two was enough to overcome the GPU disparity in size and complexity between the GT200 and 4870. AFAIK, the GPU's cost in engineering, development and production makes it, by far, the most expensive component in a modern-day video card.


I'm pretty sure that GDDR5 over 128-bit is noticeably cheaper to manufacture than DDR3 on a 256-bit bus; otherwise, why switch to GDDR5 at all? Halving the memory bus makes for a cooler, less power-hungry card with fewer PCB layers, as it is easier to trace fewer memory bits on the card.

The 512-bit bus was a total failure for ATI (2900XT). That was the card that invented the 8-pin PCI-e connector!

But if they went back to a wider bus, it would allow them to run the card at lower clock speeds, which would require less voltage and generate less heat. I see it the other way around: this card with 256-bit DDR5 would probably come in cooler and with less power required, possibly even negating the need for the PCI-e connector altogether! That would be the perfect card to passively cool.
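
The reasoning here is the standard CMOS dynamic power relation, P ≈ C·V²·f: lower clocks often permit lower voltage, and voltage enters squared. A minimal sketch with made-up operating points; none of these numbers come from a real card.

```python
def dynamic_power_w(cap_farads: float, voltage_v: float, freq_hz: float) -> float:
    """CMOS dynamic switching power: P = C * V^2 * f (leakage ignored)."""
    return cap_farads * voltage_v ** 2 * freq_hz

# Hypothetical operating points with the same switched capacitance:
C = 1e-9  # 1 nF effective switched capacitance (made-up figure)
fast_narrow = dynamic_power_w(C, 1.5, 1.0e9)  # high clock, high voltage
slow_wide = dynamic_power_w(C, 1.2, 0.7e9)    # lower clock allows lower voltage

print(f"power ratio (slow/fast): {slow_wide / fast_narrow:.2f}")  # ~0.45
```

The counterweight, as noted above, is that a wider bus adds I/O pins, trace capacitance, and PCB layers of its own, so the savings aren't free.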
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
There's only so much bus speed that you need before it becomes a complete waste. The 2900XT would have performed just as well on a 256-bit bus, as their next part proved.

To me the most important stat is performance per watt. Performance per transistor is also relevant as it will influence cost.

The performance that cards like the 4830 and 4770 are achieving on a 128-bit bus is incredible IMO.
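
For the two metrics mentioned here, the math is plain normalization; a tiny sketch with placeholder numbers (the scores, wattages, and transistor counts below are hypothetical, not measurements):

```python
# Placeholder numbers only: (benchmark score, board power in W, transistors in billions).
cards = {
    "hypothetical_card_a": (100.0, 80.0, 0.8),
    "hypothetical_card_b": (130.0, 150.0, 1.4),
}

for name, (score, watts, xtors_bn) in cards.items():
    print(f"{name}: {score / watts:.2f} perf/W, "
          f"{score / xtors_bn:.1f} perf per billion transistors")
```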
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: jiffylube1024
Originally posted by: Creig

I've looked, but was unable to find a specific instance mentioning pricing models of GDDR3 vs GDDR5. But I would be highly surprised if the price difference between the two was enough to overcome the GPU disparity in size and complexity between the GT200 and 4870. AFAIK, the GPU's cost in engineering, development and production makes it, by far, the most expensive component in a modern-day video card.


I'm pretty sure that GDDR5 over 128-bit is noticeably cheaper to manufacture than DDR3 on a 256-bit bus; otherwise, why switch to GDDR5 at all? Halving the memory bus makes for a cooler, less power-hungry card with fewer PCB layers, as it is easier to trace fewer memory bits on the card.

The 512-bit bus was a total failure for ATI (2900XT). That was the card that invented the 8-pin PCI-e connector!

There's no question in my mind that Nvidia will switch to GDDR5 in the next generation. They probably were worried about yield issues initially, so they stuck with the safe GDDR3 and GDDR4.

I thought it was NV that had long ago adopted GDDR5, and it was ATI that stuck with the GDDR3?

At any rate it seems like a "six of one, half dozen of the other" situation. Obviously GDDR5 is no longer cost prohibitive to implement as ATI went with it for their low-cost HD 4770.

So maybe we'll see GDDR5 across the board for both NV and ATI going forward?
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Originally posted by: Idontcare
Originally posted by: jiffylube1024
Originally posted by: Creig

I've looked, but was unable to find a specific instance mentioning pricing models of GDDR3 vs GDDR5. But I would be highly surprised if the price difference between the two was enough to overcome the GPU disparity in size and complexity between the GT200 and 4870. AFAIK, the GPU's cost in engineering, development and production makes it, by far, the most expensive component in a modern-day video card.


I'm pretty sure that GDDR5 over 128-bit is noticeably cheaper to manufacture than DDR3 on a 256-bit bus; otherwise, why switch to GDDR5 at all? Halving the memory bus makes for a cooler, less power-hungry card with fewer PCB layers, as it is easier to trace fewer memory bits on the card.

The 512-bit bus was a total failure for ATI (2900XT). That was the card that invented the 8-pin PCI-e connector!

There's no question in my mind that Nvidia will switch to GDDR5 in the next generation. They probably were worried about yield issues initially, so they stuck with the safe GDDR3 and GDDR4.

I thought it was NV that had long ago adopted GDDR5, and it was ATI that stuck with the GDDR3?

At any rate it seems like a "six of one, half dozen of the other" situation. Obviously GDDR5 is no longer cost prohibitive to implement as ATI went with it for their low-cost HD 4770.

So maybe we'll see GDDR5 across the board for both NV and ATI going forward?

Right now AMD has three cards with DDR5: the 4770, the 4870, and the 4890. Nvidia has no cards that use DDR5 that I know of. Nvidia chose DDR3 over a 448- or 512-bit connection, depending on the card, from the GTX260 on up to the GTX285 and everything in between.

AMD also had some DDR4 versions of cards out there; you could get the 3870 with DDR3 on some models or DDR4 on others. I don't think it made much difference other than in synthetic benches like 3DMark, though.
 

MarcVenice

Moderator Emeritus
Apr 2, 2007
5,664
0
0
Originally posted by: Idontcare
Originally posted by: jiffylube1024
Originally posted by: Creig

I've looked, but was unable to find a specific instance mentioning pricing models of GDDR3 vs GDDR5. But I would be highly surprised if the price difference between the two was enough to overcome the GPU disparity in size and complexity between the GT200 and 4870. AFAIK, the GPU's cost in engineering, development and production makes it, by far, the most expensive component in a modern-day video card.


I'm pretty sure that GDDR5 over 128-bit is noticeably cheaper to manufacture than DDR3 on a 256-bit bus; otherwise, why switch to GDDR5 at all? Halving the memory bus makes for a cooler, less power-hungry card with fewer PCB layers, as it is easier to trace fewer memory bits on the card.

The 512-bit bus was a total failure for ATI (2900XT). That was the card that invented the 8-pin PCI-e connector!

There's no question in my mind that Nvidia will switch to GDDR5 in the next generation. They probably were worried about yield issues initially, so they stuck with the safe GDDR3 and GDDR4.

I thought it was NV that had long ago adopted GDDR5, and it was ATI that stuck with the GDDR3?

At any rate it seems like a "six of one, half dozen of the other" situation. Obviously GDDR5 is no longer cost prohibitive to implement as ATI went with it for their low-cost HD 4770.

So maybe we'll see GDDR5 across the board for both NV and ATI going forward?

I think ATI had a world first with GDDR5.

Also, from what I see in webshops in Holland, there are enough HD 4770s to go around. But from what I've heard, ATI is only supplying AIBs with limited quantities of RV740 chips.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: SlowSpyder
Originally posted by: Idontcare
I thought it was NV that had long ago adopted GDDR5, and it was ATI that stuck with the GDDR3?

At any rate it seems like a "six of one, half dozen of the other" situation. Obviously GDDR5 is no longer cost prohibitive to implement as ATI went with it for their low-cost HD 4770.

So maybe we'll see GDDR5 across the board for both NV and ATI going forward?

Right now AMD has three cards with DDR5: the 4770, the 4870, and the 4890. Nvidia has no cards that use DDR5 that I know of. Nvidia chose DDR3 over a 448- or 512-bit connection, depending on the card, from the GTX260 on up to the GTX285 and everything in between.

AMD also had some DDR4 versions of cards out there; you could get the 3870 with DDR3 on some models or DDR4 on others. I don't think it made much difference other than in synthetic benches like 3DMark, though.

Originally posted by: MarcVenice
I think ATI had a world first with GDDR5.

Also, from what I see in webshops in Holland, there are enough HD 4770s to go around. But from what I've heard, ATI is only supplying AIBs with limited quantities of RV740 chips.

Thanks guys for setting me straight, had all that backwards I guess.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Originally posted by: Idontcare

Thanks guys for setting me straight, had all that backwards I guess.

No problem, we'll allow even you to make a mistake every once in a while. ;) :beer:
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: SlowSpyder
Originally posted by: Idontcare
Originally posted by: jiffylube1024
Originally posted by: Creig

I've looked, but was unable to find a specific instance mentioning pricing models of GDDR3 vs GDDR5. But I would be highly surprised if the price difference between the two was enough to overcome the GPU disparity in size and complexity between the GT200 and 4870. AFAIK, the GPU's cost in engineering, development and production makes it, by far, the most expensive component in a modern-day video card.


I'm pretty sure that GDDR5 over 128-bit is noticeably cheaper to manufacture than DDR3 on a 256-bit bus; otherwise, why switch to GDDR5 at all? Halving the memory bus makes for a cooler, less power-hungry card with fewer PCB layers, as it is easier to trace fewer memory bits on the card.

The 512-bit bus was a total failure for ATI (2900XT). That was the card that invented the 8-pin PCI-e connector!

There's no question in my mind that Nvidia will switch to GDDR5 in the next generation. They probably were worried about yield issues initially, so they stuck with the safe GDDR3 and GDDR4.

I thought it was NV that had long ago adopted GDDR5, and it was ATI that stuck with the GDDR3?

At any rate it seems like a "six of one, half dozen of the other" situation. Obviously GDDR5 is no longer cost prohibitive to implement as ATI went with it for their low-cost HD 4770.

So maybe we'll see GDDR5 across the board for both NV and ATI going forward?

Right now AMD has three cards with DDR5: the 4770, the 4870, and the 4890. Nvidia has no cards that use DDR5 that I know of. Nvidia chose DDR3 over a 448- or 512-bit connection, depending on the card, from the GTX260 on up to the GTX285 and everything in between.

AMD also had some DDR4 versions of cards out there; you could get the 3870 with DDR3 on some models or DDR4 on others. I don't think it made much difference other than in synthetic benches like 3DMark, though.

It's GDDR... GDDR3 = DDR2-type connection, GDDR4 = highly overclocked DDR2-type connection, and GDDR5 = DDR3-type connection.
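
Taking taltamir's mapping at face value, it can be written down as data along with the commonly cited transfers-per-clock for each signaling scheme (GDDR3/GDDR4 are double data rate; GDDR5 moves four bits per pin per command clock via its separate data clocks). A sketch, not an authoritative spec:

```python
# taltamir's mapping, encoded as data, plus commonly cited effective
# transfers per command clock for each signaling scheme.
GDDR_FAMILY = {
    "GDDR3": {"based_on": "DDR2", "transfers_per_clock": 2},
    "GDDR4": {"based_on": "DDR2 (higher clocked)", "transfers_per_clock": 2},
    "GDDR5": {"based_on": "DDR3", "transfers_per_clock": 4},
}

def effective_rate_gt_s(kind: str, memory_clock_ghz: float) -> float:
    """Effective data rate in GT/s for a given base memory clock."""
    return memory_clock_ghz * GDDR_FAMILY[kind]["transfers_per_clock"]

# e.g. a 0.9 GHz GDDR5 base clock gives 3.6 GT/s effective
print(effective_rate_gt_s("GDDR5", 0.9))
```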