What actually limited the FX performance?

sash1

Diamond Member
Jul 20, 2001
8,896
1
0
Why is it that the FX seems to be doing so badly? Shrinking the fabrication process to .13-micron should significantly increase speed, but why so much heat? It only has ~15 million more transistors than the R300, and even though a smaller die concentrates the heat, that can't possibly account for needing that big of a cooling system, can it?

What is it really that created such a massive amount of heat? Was it a problem nVidia encountered in moving to .13-micron that made the chip dissipate so much heat? Why does it require an additional power supply?

Seriously, where did nVidia go wrong in manufacturing this chip? I find it hard to believe that simply moving to a .13-micron core could make the card run so hot that it needs the massive cooling it ships with. Anyone have some info?

Thanks,

~Aunix
 

s2kpacifist

Member
Jan 21, 2003
108
0
0
0.13-micron technology is still in its infancy, so flaws in the new process are bound to occur. ATi has stuck with .15-micron, a much more mature fabrication process.

Personally, I think that rather than NVIDIA screwing up the design of the card at all, they took a gamble that's going to pay off in the long run: they got their first .13-micron card out, and they can learn a lot from its flaws. ATi still has that extra step to take after the R350, which might slow them down later, but for now, they're dominating most of the vid-card market.
 

sash1

Diamond Member
Jul 20, 2001
8,896
1
0
Originally posted by: s2kpacifist
Personally, I think that rather than NVIDIA screwing up the design of the card at all, they took a gamble that's going to pay off in the long run: they got their first .13-micron card out, and they can learn a lot from its flaws. ATi still has that extra step to take after the R350, which might slow them down later, but for now, they're dominating most of the vid-card market.
That's a very good point. I would like to see nVidia's next .13-micron card and ATi's first. It's interesting that nVidia would take such a large step even though ATi is still sporting their .15-micron chips, even on the R350. It'll be fun to see how ATi handles the jump to .13.

Thanks,

~Aunix
 

Sahakiel

Golden Member
Oct 19, 2001
1,746
0
86
If Ati can keep up the momentum on 0.15, who's to say they won't skip right over .13 and hit up Intel for help with .09?
 

imgod2u

Senior member
Sep 16, 2000
993
0
0
Well, considering that the FX is losing when you get to higher resolutions with AA and anisotropic filtering on, I would say the 128-bit memory bus is limiting it. I know it supposedly has 4:1 lossless color compression, but it may not work out as well as the engineers thought it would, and the 9700 Pro has a color compression algorithm as well.
 

SuperSix

Elite Member
Oct 9, 1999
9,872
2
0
Originally posted by: imgod2u
Well, considering that the FX is losing when you get to higher resolutions with AA and anisotropic filtering on, I would say the 128-bit memory bus is limiting it. I know it supposedly has 4:1 lossless color compression, but it may not work out as well as the engineers thought it would, and the 9700 Pro has a color compression algorithm as well.

Agreed. It simply can't move anywhere near as much data across the memory bus as the Radeon 9700.
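The bandwidth gap being described here checks out with quick arithmetic. A minimal sketch, assuming the commonly cited shipping specs (FX 5800 Ultra: 128-bit bus at 1000 MT/s effective; 9700 Pro: 256-bit bus at 620 MT/s effective):

```python
# Peak memory bandwidth = bus width (bytes) * effective transfer rate.
# The specs below are the commonly cited figures, not official measurements.
def peak_bandwidth_gbs(bus_width_bits, mega_transfers_per_sec):
    """Theoretical peak bandwidth in GB/s."""
    return bus_width_bits / 8 * mega_transfers_per_sec / 1000

fx5800_ultra = peak_bandwidth_gbs(128, 1000)    # 16.0 GB/s
radeon_9700pro = peak_bandwidth_gbs(256, 620)   # ~19.8 GB/s
print(fx5800_ultra, radeon_9700pro)
```

Even before compression, the 9700 Pro has roughly 24% more raw bandwidth to work with, which fits the pattern of the FX falling behind once AA and aniso start eating into the memory bus.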
 

sinisterDei

Senior member
Jun 18, 2001
324
26
91
Originally posted by: AunixP35
Originally posted by: s2kpacifist
Personally, I think that rather than NVIDIA screwing up the design of the card at all, they took a gamble that's going to pay off in the long run: they got their first .13-micron card out, and they can learn a lot from its flaws. ATi still has that extra step to take after the R350, which might slow them down later, but for now, they're dominating most of the vid-card market.
That's a very good point. I would like to see nVidia's next .13-micron card and ATi's first. It's interesting that nVidia would take such a large step even though ATi is still sporting their .15-micron chips, even on the R350. It'll be fun to see how ATi handles the jump to .13.

Thanks,

~Aunix

I don't know about that one. I mean, think about it: TSMC is producing chips for both ATi and nVidia, at the same plant even, I believe. nVidia is going through the growing pains of using a new production process with TSMC, and TSMC has been blamed for many of the delays in delivering NV30.

The question, and where I'm not sure about your logic, is whether ATi will go through the same thing. Just think: after producing NV30, NV31, NV35, etc., TSMC will have decent experience producing .13-micron chips. Then ATi comes to them wanting to switch from .15 to .13. TSMC doesn't have to relearn how to make a .13-micron chip; they already know, and they've already figured out how to improve yields and work the bugs out of the manufacturing process. I think ATi played the correct hand in waiting, letting someone else pay for TSMC's experimentation, and only then going to them for fabbing. I think nVidia pushed .13 too hard with the wrong fab, and it burned them because the fab wasn't ready, not because nVidia wasn't ready.
 
Jun 18, 2000
11,208
775
126
Originally posted by: imgod2u
Well, considering that the FX is losing when you get to higher resolutions with AA and anisotropic filtering on, I would say the 128-bit memory bus is limiting it. I know it supposedly has 4:1 lossless color compression, but it may not work out as well as the engineers thought it would, and the 9700 Pro has a color compression algorithm as well.
I believe AunixP35 was primarily referring to the die process.


I think people are expecting too much out of 130nm. A die shrink to 130nm from 150nm should only provide a ~25% decrease in die size for a given number of transistors and a core voltage decrease to 1.3-1.4V from 1.5-1.6V.

We know NV30 has about 15 million more transistors (~15%) than R300 and is clocked 175MHz (~54%) higher. This puts the specs about in line with expectations. The chip seems to run a bit hot, but that's more an issue with the design rather than the die process.
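The figures in this post can be sanity-checked in a couple of lines. A sketch; the 125M/110M transistor counts and 500/325MHz clocks are the approximate numbers being discussed in the thread, not official die data:

```python
# Die area scales with the square of the feature size, so a 150nm -> 130nm
# shrink gives roughly a 25% smaller die for the same transistor count.
area_ratio = (0.13 / 0.15) ** 2      # ~0.75, i.e. ~25% area reduction

# Approximate NV30 vs R300 deltas quoted in the thread.
transistor_gain = 125 / 110 - 1      # ~14% more transistors
clock_gain = (500 - 325) / 325       # 175MHz higher -> ~54%

print(f"{area_ratio:.2f} {transistor_gain:.0%} {clock_gain:.0%}")
```

The ~54% clock bump on a process that only buys ~25% less area is the part that makes the heat unsurprising.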
 

s2kpacifist

Member
Jan 21, 2003
108
0
0
I don't know about that one. I mean, think about it: TSMC is producing chips for both ATi and nVidia, at the same plant even, I believe. nVidia is going through the growing pains of using a new production process with TSMC, and TSMC has been blamed for many of the delays in delivering NV30.


Yeah, you're absolutely right. I didn't realize that EVERYONE goes to TSMC... but apparently they do :eek: Wow, ATi definitely played this one right. They're waiting for the .13 process to mature, then hopping onto the bandwagon with the R400. Those sneaky ATi people... :)
 

sash1

Diamond Member
Jul 20, 2001
8,896
1
0
Originally posted by: Fallen Kell
Heat?!? Just a guess :)
Well, duh! But where did that heat come from? Just changing the manufacturing process won't create that much extra heat.
We know NV30 has about 15 million more transistors (~15%) than R300 and is clocked 175MHz (~54%) higher. This puts the specs about in line with expectations. The chip seems to run a bit hot, but that's more an issue with the design rather than the die process.
So how did nVidia screw up on the design? nVidia has been in this business long enough; they should have been smart enough to correct problems like this rather than just leaving it as-is and putting a beastly vacuum cleaner on top of the chip. Which leads me to ask: is the problem you say is in the design even correctable?

Thanks,

~Aunix
 

everman

Lifer
Nov 5, 2002
11,288
1
0
Originally posted by: AunixP35
Originally posted by: s2kpacifist
Personally, I think that instead of NVIDIA screwing up at all in the design of the card, they took a gamble that's going to pay off in the long run: they got their first .13 micron card, and can learn much from the flaws. ATi still has that extra step to take after their 350 card which might slow them later on, but for now, they're dominating most of the vid-card market.
That's a very good point. I would like to see nVidia's next .13-micron card and ATi's first. It's interesting that nVidia would take such a large step even though ATi is still sporting their .15-micron chips even on teh R350. Be fun to see how ATi handles the jump to .13

Thanks,

~Aunix

That's basically my thinking. nVidia may be behind in terms of performance, but ATI must switch to .13 eventually, and that's where nVidia has the lead. That is, unless ATI is working on it more than we know, which is possible... But right now I'd say the next big battle will be over nVidia's next two cards.
 

sinisterDei

Senior member
Jun 18, 2001
324
26
91
Originally posted by: everman
That's basically my thinking. nVidia may be behind in terms of performance, but ATI must switch to .13 eventually, and that's where nVidia has the lead. That is, unless ATI is working on it more than we know, which is possible... But right now I'd say the next big battle will be over nVidia's next two cards.

I've seen reports on the net that they're already working on .13. And if you think about it, it makes sense. Of course they're working on .13-micron: .13-micron chips are cheaper to produce, so it's in their best interest to use that technology. They get faster cores and more dies per 200mm wafer thanks to the smaller die size. So there has to be a reason they waited, and they've said it before: .13 wasn't ready at TSMC, so they went with the more mature .15 process. Read that sentence again: they used .15 so they could wait for .13 to mature, i.e. they were waiting for the bugs of .13-micron to be worked out before adopting it. That also means the .13 process could mature without ATi's help, which implies someone else is doing all the experimenting, and TSMC will learn from them and pass that knowledge on to ATi. That someone is nVidia. I think ATi is going to have far fewer manufacturing process issues than nVidia has had, since the process will have a few months to mature before they start making chips on it.

My 2 cents.
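The "more dies per wafer" point above can be illustrated with the standard gross-die estimate. A sketch; the 200mm² and 150mm² die areas here are made-up illustrative values, not actual NV30/R300 figures:

```python
import math

# Classic gross-die-per-wafer approximation: wafer area / die area,
# minus an edge-loss term for the partial dies around the rim.
def gross_dies(wafer_diameter_mm, die_area_mm2):
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

# A 25% smaller die means meaningfully more candidates per 200mm wafer.
print(gross_dies(200, 200), gross_dies(200, 150))  # 125 173
```

More candidate dies per wafer is exactly why a shrink cuts cost per chip, assuming yields hold up, which is the part TSMC was still figuring out.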
 

Smilin

Diamond Member
Mar 4, 2002
7,357
0
0

Don't forget, guys: the NV30 taped out really, really late as well, so it's not entirely TSMC's fault.

The heat on that thing tells me the NV30 is running close to its maximum clock speed. I'm also pretty certain drivers are a huge factor right now. Some of the more CPU-bound tests aren't reflecting well on the NV30, so if I had to speculate, I'd say they could get another 20% speed across the board after the drivers have gone through a few revisions.

I'm quite happy I bought a 9700 Pro when it came out. I've been enjoying it for some time, and it's only fallen a little behind the new top-end GFFX. It's the first ATI card I've ever owned.

The bottom line for both companies is this: Whoever has the performance crown when Doom hits the streets is going to be in a really nice position.
 

sinisterDei

Senior member
Jun 18, 2001
324
26
91
Man, over the last few days the rumor mill has been spouting about nVidia canning the GFFX due to its disappointing performance, in order to focus on NV35.

Dunno if it's true, but it would make this whole argument moot.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Then there's the rumours that the 9900PRO will be a 0.13 micron part, showing ATi already moving to the next process by spring this year.
 

funduck

Member
Aug 29, 2002
176
0
0
The reason the chip is so hot is that it's running @ 500MHz. It was never supposed to run @ 500MHz; maybe 400, but more likely 350-375, and it was supposed to be released in 2002. nVidia, seeing that the thing was not going to be released on time, decided to attach a dustbuster and overclock the card to 500MHz. To keep it stable @ 500MHz they need some serious volts, which drives up the heat and also makes an external power connector necessary.
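This explanation is consistent with how dynamic CMOS power works: power scales roughly with capacitance × voltage² × frequency, so a late clock bump plus a voltage bump compounds quickly. A rough sketch with hypothetical numbers (the 1.3V/1.5V core voltages here are illustrative assumptions, not published specs):

```python
# Dynamic power P ~ C * V^2 * f. For a relative comparison the
# capacitance term cancels out, so only the ratios matter.
def relative_power(v_new, v_old, f_new, f_old):
    return (v_new / v_old) ** 2 * (f_new / f_old)

# Hypothetical: a part planned for 400MHz @ 1.3V pushed to 500MHz @ 1.5V
# would dissipate roughly two-thirds more heat.
print(relative_power(1.5, 1.3, 500, 400))  # ~1.66x
```

Note the clock bump alone only costs 25%; it's the voltage needed to keep that clock stable that blows the power budget, and with it the cooler and the external power connector.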
 

sinisterDei

Senior member
Jun 18, 2001
324
26
91
I'm hearing the GF FX5800 Ultra is getting canned (the part that ran at 500/1000) and they're only going to release the GF FX5800 (non-Ultra), which is a 400MHz/800MHz part. That will lower performance by 20% (theoretically) and remove the need for a dustbuster of a fan assembly. It also means it might lose to a 9700, though, which can't be good for their image.
 

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
102,402
8,574
126
Originally posted by: Smilin
Don't forget, guys: the NV30 taped out really, really late as well, so it's not entirely TSMC's fault.

The heat on that thing tells me the NV30 is running close to its maximum clock speed. I'm also pretty certain drivers are a huge factor right now. Some of the more CPU-bound tests aren't reflecting well on the NV30, so if I had to speculate, I'd say they could get another 20% speed across the board after the drivers have gone through a few revisions.

I'm quite happy I bought a 9700 Pro when it came out. I've been enjoying it for some time, and it's only fallen a little behind the new top-end GFFX. It's the first ATI card I've ever owned.

The bottom line for both companies is this: Whoever has the performance crown when Doom hits the streets is going to be in a really nice position.

nv30 taped out late because TSMC couldn't figure out .13-micron fabbing.

from anand's introduction in november of last year:
With that out of the way, we're finally able to tell you everything there is to know about GeForce FX. To tell the truth, we've been sitting on this information since March of this year and very little (if any) has changed in the specification. NVIDIA had the design and the features of the GPU ready very early this year indicating that it truly was manufacturing that held them back.
they got burned by relying on the latest fab tech. a guy at 3dfx predicted they would, and they have. they're now sitting on a stable of roughly year-old products whose biggest advantage seems to be that they're cheap, which is good for crappy OEM video.
 

craigcoen

Junior Member
Feb 9, 2003
1
0
0
If all the chips are made at the same plant, then why is nVidia having such a hard time when SiS has had a .13-micron vid card out for about 3 months or more now? Their card is the DFI Xabre 400, running on .13 micron, and it doesn't have a massive heatsink... /shrug

So everyone saying that nVidia is opening the door for ATI because the chip manufacturer is just starting out making .13-micron chips is wrong.
 

sinisterDei

Senior member
Jun 18, 2001
324
26
91
But check the transistor counts on the SiS Xabre vs. the FX: the FX is more complex by an order of magnitude. That affects the yields and profitability of a part.