NVidia made a good decision to jump to .13?

Boxical

Junior Member
Oct 25, 2002
1
0
0
I'm no genius, and I do realize that ATI has a better card out now. What I'm wondering about is this:
Even though Nvidia is experiencing delays as a result of deciding to use .13 micron for their new card, once they get it out they will have a much easier time making newer and faster cards (raising GPU core speeds) with the .13 micron process.
ATI, on the other hand, is going to hit a roadblock. Sure, by not switching to .13 they got a faster card out now, but aren't they going to run into problems when trying to reach even faster GPU core speeds because they are still on .15? Eventually they will be forced to switch, and then they will again be behind Nvidia.

Any thoughts? Rebuttals? Complaints?
 
Aug 27, 2002
10,043
2
0
ATI has it on their roadmap to start using a .13 micron fab in Q4 of next year, but it all depends on yields. Yes, you can clock smaller-die parts higher, but if you only get 10 out of 200 chips as good working components, you have to charge exorbitant prices. ATI did the right thing by using the .15 micron process on the 9700 Pro: if I remember right, they have something close to 90% yield, which is why the card started out as a low-$400 part instead of a high-$500 part and is already in the sub-$300 category less than six months later. BTW, my 9700 Pro rocks!
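To put rough numbers on that yield argument, here's a minimal sketch; the wafer cost is a made-up illustrative figure, while the die count and yields come from the post above:

```python
# Per-good-die cost at different yields. The wafer cost is hypothetical;
# the die count and yields echo the post above.
WAFER_COST = 5000.0   # assumed cost to process one wafer, in dollars
DIES_PER_WAFER = 200  # candidate dies per wafer, per the "10 out of 200" example

def cost_per_good_die(yield_fraction: float) -> float:
    """Spread the whole wafer's cost over only the dies that work."""
    return WAFER_COST / (DIES_PER_WAFER * yield_fraction)

print(cost_per_good_die(0.05))  # 10 good chips out of 200 -> $500.00 per chip
print(cost_per_good_die(0.90))  # ~90% yield -> about $27.78 per chip
```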
 

rbV5

Lifer
Dec 10, 2000
12,632
0
0
Any thoughts? Rebuttals? Complaints?

If the NV30 is delayed because of TSMC's 0.13-micron process yields, how could ATI even consider a 0.13 process right now? I hear even the R350 will be a 0.15 part initially, and that the RV350 will be ATI's first 0.13 part. They've already shown they can go 325MHz and higher on 0.15 with good yields; IMHO, memory bandwidth to feed a second TMU may make more sense in the short term, while TSMC perfects the 0.13 process just in time for the RV350. Who knows? Seems like things are going the way they have to go. Risky business to be in these days.

I just hope ATI does fall behind again, or someone else enters the fray at some point.
 

Swanny

Diamond Member
Mar 29, 2001
7,456
0
76
I just hope ATI does fall behind again, or someone else enters the fray at some point.

Why do you hope ATI falls behind? nVidia led for so long; I think it's about time that ATi has the upper hand.
 

rbV5

Lifer
Dec 10, 2000
12,632
0
0
Why do you hope ATI falls behind

I just don't want either to get too comfortable, is all. This game of one-upmanship is what is driving video technology to new heights, turning powerful, expensive cards into value leaders in a single product cycle. No doubt in my mind that the constant pressure from nVidia's driver team is what caused ATI to finally get serious about their products after they left the shelves. Much like ATI's superior hardware is causing nVidia to worry about playing second fiddle for a change. Cinematic video rendering will require upgrades in PCs beyond video cards, which in turn may help turn the high-tech sector around a bit. Putting more folks back to work will create more demand for products, both at home and, more importantly, at work as well. I've got a lot of friends suffering right now; they need work. I'll do my part with a major upgrade coming very soon :D Competition is good.
 

scottrico

Senior member
Jun 23, 2001
473
0
0
If you use a .13 micron process, what do you get? Higher clock speeds?

We don't need higher clock speeds.
We need a video card that makes games look better.
ATI, with their 9700 Pro, has increased image quality.
Read the reviews:
Games running on nVidia cards look like sh*t compared to the 9700.

Nvidia needs to focus on making the games we have right now look better, not just on higher frame rates.
Peace
 

sechs

Golden Member
Oct 6, 2002
1,190
47
101
Use of such "cutting-edge" technology as 0.13-micron manufacturing may or may not have been a good technology choice for Nvidia; that's yet to be seen.

What is clear, however, is that it is a bad sales choice right now. The delays are eroding Nvidia's market leadership and are not pushing sales. Perhaps it will pay off in the end, but right now it's bringing them diddly-squat.
 

Rand

Lifer
Oct 11, 1999
11,071
1
81
I don't believe the NV30's delay is wholly, or even halfway, due to TSMC's .13u process.
nVidia initially taped out the NV30 months later than their original schedule called for; if it took them that long just to tape out the NV30, then it was bound to be late regardless of what fabrication process it was designed for.
Also, TSMC has been fabbing VIA's C3 processor on a .13u process for nearly a year now.


Certainly their .13u process has had a great deal of problems, and yields are still very low. From everything I've heard, leakage current is also rather high on TSMC's .13u process, which will serve to minimize much of the potential thermal benefit.
So while the .13u process certainly hasn't done nVidia any favors, it most definitely is not the primary factor in the late intro of the NV30.

Even had they been designing for a mature .18u process, where they knew full well any adverse effects they might have to design around and where unanticipated fabrication issues would be unlikely to pop up, they would still likely have had to re-spin the new design a few times to get it working well, and in a best-case scenario they'd see first silicon a good 100 days after initial tape-out.
Such a timeline would have put the NV30 at very late October/early November at the earliest.
As it stands, it doesn't look like it'll be too much later than that on the .13u process.


It remains to be seen whether it was a good decision on nVidia's part to go with a .13u process, but IMHO it is already clearly an excellent choice on ATi's part to stick with a mature .15u process.
It's helped to enable them to launch the R300 months ahead of nVidia's next generation, and even to have mainstream R300 parts out prior to nVidia's high-end NV30.
For the first time they've solidly taken the performance crown from nVidia and shown the entire industry that there can be no doubt they are a formidable force to be reckoned with, fully capable of leading the industry.
Such a showing won't soon be forgotten, regardless of whether nVidia retakes the performance crown in the near future.
They've also ensured that they will be the only manufacturer to have DX9-compliant graphics cards widely available from the $150 price range on up in time for the very important holiday sales season.

In addition, it's clear the R300 is quite capable of scaling to 375MHz or so without too many issues, and perhaps even to 400MHz+, despite the .15u process.
This, along with the fact that they're on a fast pace to introduce DDR2 on graphics cards, shows they're quite capable of launching a faster R300 if necessary.

IMHO, ATi's decision to stick with .15u has proven to be a tremendous success, one that has benefited them greatly and will continue to do so.

 

TourGuide

Golden Member
Aug 19, 2000
1,680
0
76
I try to buy the best technology available at the time I am ready to buy. I have to say, though, that I'm going to stick with nVidia for now, simply because since I have been using their cards I have had far fewer problems with drivers than during my endless driver hassles with ATI.

As for the DX9 issue, THAT is nothing but a big sticky load, and anyone bringing it up knows it. The software that will use DX9 won't arrive on the scene in volume for another one or two product cycles yet. So for the time being, DX9 is nothing but a marketing ploy. DX9 is meaningless until we have games that use it.
 

Mingon

Diamond Member
Apr 2, 2000
3,012
0
0
Nvidia needs to focus on making the games we have right now look better, not just on higher frame rates.

The reason nVidia is going to 0.13 is that it allows them to have the best of both worlds, performance and IQ; otherwise you're just left with a Matrox :)

Games running on nVidia cards look like sh*t compared to the 9700
Nice troll comment.
 

Mingon

Diamond Member
Apr 2, 2000
3,012
0
0
Games running on nVidia cards look like sh*t compared to the 9700
I don't get it.

Troll or flamebait, take your pick. Games on the Ti4600 do not look like sh*t compared to the 9700; the only visible difference between them is AF and AA. Strangely enough, the higher the frame rates, the better the eye candy you can have running. Perhaps that's a difficult concept for you.
 

scottrico

Senior member
Jun 23, 2001
473
0
0
Mingon
the only visible difference between them is AF and AA,

Thanks for proving my point.

That is a huge difference.

BTW
What does a troll have to do with flaming?
Yes, it is a hard concept for me.

lol
 

gregor7777

Platinum Member
Nov 16, 2001
2,758
0
71
Originally posted by: TourGuide
I try to buy the best technology available at the time I am ready to buy. I have to say, though, that I'm going to stick with nVidia for now, simply because since I have been using their cards I have had far fewer problems with drivers than during my endless driver hassles with ATI.

As for the DX9 issue, THAT is nothing but a big sticky load, and anyone bringing it up knows it. The software that will use DX9 won't arrive on the scene in volume for another one or two product cycles yet. So for the time being, DX9 is nothing but a marketing ploy. DX9 is meaningless until we have games that use it.

Yes, but for people like me who only buy cards yearly, and for some people even less frequently, compliance with the latest standards is of the utmost importance.

I don't want to spend my hard-earned money on something that will be technically obsolete in a year.

 

JSSheridan

Golden Member
Sep 20, 2002
1,382
0
0
Originally posted by: scottrico
If you use a .13 micron process, what do you get? Higher clock speeds?

The .13u process results in faster, smaller, cooler cores, in theory. If a designer has a limit on the area of a core, then a smaller process lets them fit more into that limited area.

There is one thing about GPU architecture that I don't understand, though. On the .18u CPUs (Athlon or Pentium), they were able to get to around 1.4 GHz before they needed to migrate to the .13u process. For GPU cores, it seems like they won't get above 500 MHz before the manufacturers go to the .13u process. What are the architectural differences between a CPU and a GPU that allow a CPU to run faster but limit the GPU's speed?

Of course, the reason to switch to .13u may not be speed, but rather keeping thermal dissipation low and die sizes small. Thanks. Peace.
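As a back-of-the-envelope illustration of the "cooler" part, dynamic power roughly follows P ~ C x V^2 x f; all of the numbers below are hypothetical, just to show how a shrink can raise the clock and still cut power:

```python
# First-order dynamic power: P ~ C * V^2 * f. All values are hypothetical.
def dynamic_power(cap_rel: float, volts: float, mhz: float) -> float:
    """Relative dynamic power of a core, in arbitrary units."""
    return cap_rel * volts**2 * mhz

old = dynamic_power(1.00, 1.8, 300)  # notional .18u part
new = dynamic_power(0.75, 1.5, 400)  # notional .13u part: less capacitance,
                                     # lower voltage, higher clock
print(new / old)  # ~0.69 -> faster *and* cooler, at least on paper
```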
 

Dulanic

Diamond Member
Oct 27, 2000
9,965
589
136
Originally posted by: JSSheridan
Originally posted by: scottrico
If you use a .13 micron process, what do you get? Higher clock speeds?

The .13u process results in faster, smaller, cooler cores, in theory. If a designer has a limit on the area of a core, then a smaller process lets them fit more into that limited area.

There is one thing about GPU architecture that I don't understand, though. On the .18u CPUs (Athlon or Pentium), they were able to get to around 1.4 GHz before they needed to migrate to the .13u process. For GPU cores, it seems like they won't get above 500 MHz before the manufacturers go to the .13u process. What are the architectural differences between a CPU and a GPU that allow a CPU to run faster but limit the GPU's speed?

Of course, the reason to switch to .13u may not be speed, but rather keeping thermal dissipation low and die sizes small. Thanks. Peace.


It all depends on the architecture of the core. It's just like how, clock for clock, the Athlon beats the P4, but the P4 can reach higher speeds. Plus, look at the fact that the 9700 has what, over 110 million transistors, I believe? Whereas an Athlon has 37.5 million, and I believe the P4 has between 40 and 50 million. Also, a CPU does generic work; it has a relatively small pipeline. A GPU does a lot of different types of dedicated work, with different units in the GPU for each type, so a GPU is really a lot more advanced than a CPU when you look at it. A GPU like the 9700 has 8 rendering pipelines and 4 vertex shader pipelines, plus a ton of other units to do other work. And 1/3 to 1/2 of the transistors on the CPUs is just cache, so really you're looking at a good 5x the transistors in the 9700 compared to, say, the Athlon, not counting its cache.
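A quick sanity check on that "good 5x" figure, using the transistor counts quoted above and splitting the poster's 1/3-to-1/2 cache estimate down the middle:

```python
# Rough logic-transistor comparison; counts are as quoted in the post.
R300_TRANSISTORS = 110e6     # Radeon 9700, as cited above
ATHLON_TRANSISTORS = 37.5e6  # Athlon, as cited above
CACHE_FRACTION = 0.4         # assumed: ~40% of the CPU is cache (post says 1/3 to 1/2)

athlon_logic = ATHLON_TRANSISTORS * (1 - CACHE_FRACTION)  # ~22.5 million
print(R300_TRANSISTORS / athlon_logic)  # ~4.9 -> right around "a good 5x"
```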
 

The_Lurker

Golden Member
Feb 20, 2000
1,366
0
0
With the smaller process, you don't just get faster speeds. Look at it this way: say you've got a chip the size of a TNT2 (I mean, those things are huge) with 10 million transistors on a .18 micron fab. Now take a .13 micron fab to the same size of chip, and suddenly you can cram a LOT more transistors into it, meaning more units to do graphics processing. Meaning what? Better image quality with no loss of speed :)
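Under the idealized assumption that transistor area scales with the square of the feature size (real layouts never shrink quite that cleanly), the gain in that example works out like this:

```python
# Idealized density gain from a process shrink (area ~ feature size squared).
def density_gain(old_um: float, new_um: float) -> float:
    """How many times more transistors fit in the same die area."""
    return (old_um / new_um) ** 2

print(density_gain(0.18, 0.13))         # ~1.92x the transistors at the same die size
print(10e6 * density_gain(0.18, 0.13))  # the 10M-transistor example grows to ~19.2M
```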