Has the NV40 graphics processor been taped out already?

Shagga

Diamond Member
Nov 9, 1999
4,421
0
76
Xbit Labs

Unofficial sources report that Samsung supplied some 10 thousand of its recently announced extreme-speed GDDR2 memory chips to NVIDIA for samples of graphics cards based on the code-named NV40 GPU. This suggests that the Santa Clara, California-based graphics chip designer has taped out the NV40 and is now ready to make some graphics cards based on very early NV40 silicon.

We reported back on August 28, 2003, that Samsung had supplied its 1600MHz GDDR2 memory products to "leading graphics card manufacturers". Now GPU:RW brings some more precise facts on the matter: the DRAMs in question were supplied to NVIDIA Corporation for NV40 testing purposes. This leads me to assume that either the NV40 graphics processor has already been taped out, or NVIDIA is about to tape out its highly anticipated graphics chip with DirectX 9.1 support.

The peak theoretical bandwidth of 1600MHz memory on a 256-bit bus is a mind-blowing 51.2GB/s. If NVIDIA uses such powerful memory on its NV40-based graphics cards, they will leapfrog the performance of the current GeForce FX 5900 Ultra and RADEON 9800 PRO solutions in many cases. Unfortunately, the NV40 will not be available this year, but will arrive sometime in late Q1 or Q2 next year.
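For what it's worth, the 51.2GB/s figure above checks out with simple back-of-the-envelope arithmetic (the variable names below are purely illustrative):

```python
# Sanity check of the quoted 51.2 GB/s peak bandwidth figure.
# "1600MHz" GDDR2 refers to the effective data rate (transfers per second).
effective_rate_hz = 1600e6   # effective transfers per second
bus_width_bits = 256         # memory bus width
bytes_per_transfer = bus_width_bits / 8

bandwidth_gb_s = effective_rate_hz * bytes_per_transfer / 1e9
print(bandwidth_gb_s)  # 51.2
```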

Well, at least now we know what kind of memory NVIDIA is considering for the NV40 at this point. That does not mean the NV40 will definitely be equipped with 1600MHz GDDR2 memory, but even this piece of information is better than nothing, because we still know practically nothing about the NV40's competitor, ATI's code-named R420 VPU.

No NVIDIA or Samsung representatives commented on the news story.

[edit] - Title changed...
 

Alkali

Senior member
Aug 14, 2002
483
0
0
Q1 or Q2 of 2004?

What on earth are nVidia doing? They will lose so much money over the fiasco of the past few months until then...
 

GoodRevrnd

Diamond Member
Dec 27, 2001
6,801
581
126
So all that tells us is the memory will be uber fast. They could still have crappy shader routines (though unlikely).
 

GTaudiophile

Lifer
Oct 24, 2000
29,767
33
81
When I walk into a Best Buy or CompUSA, why is it that I always see at least a few Radeon 9600 PRO/9800 PRO boxes, but I NEVER see even a placeholder for GeForceFX 5600 Ultra/5900 Ultra boxes? IMO, nVidia is still having yield issues with these boards, and I somehow don't think they are ready to start pushing thousands of NV40 GPUs out the door.

Then again, if NV40 is right around the corner, then they may be restricting output of NV3X cards.

Edit: Whatever the case may be, ATi pwns this Holiday Season.
 

Luagsch

Golden Member
Apr 25, 2003
1,614
0
0
I read over at Elite Bastards that the NV40 will probably also go for FP32 and FP16 and not FP24... if that turns out to be true, it won't be exactly DX9 compliant again (DX9 requires a minimum of FP24; the R350 and R360 are FP24). If that turns out to be true, nVidia didn't learn anything from what happened and is sticking with the idea of "downsizing" shaders. That would seriously suck!!! :frown:

EDIT, just a thought: I'm quite sure the NV40 won't be DX9 compliant either. Think about it: while nVidia was starting to produce the NV3x, the engineers were already working on the NV40.

nVidia's problem was that they couldn't get Microsoft to change DX9 to a >FP16 thing (nVidia left DX9 development for some time and then came crawling back... sounds familiar). I guess nVidia became aware that their strategy of going for FP16/FP32 would lead to problems in DX9 (why else try to change the DX9 specs?). By the time the cards were long into development, nVidia had no choice but to go with FP16/FP32, and maybe thought their market-leader position would give them some advantages.

So the NV3x was entering production and the NV40 was probably already on the drawing board. Now the problem: the development time of a graphics chip is way longer than the time between two product cycles. So nVidia was already finalising the NV40 with the restrictions we know from the NV3x. I think only the NV45 (if it's a totally new design) has a chance to correct the wrong strategy; the NV40 will have the same basic problems as the NV3x. So nVidia's only hope is to make it so blazingly fast (>MHz) that using FP32 as a substitute for FP24 doesn't result in a catastrophic speed hit...

OK, wild theories, but what do you guys think about it? (Not about my English, about my thoughts :p )
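As a footnote on why the precision debate matters: here's a tiny illustration (using NumPy purely for demonstration; there is no standard IEEE FP24 type, so only FP16 and FP32 are shown) of how FP16 silently loses small increments that FP32 keeps:

```python
import numpy as np

# FP16 has a 10-bit mantissa: near 1.0 its spacing is 2**-10 (~0.00098),
# so adding 0.0001 to 1.0 rounds back down to 1.0. FP32 keeps the increment.
x32 = np.float32(1.0) + np.float32(1e-4)
x16 = np.float16(1.0) + np.float16(1e-4)

print(float(x32) > 1.0)   # True  -- FP32 preserves the small increment
print(float(x16) == 1.0)  # True  -- FP16 rounds it away
```

In a long shader with many dependent operations, small errors like this accumulate, which is roughly why DX9 set FP24 as the minimum for full-precision pixel shading.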
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Originally posted by: Alkali
Q1 or Q2 of 2004?

What on earth are nVidia doing? They will lose so much money over the fiasco of the past few months until then...

NV38 ring any bells?

The R380 and NV38 will be out between now and Q2 2004, thus they have a gap filler.
 

magomago

Lifer
Sep 28, 2002
10,973
14
76
Originally posted by: Alkali
Q1 or Q2 of 2004?

What on earth are nVidia doing? They will lose so much money over the fiasco of the past few months until then...

 

draggoon01

Senior member
May 9, 2001
858
0
0
Originally posted by: Luagsch
I read over at Elite Bastards that the NV40 will probably also go for FP32 and FP16 and not FP24... if that turns out to be true, it won't be exactly DX9 compliant again (DX9 requires a minimum of FP24; the R350 and R360 are FP24). If that turns out to be true, nVidia didn't learn anything from what happened and is sticking with the idea of "downsizing" shaders. That would seriously suck!!! :frown:

EDIT, just a thought: I'm quite sure the NV40 won't be DX9 compliant either. Think about it: while nVidia was starting to produce the NV3x, the engineers were already working on the NV40.

nVidia's problem was that they couldn't get Microsoft to change DX9 to a >FP16 thing (nVidia left DX9 development for some time and then came crawling back... sounds familiar). I guess nVidia became aware that their strategy of going for FP16/FP32 would lead to problems in DX9 (why else try to change the DX9 specs?). By the time the cards were long into development, nVidia had no choice but to go with FP16/FP32, and maybe thought their market-leader position would give them some advantages.

So the NV3x was entering production and the NV40 was probably already on the drawing board. Now the problem: the development time of a graphics chip is way longer than the time between two product cycles. So nVidia was already finalising the NV40 with the restrictions we know from the NV3x. I think only the NV45 (if it's a totally new design) has a chance to correct the wrong strategy; the NV40 will have the same basic problems as the NV3x. So nVidia's only hope is to make it so blazingly fast (>MHz) that using FP32 as a substitute for FP24 doesn't result in a catastrophic speed hit...

OK, wild theories, but what do you guys think about it? (Not about my English, about my thoughts :p )


Makes a lot of sense. And the sad thing would be if, by the time the NV45 can be fully DX9 compliant, DX9.1 or DX10 is released and calls for FP32. So it may be smarter for nVidia to just stick with FP16/32 and make it as good as possible, so that when the DX specs do change, they may have a lead over ATI.

But I don't get why nVidia didn't follow the specs. Surely they must have known about them at the same time as ATI or any other maker (unless MSFT was somehow trying to punish nVidia). And surely nVidia knew Valve was releasing Half-Life 2. It doesn't make sense. The way I see it, either nVidia was simply arrogant/lazy, or other companies were deliberately trying to screw over nVidia (not just being competitive).

At this point, though, I'm sticking with ATI, but I am curious as to what really happened. The common explanation is that nVidia mis-stepped by working on the Xbox, but that doesn't sit right with me as the full story. First, since it led to nForce, the Xbox was a big boost for nVidia. Second, ATI is working on two consoles and an Intel chipset. Maybe it's just bad management over at nVidia.
 

Alkali

Senior member
Aug 14, 2002
483
0
0
Originally posted by: draggoon01
Makes a lot of sense. And the sad thing would be if, by the time the NV45 can be fully DX9 compliant, DX9.1 or DX10 is released and calls for FP32. So it may be smarter for nVidia to just stick with FP16/32 and make it as good as possible, so that when the DX specs do change, they may have a lead over ATI.

But I don't get why nVidia didn't follow the specs. Surely they must have known about them at the same time as ATI or any other maker (unless MSFT was somehow trying to punish nVidia). And surely nVidia knew Valve was releasing Half-Life 2. It doesn't make sense. The way I see it, either nVidia was simply arrogant/lazy, or other companies were deliberately trying to screw over nVidia (not just being competitive).

At this point, though, I'm sticking with ATI, but I am curious as to what really happened. The common explanation is that nVidia mis-stepped by working on the Xbox, but that doesn't sit right with me as the full story. First, since it led to nForce, the Xbox was a big boost for nVidia. Second, ATI is working on two consoles and an Intel chipset. Maybe it's just bad management over at nVidia.

Yes, I was wondering the exact same thing. I mean, now we have explanations of why nVidia hardware is having such a hard time, but we still don't know why the cards were created with such specs.

Obviously they ARE DX9 compatible... albeit with too high a precision (32-bit), if we assume that mode was expected to be used most often. The architecture of the pixel pipelines can also be explained if you think about how nVidia's engineers understood the requirements laid down by Microsoft's DX department. Maybe there was a simple misunderstanding?

Whatever the reason, EVERYONE WOULD BE A LOT HAPPIER IF nVIDIA ACCEPTED THEY GOT IT WRONG, and actually said: "This is why we chose to do a), b), c), etc., this is what we were trying to achieve, and now we will go forward with this plan."
 

Luagsch

Golden Member
Apr 25, 2003
1,614
0
0
Originally posted by: Alkali
Whatever the reason, EVERYONE WOULD BE A LOT HAPPIER IF nVIDIA ACCEPTED THEY GOT IT WRONG, and actually said: "This is why we chose to do a), b), c), etc., this is what we were trying to achieve, and now we will go forward with this plan."
Yeah. Maybe that wouldn't change my opinion that my next card is an ATI, but for the sake of what remains of nVidia's dignity as a company, they had better do that. I'm sure the eyes of every enthusiast are focused on what nVidia does next (maybe not Nebor ;) :p BTW, we love you man :beer: ).
Originally posted by: draggoon01
...and the sad thing would be if, by the time the NV45 can be fully DX9 compliant, DX9.1 or DX10 is released and calls for FP32. So it may be smarter for nVidia to just stick with FP16/32 and make it as good as possible, so that when the DX specs do change, they may have a lead over ATI.
Good point. I hadn't thought about that :):beer: