NV40 will have full sixteen pipelines according to The Inquirer

dragonic

Senior member
May 2, 2003
Well, The Inquirer has posted that NV40 will have a full sixteen pipelines... if that's true, nvidia would easily take the performance crown away from ATI. But remember, The Inquirer is behind this, so it's probably totally wrong :p

Linky
 

RussianSensation

Elite Member
Sep 5, 2003
Usually they are right on confirmations and numbers. I find it to be a pretty good source of info, especially 2-3 months before each product launch, where they confirm the facts. Either way, full 16 pipelines as opposed to 8x2 sounds pretty good on paper (NOTE*??), but I think it's important to wait for real-world benchmarks.

NOTE*
If you have 16 pipelines with 1 texture per pipeline (16x1), or 8 pipelines with 2 textures per pass (8x2), shouldn't performance in theory be equal, assuming equal GPU speed? Hmmm... maybe someone can explain this to me.
 

Mingon

Diamond Member
Apr 2, 2000
Originally posted by: RussianSensation
Usually they are right on confirmations and numbers. I find it to be a pretty good source of info, especially 2-3 months before each product launch, where they confirm the facts. Either way, full 16 pipelines as opposed to 8x2 sounds pretty good on paper (NOTE*??), but I think it's important to wait for real-world benchmarks.

NOTE*
If you have 16 pipelines with 1 texture per pipeline (16x1), or 8 pipelines with 2 textures per pass (8x2), shouldn't performance in theory be equal, assuming equal GPU speed? Hmmm... maybe someone can explain this to me.

Only during multitexturing will they be equal (which is quite often).
 

rgreen83

Senior member
Feb 5, 2003
As I understand it, the full pipeline (at least for the current generation) is only used in special multitexturing situations. I could be wrong, but I believe in the case of z+color (which is quite often) the current gen only uses half its pipes. Someone confirm or correct this please.
 

Insomniak

Banned
Sep 11, 2003
Steps to resolving this controversy:

1) Wait.
2) See.



Although I will say if the Inq is right, then NV40 appears to be hauling some heavy firepower onto the battlefield...
 

modedepe

Diamond Member
May 11, 2003
I'm not sure. I thought nvidia was saying before that they didn't think it made sense to go with only 1 texture unit per pipeline.

If you have 16 pipelines with 1 texture per pipeline (16x1), or 8 pipelines with 2 textures per pass (8x2), shouldn't performance in theory be equal, assuming equal GPU speed? Hmmm... maybe someone can explain this to me.
Well, with an 8x1 setup vs an 8x2 setup, you'll have the same amount of pixel fillrate, but with the 8x2 you'll get double the texel fillrate. So 8x2 is only helping you in some situations.
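The pixel-vs-texel fillrate point can be sketched with a toy calculation (a back-of-the-envelope sketch: the 400MHz clock and the simple one-pixel-per-pipe-per-clock model are assumptions, real chips have many more caveats):

```python
def fillrates(pipes, tmus_per_pipe, clock_mhz):
    # Simplified model: one pixel per pipeline per clock,
    # one texel per texture unit per clock.
    pixel_fill = pipes * clock_mhz                  # Mpixels/s
    texel_fill = pipes * tmus_per_pipe * clock_mhz  # Mtexels/s
    return pixel_fill, texel_fill

clock = 400  # MHz, an arbitrary example clock

print(fillrates(8, 1, clock))   # 8x1  -> (3200, 3200)
print(fillrates(8, 2, clock))   # 8x2  -> (3200, 6400)  same pixel rate, double texel rate
print(fillrates(16, 1, clock))  # 16x1 -> (6400, 6400)  double both versus 8x1
```

So 16x1 matches 8x2 on texel throughput but doubles it on single-textured pixels, which is why the configurations only look "equal" under multitexturing.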
 

Shinei

Senior member
Nov 23, 2003
From my understanding of the way DX9 works, it effectively only uses "half" of the pipelines, rendering ATI's 8x1 design far superior to nVidia's 4x2. If DX9 took advantage of the second set of pipes, I would think the low performance we see in the FX line would be more even with R3x0; this jump to 16x1 (or even 16x2, if nVidia really wants to rub it in ATI's face) is an incredible boost to nVidia's hardware...
If this rumor is, in fact, true, we may be in for another year of nVidia. :)
 

kylebisme

Diamond Member
Mar 25, 2000
Originally posted by: Shinei
From my understanding of the way DX9 works, it effectively only uses "half" of the pipelines, rendering ATI's 8x1 design far superior to nVidia's 4x2.

That's not how it works at all. 4x2 means it only has half the pipelines of 8x1; the first number is the number of pipelines and the second is the number of texture units per pipeline. So a dual-texture pixel will require one run through an x2 pipeline, but take two passes in an x1 pipeline. This has been the same through DX ever since we had dual-texture pipelines; DX9 has done nothing to change this.
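The pass-count rule described above can be sketched in a couple of lines (a simplified model that ignores loopback and other per-chip tricks):

```python
import math

def passes_needed(texture_layers, tmus_per_pipe):
    # A pixel with N texture layers needs ceil(N / TMUs) trips
    # through the pipeline, in this simplified model.
    return math.ceil(texture_layers / tmus_per_pipe)

print(passes_needed(2, 2))  # dual-texture pixel, x2 pipe -> 1 pass
print(passes_needed(2, 1))  # dual-texture pixel, x1 pipe -> 2 passes
print(passes_needed(1, 2))  # single texture, x2 pipe -> 1 pass, second TMU idle
```

The last line is the interesting case: with only one texture layer, the second TMU of an x2 pipe contributes nothing.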


That said, it will be cool if this is true, as it means nvidia will quite possibly take the lead, at which point I will buy an nvidia card and I can stick that in the face of morons who call me an ati fanboy. :D
 

Killrose

Diamond Member
Oct 26, 1999
It's going to take a lot more than this rumor before I believe it, especially with nVidia swinging back to TSMC for silicon production after cozying up to IBM. This leak is probably not about NV40 at all, but a future product, IMHO. It's nothing more than a PR leak to counter word on the street that Ati's R420 has made it to A12 production silicon, no doubt.
 

GTaudiophile

Lifer
Oct 24, 2000
Wasn't NV30 supposed to be an R300 killer? Sounds like history is repeating itself.

If NV40 has 16 pipes, what kind of Dustbuster will it be?
 

Shinei

Senior member
Nov 23, 2003
Thanks for the information, Snow. I knew 4x2 was pipes-by-textures, but I thought that DX9's use of single textures crippled that second texture pipeset.

As for its cooling, I don't think pipelines affect the heat as much as a clockspeed bump. Given that the rumors put NV40 at a 600MHz core and 1200MHz RAM, I think it'll be a little warmer than a 5950; since I rarely hear anyone whine about 5900s making lots of noise, I doubt the NV40 cooling will be bad either. Of course, I could be wrong.
 

BoomAM

Diamond Member
Sep 25, 2001
Originally posted by: GTaudiophile
Wasn't NV30 supposed to be an R300 killer? Sounds like history is repeating itself.

If NV40 has 16 pipes, what kind of Dustbuster will it be?
Got to agree.
200+ million transistors will mean big heat. Lots of power needed. And reports of 500MHz+ clock speeds seem doubtful.
I can see ATI clocking the R420 as high as possible in light of this.
If nVidia has improved its PS/VS performance to equal ATI's, then it will almost definitely "win". But if the performance still lacks, then it'll be relegated to second place, as games will come to rely on these features more and more, and fillrate less and less.

Without sounding like a fanboy, I would like to see ATI "win" the next gen as well, with nVidia very close behind, so ATI can prove to the industry that the R300 wasn't a fluke, and so nVidia doesn't seem like it's losing its touch.

Whoever wins, it'll be interesting, and good for us as well.

 

videoclone

Golden Member
Jun 5, 2003
Nvidia didn't become as big as they are by sitting back and doing nothing. They have a very strong engineering team, but I would say they have an even stronger marketing team who will milk any advantage over their competitor for all it's worth. If the NV40 ends up with highly improved AA and AF plus extremely high PS 3.0 performance, then 2004 will be the year Nvidia has the crown, no matter what revision ATI does to its R420; Nvidia's revision to its NV40 would be more than enough to keep it on top over the year until the next GPU architecture comes out, just like ATI did with the R300.
 

kylebisme

Diamond Member
Mar 25, 2000
Originally posted by: Shinei
Thanks for the information, Snow. I knew 4x2 was pipes-by-textures, but I thought that DX9's use of single textures crippled that second texture pipeset.


Na, you can use double texturing, triple texturing and so on in DX9, just like in the previous versions. The issue comes when you don't use textures at all, but rather a shader (DX8 or DX9), in which case your multitexturing abilities are useless, as the color of the pixel is not determined by various texture layers at all. Nvidia's PR put a lot of effort into confusing the public about this issue and convincing people that microsoft had it out for them with DX9, but that is simply not the case.
 

RussianSensation

Elite Member
Sep 5, 2003
So then the games that now implement as many pixel and vertex shaders as possible for eye-candy will run faster on an 8x1 vs. 4x2 and 16x1 vs. 8x2 architecture, from what I have understood, since multitexturing is becoming the old way of making games look pretty (I am not sure, just assuming? Or is it because it's more time-consuming to program multiple textures?). In other words, aren't games in the future going to rely more and more on pixel and vertex shader performance as opposed to other things like multitexturing? I am just trying to figure out what would be more bulletproof long-term, since for DX9 games the 8x1 ATI format turned out to be significantly better than the 4x2 multitexturing of the FX series, as not a lot of game developers utilize multitexturing in games (I don't know the reason why they don't, though?)

.....
 

Genx87

Lifer
Apr 8, 2002
Got to agree.
200+ million transistors will mean big heat. Lots of power needed. And reports of 500MHz+ clock speeds seem doubtful.


You have to remember that the surface of the die will be larger than the ATI part's, so in essence it should cool a little bit easier :)

I can see ATI clocking the R420 as high as possible in light of this.

If this is true, it is their only chance at actually beating this thing, unless for some reason it is horribly borked.
Even at 600MHz, with the NV40 at 400MHz, the NV40 would have a third more fillrate than the R420. They better hope Nvidia doesn't somehow keep their clock in line with the R420, or this could be a huge swing back in Nvidia's favor.
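The "a third more fillrate" arithmetic checks out under the thread's assumptions (a rumored 16-pipe NV40 at 400MHz versus an assumed 8-pipe R420 at 600MHz; none of these numbers are confirmed):

```python
# Rumored/assumed specs from this thread, not confirmed figures.
nv40_fill = 16 * 400  # 16 pipes at 400MHz -> 6400 Mpixels/s
r420_fill = 8 * 600   # 8 pipes at 600MHz  -> 4800 Mpixels/s

print(nv40_fill, r420_fill)
print(nv40_fill / r420_fill)  # 1.333..., i.e. a third more
```

A 50% clock deficit is more than covered by doubling the pipe count, which is the whole point of the rumor.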


If nVidia has improved its PS/VS performance to equal ATI's, then it will almost definitely "win". But if the performance still lacks, then it'll be relegated to second place, as games will come to rely on these features more and more, and fillrate less and less.

Agreed.

Without sounding like a fanboy, I would like to see ATI "win" the next gen as well, with nVidia very close behind, so ATI can prove to the industry that the R300 wasn't a fluke, and so nVidia doesn't seem like it's losing its touch.

I would like it to swing back and forth personally.

As for multitexturing: my understanding is that future games will not make much use of it outside of the Doom3 engine games, and will instead use pixel shaders to the same effect.

One thing I found interesting over on Beyond3D is that a lot of people think the R420 will not be PS 3.0 compliant. I wonder if ben's prediction of sites ignoring this, unlike PS 2.0, will come true :)
 

SilverTrine

Senior member
May 27, 2003
Nvidia has already stated in their CC that they're not transitioning to native PCI-Express until fall. A 16-pipeline card on AGP, or an AGP-equivalent bridge, would be so bandwidth-crippled it wouldn't be worth a damn.
Sorry, not going to happen yet. I would assume Nvidia's long-term goals involve a 16-pipeline architecture, though.
 

rbV5

Lifer
Dec 10, 2000
Originally posted by: SilverTrine
Nvidia has already stated in their CC that they're not transitioning to native PCI-Express until fall. A 16-pipeline card on AGP, or an AGP-equivalent bridge, would be so bandwidth-crippled it wouldn't be worth a damn.
Sorry, not going to happen yet. I would assume Nvidia's long-term goals involve a 16-pipeline architecture, though.

How exactly does AGP bandwidth cripple pipeline performance?
 

Acanthus

Lifer
Aug 28, 2001
Originally posted by: rbV5
Originally posted by: SilverTrine
Nvidia has already stated in their CC that they're not transitioning to native PCI-Express until fall. A 16-pipeline card on AGP, or an AGP-equivalent bridge, would be so bandwidth-crippled it wouldn't be worth a damn.
Sorry, not going to happen yet. I would assume Nvidia's long-term goals involve a 16-pipeline architecture, though.

How exactly does AGP bandwidth cripple pipeline performance?

We don't even saturate 4X yet on 5950s and R9800XTs :p Why do we all of a sudden need more than 8x?
 

yhelothar

Lifer
Dec 11, 2002
Hmm, 16 pipelines would take a massive amount of transistors; probably not a good idea from an economic standpoint, because of the die size required to make such a chip.
However, it could be doable on a .11µ process, as originally anticipated.
 

BoomAM

Diamond Member
Sep 25, 2001
Originally posted by: rbV5
Originally posted by: SilverTrine
Nvidia has already stated in their CC that they're not transitioning to native PCI-Express until fall. A 16-pipeline card on AGP, or an AGP-equivalent bridge, would be so bandwidth-crippled it wouldn't be worth a damn.
Sorry, not going to happen yet. I would assume Nvidia's long-term goals involve a 16-pipeline architecture, though.

How exactly does AGP bandwidth cripple pipeline performance?
Exactly.
On a side note: have you lot seen the pictures of the PCI-E16 FX5200? The bridge chip has its own HS, quite a large one as well. It must be doing quite a bit of work if a bridge chip needs a HS.
On another note, when the nV40 hits PCI-E16, although it probably won't use the bandwidth, if the PCI-E16 isn't native and it still requires a bridge chip, then the nV40 could see some big performance drops in certain situations.
 

SilverTrine

Senior member
May 27, 2003
Originally posted by: rbV5
Originally posted by: SilverTrine
Nvidia has already stated in their CC that they're not transitioning to native PCI-Express until fall. A 16-pipeline card on AGP, or an AGP-equivalent bridge, would be so bandwidth-crippled it wouldn't be worth a damn.
Sorry, not going to happen yet. I would assume Nvidia's long-term goals involve a 16-pipeline architecture, though.

How exactly does AGP bandwidth cripple pipeline performance?

It doesn't cripple pipeline performance. Do you even know what bandwidth is?
 

galperi1

Senior member
Oct 18, 2001
On a side note: about 2 months after the 9700Pro came out, Nvidia started pimping their "8 pipeline" monster, aka the 4x2 wonder NV30 that they spat out.

Deja vu; I'm more than willing to bet this is an 8x1 card. This is just marketing speak, like the "8 pipeline" design they had for the NV3x series.

And I mean... for god's sake, they aren't even using low-k for the manufacturing of this beast. It's gonna run hot.
 

SilverTrine

Senior member
May 27, 2003
Originally posted by: Acanthus
Originally posted by: rbV5
Originally posted by: SilverTrine
Nvidia has already stated in their CC that they're not transitioning to native PCI-Express until fall. A 16-pipeline card on AGP, or an AGP-equivalent bridge, would be so bandwidth-crippled it wouldn't be worth a damn.
Sorry, not going to happen yet. I would assume Nvidia's long-term goals involve a 16-pipeline architecture, though.

How exactly does AGP bandwidth cripple pipeline performance?

We don't even saturate 4X yet on 5950s and R9800XTs :p Why do we all of a sudden need more than 8x?

If you claim to know about AGP, you WOULD know that it's flawed, and that 8x AGP is in no way 'twice' as fast as 4x AGP.

PCI-Express will provide significantly more bandwidth to GPUs than AGP, and more importantly it will be bi-directional. They're not making it for their health, you know.
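For scale, the raw spec numbers can be compared directly (these are peak theoretical figures; sustained throughput is lower, which is part of why 8x AGP never measures twice as fast as 4x in practice):

```python
# Peak theoretical bandwidths in MB/s.
agp_clock_mhz = 66.67  # AGP base clock
bus_bytes = 4          # 32-bit bus

agp_4x = agp_clock_mhz * bus_bytes * 4    # ~1067 MB/s, half-duplex shared channel
agp_8x = agp_clock_mhz * bus_bytes * 8    # ~2133 MB/s, half-duplex shared channel
pcie_x16 = 250 * 16                       # 4000 MB/s *per direction* (PCIe 1.0), full duplex

print(round(agp_4x), round(agp_8x), pcie_x16)
```

The per-direction figure is the key point: PCIe x16 nearly doubles AGP 8x on reads alone, and can sustain writes back to the host at the same time, which AGP's shared channel cannot.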
 

g3pro

Senior member
Jan 15, 2004
Originally posted by: SilverTrine
Originally posted by: Acanthus
Originally posted by: rbV5
Originally posted by: SilverTrine
Nvidia has already stated in their CC that they're not transitioning to native PCI-Express until fall. A 16-pipeline card on AGP, or an AGP-equivalent bridge, would be so bandwidth-crippled it wouldn't be worth a damn.
Sorry, not going to happen yet. I would assume Nvidia's long-term goals involve a 16-pipeline architecture, though.

How exactly does AGP bandwidth cripple pipeline performance?

We don't even saturate 4X yet on 5950s and R9800XTs :p Why do we all of a sudden need more than 8x?

If you claim to know about AGP, you WOULD know that it's flawed, and that 8x AGP is in no way 'twice' as fast as 4x AGP.

PCI-Express will provide significantly more bandwidth to GPUs than AGP, and more importantly it will be bi-directional. They're not making it for their health, you know.

Except that the bandwidth isn't going to be used.