Nvidia's G80 has 32 pixel pipes


sandorski

No Lifer
Oct 10, 1999
70,824
6,372
126
Originally posted by: zephyrprime
Originally posted by: sandorski
The Inquirer Equation 2005/2006: X FutureCard = 32 Pipes!
Yeah, let's not forget that the 7900 was supposed to have 32 pipes too, according to the Inq. Time has passed and manufacturing has advanced, so it's no surprise that G80 will have 32 pipes. If it only had 24 pipes, I would have been surprised.

And various Radeons. They'll keep doing it until someone releases a 32 pipe card, then we won't hear the end of how they were right! :D

I hope Nvidia just skips 32, just to prove the Inquirer wrong again! :evil: :D
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Then again, NV shaders =/= ATI shaders. I mean, a GTX has 24 pixel shaders against an XTX with 48 pixel shaders, and the two cards are pretty much similar in performance (with a slight edge going to the XTX).

You can't assume a 7800GTX 512MB SLI setup has 48 pixel shaders/48 TMUs in total, because it's SLI. Two 7600GTs do not equal one 7900GT (assuming two 7600GTs add up to 24 pipes, 10 VS, 16 ROPs, 256-bit, etc., using your logic). However, a G7x card with 48 pipes/TMUs would KILL an XTX.

If ATI can schedule the shaders to do pixel, vertex, and geometry work fast enough, the R600 will be pretty fast. On the other hand, NV's architecture sounds simpler in design, but rumours of it being over half a billion transistors make me think twice.

Question - So with the R600, it can change the number of pixel, vertex, and geometry shaders during gameplay? But wouldn't this make the GPU do a LOT of work, which results in heat and lots of power usage?
 

nts

Senior member
Nov 10, 2005
279
0
0
Originally posted by: Cookie Monster
Question - So with the R600, it can change the number of pixel, vertex, and geometry shaders during gameplay? But wouldn't this make the GPU do a LOT of work, which results in heat and lots of power usage?

The scheduling is all handled by the hardware (drivers can influence it). As for heat, there was a rumor that the heatsink from the X1800/X1900 series is being redesigned.

ATI already has a working unified chip in the Xbox 360, so this will be a second-generation design.
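For anyone wondering what "handled by the hardware" looks like, here's a toy sketch of a unified-shader arbiter. The 64-unit figure is just the rumored R600 count from this thread, and the proportional-split logic is purely illustrative, not ATI's actual scheduler.

```python
# Toy model of a unified shader arbiter (illustrative only, not ATI's real logic).
# A fixed pool of units is shared each cycle among whatever work is queued, so a
# pixel-heavy scene and a vertex-heavy scene both keep the whole chip busy.

UNIFIED_UNITS = 64  # rumored R600 unified shader count mentioned in this thread

def allocate(queued):
    """Split the unit pool in proportion to the queued work per shader type."""
    total = sum(queued.values())
    if total == 0:
        return {kind: 0 for kind in queued}
    return {kind: round(UNIFIED_UNITS * amount / total)
            for kind, amount in queued.items()}

# A pixel-heavy frame vs. a vertex/geometry-heavy frame:
print(allocate({"pixel": 900, "vertex": 80, "geometry": 20}))    # most units on pixels
print(allocate({"pixel": 300, "vertex": 500, "geometry": 200}))  # same units shift to vertex work
```

At least in this toy picture, the units themselves aren't doing extra work when the mix changes; the arbiter just feeds them different thread types, so it's more a scheduling problem than a raw heat problem.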

 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
After they've been dead wrong about the 32 pipe r520 and the 32 pipe g71, I'd be more likely to believe them if they said the g80 will not have 32 pipes.
 

Polish3d

Diamond Member
Jul 6, 2005
5,500
0
0
You can't assume a 7800GTX 512MB SLI setup has 48 pixel shaders/48 TMUs in total, because it's SLI. Two 7600GTs do not equal one 7900GT (assuming two 7600GTs add up to 24 pipes, 10 VS, 16 ROPs, 256-bit, etc., using your logic). However, a G7x card with 48 pipes/TMUs would KILL an XTX.

How do you know? An SLI 7900GT setup gets beaten by an XTX routinely; granted, it's not a "unicard" with that many shaders and TMUs, but still, SLI has other advantages for speed.

Originally posted by: Cookie Monster
Then again, NV shaders =/= ATI shaders. I mean, a GTX has 24 pixel shaders against an XTX with 48 pixel shaders, and the two cards are pretty much similar in performance (with a slight edge going to the XTX).

How do you know? Your inference that NV shaders are superior to ATI shaders is without evidence in your post. One reason the two could be close in performance is that most games simply don't NEED as many pixel shaders as the XTX provides, so the deficit on the part of the 7900GTX isn't as apparent.

Also, take a look at what happens when a reviewer is responsible enough to use equal image quality driver settings. A 7900GTX overclocked to 700MHz still gets beaten pretty badly by the XTX in most games.

It's unfortunate that so many reviewers let Nvidia get away with lower default quality settings in their drivers. As I've said before, what if ATI suddenly started defaulting their drivers to "High Performance" and started beating the GX2? That would be no more legitimate than the current situation.


That said, everything else being equal, the one thing that will keep me with ATI is the signal quality sent to my monitor. Getting the XTX was like getting a new monitor because the picture is so much clearer... I don't know why Nvidia doesn't match this, but it's a big deal to me.
 

SolMiester

Diamond Member
Dec 19, 2004
5,330
17
76
Originally posted by: Cookie Monster
Then again, NV shaders =/= ATI shaders. I mean, a GTX has 24 pixel shaders against an XTX with 48 pixel shaders, and the two cards are pretty much similar in performance (with a slight edge going to the XTX).

You can't assume a 7800GTX 512MB SLI setup has 48 pixel shaders/48 TMUs in total, because it's SLI. Two 7600GTs do not equal one 7900GT (assuming two 7600GTs add up to 24 pipes, 10 VS, 16 ROPs, 256-bit, etc., using your logic). However, a G7x card with 48 pipes/TMUs would KILL an XTX.

If ATI can schedule the shaders to do pixel, vertex, and geometry work fast enough, the R600 will be pretty fast. On the other hand, NV's architecture sounds simpler in design, but rumours of it being over half a billion transistors make me think twice.

Question - So with the R600, it can change the number of pixel, vertex, and geometry shaders during gameplay? But wouldn't this make the GPU do a LOT of work, which results in heat and lots of power usage?


Good on ya Cookie, I was thinking that unified shaders would have an overhead anyway. What makes me think is that the GTX only trails the XTX by 5-15% with half the shaders, and the R580 chip is very big. With DX10 and unified architectures at least 18 months away, what is the point of going there now, with the GPU overhead, when down the road DX will do it for us via software? Or am I just waffling... LOL
 

Polish3d

Diamond Member
Jul 6, 2005
5,500
0
0
Again, according to the review where they are run at EQUAL image quality settings, a 7900GTX at 700MHz trails a stock X1900XTX often by as much as 25%, and if the XTX were overclocked like the 700MHz GTX, probably 30% or more.
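A rough back-of-envelope on that last bit, purely illustrative: the 70% clock-to-performance scaling factor below is just an assumption, but it shows how ~25% at stock clocks could stretch past 30%.

```python
# Back-of-envelope for the overclocked-XTX claim above (scaling factor is assumed).
gtx_stock, gtx_oc = 650, 700     # 7900GTX core clocks in MHz
xtx_lead_today = 1.25            # stock XTX ahead of the 700MHz GTX by ~25%
clock_to_perf = 0.7              # assume only ~70% of a core OC shows up as fps

xtx_gain = (gtx_oc / gtx_stock - 1) * clock_to_perf   # give the XTX the same ~7.7% bump
print(f"projected lead: {xtx_lead_today * (1 + xtx_gain):.2f}x")   # ~1.32x, i.e. 30%+
```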

According to the B3D interview, ATI's engineers believe that since future games will rely more and more on pixel shaders, the shader-heavy design was the right way to go.

They say that since the XTX will play all current and past games at great framerates, it's more important to design for future (a la Oblivion) performance than to try to hit 150fps in BF2 instead of 120fps.

From my perspective, I won't keep the XTX long enough for that to play out for me personally, but it's by far the best card for Oblivion, IMO beats the 7900GTX by a fair bit in every category except noise, and will probably serve me well until the R600, which will likely be my next card unless the G80 fixes Nvidia's image quality and is a real monster.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86

Polish3d

Diamond Member
Jul 6, 2005
5,500
0
0
Who cares about the 'FEAR performance boost'? There's a performance boost in the Cat 6.6 drivers too, and those aren't used in these benchies either.

Why would I use the Xbit Labs review? The point was equal driver quality settings, dude, which is what you see in the Legit Reviews article.

And look at AnandTech's own 7950GX2 benchies. 7900GT SLI wins in some situations; an X1900XT (not even an XTX or OC'd XTX) wins in others.

And again for crying out loud, at EQUAL IMAGE QUALITY SETTINGS INSTEAD OF NVIDIA'S DEFAULT LOWER QUALITY SETTINGS VS ATI'S DEFAULT HIGHER QUALITY SETTINGS AN XTX BEATS A 7900GTX @ 700MHZ OFTEN BY 25% OR MORE.


 

Conky

Lifer
May 9, 2001
10,709
0
0
Doesn't matter if 32 pipes make a difference or not.

It will be just like the old 3.0 shaders discussion. Some Nvidia fanboy will declare it a necessary feature even though 3.0 shaders were only implemented in the "King Kong" game at the time... man, maybe I coulda been spending hours on that game instead of COD2 and BF2, lol.

The big deal is if it will run faster... if not, I don't wanna hear about it.

In case you can't tell, my opinion is not based on any free hardware I ever received. I hate Nvidia for that alone... they gave it to that worthless tard who wasn't even a gamer! :roll:

:laugh:
 

coldpower27

Golden Member
Jul 18, 2004
1,676
0
76
Originally posted by: Frackal
Originally posted by: coldpower27
Originally posted by: Genx87
Originally posted by: TroubleM
From what I understood:
Ati R600: 64 shaders (vertex, pixel or geometry)
nVidia G80: 32 pixel shaders, 16 vertex shaders, 16 geometry shaders

If this is true, and ATI will be able to change "on the fly" how its shaders behave while NVidia won't, it seems logical that ATI might have an ace up its sleeve.

Of course, this is just speculation; there's a lot more to a video card than shaders.

Possible, but AFAIK DX9 games won't be able to take advantage of such a setup. Maybe ATI can do it in the driver?

I am curious what is/was holding back the X1900 series, as they supposedly have about twice the pixel resources but can only beat their competition by a few percentage points.

Games that are pixel-shader limited should see higher performance on ATI's arch.

It is interesting to note Nvidia doubling the vertex shader capability. My understanding is there are no games that are vertex limited, even with 8 vertex units. However, I have read somewhere that you need much more vertex shader power if you want to do physics-type calculations on a GPU.

The texture mapping units... Nvidia has more texturing power this generation. Lots of pixel shaders alone don't work, as you still need the texturing to back them up.


Depends on the game. If I'm reading the Beyond3D R580 technology interview right, ATI's engineers believe that until faster memory comes along there won't be a huge benefit from additional texturing units... Look at Oblivion, for instance: an XTX with 48 shaders running at 650MHz outperforms a 7800GTX 512 SLI setup with 48 shaders and 48 TMUs running at 550MHz.

Oblivion is not the best example; in this game the X800 XL is competitive with the 7800 GT at times, going by the reviews here on AnandTech. This game flat out likes ATI's ALU + mini-ALU design, and it was programmed predominantly on ATI hardware.

This is why the R580's performance is so strong here: it still has that design, albeit more advanced and with more units.

Yes, I agree, it depends on the game. Nvidia also has 2 full ALUs per pixel pipe, from what I remember.

But overall, except for a few cases, 3x the pixel shader units will not bring 3x the performance.
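One way to see why, a minimal Amdahl-style sketch with a made-up workload split:

```python
# Why 3x the pixel shader units rarely means 3x the frame rate (split below is made up).
# If only part of the frame time is pixel-shader limited, the rest (texturing, ROPs,
# bandwidth, CPU) caps the overall speedup -- plain Amdahl's law.

def overall_speedup(shader_fraction, shader_speedup):
    other = 1 - shader_fraction
    return 1 / (other + shader_fraction / shader_speedup)

print(overall_speedup(0.4, 3))   # ~1.36x if only 40% of the frame is shader-bound
print(overall_speedup(0.8, 3))   # ~2.14x even in a heavily shader-bound game
```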

The SLI argument is worthless, as SLI is not as efficient as a single GPU with the same number of pipelines when the megapixel fillrates are the same.
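To put rough numbers on the fillrate point: the unit counts and clocks below are the commonly quoted specs for these cards, and the SLI efficiency factor is just an assumption for illustration, not a measured figure.

```python
# Rough fillrate comparison for the SLI-vs-single-GPU point above.
# Unit counts/clocks are the commonly quoted specs; SLI efficiency is assumed.

def fillrates(rops, tmus, clock_mhz):
    """Return (pixel fillrate, texel fillrate) in Mpixels/s and Mtexels/s."""
    return rops * clock_mhz, tmus * clock_mhz

gtx_7900 = fillrates(rops=16, tmus=24, clock_mhz=650)   # 7900 GTX
xtx_1900 = fillrates(rops=16, tmus=16, clock_mhz=650)   # X1900 XTX
gt_7900  = fillrates(rops=16, tmus=24, clock_mhz=450)   # single 7900 GT

SLI_EFFICIENCY = 0.8   # assumed: two cards rarely deliver a full 2x in practice
gt_sli = tuple(2 * SLI_EFFICIENCY * f for f in gt_7900)

print("7900 GTX   :", gtx_7900)   # (10400, 15600)
print("X1900 XTX  :", xtx_1900)   # (10400, 10400)
print("7900 GT SLI:", gt_sli)     # ~(11520.0, 17280.0) on paper, before per-game SLI overhead
```

On paper the SLI pair looks ahead, which is why per-game scaling, not raw totals, decides these comparisons.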



 

R3MF

Senior member
Oct 19, 2004
656
0
0
It is a shame that nVidia hasn't gone for unified shaders.

I am a natural nVidian because of their superior Linux drivers, but I would also like to have unified shaders for general-purpose GPU processing.

My next box will have a 3.2GHz K8L or C2D in it, at a time when 3.6GHz will be the highest speed. If nVidia has a unified shader design by then I will buy it; if not, I hope ATI's Linux drivers will have improved by then.