SLI GT Reviews and questions


JAG87

Diamond Member
Originally posted by: Cookie Monster
Its memory bus ISN'T connected internally. I never said anywhere that it is. Nor did I state that memory adds up. Why are you putting words in my mouth?

It's connected by an SLI bridge. Why do you think the GX2 suffered the same SLI limitations? E.g. Vsync issues, not performing as well in titles where SLI doesn't work, etc.

Anyway, I've been discussing this with ChrisRay and others who have a deep knowledge of multi-GPU technology. Things I've come across:

- SLI is technically at its best in fillrate-bound situations, i.e. high resolutions and high AA, because it effectively "doubles" the fillrate thanks to the secondary GPU.
- How you described SLI only describes how SFR (Split Frame Rendering) works, which isn't as efficient as AFR. AFR is used most of the time, and here is what AFR is:
"Alternate Frame Rendering (AFR): One Graphics Processing Unit (GPU) computes all the odd video frames, the other renders the even frames. (i.e. time division)"

"Alternate Frame Rendering (AFR), the second rendering method. Here, each GPU renders entire frames in sequence - one GPU processes even frames, and the second processes odd frames, one after the other. When the secondary card finishes work on a frame (or part of a frame) the results are sent via the SLI bridge to the master GPU, which then outputs the completed frames. Ideally, this would result in the rendering time being cut in half, and thus performance from the video cards would double."

- This is why bandwidth/shader performance/fillrate effectively double (assuming the load is distributed evenly, and especially in fillrate-bound situations, i.e. high-resolution/AA environments), BUT not the frame buffer, because the output is sent out from the master GPU (the completed frame from the second GPU is sent to the first GPU via the SLI bridge).

Conclusion:
8800GT SLI is a MUCH better buy than a single 8800GTX because the benefits heavily outweigh the cons, especially since 8800GT SLI is priced around a single GTX.

ChrisRay's 8800GT SLi vs 8800GTX SLi preview

For the same price as a single 8800GTX you can get up to 60-70% more performance.

Lastly, the 7950GX2 IS SLI:

Link

You see, at the heart of the GX2 is NVIDIA's SLI technology. To put it simply, the GX2 is basically two 512MB GeForce 7900 cards stacked one on top of the other and linked together via a sort of expanded SLI bridge. There is no revolutionary GPU hiding under the twin coolers; it's essentially the same G71 with all the same features that we already have on the 7900 GTX and 7900 GT. And because its heart is SLI, it has all the same disadvantages that come with a regular dual-card SLI setup.


Now close this thread, because I think this just ended the whole debate.

;)

edit - And sorry for going a bit OT.


For the last time, the GX2 is NOT SLI.

The GX2 is composed of two PCBs with 512MB of RAM on each, and the card has 1024MB of addressable memory through two 256-bit buses SIMULTANEOUSLY. That is why the bandwidth and the memory are doubled on the GX2; otherwise the card would be marketed as 512MB, and Windows would only read 512MB.

For those reasons, the card has no trouble rendering high-resolution frames, since it has 2x 256-bit of bandwidth and a 1024MB frame buffer. The real limitation of that card is the raw GPU power, which isn't that great.

An 8800GT SLI setup, on the other hand, has fantastic GPU power, nearly GTX SLI power, but is severely limited by bandwidth and memory, which are only 256-bit and 512MB. That is because even though the rendering work is split between two GPUs, and each GPU has its own frame buffer, the final video signal is compounded by the primary card (the one your monitor is plugged into), and that happens in the frame buffer of THAT CARD ONLY.
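To make that flow concrete, here is a toy Python sketch of AFR where every finished frame ends up staged in the primary card's memory before scanout (illustrative only; the frame size, frame count and data layout are my own assumptions, not NVIDIA driver internals):

# Toy model of AFR: each GPU renders alternate whole frames, but every finished
# frame is staged in the primary card's memory before it can be scanned out.
FRAME_BYTES = 1920 * 1200 * 4  # one finished 32-bit frame at 1920x1200, about 8.8 MB

def render(gpu_id, frame_no):
    # pretend a GPU renders a complete frame on its own, as in AFR
    return {"gpu": gpu_id, "frame": frame_no, "bytes": FRAME_BYTES}

primary_buffer = []  # stands in for the primary card's frame buffer

for frame_no in range(8):
    gpu = frame_no % 2                    # even frames -> GPU 0, odd frames -> GPU 1
    frame = render(gpu, frame_no)
    if gpu == 1:
        frame["path"] = "SLI bridge"      # the secondary GPU's result crosses the bridge
    primary_buffer.append(frame)          # output always leaves through the primary card

print(len(primary_buffer), "frames staged in the primary card's buffer before scanout")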

That is why SLI has never been able to work with two monitors, Einstein. SLI only shows its teeth at high res, and with the power of G80, a 256-bit bus is not enough anymore. Face it, an 8800GT SLI setup is totally useless, because it fails at what SLI was designed for, namely high res.

I rest my case. Please take any further discussion to pm.
 

aka1nas

Diamond Member
The GX2 does not show up as a single 1024MB card. There's plenty of documentation of users having problems with this when the drivers weren't mature and only being able to use one GPU on the card.
 

JAG87

Diamond Member
Originally posted by: aka1nas
The GX2 does not show up as a single 1024MB card. There's plenty of documentation of users having problems with this when the drivers weren't mature and only being able to use one GPU on the card.

Yes it does; I had one, I should know. And it worked just fine with the 93.71 drivers.
 

taltamir

Lifer
JAG, you are being a troll; for god's sake tone it down... Not only is everything you say wrong, you are also downright insulting and rude to everyone else when saying it.
 

JAG87

Diamond Member
Originally posted by: taltamir
JAG, you are being a troll; for god's sake tone it down... Not only is everything you say wrong, you are also downright insulting and rude to everyone else when saying it.


I'm trying to teach you something; you're the troll, buddy. You're the person who came into the thread just to make a comment about me and contributed nothing to the thread itself. That's pretty much the definition of a troll.

Do you think I would waste my time writing these long posts if I were trolling? I would just limit myself to calling you an idiot and be done with it. Also, please show proof that what I am saying is wrong; don't just barge in here and say I'm wrong.

And this is directed at the general public: quit the name-calling. There is no need to attach name + adjective to get your point across.
 

themisfit610

Golden Member
Well - to me this argument boils down to a couple of things...

Given the price of the 8800GT and its apparently near-8800GTX performance, 8800GT SLI seems like a killer deal. BUT, there seem to be some facts to the contrary:

Wow, look at Call of Juarez 1920x1200 4xAA: from 8.4 fps (GT SLI) to 17.5 fps (single GTX). AM I DREAMING? Dirt 1600x1200 4xAA, from 4.5 fps to 24.5 fps. Whoa, kind of skewed for a card that's just as good as a GTX.

If these are hard numbers, then yes, in those instances 8800GT SLI is a very bad choice, as it gets beaten badly by a single GTX. I wonder about 1920 without AA?

Jag, as you say - it makes technical sense that the 8800GT (with its 256-bit memory bus) wouldn't be a good choice for SLI when using SFR mode, as the primary card has to use its frame buffer to reassemble the frame pieces. From my understanding, AFR mode wouldn't be as susceptible to this weakness.

I would be interested to know which SLI mode Call of Juarez and Dirt use, as SFR might explain the terrible performance. Other benchmarks that compare GT SLI to a single GTX seem to indicate that GT SLI owns the field.

All the AnandTech review benchmarks show improvement, especially in Oblivion, where the gain is MASSIVE: ~85 vs 50 fps for GT SLI vs a single GTX at 1920. The gain is much smaller for UT3 (almost no difference between GT SLI and GTX), but that game runs so fast it doesn't even matter :D I do care very much about 1920 (or at least 1680) performance, since I game on a 24", and my GTS320 is NOT cutting it for new games :)

http://www.anandtech.com/video/showdoc.aspx?i=3140&p=12

GT SLI does use ~50W more power than a single GTX. That seems reasonable to me.

I think it all really comes down to Crysis. Nobody knows how it will really perform until it's actually released - so let's all just calm the frack down :D

I'm definitely stepping up from my GTS320 -> GT, since it's free (thanks eVGA), if nothing else for the H.264 / VC1 acceleration. I might even get another GT for SLI, who knows? I'm waiting until the VERY end of my step-up window (early December) to snag a new GTS if at all possible :)

That video acceleration is very nice for those of us without cutting-edge CPUs. Let's not forget that the GTX does NOT have it :)

~MiSfit
 

JAG87

Diamond Member
Originally posted by: themisfit610

Jag, as you say - it makes technical sense that the 8800GT (with its 256-bit memory bus) wouldn't be a good choice for SLI when using SFR mode, as the primary card has to use its frame buffer to reassemble the frame pieces. From my understanding, AFR mode wouldn't be as susceptible to this weakness.


That video acceleration is very nice for those of us without cutting-edge CPUs. Let's not forget that the GTX does NOT have it :)

~MiSfit

That's true, but who is going to pair any of these cards with a CPU that is incapable of decoding H.264 and VC-1 at 1080p? I think what matters is that the video card has video post-processing, such as deinterlacing, noise reduction and so on. The acceleration is secondary to me.

About your SFR and AFR argument: even with AFR, the output needs to be compounded in the primary video card. It's not that one frame comes from card 1 and one frame comes from card 2; they are processed that way, but all the frames must be united into a single video stream before reaching the TMDS transmitter, and finally the DVI port.
 

Keysplayr

Elite Member
Originally posted by: JAG87

That's true, but who is going to pair any of these cards with a CPU that is incapable of decoding H.264 and VC-1 at 1080p? I think what matters is that the video card has video post-processing, such as deinterlacing, noise reduction and so on. The acceleration is secondary to me.

About your SFR and AFR argument: even with AFR, the output needs to be compounded in the primary video card. It's not that one frame comes from card 1 and one frame comes from card 2; they are processed that way, but all the frames must be united into a single video stream before reaching the TMDS transmitter, and finally the DVI port.

Compounded in what way? You're talking as if the GPU that is connected to the monitor has to re-render the frame that is sent over from GPU2 across the SLI bridge. Now I'm pretty sure that's not what you meant. You couldn't have. So please explain "compounded" to us. Then we will ask Chris Ray if you are correct. So do your homework.
 

JAG87

Diamond Member
Originally posted by: keysplayr2003

Compounded in what way? You're talking as if the GPU that is connected to the monitor has to re-render the frame that is sent over from GPU2 across the SLI bridge. Now I'm pretty sure that's not what you meant. You couldn't have. So please explain "compounded" to us. Then we will ask Chris Ray if you are correct. So do your homework.

Ask Chris Ray what the NVIO chip is for and see what he says.

The frame is not re-rendered. I used the word compounded because I couldn't think of a better word, since that is exactly what's happening. The two pieces, whether split frames or alternate frames, are compounded into one single frame stream which needs to be allocated in the buffer of the main card before it can be sent to the TMDS circuitry. That's where the 512MB and the 256-bit bus become a choke point, and where SLI ceases to be useful.
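To put some rough numbers on the traffic involved, here is a back-of-the-envelope Python sketch (the 60 fps target and the commonly quoted 8800 GT memory figures are assumptions on my part, and the model ignores AA, compression and the card's own render traffic):

# Rough estimate of the extra traffic the primary card's memory absorbs in AFR,
# just from receiving the second GPU's finished frames, versus its own peak bandwidth.
width, height, bytes_per_pixel = 1920, 1200, 4
fps = 60                                               # assumed target frame rate

frame_bytes = width * height * bytes_per_pixel         # one finished frame, about 8.8 MB
incoming_per_sec = (fps / 2) * frame_bytes             # half the frames arrive from GPU 2

bus_bits = 256                                         # 8800 GT memory bus width
effective_clock_hz = 1.8e9                             # ~1.8 GHz effective GDDR3
bandwidth_bytes = (bus_bits / 8) * effective_clock_hz  # ~57.6 GB/s peak

print(f"Finished frames arriving over the bridge: {incoming_per_sec / 1e6:.0f} MB/s")
print(f"8800 GT peak memory bandwidth: {bandwidth_bytes / 1e9:.1f} GB/s")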

In this regard, Crossfire was a better solution when the dongle was used, because each card outputted its share of frames, and the dongle had the duty of synchronizing the frame stream. Now that they are dongle-less, I am fairly sure it works just like SLI.

Hope that explains it, Keys.
 

Keysplayr

Elite Member
Originally posted by: JAG87
Ask Chris Ray what the NVIO chip is for and see what he says.

The frame is not re-rendered. I used the word compounded because I couldn't think of a better word, since that is exactly what's happening. The two pieces, whether split frames or alternate frames, are compounded into one single frame stream which needs to be allocated in the buffer of the main card before it can be sent to the TMDS circuitry. That's where the 512MB and the 256-bit bus become a choke point, and where SLI ceases to be useful.

In this regard, Crossfire was a better solution when the dongle was used, because each card outputted its share of frames, and the dongle had the duty of synchronizing the frame stream. Now that they are dongle-less, I am fairly sure it works just like SLI.

Hope that explains it, Keys.

Thanks, I will ask him about the NVIO chip. I don't know if this is relevant, but the NVIO chip is now integrated into the G92 core; it is external only on G80.
I understand that two 512MB cards in SLI load the exact same data into their frame buffers, essentially "wasting" 512MB of RAM, but isn't this only true with SFR, or in any instance where both cards are working on the same exact frame? In AFR, one card works on the 1st frame while the 2nd works on the 2nd frame. Two different frames. Two different sets of data, albeit with minute differences. For load-balancing purposes, I could see the 1st GPU having to "compound" a frame stream, assembling percentages of frames from each card and then sending it out. But AFR wouldn't need to assemble anything. Is it possible the frame from the 2nd card is output directly through the 1st card's ROPs?

I'll pose these very questions to Chris and see what he comes back with.

P.S.: About the 7950GX2: while that certainly wasn't "conventional" SLI, it certainly utilized SLI technology to get its job done. The connector joining the two cards served multiple purposes: SLI bridge, power, and PCIe interface.

 

nullpointerus

Golden Member
Originally posted by: keysplayr2003
I understand that two 512MB cards in SLI load the exact same data into their frame buffers, essentially "wasting" 512MB of RAM, but isn't this only true with SFR, or in any instance where both cards are working on the same exact frame? In AFR, one card works on the 1st frame while the 2nd works on the 2nd frame. Two different frames. Two different sets of data, albeit with minute differences. For load-balancing purposes, I could see the 1st GPU having to "compound" a frame stream, assembling percentages of frames from each card and then sending it out. But AFR wouldn't need to assemble anything. Is it possible the frame from the 2nd card is output directly through the 1st card's ROPs?

I'll pose these very questions to Chris and see what he comes back with.

P.S.: About the 7950GX2: while that certainly wasn't "conventional" SLI, it certainly utilized SLI technology to get its job done. The connector joining the two cards served multiple purposes: SLI bridge, power, and PCIe interface.

The way I understand the problem, it's an old technical limitation of video cards with unintended side effects for SLI AFR.

Ever since video cards were invented, output to the monitor's cable has been performed by processing memory allocated directly from the card's frame buffer (i.e. GPU memory). At 1920x1200 w/ 32-bit color, that is 1920 * 1200 * 4 = 9216000 bytes or about 8.8 MB. This is called the "primary surface" in Direct3D. When the time comes to send refresh data over the monitor cable, the TMDS transmitter (or whatever) processes the primary surface.

So, the first X MB of GPU memory (for the current display mode) is also used as the TMDS transmitter's buffer.
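To check that arithmetic for a few common display modes, here is a quick Python one-off (pure arithmetic; the mode list is just a sample):

# Primary surface size at 32-bit color for a few display modes (no AA).
modes = [(1280, 1024), (1600, 1200), (1920, 1200), (2560, 1600)]

for w, h in modes:
    surface_bytes = w * h * 4   # 4 bytes per pixel at 32-bit color
    print(f"{w}x{h}: {surface_bytes:,} bytes = {surface_bytes / 2**20:.1f} MB")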

I believe what Jag is saying is that in SLI AFR mode, the cards do work on multiple frames, but the second card's frames must ultimately be transferred to the primary card's memory before the primary card's TMDS transmitter can send them out, so in practice the maximum rendering speed of a given frame is limited by the bandwidth of a single card.

This would not be an issue if the cards came with a dedicated buffer for the TMDS transmitter and a direct link from the SLI bridge to the dedicated buffer, but then you have to make the dedicated buffer large enough (and fast enough) to accommodate the highest possible display mode supported by the TMDS transmitter, and the card and SLI bridge would have to be significantly more complicated.

Currently, adding SLI support to cards across the whole lineup is not exactly very complicated, but adding new hardware that only benefits SLI in AFR mode to every card would not be very cost-efficient. Most cards end up being used in single-card mode.

Maybe it would be worth rolling out for mid/high-end cards in something like SLI 2.0?

Finally, you have the fact that motherboard vendors are the ones providing the SLI bridge connectors even though the actual SLI technology being used is specific to the video cards, not the motherboards. Getting everyone on the same page would probably require packaging new SLI connectors with single cards...which goes back to cost-efficiency again.
 

JAG87

Diamond Member
Originally posted by: nullpointerus

I believe what Jag is saying is that in SLI AFR mode, the cards do work on multiple frames, but the second card's frames must ultimately be transferred to the primary card's memory before the primary card's TMDS transmitter can send them out, so in practice the maximum rendering speed of a given frame is limited by the bandwidth of a single card.


Now this is turning into an interesting conversation.

nullpointerus, you are absolutely right in your interpretation; that is exactly what is happening. The TMDS transmitter which generates the video signal needs to be fed from a buffer, and that buffer is the main card's memory. Storing 1920x1200 'finished' frames is not that bad, but add to that the fact that the primary card also has to store half of those frames and the GPU has to apply filters to them, potentially making them bigger in size (think FSAA), and 512MB of memory doesn't seem like enough anymore. The same applies to the 256-bit bus.
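As a very rough illustration of how fast that grows with resolution and FSAA, here is a simplified Python estimate (my own model: it just multiplies double-buffered color plus a depth/stencil buffer by the sample count, and it ignores compression, textures and driver overhead, so treat the numbers as ballpark only):

# Simplified render-target memory estimate: double-buffered 32-bit color plus a
# depth/stencil buffer, each scaled by the multisample count. Rough illustration only.
def render_target_mb(width, height, samples):
    color = width * height * 4 * samples   # 32-bit color, one value per sample
    depth = width * height * 4 * samples   # 24-bit depth + 8-bit stencil per sample
    return (2 * color + depth) / 2**20     # front + back color buffers, plus depth

for w, h in [(1600, 1200), (1920, 1200), (2560, 1600)]:
    for samples in (1, 4, 8):
        print(f"{w}x{h} at {samples}xAA: ~{render_target_mb(w, h, samples):.0f} MB")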



keysplayr, you are correct as well: the same information is loaded into both buffers, and then each card starts rendering either odd/even frames or split frames. I am not sure why you believe this only applies to SFR; even with AFR the same information is needed to construct those frames. About the GX2: although the card used SLI techniques for rendering the images and splitting the workload, the biggest strength of that card was that the memory buses were interconnected, resulting in a 1024MB buffer, but the bus was still 256-bit (not 2x 256-bit like I said before). I did some more research and it turns out that the connection was serial, not parallel. Check the first picture of this review to get a visual of what I am saying:

http://www.pcper.com/article.php?aid=256&type=expert

That is why the card never excelled at very high resolutions, and also why quad SLI never scaled well.

Cheers boys, the more information we dig out, the more we understand the technology and its weaknesses.

 

themisfit610

Golden Member
who is going to pair any of these cards with a CPU that is incapable of decoding H.264 and VC-1 at 1080p

I have an X2 3800+, and although it can decode 1080p H.264 at lower bitrates (Apple trailers, etc.), it cannot decode Blu-ray / HD DVD material because of the massive bitrate. In fact, anything less than a fast Core 2 Duo (E6600 or so and above) cannot handle these formats without dropping frames.

The hardware decoding is a very nice addition in my case. It's also great for HTPCs with (for example) slower, cooler, energy-efficient processors.

I will soon upgrade my whole system, but I made an investment in Socket 939 and DDR1, so I'm trying to get as much out of it as I can :)

I think what matters is that the video card has video post-processing, such as deinterlacing, noise reduction and so on.

Absolutely. Noise reduction not so much, but the deinterlacing, absolutely. I haven't done an in-depth comparison of NVIDIA's hardware deinterlacing versus real-time software deinterlacing with AviSynth or ffdshow recently. One unintended benefit of doing all the decoding in hardware is that you have lots of spare CPU cycles to deinterlace or do post-processing. This will be interesting to investigate!

~MiSfit
 

aka1nas

Diamond Member
Originally posted by: JAG87




nullpointerus, you are absolutely right in your interpretation; that is exactly what is happening. The TMDS transmitter which generates the video signal needs to be fed from a buffer, and that buffer is the main card's memory. Storing 1920x1200 'finished' frames is not that bad, but add to that the fact that the primary card also has to store half of those frames and the GPU has to apply filters to them, potentially making them bigger in size (think FSAA), and 512MB of memory doesn't seem like enough anymore. The same applies to the 256-bit bus.




Interesting info, thanks JAG. I wonder if using SLI AA would give you better overall scaling in some of these cases?
 

JAG87

Diamond Member
Originally posted by: aka1nas

Interesting info, thanks JAG. I wonder if using SLI AA would give you better overall scaling in some of these cases?

No problem dude, always happy to help.

Personally, I think SLI AA is a huge waste, since it is only used for insanely high AA levels like 8x, 16x and the mind-numbing 32x. I can hardly notice the difference between 4x and 8x; giving up SLI scaling for such a small IQ improvement is plain stupid. Perhaps it does scale better like you proposed, but the usefulness just isn't there. Now that we are on the AA topic, what really looks appealing is NVIDIA's new MSAA mode, which is extremely close to SSAA quality while having only the MSAA performance hit. Now THAT I look forward to!

@Misfit, I agree with everything you posted; well put.
 

taltamir

Lifer
Originally posted by: Cookie Monster

Lastly, the 7950GX2 IS SLI:

Link

You see, at the heart of the GX2 is NVIDIA's SLI technology. To put it simply, the GX2 is basically two 512MB GeForce 7900 cards stacked one on top of the other and linked together via a sort of expanded SLI bridge. There is no revolutionary GPU hiding under the twin coolers; it's essentially the same G71 with all the same features that we already have on the 7900 GTX and 7900 GT. And because its heart is SLI, it has all the same disadvantages that come with a regular dual-card SLI setup.



QFT... Listen JAG, no one cares if the control panel calls it "dual-core GPU" instead of "SLI"; NVIDIA clearly stated in their tech specs that it is two cards connected over an SLI bridge (which they described as a PCIe x16 to two PCIe x8 SLI adapter). Everyone else has told you that, and you keep arguing based on what it is called in the drivers; well, it is called that in the drivers because they want to keep people from being confused.
 
Dazed and Confused
This thread went out of control a few days ago, and I vote to lock it due to ignorant content.

You kids took a perfectly legit inquiry and f'd it up with your argument. GROW UP!!!!
 

JAG87

Diamond Member
Originally posted by: Dazed and Confused
This thread went out of control a few days ago, and I vote to lock it due to ignorant content.

You kids took a perfectly legit inquiry and f'd it up with your argument. GROW UP!!!!

Actually, we turned it into a very constructive conversation with lots of good information. If you bother to read it, you can see there is a lot of good info on how SLI works.

Taltamir seems to be the only asshat who needs to make inappropriate comments in every post, despite having no knowledge or first-hand experience to back up his statements.
 

bryanW1995

Lifer
This thread made me want a single 8800GT, then 8800GT SLI, and finally G100 or R700. I think I'll wait a while and see what's around the corner...

Dazed, your drill-sergeant mentality can be very off-putting at times, but you have definitely helped this thread develop into a very good read.
 

Wolfrages

Junior Member
OK, very good fight here, but yes: a single GT is slower than dual GTs or a single GTX, and dual GTs are not going to beat dual GTXs.

Now here's where it's going to get interesting: in December the 780i chipset is being released. It was supposed to come out in November, but due to problems with the Penryn core it was delayed to December. Now, what I was going to say:

The 780i is going to be a quad-SLI motherboard - yes, I said it, 4 cards.

Now that is definitely going to beat even an Ultra on its own with 8800GTs, but we will see. The GTs are going to get a lot of sales anyway, due to the fact that they are a little faster in some programs vs the GTS 640.

Plus, the GT is going to be an option when it comes to quad SLI, because you don't need water cooling to keep your PCI slots usable, unlike with the GTS/GTX/Ultra, so its single-slot cooling might be a plus.

The 790 chipset for AM2+ is already released and does support quad CrossFire. You can look up some info on it if you wish; if you want info on the motherboards, I can post some links for you guys.