I took the plunge and got another 8800 GTX for Tri SLI

FiLeZz

Diamond Member
Jun 16, 2000
4,778
47
91
I picked up another GTX card for tri SLI

I have a Dell 30" monitor. 2560x1600 is about 78% more pixels than 1920x1200.
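Quick sanity check on that 78% figure, in case anyone wants the raw pixel math (just the two resolutions, no other assumptions):

```python
# Raw pixel-count comparison between the two resolutions above.
dell_30 = 2560 * 1600      # 4,096,000 pixels
typical_24 = 1920 * 1200   # 2,304,000 pixels

extra = (dell_30 / typical_24 - 1) * 100
print(f"~{extra:.0f}% more pixels")   # ~78% more pixels
```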


So far I am very pleased with it.

I had to pick up a new power supply. I had an OCZ 850 watt PSU that I did not think would be able to cut it, so I replaced it with a 1200 watt Silverstone power supply.

I will be waiting for the next refresh of cards to hit the market before I change my cards.


I have had two 8800GTX cards in SLI from launch day.

The tri SLI at my res made a huge difference.

Kind of crazy that these cards have more memory bandwidth than the new cards and more memory to boot. I have no idea why Nvidia went backwards as far as performance at super high res.

Pictures Added

http://www.flickr.com/photos/z...ll/2396695203/sizes/l/

http://www.flickr.com/photos/z...set-72157604430797542/


 

FiLeZz

Diamond Member
Jun 16, 2000
4,778
47
91
OC on the CPU, but I never really pushed the cards,
so I do not have the cards OC'ed.
 

FiLeZz

Diamond Member
Jun 16, 2000
4,778
47
91
Tuniq Tower heatsink on the CPU


Antec Nine Hundred case has loads of fans: a 200mm on the top and four 120mm fans.

 

betasub

Platinum Member
Mar 22, 2006
2,677
0
0
Originally posted by: jaredpace
2200 MB of vmem FTW!

Yeah, or depending on your take: the same 768MB frame buffer in triplicate, or 1536MB wasted.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: betasub
Originally posted by: jaredpace
2200 MB of vmem FTW!

Yeah, or depending on your take: the same 768MB frame buffer in triplicate, or 1536MB wasted.

So you're saying the 1536MB of memory on the two additional cards is not used? If that were the case, then I would agree with you. But that is not the case. All the memory is used, although not exactly the way you wished it would be used. Imagine how Tri SLI would perform if all the cards had only the memory of the lead card to use?

So, wasted, it is not.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: FiLeZz
Kind of crazy that these cards have more memory bandwidth than the new cards and more memory to boot. I have no idea why Nvidia went backwards as far as performance at super high res.

Cost cutting. NV probably realized most gamers are still not in the high-end resolutions of 1920+ and that less VRAM and a smaller memory bus would still be enough. G92 saw some significant cost-cutting in terms of transistors with only 16 ROP and 4x64-bit memory controllers compared to 24 and 6x64-bit memory controllers on G80. Also, they save 33% on RAM in standard configurations going from 768MB to 512MB. In any case these changes along with the die shrink have clearly resulted in significant savings passed on to the consumer. You have similar performance to the highest performing parts at a fraction of the price.
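If anyone wants to see the bandwidth gap in numbers, here's the back-of-the-envelope math (the reference memory clocks are from memory, so double-check them before quoting):

```python
# Rough bandwidth comparison: bus width (bits) / 8 * effective data rate (GT/s).
# Memory clocks below are the stock reference clocks as I remember them.
cards = {
    "8800 GTX (G80)": (384, 1.8),   # 384-bit bus, 900 MHz GDDR3 -> 1.8 GT/s
    "9800 GTX (G92)": (256, 2.2),   # 256-bit bus, 1100 MHz GDDR3 -> 2.2 GT/s
}
for name, (bus_bits, rate_gts) in cards.items():
    print(f"{name}: {bus_bits / 8 * rate_gts:.1f} GB/s")
# 8800 GTX (G80): 86.4 GB/s
# 9800 GTX (G92): 70.4 GB/s
```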

 

betasub

Platinum Member
Mar 22, 2006
2,677
0
0
Originally posted by: keysplayr2003
So you're saying the 1536MB of memory on the two additional cards is not used? If that were the case, then I would agree with you. But that is not the case. All the memory is used, although not exactly the way you wished it would be used. Imagine how Tri SLI would perform if all the cards had only the memory of the lead card to use?

So, wasted, it is not.

Did I say it wasn't used? No, that was simply your spin on it. I also used the phrase "depending on your take" so please don't try to pick an argument.

There is only room for 768MB of exclusive frame buffer - the additional vRAM contains replications of this for the additional GPUs to work on. Ideally all the vRAM would be available to any of the GPUs, such as in dual-core CPUs which are able to share L2 cache.
 

Extelleron

Diamond Member
Dec 26, 2005
3,127
0
71
Originally posted by: chizow
Originally posted by: FiLeZz
Kind of crazy that these cards have more memory bandwidth then the new cards and more memory to boot.. I have no idea why nividia went backwards as far as preformance at super hi res..

Cost cutting. NV probably realized most gamers are still not in the high-end resolutions of 1920+ and that less VRAM and a smaller memory bus would still be enough. G92 saw some significant cost-cutting in terms of transistors with only 16 ROP and 4x64-bit memory controllers compared to 24 and 6x64-bit memory controllers on G80. Also, they save 33% on RAM in standard configurations going from 768MB to 512MB. In any case these changes along with the die shrink have clearly resulted in significant savings passed on to the consumer. You have similar performance to the highest performing parts at a fraction of the price.

The actual G92 chip has more transistors than G80 - incorporating UVD, the display chip (which was separate from the main G80 die), and double the texture units caused an increase in transistors from ~681M (G80) to 754M (G92).

So I'm pretty certain all 24 ROPs are present in G92, just only 16 are activated. The reason for that is that you can't have a 256-bit bus with 24 ROPs. A 384-bit bus makes for a much more complicated PCB and is more expensive, plus as you pointed out 768MB of memory will be more expensive than 512MB.
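The coupling between ROP count and bus width works out like this, at least as I understand the partition layout on this generation (each ROP partition of 4 ROPs hangs off its own 64-bit memory channel):

```python
# Sketch of the ROP-partition / bus-width relationship on G80-class chips,
# assuming 4 ROPs and one 64-bit memory channel per partition.
ROPS_PER_PARTITION = 4
BITS_PER_PARTITION = 64

def bus_width(total_rops):
    partitions = total_rops // ROPS_PER_PARTITION
    return partitions * BITS_PER_PARTITION

print(bus_width(24))   # 384 -> G80 / 8800 GTX
print(bus_width(16))   # 256 -> G92 / 9800 GTX
```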

 

Zap

Elite Member
Oct 13, 1999
22,377
7
81
Originally posted by: jaredpace
2200 MB of vmem FTW!

Something just occurred to me. Does this impact the amount of usable RAM with a 32 bit Windows OS?
 

FiLeZz

Diamond Member
Jun 16, 2000
4,778
47
91
I am sure it would.

For sure it would chew up usable RAM when you have three cards that need memory addressing.
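Rough worst-case math, assuming each card's full frame buffer gets mapped into the 32-bit address space (in practice the drivers may reserve less, so treat this as an upper bound):

```python
# Worst-case 32-bit address-space math, assuming the full 768 MB frame
# buffer of each of the three cards gets mapped.  Real reservations are
# often smaller, so this is only an upper bound on what tri SLI could eat.
TOTAL_ADDRESS_SPACE_MB = 4096
FRAME_BUFFER_MB = 768
CARDS = 3

reserved = FRAME_BUFFER_MB * CARDS               # 2304 MB for the cards
usable = TOTAL_ADDRESS_SPACE_MB - reserved       # 1792 MB left
print(f"~{usable} MB of RAM addressable before other devices take their cut")
```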



Good thing I am on 64bit vista ult.

 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: betasub
Originally posted by: keysplayr2003
So you're saying the 1536MB of memory on the two additional cards is not used? If that were the case, then I would agree with you. But that is not the case. All the memory is used, although not exactly the way you wished it would be used. Imagine how Tri SLI would perform if all the cards had only the memory of the lead card to use?

So, wasted, it is not.

Did I say it wasn't used? No, that was simply your spin on it. I also used the phrase "depending on your take" so please don't try to pick an argument.

There is only room for 768MB of exclusive frame buffer - the additional vRAM contains replications of this for the additional GPUs to work on. Ideally all the vRAM would be available to any of the GPUs, such as in dual-core CPUs which are able to share L2 cache.

No spinning or picking an argument here bud. If I am picking an argument with you, you'll know it. Just correcting your usage of a term.
You said it was "wasted". And where I come from, wasted means unused or discarded.
What does it mean where you come from? Used?

Bolded above ^ This is what I meant by "All the memory is used, although not exactly the way you wished it would be used." (Like your example of shared cache on a DC CPU).

This is the way multi-GPU solutions work (at least currently). Each GPU has its own framebuffer with the same data, or at least a buffer of a particular frame.
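A toy model of what I mean, simplified down to the part that matters for the "wasted" discussion (real drivers juggle per-frame render targets, but the working set is duplicated on every card):

```python
# Toy model of AFR-style SLI memory use: each card holds its own copy of
# the same working set, so physical VRAM scales with card count but the
# pool a game can actually address does not.
CARD_VRAM_MB = 768

def vram_totals(num_cards):
    physical = CARD_VRAM_MB * num_cards   # what's soldered onto the boards
    effective = CARD_VRAM_MB              # what the game effectively gets
    return physical, effective

for n in (1, 2, 3):
    physical, effective = vram_totals(n)
    print(f"{n} card(s): {physical} MB physical, ~{effective} MB effective")
# 3 card(s): 2304 MB physical, ~768 MB effective
```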

 

Narse

Moderator, Computer Help
Moderator
Mar 14, 2000
3,826
1
81
Nice to see Crysis playable on 1920x1200 at very high settings. I may have to end up getting a third 9800gtx.
 

thilanliyan

Lifer
Jun 21, 2005
12,040
2,256
126
Originally posted by: Narse
Nice to see Crysis playable on 1920x1200 at very high settings. I may have to end up getting a third 9800gtx.

Wow @ Crysis...tri-SLI needed to play at high res... is this progress? :confused:
 

Throckmorton

Lifer
Aug 23, 2007
16,829
3
0
Originally posted by: Narse
Nice to see Crysis playable on 1920x1200 at very high settings. I may have to end up getting a third 9800gtx.

Sucks to have to run Crysis at 1920x1200 on a 2560x1600 LCD
 

n7

Elite Member
Jan 4, 2004
21,281
4
81
Originally posted by: Throckmorton
Originally posted by: Narse
Nice to see Crysis playable on 1920x1200 at very high settings. I may have to end up getting a third 9800gtx.

Sucks to have to run Crysis at 1920x1200 on a 2560x1600 LCD

You don't have to run it at 1920x1200.

I can run it @ 2560x1600 with a single GTX.

Obviously not at Very High settings though ;)
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
Originally posted by: Throckmorton
Originally posted by: Narse
Nice to see Crysis playable on 1920x1200 at very high settings. I may have to end up getting a third 9800gtx.

Sucks to have to run Crysis at 1920x1200 on a 2560x1600 LCD

Run it at 1280x800 and watch the bullets fly. ;)
 
Oct 20, 2005
10,978
44
91
Originally posted by: FiLeZz
Tuniq Tower heatsink on the CPU


Antec Nine Hundred case has loads of fans: a 200mm on the top and four 120mm fans.

200mm fans? never seen those before...
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Extelleron
Originally posted by: chizow
Originally posted by: FiLeZz
Kind of crazy that these cards have more memory bandwidth than the new cards and more memory to boot. I have no idea why Nvidia went backwards as far as performance at super high res.

Cost cutting. NV probably realized most gamers are still not in the high-end resolutions of 1920+ and that less VRAM and a smaller memory bus would still be enough. G92 saw some significant cost-cutting in terms of transistors with only 16 ROP and 4x64-bit memory controllers compared to 24 and 6x64-bit memory controllers on G80. Also, they save 33% on RAM in standard configurations going from 768MB to 512MB. In any case these changes along with the die shrink have clearly resulted in significant savings passed on to the consumer. You have similar performance to the highest performing parts at a fraction of the price.

The actual G92 chip has more transistors than G80 - incorporating UVD, the display chip (which was separate from the main G80 die), and double the texture units caused an increase in transistors from ~681M (G80) to 754M (G92).

So I'm pretty certain all 24 ROPs are present in G92, just only 16 are activated. The reason for that is that you can't have a 256-bit bus with 24 ROPs. A 384-bit bus makes for a much more complicated PCB and is more expensive, plus as you pointed out 768MB of memory will be more expensive than 512MB.

It's possible G92 has more ROPs and memory controllers... but I highly doubt it. Unless GT200 is just that, which would be a disappointment. But ya, I figured between the incorporation of NVIO and bringing the texture units up to 1:1, that would've covered the difference in transistors between a 16 ROP G92 and a 24 ROP G80. Personally I don't think G92 has the necessary traces and pin-outs for more than a 256-bit bus; if it did, I think NV would've gone that route instead of the recent half-baked solutions they've come up with like the GX2 and 9800 GTX.
 

betasub

Platinum Member
Mar 22, 2006
2,677
0
0
Originally posted by: keysplayr2003
No spinning or picking an argument here bud.

Bud? Ouch, is this getting personal and petty?

If I am picking an argument with you, you'll know it.

Is this macho posturing or an attempt to intimidate? Unnecessary.

Thankfully your post did have a point of argument:

Just correcting your usage of a term.
You said it was "wasted". And where I come from, wasted means unused or discarded.
What does it mean where you come from? Used?

Something can be "used" or employed in an inefficient or wasteful manner. For example, leaving a light bulb switched on 24/7 might be considered a "waste" of energy by some - it is still being used.

This is the way multi-GPU solutions work (at least currently). Each GPU has its own framebuffer with the same data, or at least a buffer of a particular frame.

Understood. So we are agreed that "2200 MB of vmem FTW" doesn't contain any additional information beyond that in a single 768MB set?
 

sammajidi

Junior Member
Apr 1, 2008
17
0
0
Originally posted by: FiLeZz
I picked up another GTX card for tri SLI

I have a Dell 30" monitor. 2560x1600 is about 78% more pixels than 1920x1200.


So far I am very pleased with it.

I had to pick up a new power supply. I had an OCZ 850 watt PSU that I did not think would be able to cut it, so I replaced it with a 1200 watt Silverstone power supply.

I will be waiting for the next refresh of cards to hit the market before I change my cards.


I have had two 8800GTX cards in SLI from launch day.

The tri SLI at my res made a huge difference.

Kind of crazy that these cards have more memory bandwidth than the new cards and more memory to boot. I have no idea why Nvidia went backwards as far as performance at super high res.

Nice! I just built a tri-SLI rig this past weekend. I also chose to use 3x 8800 GTXs, as I am matching them to a 30" 3007 as well.

To drive 2560x1600, everything pointed to the 384-bit/768MB 8800s vs. the 256-bit/512MB 9800s.

This thing can score 14k+ in 3DMark @ 1920x1080. The raw horsepower of tri-SLI is unmistakable.

Crysis can run at all Ultra-high settings @ 1920x1080 with great framerates!

I have not even truly tweaked the system yet. I have only spent 10 or so hours on it. (My first build)

I will try to post some pics and numbers this week.