
7800 GTX 512MB countdown...

Originally posted by: Ronin
I find it highly unlikely that a pair of GTs will outperform the 512MB GTX, if the clock speeds fall in line.
A pair of GTs will outperform the 512MB GTX even if those clock speeds fall in line.
 
Originally posted by: crazydingo
Originally posted by: Ronin
I find it highly unlikely that a pair of GTs will outperform the 512MB GTX, if the clock speeds fall in line.
A pair of GTs will outperform the 512MB GTX even if those clock speeds fall in line.

Not if the GTs are crippled by their "small" 256 MB of VRAM.
 
Originally posted by: xtknight
Not if the GTs are crippled by their "small" 256 MB of VRAM.
Which game gets a significant boost from 256->512MB?

A pair of 7800GTs is around 60-70% faster than a 256MB GTX (at the highest resolutions with max details). I don't think the 512MB GTX will provide a bigger boost than that.
 
You don't understand much about hardware, do you?

Simple math negates your comments. If you'd like me to break it down for you, I'll be happy to.
 
Originally posted by: Ronin
You don't understand much about hardware, do you?

Simple math negates your comments. If you'd like me to break it down for you, I'll be happy to.
Be my guest. 🙂
 
Originally posted by: crazydingo
Originally posted by: Ronin
You don't understand much about hardware, do you?

Simple math negates your comments. If you'd like me to break it down for you, I'll be happy to.
Be my guest. 🙂

Two 7800GTs (theoretical; you lose some to SLI overhead):

Fillrate: 400 MHz (clock rate) * 20 (pipelines) * 2 (SLI) = 16 GPixel/s.
Bandwidth: 1000 MHz (effective DDR clock) * 256 (bits/transfer) / 8 (bits/byte) = 32 GB/s.

One 7800GTX (550/1800):

Fillrate: 550 MHz (clock rate) * 24 (pipelines) = 13.2 GPixel/s.
Bandwidth: 1800 MHz (effective DDR clock) * 256 (bits/transfer) / 8 (bits/byte) = 57.6 GB/s.

With the higher memory clock rate, a single 7800GTX "Ultra" might be better in extremely bandwidth-limited situations (such as using 8xAA and 16xAF at very high resolution in games with extremely detailed/uncompressed textures). I'm not sure how bandwidth-limited a 7800GT setup gets at various settings.

But the 7800GT SLI has over 20% better pixel fillrate (and that's assuming a 550 MHz core clock on the 7800GTX; if it's closer to 500 MHz, the 7800GTs start to pull further ahead).
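The arithmetic in the post above can be checked with a short script. This is just a sketch; the clock, pipeline, and bus-width figures are the ones quoted in the post, not confirmed specs:

```python
# Theoretical pixel fill rate and memory bandwidth, using the figures quoted above.

def fillrate_gpixels(core_mhz, pipelines, cards=1):
    """Pixel fill rate in GPixel/s: core clock (MHz) * pixel pipelines * number of cards."""
    return core_mhz * pipelines * cards / 1000

def bandwidth_gbytes(effective_mem_mhz, bus_bits):
    """Memory bandwidth in GB/s: effective (DDR) clock (MHz) * bus width in bytes."""
    return effective_mem_mhz * (bus_bits / 8) / 1000

# Two 7800GTs in SLI (400 MHz core, 20 pipes each; 1000 MHz effective memory, 256-bit bus)
gt_sli_fill = fillrate_gpixels(400, 20, cards=2)  # 16.0 GPixel/s
gt_bw       = bandwidth_gbytes(1000, 256)         # 32.0 GB/s per card

# One 7800GTX "Ultra" at 550/1800
gtx_fill = fillrate_gpixels(550, 24)              # 13.2 GPixel/s
gtx_bw   = bandwidth_gbytes(1800, 256)            # 57.6 GB/s

print(gt_sli_fill, gt_bw, gtx_fill, gtx_bw)
```

Note that the SLI bandwidth is not doubled here: each card reads from its own mirrored copy of the frame data, so per-card bandwidth is the fairer comparison.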
 
Originally posted by: crazydingo
A pair of GTs will outperform the 512MB GTX even if those clock speeds fall in line.

Not necessarily. If the game in question doesn't scale with SLI, it won't.

And many would argue that a single high-performance card is superior to a pair, given the extra power consumption and heat output at an equivalent performance level.
 
Originally posted by: Matthias99
Originally posted by: crazydingo
Originally posted by: Ronin
You don't understand much about hardware, do you?

Simple math negates your comments. If you'd like me to break it down for you, I'll be happy to.
Be my guest. 🙂

Two 7800GTs (theoretical; you lose some to SLI overhead):

Fillrate: 400 MHz (clock rate) * 20 (pipelines) * 2 (SLI) = 16 GPixel/s.
Bandwidth: 1000 MHz (effective DDR clock) * 256 (bits/transfer) / 8 (bits/byte) = 32 GB/s.

One 7800GTX (550/1800):

Fillrate: 550 MHz (clock rate) * 24 (pipelines) = 13.2 GPixel/s.
Bandwidth: 1800 MHz (effective DDR clock) * 256 (bits/transfer) / 8 (bits/byte) = 57.6 GB/s.

With the higher memory clock rate, a single 7800GTX "Ultra" might be better in extremely bandwidth-limited situations (such as using 8xAA and 16xAF at very high resolution in games with extremely detailed/uncompressed textures). I'm not sure how bandwidth-limited a 7800GT setup gets at various settings.

But the 7800GT SLI has over 20% better pixel fillrate (and that's assuming a 550 MHz core clock on the 7800GTX; if it's closer to 500 MHz, the 7800GTs start to pull further ahead).
Your calculations are correct, but the real-world performance is different. I still stand by my post. Waiting for Ronin's explanation.
 
Originally posted by: crazydingo
Originally posted by: Matthias99
Your calculations are correct, but the real-world performance is different. I still stand by my post. Waiting for Ronin's explanation.

I actually agreed with you... 😕

A stock 7800GTX (430/1200) is significantly slower than a 7800GT SLI setup (the 7800GT SLI has roughly 55% more pixel/shader fillrate, and only about 17% less bandwidth).

A super-mega-OCed 7800GTX with 1800 MHz RAM would be a lot closer, but still probably slower overall (although you could obviously come up with tests that would severely hurt the 7800GTs due to the lower memory speed and size).
 
Originally posted by: Matthias99
Originally posted by: crazydingo
Originally posted by: Matthias99
Your calculations are correct, but the real-world performance is different. I still stand by my post. Waiting for Ronin's explanation.
I actually agreed with you... 😕

A stock 7800GTX (430/1200) is significantly slower than a 7800GT SLI setup (the 7800GT SLI has roughly 55% more pixel/shader fillrate, and only about 17% less bandwidth).

A super-mega-OCed 7800GTX with 1800 MHz RAM would be a lot closer, but still probably slower overall (although you could obviously come up with tests that would severely hurt the 7800GTs due to the lower memory speed and size).
I didn't disagree with you either.
 
Originally posted by: Dkcode
My GTX just died and I voided my warranty, so I'll be buying a new card soon. This 512MB GTX and its ATi equivalent have got my eye. I do, however, feel a little put off from nvidia, this being my first video card to break. However, if this is true about the 550MHz core and 1800MHz memory, I might just give them one more try.

Yeah, but dude, you would have a nice RMA right now if you'd just left the card alone! 😉
But seriously, I know it's tempting to mod with better cooling and whatever, but on high-dollar cards, it really hurts when you mod them and they just happen to die. Sucks, but that's the risk. The lifetime warranty on my GTX will remain intact forever. Ain't touchin' it.

 
Originally posted by: Matthias99
Originally posted by: crazydingo
Originally posted by: Ronin
You don't understand much about hardware, do you?

Simple math negates your comments. If you'd like me to break it down for you, I'll be happy to.
Be my guest. 🙂

Two 7800GTs (theoretical; you lose some to SLI overhead):

Fillrate: 400 MHz (clock rate) * 20 (pipelines) * 2 (SLI) = 16 GPixel/s.
Bandwidth: 1000 MHz (effective DDR clock) * 256 (bits/transfer) / 8 (bits/byte) = 32 GB/s.

One 7800GTX (550/1800):

Fillrate: 550 MHz (clock rate) * 24 (pipelines) = 13.2 GPixel/s.
Bandwidth: 1800 MHz (effective DDR clock) * 256 (bits/transfer) / 8 (bits/byte) = 57.6 GB/s.

With the higher memory clock rate, a single 7800GTX "Ultra" might be better in extremely bandwidth-limited situations (such as using 8xAA and 16xAF at very high resolution in games with extremely detailed/uncompressed textures). I'm not sure how bandwidth-limited a 7800GT setup gets at various settings.

But the 7800GT SLI has over 20% better pixel fillrate (and that's assuming a 550 MHz core clock on the 7800GTX; if it's closer to 500 MHz, the 7800GTs start to pull further ahead).

You know, I myself have been guessing that SLI GTs will likely beat the new 512GTX. And all those GPixel numbers really don't mean anything unless there is a benchmark with fps for an actual game that backs them up.

And, there might actually be some good evidence of that now that I look around more (and take some liberties with some educated guessing).

Take BF2 at 2048x1536 4xAA, for example (from AT):
X1800XT.........56
GTX................41
SLI GTs...........71
Now, the GTs are looking good compared to everyone else. But if you were to throw a 512MB GTX into the mix, with as much RAM as the great-performing XT and a nice clock speed increase to boot, it could be a good match for the SLI GTs.

Another good example would be Quake 4 at 2048x1536 4xAA/8xAF Ultra Quality:
X1800XT.........48
GTX................35
SLI GTs...........43
Now, here we see the XT pulling ahead of the 256MB cards, even the SLI setup. A faster 512MB card would do even better, I would think.

There's also some interesting info here showing 512mb cards stomping all over their 256mb competition.

I retract all of my previous rantings that SLI GTs will be faster than a GTX, and I'll leave it as a "let's see what happens".
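For reference, the relative gaps in the AT numbers quoted in the post above can be computed directly. A sketch; the fps figures are the ones quoted in the post:

```python
def pct_faster(a, b):
    """How much faster a is than b, as a percentage."""
    return (a / b - 1) * 100

# BF2, 2048x1536 4xAA (fps from the post)
bf2 = {"X1800XT": 56, "GTX": 41, "SLI GTs": 71}
# Quake 4, 2048x1536 4xAA/8xAF Ultra Quality (fps from the post)
q4 = {"X1800XT": 48, "GTX": 35, "SLI GTs": 43}

print(round(pct_faster(bf2["SLI GTs"], bf2["GTX"])))   # ~73% faster in BF2
print(round(pct_faster(q4["X1800XT"], q4["SLI GTs"])))  # ~12% faster in Quake 4
```

So in BF2 the SLI GTs' lead over the single 256MB GTX is roughly in line with the 60-70% claim earlier in the thread, while in Quake 4 the 512MB X1800XT beats even the SLI pair.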
 
Originally posted by: crazydingo
Originally posted by: Ronin
You don't understand much about hardware, do you?

Simple math negates your comments. If you'd like me to break it down for you, I'll be happy to.
Be my guest. 🙂

Fair enough. Let's estimate an 80% increase in performance from adding a second card.

A pair of GTs will generate:

20*400*2*80% = 12,800MTexels/s (Pipes*Core Frequency*number of cards*% variation)
500*32*2*2*80% = 51.2GB/s (Memory Frequency*Memory Bus Width in Bytes*DDR*number of cards*% variation)

A 512MB GTX will generate (speculation based on Inq #'s):
24*550 = 13,200MTexels/s (Pipes*Core Frequency)
900*32*2 = 57.6GB/s (Memory Frequency*Memory Bus Width in Bytes*DDR)

This doesn't account for the extra memory, which will of course factor in (especially at higher resolutions).

In an ideal world, where 100% scaling is achieved:
20*400*2 = 16,000MTexels/s
500*32*2*2 = 64GB/s
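The estimate above can be reproduced with a short script. A sketch only; the 80% SLI scaling factor and the rumored 512MB GTX clocks are the assumptions made in this post, not confirmed specs:

```python
# Theoretical throughput for a 7800GT pair vs. a rumored 512MB GTX,
# using the figures assumed in the post above.

SLI_SCALING = 0.80  # assumed real-world efficiency of adding a second card

# Pair of 7800GTs: 20 pipes, 400 MHz core; 500 MHz (1000 effective DDR) memory, 256-bit bus
gt_pair_mtexels = 20 * 400 * 2 * SLI_SCALING             # ~12,800 MTexels/s
gt_pair_gbps    = 500 * 32 * 2 * 2 * SLI_SCALING / 1000  # ~51.2 GB/s

# Rumored 512MB GTX: 24 pipes, 550 MHz core; 900 MHz (1800 effective DDR) memory
gtx512_mtexels = 24 * 550             # 13,200 MTexels/s
gtx512_gbps    = 900 * 32 * 2 / 1000  # 57.6 GB/s

print(gt_pair_mtexels, gt_pair_gbps, gtx512_mtexels, gtx512_gbps)
```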

Now, let's use some numbers for the GT from Anandtech's own review (GT vs GT SLi/% increase from single to SLi)
BF2 1600x1200 No AA - 32%
BF2 2048x1536 No AA - 63%
BF2 1600x1200 4X AA - 30%
BF2 2048x1536 4X AA - 98%

D3 1600x1200 No AA - 13%
D3 2048x1536 No AA - 29%
D3 1600x1200 4X AA - 67%
D3 2048x1536 4X AA - 33%

EQII 1600x1200 No AA - 1%
EQII 2048x1536 No AA - 5%
EQII 1600x1200 4X AA - 68%
EQII 1920x1200 4X AA - 75%

HL2 1600x1200 No AA - 5%
HL2 2048x1536 No AA - 17%
HL2 1600x1200 4X AA - 21%
HL2 2048x1536 4X AA - 40%

SC:CT 1600x1200 No AA - 48%
SC:CT 2048x1536 No AA - 48%
SC:CT 1600x1200 4X AA - 55%

KOTR 1600x1200 No AA - 7%
KOTR 1600x1200 4X AA - 20%

Now, let's compare how much of a performance increase over a GTX a pair of GTs get:

BF2 1600x1200 No AA - 9%
BF2 2048x1536 No AA - 30%
BF2 1600x1200 4X AA - 33%
BF2 2048x1536 4X AA - 68%

D3 1600x1200 No AA - 3%
D3 2048x1536 No AA - 10%
D3 1600x1200 4X AA - 26%
D3 2048x1536 4X AA - 32%

EQII 1600x1200 No AA - -0.5%
EQII 2048x1536 No AA - -6%
EQII 1600x1200 4X AA - 49%
EQII 1920x1200 4X AA - 51%

HL2 1600x1200 No AA - -6%
HL2 2048x1536 No AA - -4%
HL2 1600x1200 4X AA - 2%
HL2 2048x1536 4X AA - -9%

SC:CT 1600x1200 No AA - -9%
SC:CT 2048x1536 No AA - -9%
SC:CT 1600x1200 4X AA - -3%

KOTR 1600x1200 No AA - -6%
KOTR 1600x1200 4X AA - 2%

Wow... look at that. I'm still looking for your 60-70% increase, based on Anandtech's own numbers. I see one situation where it breaks 60%, and another that breaks 50%.

It would seem that my 80% estimate was generous, wouldn't you say? I've bolded the negative % scenarios.
 
I'll only be getting them if I can watercool these things, but since there is a massive heatsink on the RAM, I don't know if I'll be able to put RAM sinks on it and be just as effective as stock. I mean, the RAM must be really cooking if it has to be cooled by that huge heatsink. I could also get a water block that cools the RAM too, but that would just heat the block up too much, which isn't what I want.

 
Originally posted by: damstr
I'll only be getting them if I can watercool these things, but since there is a massive heatsink on the RAM, I don't know if I'll be able to put RAM sinks on it and be just as effective as stock. I mean, the RAM must be really cooking if it has to be cooled by that huge heatsink. I could also get a water block that cools the RAM too, but that would just heat the block up too much, which isn't what I want.


Just put some large sinks on the RAM.
 
Originally posted by: moonboy403
I knew I should've waited before I bought my GTX! ... *sniffles*

You can wait forever and still not get the best! Welcome to the computer industry!

People will buy this thing and still be pissed off once NV and ATi bring out DX10 cards at the end of next year. ^_~ It's all about being happy with how your current hardware runs the games you play. Don't think about the future when it comes to computers... think about the now!
 
Come on, crazydingo. I put a fair amount of time into my response, so I'm waiting for yours. 🙂
 
I've made a simple decision:

I'm waiting till two weeks before the release of TES: Oblivion (due Q2 2006), and then I'm buying the single best video card out at that time ;p

That game is gonna look sweeeeee-eeeeet.
 
Or simply because everyone else who has posted information on the card (none of which has been consistent) has removed theirs as well (ZZF, for one, and perhaps MWave, although I haven't checked there yet; they didn't have clock speeds listed on theirs).
 