What clock would an 8800GTX need to be at to match GTX680?

Smoblikat

Diamond Member
Nov 19, 2011
5,184
107
106
Exactly as the title says, how high a clock speed would an 8800GTX need to achieve to tie a stock GTX 680? Memory and core. I'm guessing at least 6GHz on the core and 10GHz on the memory.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
I just figured it would be fun to guess. I'm sure someone out there will do a comparison of clock speed to FPS and scale that up :p
I did some rough math.

2 x 8800GTX = GTX 460
2 x GTX 460 = AMD 7870
Add in another GTX 460 and you have a GTX 680.

So a total of 6x whatever the base clockspeed of an 8800GTX was. Memory is another story.
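Taken at face value, that chain makes for a quick back-of-the-envelope sketch. It assumes the stock 575MHz core clock and perfectly linear clock-to-performance scaling (both generous assumptions, as the next reply points out):

```python
# Back-of-the-envelope sketch of the doubling chain above.
# Assumes the 8800GTX's stock 575 MHz core clock and perfectly
# linear clock-to-performance scaling -- both generous assumptions.
BASE_CLOCK_MHZ = 575       # 8800GTX stock core clock

gtx460 = 2                 # one GTX 460 ~= two 8800GTXs
hd7870 = 2 * gtx460        # one HD 7870 ~= two GTX 460s
gtx680 = hd7870 + gtx460   # HD 7870 + one more GTX 460 ~= GTX 680

print(f"GTX 680 ~= {gtx680}x an 8800GTX")
print(f"Implied core clock: {gtx680 * BASE_CLOCK_MHZ / 1000:.2f} GHz")  # ~3.45
```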
 

Smoblikat

Diamond Member
Nov 19, 2011
5,184
107
106
I did some rough math.

2 x 8800GTX = GTX 460
2 x GTX 460 = AMD 7870
Add in another GTX 460 and you have a GTX 680.

So a total of 6x whatever the base clockspeed of an 8800GTX was. Memory is another story.

That is assuming linear scaling of clocks though.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
http://www.gpureview.com/show_cards.php?card1=474&card2=667

Too hard to estimate for 3 reasons:

1) 8800GTX is not a DX11 card, so it makes it difficult to compare gaming performance;

2) There is a 3x pixel fill-rate and a 10x texture fill-rate disadvantage against the 680. In theory, that means if you increase the GPU clock 10x (to ~5.7GHz) to close the texture gap, you end up with 3.3x the pixel fill-rate of the 680... However, if you only match the pixel fill-rate, you are way under on texture. This point by itself makes it very difficult to settle on a mid-point here.

3) Theoretical pixel and texture fill-rates are not necessarily related to actual real world performance an architecture can extract.

For these types of comparisons I like to use this:
http://alienbabeltech.com/abt/viewtopic.php?p=41174

8800GTX = 53 points
GTX680 = 235 points

So you need about 4.43 8800GTXs to match a 680.
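Putting the numbers from this post into one quick sketch (the fill-rate ratios and Voodoo-power points are taken at face value):

```python
# Sketch of the two estimates above, using the numbers quoted in this post.

# Point 2: the fill-rate dilemma. Matching the 680's texture fill-rate
# overshoots its pixel fill-rate by ~3.3x.
core_mhz = 575             # 8800GTX stock core clock
tex_gap, pix_gap = 10, 3   # rough 680-vs-8800GTX fill-rate deficits
clock_for_texture_ghz = core_mhz * tex_gap / 1000   # ~5.75 GHz
pixel_overshoot = tex_gap / pix_gap                 # ~3.3x

# The points-based estimate from the alienbabeltech table.
points = {"8800GTX": 53, "GTX680": 235}
cards_needed = points["GTX680"] / points["8800GTX"]

print(f"clock to match texture fill-rate: {clock_for_texture_ghz:.2f} GHz")
print(f"pixel fill-rate overshoot at that clock: {pixel_overshoot:.1f}x")
print(f"8800GTXs needed per the points table: {cards_needed:.2f}")  # ~4.43
```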
 

lkailburn

Senior member
Apr 8, 2006
338
0
0
Thanks for the link. My 8800 GTS 512 gets 50 points. No wonder I need to upgrade :p

-Luke

Edit: do multi-GPU setups get 2x Voodoo power on that page? Didn't see any listed.
 
Last edited:

Leyawiin

Diamond Member
Nov 11, 2008
3,204
52
91
Threads like this make me sad when I think of the $550 I paid for mine in late 2006, plus the Arctic Cooling Accelero 8800 I stuck on it a few years later. I don't care if it lasted four years - it was still a heck of a lot of money. Now it's just an impressive-looking paperweight.
 

Marty502

Senior member
Aug 25, 2007
497
0
0
The 8800GTX was an absolute monster. I had one for a while with a 2.6GHz AMD X2, and Crysis was amazing on it.
 

skipsneeky2

Diamond Member
May 21, 2011
5,035
1
71
Thanks for the link. My 8800 GTS 512 gets 50 points. No wonder I need to upgrade :p

-Luke

Edit: do multi-GPU setups get 2x Voodoo power on that page? Didn't see any listed.

My old 8800 GTS 512, bought brand new in 2007 and retired in 2008, is still trucking along in a friend's rig. It's been passed down twice now to two different friends of the family. It's one tough SOB - it still satisfies their gaming at lower resolutions, and they are budget gamers.

I also remember a 9800GTX I tested running 1440x900 on medium with no issues during the BF3 beta. For 2006-era tech, it impresses the hell out of me that it can still hold its own.
 

NickelPlate

Senior member
Nov 9, 2006
652
13
81
Threads like this make me sad when I think of the $550 I paid for mine in late 2006, plus the Arctic Cooling Accelero 8800 I stuck on it a few years later. I don't care if it lasted four years - it was still a heck of a lot of money. Now it's just an impressive-looking paperweight.

When I think of stuff like that, I just think back to my first and only turnkey PC, a Gateway 2000: a Pentium 75MHz (pre-MMX), 8MB RAM, and some generic video card (the long-defunct 3dfx was just getting started back then). I can't remember the HD size (maybe 60MB), and it ran Windows 3.1. Anyway, I paid nearly $2,000 for it back in the early '90s.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Exactly as the title says, how high a clock speed would an 8800GTX need to achieve to tie a stock GTX 680? Memory and core. I'm guessing at least 6GHz on the core and 10GHz on the memory.
N/A. No 8800-branded card ever came with enough memory, so no matter how fast it ran, it would be limited by PCIe and system RAM access times in comparison to a 2GB card.
 

lavaheadache

Diamond Member
Jan 28, 2005
6,893
14
81
N/A. No 8800-branded card ever came with enough memory, so no matter how fast it ran, it would be limited by PCIe and system RAM access times in comparison to a 2GB card.

Obviously you knew what he meant; this wasn't meant to be a true direct comparison.

I can bring up an even more obvious problem, one that is technically even more important than the lack of VRAM you mention: the 8800s can't execute DX11 code, so you could never get an apples-to-apples comparison in titles going forward.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Obviously you knew what he meant; this wasn't meant to be a true direct comparison.
Yes, but it's basically impossible to make a direct comparison. Once more than a couple of generations have elapsed, the hardware is just too different.
 

BoFox

Senior member
May 10, 2008
689
0
0
http://www.gpureview.com/show_cards.php?card1=474&card2=667

Too hard to estimate for 3 reasons:

1) 8800GTX is not a DX11 card, so it makes it difficult to compare gaming performance;

2) There is a 3x pixel fill-rate and a 10x texture fill-rate disadvantage against the 680. In theory, that means if you increase the GPU clock 10x (to ~5.7GHz) to close the texture gap, you end up with 3.3x the pixel fill-rate of the 680... However, if you only match the pixel fill-rate, you are way under on texture. This point by itself makes it very difficult to settle on a mid-point here.

3) Theoretical pixel and texture fill-rates are not necessarily related to actual real world performance an architecture can extract.

For these types of comparisons I like to use this:
http://alienbabeltech.com/abt/viewtopic.php?p=41174

8800GTX = 53 points
GTX680 = 235 points

So you need about 4.43 8800GTXs to match a 680.

Thanks, RussianSensation, for referring to the ratings. It does make me feel like the hard work is justified.

Don't forget that the 8800GTX has 64 TAUs compared to its 32 TMUs - that does seem to help some. Look at how well it has done against the 9800GTX/GTS 250, which has twice the TMUs but the same number of TAUs. The GTX 480/580 still has only 64 TMUs, and that does not seem to be holding them back much in newer games, which have become so shader-bound anyway.

But I'd still go by the 4.43x number that you derived - meaning it should be clocked at around 2.5GHz (against the original 575MHz core clock), with nearly 3x the bandwidth (keeping the same 384-bit bus as today's HD 7970, but clocked at 5000MHz effective or higher rather than 1800MHz GDDR3). The shaders would also need to be clocked at 6GHz, versus the stock 1.35GHz! ;)
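As a quick sketch, here is each of the 8800GTX's stock clock domains scaled by that 4.43x factor (uniform scaling across domains is the simplification; the stock clocks are from the spec sheet):

```python
# Scale each 8800GTX clock domain by the 4.43x points ratio derived above.
FACTOR = 4.43
stock_clocks_mhz = {"core": 575, "shader": 1350, "memory (effective)": 1800}

for domain, mhz in stock_clocks_mhz.items():
    print(f"{domain}: {mhz} MHz -> {mhz * FACTOR:.0f} MHz")
# core   -> ~2547 MHz (the ~2.5 GHz figure above)
# shader -> ~5981 MHz (the ~6 GHz figure above)
# memory -> a full 4.43x (~7974 MHz) isn't called for: the post targets
#           ~5000 MHz effective (~2.8x), since the 384-bit bus is already wide.
```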

Thanks for the link. My 8800 GTS 512 gets 50 points. No wonder I need to upgrade :p

-Luke

Edit: do multi-GPU setups get 2x Voodoo power on that page? Didn't see any listed.

Because of you, I edited in a note right at the top of the list saying that the single-card score should generally be multiplied by 1.55 to 1.65, depending on how new the generation is.

It should never be a full 2x, since the average dual-card setup generally scores 1.7-1.8x the single-card benchmarks when the reviews are averaged together. The ratings incur a further 15% penalty (which seems modest to me) for microstuttering, and for the fact that multi-GPU does not scale in roughly 15% of games out there - titles not covered by the latest reviews, which usually benchmark only a handful of really popular games that scale well. Usually, when it does not scale, it performs worse than a single card.
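One way to reconcile those figures - my reading, not a formula stated anywhere in the thread - is to apply the ~15% penalty to the second card's marginal gain rather than to the whole score:

```python
# Hypothetical reconstruction (my assumption, not a stated formula):
# penalize the *gain* from the second card by ~15% for microstutter and
# for the share of games where multi-GPU fails to scale.
def rated_multiplier(raw_scaling: float, penalty: float = 0.15) -> float:
    return 1 + (raw_scaling - 1) * (1 - penalty)

for raw in (1.7, 1.8):
    print(f"raw {raw:.1f}x -> rated {rated_multiplier(raw):.2f}x")
# 1.7x -> 1.60x and 1.8x -> 1.68x: in the ballpark of the 1.55-1.65 note.
```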
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Thanks, RussianSensation, for referring to the ratings. It does make me feel like the hard work is justified.

It's pretty much my go-to source when I want a quick estimate, since, as I understand it, you compile hundreds of reviews to derive an average value. So, for example, for the 680/7970 series, I'm guessing you compiled 10-20 reviews in an Excel sheet and came up with some average?

Also, because you account for many reviews and compile the data, that gives a much better average than cherry-picking 1-2 reviews/games. Sometimes finding reviews for older cards is too time-consuming, and your table takes the guesswork out of it since it keeps being updated with newer cards.

Great job! :thumbsup:

I wish there was a way to toggle 1080P vs. 1600P performance but I realize it would be a lot of work.
 

BoFox

Senior member
May 10, 2008
689
0
0
Oh, scratch that - the core clock does not need as much multiplication as the shaders (my mistake). The core at less than 2GHz would more than suffice (there already being 24 ROPs, etc.), but the shaders would probably need to be upwards of 7-8GHz, since there are only 128 of them.
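A rough per-unit sanity check of that revision (stock unit counts and clocks from the spec sheets; treating throughput as simply units x clock is my simplification):

```python
# Per-unit view: throughput ~ units * clock (a deliberate oversimplification).
# 8800GTX: 24 ROPs @ 575 MHz core, 128 SPs @ 1350 MHz hot clock.
# GTX 680: 32 ROPs and 1536 shaders, all at the ~1006 MHz base clock.
core_needed_mhz = 32 * 1006 / 24       # match the 680's ROP throughput
shader_needed_mhz = 1536 * 1006 / 128  # match the 680's raw shader throughput

print(f"core clock to match 680 ROP throughput:   {core_needed_mhz:.0f} MHz")
print(f"shader clock to match raw 680 throughput: {shader_needed_mhz:.0f} MHz")
# Core lands at ~1342 MHz, backing up "less than 2 GHz would more than
# suffice". The naive shader figure lands near 12 GHz; the 7-8 GHz estimate
# above implicitly credits G80's hot-clocked SPs with more useful work per
# clock than Kepler's cores.
```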

About compiling the reviews: it's a bit more complicated than that. I do give a little more weight to the more serious review sites like Techreport, Computerbase.de, PCGamesHardware.de, AlienBabelTech, XbitLabs, Hardware.Fr, HardwareCanucks, Anandtech, etc. Sites like Techpowerup and Techspot are given less weight due to the nature of their benches - some being completely CPU-limited on one hand, for example, and blatantly controversial or unfair on the other.
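Purely as an illustration of that kind of weighting (the sites are from the post, but the weights and relative-performance numbers below are invented for the example):

```python
# Illustrative only: a weighted average across review sites, in the spirit
# of the methodology described above. All numbers here are made up.
reviews = {  # site -> (relative performance vs. a baseline card, weight)
    "TechReport":   (2.10, 1.0),
    "ComputerBase": (2.25, 1.0),
    "TechPowerUp":  (1.90, 0.5),  # down-weighted, per the post
}

total_weight = sum(w for _, w in reviews.values())
average = sum(perf * w for perf, w in reviews.values()) / total_weight
print(f"weighted relative performance: {average:.2f}x")  # ~2.12x
```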

EDIT - About 1080p vs. 1600p: that is one of the reasons why 2GB cards receive higher scores than 1GB cards with the same GPU - as long as they are powerful enough to turn the extra memory into performance improvements. 1600p is also taken into account, but with less weight than 1080p, the practical resolution for most cards up to and including 40nm. For the top cards like the GTX 690 and HD 7970 GE, the 1600p resolution does carry a lot of weight. Given how expensive 1600p monitors are (~$1000), many 1600p gamers would buy two high-end cards for SLI/CF anyway. 3x1080p and even 3x1440p benchmarks are also taken into consideration for the ratings.
 
Last edited:

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
How fast would an 8800 GTX need to run to match a GTX 680 in Skyrim?

There is no such speed, at the quality settings you would use for the GTX 680. You would have to handicap the visual quality just to be able to make a comparison, at which point it's not even worth it as a hypothetical exercise.
 

BoFox

Senior member
May 10, 2008
689
0
0
How fast would an 8800 GTX need to run to match a GTX 680 in Skyrim?

There is no such speed, at the quality settings you would use for the GTX 680. You would have to handicap the visual quality just to be able to make a comparison, at which point it's not even worth it as a hypothetical exercise.
Yes, but there is a Quadro version using the same GPU, with 1.5GB of VRAM... probably still not enough for Skyrim, LOL!
 

Rifter

Lifer
Oct 9, 1999
11,522
751
126
Considering most overclocks do not scale in a linear fashion, especially at the speeds needed for this, I would guess it's into the terahertz range.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
He didn't say Skyrim.
If we're going to go that route, how fast would a GTX 680 be in a game from 2200? Herp derp, bottleneck!
We can't even conceive of that far into the future. In 5-10 years, though, the hardware and software could very well have changed enough that an apples-to-apples comparison could only establish how slow the old hardware actually is - and making it faster still would not make it good enough by comparison.
 

Leyawiin

Diamond Member
Nov 11, 2008
3,204
52
91
I know Skyrim wasn't part of the OP's question, but since it was mentioned I was curious and dropped my old 8800 GTX back in (with an X4 980 BE) to see how it did. I had to run it on the Medium preset to keep a playable frame rate (and it's OC'd to 648MHz). My GTX 460 1GB @ 850MHz can do Ultra and keep the same FPS.
 

BoFox

Senior member
May 10, 2008
689
0
0
Considering most overclocks do not scale in a linear fashion, especially at the speeds needed for this, I would guess it's into the terahertz range.

Only if there is a fixed bottleneck somewhere. Even then, terahertz would not help.

Like some of Intel's CPUs that run the L2 cache at full speed, overclocking is quite linear in its overall speed gains, even past the 6GHz range with LN2 cooling.

I would think it's the other way around - raising clocks usually helps more, percentage for percentage, than adding shaders.

Look at how well the HD 7770 does against the HD 7970, or even the HD 7870. The HD 7970 GE has well over 3 times the shader power, yet is only a little more than twice as fast as the HD 7770. Even the HD 7870, with an identically clocked core and twice the bus width, is not more than 90% faster than the HD 7770 overall.

The older HD 5870 is even worse than the HD 7970 when it comes to shader efficiency - it would have been better off clocked twice as high with half the SPs.
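Putting rough numbers on that comparison (stock shader counts and clocks; the roughly 2x real-world figure is this post's own estimate):

```python
# Raw shader power (SPs * clock) versus the real-world gap cited above.
cards = {  # card -> (stream processors, core MHz), stock specs
    "HD 7770":    (640,  1000),
    "HD 7870":    (1280, 1000),
    "HD 7970 GE": (2048, 1050),
}

base_sps, base_mhz = cards["HD 7770"]
for name, (sps, mhz) in cards.items():
    ratio = (sps * mhz) / (base_sps * base_mhz)
    print(f"{name}: {ratio:.2f}x the raw shader power of the HD 7770")
# HD 7970 GE comes out ~3.36x on paper yet is only ~2x faster in games --
# the scaling gap this post attributes to ever-wider shader arrays.
```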

A GTX 680 might have more efficient ROPs and improved usage of bandwidth, but the 8800GTX already has as many ROPs as a GTX 660 Ti that is nearly 4 times as fast with not even twice the core clock. And the bandwidth is not even double, either, so these bottlenecks are probably not going to hold the 8800GTX back when overclocking it. If both the core and shaders (and also memory) are overclocked, it seems that everything is pretty much being overclocked on the entire card.. perhaps the only thing it would need is PCI-E 2.0 instead of 1. Not even Skyrim uses FP16 textures (IIRC..?), so having FP16 filtering rate at full speed is hardly a factor yet. Other architectural improvements could be more of a factor (like improved 8xAA efficiency, SGSSAA, etc..), which would matter more if Skyrim were not a poorly optimized console port that runs nicely on 6-year old consoles with far less processing power.