[HardwareCanucks] GTX 660 Ti performance oddities

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
http://www.hardwarecanucks.com/foru...roundup-asus-evga-gigabyte-galaxy-msi-21.html



Throughout our in-game testing some interesting performance discrepancies were noticed. Some of these centered on the ASUS GTX 660 Ti TOP, which had the highest on-paper boost clocks but failed to consistently beat Gigabyte’s offering and at times even struggled to stay ahead of the MSI Power Edition. We also noticed that as we progressed through our four benchmark runs of each game, the EVGA GTX 660 Ti SC and NVIDIA’s reference card both showed slightly lower framerates from one repetition to the next. Granted, ASUS’ card still came out as the overall performance winner once the dust settled, and EVGA’s provided more than adequate performance, but we were left wondering: what was going on?




In our investigation the primary suspect was each card’s clock speeds, and that assumption bore fruit in short order. According to our findings, the ASUS TOP seemed unable to consistently hit the upper ranges of its Boost frequencies. Instead, it tended to fluctuate up and down quite a bit, with a few peaks thrusting into extremely high clock speed ranges. Comparing these results to those from the MSI, Gigabyte and Galaxy cards puts things into stark relief, since those products hit a mark and stay there throughout the benchmark, sometimes resulting in higher performance.

The two reference-based cards also showed an interesting side of their personas, as their clock speeds gradually decreased throughout the test, resulting in the aforementioned performance drop-off until their fans increased speed a bit. Now, the difference between maximum frequencies and where these cards end up after a few minutes is infinitesimal in the grand scheme of things, and an end user will never notice anything, but on paper at least you’ll be losing a few frames per second here and there.




Within Batman, our results for the ASUS card were actually well in line with expectations, as it provided class-leading framerates regardless of its constant clock speed dance. Once again, however, the EVGA SC and reference-clocked cards exhibited a tendency to step back their clock speeds, and this time it looks like the Power Limit is stepping in as well. Be it TDP, Power Limit, temperatures or some combination thereof, we are seeing a general downgrading of Boost values over the course of our benchmark.

This whole exercise brings up some worrying points about benchmarking NVIDIA’s Kepler-based cards in reviews (and charts) where every single FPS counts. Sites benchmarking with a single run or shorter sequences will likely capture the “best” results rather than realistic performance. Luckily, we have been able to avoid this issue by using four run-throughs of every benchmark, each with fairly long testing times. We’ll have a full article looking at GeForce Boost and AMD’s equivalent in the coming weeks, but for the time being, this is certainly food for thought.
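For anyone who wants to check this on their own card, logging clocks, temperature and power draw while a benchmark loops makes the decline easy to see. A minimal sketch using nvidia-smi's query interface (assumes a driver recent enough to support --query-gpu, and a single GPU; the filename and one-second interval are arbitrary):

```python
# Log graphics clock, temperature and power draw once a second so
# run-to-run throttling shows up in the data. Assumes nvidia-smi is
# on the PATH and the driver supports the --query-gpu interface.
import csv, subprocess, time

FIELDS = "clocks.gr,temperature.gpu,power.draw"

with open("boost_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time_s"] + FIELDS.split(","))
    start = time.time()
    while True:  # stop with Ctrl+C once the benchmark runs finish
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=" + FIELDS,
             "--format=csv,noheader,nounits"], text=True)
        # one line per GPU; take the first (assumes a single card)
        writer.writerow([round(time.time() - start, 1)]
                        + out.strip().splitlines()[0].split(", "))
        time.sleep(1)
```

Plot the resulting CSV and the gradual clock step-down across repeated runs should be obvious.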

This very same thing has happened to me with EVGA reference 680s. Clock fluctuation is VERY common on Kepler cards; thankfully I avoid that with MSI Lightnings.

Be it power throttle or temp throttle, this happens to a TON of Kepler cards, and it lowers performance / overclocks over time (although it will return to normal after temp/power returns to normal). I can't tell you how many times I've had performance go unstable on a 690 after overclocking for 10 minutes. Clock fluctuation too. Anyway, word of warning: choose your Kepler card brand WISELY. Notice MSI and Gigabyte keep their clock speeds. EVGA? Nope. I'm an advocate for MSI Lightnings, but be warned, brand does make a difference. I was quite happy seeing MSI as one of the brands with solid clock speeds throughout testing; I love their GPUs.
 
Last edited:

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
Haven't there been other threads about the 670/680s that were already showing signs of degradation?

Granted, this was from people who overclocked their 670/680s, but it's just... how long have these cards been out?
And people are already noticing degradation?

Gonna go out on a limb and just say what most are probably thinking...
These cards probably aren't gonna last 10 years...
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
It's not that kind of degradation (lifetime of the card); it's more of a clock speed lowering over time. It happens a lot on reference 680s due to temperature and power throttling.
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
Yeah, it happens because the cards are throttling down due to temperature and power usage. Let them sit for 10 minutes to cool down and they'll run up there again. Kind of lame if you ask me.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Rarely does a top-end video card that sees frequent use last very long. Kepler or not.

I ran my GTX470s @ 99% load nearly 24/7, 365 days a year, @ 760MHz. For the average gamer that's probably at least ~5 years of gaming (at 5 hours of gaming a day, that's 4.8 years). HIS states they guarantee full-load operation 24/7 for the full 2-year warranty period. That's 17,520 hours of non-stop operation, or 9.6 years at 5 hours of gaming a day. So I don't agree that it's normal for a video card's clocks to degrade so soon. In this case it's not physical degradation, but it seems Kepler's GPU Boost is not consistent enough over time, which is something many people have stated here already. Though that also makes the argument for a GTX 660 Ti OC even weaker against a 7950 OC.
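Spelling out that arithmetic (assuming 365 days a year of use, as the post does):

$$2\ \text{yr} \times 365 \times 24 = 17{,}520\ \text{h}, \qquad \frac{17{,}520\ \text{h}}{5\ \text{h/day} \times 365} \approx 9.6\ \text{yr}$$

and one year of near-24/7 load works out to $8{,}760\ \text{h} / 1{,}825\ \text{h/yr} \approx 4.8$ years of gaming.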
 
Last edited:

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Yeah, it happens because the cards are throttling down due to temperature and power usage. Let them sit for 10 minutes to cool down and they'll run up there again. Kind of lame if you ask me.

True, I should have clarified that. The clock speeds return to normal once temps and/or power use lower. Still, it is annoying because an overclock that you spend an hour trying to pin down can quickly be throttled in a real-world gaming situation, so when you enter a certain boost (or even a pre-overclocked boost that a manufacturer sets) it will quickly normalize to lower clock speeds over a period of 10 to 15 minutes. That is quite annoying when you -need- the performance.

That being said, it's okay in less demanding games such as Darksiders 2. Do I really need 1300-1400MHz in DS2? No, no I don't, so I'll happily play at a lower clock speed. However, when playing something like The Secret World or Max Payne 3 and seeing your clock speeds either A) bounce all over the place or B) drop after 10-15 minutes from temps / power, it is very annoying to not get the overclock, or even the stock clock, that you should be getting.
 
Last edited:

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Your thread title makes one think that the GTX 660 Ti cores are slowly dying. Maybe you should change the title to something more accurate to the situation? cmdrdredd said it succinctly.
 

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
231
106
I ran my GTX470s @ 99% load nearly 24/7, 365 days a year, @ 760MHz. For the average gamer that's probably at least ~5 years of gaming (at 5 hours of gaming a day, that's 4.8 years). HIS states they guarantee full-load operation 24/7 for the full 2-year warranty period. That's 17,520 hours of non-stop operation, or 9.6 years at 5 hours of gaming a day. So I don't agree that it's normal for a video card's clocks to degrade so soon.
In my statistics, over the course of 10 years, more than half of the top-end cards have either died or required component repair (incl. cooling). It's the heat and power cycles that kill electronics (not the total uptime). Your GTX470s might not turn on tomorrow. Of course, if they are well looked after, they will serve longer. The higher-end parts naturally degrade faster; that was my point. People tend to forget about these things.
 
Last edited:

toyota

Lifer
Apr 15, 2001
12,957
1
0
This boost nonsense was nothing more than a last-minute idea to help the "mid range" GK104 cards match or exceed the 7970/7950. I don't know why so many people praise it, because to me it just causes inconsistent results. It's also silly that the cards will throttle at 70°C, which means their modest reference coolers can't even let the cards fully boost in realistic conditions without cranking the fan up.

I am also sick of people praising the cards for being efficient. In reality they are not much more efficient than the bigger 7950/7970 cards, and they even use more power in some games. Once you OC the 7970/7950 cards then yes, GK104 does look better on power, but it also starts losing across the board.

And the GK104 reference cards are cheaply built POS IMO. It's like they tried to cut every corner possible yet charge consumers the most they could get away with for the performance. That is especially true for the 670 and 660 Ti PCBs and coolers, which are a complete joke for cards at their asking price.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
In my statistics, over the course of 10 years, more than half of the top-end cards have either died or required component repair (incl. cooling). It's the heat and power cycles that kill electronics (not the total uptime). Your GTX470s might not turn on tomorrow. Of course, if they are well looked after, they will serve longer. The higher-end parts naturally degrade faster; that was my point. People tend to forget about these things.

What temperatures do you maintain your cards at? I mean, it's not just the GTX470s; the same applies to the 4890 and 6950 @ 6970 speeds I've owned, and the 7970 I have now. I have never had any GPU fail on me. You are saying it's expected that high-end parts degrade and fail over time. Is that just based on your own experience with your own computers, or on some statistical data? I don't know if people would buy $400-500 GPUs if they knew they had only a 2-year shelf life and would then start failing out of the blue. Also, such a situation would be extremely costly for AIBs, and many, such as Gigabyte, cover warranty up to 3 years. If you run your GPUs at 95-100°C 24/7, maybe.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,949
504
126
I've owned many, many video cards, dozens. Out of those, two died on me: an 8800GT and an ATI X1650 XT. The Nvidia card just died; one day I came home and the computer was on but the screen was black, no response. The card never worked after that. The X1650 XT started artifacting and it got worse and worse; I tried to bake the card but that killed it completely.

I've never had a card degrade in performance over time. Hotter over time, yes, because of dust or TIM slowly losing its effectiveness. It's definitely not normal to expect a video card to perform worse the longer you own it.
 

VulgarDisplay

Diamond Member
Apr 3, 2009
6,193
2
76
So I wonder how the reviews of these new Nvidia cards were handled. Do they do several runs and then take an average, or just one run before shutting the benchmark down? It's entirely possible that the reviews of all these cards don't at all represent real-world performance, and it's actually lower than many are expecting.

If they let the GPU get good and hot before they benchmark, then the numbers are most likely what a consumer could expect.
 

KompuKare

Golden Member
Jul 28, 2009
1,016
931
136
So I wonder how the reviews of these new Nvidia cards were handled. Do they do several runs and then take an average, or just one run before shutting the benchmark down? It's entirely possible that the reviews of all these cards don't at all represent real-world performance, and it's actually lower than many are expecting.

If they let the GPU get good and hot before they benchmark, then the numbers are most likely what a consumer could expect.

Well, even Nvidia's PR machine would not have been able to hide it if their review guide for Kepler advised this:
[image: 79485c68_redneck_engineering_15.jpg]
 

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
Well, even Nvidia's PR machine would not have been able to hide it if their review guide for Kepler advised this:

Actually, NV's reviewer's guide encourages the use of fully enclosed cases in benchmarks.

But please, do keep the conspiracies spinning. I'm sure something dirty will come out sooner or later.
 

Grooveriding

Diamond Member
Dec 25, 2008
9,108
1,260
126
One of my 680s (maybe both) has degraded since I got them on launch day. They ran fine for a while with the overclock I gave them, then that imploded and I started to get crashing/artifacts. Pulling my overclock back about 40MHz fixed it.

Considering you can't add voltage to these, it was pretty weird. I've seen other users of these cards having the same experiences. I'm pretty sure the 680, being the flagship, is in particular nearly redlined from the factory. This is why Nvidia mandates those draconian restrictions on voltage control to AIBs for Kepler cards. They pushed this little chip as hard as possible to get it close to competitive with AMD's Tahiti XT and don't want it pushed any further, to avoid RMA issues. IMO.

I bet we see voltage control return on GK110, along with a card with a real PCB design that doesn't look like something that belongs paired with an HTPC card. :sneaky:
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Rarely does a top-end video card that sees frequent use last very long. Kepler or not.

Source?

In my statistics, over the course of 10 years, more than half of the top-end cards have either died or required component repair (incl. cooling). It's the heat and power cycles that kill electronics (not the total uptime). Your GTX470s might not turn on tomorrow. Of course, if they are well looked after, they will serve longer. The higher-end parts naturally degrade faster; that was my point. People tend to forget about these things.

You have statistics? Please share.

This boost nonsense was nothing more than a last-minute idea to help the "mid range" GK104 cards match or exceed the 7970/7950. I don't know why so many people praise it, because to me it just causes inconsistent results. It's also silly that the cards will throttle at 70°C, which means their modest reference coolers can't even let the cards fully boost in realistic conditions without cranking the fan up.

I am also sick of people praising the cards for being efficient. In reality they are not much more efficient than the bigger 7950/7970 cards, and they even use more power in some games. Once you OC the 7970/7950 cards then yes, GK104 does look better on power, but it also starts losing across the board.

And the GK104 reference cards are cheaply built POS IMO. It's like they tried to cut every corner possible yet charge consumers the most they could get away with for the performance. That is especially true for the 670 and 660 Ti PCBs and coolers, which are a complete joke for cards at their asking price.

While I think GPU Boost has the ability to offer higher average performance, it also offers the opportunity to abuse the review process, because you don't have any guarantee that the card you buy will perform like the card in the review. When reviewers get cards that boost to *1300MHz, you aren't likely to see similar performance.



*Since the TDP of this video card is 195W, there might be some games that don't come close to tapping the full power of this video card. In these lower-powered games, the GeForce GTX 680 is able to raise the GPU frequency to give you better performance until it reaches TDP. This means the GPU clock speed could increase from 1006MHz to 1.1GHz or 1.2GHz or potentially even higher. (Kyle saw a GTX 680 sample card reach over 1300MHz running live demos but it could not sustain this clock.) The actual limit of the GPU clock is unknown. As each video card is different, and with the addition of custom cooling, your maximum GPU Boost clock speed could be anything…within reason. This is going to make overclocking, and finding the maximum overclock, a bit harder.

*GPU Boost is guaranteed to hit 1058MHz in most games. Typically, the GPU will be going much higher. We experienced clock speeds in demo sessions that would rise to 1.15GHz and even 1.2GHz in such games as Battlefield 3. With a GPU frequency increase of that much over base clock, you can rest assured it will be a noticeable performance difference.
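Reading those notes, the described behavior amounts to a simple feedback loop: step the clock up one bin while there is power and thermal headroom, and step it back down when a limit is hit. A toy sketch of that logic; the real algorithm is Nvidia's and unpublished, so every name here is a stand-in (though 13MHz does match Kepler's boost bin granularity):

```python
# Illustrative sketch of the GPU Boost behavior described above.
# The real algorithm is Nvidia's and unpublished; all names and
# thresholds here are stand-ins drawn from the quoted numbers.

BASE_CLOCK = 1006    # MHz, GTX 680 base clock
BOOST_STEP = 13      # MHz per boost bin
POWER_LIMIT = 195.0  # watts, GTX 680 TDP per the quote above
TEMP_LIMIT = 70      # deg C, throttle point mentioned in the thread

def next_clock(clock_mhz: int, power_w: float, temp_c: float) -> int:
    """Step the clock up one bin while under both limits,
    and back off one bin as soon as either limit is hit."""
    if power_w < POWER_LIMIT and temp_c < TEMP_LIMIT:
        return clock_mhz + BOOST_STEP               # headroom left: boost
    return max(BASE_CLOCK, clock_mhz - BOOST_STEP)  # throttle back
```

A light workload never hits either limit, so the clock keeps climbing to whatever a given sample sustains; a heavy one oscillates around the limits, which is exactly the run-to-run variation the review measured.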
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
So I wonder how the reviews of these new Nvidia cards were handled. Do they do several runs and then take an average, or just one run before shutting the benchmark down? It's entirely possible that the reviews of all these cards don't at all represent real-world performance, and it's actually lower than many are expecting.

If they let the GPU get good and hot before they benchmark, then the numbers are most likely what a consumer could expect.

Reviewers would never, ever do a single run of a benchmark for a review. They do several runs; every single number is actually an average of a handful of runs back to back. This should be the most basic rule for anyone who wants to do reviews. You couldn't be taken seriously or have a legitimate site if you didn't run each benchmark several times; your reviews would be terribly inconsistent and unreliable.

blackened23 said:
True, I should have clarified that. The clock speeds return to normal once temps and/or power use lower. Still, it is annoying because an overclock that you spend an hour trying to pin down can quickly be throttled in a real-world gaming situation, so when you enter a certain boost (or even a pre-overclocked boost that a manufacturer sets) it will quickly normalize to lower clock speeds over a period of 10 to 15 minutes. That is quite annoying when you -need- the performance.

That being said, it's okay in less demanding games such as Darksiders 2. Do I really need 1300-1400MHz in DS2? No, no I don't, so I'll happily play at a lower clock speed. However, when playing something like The Secret World or Max Payne 3 and seeing your clock speeds either A) bounce all over the place or B) drop after 10-15 minutes from temps / power, it is very annoying to not get the overclock, or even the stock clock, that you should be getting.

This is every bit related to the fan speed from the quiet fan profile. Nothing more to it than that. No need to panic:

the very same article said:
The two reference-based cards also showed an interesting side of their personas, as their clock speeds gradually decreased throughout the test, resulting in the aforementioned performance drop-off until their fans increased speed a bit. Now, the difference between maximum frequencies and where these cards end up after a few minutes is infinitesimal in the grand scheme of things, and an end user will never notice anything, but on paper at least you’ll be losing a few frames per second here and there.
http://www.hardwarecanucks.com/foru...roundup-asus-evga-gigabyte-galaxy-msi-21.html

This is something that is controlled by the driver. The throttling happens before the fans spin up; once the temps don't come down, the fans come on and then the boost starts again. Ultra-quiet fan profiles cause this to happen. It's not the end of the world, and it's not unusual. You would never really notice it while playing. If you bump up your fans or use a more aggressive profile, you won't see it at all. The only reason the EVGA card does this in the review is the cooler and the fan's delay before kicking in. MSI doesn't show this effect, nor does any other card with a better cooler.

You may not know this, but a driver update could easily end this "anomaly". I don't see it as any such thing, though. It's just a result of the temperature going up. The fans wait to see if dropping the boost will help; if the temps remain, the fans kick in and drop the temps, so the boost comes back again. Anyone overclocking shouldn't use quiet fan profiles, and anyone who pushes up the fans won't see this even on paper. Anyone who buys a GPU with a good cooler wouldn't see it either. Case airflow might also be a factor in keeping the boost from going as high.

There are a lot of things to consider, but this isn't strange by any means. It's a product of the normal operation of Kepler boost clocks. If it's something that might bother you, bump up the fan slightly. Even if you don't, the driver will soon enough and the boost will be back to normal. The only reason Hardware Canucks ever caught this is that the conditions were just right: the temps were teetering the boost down and up as the fan triggered up and down to compensate. A perfect storm to which many factors contribute, such as ambient temps, airflow, etc.
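To make that loop concrete, here is a toy simulation of the interaction described above: the boost backs off first, the fan ramps up afterwards, and the clock climbs back once temps recover. Every constant is invented purely for illustration, not taken from any real card or driver.

```python
# Toy simulation of the quiet-fan-profile loop described above.
# All numbers are invented for illustration only.

temp, fan, clock = 60.0, 30, 1150   # deg C, fan %, MHz

for second in range(900):            # fifteen minutes under load
    # heat rises with clock speed and is removed by the fan
    temp += clock / 1150 * 0.8 - fan / 100 * 1.5
    if temp > 70:                    # limit hit: boost backs off first
        clock = max(1006, clock - 13)
        fan = min(100, fan + 2)      # the fan ramps up only afterwards
    elif temp < 65:                  # temps recovered: boost returns
        clock = min(1150, clock + 13)
        fan = max(30, fan - 1)       # quiet profile eases the fan back
```

Run it and print clock each second: it spends the first minute at full boost, sags once the temp limit is hit, then settles into exactly the down-and-up teetering the review caught.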
 
Last edited: