Rumor: Price Cuts on GTX660Ti series coming next week


blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
You claimed a GTX 470 was quiet because, after all of that work, you got it to 75°C under load (is that supposed to be good for a heavily modified card?). It's just a false point you are using to justify your purchase of triple-SLI 470s. The reality is simpler: the 470 was a crappy card.

Those are fighting words! D:
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
The 470 was a good card, except for one thing. On the plus side, the thermonuclear nature of Fermi spurred some innovations in cooling design.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Back on topic. We all know it's evil AMD's fault that the 660ti is overpriced. They were using Jedi mind tricks to get nVidia to do their bidding. nVidia is now no longer under AMD's evil spell and is going to price the 660ti where they always wanted to, being the kind and gentle company that they are. /sarc

The problem is that by the time they get the price where it belongs, it'll be a $200 card. $270-$320 is still too high. $270 is good, but how many are actually going to sell at that price?
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
You claimed a GTX 470 was quiet because, after all of that work, you got it to 75°C under load (is that supposed to be good for a heavily modified card?). It's just a false point you are using to justify your purchase of triple-SLI 470s. The reality is simpler: the 470 was a crappy card.

That's his reality. Ask what he sold them for used. I think, in justifying his 6950 purchase, he flipped them for more than he paid! They must have been special! /sarcasm
 

sze5003

Lifer
Aug 18, 2012
14,318
682
126
Back on topic. We all know it's evil AMD's fault that the 660ti is overpriced. They were using Jedi mind tricks to get nVidia to do their bidding. nVidia is now no longer under AMD's evil spell and is going to price the 660ti where they always wanted to, being the kind and gentle company that they are. /sarc

The problem is that by the time they get the price where it belongs, it'll be a $200 card. $270-$320 is still too high. $270 is good, but how many are actually going to sell at that price?

I'm going to guess that the ones starting at $275 will be the reference-cooler designs. Anything above that and you'll pay up to $320 or so. I got my 560Ti last October and paid around $260 for it.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Do you really think that a video card makes any appreciable contribution to system power draw besides the power it uses directly? Is there any testing that's ever been done that shows that?

Well, it's the only logical explanation I can come to when looking at sites like hardocp, hardwarecanucks, and anandtech and cross-referencing them with techpowerup.

Total system power draw with a gtx670 is only 7 watts higher than the same system with an hd7870, and 36 watts lower than with the hd7950:

[total system power draw chart]

...or the gtx670 system draws a watt less than with the hd7950:

[total system power draw chart]

...and draws 16 watts less than with the hd7950:

[total system power draw chart]


Yet techpowerup shows the gtx670 drawing more average and peak power than the hd7950: http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_670/26.html Most, if not all, reviews (this is just one example) show that an identical system with a gtx670 draws the same or less power than with an hd7950, yet when techpowerup isolates the GPUs the results are way, way different from what they should be if all other things stayed the same. If you have a solid explanation for this I'm all ears; otherwise, what is happening is exactly what I am describing.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_670/26.html Most, if not all, reviews (this is just one example) show that an identical system with a gtx670 draws the same or less power than with an hd7950, yet when techpowerup isolates the GPUs the results are way, way different from what they should be if all other things stayed the same. If you have a solid explanation for this I'm all ears; otherwise, what is happening is exactly what I am describing.
Let's do a little introduction to the scientific method: what else is different among these tests?
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
You all completely skipped over the point I was trying to make. If X gpu is more efficient when isolated, that's fine and dandy and neat to point out, but if X gpu causes the rest of the system to work harder and draw more power than the less efficient Y gpu then in my opinion it's just as important to have that information and to measure the total power draw of the system. Total system power is what shows up on the electric bill, so in my opinion that trumps anything when it comes to measuring power and efficiency.

tviceman, I did not ignore that total system power consumption should be considered. What I said is that you cannot accurately derive GPU power consumption from total system power consumption without accounting for PSU inefficiency, because the PSU's losses apply to every component. If you just take the difference at the wall, you end up assigning part of the PSU's losses to the GPU, which is mathematically incorrect.

Example:

1A) 200W GPU + 150W system @ load @ 80% PSU efficiency = 350W actual or 438W at the wall

2A) 200W GPU + 150W system @ load @ 92% PSU efficiency = 350W actual or 380W at the wall

vs.

1B) 150W GPU + 150W system @ load @ 80% PSU efficiency = 300W actual or 375W at the wall.

The GPU power consumption difference is 50W but this shows up as 63W due to PSU inefficiency.

2B) 150W GPU + 150W system @ load @ 92% PSU efficiency = 300W actual or 326W at the wall

The GPU power consumption difference is 50W but this shows up as 54W.

You cannot measure GPU power consumption unless you take PSU efficiency into account.
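For anyone who wants to sanity-check the arithmetic, here is a rough Python sketch of the example above (the wattages and efficiencies are the illustrative figures from the example, not measurements, and the helper name is just for illustration):

```python
def wall_power(gpu_w, rest_w, psu_eff):
    """DC load divided by PSU efficiency = what the meter at the wall reads."""
    return (gpu_w + rest_w) / psu_eff

rest = 150  # rest-of-system load from the example above, in watts

for eff in (0.80, 0.92):
    big = round(wall_power(200, rest, eff))    # 200W GPU
    small = round(wall_power(150, rest, eff))  # 150W GPU
    # The true GPU delta is 50W; at the wall it shows up inflated by ~1/efficiency
    print(f"{eff:.0%} PSU: {big}W vs {small}W at the wall -> apparent GPU delta {big - small}W")

# 80% PSU: 438W vs 375W at the wall -> apparent GPU delta 63W
# 92% PSU: 380W vs 326W at the wall -> apparent GPU delta 54W
```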

Also, under real world conditions a typical modern system draws 300W of power in gaming without a display:

[chart: typical system power consumption while gaming]


Add to this your LCD/LED/plasma, an overclocked CPU for many of us, maybe a lamp in your room, and the extra 30-50W of power is mostly irrelevant in most of North America. In the context of a modern desktop rig in your room/office, the total power consumption of the PC + monitor is likely to be 350-400W already.

In North America the cost-savings argument is difficult to justify: even if you game 10 hours a day straight, 50W of extra power @ 10 hr/day x 365 days @ $0.20 per kWh = ~$36 per annum.

How many gamers here game 10 hours a day for 365 days?
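If anyone wants to plug in their own hours and rate, the arithmetic is simple enough. A rough Python sketch (the 2 h/day and $0.12/kWh in the second example are my own illustrative assumptions, not figures from any review):

```python
def annual_cost(extra_watts, hours_per_day, dollars_per_kwh, days=365):
    """Yearly cost of an extra electrical load: watts -> kWh -> dollars."""
    kwh_per_year = extra_watts / 1000 * hours_per_day * days
    return kwh_per_year * dollars_per_kwh

print(f"${annual_cost(50, 10, 0.20):.2f}")  # $36.50 - the 10 h/day worst case above
print(f"${annual_cost(50, 2, 0.12):.2f}")   # $4.38  - a more modest 2 h/day at $0.12/kWh
```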

Further, the cost-savings argument against the AMD HD7900 series is undermined completely by the HD7900 series' ability to make money bitcoin mining. If costs were in fact your primary consideration in the overall system build, it would be difficult if not impossible to ignore the bitcoin-mining capability of AMD cards. Thus, it is not a reasonable position to take: if saving money were the most important factor, then buying HD7950/7970 cards would be the only logical outcome. If bitcoin mining is not a consideration, then cost savings cannot be the most important factor by default, imo.

It is also difficult to argue that if a modern system is already drawing 350W with the monitor, another 50W will dramatically heat up your room/office.

If we take the position that the extra power consumption is extremely important, one has to question why such a person does not just buy a PlayStation 3, since the entire device in its Slim SKU now draws just 108W at full load. If 50W of extra power consumption is so vital for your gaming PC, why would you not purchase a PS3 and save 250W?

Furthermore, the cost argument is further undermined by the fact that a 1025-1050MHz HD7950 draws 167-170W of power yet delivers near-GTX680 performance. The immediate cost savings on such a purchase is ~$170, as the 7950 retails for $320-330. Even against an after-market 670, that's a savings of $70-80. You cannot mathematically recoup that through the GTX680's lower power consumption vs. an overclocked 7950, since you can simply stop overclocking the moment you reach GTX680-level performance, which occurs very near the GTX680's power consumption level. The end result is still a savings of ~$170.

The extra power consumption of an overclocked 7950 vs. an overclocked 660Ti also buys superior performance. Thus, the extra power draw is not wasted, as it results in a measurable performance advantage. Since I presume you have chosen to game on a PC for its superior performance and image quality, despite a PS3 consuming a third of the power of a gaming PC, why would you assign no value to an overclocked 7950's ~20% performance advantage over an overclocked 660Ti? Of course, it's an individual's choice whether performance/watt is more important than absolute performance. However, this creates a circular argument, because the GTX660Ti itself is not the leader in performance/watt, which means that if performance/watt is the most important metric, then the 660Ti does not fit this metric either:

[performance per watt chart, 1920 resolution]


If we then assume that the 7950's extra power consumption and its ~20% overclocked performance advantage over an overclocked 660Ti are not worth it because performance/watt is a key metric, we still arrive at the conclusion that the HD7870 is superior to the GTX660Ti. Since the 660Ti offers much worse performance/$ and inferior performance/watt, we arrive at the conclusion that the 660Ti needs a price drop against the 7870 SKU.

So what has been stated earlier has been proven true:

1) If performance/watt is the most important metric, 660Ti needs a price drop against the 7870;
2) If performance/$ is the most important metric, 660Ti needs a price drop against the 7870;
3) If #1 and #2 are the most important metrics, 660Ti needs a price drop against the 7870;
4) If top OCed performance is the most important metric, 660Ti needs a price drop against the 7950.
5) If MSAA performance is the most important metric, the 660Ti needs a price drop against the 670, since by historical market trends a mid-range SKU and a top-tier SKU should not have linear price/performance scaling.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com

I am not talking about overclocking at all, and the ultimate point I am trying to make is not about cost. I totally understand that a PSU is anywhere between 70-90% efficient depending on its load and that the actual power use of the system is not what the numbers read at the wall; that doesn't change the point I am trying to get at. It's not the power draw numbers themselves: it's that techpowerup shows GPU X drawing less power than GPU Y when isolated, yet every other website shows that with identical systems, total system power draw with GPU Y is less than or equal to that with GPU X.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
[performance per watt chart, 1920 resolution]


It now seems that even if performance/watt is the most important metric, we still arrive at the conclusion that the HD7870 is superior to the GTX660Ti. Since the 660Ti offers much worse performance/$ and inferior performance/watt, we still arrive at the conclusion that the 660Ti needs a price drop against the 7870 SKU.

Now this is what I am getting at, although I wasn't trying to match up any two specific cards against each other. Techpowerup isolated the GPUs and showed the hd7870 drawing less power than the gtx660ti. But Anandtech measured the power draw of the whole system and found the difference to be negligible (2 watts), making the same system with the gtx660ti more efficient per watt than with the hd7870 when gaming.

The final system power draw is more important to me because, as I am arguing, if techpowerup's power measurements are accurate, then certain GPUs require other components (the CPU) to work harder.
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
Different benchmarks used. ;)
Very good :)
They're all using different benchmarks to measure their power draw, and the only dramatically inconsistent result is techpowerup's. One of these things is not like the other.
Again though, it's "dramatic" only because you're not thinking it through. The only assumption I'm making is that all these sites are correctly and honestly reporting their data (which is a big assumption, but I digress). In our sample we have many variables that affect the measured power consumption:

- Different applications (different games, benchmarks, etc.)
- Different systems
- Different drivers
- Different cards (reference vs. aftermarket vs. age, etc.)
- Different measurement methods
- Different folks measuring
- Other sources of gross error

You're making a very simple but substantial mistake in assuming you can compare these different data sets without accounting for the above and more. Therefore it's easiest and best to do a stratified meta-analysis based on testing conditions. In summary, we can see that the GTX 670 sometimes uses more power than, sometimes less power than, and sometimes the same power as a 7950.

But here are some more statistical fundamentals for you: do you know what this all means? They're not significantly different, and all of this arguing is pointless. :D
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
They're all using different benchmarks to measure their power draw, and the only dramatically inconsistent result is techpowerup's. One of these things is not like the other.


They are all inconsistent?

AT: 670 to 7950 difference = 36w
[H]: 670 to 7950 difference = 1w
HC: 670 to 7950 difference = 16w
TPU: 670 to 7950 difference = 18w avg. / 17w peak (in a more recent review)

If anything it shows that you can't compare numbers from different sites with different drivers, benchmarks, systems, measuring techniques, etc.

Besides, it's not like we are talking 100w differences here. Even if it's 36w at the wall, it's not enough to really fret over.
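Just to put a number on that spread, here's a quick Python sketch (sign convention and figures taken from the posts in this thread: negative means the 670 setup drew less, and note TPU's number is a card-only measurement, so it's arguably not even the same kind of reading):

```python
import statistics

# GTX 670 minus HD 7950 power deltas quoted in this thread, in watts
# (negative = the 670 setup drew less; TPU measures the card in isolation)
deltas = {"AnandTech": -36, "[H]": -1, "HardwareCanucks": -16, "TechPowerUp": 18}

values = list(deltas.values())
print(f"mean {statistics.mean(values):+.2f}W, stdev {statistics.stdev(values):.1f}W")
# mean -8.75W, stdev 22.9W -- the scatter dwarfs the average, and the sign flips
```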
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
They are all inconsistent?

AT: 670 to 7950 difference = 36w
[H]: 670 to 7950 difference = 1w
HC: 670 to 7950 difference = 16w
TPU: 670 to 7950 difference = 18w avg. / 17w peak (in a more recent review)

If anything it shows that you can't compare numbers from different sites with different drivers, benchmarks, systems, measuring techniques, etc.

Besides, it's not like we are talking 100w differences here. Even if it's 36w at the wall, it's not enough to really fret over.

You are spinning the data.

Anandtech shows the system with the gtx670 drawing 36 fewer watts.
Hardware Canucks shows the gtx670 drawing 16 fewer watts.
Techpowerup shows it drawing 18 MORE watts. If all else were the same, the gap between anandtech and techpowerup would be 54 watts when comparing the same two video cards (34 watts against hardwarecanucks).

And I agree, it's not enough to fret over. I'm not trying to reach any conclusion as to which card or vendor has the more efficient architecture; I'm pointing out the big difference between techpowerup's method of measuring power draw and other websites', and why I think the other websites' power draw numbers, taken together, show there is more going on than what isolating a GPU's power draw will reveal.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
It's not the power draw numbers themselves: it's that techpowerup shows GPU X drawing less power than GPU Y when isolated, yet every other website shows that with identical systems, total system power draw with GPU Y is less than or equal to that with GPU X.

There is no contradiction; you are not comparing the same videocards. TechPowerUp shows that a reference 7970 card uses more power than an after-market 7970 GE card. That is true. AT only tested a reference 7970 GE card, which is not an actual product. There is no such thing as a reference 7970 GE product in retail; it's only a review sample.

"With that said, we suspect the story of the 7970 GHz Edition hasn't been completely told just yet. AMD's partners haven't delivered their customized cards, and as we've noted, our review unit is more of a reference design than an actual product." ~ TechReport

This is the point we are trying to communicate to you guys: you cannot use reference cards to predict the power consumption of after-market cards. Using a reference 7950B and a reference 7970 GE to measure power consumption, as was done in AT's review, is ludicrous for comparing after-market versions, since a cooler videocard with upgraded components uses less power, and the cards don't even use the same BIOS.

There is no anomaly in TPU's numbers like you're claiming. TPU shows that a reference 7970 GE card draws a ton of power, 30-40W more than an after-market version. Countless reviews show that an after-market 7970 only draws 225-238W of power @ 1150-1165MHz; a reference 7970 GE card draws around that at just 1050MHz. This is all known. This is why using reference power consumption numbers to extrapolate to after-market 7950/7970 cards is completely meaningless (especially since you can't buy a reference 7970 GE card at retail; it does not exist).

You are spinning the data.

Anandtech shows the system with the gtx670 drawing 36 fewer watts.
Hardware Canucks shows the gtx670 drawing 16 fewer watts.
Techpowerup shows it drawing 18 MORE watts. If all else were the same, the gap between anandtech and techpowerup would be 54 watts when comparing the same two video cards (34 watts against hardwarecanucks).

Which specific cards are you even referring to? Reference cards, because that's all AT tested. Meaningless. A 7950B card @ 1.25V is all you are comparing.

I can find tens of reviews that show the opposite, not that it matters.

7950 uses less than 670 @ Techreport
[load power consumption chart]


7950 uses less than 670 @ Techspot
http://www.techspot.com/review/565-nvidia-geforce-gtx-660-ti/page11.html

7950 uses less than 670 @ Guru3D
http://forums.anandtech.com/showpost.php?p=33856350&postcount=53

That's enough to prove the point that TPU's results are not an anomaly.

The only thing that keeps evading you guys for some reason is that reference 7950B / 7970 GE cards have nothing to do with how after-market 7950/7970 GE cards behave. No matter how many times we tell you, you ignore it.

1) You cannot buy a reference 7970 GE card at retail. Using it to indicate noise levels or power consumption is meaningless, since it's not a real product you can buy.
2) After-market 7970 GE cards use significantly less power than a 7970 GE reference design card.
3) There are countless 7950 cards that ship at 880-950MHz clocks with 0.97-1.1V. What makes a reference 7950B card representative of the 7950 series? The only thing that card represents is one such card on Newegg.

The interesting part is that, unlike a healthy enthusiast forum where people share their own experiences, power consumption and voltage numbers, and noise levels for after-market cards, when this information is presented by actual 7950/7970 owners it's ignored, and instead the focus shifts to some reference 7950B and a non-existent retail reference 7970 GE card that no one on our forum owns! Talk about a red herring.

So what's the point of this sub-forum if we ignore people's actual real-world usage of after-market cards? Or are we only allowed to discuss user experiences with NV after-market cards? Sharing our own experiences (both positive and negative) strengthens this community. Instead, you guys dismiss all of it, turn it into another AMD vs. NV flame-fest, and by dismissing what overclockers are telling you about how after-market 7950/7970 cards actually behave in the real world, you negate the whole point of sharing any real-world experience/data here. Isn't sharing our actual experience part of why this sub-forum has value? Or should we only read reviews and never ask people who use the product how it actually functions?

It's pretty interesting that when actual owners of said products say reference cards have nothing in common with the voltage levels and overclocking of after-market cards with different BIOSes, this is dismissed as irrelevant. Unbelievable.
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
.............

Are you suggesting that when these reviewers change only the GPU for power consumption testing, their PSU efficiency is somehow affected, going up or down? In these reviews only the GPU is different on the same system, and this shouldn't have much effect on PSU efficiency. At the end of the day, it's still the real-world effect and typically what you would expect to see.

The most conclusive and accurate way to compare real power consumption is at the wall. This is the true power consumption, and it is absolute. If you change only the GPU, you will see the absolute effect it has on power consumption. This method takes everything into account; it is much more realistic than methods that don't include the complete system. It's the most reliable way to see the real consumption from adding a GPU. It is exact and precise.

I am with tviceman on this, and it appears most reviewers would also agree. It seems like only a tiny few do not use a total-system-draw-at-the-wall method.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
GPU efficiency is not a bad thing - don't get me wrong - but when you're arguing over 30-36 watts at 100% load, you all realize this makes a net difference of less than $2 per year? (Actually, the difference is likely less than $1, TBQH.) Idle power is roughly the same, and unless someone games 24/7 there will not be a noticeable difference in the power bill, period. The only things I care about efficiency-wise are the indirect effects (noise), which aren't a big deal since most cards have great aftermarket coolers, but everyone has to make their own judgement call there. When I had 7970s months ago I never thought the reference noise was terrible, but I also kept it at 40% manual fan speed. I do feel the 680 has a better reference cooler (and hopefully AMD will make a better one for their next gen), but then again there are tons of mostly noise-free aftermarket cards as well.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
GPU efficiency is not a bad thing - don't get me wrong - but when you're arguing over 30-36 watts at 100% load, you all realize this makes a net difference of about $2 per year? Because idle power is roughly the same.

Again - GPU efficiency is obviously a good thing, because chip makers want to get their chips into mobile products, and efficiency means more headroom for mobile and such. But I'm just saying, for desktop, some of the indirect effects of efficiency may bother me (noise, increased ambient temps), but I'm not sure why a power bill would be brought up. Maybe outside the US, with higher electricity costs, things are different, I guess.

Argh. I'm not arguing about which card is more efficient; I don't really care. I am arguing about the best way to show how efficient something is: by isolating the card and ignoring other factors that may skew whole-system results, or by NOT isolating it and just looking at the entire system. My argument is from an end-user point of view: comparing identical systems with different graphics cards is better than isolating the graphics cards, because different GPUs place different loads on the CPU and/or other system components.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Argh. I'm not arguing about which card is more efficient; I don't really care. I am arguing about the best way to show how efficient something is: by isolating the card and ignoring other factors that may skew whole-system results, or by NOT isolating it and just looking at the entire system. My argument is from an end-user point of view: comparing identical systems with different graphics cards is better than isolating the graphics cards, because different GPUs place different loads on the CPU and/or other system components.

Gotcha, that makes sense. I hopped in mid-thread, so I didn't catch all of it.
 

chimaxi83

Diamond Member
May 18, 2003
5,457
63
101
Thank you for this highly informative contribution. Go back into your cave now.

Oh dang. Hey man, don't be so mad. How many watts does everyone's overclocked CPU suck up? Why aren't you crying about that? You obviously biased people are sickening. Arguing over a power draw difference smaller than a freakin' light bulb gets you going? :rolleyes:
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
The most conclusive and accurate way to compare real power consumption is at the wall. This is the true power consumption, and it is absolute. If you change only the GPU, you will see the absolute effect it has on power consumption. This method takes everything into account; it is much more realistic than methods that don't include the complete system. It's the most reliable way to see the real consumption from adding a GPU. It is exact and precise.

I am with tviceman on this, and it appears most reviewers would also agree. It seems like only a tiny few do not use a total-system-draw-at-the-wall method.
Neither of you has explained how or why you think measuring power consumption directly off the card's power lines somehow masks the power consumption of the other components in the system. What do you think is more probable: that the differences in load consumption change based on the application (since each site uses a different application to measure power consumption), or based on the measurement method?