GHz edition 7970 coming very soon! (Softpedia)


SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
This is completely ridiculous unless you pay high rates and game for most of the day, every day. An extra 100 W (may be more or less depending on the game--see the HardOCP review for more info) is 0.1 kWh per hour. Say you pay 10 cents per kWh. That means an extra one penny per hour of gaming. Even if you game for 4 hours a day, 365 days a year (in which case you really need a life), that's $14.60 per year assuming perfect PSU efficiency. If the PSU is 90% efficient, like he says his Gold-rated unit is, then it's $14.60/0.9 = $16.22 per year.

If he games for less than 4 hours a day every day, it will be even less.

In fact, if he doesn't game at all, the 7970 will SAVE him money since it has lower idle power. The 7970 is more efficient at idle and under ZeroCore (12 vs. 14 watts), so that will further reduce the electricity-cost delta, especially if he idles for a lot longer than he plays games.

Granted, the guy's rates are 10-20 cents/kWh, so his cost would be higher; and if you get bumped up a tier you could well pay more than 10 cents/kWh, but I'm going by typical US rates. I also think gaming 4 hours a day every day is unhealthy.
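If anyone wants to plug in their own numbers, here's a rough sketch of the math above (the extra wattage, rate, hours, and PSU efficiency are all assumptions -- adjust them to your own setup):

```python
# Back-of-the-envelope: annual cost of a card drawing `extra_watts` more at load.
# Assumes a flat electricity rate and constant draw while gaming.

def annual_cost_delta(extra_watts=100, rate_per_kwh=0.10,
                      hours_per_day=4, days_per_year=365,
                      psu_efficiency=0.90):
    extra_kwh = extra_watts / 1000 * hours_per_day * days_per_year
    # Divide by PSU efficiency: the wall draws more than the card consumes.
    return extra_kwh * rate_per_kwh / psu_efficiency

print(f"${annual_cost_delta():.2f} per year")  # -> $16.22
```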


You know what my overclocked 7970 is doing right now to save me money?

Running Bitcoin.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
I do not want to turn this thread into a discussion about BTC and its long-term viability; just talking straight-up power here. But BTC profitability will go way down soon (the reward per block halves at the end of 2012), and FPGAs are already making their impact felt. Difficulty is only going up from here on out as people upgrade their GPUs and buy FPGAs or even ASICs, so expect continually shrinking margins. It will probably be unprofitable for most people without free or very cheap electricity by the end of 2012. It's already not profitable for much of Europe.
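To see how tight the margin already is, here's a rough sketch of the standard expected-value math (every input -- hashrate, difficulty, BTC price, card wattage, electricity rate -- is an illustrative assumption, so plug in current values yourself):

```python
# Expected daily mining profit for one GPU, before pool fees.
# Each hash "wins" a block with probability 1 / (difficulty * 2^32).

def daily_mining_profit(hashrate_mhps, difficulty, block_reward_btc,
                        btc_price_usd, card_watts, rate_per_kwh):
    hashes_per_day = hashrate_mhps * 1e6 * 86_400
    btc_per_day = hashes_per_day * block_reward_btc / (difficulty * 2**32)
    revenue = btc_per_day * btc_price_usd
    power_cost = card_watts / 1000 * 24 * rate_per_kwh
    return revenue - power_cost

# Illustrative numbers only: a ~650 MH/s 7970 at 10 cents/kWh.
# Halve the block reward (50 -> 25 BTC) and watch the margin shrink.
for reward in (50, 25):
    print(reward, "BTC/block:",
          round(daily_mining_profit(650, 1_500_000, reward, 5.0, 250, 0.10), 2),
          "USD/day")
```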
 
Last edited:

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
How so? According to Anandtech, the 7970 uses 24-30 more watts than the GTX680. Fermi used over 100 watts more than the 5870. Add in the fact that Fermi launched six months after the 5870, so expectations were high, and you get the backlash that occurred when Fermi launched.

The GTX480 felt like a rushed product, with corners cut to get it out the door so Nvidia would at least appear to have a competing product. The GTX580 showed how 'broken' the GTX480 was: 10%+ higher clock speeds and all parts of the chip fully functional, while using less power and staying quiet.

AMD's situation with the 7970 is nothing like Fermi, in my opinion.

This is completely on point. I don't give a toss about power consumption as far as how much is being consumed and the related energy costs. I do care about noise and heat.

Fermi initially was a roaring furnace, and that is what I saw it getting lambasted for most of the time. The power consumption is what caused it to be that way, but I don't think the actual consumption and energy costs were what concerned buyers; it was the heat and noise that were obnoxious. Those characteristics, in tandem with the huge delay and the really small performance lead over the 5870, were what had the initial Fermi cards met with such negative reviews.

Trying to call it some sort of about-face is hugely inaccurate. The two situations are totally different. Now, the crying about the 7970's launch price and lower-than-expected performance, versus the lack of said crying over the 680's launch price and lower-than-expected performance -- that is an about-face of some substance...
 

96Firebird

Diamond Member
Nov 8, 2010
5,749
345
126
How so? According to Anandtech, the 7970 uses 24-30 more watts than the GTX680. Fermi used over 100 watts more than the 5870. Add in the fact that Fermi launched six months after the 5870, so expectations were high, and you get the backlash that occurred when Fermi launched.

The GTX480 felt like a rushed product, with corners cut to get it out the door so Nvidia would at least appear to have a competing product. The GTX580 showed how 'broken' the GTX480 was: 10%+ higher clock speeds and all parts of the chip fully functional, while using less power and staying quiet.

AMD's situation with the 7970 is nothing like Fermi, in my opinion.

I was more pointing toward the fact that people complained that Nvidia's cards cost $x.xx more per day of gaming compared to an AMD card in the past. Now it seems people are complaining that AMD's cards cost $x.xx more per day of gaming compared to an Nvidia card.

I'm sorry if it sounded like I was comparing your precious 7xxx series to the horrid Fermi.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
I was more pointing toward the fact that people complained that Nvidia's cards cost $x.xx more per day of gaming compared to an AMD card in the past. Now it seems people are complaining that AMD's cards cost $x.xx more per day of gaming compared to an Nvidia card.

I'm sorry if it sounded like I was comparing your precious 7xxx series to the horrid Fermi.


Why the sarcastic remark at the end? I thought I made a well-thought-out post that explained how Nvidia's situation regarding Fermi performance/watt was not very much like AMD's situation regarding 7970 performance/watt.
 

96Firebird

Diamond Member
Nov 8, 2010
5,749
345
126
Why the sarcastic remark at the end? I thought I made a well-thought-out post that explained how Nvidia's situation regarding Fermi performance/watt was not very much like AMD's situation regarding 7970 performance/watt.

Because you, like plenty of other people here, get upset when anyone mentions the 7xxx series even closely resembling the Fermi of last gen. If you can't see how the tables have turned, there's nothing I can do to help that.
 

Zanovar

Diamond Member
Jan 21, 2011
3,446
232
106
Because you, like plenty of other people here, get upset when anyone mentions the 7xxx series even closely resembling the Fermi of last gen. If you can't see how the tables have turned, there's nothing I can do to help that.

You generalise a lot. Gonna back up what you're spewing?
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Because you, like plenty of other people here, get upset when anyone mentions the 7xxx series even closely resembling the Fermi of last gen. If you can't see how the tables have turned, there's nothing I can do to help that.


I don't think you will have much luck finding a post where I seem upset regarding the 7970's power use for the performance.

The GTX680 came out a bit later, it is the better-performing chip, and it achieves that with less power use. There is no denying how well Nvidia nailed the GTX680 as a gaming chip. The only way it could be better, in my opinion, would be if it had come out before the 7970 or had voltage control. I don't think that means the 7970 is a bad part... it was certainly priced too high the moment the GTX680 became available, there is no denying that. But it was still not a 'bad' part and simply needed a price adjustment, which it has received.

Now that it is quite a bit cheaper, has 1GB more memory, overclocks to similar performance levels (possibly even faster than an overclocked GTX680 if you have a good chip), and is much easier to find than the GTX680, I think it is a fantastic part... and I simply do not think that 25 watts at load changes that at all. And if you are the type of person who leaves your computer on 24/7 and only games a couple of hours a day, the 7970 may actually cost LESS to run -- see the sketch below.

Sorry, but I do not think the situations are at all the same as the 5870 vs. Fermi.
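A quick sketch of that 24/7 scenario (the wattage deltas are assumptions taken from the review numbers quoted in this thread: roughly 25 W more at load for the 7970, a few watts less at idle):

```python
# Net extra watt-hours per day for a 7970 vs. a GTX680 on a machine left on 24/7.
# Negative numbers mean the 7970 is actually cheaper to run.

def daily_wh_delta(gaming_hours, load_delta_w=25, idle_delta_w=-3):
    idle_hours = 24 - gaming_hours
    return load_delta_w * gaming_hours + idle_delta_w * idle_hours

for hours in (1, 2, 4, 8):
    print(f"{hours} h gaming/day -> {daily_wh_delta(hours):+d} Wh/day")
# At 1-2 hours of gaming a day, the idle savings outweigh the load penalty.
```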
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Because you, like plenty of other people here, get upset when anyone mentions the 7xxx series even closely resembling the Fermi of last gen. If you can't see how the tables have turned, there's nothing I can do to help that.

Have you ever considered the possibility that you could be the one who is biased? Fermi was 6 months late and had a modest performance advantage, but a major power/heat/noise disadvantage that required the GTX580 to fix. The 7970 was 4 months early (relative to the competition, and perhaps rushed to market), is basically tied in performance (OC vs. OC), and it doesn't have the firebreathing problems of the GTX480. It has lower power draw despite having twice the VRAM. So yes, there are similarities, but to a much lesser degree. The time difference alone is huge (a 10-month swing, a lifetime in tech terms).
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I don't understand why AMD did not clock the hd7970 at 1000-1050MHz to begin with. They should have realized that yields were going to improve, and they should have also realized that the minuscule gap between the hd7970 and hd7870 would make it impossible to harvest and sell a third Tahiti-based SKU.

AMD explained why they clocked Tahiti where they did. That's the clock that worked with their engineering samples. Setting clocks isn't just about how fast but also the power requirements. Newer chips are clocking higher with the same voltage.

They don't set clocks like a typical consumer O/C'ing their card. They need to test to certify that the cards will be stable and return an acceptable yield %. They likely could have released at higher clocks, but to do it right they would have had to go through a re-certification process. That takes time and would have delayed the launch.

It's another case of AMD having a nice advantage in one form or another, and completely squandering it. Ever since the hd5870 and hd5850 - which were excellent products AND executed well - to me it seems like they keep tripping over themselves. Releasing hd6800 cards that are slower than hd5800, not reacting to gtx460, cayman offering only a minor performance bump, releasing an underclocked hd7970 flagship card, over-pricing hd7970, taking a month to react to gtx680, even longer to get a proper hd7970 out, and the noise levels for some of their highest end reference cards are abysmal.

1st to market for the entire range with quantities while nVidia has released one card is hardly tripping over themselves.

The 6800 was a restructuring of the lineup. I agree it was a confusing move. But it is what it is. It's not the replacement for the 5800.

The 460 was what, 8 or 9 months late to market? It was almost refresh time for AMD by then. GPUs aren't something you can just pull out of thin air. The 6850 answered the 460 just fine.

Cayman was supposed to be on 32nm. TSMC cancelled that node, which forced AMD to neuter the card. It was released with fewer shaders than originally planned. It very likely would have been faster than the 580 if it had the shader count it was intended to have (1920 SPs, or thereabouts?).

You don't like the price of the 7900. I don't either. The $550 price made sense with the market at the time.

Taking a month to react to the 680? How about nVidia, almost two months after releasing the 680, still can't get any quantity of them into the market?

Even longer to get a proper 7970 out? I assume you mean a higher clocked version. The process has improved and the chips run faster on less voltage. That's TSMC improving their process. AMD has zero control over that. We're still waiting for GK100/110 if you want to talk about something taking too much time.

As far as noise goes, I haven't heard both cards side by side so I can't comment first-hand. It's virtually impossible to determine anything from reviews, as each reviewer has his/her own way of measuring SPL, and to be honest, every method I've seen is neither scientifically verifiable nor well conducted. The only way to accurately measure the noise from a card would be to isolate it in an anechoic chamber and measure it with calibrated equipment using a standardized procedure followed by everyone. How it's done now is a joke.

Nvidia was a half generation late with GF100, but once the gtx460 came out, to me it seems like they have been consistently performing (other than the poor pricing of the gtx550ti): timely fixing of GF100 and retaining the single-GPU crown, capturing the $200-250 market with the gtx560ti, releasing a plethora of SKUs for each Fermi chip they made (which is confusing as hell and negatively looked upon by many, but is good for their bottom line), getting their next-gen dual-GPU part out before AMD, and now beating AMD at their small-die, lower-power strategy.

Looks like they are going to be a whole generation late with GF110's replacement. They're getting worse. The only consistency from nVidia is they are consistently late, and by hook or by crook they release the faster product. The GTX 480 was an abortion, neutered, hot, power hungry, late, but it was fast enough to beat the 5870. The 580 was considered timely, but it was in reality a fully functioning GF100 that was a year late. Good thing it was such a powerful design and was still faster than the 6970. Good thing TSMC didn't get 32nm working on time, too.

Capturing the 200-250 market with the 560ti? I'm not sure where you are getting that from. The 560ti and the 6950 competed head to head.

Releasing a plethora of SKUs was good marketing. It's about the same as AMD's readjustment of their lineup that brought the 6800 to a lower price point than the 5800 before it. The exact strategy you called a fail for AMD is good for nVidia. Sounds a bit biased to me.

While they have released the 690, it is MIA in the market. Again, it's a good marketing move. nVidia is very good at that. Where are the cards though?
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
I prefer NV's approach: take your time and get it right the first time (and with more features such as GPU Boost and Adaptive VSYNC and multi-monitor without resorting to adapters). AMD's rushed approach was to offer basically the same price/perf ratio as HD6xxx--which is pathetic with a new node. Whereas NV took their time and are offering a significantly higher price/perf ratio.

Unless you bought a 7950 or higher, the 7xxx series didn't add much, because you could have just overclocked a GTX580 to reach that performance level anyway. (I am ignoring multi-GPU as that has driver and microstutter issues.)

I mean yeah 78xx is nice but at the end of the day you didn't really gain much over a GTX 5xx card other than a slightly higher price/perf, more VRAM, and more efficiency. Hardly earthshattering and very pathetic given that 78xx is a full node smaller.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I prefer NV's approach: take your time and get it right the first time (and with more features such as GPU Boost and Adaptive VSYNC and multi-monitor without resorting to adapters). AMD's rushed approach was to offer basically the same price/perf ratio as HD6xxx--which is pathetic with a new node. Whereas NV took their time and are offering a significantly higher price/perf ratio.

Unless you bought a 7950 or higher, the 7xxx series didn't add much, because you could have just overclocked a GTX580 to reach that performance level anyway. (I am ignoring multi-GPU as that has driver and microstutter issues.)

I mean yeah 78xx is nice but at the end of the day you didn't really gain much over a GTX 5xx card other than a slightly higher price/perf, more VRAM, and more efficiency. Hardly earthshattering and very pathetic given that 78xx is a full node smaller.

Take your time and get it right? I don't understand how you can think that. Every time there's a node change, nVidia is late and then releases a rushed design. I say rushed because the chips either have to be crippled to run (GTX 480) or they yield so badly that nVidia can't supply them. AMD is just ready sooner than nVidia.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
By waiting a little longer for TSMC to get its 28nm act together and by stripping out HPC stuff to make things run more efficiently, and designing PCBs differently, NV managed to give 35% more performance for the same price as GTX580.

7970 over 6970 worsened price/perf and if it weren't for NV, perhaps we'd still be paying >$500 for 7970s.

(All price comparisons are launch vs launch)
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
By waiting a little longer for TSMC to get its 28nm act together and by stripping out HPC stuff to make things run more efficiently, and designing PCBs differently, NV managed to give 35% more performance for the same price as GTX580.

7970 over 6970 worsened price/perf and if it weren't for NV, perhaps we'd still be paying >$500 for 7970s.

(All price comparisons are launch vs launch)

You are only talking games. Tahiti is designed to do more than games. GK104 is just a larger Pitcairn. AMD has accomplished the same thing as nVidia, sooner, and they have cards to sell. You can try and say nVidia has waited and released a more mature product all you want to. The facts say otherwise. The parameters you choose to back your position have nothing to do with it. How fast the GK104 is in games doesn't address any of your positions.

Additionally, if nVidia had launched on time and with their true top-range chip, we might have only paid $400 for Tahiti, if you want to talk about what caused Tahiti to be so expensive.
 
Last edited:

Mad_dawg

Junior Member
May 5, 2012
17
0
0
You are only talking games. Tahiti is designed to do more than games. GK104 is just a larger Pitcairn. AMD has accomplished the same thing as nVidia, sooner, and they have cards to sell. You can try and say nVidia has waited and released a more mature product all you want to. The facts say otherwise. The parameters you choose to back your position have nothing to do with it. How fast the GK104 is in games doesn't address any of your positions.

Additionally, if nVidia had launched on time and with their true top-range chip, we might have only paid $400 for Tahiti, if you want to talk about what caused Tahiti to be so expensive.

Why would you want Tahiti for $400 when you can get Kepler for $400... The hd7970 consumed more than 100W just to be on par with the gtx680... I don't get this logic...
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Why would you want Tahiti for $400 when you can get Kepler for $400... The hd7970 consumed more than 100W just to be on par with the gtx680... I don't get this logic...


Because you need divine intervention to get a 680, and it's only efficient at gaming, and your +100W statement needs to be qualified. It is by no means accurate as stated. Why are you combining the gtx 680 and the $400 price point in the same example? It would make sense only if you were trying to be misleading. The 680 is $500.

For example:

The 7870 Lightning qualifies as on par
[chart: perfrel.gif -- relative performance summary]


This is the highest power reading measured by both cards while playing a game. 47W difference.
[chart: power_peak.gif -- peak power draw while gaming]


This is the avg. power usage while playing a game. 35W difference.
[chart: power_average.gif -- average power draw while gaming]


Now, let's look at GPU assisted rendering, like I would use when modeling.
[chart: luxmark.png -- LuxMark GPU rendering benchmark]


Gee, why would I want a 7970? This is what you asked me, right? Hopefully you get the logic.
 
Last edited:

Elfear

Diamond Member
May 30, 2004
7,169
829
126
Why would you want Tahiti for $400 when you can get Kepler for $400...

Skyrim with mods. :biggrin:

The hd7970 consumed more than 100W just to be on par with the gtx680... I don't get this logic...
[chart: perfrel_1920.gif -- relative performance at 1920 resolution]

[chart: perfrel_2560.gif -- relative performance at 2560 resolution]

[chart: power_average.gif -- average power draw while gaming]


You sure about that hoss?


*Edit- Doh! 3DVagabond beat me to it with the same graphs, although with average power consumption the difference is only 35W.
 
Last edited:

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
You are only talking games. Tahiti is designed to do more than games. GK104 is just a larger Pitcairn. AMD has accomplished the same thing as nVidia, sooner, and they have cards to sell. You can try and say nVidia has waited and released a more mature product all you want to. The facts say otherwise. The parameters you choose to back your position have nothing to do with it. How fast the GK104 is in games doesn't address any of your positions.

Additionally, if nVidia had launched on time and with their true top-range chip, we might have only paid $400 for Tahiti, if you want to talk about what caused Tahiti to be so expensive.

Yes, but I think those of us who buy cards for non-gaming uses are in the minority. If I wanted a card purely for gaming, there is no doubt in my mind that at current prices I'd go for a gtx 6xx.

Also, NV did not rush the GK110 and are taking their time with it because the 680 can hold the fort while they game-ify it. (They are reportedly releasing it as Tesla first. I would too... bigger profit per GPU there. http://vr-zone.com/articles/how-the...-prime-example-of-nvidia-reshaped-/15786.html And note how NV is specializing cards again after the G80 to GF100 stretch in which they seemingly didn't care if gamers paid for more and more unused HPC silicon.)
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Yes but I think we who buy cards for non-games are in the minority. If I wanted a card purely for gaming purposes, there is no doubt in my mind that at current prices, I'd probably go for a gtx 6xx.

Also, NV did not rush the GK110 and are taking their time with it because the 680 can hold the fort while they game-ify it. (They are reportedly releasing it as Tesla first. I would too... bigger profit per GPU there. http://vr-zone.com/articles/how-the...-prime-example-of-nvidia-reshaped-/15786.html And note how NV is specializing cards again after the G80 to GF100 stretch in which they seemingly didn't care if gamers paid for more and more unused HPC silicon.)

I'm sure if they had enough GK110 they'd sell it to the public. Which you will see them do as soon as they can. You never have to pay more for unused silicon. You would still have the option of buying GK104 instead of GK110 if both were available.