Rumored specifications for HD8850/8870 - Launch January 2013 (?)

Those specs and performance expectations are quite realistic. AMD knew what they needed from their mid-range HD 8870 chip as early as this March. Also, given that the HD 7950 can easily match or exceed the GTX 680 at the same clocks, it's easy to see that the HD 8870 will be competitive with the GTX 680. With the same resources as the HD 7950 except for the 384-bit memory controller and the double-precision/ECC hardware, the HD 8870 is essentially an HD 7950 stripped of all the fat to make a lean, efficient gaming chip like GK104. If AMD can price the chip at USD 250 instead of the rumored USD 280, they would have a killer chip like the legendary 8800 GT.


Even at $300, that chip would be a killer value in the scenario where it's beating out the 680. You're talking about a $300 chip beating a $500 chip. Hell, in that scenario, with AMD being the AMD that started the 79xx series out at $550 and $450, I think it's much more likely AMD would postpone the 89xx series, price the 88xx series at the same levels as the 680 and 670 (or ever so slightly below) and then wait for nVidia to respond.

When nVidia is about to show up with their new parts, AMD releases a high-end 89xx series at equivalent pricing, with price drops on the 88xx series to make room. That's when you get your $250-$300 88xx series. Money in the bank, maximizing profits, and as a bonus everyone buying the 88xx series at lower prices feels like they're getting a premium card that "only a few months ago was $350-$450."

That's more likely to be the new AMD pricing strategy, just going by what they did this gen. It seems to have worked out well for them. Maybe not for the consumers, for whom the new mainstream sat at $400 for months, but hey... what do we matter?
 
It would be interesting to see what price they would start out at. But I'm guessing they would start around $500 or so, assuming AMD comes out first.
 
If the specs are true, then Oland will likely have the same memory bandwidth performance issues that plague Kepler (or worse, since Nvidia usually manages to squeeze better performance out of the same or less mem bandwidth as AMD cards in each comparable generation).

Anyway, performance sounds pretty close to what I would guess (perhaps a little optimistic; I think a stock HD8870 will be about 10% slower than a stock GTX680), but that sounds like a pretty big die for a second-tier AMD GPU.

For the price level these cards occupy, that's probably ok though.


It makes a lot of sense when you read the article and explanation, actually. Again, GF100 was rushed to market/unoptimized and Pitcairn was not. Simple as that.


I'm not saying you're wrong; you very well may be correct. But I don't think you can say the power savings can't be there simply because Pitcairn was not as rushed to market as Fermi. Fermi was obviously a rush job, no doubt. But this is both AMD's and Nvidia's first go at 28nm; as they learn the process, we very well may see some solid improvements.
 
When you think about it, this is what the x870 cards have done since the 6870: be better than the previous gen's flagship, or at least compete against it.
 
For the price level these cards occupy, that's probably ok though.





I'm not saying you're wrong; you very well may be correct. But I don't think you can say the power savings can't be there simply because Pitcairn was not as rushed to market as Fermi. Fermi was obviously a rush job, no doubt. But this is both AMD's and Nvidia's first go at 28nm; as they learn the process, we very well may see some solid improvements.
I keep reading this, but didn't Nvidia spend a long time designing Fermi? If I remember correctly, the estimated release date was many months earlier than what actually happened. My understanding was that poor design decisions led to the power leakage problems, and they spent a very long time trying to correct them. Hardly what one would consider a rush job.

Is it impossible for Nvidia to make mistakes, or if they do, does it automatically mean that time was short?
 
I like how in this thread power is a concern and GF100 was bad, even though it was 35% faster than the 5870 at stock in modern DX11 titles and overclocked like a dream. However, when you step into another thread, power is not a concern and thoughts of this nature are simply doing it wrong.

It's hard to keep up around here since what does and doesn't matter changes with each thread.
 
HD8850 - 28nm Oland Pro

3.4 billion transistors
~270-280mm^2 die
925MHz GPU clock (975MHz Boost)
1536 shaders
96 TMUs (93.6 GTexels/s texture fill-rate)
32 ROPs (31.2 GPixels/s pixel fill-rate)
6GHz GDDR5 @ 256-bit (192 GB/s memory bandwidth)
TDP 130W
(Single-precision compute: 2.99 TFLOPS, 187 GFLOPS double precision)
Comparable performance to GTX670, 35% faster than GTX 660 2.0GB

HD8870 - 28nm Oland XT

3.4 billion transistors
~270-280mm^2 die
1050MHz GPU clock (1100MHz Boost)
1792 shaders
112 TMUs (123.2 GTexels/s texture fill-rate)
32 ROPs (35.2 GPixels/s pixel fill-rate)
6GHz GDDR5 @ 256-bit (192 GB/s memory bandwidth)
TDP 160W
(Single-precision compute: 3.94 TFLOPS, 246 GFLOPS double precision)
Comparable performance to GTX680, 30% faster than GTX 660Ti 2.0GB

Also rumored to have some "AMD Wireless Display Technology" (?)

Launch: January 2013?

*** Take with a huge grain of salt due to random source***

http://read2ch.com/r/jisaku/1347605134/ID:hKcZJDn
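
If you want to sanity-check the rumor internally, every derived figure above follows from the unit counts and clocks with the standard formulas (fill-rate = units x clock, bandwidth = effective memory rate x bus width / 8, SP throughput = shaders x 2 FLOPs x clock). A minimal Python sketch, using only the rumored inputs:

```python
# Derive the throughput figures from the rumored unit counts and clocks.
# Clocks in GHz (boost), memory rate in Gbps effective, bus width in bits.
def derived_specs(shaders, tmus, rops, boost_ghz, mem_gbps, bus_bits):
    return {
        "texture_fill_GTex_s": tmus * boost_ghz,          # GTexels/s
        "pixel_fill_GPix_s": rops * boost_ghz,            # GPixels/s
        "bandwidth_GB_s": mem_gbps * bus_bits / 8,        # GB/s
        "sp_compute_TFLOPS": shaders * 2 * boost_ghz / 1000,
    }

print(derived_specs(1536, 96, 32, 0.975, 6.0, 256))   # rumored HD8850
print(derived_specs(1792, 112, 32, 1.100, 6.0, 256))  # rumored HD8870
# Matches the list: 93.6/31.2/192/2.99 and 123.2/35.2/192/3.94
```

So at least the rumored numbers are internally consistent; whoever made them up did the arithmetic.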

>>> Personally, I think this sounds way too good to be true since we are still stuck at 28nm (unless the 28nm node has matured this much behind the scenes). If true, we are looking at GTX680-level GPU performance next year at $349, I bet.


The specs are possible, but it would make for a much bigger die than the current HD7870 (Pitcairn). If the specs are legitimate, Tahiti's replacement could be more than 400mm2.

TAHITI = 4.31 billion transistors @ 365mm2
PITCAIRN = 2.8 billion transistors @ 212mm2
CAPE VERDE = 1.5 billion transistors @ 123mm2
 
:biggrin:

Took 6-7 months for NV to match HD7950/7870 with GTX660Ti/660 in performance?

Really? I would take a 7950 over a 660Ti any day of the week, and the same goes for the 7870 over the 660. Not to mention NV chips are already out of clock-speed headroom, whereas you can overclock a 7950 to speeds that exceed GTX680 performance. Really impressive catching-up indeed!


BTW, I know there are some NV/Apple shill sites claiming that the GTX660 is faster than the 7870 and that the 7970GE is at most on par with the GTX680, but feel free to believe them.


The specs are possible, but it would make for a much bigger die than the current HD7870 (Pitcairn). If the specs are legitimate, Tahiti's replacement could be more than 400mm2.

TAHITI = 4.31 billion transistors @ 365mm2
PITCAIRN = 2.8 billion transistors @ 212mm2
CAPE VERDE = 1.5 billion transistors @ 123mm2


IMHO, doubling up Pitcairn would be the best way to go. It should outperform CF 7870 without all the hassle that comes with CF. And they should just leave Tahiti for the FireGL market.
 
I like how in this thread power is a concern and GF100 was bad, even though it was 35% faster than the 5870 at stock in modern DX11 titles and overclocked like a dream. However, when you step into another thread, power is not a concern and thoughts of this nature are simply doing it wrong.

It's hard to keep up around here since what does and doesn't matter changes with each thread.
I don't know if you have a reading problem, but I never said that Fermi was bad or slow, merely this: "My understanding was that poor design decisions led to the power leakage problems."

The fact that the 500 series was so much better implies, to any reasonable person, that the 400 series had a power problem. Or are you saying they did not have one?
 
I don't know if you have a reading problem, but I never said that Fermi was bad or slow, merely this: "My understanding was that poor design decisions led to the power leakage problems."

The fact that the 500 series was so much better implies, to any reasonable person, that the 400 series had a power problem. Or are you saying they did not have one?

I actually wasn't replying to you; I hope next time your advanced reading comprehension allows you to see that before you question another user's ability to read. It should have been quite easy to see I wasn't singling out anyone in particular but a group as a whole; how you missed that is beyond my abilities, though perhaps a mystic can lead us there.

The fact that the 500 series doesn't clock much better when overclocked also says a lot about those transistors. On water, the difference between GF100 and GF110 clock-wise is pretty much nothing. GF110 wasn't so much better if you ignored power (arguably, if you ignore the first batch of Fermi, you'd have less of a case as well). My point is that most people today try to say power doesn't matter now. However, the GHz card from AMD uses a sizable amount of extra power vs. the reference 680; it does not, however, come anywhere near as far ahead of the 680 as the 480 was against the 5870.

Yet those same people, who say the power consumption numbers of highly clocked 7970s (which is what's required to beat the 680, mind you, and not by much) don't matter, still attempt to dismiss GF100 for the very same reasons the higher-clocked 7970s are being given a pass on. The 680 uses more power than the 5870; the 7970 GHz uses a bit less power than a first-batch 480; however, the 7970 is nowhere near as far ahead of its competition as the 480 was. This doesn't even account for the fact that you have to eat a large amount of the 7970's clock headroom just to get to the point where it's slightly faster. GF100 was already well past where the 7970 is without that, and it still had a vast wealth of untapped power.
 
They're not 100% comparable because of VLIW4 vs. VLIW5, but you get the point. If the die is any significant amount bigger, power consumption will go up.

The real world power consumption will probably go up, but so will the performance.

HD7870 peaks at 115W = 212mm^2
GTX680 peaks at 186W = 294mm^2
http://www.techpowerup.com/reviews/ASUS/GeForce_GTX_660_Direct_Cu_II/25.html

An HD8870 with real-world power consumption of 160W and performance ~ 1GHz HD7970/GTX680 is not unreasonable on a 28nm node that is 12 months more mature, with a 280mm^2 die size.

Compared to the unrealistic claims of a 2880 SP, 1GHz GTX780 under 250W TDP, this rumor is actually viable.
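
A crude way to test that: scale the HD7870's measured peak power by die area, assuming power grows roughly with area at the same node and comparable clocks (a big simplification that ignores voltage and clock differences, so treat it as a ballpark only):

```python
# Scale HD7870's measured peak board power by die area.
# Crude assumption: watts per mm^2 stays constant on the same node.
hd7870_w, hd7870_mm2 = 115, 212   # peak power / die size quoted above
rumored_mm2 = 280                 # upper end of the rumored Oland die

estimate_w = hd7870_w / hd7870_mm2 * rumored_mm2
print(f"~{estimate_w:.0f} W")     # ~152 W, in the ballpark of the rumored 160 W
```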

Right...

The HD 5870 consumes on average ~145W in games. The HD 7970 consumes on average ~190W in games.

You cannot compare HD7970 to HD8870 though (and this is why you cannot compare Pitcairn XT to Tahiti XT directly).

HD7870 = 1280 SP, 32 ROPs, 80 TMUs, 256-bit bus = 212mm^2 die (~13.2 million transistors/mm^2 density)
HD7970 = 2048 SP, 32 ROPs, 128 TMUs, 384-bit bus = 365mm^2 die (~11.8 million transistors/mm^2 density)

Notice: TMUs/SPs increased 60% and bus width just 50%, while the # of ROPs stayed the same, but the chip grew 72%!

Clearly Tahiti XT is less dense (or there are some types of less dense transistors within the Tahiti XT chip that take up a disproportionate amount of area). My guess is the "less efficient" per-mm^2 transistors are what's needed to give you over 1 TFLOP of double-precision compute (we have seen AMD being very strict on this aspect, chopping off double-precision compute or neutering it for mid-range parts --> HD6850/6870). That means double-precision compute cannot be cheap in transistor space.

Get rid of the extra transistors needed for double-precision compute in Tahiti XT (and all the other "fat") and you end up with a 270-280mm^2 die, 1792 SP, 32 ROP, 112 TMU, 256-bit bus chip using less power than an HD7970.
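
Working the density numbers out explicitly (a small sketch using the transistor counts and die sizes above):

```python
# Density = transistors / die area, expressed in millions per mm^2.
chips = {
    "Tahiti XT (HD7970)":   (4.31e9, 365),   # transistors, mm^2
    "Pitcairn XT (HD7870)": (2.80e9, 212),
}
for name, (transistors, mm2) in chips.items():
    print(f"{name}: {transistors / mm2 / 1e6:.1f} M transistors/mm^2")
# Tahiti: ~11.8, Pitcairn: ~13.2 -- Tahiti is ~11% less dense, consistent
# with the double-precision/ECC "fat" argument.
```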

Also, there are no other viable possibilities for improving performance right now without growing the die size of the HD8800 series. AMD can't add a 20-30% increase in IPC, since that's impossible in 12 months. That leaves you with 2 major options:

1) Increase clock speeds (this requires higher voltages to maintain 1200-1300MHz) - not going to happen on 28nm from the factory.

2) Make the chip larger by adding more functional units.

In other words, there is no viable option but to make the HD8800 series larger in size if you want to gain 25-30% in performance.

Right now AMD HD7970 can already be found for $379.99, while HD7950 goes for as low as $279.99. Both of those are 384-bit 365mm^2 die chips, with 3GB of VRAM.

AMD would make a lot more $ if they discontinued the HD7950/7970 and sold an HD8850/8870 2GB 256-bit at $299-349 on a 280mm^2 die with performance ~ HD7950 V2 / HD7970 (or higher). Without the need for Tahiti's 384-bit bus and double-precision compute, AMD would realize substantial die-size savings.

Since Tahiti XT is pixel-fill-rate limited, an HD8870 @ 1100MHz with 32 ROPs would have 19% higher pixel fill-rate than the original HD7970. That will be huge for gaming performance (we have seen that the HD7950 is barely faster than the HD7870 at stock speeds despite a huge memory bandwidth advantage; the minute you start overclocking the 7950, performance increases significantly - not so with the HD7870).
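
That 19% figure checks out: with the ROP count fixed at 32 on both chips, the pixel fill-rate gain reduces to the clock ratio. A one-line sketch:

```python
# Pixel fill-rate scales with ROPs x clock; ROPs are equal (32 vs 32),
# so the gain is just the boost clock over the stock HD7970 clock.
hd7970_mhz, hd8870_boost_mhz = 925, 1100
print(f"+{hd8870_boost_mhz / hd7970_mhz - 1:.0%}")  # +19%
```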

The specs are possible, but it would make for a much bigger die than the current HD7870 (Pitcairn). If the specs are legitimate, Tahiti's replacement could be more than 400mm2.

Ya, the last rumor I read was @ 400-410mm^2 for the 8970.
 
Yet those same people, who say the power consumption numbers of highly clocked 7970s (which is what's required to beat the 680, mind you, and not by much) don't matter, still attempt to dismiss GF100 for the very same reasons the higher-clocked 7970s are being given a pass on. The 680 uses more power than the 5870; the 7970 GHz uses a bit less power than a first-batch 480; however, the 7970 is nowhere near as far ahead of its competition as the 480 was. This doesn't even account for the fact that you have to eat a large amount of the 7970's clock headroom just to get to the point where it's slightly faster. GF100 was already well past where the 7970 is without that, and it still had a vast wealth of untapped power.

The 7970GHz uses a bit less power than the GTX580, not the GTX480; roughly the same as the GTX570, which is still acceptable for a single-GPU card.


[chart: average power consumption, card only]


Let's dig deeper shall we?

257W vs. 158W is a bit different from 166W vs. 209W, don't you agree?

Now let's take a look at performance per watt.

[chart: performance per watt at 2560x1600]


So the GTX680 has a whopping 13% better performance per watt than the 7970GE. How about the GTX480 vs. the 5870?

[chart: performance per watt at 2560x1600]


The 5870 has 69% better performance per watt than the GTX480. Still think that's a comparable situation?
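
Putting the quoted card-only power figures side by side makes the point concrete; a quick sketch using just the four numbers above:

```python
# Relative power gap within each AMD/NV pairing, card-only watts as quoted.
gtx480_w, hd5870_w = 257, 158
hd7970ge_w, gtx680_w = 209, 166
print(f"GTX480 over HD5870: +{gtx480_w / hd5870_w - 1:.0%}")    # ~+63%
print(f"7970GE over GTX680: +{hd7970ge_w / gtx680_w - 1:.0%}")  # ~+26%
```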
 
If the specs are true, then Oland will likely have the same memory bandwidth performance issues that plague Kepler (or worse, since Nvidia usually manages to squeeze better performance out of the same or less mem bandwidth as AMD cards in each comparable generation).

That's a fair point. Granted, if the HD8850/8870 are priced at $299-349 and provide performance near GTX670/680 level, then it would still be progress. Right now the GTX680 is about 36% faster at 1080P than the HD7870. I doubt the HD8870 will reach that, but if it gets to GTX670 level for $299-349, that's a heck of a lot better than the $349 HD7870 was! Of course, NV can "re-badge" a higher-clocked GTX670 as a GTX760Ti and sell the GTX680 as a GTX770 with higher clocks for $349. Exciting times ahead, gents! 😛
 
The Chinese sites are hinting that Tahiti's successor will be in the 450mm2 range.

This is because of added compute functionality; they want to push their HPC agenda. Edit: contrary to what we expected (going the mild route), they do realize that with a big focus on compute, efficiency and perf/mm2 take a hit, so the chips have to be huge.

There have been suggestions from the Chiphell mods that GK110 is nearly 600mm2; that was many months ago, and TSMC just couldn't make it with 28nm being so immature. I recall seeing the thread but can't dig it out at the moment.

Thus, you can imagine it's double GK104; it will definitely be very fast and power hungry.
 
I think the GPU industry would strongly benefit if AMD finally ditched the small die strategy as it has not really worked for them on the high-end. When ATI pushed performance, we got 9800XT, X800XT PE/X850XT PE, X1950XTX. If we have a very competitive AMD on the high-end, it would make for a much more competitive NV as well, because it would keep NV on its toes. This would only strengthen the offerings from both sides like it was in the olden days. The other problem is since we are stuck at 28nm for at least another year, both NV and AMD pretty much have to increase die sizes. There is nowhere else to go.

The CPU is becoming less and less relevant for a modern gaming system beyond what we have today. I feel like the GPU needs to be as fast as possible, and this new trend of releasing flagship cards at 180W of power is really killing the performance increases. Performance/watt should be prioritized for low- and mid-range offerings and laptops. I'll take 70-80W more power if it means 40% more performance. They can always fix the power consumption on the next node shrink. I am even fine going back to the days of 250W GPUs if the performance increases correspond to 50%+ each new generation 🙂.

BTW: what I was really trying to gauge was whether the 620W Seasonic PSU in this system was overkill. It was an i7-3770K with a modest 4.4GHz OC, the 7950, 16GB, fans, etc. The most I was able to get it to draw at the wall was 340W running IntelBurnTest and Kombustor at once. I also tried the Crysis 1 demo, and that was 220W max at the wall. That PSU is 80 Plus, which makes the 340W more like 300W of actual load. So a 620W PSU is probably overkill unless both the i7 and the 7950 are going to be massively overclocked (I'm pretty sure the owner is not going to do either).
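
For anyone doing the same estimate: the DC load on the PSU is the wall draw times efficiency. The 0.87 below is an assumed midrange value (80 Plus only guarantees 80%+ at typical loads); the rest are the numbers measured above:

```python
# PSU sizing sanity check: DC load = wall draw * PSU efficiency.
wall_w = 340          # worst case observed (IntelBurnTest + Kombustor)
efficiency = 0.87     # assumption; actual value depends on the unit and load
rating_w = 620

dc_load_w = wall_w * efficiency
print(f"~{dc_load_w:.0f} W load on a {rating_w} W unit "
      f"({dc_load_w / rating_w:.0%} utilization)")   # ~296 W, ~48%
```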

Add a 2nd 7950 for bitcoin mining and that 620W power rating will come in handy. But ya, overall people are paranoid about PSU ratings (if you get a quality-rated PSU). I am running an overclocked 7970 on a 520W unit.
 
The 7970GHz uses a bit less power than the GTX580, not the GTX480; roughly the same as the GTX570, which is still acceptable for a single-GPU card....

You people need to learn to check multiple reviews. Just linking TPU all the time simply will not do.

Consumption numbers from 5 reviews (card only) and thus at least 5 different games:
http://www.3dcenter.org/artikel/eine-neubetrachtung-des-grafikkarten-stromverbrauchs

Exactly 235W for both the GTX480 and the 7970 GHz Edition.
 
What's up with all this boost BS? Why can't graphics cards be overclocked the way they should be overclocked? The whole GTX 600 series overclocking experience was disgusting to me, having to constrain your OC to a certain power limit and deal with stupid boost stuff. It all seems very complicated and a waste of everything. I want to be able to open up MSI AB, choose my clocks and voltage, and set them.

I think the GPU industry would strongly benefit if AMD finally ditched the small die strategy as it has not really worked for them on the high-end. When ATI pushed performance, we got 9800XT, X800XT PE/X850XT PE, X1950XTX. If we have a very competitive AMD on the high-end, it would make for a much more competitive NV as well, because it would keep NV on its toes. This would only strengthen the offerings from both sides like it was in the olden days. The other problem is since we are stuck at 28nm for at least another year, both NV and AMD pretty much have to increase die sizes. There is nowhere else to go.

The CPU is becoming less and less relevant for a modern gaming system beyond what we have today. I feel like the GPU needs to be as fast as possible, and this new trend of releasing flagship cards at 180W of power is really killing the performance increases. Performance/watt should be prioritized for low- and mid-range offerings and laptops. I'll take 70-80W more power if it means 40% more performance. They can always fix the power consumption on the next node shrink. I am even fine going back to the days of 250W GPUs if the performance increases correspond to 50%+ each new generation 🙂.



Add a 2nd 7950 for bitcoin mining and that 620W power rating will come in handy. But ya, overall people are paranoid about PSU ratings (if you get a quality-rated PSU). I am running an overclocked 7970 on a 520W unit.

This. Even though the GTX 480 guzzled power (as much as a 580), it was still roughly twice as good as the previous gen's offerings. Don't even get me started on the 4870 -> 5870; I'm pretty sure power use went down, or up only very slightly, yet it had double the performance. I would love to go back to the days of 130W TDP CPUs and 250W GPUs if it meant we had the performance to back it up.
 
You people need to learn to check multiple reviews. Just linking TPU all the time simply will not do.

Consumption numbers from 5 reviews (card only) and thus at least 5 different games:
http://www.3dcenter.org/artikel/eine-neubetrachtung-des-grafikkarten-stromverbrauchs

Exactly 235W for both the GTX480 and the 7970 GHz Edition.

No, you need to learn that there is no such card as a reference HD7970 GE edition. It was only sent to reviewers to gauge performance. All other metrics are irrelevant since you cannot buy this card at retail. Think of the reference HD7970 GE as a simple in-house concept that never made it to market. The power consumption or noise levels of a reference HD7970 GE are a meaningless metric for consumers. It has no relation whatsoever to real-world after-market HD7970 GE cards, especially since IDC proved in his CPU overclocking threads that the cooler you keep a chip, the lower the power consumption. Also, the GTX480 uses more power than 235W, more like 270W.

Meaning --> after-market HD7970 GE cards will consume less power by virtue of better cooling alone, then by virtue of having upgraded VRM/PCB components, and then by virtue of NOT using the 1.25V BIOS of the reference version. This has been repeated ad nauseam, but you ignore it.

An after-market HD7970 GE @ 1200MHz uses 45-50W more power than a stock GTX680 at the wall, which is still less than a stock GTX580, never mind a stock 480:

[chart: power consumption at the wall]


Keep in mind that a GTX680 @ 1290MHz can't beat an 1165MHz HD7970, and an overclocked 680 uses more than 200W of power. You guys are still stuck in the mythical land of an overclocked HD7970 PC using up 275-300W of power, while countless HD7970 owners have proven that this is wrong. The HD7970 starts drawing 275W of power at 1.25-1.3V @ 1260+MHz, at which point it's 25% faster than a stock GTX680 at 1600P (at 1200MHz, the HD7970 is already 19% faster).

You don't have to put 1.25-1.3V into a 7970, though. Generally speaking, an after-market HD7970 @ 1150-1165MHz @ 1.175V uses about 225-238W of power (shown many times on this forum via 3-4 different sources outside of TPU). Comparing the GTX480's power consumption to the HD7970 is so off the mark, it's simply trolling or being ignorant.
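
For what it's worth, the voltage point alone accounts for a good chunk of this. To first order, CMOS dynamic power scales with V^2 x f, so at the same clock a rough estimate looks like this (ignoring leakage and cooling effects, which only help further):

```python
# First-order dynamic power scaling: P ~ V^2 * f. Same clock assumed, so
# only the voltage ratio matters. 1.25 V is the reference GE BIOS; 1.175 V
# is the typical after-market voltage quoted above.
v_reference, v_aftermarket = 1.25, 1.175
saving = 1 - (v_aftermarket / v_reference) ** 2
print(f"-{saving:.0%} dynamic power from voltage alone")  # ~-12%
```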
 
No, you need to learn that there is no such card as a reference HD7970 GE edition. It was only sent to reviewers to gauge performance. All other metrics are irrelevant since you cannot buy this card at retail. Think of the reference HD7970 GE as a simple in-house concept that never made it to market. The power consumption or noise levels of a reference HD7970 GE are a meaningless metric for consumers. It has no relation whatsoever to real-world after-market HD7970 GE cards, especially since IDC proved in his CPU overclocking threads that the cooler you keep a chip, the lower the power consumption. Also, the GTX480 uses more power than 235W, more like 270W.

Meaning --> after-market HD7970 GE cards will consume less power by virtue of better cooling alone, then by virtue of having upgraded VRM/PCB components, and then by virtue of NOT using the 1.25V BIOS of the reference version. This has been repeated ad nauseam, but you ignore it.

An after-market HD7970 GE @ 1200MHz uses 45-50W more power than a stock GTX680 at the wall, which is still less than a stock GTX580, never mind a stock 480:

[chart: power consumption at the wall]


Keep in mind that a GTX680 @ 1290MHz can't beat an 1165MHz HD7970, and an overclocked 680 uses more than 200W of power. You guys are still stuck in the mythical land of an overclocked HD7970 PC using up 275-300W of power, while countless HD7970 owners have proven that this is wrong. The HD7970 starts drawing 275W of power at 1.25-1.3V @ 1260+MHz, at which point it's 25% faster than a stock GTX680 at 1600P (at 1200MHz, the HD7970 is already 19% faster).

You don't have to put 1.25-1.3V into a 7970, though. Generally speaking, an after-market HD7970 @ 1150-1165MHz @ 1.175V uses about 225-238W of power (shown many times on this forum via 3-4 different sources outside of TPU). Comparing the GTX480's power consumption to the HD7970 is so off the mark, it's simply trolling or being ignorant.

From what I understand, a GTX 480 is almost equal to a 580 in power consumption.
 
No, you need to learn that there is no such card as a reference HD7970 GE edition. It was only sent to reviewers to gauge performance. All other metrics are irrelevant since you cannot buy this card at retail. Think of the reference HD7970 GE as a simple in-house concept that never made it to market. The power consumption or noise levels of a reference HD7970 GE are a meaningless metric for consumers. It has no relation whatsoever to real-world after-market HD7970 GE cards, especially since IDC proved in his CPU overclocking threads that the cooler you keep a chip, the lower the power consumption. Also, the GTX480 uses more power than 235W, more like 270W.

Meaning --> after-market HD7970 GE cards will consume less power by virtue of better cooling alone, then by virtue of having upgraded VRM/PCB components, and then by virtue of NOT using the 1.25V BIOS of the reference version. This has been repeated ad nauseam, but you ignore it.

An after-market HD7970 GE @ 1200MHz uses 45-50W more power than a stock GTX680 at the wall:

[chart: power consumption at the wall]


Keep in mind that a GTX680 @ 1290MHz can't beat an 1165MHz HD7970, and an overclocked 680 uses more than 200W of power. You guys are still stuck in the mythical land of the HD7970 using up 275-300W of power, while countless HD7970 owners have proven that this is wrong.

An after-market HD7970 @ 1150-1165MHz @ 1.175V uses about 225-238W of power (shown many times on this forum via 3-4 different sources outside of TPU). Comparing the GTX480's power consumption to the HD7970 is so off the mark, it's simply trolling.

I've shown you several reviews where partner cards used about the same as the reference 7970 GE. You just ignored that post. The 480 numbers are solid, so the comparison stands as correct. The only thing that is noteworthy is that some of those cards clock 10-15% higher. It's not my fault that there is no 7970 GE that actually uses the 1000/1050 clocks.

I don't know what you are making a fuss about with your long post. The GE partner cards use about as much as the 480 (your 225-238W compared to an averaged 235W for the 480), but they earn it by clocking higher, providing more performance. I see no problem here, honestly.

And you did not get my main point:
Always use multiple sources where possible. When it comes to power discussions here at AT, TPU is linked almost exclusively, even when other sources and analyses are readily available.
 
No, you need to learn that there is no such card as a reference HD7970 GE edition. It was only sent to reviewers to gauge performance. All other metrics are irrelevant since you cannot buy this card at retail. Think of the reference HD7970 GE as a simple in-house concept that never made it to market. The power consumption or noise levels of a reference HD7970 GE are a meaningless metric for consumers. It has no relation whatsoever to real-world after-market HD7970 GE cards, especially since IDC proved in his CPU overclocking threads that the cooler you keep a chip, the lower the power consumption. Also, the GTX480 uses more power than 235W, more like 270W.

Meaning --> after-market HD7970 GE cards will consume less power by virtue of better cooling alone, then by virtue of having upgraded VRM/PCB components, and then by virtue of NOT using the 1.25V BIOS of the reference version. This has been repeated ad nauseam, but you ignore it.

An after-market HD7970 GE @ 1200MHz uses 45-50W more power than a stock GTX680 at the wall, which is still less than a stock GTX580, never mind a stock 480.
Why do you think AMD's marketing arm is so poorly run? Surely they could have imagined the potential fallout.
 