OCUK: 290X "Slightly faster than GTX 780"


exar333

Diamond Member
Feb 7, 2004
8,518
8
91
This needs to be stickied on our forum.

TDP does not necessarily equate to real world power consumption. In some cases it may as a pure coincidence. In others, real world power usage may be higher or lower.

GTX480 has a 250W TDP but peaked at 272W in Crysis 2.

vs.

GTX780 has a 250W TDP but peaked at 222W in Crysis 2.

That's an incomplete example, at best. TDP is VERY important, especially with all the fuss people here on AT put on compute performance. On higher-end cards like Titan or the 7970 that tout good compute performance, you should see actual power consumption during high loads like compute at or around the TDP. The reason it is important is that some AIBs build their cards poorly: they cool the card just fine for general gaming, but can't keep it from throttling under true 100% loads.

This article seems to suggest a TDP of 270W, but I would like to see formal numbers and, obviously, real-world tests of both 'standard' power consumption, like gaming, and an all-out 100% load that exercises every area of the silicon.

http://wccftech.com/amd-radeon-r9-290x-hawaii-gpu-final-model-pictured-hot-cooler-design/
 
Last edited:

LegSWAT

Member
Jul 8, 2013
75
0
0
AMD needs to get its strategy right, because Nvidia is a few steps ahead all the time.
Nvidia is ahead with its flagship GPUs and their respective die sizes. Die size, however, is less of a critical lead than being a few nodes ahead in the fab process. So it seems AMD is going to compensate with higher area efficiency.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
That's an incomplete example, at best. TDP is VERY important, especially with all the fuss people here on AT put on compute performance. On higher-end cards like Titan or the 7970 that tout good compute performance, you should see actual power consumption during high loads like compute at or around the TDP. The reason it is important is that some AIBs build their cards poorly: they cool the card just fine for general gaming, but can't keep it from throttling under true 100% loads.

This article seems to suggest a TDP of 270W, but I would like to see formal numbers and, obviously, real-world tests of both 'standard' power consumption, like gaming, and an all-out 100% load that exercises every area of the silicon.

http://wccftech.com/amd-radeon-r9-290x-hawaii-gpu-final-model-pictured-hot-cooler-design/

Techpowerup does test the silicon at 100%.
It's just that RussianSensation didn't post the right picture.
TDP is the most critical spec of a GPU, so here is the newest chart:

http://tpucdn.com/reviews/Palit/GeForce_GTX_780_Super_JetStream/images/power_maximum.gif


Wait what?
So many rumors and conflicting information on the internet.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Now AMD comes dragging in behind with a new GPU that will match the GTX 780, which Nvidia has been cashing in on for many months now. AMD needs to get its strategy right, because Nvidia is a few steps ahead all the time.

Looking at AMD's overall financial position and NV's ability to continue churning out 550-560mm2 die GPUs, it's a miracle AMD is even competing. Sure it's 6 months late but they made a 438mm2 chip match a 561mm2 one from NV! NV should be blowing AMD completely out of the water. If AMD could afford to make a 561mm2 GCN Hawaii, NV's 780 would be like 30% slower. NV is lucky AMD cannot afford to build such large die GPUs ... yet.

This time AMD went from a 365mm2 Tahiti to a 438mm2 Hawaii. If AMD continues to push their own limits and, say, goes for a 475mm2 20nm chip, things will get very interesting.
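To put those die sizes in perspective, here is a rough back-of-the-envelope sketch in Python using only the mm2 figures quoted in this thread (nothing official from AMD or NV):

# Die areas quoted in this thread (mm^2)
tahiti = 365.0   # HD 7970
hawaii = 438.0   # R9 290X (rumored)
gk110  = 561.0   # GTX 780 / Titan

print(f"Hawaii vs. Tahiti: {hawaii / tahiti - 1:+.0%} die area")   # roughly +20%
print(f"GK110 vs. Hawaii:  {gk110 / hawaii - 1:+.0%} die area")    # roughly +28%

In other words, NV's chip has roughly 28% more silicon to work with, which is the point being made about AMD matching it with a much smaller die.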
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
A few percent faster than a GTX 780, pretty much like I predicted. I bet actual power draw is higher, so for the overclockers out there that could be an issue. Still, if it manages to come in at $549 then it will be a sure-fire success and will force Nvidia to make price adjustments. If it comes in at $599 then it's a nice welcome but more than anything ho-hum, since $650 OC'd cards that are faster out of the box than a stock Hawaii have already been on the market for quite some time.
 
Last edited:

SiliconWars

Platinum Member
Dec 29, 2012
2,346
0
0
Ah Cloudfire, forgotten your previous posts already? See this is why I tag stuff.

http://forums.anandtech.com/showpost.php?p=35063053&postcount=125

AMD have nothing to fight off the GTX 780 with.
Tahiti (7950/7970) is too inefficient, so they can't make a bigger core based on it to match the GTX 780.
It won't work. Stop dreaming and face reality. Why do you think AMD said themselves that the 7970 is gonna be their greatest high-end single GPU? Because they can't go any further.
Ok so what is your new argument again?
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
That's an incomplete example, at best. TDP is VERY important, especially with all the fuss people here on AT put on compute performance. On higher-end cards like Titan or the 7970 that tout good compute performance, you should see actual power consumption during high loads like compute at or around the TDP. The reason it is important is that some AIBs build their cards poorly: they cool the card just fine for general gaming, but can't keep it from throttling under true 100% loads.

You are missing the point entirely. TDP itself does NOT equal power consumption. In games, the real world power consumption can be less than, equal to or higher than TDP. In compute, the real world power consumption can be less than, equal to or higher than TDP.

In other words, looking at the card's TDP rating doesn't tell us what the maximum power consumption is. As an example, GTX480 with a TDP rating of 250W can even hit 320W at load.

If you look at the card's TDP, it tells you almost nothing about whether the R9 290X will use more or less power in games vs. the 780.

As far as some AIBs having poor coolers, etc., it's up to each consumer to research AIB offerings when making a purchase. Again, knowing the TDP value of a card tells us nothing about how an MSI Gaming will perform against a Gigabyte Windforce 3X or an Asus DCU.

Techpowerup does test the silicon at 100%.
It's just that RussianSensation didn't post the right picture.
TDP is the most critical spec of a GPU, so here is the newest chart:

http://tpucdn.com/reviews/Palit/GeForce_GTX_780_Super_JetStream/images/power_maximum.gif

Nope. TDP and real-world power consumption do not need to agree. You can have a card with a 250W TDP that only uses 225W in games (780), or you can have a card with a 250W TDP that uses 270W of power in the same game (480).

I don't think you understand anything I stated.

I clearly outlined that the TDP for the 480 and 780 was 250W. Then I specifically linked peak power consumption in the same game at the same settings - Crysis 2. In one case the 480 used 272W of power (exceeding TDP), and in the second case the 780 used 222W of power (below TDP). The maximum power consumption you linked is from FurMark, which no one cares about, since there is no real program that uses 99.9% of the GPU's internal resources simultaneously the way FurMark does.

TDP is just guidance for OEMs to help them with case airflow, power supply requirements and cooler designs. It is not an accurate assessment of the GPU's real world power consumption in games or compute. Some cards will never approach TDP in games/compute apps and in other cases they can easily exceed TDP in games/compute programs.

You should look up the definition of TDP.

The thermal design power (TDP), sometimes called thermal design point, refers to the maximum amount of power the cooling system in a computer is required to dissipate. The TDP is typically not the most power the chip could ever draw, such as by a power virus, but rather the maximum power that it would draw when running "real applications."

Again, I've noticed this mistake being made with every new round of GPU releases: people equate TDP with a videocard's real-world power consumption. TDP and the GPU's real-world power consumption in real applications do not need to agree. In fact, AMD and NV set TDP as guidance only. You can raise the power limit of the 780 from 250W to 265W, and similarly you can raise the TDP limit in PowerTune by 20% on AMD cards. But if I increase my PowerTune limit from 250W to 300W (+20%), it doesn't mean my overclocked 7970 will ever hit 300W in any app.
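To make the distinction concrete, here is a tiny illustrative Python sketch using only the numbers already quoted in this thread (the 250W TDP ratings and the Crysis 2 peak readings); it is not official AMD/NV data:

# TDP rating vs. measured peak draw in Crysis 2 (numbers quoted earlier in this thread)
cards = {
    "GTX 480": {"tdp_w": 250, "peak_w": 272},   # exceeds its TDP rating
    "GTX 780": {"tdp_w": 250, "peak_w": 222},   # stays well under its TDP rating
}

for name, c in cards.items():
    delta = c["peak_w"] - c["tdp_w"]
    print(f"{name}: TDP {c['tdp_w']}W, peak {c['peak_w']}W ({delta:+d}W vs. TDP)")

# PowerTune example: raising the limit 20% sets a 300W ceiling, not 300W of actual draw
print(f"PowerTune +20% on a 250W limit: {250 * 1.20:.0f}W cap")

Same 250W rating, opposite behavior in the same game, which is exactly why the rating alone tells you nothing about measured draw.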

--

Given the post you made, I suggest you learn more about GPU design, node maturity, and GPU terminology before stating your opinions as facts. All of your predictions so far have been proven false.
 
Last edited:

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
You are missing the point entirely. TDP itself does NOT equal power consumption. In games, the real world power consumption can be less than, equal to or higher than TDP. In compute, the real world power consumption can be less than, equal to or higher than TDP.

In other words, looking at the card's TDP rating doesn't tell us what the maximum power consumption is. As an example, GTX480 with a TDP rating of 250W can even hit 320W at load.

If you look at the card's TDP, it tells you almost nothing about whether the R9 290X will use more or less power in games vs. the 780.

As far as some AIBs having poor coolers, etc., it's up to each consumer to research AIB offerings when making a purchase. Again, knowing the TDP value of a card tells us nothing about how an MSI Gaming will perform against a Gigabyte Windforce 3X or an Asus DCU.



Ummm, no. I don't think you understand anything I stated.

I clearly outlined that the TDP for the 480 and 780 was 250W. Then I specifically linked peak power consumption in the same game at the same settings - Crysis 2. In one case the 480 used 272W of power (exceeding TDP), and in the second case the 780 used 222W of power (below TDP). The maximum power consumption you linked is from FurMark, which no one cares about, since there is no real program that uses 99.9% of the GPU's internal resources simultaneously the way FurMark does.

TDP is just guidance for OEMs to help them with case airflow, power supply requirements and cooler designs. It is not an accurate assessment of the GPU's real-world power consumption in games or compute. A card may never approach its TDP, and in other cases it can exceed it.

You should look up the definition of TDP.

The thermal design power (TDP), sometimes called thermal design point, refers to the maximum amount of power the cooling system in a computer is required to dissipate. The TDP is typically not the most power the chip could ever draw, such as by a power virus, but rather the maximum power that it would draw when running "real applications."

Again, I've noticed this mistake being made with every new round of GPU releases: people equate TDP with a videocard's real-world power consumption. TDP and the GPU's real-world power consumption in real applications do not need to agree.

You conveniently left out the last sentence you linked to...

;)
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
You conveniently left out the last sentence you linked to...

;)

I didn't leave it out, since I even quoted it. I suggest people read up on what TDP means.

"In some cases the TDP has been underestimated such that in real applications (typically strenuous, such as video encoding or games) the CPU has exceeded the TDP."

Again, TDP is only guidance, nothing more. A CPU or GPU can use less than, equal to or more than its rated TDP. If you want to find out the product's real-world power consumption, you test it in real-world apps.

I am not surprised though. Most people are so stuck in their ways that the same mistakes have been repeated over and over for the 10 years I've been on these forums: (1) 3DMark used to predict real-world gaming performance (fail 101), (2) TDP used as an accurate gauge of real-world gaming power consumption (proven false so many times over the years it would take 10 pages to show enough examples).

I breathed a sigh of relief when professional reviewers finally realized how ignorant they were to use FurMark in their power consumption testing. It was only after professional engineers at AMD/NV explained to them how stupid it was to use FurMark that most of them stopped using it.
 
Last edited:

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Ah Cloudfire, forgotten your previous posts already? See this is why I tag stuff.

http://forums.anandtech.com/showpost.php?p=35063053&postcount=125

Ok so what is your new argument again?

Oh so cute, checking my history :D

AMD slapped on more cores, and at the same time the TDP increased to a whopping 300W.

They have a 300W GPU matching a 250W GTX 780.
It's the same strategy they use with their CPUs: make up for a really inefficient architecture by increasing power consumption way above the competition's.

They get no applause from me :thumbsdown:
 
Last edited:

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
Titan is ~30% faster than the 7970 GHz; as speculated, Hawaii is also around 30% faster, so it's a great result for AMD for a same-node refresh.

Maybe, but that's a pretty meaningless metric. The 7970 GHz is a slightly OC'd 7970, which was released in early 2012. So it has been nearly two years since AMD's last new single-GPU flagship, and they've managed only a 30% increase in performance? That's pretty disappointing.

Hopefully, 4K displays will start driving AMD/Nvidia into a faster development cycle again.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
They have a 300W GPU matching a 250W GTX 780.

Link to R9 290X using 300W of real world power in games?
Link to 780 using 250W of real world power in games?
Link to AMD officially stating R9 290X has a 300W TDP rating?

You are making stuff up on the forums as usual without any factual evidence whatsoever. On top of it, you have no clue about GPU terminology and keep equating TDP to real world GPU power consumption in games. :rolleyes:

Again, I can raise the PowerTune limit from 250W to 300W on my 7970s. Will my 7970s @ 1.2GHz ever use 300W of power in real-world applications? Not a chance.

Maybe, but that's a pretty meaningless metric. The 7970 GHz is a slightly OC'd 7970, which was released in early 2012. So it has been nearly two years since AMD's last new single-GPU flagship, and they've managed only a 30% increase in performance? That's pretty disappointing.

The same is true for NV. Blame TSMC and physics. It's going to be harder and harder to keep shrinking transistors to lower nodes. Without a lower node, there is only so much you can do on 28nm. On the positive side, with performance increases slowing down in the GPU space between nodes, it means GPU upgrades are easier on the wallet ;)
 
Last edited:

skipsneeky2

Diamond Member
May 21, 2011
5,035
1
71
Oh so cute, checking my history :D

AMD slapped on more cores, and at the same time the TDP increased to a whopping 300W.

They have a 300W GPU matching a 250W GTX 780.
It's the same strategy they use with their CPUs: make up for a really inefficient architecture by increasing power consumption way above the competition's.

They get no applause from me :thumbsdown:

Will have to agree here, but I think if the price was much lower than the 780's, honestly people will buy the 290X all day long over the more efficient 780.

If the reviews show a good lead over the 780, then I could see the 300W being justifiable.
 

SiliconWars

Platinum Member
Dec 29, 2012
2,346
0
0
Oh so cute, checking my history :D

I didn't check your history; like I said, I tagged your post back then because I knew it was BS that I'd enjoy linking at the right time.

AMD slapped on more cores, and at the same time the TDP increased to a whopping 300W.
Oh no it didn't.

They have a 300W GPU matching a 250W GTX 780.
It's the same strategy they use with their CPUs: make up for a really inefficient architecture by increasing power consumption way above the competition's.

They get no applause from me :thumbsdown:
Actually, the only benchmarks we've seen so far have the 290X easily beating the 780 while consuming only a tiny amount more power, which can easily be explained by the 4GB of VRAM.

http://videocardz.com/45753/amd-radeon-r9-290x-slightly-faster-gtx-titan
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
@RussianSensation: Where on earth have I said that TDP = power consumption? I agree that you can exceed TDP by running FurMark and other extreme artificial benchmarks/tools.

I posted that graph because he asked about a 100% silicon power draw.

I didn't check your history; like I said, I tagged your post back then because I knew it was BS that I'd enjoy linking at the right time.

Oh no it didn't.

Actually, the only benchmarks we've seen so far have the 290X easily beating the 780 while consuming only a tiny amount more power, which can easily be explained by the 4GB of VRAM.

http://videocardz.com/45753/amd-radeon-r9-290x-slightly-faster-gtx-titan

Eh, no official benchmarks have been posted. If you're referring to that Chinese test, do you really believe the 290X will beat Titan by over 10%?

The specs are out there. It is a 300W GPU, up 50W from the 7970 GHz.
I don't think they increased it by 50W just for fun, right? So where is the great efficiency AMD is boasting about? Could it be trickery, in that they are only referring to a more efficient design, i.e. putting more cores in a smaller area? ;)
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
@RussianSensation: Where on earth have I said that TDP = power consumption? I agree that you can exceed TDP by running FurMark and other extreme artificial benchmarks/tools.

I posted that graph because he asked about a 100% silicon power draw.

You just said that the R9 290X is a 300W card. Where did you pull that number from? You then proceeded to link the maximum power consumption of a 780 running a power virus. In the real world a 780 uses 222-240W of power. If the R9 290X uses 240-260W of power, is that supposed to be the end of the world or something? Our overclocked CPUs use 100-150W more power than their stock levels.

These cries over a 30-40W power consumption difference on rigs that use 500W-600W are getting tiresome. You think enthusiasts who buy $600 GPUs care about another 30-40W of power vs. specific GPU features, drivers, performance in specific games, etc.? It's probably the last thing on their minds.

Frankly, since there isn't any credible review of the R9 290X's gaming power consumption, everything you are stating about AMD throwing on more cores and making their GPU less efficient is just conjecture.

The specs are out there. It is a 300W GPU, up 50W from the 7970 GHz.

Even if the R9 290X ends up with a TDP rating of 300W by some chance, that may still have little to do with its real-world power consumption in games/compute. We keep going back in circles because: (1) you do not understand what TDP means, yet you treat a 300W TDP rating as equal to 300W of real-world power usage; (2) you assume the R9 290X will have a TDP of 300W without any credible evidence, instead relying on rumors from some website like videocardz.

BTW, the 7970 GE has a TDP of 250W but uses less power at load in games/compute. Even at 1180MHz, the 7970 still uses less than 250W of power in video games.
 
Last edited:

VulgarDisplay

Diamond Member
Apr 3, 2009
6,193
2
76
Looked like it suffered from hitching now and again, but overall looked pretty smooth. They seemed pretty excited about it, though.

They have a right to be excited about that. That's three 4K monitors in Eyefinity :eek:

That's the equivalent of 12 1080p monitors running at a good framerate from two GPUs?!?!

I would imagine any hitching is due to the monitors probably having to run at 30Hz because of cable bandwidth issues.
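For anyone curious about the arithmetic behind that "12 monitors" equivalence, here is a quick Python check using standard 4K UHD and 1080p resolutions (nothing specific to this particular demo setup):

# Pixel-count check: three 4K UHD panels vs. 1080p panels
pixels_4k   = 3840 * 2160   # 8,294,400 pixels per 4K monitor
pixels_1080 = 1920 * 1080   # 2,073,600 pixels per 1080p monitor

print(3 * pixels_4k / pixels_1080)   # 12.0 -> three 4K panels push as many pixels as twelve 1080p panels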
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
AMD slapped on more cores, and at the same time the TDP increased to a whopping 300W.

They have a 300W GPU matching a 250W GTX 780.
It's the same strategy they use with their CPUs: make up for a really inefficient architecture by increasing power consumption way above the competition's.

There is no universal method of determining TDP. All manufacturers have their own methods. TDP is like tread wear ratings for tires: you can only compare the values if they come from the same manufacturer.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
http://videocardz.com/46006/amd-radeon-r9-290x-specifications-confirmed-costs-600

:whistle:

Even if the R9 290X ends up with a TDP rating of 300W by some chance, that may still have little to do with its real-world power consumption in games/compute. We keep going back in circles because: (1) you do not understand what TDP means; (2) you assume the R9 290X will have a TDP of 300W without any credible evidence, instead relying on rumors from some website like videocardz.

BTW, the 7970 GE has a TDP of 250W but uses less power at load in games/compute. Even at 1180MHz, the 7970 still uses less than 250W of power in video games.

Don't you ever get tired up there on your high horse?

Whether you like it or not, TDP and power consumption are linked. If TDP increases, power consumption will too.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
http://videocardz.com/46006/amd-radeon-r9-290x-specifications-confirmed-costs-600

:whiste:

Whether you like it or not, TDP and power consumption are linked. If TDP increases, power consumption will too.

On the most basic level only. TDP is not the same as actual power consumption.

"TDP values between different manufacturers cannot be accurately compared. While a processor with a TDP of 100 W will almost certainly use more power at full load than a processor with a 10 W TDP, it may or may not use more power than a processor from a different manufacturer that has a 90 W TDP."

There is no point in arguing with you since you clearly don't want to learn. The obvious flaw in all of your reasoning up to this point is the real-world fact that AMD and Intel even measure TDP differently. Therefore your idea of linking TDP to real-world power consumption is another opinion of yours stated as fact without evidence.


Ask him to first link a review that shows the 7970 GHz using 250W of real-world power in games. Neither the 780 nor the 7970 GHz uses 250W of power in games, despite a 250W TDP rating for both.
 
Last edited:

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
You compare the GTX 480 and GTX 780, TDP and power draw, and conclude that TDP and power consumption are different.

I agree with that, but have you ever considered that the GTX 480 and 780 are widely different in nature, i.e. different architectures and on entirely different nodes?

The 7970 and R9 290X are the exact same architecture and still on 28nm.

How can you not see your logical flaws?

Unless AMD has made an incredible discovery in making GCN more efficient (way more than GK104 > GK110), power consumption will go up.
 
Last edited:

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
First, congratulations on coming out of retirement at this very odd time to tell us all of this. I remember your posts on OCN.

Christ dude, go look up what TDP means. TDP is not power consumption. Furthermore, the leaks have shown that the 290X uses less power at load than the Titan, which would indicate that the 290X did improve efficiency over Tahiti. Besides which, every site that has mentioned a "300W TDP" only did so after noting that the card has an 8-pin + 6-pin connector configuration, which can deliver up to 300W. So basically what they're doing is assuming it is a 300W TDP, when all prior leaks indicated a 250W TDP.

The TDP merely indicates how capable the cooling solution must be when running "average" applications. There is no industry standard for what TDP means. It can mean anything. Nvidia's TDP measurement is different from Intel's. Intel's is different from AMD's. AMD's is different from both Nvidia's and Intel's. The point here is that given the definition of TDP (which you're apparently NOT aware of) and the size and characteristics of the shroud on the 290X, it is not a 300W TDP unless you blindly and ignorantly just look at the power connectors (which provide up to 300W) and assume the TDP is the same. The shroud used on the 290X is roughly the same as the one on the 7970 with slightly different aesthetics; thus it can't be a 300W TDP, as it is not a massive shroud redesign. This is aside from the fact that the 290X uses less power at load than the Titan, and the Titan has a 250W TDP. But the bottom line is that TDP is not the same as maximum power consumption.
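For reference, here is the back-of-the-envelope Python math behind reading "300W" off the power connectors (using the PCI Express spec limits of 75W from the slot, 75W per 6-pin and 150W per 8-pin); it only bounds what the board can draw, and says nothing about what it actually draws:

# Maximum board power deliverable by the rumored connector layout (PCIe spec limits)
PCIE_SLOT_W = 75    # x16 slot
SIX_PIN_W   = 75    # one 6-pin PEG connector
EIGHT_PIN_W = 150   # one 8-pin PEG connector

connector_ceiling = PCIE_SLOT_W + SIX_PIN_W + EIGHT_PIN_W
print(f"8-pin + 6-pin + slot: {connector_ceiling}W ceiling")   # 300W of delivery headroom, not a measured TDP or draw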
 
Last edited:

Durvelle27

Diamond Member
Jun 3, 2012
4,102
0
0
You compare the GTX 480 and GTX 780, TDP and power draw, and conclude that TDP and power consumption are different.

I agree with that, but have you ever considered that the GTX 480 and 780 are widely different in nature, i.e. different architectures and on entirely different nodes?

The 7970 and R9 290X are the exact same architecture and still on 28nm.

How can you not see your logical flaws?

Unless AMD has made an incredible discovery in making GCN more efficient (way more than GK104 > GK110), power consumption will go up.

Where did you get that the 7970 and 290X use the exact same architecture? The R9 290X uses the new Hawaii chip and GCN 2.0.