OCUK: 290X "Slightly faster than GTX 780"


Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Looks like GPUs are now experiencing the stagnation we get with CPUs. Someone with an OC'd 2500K or 2600K from Q1 2011 has really no reason to upgrade to anything from the past 2.5 years. GPUs are starting to get that way. Got an original 7970 with a light OC from December 2011? Unless you're running more than 1080p, there's not much reason to upgrade to even a 780/290X.

And then we have the super-weak new consoles: the pathetic XB1 and the average PS4, which with their netbook-class CPU performance and strictly midrange GPUs will doom us to 5+ more years of stagnant ports.

GPUs are already massively parallel by design. This is stagnation of the underlying process node; 20nm at the same die size should be a nice jump in performance.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Really? Because people were trying to say it was sold at retail, because of the whole "aftermarket extreme 780 bias in the face of no reference GHz, yet still compared" issue.

As has already been stated on multiple occasions, if AMD only released reference R9 290X cards, then ignoring the existence of after-market 290X cards would be valid. There is no evidence, though, to suggest that there will be no after-market 290X cards.

There is no bias; it's just that you are not comparing apples to apples and are trying to twist facts:

Case 1: Card A (7970Ghz) was never released in reference form at retail. Therefore, using Card A for measuring temperature, noise levels and power consumption is a theoretical exercise only. Can you find one person on our forums who bought this reference-design Card A at retail?

There are plenty of reviews of after-market 7970Ghz cards that show their real-world power consumption, temperatures and noise levels, which completely negates the idea of linking a non-existent "reference 7970Ghz" that no one has in their system. If someone wants to link after-market 7970Ghz power consumption vs. after-market 780s, by all means. That's not biased.

His comments that HD7970Ghz after-market cards used more power than a Titan in games are inconclusive at best, and yet he ignores all these other reviews:

http://www.xbitlabs.com/images/graphics/nvidia-geforce-gtx-780/zpw-xbt.png
http://techreport.com/r.x/geforce-gtx-780/power-load.gif
http://www.sweclockers.com/recension/17041-nvidia-geforce-gtx-780/15#pagehead
http://www.guru3d.com/articles_pages/geforce_gtx_780_review,9.html

Case 2: Card B (R9 290X) is going to be released in both reference and after-market form, just like Card C (GTX 780). It stands to reason, then, that using ONLY the reference-design R9 290X in reviews while allowing for the inclusion of extreme after-market 780 cards such as the $690-750 Galaxy HOF, MSI Lightning and EVGA Classified is an irrelevant comparison, since there will be similar after-market R9 290X cards for sale shortly.

Almost everyone on our forums understands these facts, except you. Of course I shouldn't expect any less, since for 18 months you linked reference-design 7970Ghz temperatures, noise levels, poor overclocking and high power consumption despite many of us telling you that our 7970s can run at 1100-1150MHz on the stock voltage of 1.175V vs. your insistence that we must all use the Tahiti XT2 BIOS at 1.256V to hit 1050MHz. You totally ignored all of the real-world metrics of after-market HD7970 cards, all of the comments from real-world HD7970 owners about their overclocking experiences, and instead for 18 straight months linked a reference-design 7970Ghz that was not for sale.

I don't remember you linking a single after-market 7970Ghz against an after-market 680 in power consumption, noise levels or otherwise even once! In fact, we even told you that it took a max-overclocked 670 to match a stock 7970Ghz, and at that point their power consumption was equal at equal performance, and yet you still ignored it, deciding to focus on a non-existent 7970Ghz card. :thumbsdown:

Regardless, it's amusing to see someone claiming that the R9 290X will use 300W of real-world power based on an unconfirmed 300W TDP rating; and another refusing to acknowledge the existence of after-market R9 290X cards as if they were magical unicorns, insisting on comparing a reference 290X against a binned after-market card like the HOF 780. Ignorance at its finest.

Warning issued for personal attack.
-- stahlhart
 
Last edited by a moderator:

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Hahaha look who is trying really hard.
Once again writing a whole essay about his beloved 7970, trying to make people ignore the fact that a GTX 780 beat the crap out of his inefficient GPU.

Warning issued for personal attack.
-- stahlhart
 
Last edited by a moderator:
Feb 19, 2009
10,457
10
76
Yeah, all I can hear is noise.
So much to comment on, but meh, I won't bother. They always spin the facts.

Logic and sound reason backed by evidence is too hard for your brain to comprehend; that's why it's all noise.

Your entire existence here on the AT forum has been nothing but trolling, thread crapping and ad hominem. For you to diss Russian is a disgrace.
 
Last edited:
Feb 19, 2009
10,457
10
76
Lies and slander backed by system consumption, no reason to give merit where none is due.

Is there a reason why NV cards cause the entire system consumption to go up so much? There's a valid reason there, and perhaps it's important we recognize it, because at the end of the day you don't play games on the GPU alone in isolation.

Edit: You yourself have aftermarket 7950s, does it indeed use less power than the reference 7950 Boost?
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
Pretty simple: using a 4.8GHz SB chip, which isn't sipping power by today's standards, with a GPU that is 30-35% faster is going to draw more power even if the GPU uses the same power as the slower card, due to more strain on the system via CPU cycles.
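As a rough illustration of that point (all numbers below are assumptions, not measurements), here's a back-of-the-envelope sketch of how a faster GPU can raise wall-socket readings purely by pushing the CPU harder:

Code:
# Hypothetical figures only: a faster GPU raises frame rate, which raises CPU
# utilisation, so part of the wall-socket delta between two cards is really
# extra CPU/memory power rather than GPU power.

def system_power(gpu_w, cpu_idle_w, cpu_load_w, cpu_util, platform_w=60):
    """Wall power = GPU draw + CPU draw scaled by utilisation + fixed platform."""
    cpu_w = cpu_idle_w + (cpu_load_w - cpu_idle_w) * cpu_util
    return gpu_w + cpu_w + platform_w

# Same 220 W GPU draw in both cases; only the CPU load changes.
slow_card = system_power(gpu_w=220, cpu_idle_w=20, cpu_load_w=120, cpu_util=0.60)
fast_card = system_power(gpu_w=220, cpu_idle_w=20, cpu_load_w=120, cpu_util=0.85)
print(f"slower card: {slow_card:.0f} W, faster card: {fast_card:.0f} W, "
      f"delta: {fast_card - slow_card:.0f} W")
# ~25 W more at the wall without the GPU itself drawing a single extra watt.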
 
Feb 19, 2009
10,457
10
76
Pretty simple: using a 4.8GHz SB chip, which isn't sipping power by today's standards, with a GPU that is 30-35% faster is going to draw more power even if the GPU uses the same power as the slower card, due to more strain on the system via CPU cycles.

Yes, we can agree on that: a faster GPU will push the CPU and memory subsystem harder, so overall system consumption goes up.

But what about the other points raised, i.e. reference vs. aftermarket cards, where the conditions are entirely different? Surely you must concede that using reference numbers (when most cards sold are aftermarket) and applying them in general is a falsehood.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Lies and slander backed by system consumption, no reason to give merit where none is due.

Not to mention that neither of them could post any information showing that CPU usage went up with GeForce cards, or that the threads they linked actually back that claim up.

Also, let's compare a rebuilt and binned OEM GPU against an Nvidia GPU. Neither of them noticed in their rage that I compared an Nvidia 780 against an AMD 7970Ghz. I mean, the GPUs AMD makes themselves have no merit, right? All the first reviews are done on stock Nvidia/AMD cards, so they are garbage as well. Also, let's compare system power consumption across two different tests, Anandtech's Battlefield 3 against udteam's Firestrike. It's not as if the different CPU and other hardware, or the fact that Firestrike perhaps uses more or less of the CPU, makes GPU power consumption really hard to extract from these tests. No sir, we don't want to look at TechPowerUp, which actually measures the real GPU power consumption, because that puts our AMD GPUs in a bad light.

"You spin me baby..."
There is a song that goes like that
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Hahaha look who is trying really hard.
Once again writing a whole essay about his beloved 7970, trying to make people ignore the fact that a GTX 780 beat the crap out of his inefficient GPU.

My beloved 7970s? This isn't about defending my 7970s but about discussing the two points that were brought up: power consumption and after-market 7970Ghz cards, since there is no such thing as a reference 7970Ghz. As far as the 780 being more efficient, that's nice, but all my 7970s cost me $0 after BTC and have saved up enough for 10+ years of free GPU upgrades. So next time you waste $650 on a new GPU every 2-3 years, I'll be thanking my "inefficient" 7970s for yet another free GPU upgrade. :D

You still haven't grasped the part where the 300W TDP was never confirmed by AMD, nor is it logical to conclude that a 300W TDP = 300W real-world power usage. Carry on then.
 

stahlhart

Super Moderator Graphics Cards
Dec 21, 2010
4,273
77
91
Any further taunting and direct provocation in this thread is going to be punished more severely. Get it under control immediately.
-- stahlhart
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
But what about the other points raised, i.e. reference vs. aftermarket cards, where the conditions are entirely different? Surely you must concede that using reference numbers (when most cards sold are aftermarket) and applying them in general is a falsehood.


I don't think reference vs non reference matters, at least not as much as some would think.

In the end I think it comes down to the silicon on the board. It's in the same vein as OC potential, except now you're talking about how leaky a chip is and what voltage the OEM set. For instance, some 7950s come at 1.25V, others at 1.18V, others lower still. There are far bigger factors in stock power consumption for AMD than reference vs. non-reference, is what I'm trying to say.

And I'm not talking about GPUz ASIC, that's nonsense.
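To put that voltage point in rough numbers, a minimal sketch assuming the usual dynamic-power approximation P ≈ C·V²·f and a made-up 150 W baseline (leakage ignored):

Code:
# Why the OEM-set voltage bin can swing power more than the cooler design.
# Uses the common dynamic-power approximation P ~ C * V^2 * f with
# hypothetical numbers; static (leakage) power is ignored for simplicity.

def scaled_power(base_w, base_v, new_v):
    """Scale a baseline board power by the V^2 ratio at the same clock."""
    return base_w * (new_v / base_v) ** 2

base_w, base_v = 150.0, 1.09   # assumed 7950 board power at its lowest voltage bin
for volts in (1.09, 1.18, 1.25):
    print(f"{volts:.2f} V -> ~{scaled_power(base_w, base_v, volts):.0f} W")
# ~150 W vs ~176 W vs ~197 W at the same clock, purely from the shipped
# voltage -- a bigger spread than typical reference-vs-custom differences.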
 
Last edited:

Abwx

Lifer
Apr 2, 2011
11,804
4,726
136
Is there a reason why NV cards cause the entire system consumption to go up so much? There's a valid reason there, perhaps its important we recognize it, because at the end of the day, you don't play games on the GPU alone in isolation.

Edit: You yourself have aftermarket 7950s, does it indeed use less power than the reference 7950 Boost?

Actually his link is more than doubtful, with said site expressly noting that they measured a lower TDP with an overclocked 780 than with a stock one.

With such a discrepancy one would immediately check the test setup and protocols, but no, they have no doubt that the reason couldn't be their own questionable competence...
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Let's even make up some random number for the R9 290X's power consumption and place it at 275W in games. Compared to that, a reference GTX 780 uses 222W and a Titan uses 238W if we only look at TPU (as if no other reviews in the world exist). Do people here honestly think that someone who is spending $650 on a GPU will care about a 40W difference in power consumption on a rig that will likely have an overclocked i5 or even i7?

What about someone who buys GPU A that uses 275W of power and puts it into an MSI GD65 board, vs. someone else who buys GPU B that uses 238W of power and puts it into a Gigabyte UD5H?

power-7.png


I am pretty sure that someone spending $600-700 on a GPU and checking this forum is going to want to see GTX 780 OC vs. R9 290X OC in popular games. They aren't going to give a second thought to a 40W power consumption difference between them, but they will care about a 10-15% performance difference.

Further, someone who is willing to buy a GPU for $600+ will have a modern case and modern components, in which case dissipating 40W of extra power is not a factor. I would even say that the full voltage unlock on the R9 290X is going to be a bigger selling point for enthusiasts. Yet this isn't even brought into the discussion; instead a 300W TDP is being used to project real-world power consumption... and then a non-existent reference 7970Ghz card is used to forecast that the R9 290X will have "horrible" performance/watt, even though 28nm node maturity is not taken into account.
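For perspective on how little that 40W gap amounts to, a quick sketch (the hours and electricity price are assumptions, not data from any review):

Code:
# Rough yearly cost of a 40 W gaming power gap. All inputs are assumptions.
extra_watts = 40          # hypothetical power difference between the two cards
hours_per_week = 20       # assumed gaming time
price_per_kwh = 0.12      # assumed electricity price in USD

kwh_per_year = extra_watts / 1000 * hours_per_week * 52
print(f"~{kwh_per_year:.0f} kWh/year, about ${kwh_per_year * price_per_kwh:.2f}/year")
# ~42 kWh, on the order of $5 a year -- trivial next to a $600-700 card
# (heat and noise are a separate question).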
 
Last edited:

Z15CAM

Platinum Member
Nov 20, 2010
2,184
64
91
www.flickr.com
I am pretty sure that someone spending $600-700 on GPU and checking this forum is likely going to want to see GTX780 OC vs. R9 290X OC in popular games. They aren't going to give a 2nd thought if there is a 40W power consumption difference between them but a 10-15% performance difference ........... I would even say that full voltage unlock on R9 290X is going to be a bigger seller for enthusiasts..
Totally agree, especially if the R9 290X is $120 cheaper and more apt for 4K PPS 1440p 120Hz PLS Display gaming than, say, a $720 eVGA GeForce GTX 780 DUAL Classified.

The possibility of Mantle taking off is no big deal to me, but if it does, it's another plus.

Really looking forward to the upcoming R9 290X reviews.
 
Last edited:

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Told you guys

While AMD’s new R9 280X is pretty much based on the HD 7970 GHz Edition AMD is still expecting its AIB partners to come up with fresh looking designs so that consumers feel they are getting something new and different. The XFX R9 280X pictured above, via VideoCardz, is a prime example of this and even though it uses what is essentially a HD 7970 GHz Edition GPU, XFX have still outfitted a totally new cooling design to it

http://www.eteknix.com/xfx-radeon-r9-280x-double-dissipation-graphics-card-spotted/


Also, October 15th is the date the R9 290X hits the market, with reviews coming then too I suppose, since the NDA is lifted on that date. :)

R9-290X-launch-date.png


http://videocardz.com/46186/amd-radeon-r9-290x-officially-released-october-15th
 
Last edited:

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,250
136
Told you guys



http://www.eteknix.com/xfx-radeon-r9-280x-double-dissipation-graphics-card-spotted/


Also, October 15th is the date the R9 290X hits the market, with reviews coming then too I suppose, since the NDA is lifted on that date. :)

R9-290X-launch-date.png


http://videocardz.com/46186/amd-radeon-r9-290x-officially-released-october-15th

That is a sexy-looking card. Hard to tell from the angles, but it does look like the card is less than two slots thick, at least at the shroud. Maybe 1.5 slots?
 

MrK6

Diamond Member
Aug 9, 2004
4,458
4
81
Looking at AMD's overall financial position and NV's ability to continue churning out 550-560mm2 die GPUs, it's a miracle AMD is even competing. Sure it's 6 months late but they made a 438mm2 chip match a 561mm2 one from NV! NV should be blowing AMD completely out of the water. If AMD could afford to make a 561mm2 GCN Hawaii, NV's 780 would be like 30% slower. NV is lucky AMD cannot afford to build such large die GPUs ... yet.

This time AMD went from 365mm2 Tahiti to a 438mm2 Hawaii. If AMD continues to push their own limits and say goes for a 475mm2 20nm chip, things will get very interesting.
If AMD is going to take a hit on die size, it needs to pass the savings on to customers to maintain a positive image in the marketplace. Someone else mentioned that NVIDIA made a good market play with the Kepler launch by pricing the GTX 680 at $500; AMD could certainly learn from that. That said, I'm wondering if AMD finally got someone with a brain at the helm and they're gearing up their business in light of the new consoles hitting the market. A lot of their R&D could be spent coordinating product lines in the next year, and this is just a stopgap. Or, as RS said, this could be their limit and the best they can do atm (or maybe both are true; time will tell).
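Just to put the die-size figures quoted above in proportion (areas taken straight from the quoted post):

Code:
# Die-size ratios behind the quoted comparison (areas in mm^2 from the post).
tahiti, hawaii, gk110 = 365, 438, 561

print(f"Hawaii vs Tahiti: +{(hawaii / tahiti - 1) * 100:.0f}% area")
print(f"GK110 vs Hawaii:  +{(gk110 / hawaii - 1) * 100:.0f}% area")
# Hawaii grew ~20% over Tahiti, while GK110 is still ~28% larger than Hawaii --
# which is why matching it with the smaller die reads as an achievement.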
Ah Cloudfire, forgotten your previous posts already? See this is why I tag stuff.

http://forums.anandtech.com/showpost.php?p=35063053&postcount=125

Ok so what is your new argument again?
Pwnt.
 

Teizo

Golden Member
Oct 28, 2010
1,271
31
91
It's interesting to see people supporting AMD cards use the 'TDP doesn't matter' argument after giving Nvidia heck over the 480 back in the day.

Either efficiency matters or it doesn't.
 

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
It's interesting to see people supporting AMD cards use the 'TDP doesn't matter' argument after giving Nvidia heck over the 480 back in the day.

Either efficiency matters or it doesn't.

TDP doesn't matter when you have no idea what it actually is for a card; that's the argument. Not that efficiency doesn't matter.