OCUK: 290X "Slightly faster than GTX 780"


Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Where did you get that the 7970 and 290X use the exact same architecture? The R9 290X uses the new Hawaii chip and GCN 2.0.

Its an improved GCN, yes. But its still GCN. Its not a new architecture.

AMD is rebadging all other GPUs, except the 290 and 290X. Why? Wasn't it that much improved?

First, congratulations on coming out of retirement at this very odd timing to tell us all of this. I remember your posts on OCN.

Christ dude, go look up what TDP means. TDP is not power consumption. Furthermore, the leaks have shown that the 290X uses less power at load than the Titan, which would indicate that the 290X did improve efficiency over Tahiti. Besides which, every site that has mentioned a "300W TDP" only did so after noting that the card has an 8-pin + 6-pin connector configuration, which can draw 300W. So basically they're assuming a 300W TDP, when all prior leaks indicated a 250W TDP.

TDP merely indicates how capable the cooling solution must be when running "average" applications. There is no industry standard for what TDP means; Nvidia's TDP measurement is different from Intel's, and Intel's is different from AMD's. The point here is that given the definition of TDP (which you're apparently NOT aware of) and the size and characteristics of the shroud on the 290X, it is not a 300W TDP unless you blindly look at the power connectors (which provide up to 300W) and assume the TDP matches. The shroud used on the 290X is roughly the same as the one on the 7970 with slightly different aesthetics, so it can't be a 300W TDP, as that would require a much larger cooler redesign. This is aside from the fact that the 290X uses less power at load than the Titan, and the Titan has a 250W TDP. The bottom line is that TDP is not the same as maximum power consumption.

Congratulations on proving you haven't learned how to read and have anger issues.
I just stated in the post above yours (lol) that TDP and power consumption are not the same. The link I posted stated TDP = 300W for the 290X. Don't blame me if that is incorrect.

Good job buddy.
 

Rvenger

Elite Member
Super Moderator
Video Cards
Apr 6, 2004
6,283
5
81
Balla was right when he said silly season is back. Time for me to go to a different forum because of a few select people derailing a thread. This is not an "Nvidia has better TDP" thread; it's a performance thread, for crying out loud. Rvenger out.
 

LegSWAT

Member
Jul 8, 2013
75
0
0
Its an improved GCN, yes. But its still GCN. Its not a new architecture.
If it's an improved GCN transistor re-layout at its core, it can still be a new architecture. It has a new, supposedly more efficient memory controller, and more render engines. Does that fit "new architecture" in your terms, if you have any definition of it at all?
 

Pariah

Elite Member
Apr 16, 2000
7,357
20
81
The link I posted stated TDP = 300W for the 290X. Don't blame me if that is incorrect.

You mean the link that posted the guess of another site?



"TO BE CONFIRMED :

Clock speeds, TDP and die size."

"** The same sites claims that R9 290X will consume up to 300W, which is more than we&#8217;ve thought (260W)"



That's your idea of proof of anything?
 

imaheadcase

Diamond Member
May 9, 2005
3,850
7
76
Seriously this thread derailed fast. When are official reviews coming?

They have 9k of them for BF4 pre-orders on the 4th, so I would venture to guess around then? But who knows; from what I've heard, sites don't even have cards to review, just what AMD told them under NDA.

I would venture to guess you'll see official reviews in early October, because BF4 comes out on the 29th.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
AMD is doing rebadge on all other GPUs, except 290 and 290X. Why?

You might be wrong

http://beyond3d.com/showpost.php?p=1789377&postcount=850

"None of these products announced are rebrands. "

So according to Dave Baumann of AMD, the existing chips seem to have undergone slight tweaks at an ASIC level, something akin to the GTX 680 to GTX 770. I'm thinking it's a newer stepping to lower load power and improve perf/watt.
 


CakeMonster

Golden Member
Nov 22, 2012
1,384
482
136
A new stepping (of which there might have been several already, or could easily have been) is not a new chip. It would indeed be a rebrand. It's OK to release several models of the same chip with different clock speeds and memory, but when you claim you have a new chip and all you have is a new stepping... it's a rebrand.
 

CakeMonster

Golden Member
Nov 22, 2012
1,384
482
136
The memory speeds alone don't seem to do much for performance on this generation, nor on the last. The 79xx cards can easily OC to the 300GB/s++ speeds that AMD bragged about in the presentation, and they've been out for almost two years.

Ah, hello? It's got an additional memory channel! That's +33% more bandwidth at equal speeds, comparing apples to apples. What's stopping you from OCing Hawaii the same way as Tahiti?

No. When they advertise the bandwidth in pure throughput (GB/s), that's the actual bandwidth you get. I don't doubt that Hawaii will OC too, and because of the 512-bit bus it will then end up with slightly higher actual bandwidth.

But AMD put the 300GB/s number out there, and that is simply unimpressive. Possibly it will allow them to use cheaper memory, since the 512-bit bus still gets them to where a 7970 will overclock with ease. But advertising this number almost two years after the 79xx release means the actual memory bandwidth has practically not improved at all.
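The bus-width arithmetic in the posts above is easy to check: peak GDDR5 bandwidth is just bus width in bytes times the effective data rate. A minimal sketch, using the commonly reported clocks for these cards (the specific data rates are assumptions, not figures from this thread):

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer x transfers per second."""
    return (bus_width_bits / 8) * data_rate_gtps

# 7970 (Tahiti): 384-bit bus at 5.5 GT/s effective
tahiti = bandwidth_gbs(384, 5.5)   # 264.0 GB/s
# Hawaii: 512-bit bus at 5.0 GT/s effective
hawaii = bandwidth_gbs(512, 5.0)   # 320.0 GB/s

# Same data rate, wider bus: 512/384 gives the +33% mentioned above
gain = bandwidth_gbs(512, 5.5) / bandwidth_gbs(384, 5.5) - 1
print(f"{gain:.1%}")  # 33.3%
```

This also shows why the advertised ~300GB/s can look unimpressive: the wider bus lets AMD hit it with a lower data rate than an overclocked 7970 needs.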
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I think he said it best:




There will be games taking advantage of this Mantle stuff.
Activision and EA aren't small potatoes in the gaming industry, and these are just the ones we know of so far.

There could be entire swarms of other developers out there too, planning on doing it or actively doing it atm.

I predict there will be more games using Mantle than PhysX. And Mantle improves performance across the board on all graphics features.
 

pcslookout

Lifer
Mar 18, 2007
11,926
146
106
I will never pay for top-of-the-line video cards. I will get mid-to-high-end ones that are cheaper by $100 or more, and that is the maximum I will spend.
 

pcslookout

Lifer
Mar 18, 2007
11,926
146
106
Looked like it suffered from hitching now and again, but overall looked pretty smooth. They seemed pretty excited about it, though.

It sure did.

I would love three monitors, but most graphics cards can't handle that on their own unless you run dual or triple SLI. I don't want to SLI.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
I predict there will be more games using Mantle than PhysX. And Mantle improves performance across the board on all graphics features.

Problem is, a lot of us already have enough performance; what we want are cool features and more immersion.

AMD's answer to slower performance is Mantle; their answer to PhysX is sound.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Let's do some thinking here.

R9 290X: 2816 cores, TDP = ?
7970: 2048 cores, TDP = 250W

2816/2048 = 1.375 = +37.5%

--------------------

Let's say the TDP of the new 290X is 250W. That would mean AMD improved GCN's efficiency by 37.5%: same architecture as the 7970, improved, but still on 28nm.
Does this sound plausible?

Let's say the R9 290X is 300W.
The 7970 is still 250W.

300W/250W = 1.20 = +20%

Looking purely at TDP, it's still missing 17.5% to reach the core count of the 290X. Now what if AMD got that from the improved GCN?
Does that make any sense? That they improved GCN by roughly 17% on the same 28nm?

Looking at this, which would be the most plausible option? I'd say the last one.
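The ratio arithmetic above can be sketched in a few lines. One nit: subtracting percentages (37.5% - 20% = 17.5%) is an approximation; taken multiplicatively, the efficiency gap the 300W scenario implies is a bit smaller:

```python
# Core-count and TDP ratios from the post above
cores_290x, cores_7970 = 2816, 2048
tdp_7970 = 250.0

core_ratio = cores_290x / cores_7970      # 1.375, i.e. +37.5% cores
tdp_ratio_if_300w = 300.0 / tdp_7970      # 1.20, i.e. +20% TDP headroom

# Gap the post attributes to GCN efficiency gains if the 290X is 300W:
efficiency_gap = core_ratio / tdp_ratio_if_300w - 1
print(f"{efficiency_gap:.1%}")  # 14.6%, not 17.5%
```

Either way, the qualitative conclusion is the same: the 300W scenario requires a much smaller efficiency improvement than the 250W one.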
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
I predict there will be more games using Mantle than PhysX. And Mantle improves performance across the board on all graphics features.

I predict neither will be used as much as DX11+, so what does it even matter? I also predict Nvidia will still sell more cards.

Problem is, a lot of us already have enough performance; what we want are cool features and more immersion.

AMD's answer to slower performance is Mantle; their answer to PhysX is sound.

This too. If you can't beat their hardware one on one, you develop software that can "force it to win benchmarks, wee!" That's all it is, because if it's close to 780 performance, it will be plenty for games. They just want the graphs to look longer under their brand, which means about zero when you are actually playing the game. "Ooh, look at that guys, it says 80fps. *bang* Oh sorry, I was checking out the fps ticker when I got blown up and we lost. My bad." I still think this is simply a way for the DICE guys to easily port their engine around to different platforms so they can push out yearly releases of all the games they have to work on.
 

Leadbox

Senior member
Oct 25, 2010
744
63
91
Let's do some thinking here.

R9 290X: 2816 cores, TDP = ?
7970: 2048 cores, TDP = 250W

2816/2048 = 1.375 = +37.5%

--------------------

Let's say the TDP of the new 290X is 250W. That would mean AMD improved GCN's efficiency by 37.5%: same architecture as the 7970, improved, but still on 28nm.
Does this sound plausible?

Let's say the R9 290X is 300W.
The 7970 is still 250W.

300W/250W = 1.20 = +20%

Looking purely at TDP, it's still missing 17.5% to reach the core count of the 290X. Now what if AMD got that from the improved GCN?
Does that make any sense? That they improved GCN by roughly 17% on the same 28nm?

Looking at this, which would be the most plausible option? I'd say the last one.

Look at Bonaire: 40% more SPs and higher core and memory clocks, netting you 30% more performance while staying in roughly the same power envelope thanks to improved power states (80W for the 7770 vs 85W for Bonaire).
It's not as impossible as you're trying to make it out to be to do the same with Hawaii.
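The Bonaire comparison implies a concrete perf/watt figure, which is worth spelling out. A quick sketch using only the numbers quoted in the post above:

```python
# Perf-per-watt change implied by the Bonaire vs. 7770 figures quoted above
perf_gain = 1.30           # ~30% more performance
power_ratio = 85 / 80      # 85W Bonaire vs. 80W 7770

perf_per_watt_gain = perf_gain / power_ratio - 1
print(f"{perf_per_watt_gain:.1%}")  # 22.4%
```

A ~22% perf/watt gain within the same process node is roughly the scale of improvement the 300W Hawaii scenario above would need, which is the point being made.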
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
A Korean magazine already benchmarked the R9 290X and showed it had better power consumption than the Titan, and there were leaks at Chiphell showing the same. I'm sure you can scream "fabrication," and that may be true, but seeing as the same posters had accurate leaks for the GTX 680, 7970 and GTX Titan, I'd assume there is a basis in reality to the leaked benchmarks and power figures at Chiphell.

Seeing as the Titan has a 250W TDP, TDP does not equal power consumption, and the shroud cooler (which is what TDP represents: cooling capability) is roughly the same as the 7970's aside from minor aesthetic differences, I think it's safe to assume the actual TDP of the R9 290X is 250W.

Additionally, you'd think this is common sense, but core count does not scale linearly with TDP, because TDP is NOT power consumption (apparently, some folks STILL don't understand this). The GTX 770 has around 1,200 fewer cores than the GTX Titan, yet its TDP is almost the same: ~230W vs 250W for the Titan. So trying to extrapolate TDP from core count is ridiculous; it doesn't scale that way, and we're back to square one: TDP is not maximum power consumption.

This is all off topic for this thread, though. Back to the regularly scheduled programming, I guess.
 

Leadbox

Senior member
Oct 25, 2010
744
63
91
This too. If you can't beat their hardware one on one, you develop software that can "force it to win benchmarks, wee!" That's all it is, because if it's close to 780 performance, it will be plenty for games. They just want the graphs to look longer under their brand, which means about zero when you are actually playing the game. "Ooh, look at that guys, it says 80fps. *bang* Oh sorry, I was checking out the fps ticker when I got blown up and we lost. My bad." I still think this is simply a way for the DICE guys to easily port their engine around to different platforms so they can push out yearly releases of all the games they have to work on.

You're all over the place with this post.
If the "graphs look longer under their brand" means more than zero, it's about selling GPUs; that's where it matters to them.
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
Anyone else notice the contradiction?

No, because by the time this is relevant there will be another card that brute-forces past it. This is always the case.

You're all over the place with this post.
If the "graphs look longer under their brand" means more than zero, it's about selling GPUs; that's where it matters to them.

That's not all over the place... that's the honest truth. It means zero when you play the game. If it's even close to a 780, it's pretty friggen fast, so they do just wanna win a benchmark, and like I said, DICE wants to port Frostbite around and milk it dry. I doubt anyone with a 780 is gonna say "damn, I have a crap GPU now." Seriously... try thinking past the marketing BS.
 