OCUK: 290X "Slightly faster than GTX 780"


PrincessFrosty

Platinum Member
Feb 13, 2008
2,301
68
91
www.frostyhacks.blogspot.com
So basically this whole release boils down to 1 new part at GTX 780 levels/prices, the rest just being the same old at the same prices :(

Yeah, but let's be fair, Nvidia's new range is basically the same thing: they added the 780 based on the GK110 GPU, probably ones that were not binned high enough for Titans, and the rest is rebranded old tech.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
But advertising this number almost two years after the 79xx release means that the actual memory bandwidth has practically not improved at all.

And? The GTX 580 and HD 4890 are two perfect examples of GPUs having too much memory bandwidth for their GPU power. It's not only about how much memory bandwidth you have, but how efficiently you can use it. Despite the 680 having the same bandwidth as the 580, it outperforms it by 35-40%. The counter-point: let's say the R9 290X shipped with 400GB/sec of memory bandwidth; how do you know the performance would increase by even 10% overall? We don't know how the R9 290X scales with memory bandwidth beyond 300GB/sec, so we can't assume that 300GB/sec is too small an improvement over Tahiti.

Anyone else notice the contradiction?

780 OC is about 35-40% faster than 7970 OC. 7970 OC is 40-50% faster than 580 OC. 7970 OC is 70-75% faster than 6970 OC. Thus, the 780, and by extension the R9 290X, provides a far worse increase in performance/$ than the 7970 OC did over the 6970/580 OC.

If you don't want to compare AMD vs. NV, just look at NV. The 580 ($499) --> 680 ($499) transition brought a 35-40% increase, but you are now paying $650 to get a smaller increase over the 680 than what the 680 brought over the 580.
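
As a quick sanity check on that perf/$ argument, here is a rough sketch; the prices are approximate launch MSRPs and the ~28% figure for the 780 over the 680 is just an assumption drawn from the posts above, not a benchmark result.

```python
# Rough perf/$ math behind the argument above. The performance deltas are the
# ballpark figures quoted in this thread and the prices are approximate launch
# MSRPs, so treat the output as illustrative, not as benchmark data.

transitions = [
    # (label, old price $, new price $, assumed relative performance of new card)
    ("580 -> 680", 499, 499, 1.375),   # ~35-40% faster at the same price
    ("680 -> 780", 499, 649, 1.28),    # assumed ~25-30% faster, at $650
]

for label, old_price, new_price, perf in transitions:
    price_ratio = new_price / old_price
    perf_per_dollar_gain = perf / price_ratio - 1
    print(f"{label}: +{perf - 1:.0%} perf, {price_ratio:.2f}x price, "
          f"perf/$ change {perf_per_dollar_gain:+.0%}")
```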

----

Linus shows off the R9 290X reference card at about the 11:38 mark:
http://www.youtube.com/watch?v=3MU-DIKvY3U
 
Last edited:

Leadbox

Senior member
Oct 25, 2010
744
63
91
No, because by the time this is relevant there will be another card that brute-forces past it. This is always the case.



That's not all over the place... that's the honest truth. It means zero when you play the game. If it's even close to a 780 it's pretty friggen fast, so they do just wanna win a benchmark, and like I said, DICE wants to port Frostbite around and milk it dry. I doubt anyone with a 780 is gonna say "damn, I have a crap GPU now". Seriously... try thinking past the marketing BS.

You don't think the "longer graphs" help to inform purchasing decisions?
You're talking about after you've bought the GPU.
Again, it's about selling GPUs here; winning benchmarks helps, a lot.
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
You don't think the "longer graphs" help to inform purchasing decisions?
You're talking about after you've bought the GPU.
Again, it's about selling GPUs here; winning benchmarks helps, a lot.

I don't see how, because BF4 comes out next month and it will take two months before they get their benchmark software out for it. People will likely have already made a decision for that game.
 

Makaveli

Diamond Member
Feb 8, 2002
4,720
1,055
136
And? The GTX 580 and HD 4890 are two perfect examples of GPUs having too much memory bandwidth for their GPU power. It's not only about how much memory bandwidth you have, but how efficiently you can use it. Despite the 680 having the same bandwidth as the 580, it outperforms it by 35-40%. The counter-point: let's say the R9 290X shipped with 400GB/sec of memory bandwidth; how do you know the performance would increase by even 10% overall? We don't know how the R9 290X scales with memory bandwidth beyond 300GB/sec, so we can't assume that 300GB/sec is too small an improvement over Tahiti.



780 OC is about 35-40% faster than 7970 OC. 7970 OC is 50% faster than 580 OC. 7970 OC is 70-75% faster than 6970 OC. In all cases, the 780 (and I presume the R9 290X) provides a far worse increase in performance/$ than the 7970 OC did over the 6970/580 OC.

Thanks for posting this, Russian.

I was just thinking it to myself.

How does he know the new card, which isn't even out yet, is bandwidth-starved at 300GB/sec? Why would it need more if that is enough to feed it?

Was the 7970 lacking bandwidth before this?

I would think AMD would know better than a random guy on a forum how much bandwidth the card requires.
 

LegSWAT

Member
Jul 8, 2013
75
0
0
No. When they advertise the bandwidth as pure throughput (GB/s), that's the actual bandwidth you get. I don't doubt that the end result will be that Hawaii will OC too and, because of the 512-bit bus, will then have slightly higher actual bandwidth.

But AMD put the 300GB/s number out there and that is simply unimpressive; possibly it will allow them to use cheap memory, since the 512-bit bus still gets them to where a 7970 will overclock with ease. But advertising this number almost two years after the 79xx release means that the actual memory bandwidth has practically not improved at all.

Tahiti, as the Radeon HD 7970, has a memory bandwidth of 264 GB/s.

As for Hawaii, you're still not getting the point that a quad-channel interface is a design decision taken to be significantly more power-efficient, are you?

At the same time, you're badmouthing a GPU that hasn't even been presented in full detail over technical details which may not even affect performance the way you describe:
* Not even Tahiti showed any signs of being bandwidth-starved.
* You imply that the design decision of more channels with less power-hungry, lower-clocked GDDR already means "cheap modules", when you have no details whatsoever about the module specifications themselves.
* You imply that a small increase in bandwidth is a major issue, when in fact none of the current-gen or last-gen high-end cards were ever even remotely bandwidth-starved.

What's really impressive, if correct, is that AMD's engineers managed to fit a four-channel memory interface into a smaller die area than their previous three-channel interface, while working on the same fab process.
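
To put rough numbers on that bus-width tradeoff: Tahiti's figures are the public HD 7970 spec, while Hawaii's memory data rate has not been confirmed, so the 5.0 Gbps used below is purely a guess.

```python
# Peak GDDR5 bandwidth: (bus width in bits / 8) * effective data rate in Gbps.
# Tahiti's numbers are the reference HD 7970 spec; the Hawaii data rate is an
# assumption for illustration only.

def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

tahiti = peak_bandwidth_gbs(384, 5.5)        # 264 GB/s (HD 7970)
hawaii_guess = peak_bandwidth_gbs(512, 5.0)  # 320 GB/s if ~5 Gbps modules are used

print(f"Tahiti (384-bit @ 5.5 Gbps): {tahiti:.0f} GB/s")
print(f"Hawaii (512-bit @ 5.0 Gbps, assumed): {hawaii_guess:.0f} GB/s")
# The wider bus reaches similar-or-higher bandwidth with slower, lower-voltage
# GDDR5, which is where the claimed power saving comes from.
```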

So stop implying technical issues you have no idea about and very little information on, and stop badmouthing a product that many talented engineers spent thousands of hours of highly creative, productive hard work on! Show some more respect for that sort of engineering work, which even badmouthers like yourself will be allowed to buy!
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,001
126
Let's do some thinking here.

R9 290X: 2816 cores, TDP = ?
7970: 2048 cores, TDP = 250W

2816/2048 = 1.375 = 37.5%

--------------------

Let's say the TDP of the new 290X is 250W. That would mean AMD has improved GCN by 37.5%. Same architecture as the 7970, improved, but still on 28nm.
Does this sound plausible?

Let's say the R9 290X is 300W.
The 7970 is still 250W.

300W/250W = 1.20 = 20%

Looking purely at TDP, it's still missing 17.5% to reach the core count of the 290X. Now what if AMD got that from the improved GCN?
Does that make any sense? That they improved GCN by roughly 17% on the same 28nm?

Looking at this, what would be the most plausible option here? I'd say the last one.


GTX 580 and GTX 480: the GTX 580 had an 11% core and shader clock speed boost while having 6.7% more SPs enabled. Both on 40nm.

I'll give you that the GTX 480 appeared more rushed and incomplete to begin with, but with the 16 months since the 7970 launched for 28nm to mature and for AMD's engineers to learn and revise, I don't see why such an all-around increase in efficiency would not be possible. Nvidia launched the GTX 580 about six or seven months after the GTX 480, for comparison.
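
Here is the quoted ratio math written out as a sketch; both 290X wattages are just the hypothetical scenarios being discussed, not announced specs.

```python
# Core-count vs. TDP ratios from the quoted post. Both 290X TDP figures are
# hypothetical scenarios, not announced specs.

cores_290x, cores_7970 = 2816, 2048
tdp_7970 = 250.0  # W

core_ratio = cores_290x / cores_7970  # 1.375 -> 37.5% more shaders

for assumed_tdp in (250.0, 300.0):
    tdp_ratio = assumed_tdp / tdp_7970
    # Naively assuming power scales with shader count at fixed clocks, this is
    # roughly the perf/W improvement GCN would need on the same 28 nm process:
    needed_gain = core_ratio / tdp_ratio - 1
    print(f"{assumed_tdp:.0f} W: ~{needed_gain:.0%} better perf/W needed")
```

Taking the ratio rather than subtracting percentages gives roughly 15% instead of 17.5% for the 300W case, but the conclusion is the same.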
 

DownTheSky

Senior member
Apr 7, 2013
787
156
106
They chose a 512-bit bus for 3 reasons:
1. 4GB of memory
2. smaller die size than Tahiti's 384-bit bus
3. lower power consumption due to lower-voltage, lower-clocked GDDR5 memory.

Also, expect the 290X OC'd to be faster than a Titan/780 OC'd.


I would have liked more ROPs, but hey, you can't have everything. Guess that went to TrueAudio.


Something that went overlooked: more than 6 billion transistors? Color me impressed.
 
Last edited:

el etro

Golden Member
Jul 21, 2013
1,581
14
81
Let's do some thinking here.

R9 290X: 2816 cores, TDP = ?
7970: 2048 cores, TDP = 250W

2816/2048 = 1.375 = 37.5%

--------------------

Let's say the TDP of the new 290X is 250W. That would mean AMD has improved GCN by 37.5%. Same architecture as the 7970, improved, but still on 28nm.
Does this sound plausible?

Let's say the R9 290X is 300W.
The 7970 is still 250W.

300W/250W = 1.20 = 20%

Looking purely at TDP, it's still missing 17.5% to reach the core count of the 290X. Now what if AMD got that from the improved GCN?
Does that make any sense? That they improved GCN by roughly 17% on the same 28nm?

Looking at this, what would be the most plausible option here? I'd say the last one.

Performance doesn't increase linearly with stream processor count, BTW (I know you didn't ask that).
And great engineering work has been done to fit all 2816 SPs on the 438mm² chip.
The power consumption of the 7970 is well below its 250W TDP. Remember that the 7970 GHz has a much higher power consumption within this same thermal envelope. The 290X can be launched at the same 250W TDP (consuming more than the 7970 GHz and less than Titan).
 
Feb 19, 2009
10,457
10
76
The problem is a lot of us already have enough performance; what we want are cool features and more immersion.

AMD's answer to slower performance is Mantle; their answer to PhysX is sound.

YOU OF ALL PPL.. think we have enough performance? hehe

Stop overclocking.

Once 4K monitors become cheap, they will trash most top-end setups; it would take a coding-to-the-metal API to really power them on single GPUs.

We've been with 1080p to 1600p for too long.
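
For scale, here is the raw pixel-count jump involved (plain arithmetic on the standard resolutions, nothing assumed beyond that):

```python
# Pixel counts of common desktop resolutions vs. 4K UHD.
resolutions = {
    "1080p":  (1920, 1080),
    "1440p":  (2560, 1440),
    "1600p":  (2560, 1600),
    "4K UHD": (3840, 2160),
}

base = resolutions["1080p"][0] * resolutions["1080p"][1]
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name:7s}: {pixels / 1e6:.1f} MP ({pixels / base:.1f}x 1080p)")
# 4K is roughly 4x the pixels of 1080p and 2x 1600p, which is why even top-end
# single GPUs struggle with it today.
```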
 

pcslookout

Lifer
Mar 18, 2007
11,936
147
106
YOU OF ALL PPL.. think we have enough performance? hehe

Stop overclocking.

Once 4K monitors become cheap, they will trash most top-end setups; it would take a coding-to-the-metal API to really power them on single GPUs.

We've been with 1080p to 1600p for too long.

I don't think we've been with 1080p to 1600p for too long.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
I don't think we've been with 1080p to 1600p for too long.

1600p remained a niche in the PC world primarily because it is a resolution used by professionals. The key difference with 4k Ultra HD is that it will also be used in living rooms on UHD TVs, whereas that was *never* the case with 1440p or 1600p - with this being the case, 4k absolutely will catch on. It won't be this year. It won't be next. But it will happen. I'd say 3-4 years from now when prices enter the realm of "reasonable" for HDTVs, and all entertainment formats transition to 4k (this is happening NOW, by the way, Blu Ray already supports 4k) then it isn't unreasonable to expect 4k to become the next PC standard.

So you can't really draw a parallel between the adoption of 1600p as compared to the adoption of 4k. 4k WILL catch on because it is a HDTV standard that will be used in the living room. As I said though, this will take 2-4 years. It will happen, guaranteed, though.

That being said, I do think prices aren't yet in reasonable territory, either for 4k screens or for GPUs to power PCs at 4k. Maybe next year or 2015. Who knows. But it will happen at some point.
 

pcslookout

Lifer
Mar 18, 2007
11,936
147
106
1600p remained a niche in the PC world primarily because it is a resolution used by professionals. The key difference with 4k Ultra HD is that it will also be used in living rooms on UHD TVs, whereas that was *never* the case with 1440p or 1600p - with this being the case, 4k absolutely will catch on. It won't be this year. It won't be next. But it will happen. I'd say 3-4 years from now when prices enter the realm of "reasonable" for HDTVs, and all entertainment formats transition to 4k (this is happening NOW, by the way, Blu Ray already supports 4k) then it isn't unreasonable to expect 4k to become the next PC standard.

So you can't really draw a parallel between the adoption of 1600p as compared to the adoption of 4k. 4k WILL catch on because it is a HDTV standard that will be used in the living room. As I said though, this will take 2-4 years. It will happen, guaranteed, though.

That being said, I do think prices aren't yet in reasonable territory, either for 4k screens or for GPUs to power PCs at 4k. Maybe next year or 2015. Who knows. But it will happen at some point.

I can easily accept it in 3 to 4 years! Everything gets better in time!
 

CakeMonster

Golden Member
Nov 22, 2012
1,392
500
136
I would think AMD would know better than a random guy on a forum how much bandwidth the card requires.

I never stated anything whatsoever about what bandwidth is required, and you know that. I was merely replying to and correcting a poster promoting the misunderstanding that bandwidth was increased by 33% over the last generation because of the 512-bit bus. The memory bandwidth in GB/s as quoted by AMD is the actual achieved bandwidth after bus width is taken into account. That needed to be emphasized and corrected. Pointing to the number AMD put out and what can be achieved on the 79xx just provides context. I made no prediction of the card's performance.
 

Abwx

Lifer
Apr 2, 2011
10,953
3,472
136
Oh so cute, checking my history :D

AMD smacked on more cores and, at the same time, the TDP increased to a whopping 300W.

They have a 300W GPU matching a 250W GTX 780.
It's the same strategy they use with their CPUs: make up for a really inefficient architecture by increasing power consumption way above the competition.

They get no applause from me :thumbsdown:

Let's look at the available numbers:

ch5_power.jpg


So it consumes less than a Titan while outperforming it; it consumes a little more than a 780 but it also has better performance. In fact, the perf delta is greater than the TDP delta...


The link, so anybody can check how hugely wrong you are:

http://videocardz.com/45753/amd-radeon-r9-290x-slightly-faster-gtx-titan
 

CakeMonster

Golden Member
Nov 22, 2012
1,392
500
136
So stop implying technical issues you have no idea about and very little information on, and stop badmouthing a product that many talented engineers spent thousands of hours of highly creative, productive hard work on! Show some more respect for that sort of engineering work, which even badmouthers like yourself will be allowed to buy!

I'm not sure why you are criticizing me. You even quote my post and in it I'm not badmouthing anything. Please take a look at it again before you accuse me of anything. You're making wild leaps in attributing my simple comparison of numbers to malicious intent. That's simply unfair and you need to reread it with less suspicion.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
You might be wrong

http://beyond3d.com/showpost.php?p=1789377&postcount=850

"None of these products announced are rebrands. "

So according to Dave Baumann of AMD, the existing chips seem to have undergone slight tweaks at the ASIC level, something akin to the GTX 680 to GTX 770. I am thinking it's a newer stepping to lower load power and improve perf/watt.

And it was shot down.
http://forums.overclockers.co.uk/showpost.php?p=25014060&postcount=2770

The R9 270X is a crippled HD 7950: 2GB and 256-bit, reduced from 3GB and 384-bit.

The R9 280X is an HD 7970 equivalent, a little slower than a GHz model. This is an HD 7970 re-boxed/re-branded.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Let's look at the available numbers:

ch5_power.jpg


So it consumes less than a Titan while outperforming it; it consumes a little more than a 780 but it also has better performance. In fact, the perf delta is greater than the TDP delta...


The link, so anybody can check how hugely wrong you are:

http://videocardz.com/45753/amd-radeon-r9-290x-slightly-faster-gtx-titan

I can see an error in that chart right away. The GTX 780/Titan consume less power than the 7970 GHz, not more. So yeah, let's wait for some real reviews and not made-up numbers.
http://www.techpowerup.com/reviews/Palit/GeForce_GTX_780_Super_JetStream/24.html
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
You mean less power like this?

54906.png

Let's see:

Should one look at AnandTech, which uses different hardware on different machines and posts system power consumption,

Or

Should one look at TechPowerUp, which uses an Integra multimeter to measure the GPU's exact power consumption?

:rolleyes:
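
A toy illustration of why the two methods can disagree; every figure here (PSU efficiency, platform draw, card-only power) is invented purely for the example.

```python
# Card-only vs. at-the-wall power measurements. All figures are invented for
# illustration; nothing here is a measured value.

PSU_EFFICIENCY = 0.85   # assumed PSU efficiency at this load
PLATFORM_WATTS = 180    # assumed CPU/board/drive draw under a gaming load

def wall_power(gpu_watts):
    """Approximate power drawn at the wall for a given card-only draw."""
    return (gpu_watts + PLATFORM_WATTS) / PSU_EFFICIENCY

card_a, card_b = 250, 230  # hypothetical card-only draw, 20 W apart
print(f"Card-only delta:   {card_a - card_b} W")
print(f"At-the-wall delta: {wall_power(card_a) - wall_power(card_b):.0f} W")
# A card-level measurement isolates the 20 W gap; system-level numbers fold in
# PSU losses plus whatever the rest of the platform happens to be doing, so the
# two kinds of charts can rank cards differently without either being "made up".
```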