Why did AMD clock the 290X memory so low?


rtsurfer

Senior member
Oct 14, 2013
733
15
76
With a 512-bit memory bus vs. a 256-bit bus on the GTX 980 and a 384-bit bus on the R9 280X, the R9 290X has much more memory bandwidth. If the R9 290X had double the shaders and double the ROPs of those other chips, then having double the memory bandwidth would make sense. However, the R9 290X only has 37.5% more shaders than the GTX 980 and R9 280X, so extra memory bandwidth doesn't contribute much past a certain point. I bet AMD tested the chip with 6.5 GHz memory and found the gains weren't worth it.

Here is a video of a guy who tests an R9 290 with 1350 MHz RAM vs. 1700 MHz RAM (i.e. 5.4 GHz vs. 6.8 GHz effective). As you can see, the results differ by less than 10% from a roughly 26% memory overclock.

Thanks for the link.
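
For reference, here's a quick back-of-the-envelope check (in Python) of the bandwidth figures quoted above, assuming the reference effective memory clocks (5.0 GHz on the 290X, 7.0 GHz on the GTX 980, 6.0 GHz on the 280X):

# Peak bandwidth = bus width in bytes * effective memory clock (GT/s).
cards = {
    "R9 290X": (512, 5.0),  # (bus width in bits, effective memory clock in GHz)
    "GTX 980": (256, 7.0),
    "R9 280X": (384, 6.0),
}
for name, (bus_bits, ghz) in cards.items():
    print(f"{name}: {bus_bits / 8 * ghz:.0f} GB/s")
# Roughly 320, 224 and 288 GB/s respectively.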
 

III-V

Senior member
Oct 12, 2014
678
1
41
So I guess I recalled my memory clock speed findings incorrectly... the difference in maximum memory clock between the 680 and 7970, with the data pulled from TechPowerUp's 8 GTX 680 reviews and 7 HD 7970 reviews, ends up being statistically insignificant. The 680 averaged 1817 MHz, while the 7970 averaged 1788 MHz. The difference is less than 2%.
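
Just to sketch the math on that, using the two averages above:

# Average maximum memory clocks from the TPU reviews cited above.
gtx_680_mhz = 1817
hd_7970_mhz = 1788
print(f"{(gtx_680_mhz - hd_7970_mhz) / hd_7970_mhz * 100:.1f}%")  # ~1.6%, well under 2%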

Of course, with Nvidia's newer cards, they do hold a substantial lead, but that's to be expected with them using memory rated for a higher speed.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
And you're getting needlessly defensive over a fact that was presented to be informative, not pejorative. Relax, please. It is okay for companies to shine in some areas and not in others.

I'm defensive? How about you try replying to what I said instead of trying to psychoanalyze me?

So you are claiming you are posting facts and being informative. Where are the facts? All I saw was speculation with one possible cause. I simply pointed out there could be other reasons.
 

III-V

Senior member
Oct 12, 2014
678
1
41
The only speculation I made was in a separate conversation, and was not applicable.

I had stated that Nvidia's memory controllers clocked higher. The only speculation behind this "fact" I presented was provided by you. I'd appreciate it if you'd not misrepresent my statements.

Anyway, I went and looked at TPU's reviews, and I seem to have totally goofed up on my claim that Nvidia bested AMD at maximum capable memory clocks, comparing (as best as possible) apples-to-apples (GTX 680 to HD 7970). They both clock similarly, as stated in my post above.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
The only speculation I made was in a separate conversation, and was not applicable.

I had stated that Nvidia's memory controllers clocked higher. The only speculation behind this "fact" I presented was provided by you. I'd appreciate it if you'd not misrepresent my statements.

Anyway, I went and looked at TPU's reviews, and I seem to have totally goofed up on my claim that Nvidia bested AMD at maximum capable memory clocks, comparing (as best as possible) apples-to-apples (GTX 680 to HD 7970). They both clock similarly, as stated in my post above.

You said this: "Nvidia's memory controllers are capable of clocking significantly higher than AMD's."

You would have to use the same spec RAM, voltage, and timings to conclude that.
 

III-V

Senior member
Oct 12, 2014
678
1
41
You said this: "Nvidia's memory controllers are capable of clocking significantly higher than AMD's."
Yes, and note how there is not a lick of speculation in that sentence you quoted, despite your statement that "All [you] saw was speculation with one possible cause."

Please go look up the definition of the word "speculation."
You would have to use the same spec RAM, voltage, and timings to conclude that.
The relevance of this has expired.

Anyway, seeing as I've already cleaned up my Nvidia memory clock mess, I'll be leaving now.
 

Actaeon

Diamond Member
Dec 28, 2000
8,657
20
76
Here is a video of a guy who tests an R9 290 with 1350 MHz RAM vs. 1700 MHz RAM (i.e. 5.4 GHz vs. 6.8 GHz effective). As you can see, the results differ by less than 10% from a roughly 26% memory overclock.

HardOCP got 10% performance gains from a much milder overclock on 290X cards.

1130 MHz core & 5.8 GHz memory. I would expect 1350 MHz core and 6.8 GHz memory to give an even bigger boost. Very interesting that it didn't. Different cards, though: 290 vs. 290X.

http://www.hardocp.com/article/2014...issipation_overclocking_review/8#.VGpDSPldXy4
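
For perspective, a rough look at how mild HardOCP's overclock is, assuming the 290X reference clocks of 1000 MHz core and 5.0 GHz effective memory:

# HardOCP's settings vs. assumed reference clocks.
core_oc, mem_oc = 1130, 5.8      # MHz core, GHz effective memory
core_ref, mem_ref = 1000, 5.0
print(f"core:   +{(core_oc / core_ref - 1) * 100:.0f}%")  # ~+13%
print(f"memory: +{(mem_oc / mem_ref - 1) * 100:.0f}%")    # ~+16%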
 

futurefields

Diamond Member
Jun 2, 2012
6,470
32
91
It just seemed like they could have had even more bandwidth quite easily. Without trying too hard, I just set mine at 6.5 GHz. That's an extra 1.5 GHz and an extra ~100 GB/s of bandwidth over stock. Even if they'd only gotten half that, it seems like a lot of extra performance to leave on the table.
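
That extra ~100 GB/s figure checks out, assuming the stock 5.0 GHz effective memory clock on the 512-bit bus:

# Bandwidth on a 512-bit bus = 64 bytes per transfer * GT/s.
bus_bytes = 512 // 8
stock = bus_bytes * 5.0  # 320 GB/s at stock
oc = bus_bytes * 6.5     # 416 GB/s at 6.5 GHz
print(f"+{oc - stock:.0f} GB/s")  # ~96 GB/s extra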

Any performance difference in games? Let us know if it makes any.
 

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,600
6,084
136
You'll get a lot better scaling from overclocking the core than the memory. Sure, higher memory speed helps, but the gain is not 1:1 with the frequency increase, since the 290/290X isn't exactly starved for memory bandwidth...
 

Actaeon

Diamond Member
Dec 28, 2000
8,657
20
76
Any performance difference in games? Let us know if it makes any.

I'm not really much of a benchmarker; I like playing games more than running benchmarks. But I did use 3DMark to test stability, and as a result I do have some scores. These were from the Fire Strike test.

My cards are the Tri-X OC, which come clocked a little higher than AMD's reference cards: 1040/5200 vs. 1000/5000 (core MHz / effective memory MHz).

Stock (1040/5200) was around 15.6k
OC'd (1100/6500) was around 16.4k
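
For what it's worth, here's how those gains scale against the clock bumps (using the approximate scores and clocks given above):

# Fire Strike scores and clocks as reported above (approximate).
stock_score, oc_score = 15600, 16400
stock_core, oc_core = 1040, 1100       # MHz
stock_mem, oc_mem = 5200, 6500         # effective MHz
print(f"score:  +{(oc_score / stock_score - 1) * 100:.1f}%")  # ~+5%
print(f"core:   +{(oc_core / stock_core - 1) * 100:.1f}%")    # ~+6%
print(f"memory: +{(oc_mem / stock_mem - 1) * 100:.1f}%")      # ~+25%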
 

Actaeon

Diamond Member
Dec 28, 2000
8,657
20
76
The part that throws me off a little is the "memory bandwidth isn't needed" claim.

Isn't the 290X's memory bandwidth one of the reasons it's so fast at high-res/4K gaming? Wouldn't more memory bandwidth help even more there?

Also remember, the question I'm asking is partly due to how easy it is for these cards to reach those higher speeds. I'm not asking why they didn't re-engineer the whole card for higher clock speeds. I'm wondering why they left so much potential untapped.
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
I believe what the tests show is that the bandwidth is already high enough that memory isn't the bottleneck even for 4K gaming, so the card would see proportionally less improvement from further memory clock increases, because proportionally less of what's holding it back is the memory.
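
One hypothetical way to picture that: if only some fraction of frame time is actually waiting on memory, an Amdahl-style estimate caps the whole-frame gain from a memory overclock. The 20% figure below is made up purely for illustration:

# Toy model (illustrative only): assume a made-up 20% of frame time is bandwidth-limited.
mem_fraction = 0.20
mem_speedup = 6.5 / 5.0  # e.g. a 5.0 -> 6.5 GHz memory overclock (+30%)
overall = 1 / ((1 - mem_fraction) + mem_fraction / mem_speedup)
print(f"+{(overall - 1) * 100:.1f}% overall")  # ~+4.8% despite +30% more bandwidth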
 

Actaeon

Diamond Member
Dec 28, 2000
8,657
20
76
I believe what the tests show is that the bandwidth is already high enough that memory isn't the bottleneck even for 4K gaming, so the card would see proportionally less improvement from further memory clock increases, because proportionally less of what's holding it back is the memory.

Thanks. My takeaway from all of this, then, is that the ratio of performance gained to energy consumed was unfavorable. While there may be a benefit, and while the card may be capable of getting there, the power trade-off wasn't worth it.

Thanks again to all.