AMD GPU14 Tech Event Sept 25 - AMD Hawaiian Islands


monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
Welcome to the new AMD overlords. :) I see a much brighter future for PC gaming with AMD than with any other company out there. And they have the coolest tech by far. Nobody else is doing 'ANYTHING' for gamers except extorting gamers for more money. Gaming Evolved wins.

My sarcasm detector is peaking, but it could just be a false reading.

My Kepler card is living in terror of being replaced by a GCN card soon for some Mantle goodness.
 

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
R9 290X
300 GB/sec. bandwidth (vs. 288 GB/sec. in 7970 GE)

For the new R9 290X (512-bit bus) to get only 12 GB/sec more memory bandwidth than the HD 7970 GHz (384-bit bus), all while using a massive 512-bit memory bus, it must be using really slow GDDR5 - maybe 3800 MHz or something. Wow... what a step back from the 6000 MHz GDDR5 on the 7970 GHz.

http://www.dailytech.com/AMD+Soft+L...Programmable+Audio+in+Hawaii/article33449.htm

A step back?? Are you serious? You're looking at it while standing on your head. The memory controller is much more efficient in Hawaii, according to rumors. Better yet, low-clocked memory is great for enthusiasts, as it gives way more headroom for overclocking.
 

Bubbleawsome

Diamond Member
Apr 14, 2013
4,833
1,204
146
R9 290X
300 GB/sec. bandwidth (vs. 288 GB/sec. in 7970 GE)

For the new R9 290X (512-bit bus) to get only 12 GB/sec more memory bandwidth than the HD 7970 GHz (384-bit bus), all while using a massive 512-bit memory bus, it must be using really slow GDDR5 - maybe 3800 MHz or something. Wow... what a step back from the 6000 MHz GDDR5 on the 7970 GHz.

http://www.dailytech.com/AMD+Soft+L...Programmable+Audio+in+Hawaii/article33449.htm
I bet aftermarket cards will come with 6000-7000 MHz GDDR5. Help me, what bandwidth will this turn into? I don't know anything about these data rates.
 

lopri

Elite Member
Jul 27, 2002
13,211
596
126
HD 7970 GE is 4.32 TFLOPs.

Thank you. And the difference between the 7970 GE and the regular 7970 is a 100 MHz clock difference?

Edit: Oh, I see more folks chimed in. Thank you all. So combined with the brand-new memory controller (vastly improved, hopefully), we should see a decent gain. Tahiti's memory controller sucked despite its humongous raw memory bandwidth.
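For anyone curious where that headline FP32 figure comes from, here's a quick sketch of the arithmetic, assuming the usual 2 FLOPs per shader per clock (one fused multiply-add) and the published 7970 GHz Edition shader count and boost clock:

```python
def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    """Theoretical single-precision throughput: shaders x 2 FLOPs (one FMA) per clock."""
    return shaders * 2 * clock_ghz / 1000  # GFLOPs -> TFLOPs

# HD 7970 GHz Edition: 2048 shaders, ~1.05 GHz boost clock
print(fp32_tflops(2048, 1.05))  # ~4.3 TFLOPs
```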
 
Last edited:

Saylick

Diamond Member
Sep 10, 2012
3,217
6,582
136
I bet aftermarket cards will come with 6000-7000 MHz GDDR5. Help me, what bandwidth will this turn into? I don't know anything about these data rates.

6000 MHz GDDR5 Effective => 384 GB/sec
7000 MHz GDDR5 Effective => 448 GB/sec

Don't expect the memory to clock that high though, since the memory controller in Hawaii will most likely be based on Pitcairn's memory controller. Stock should be at least ~4700 MHz (which is what the quoted 300 GB/sec works out to), but I'm thinking you can get above 5000 MHz easily, with it topping out at ~5.5 GHz.
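If you want to check those numbers yourself, the arithmetic is just effective data rate times bus width in bytes; a minimal sketch (the 512-bit bus is from AMD's slide, the clocks are only the guesses above):

```python
def bandwidth_gb_s(effective_mt_s: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth: effective data rate (MT/s) x bus width in bytes."""
    return effective_mt_s * (bus_width_bits // 8) / 1000  # MB/s -> GB/s

for clock in (4700, 5000, 6000, 7000):
    print(f"{clock} MT/s on a 512-bit bus -> {bandwidth_gb_s(clock, 512):.1f} GB/s")
# 4700 -> 300.8, 5000 -> 320.0, 6000 -> 384.0, 7000 -> 448.0
```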
 
Last edited:

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
6000 MHz GDDR5 Effective => 384 GB/sec
7000 MHz GDDR5 Effective => 448 GB/sec

Don't expect the memory to clock that high though, since the memory controller in Hawaii will most likely be based on Pitcairn's memory controller. Stock should be at least ~4700 MHz (which is what the quoted 300 GB/sec works out to), but I'm thinking you can get above 5000 MHz easily, with it topping out at ~5.5 GHz.

Yeah, I am thinking the memory is clocked at 1175 MHz (300.8 GB/s) or 1250 MHz (320 GB/s) at stock speeds. Since it uses Pitcairn's memory controller, a 5.5 - 5.8 GHz memory overclock is possible. At those speeds the bandwidth is a massive 352 - 371 GB/s. To match that bandwidth on a 384-bit memory bus you would need 7.4 - 7.6 GHz speeds, for a bandwidth of 355.2 - 364.8 GB/s.
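For what it's worth, a small sanity check of that bus-width comparison, using the same bandwidth formula as above (the clocks are only the overclocking guesses from this post, not confirmed specs):

```python
def bandwidth_gb_s(effective_gt_s: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: effective data rate (GT/s) x bus width in bytes."""
    return effective_gt_s * bus_width_bits / 8

# 512-bit bus at the guessed 5.5 - 5.8 GT/s overclock
for gt_s in (5.5, 5.8):
    print(f"512-bit @ {gt_s} GT/s -> {bandwidth_gb_s(gt_s, 512):.1f} GB/s")  # 352.0, 371.2

# what a 384-bit bus would need to land in the same neighbourhood
for gt_s in (7.4, 7.6):
    print(f"384-bit @ {gt_s} GT/s -> {bandwidth_gb_s(gt_s, 384):.1f} GB/s")  # 355.2, 364.8
```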
 

Bubbleawsome

Diamond Member
Apr 14, 2013
4,833
1,204
146
If it uses Pitcairn's memory controller, I bet it isn't a top-of-the-line Hawaii. I think they have something planned in case 20 nm doesn't work out in time.
/tinfoilhat
 

Saylick

Diamond Member
Sep 10, 2012
3,217
6,582
136
If it uses Pitcairn's memory controller, I bet it isn't a top-of-the-line Hawaii. I think they have something planned in case 20 nm doesn't work out in time.
/tinfoilhat

Even if 20 nm doesn't arrive early enough, I doubt they will release another new chip. A 512-bit memory bus is plenty wide, even if it cannot be clocked as high, and should provide plenty of bandwidth for future chips. Secondly, AMD looks to be pushing hard for Mantle adoption, which, if I'm not mistaken, should give all GCN-based GPUs a nice boost in performance. Some posters on these forums believe that the difficulty of hitting smaller nodes at reasonable prices is what's driving this push for performance gains through software rather than hardware.
 

Z15CAM

Platinum Member
Nov 20, 2010
2,184
64
91
www.flickr.com
Gotta say a 512-bit card is no slouch. I have an eVGA 512-bit/1 GB GDDR3 GTX 280 running at stock 600 MHz that handles a 2560 x 1440 Samsung 120 Hz PLS display without issues for web browsing and watching movies - haven't tried games with it, as what would 1 GB of GDDR3 have to offer ;o)

Man, a 512-bit/4 GB Hynix GDDR5 card at 1000 MHz for $600 - I say YES to the AMD R9 290X over the $720 eVGA GeForce GTX 780 Classified (384-bit/3 GB Samsung GDDR5) in regards to horsepower per price.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
The GTX 680 is right on the heels of the 7970 GHz, so the GTX 770 would be right about even.

The context of our discussion was about how efficient GCN is for games that utilize compute. You have a stock 780 with a 561 mm² die that is barely 11% faster despite a die size increase of 54%. Watch, in this title the R9 290X will beat the 780 without much trouble.

The difference between the GTX 770 and 7970 GHz is only ~2 fps. :D Anyway, I think you are missing his point. In the past, the 7970 GHz was ~20% faster than the GTX 770 in Tomb Raider with TressFX enabled. With newer drivers, that deficit has been cut down to ~6%, which is a huge difference.

No, no point is being missed. Out of the gate NV was struggling in this game and needed driver fixes to try and catch up to the 7970GE and it still failed. Also, the 770 boosts way higher than the 7970GE, which is only 1050 MHz. Therefore, NV has to overcome its compute deficit with higher GPU clocks because its architecture is less efficient for compute shaders than GCN. Of course none of this really matters since the 770 is $400-450 and the 1 GHz 7970 is $280. That makes the comparison a non-starter! In fairness, at current price levels you'd have to compare 760 after-market cards vs. the 1 GHz 7970, and then it's not even close in TR.

Also, look at the performance of the 780. Despite a nearly 50% increase in units across the board and memory bandwidth, it can't beat the 7970GE in this title by more than 11-12%, which means it's likely compute-shader bottlenecked, since the additional ROPs, TMUs, CUDA cores and memory bandwidth are not being utilized effectively.
 
Last edited:

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
7,450
2,488
146
Thing is, we don't know a solid price yet I thought?
 

Saylick

Diamond Member
Sep 10, 2012
3,217
6,582
136
Thing is, we don't know a solid price yet I thought?
We know nothing of the card besides its name, that it will use 8+6 pin power, will have >300 GB/s of memory bandwidth, >5 TFLOPs of computing power, and will achieve at least 8000 in Fire Strike (AMD's numbers were a bit conservative). Hell, I don't even think we can confirm the card's color at this time.
 

Z15CAM

Platinum Member
Nov 20, 2010
2,184
64
91
www.flickr.com
Hell, I don't even think we can confirm the card's color at this time.
Agreed, but the R9's 512-bit bus and 4 GB of Hynix GDDR5 at whatever clock cannot be ignored in regards to the $600 vs. $720 horsepower/price ratio when compared to the eVGA GeForce GTX 780 Dual Classified or the Galaxy GTX 780 HOF.
 
Last edited:

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
The context of our discussion was about how efficient GCN is for games that utilize compute. You have a stock 780 with a 561 mm² die that is barely 11% faster despite a die size increase of 54%. Watch, in this title the R9 290X will beat the 780 without much trouble.

That stock 780 also consumes slightly less power than your 7970GE, while being much faster in most games.

Also, Tomb Raider is one game out of many. Bioshock Infinite and Civilization V also use compute, and the last time I checked, NVidia was ahead of AMD in both of those games.

I don't think AMD has a lead in DirectCompute as you claim. It's just that some games favor GCN.

No, no point is being missed. Out of the gate NV was struggling in this game and needed driver fixes to try and catch up to the 7970GE and it still failed.

Yeah, 2 FPS slower than AMD and they failed :rolleyes:

Also, the 770 boosts way higher than the 7970GE, which is only 1050 MHz. Therefore, NV has to overcome its compute deficit with higher GPU clocks because its architecture is less efficient for compute shaders than GCN. Of course none of this really matters since the 770 is $400-450 and the 1 GHz 7970 is $280. That makes the comparison a non-starter! In fairness, at current price levels you'd have to compare 760 after-market cards vs. the 1 GHz 7970, and then it's not even close in TR.

It's pointless to compare clock speed on different architectures. Even with the higher clock speed of the GTX 770, it still consumes less power than the 7970GE.

Also, look at the performance of the 780. Despite a nearly 50% increase in units across the board and memory bandwidth, it can't beat the 7970GE in this title by more than 11-12%, which means it's likely compute-shader bottlenecked, since the additional ROPs, TMUs, CUDA cores and memory bandwidth are not being utilized effectively.

Some games favor one architecture more than another. There's nothing new about that.
 

Z15CAM

Platinum Member
Nov 20, 2010
2,184
64
91
www.flickr.com
I'm thinking it'll be red, I dunno why... :p
ROFL :eek:
Some games favor one architecture more than another. There's nothing new about that.
GCN/API/Mantle - I presume - Sheesh!

I'm freaked out - I want to start 1440p 120 Hz gaming on my rig and am at a loss as to what GPU card to buy!

It seems AMD promises this with their 512-bit/4 GB Hynix GDDR5 R9 290X card for $600.

Yah! Yah! - Wait for reviews. I've got everything except the card - or wait for nVidia Maxwell in Q1 2014 - LOL
 
Last edited:

lopri

Elite Member
Jul 27, 2002
13,211
596
126
Memory width and bandwidth mean little without context. Some folks may remember back in the day when the Pentium 3/4 would always adopt next-gen memory (DDR2) and low-level benchmarks would show twice the bandwidth of Hammer/A64 (DDR1). We know how they performed back then.

Generally speaking, you can compare memory bandwidth within the same hardware family, including its offspring. But in order to do a cross-platform comparison, a lot more digging is required.
 

Z15CAM

Platinum Member
Nov 20, 2010
2,184
64
91
www.flickr.com
Memory width and bandwidth mean little without context.
Look what the ATi Radeon Rage 128 Fury did to the nVidia Riva TNT, followed by the infamous 9700 paired with the TV Wonder 128 PCI stereo capture card.

Granted, the nVidia nForce2 Ultra 400 was the fastest motherboard chipset at the time running an AMD Barton CPU.

Intel now dominates 64-bit CPU architecture.

What happened to ATi - I'm getting OLD ;o(
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Sorry, I just can't buy that excuse for the reasons I stated above. Games are still going to be programmed on DirectX to the fullest, and if devs want to dig deeper they can program to the hardware. This doesn't affect Nvidia in any way; they are still going to have great DX11 performance.

Developing a low-level API is the only way to squeeze out this much extra performance; if AMD made this API card-neutral it would no longer be "close to metal" and the performance gained would be 0.

We all know PC graphics blow away consoles, but I think it's impossible to deny that what devs have done with console graphics on the anemic hardware they've been working with is straight-up impressive, and that's in big part due to having that low-level API they can tap into.

Could it be a game-changing advantage for AMD? Yes, it could be, but Nvidia decided to walk away from the console market, basically handing AMD the reins to do this. What people seem to not understand is that this isn't a devious move by AMD; they are just maximizing their hardware across 3 very similar platforms, which should be expected of them. With JHH throwing in the towel on consoles it became a much bigger advantage though, since everyone who makes games will be familiar with GCN. If you want to be mad, point fingers at Nvidia for deciding that consoles were a waste of time.

nVidia "said" the reason they didn't compete for consoles is they are too low margin. Reality is, nVidia doesn't make the hardware Sony and M$ wanted for consoles. Anybody who wants to believe nVidia did this to themselves, feel free. Truth is, AMD actually is responsible. They had the vision and the IP to develop the hardware that Intel and nVidia couldn't.

Mantle is there. It's needed to get the most out of the consoles. AMD would be really stupid not to leverage the extra performance for the PC market as well. I can see the meeting room now, "We aren't going to use Mantle in the PC market because it's not fair to nVidia." LMAO :D
 
Feb 19, 2009
10,457
10
76
It's still sad some people believe "NV walked away from the console market"... as if they had a serious solution... Tegra 4 Octa-SLI??
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
It increases development time, because you then have to rewrite stuff for DX11, since 90% of the PC market is not on GCN cards. I also very much doubt this API is the exact same one being used in the XB1 and PS4 dev kits now. The XB1 demo boxes at E3 were Nvidia-powered PCs to begin with. Plus, Sony has already let some info out about their API for the PS4, and a lot of developers like it, and nowhere did they say it was using an API developed by AMD. This is AMD's way of trying to play catch-up on performance with GPUs used in PCs that just happens to work on consoles because they use a GCN-based GPU.

They have to do all of the DX work either way. Mantle doesn't increase that. It just allows for Mantle's optimizations to carry over to the PC. About the only way I can see a dev not using Mantle is if nVidia pays them. It's already there. All of the work has been done.