AMD to introduce its Radeon R9 300-series lineup at Computex


blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
The 4GB VRAM issue is definitely a problem for CF users, because they have the GPU power to run at higher settings that shift the bottleneck to VRAM.

For a single card it should be fine for the next few years.

This puts a 4GB 390X at a disadvantage in my eyes, as I intend to go multi-card for 4K. So there are drawbacks to going early with HBM.

It remains to be seen if the DX12 rumors are true about how you can use both banks of memory in Crossfire under DX12 instead of mirroring them. If that's true, then that makes >4GB of VRAM even less necessary.
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
Consoles are the speed limit, and with 8GB of shared memory and 5GB available to game developers, how much of that is going to be graphics? 4GB at MOST. More likely 3GB or less. Game devs aren't going to program for cards that very few people own; that's just a fact, and it's been that way ever since Crysis, when Crytek threw a fit over how few sales they got versus how many they expected.

The only way you are going to need more than 4GB anytime soon is if you run tri-screen 4K with anti-aliasing turned up, but only a fool would do that since pixel density on 4K screens is so high that AA is largely or entirely superfluous.

I would go so far as to say that 4GB of VRAM will last from now all the way to the next console generation for single-screen 4K or lower. Furthermore, the people who can afford CrossFire halo cards and tri-screen 4K can simply upgrade their cards in a few years if 4GB is not enough. There is no point in trying to "futureproof" by buying 6GB+ VRAM cards for games. For pro graphics or Tesla, sure. Games? Pffft. Re-upgrade down the line if you have to.

P.S. It remains to be seen if the DX12 rumors are true about how you can use both banks of memory in Crossfire under DX12 instead of mirroring them. If that's true, then that makes >4GB of VRAM even less necessary.

You can't really say that.

Also, consoles run 1080p in the best-case scenario, and many games are less than that. There are many variables, such as DX12. It may get rid of much of the Windows overhead, but we don't know much about it at all. Just as an example, Mantle didn't do so well on the 2GB Tonga.

If consoles are struggling with 1080p and below, PC ports could come with higher-resolution textures. The bar could really be raised with DX12; there is no way to predict the future.

I am not saying that 4GB will be an issue any time soon. I absolutely hope not, because I have a 4GB card. But if it is, we just deal with it like we always have and move on.

Personally I think 4GB will be fine for a good while. But we can't predict anything with so many industry shifts coming; we truly are heading into the unknown. But I do find it strange that some people insist that 3.5GB is so bad and a huge problem, and that 12GB is just a total waste and worthless, yet 4GB is just perfect for years and years to come. I mean, if 4GB is great for the 290 and 3.5GB is condemning the 970, how can these people insist that a card with a suggested 50% increase in performance will be perfectly fine with 4GB as well? I am not saying it won't be, just that it doesn't seem like sound logic.

I think we can't make any such assumptions with so many unknowns. We can't say that 12GB is just a total, utter waste for the GM200, just pointless, but then turn around and say 4GB is perfectly fine for a card 50% faster than the 290X. How can we make any claims at this point at all? It may very well turn out to be just that case; the 12GB GM200 may be completely worthless, but we must wait and see.

These are areas we know little about. We don't know how the GM200 will fare with all that RAM, and we don't know the first thing about HBM on a graphics card.
HBM may offer more than imagined. 4GB of HBM may stomp all over 12GB of GDDR5.
It's just unknown. Really, all we have to do is keep the cores saturated at all times. We don't know which approach will end up being the drawback, or any of the pros and cons when it comes to performance. And that is today, the ramifications right now; there is no way we can have any clue as to what they will be in the future.
 

Ramses

Platinum Member
Apr 26, 2000
2,871
4
81
P.S. It remains to be seen if the DX12 rumors are true about how you can use both banks of memory in Crossfire under DX12 instead of mirroring them. If that's true, then that makes >4GB of VRAM even less necessary.

Just got a horn-tooting email from AMD saying "AMD's ground breaking Mantle API enables superior performance; you can even mix monitors of different resolutions. With up to 8GB of video memory, practically nothing is out of reach!"

DX12 may not do it, but Mantle apparently can.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Consoles are the speed limit, and with 8GB of shared memory and 5GB available to game developers, how much of that is going to be graphics? 4GB at MOST. More likely 3GB or less. Game devs aren't going to program for cards that very few people own; that's just a fact, and it's been that way ever since Crysis, when Crytek threw a fit over how few sales they got versus how many they expected.

I expect ~3 GB vram from high end console ports in the years to come.

-Now add PC ultra settings and texture packs.

-Now go to 1440p or 4K.

-Now add AA.

4 GB is going to be exceeded in a couple of years. I expect VRAM demands to slow down, but I also expect them to keep climbing. Let's not forget that the 512 MB consoles frequently demanded 1 GB of VRAM for games.

It remains to be seen if the DX12 rumors are true about how you can use both banks of memory in Crossfire under DX12 instead of mirroring them. If that's true, then that makes >4GB of VRAM even less necessary.

Though possible, accessing memory over the PCIe interface is incredibly slow, with a lot of latency, compared to HBM.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
We can't say that 12GB is just a total, utter waste for the GM200, just pointless, but then turn around and say 4GB is perfectly fine for a card 50% faster than the 290X. How can we make any claims at this point at all?

I feel like this statement, along with your going off on a 3.5GB vs 4GB tangent, is not even addressing the points in my post.

You are beating up a strawman. 3.5GB is probably fine for single-monitor 4K for a very long time as well, so long as you keep AA under control--and there's no reason to crank it up if you have a high-density 4K monitor. If it's a 4K TV your viewing distance makes AA superfluous as well.

You are also seemingly mistaking memory bandwidth for framebuffer size. I thought we were talking about the latter??? Because I don't care if a chip is 50 or even 500% faster, if it doesn't need more framebuffer, it doesn't need more framebuffer. If it's a 500% faster GPU then it will probably need more memory BANDWIDTH, not framebuffer.

I made the points I made in my post, if you want to address them.

I expect ~3 GB vram from high end console ports in the years to come.

-Now add PC ultra settings and texture packs.

-Now go to 1440p or 4K.

-Now add AA.

4 GB is going to be exceeded in a couple of years. I expect VRAM demands to slow down, but I also expect them to keep climbing. Let's not forget that the 512 MB consoles frequently demanded 1 GB of VRAM for games.

Though possible, accessing memory over the PCIe interface is incredibly slow, with a lot of latency, compared to HBM.

To repeat: why 4K *and* lots of AA?!?!!? Some people are stupid like that but that doesn't mean you have to be stupid, too. FYI I have been on Eyefinity since I had a 5850 1GB card on three 1080p screens. That's like 3K. And guess what, 1GB was fine for that era's DX9 games. Newer games do eat more VRAM, as do mods and AA but as long as you don't get too crazy, 4GB should be enough for a single 4K panel for the rest of this console generation's ports. I played BF4 on tri-1080p with absolutely no problems and that's 3/4 of a 4K panel. Other than increased resolution, the typical ways to increase VRAM usage are mods and lots of AA, but as long as you don't get too crazy you'll be fine. Some sites have done studies showing how much extra VRAM AA eats up, if you are interested in that kind of stuff. Mods will vary of course.
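Just to put some rough numbers on how much extra VRAM AA eats up, here's a quick back-of-the-envelope sketch of my own (not from any of those site studies). It assumes a plain RGBA8 color buffer plus a 32-bit depth buffer and uncompressed per-sample MSAA storage; real drivers and engines use compression and add their own overhead, so treat the output as ballpark only:

# Rough estimate of render-target memory vs. resolution and MSAA level.
# Assumes RGBA8 color (4 bytes) + 32-bit depth (4 bytes) per sample,
# with MSAA storing every sample uncompressed -- ballpark figures only.
def render_target_mib(width, height, msaa_samples=1, bytes_per_sample=4 + 4):
    """Memory for one color + one depth target, in MiB."""
    return width * height * msaa_samples * bytes_per_sample / (1024 ** 2)

for label, (w, h) in {"1080p": (1920, 1080),
                      "tri-1080p": (5760, 1080),
                      "4K": (3840, 2160)}.items():
    for samples in (1, 4, 8):
        print(f"{label:>9} @ {samples}x MSAA: ~{render_target_mib(w, h, samples):,.0f} MiB")

Even at 8x MSAA, a single color+depth pair at 4K is on the order of half a gigabyte; it's really engines with several fat render targets plus high-resolution textures and mods that push totals toward the 4GB mark.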

BOTH of you guys selectively quoted from my post, ignoring how I said the above, and how I ALSO said that even if you are one of those rich guys who wants to max everything out, you can simply REUPGRADE in a couple of years anyway!
 

Piroko

Senior member
Jan 10, 2013
905
79
91
There's no reason to hold back on the 390X just to wait for the rest of the new stack. Staggered releases are quite normal in fact.

Staggered releases are normal, yes, but a top-to-bottom release in one swoop could very likely lead to a greater lasting impact both with press and consumers. It's not the worst idea imho.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
To repeat: why 4K *and* lots of AA?!?!!? Some people are stupid like that but that doesn't mean you have to be stupid, too. FYI I have been on Eyefinity since I had a 5850 1GB card on three 1080p screens. That's like 3K. And guess what, 1GB was fine for that era's DX9 games. Newer games do eat more VRAM, as do mods and AA but as long as you don't get too crazy, 4GB should be enough for a single 4K panel for the rest of this console generation's ports. I played BF4 on tri-1080p with absolutely no problems and that's 3/4 of a 4K panel. Other than increased resolution, the typical ways to increase VRAM usage are mods and lots of AA, but as long as you don't get too crazy you'll be fine. Some sites have done studies showing how much extra VRAM AA eats up, if you are interested in that kind of stuff. Mods will vary of course.

Will 4K require AA? Probably not (and you can see I've said this multiple times in my post history). However, there will be some aliasing even at 4K.

I use a 1080p 15.6" laptop (pixel density the same as a 31" 4K monitor) and can definitely see aliasing. It's not there to any large extent, and 2x MSAA pretty much makes it go away, but it's not just jaggies; it's shimmer and the like. For better IQ, some light form of AA is going to be required (SMAA seems to work very well).

I definitely think that 4GB will run into problems in the future, especially at 4K. Obviously you will be able to turn down a few settings and be fine, but 4GB will limit a few things in the future.
 

Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
I've never owned a 290/290X, but having read dozens of GPU reviews since they launched, I completely agree that they are really solid chips (and good cards as long as aftermarket coolers are used) and on most metrics do not fall as far behind Nvidia's equivalents as the perception might suggest.

But that's on AMD. Perception is part of the competitive battle for sales, and they are losing (or have lost) the perception tug-of-war when it comes to the 200 series. They not only need to put out good hardware and package it in competitive AIB cards, but also do a better job of marketing their products and changing the perception that AMD technology is a "red-headed stepchild."

It's not only on AMD. It's also the individual consumer's responsibility to make educated purchasing decisions (you're probably laughing at this statement). The problem is that companies like Apple (regardless of how good or bad their products are) have generated sales through emotion rather than rational thought. Nvidia and many other companies have simply followed suit, and why not? It works!

Go watch a television commercial from the '50s or '60s and compare it to commercials today and you will see what I mean. Back then, products were generally compared on their competitive advantages.

Anyway, the uninformed purchasing masses will continue to make stupid decisions based on how a product makes them "feel" rather than what the product offers them. So I have no hope for AMD unless they can successfully replicate the marketing and sales tactics of more successful companies.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
It remains to be seen if the DX12 rumors are true about how you can use both banks of memory in Crossfire under DX12 instead of mirroring them. If that's true, then that makes >4GB of VRAM even less necessary.

DX12 supports SFR, which allows both frame buffers to be used. It has to be coded on the application side, but why wouldn't NV/AMD start optimizing their DX12 titles for this? Exciting stuff.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
DX12 supports SFR, which allows both frame buffers to be used. It has to be coded on the application side, but why wouldn't NV/AMD start optimizing their DX12 titles for this? Exciting stuff.

SFR still requires local memory.

The only source for the "more than 4GB" claim came from an AMD PR guy via Twitter, in some random Mantle statement without any examples or explanation.
 

dacostafilipe

Senior member
Oct 10, 2013
772
244
116
Just posting this here because it crossed my mind yesterday:

Most sites say that the 390X will have 4096 "cores" (aka Stream Processors).

That's actually the maximum number of "cores" that the current (documented) GCN architecture supports.

GCN supports a maximum of 4 SEs (Shader Engines) and 16 CUs (Compute Units) per SE. Each CU has 4 SIMDs with 16 ALUs each.

This is a maximum of 4096 "cores".

(The 290X, for example, has 4 SEs with 11 CUs each -> 2816 "cores".)
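For what it's worth, the arithmetic checks out. Here's the same calculation written out in a quick Python sketch, using only the limits quoted in this post (nothing official):

# GCN "core" counts from the limits stated above:
# max 4 Shader Engines, 16 CUs per SE, 4 SIMDs per CU, 16 ALUs per SIMD.
ALUS_PER_CU = 4 * 16               # 4 SIMDs x 16 ALUs each = 64

max_gcn = 4 * 16 * ALUS_PER_CU     # fully built-out GCN: 4 SE x 16 CU -> 4096
r9_290x = 4 * 11 * ALUS_PER_CU     # 290X: 4 SE x 11 CU -> 2816
print(max_gcn, r9_290x)            # 4096 2816
print(f"{max_gcn / r9_290x - 1:.0%} more stream processors")  # ~45%

That last line is also where the "45% increase in Stream Processors" figure comes from.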

That could lead to some conclusions:

1) No big architecture changes to expect.

2) The GPU will be expensive to produce because of the lack of scavenging.

Happy discussing :)
 
Feb 19, 2009
10,457
10
76
DX12 supports SFR, which allows both frame buffers to be used. It has to be coded on the application side, but why wouldn't NV/AMD start optimizing their DX12 titles for this? Exciting stuff.

It will take a long while for SFR to become common. It sounds a lot harder for developers to take full control of VRAM loading and garbage collection in a way that lets multi-GPU setups pool their VRAM into one giant pool (in theory).

@NeoLuxembourg
If you assume there are no changes in the next iteration of GCN for the 390X, then sure.
 

dacostafilipe

Senior member
Oct 10, 2013
772
244
116
@NeoLuxembourg
If you assume there are no changes in the next iteration of GCN for the 390X, then sure.

I expect some changes, mostly more aggressive power/clock-gating like they have in Carrizo, and all-around improvements in other areas.

But nothing really groundbreaking.

Now, will this, and the 45% increase in Stream Processors, be enough? Maybe, maybe not!
 

HurleyBird

Platinum Member
Apr 22, 2003
2,684
1,268
136
Woah, nice. I guess the 390X is more than just GCN 1.2 with HBM strapped on as some have said. Still a bit disappointing it's not GCN 2 though.

EDIT: Are you sure this isn't for GCN 1.2, aka Tonga? The PDF states "Graphics Core Next Architecture, Generation 3"; GCN 1.0 would be gen 1, 1.1 gen 2, 1.2 gen 3.
 

dacostafilipe

Senior member
Oct 10, 2013
772
244
116
Are you sure this isn't for GCN 1.2, aka Tonga? The PDF states "Graphics Core Next Architecture, Generation 3"; GCN 1.0 would be gen 1, 1.1 gen 2, 1.2 gen 3.

You may be right, there's Volcanic Islands in the diagram.

[Image: diagram showing Volcanic Islands]


The release date is March 2015, and the "Gen 3" made me think it was 1.3; sorry for the confusion. :(
 

richaron

Golden Member
Mar 27, 2012
1,357
329
136
"1.1" & "1.2" are just arbitrary designations by the end users/reviewers.

AMD doesn't use those. Right?
 

digitaldurandal

Golden Member
Dec 3, 2009
1,828
0
76
290 might be a good fit if snaggable at around $250
no freesync, no hdmi 2.0, poor power/heat vs the 970.

I got two 290s for $480 shipped with a lot of games. Currently the offer isn't as good as when I jumped on the deal.

Note though, as has already been mentioned in this thread, the 290 does have FreeSync support, as do the 285 and 290X.
 

geoxile

Senior member
Sep 23, 2014
327
25
91
"1.1" & "1.2" are just arbitrary designations by the end users/reviewers.

AMD doesn't use those. Right?

No, they used to use generations and I guess they still kinda do. They used Generations in CodeXL IIRC, but they've changed it to a generic naming system, i.e. "Graphics IPvX". IPv8 was GCN 1.2 IIRC.
 

dacostafilipe

Senior member
Oct 10, 2013
772
244
116
Tonga did introduce changes in the instruction set, but I did not find the docs about it, only some slides:

[Image: GCN 1.2 ISA slide]