[BitsAndChips]390X ready for launch - AMD ironing out drivers - Computex launch


JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Besides the fact that Hawaii isn't new (it was introduced in Nov 2013, more than 18 months ago) -

The price/perf aspect is because Hawaii is heavily discounted right now, but that doesn't mean that AMD or the AIBs are making any money off of it - and making money is what companies are all about.

Hawaii is about 25% larger than Tonga, dissipates about 55% more heat (requiring more cooling) and draws about 55% more power, and doesn't have the 3rd-gen memory compression (hence needs a bigger bus). What all this means is that it costs a lot to build cards around Hawaii.

In essence, the 290/290X are cards designed to compete with top-line Nvidia GPUs like the 970/980, but they have to be heavily discounted to do so despite being obviously more expensive to manufacture.

Yes, I know all this. But from an end-user perspective, it's all irrelevant. Buyers don't care how much the cards cost to manufacture (though they do care about the TDP, which is the primary reason why Hawaii has to be so heavily discounted). The point is that if Hawaii is discontinued and the 300 series goes straight from Tonga to Fiji with nothing in between, then AMD cards will be a worse purchase in terms of perf/$ for consumers. Right now you can buy an R9 290 for ~$250 or an R9 290X for ~$300, and those are hard deals to beat.

I also think Tonga as we know it now (R9 285) is just a shadow of what it's capable of. People seem to forget it isn't just the SPs: it's the 256-bit bus and the 2GB of RAM that hamstring Tonga. Even with that, it bests the 384-bit/3GB R9 280.

Besides what we know - full Tonga has at least 50% more RAM and a 50% larger bus, along with 15% more SPs - it's also rumored that it can support HBM.

I really think the R9 285 was meant to shake out Tonga from a manufacturing standpoint. It's intentionally crippled by its 256-bit bus and 2GB of VRAM. If you look at the benchmarks where it suffers, it suffers because of that. Remove those bottlenecks, and it will be superior to most of AMD's current lineup. And that's what I think the R9 380/380X will be - uncrippled Tonga.

I agree that the R9 285 isn't representative of Tonga's full performance, but I think you're exaggerating here. First of all, there is no reason to think that Tonga has a 384-bit bus. That rumor was started by TechReport, and from what I can determine it was based mostly on the fact that Tonga has a higher transistor count than Tahiti. But these extra transistors can easily be explained by the bigger caches required by the new architecture, plus features like FreeSync and TrueAudio that Tahiti lacked (plus the improved UVD block as you mention below). And there's no reason to think it needs a 384-bit bus; the whole point of the improved delta color compression technology was to allow a narrower bus while maintaining the same or higher level of performance, just as Nvidia did with Maxwell.
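To illustrate the general idea (a toy Python sketch only, not AMD's or Nvidia's actual hardware scheme): pixels within a small tile tend to be similar, so the hardware can store an anchor value plus small deltas in fewer bits, and fall back to raw storage when a tile doesn't compress. The bandwidth saved is what lets a narrower bus keep up:

```python
# Toy illustration of delta color compression (NOT AMD's actual scheme).
# Within a small tile, pixels tend to be similar, so store one anchor value
# plus small per-pixel deltas in fewer bits, and fall back to raw storage
# when any delta is too large to encode losslessly.

def compress_tile(tile, delta_bits=4):
    """Encode a 64-entry tile of 8-bit values as anchor + deltas if possible."""
    anchor = tile[0]
    lo, hi = -(1 << (delta_bits - 1)), (1 << (delta_bits - 1)) - 1
    deltas = [p - anchor for p in tile]
    if all(lo <= d <= hi for d in deltas):
        # 8 anchor bits + 64 * 4 delta bits = 264 bits vs. 512 bits raw
        return "delta", (anchor, deltas)
    return "raw", tile  # incompressible tile: stored as-is, no savings

def decompress_tile(mode, payload):
    if mode == "delta":
        anchor, deltas = payload
        return [anchor + d for d in deltas]
    return payload

smooth = [100 + (i % 8) for i in range(64)]   # gradient tile: compresses
noisy = [(i * 97) % 256 for i in range(64)]   # noisy tile: falls back to raw
for tile in (smooth, noisy):
    mode, payload = compress_tile(tile)
    assert decompress_tile(mode, payload) == tile  # always lossless
    print(mode)
```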

The top-binned, fully enabled Tonga in the Retina iMac has all 2048 shaders active and a TDP of roughly 125W. It still has a 256-bit bus, as expected. What could we expect a desktop variant of this kind of part to do? Well, GCN scales about as you'd expect with the number of shaders (e.g. the R9 290 has 25% more shaders than the R9 280X, and offers about 25% more performance). Since full Tonga has 15% more shaders than the R9 285, we can expect about 15% better performance. This still falls short of the R9 290 by 10-15% (depending on resolution) and short of the R9 290X by an even greater margin. On the other hand, performance/watt should be considerably better. The R9 285 is probably the worst bin of Tonga wafers; the far higher perf/watt of the R9 M295X indicates that Tonga is capable of much better. Fitting full Tonga into a single 6-pin connector (150W TDP) seems like a reasonable expectation. In fact, the FirePro W7100 (cut-down Tonga) already does this.
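For what it's worth, the scaling arithmetic is easy to check from the shader counts alone (a back-of-envelope sketch that ignores clocks and memory bandwidth):

```python
# Back-of-envelope perf scaling from shader counts alone, assuming roughly
# linear GCN scaling at similar clocks (the approximation used above).
# It ignores clock speed and memory bandwidth differences.
shaders = {
    "R9 285 (cut Tonga)": 1792,
    "R9 280X": 2048,
    "Full Tonga": 2048,
    "R9 290": 2560,
    "R9 290X": 2816,
}
base = shaders["R9 285 (cut Tonga)"]
for name, count in shaders.items():
    print(f"{name}: ~{count / base:.0%} of R9 285 (shader count alone)")
# Full Tonga / R9 285 = 2048/1792, a ~14-15% uplift; the R9 290's 2560
# shaders leave full Tonga ~20% behind on shader count alone, with clocks
# and bandwidth narrowing that to the 10-15% gap estimated above.
```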

The performance could be improved by upping the core clock rates (R9 285 clocks at 918 MHz), but this would also risk increasing power consumption levels.

We should also expect the full Tonga card to have 4GB of VRAM, just as the Retina iMac's version does. This may improve performance at higher resolutions (the R9 285 reviews show it falling behind above 1080p). Of course, that could have been done by AIBs at any time with existing R9 285 cards if they wanted to. Has anyone benched a FirePro W7100 (8GB)?

Just as a reminder (and also a reminder of what the 290/290X can't do):

http://www.anandtech.com/show/8460/amd-radeon-r9-285-review

"With this newest generation of UVD, AMD is finally catching up to NVIDIA and Intel in H.264 decode capabilities. New to UVD is full support for 4K H.264 video, up to level 5.2 (4Kp60). AMD had previously intended to support 4K up to level 5.1 (4Kp30) on the previous version of UVD, but that never panned out and AMD ultimately disabled that feature. "

This is true, and it's definitely a nice addition, but Tonga still lacks fixed-function HEVC decoding. I want that feature for my next card (I'll be doing a new build in a few months), and right now I have exactly one choice: the GTX 960. I was hoping AMD might bring some additional options to the table in this regard.
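For anyone who wants to check what a given file actually demands, ffprobe reports the codec and the H.264 level directly (a quick sketch assuming ffprobe is on your PATH; the file name is made up):

```python
# Check a video's codec and H.264 level with ffprobe (must be installed).
# ffprobe reports level as an integer, e.g. 51 for level 5.1 and 52 for 5.2,
# so a 4Kp60 H.264 stream (level 5.2) shows up as 52. File path is made up.
import json
import subprocess

def probe_video(path):
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name,level",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    )
    stream = json.loads(out.stdout)["streams"][0]
    return stream["codec_name"], stream.get("level")

codec, level = probe_video("sample_4k.mp4")  # hypothetical file
if codec == "hevc":
    print("Needs HEVC decode (fixed-function only on e.g. the GTX 960 in 2015)")
elif codec == "h264" and level and level > 51:
    print(f"H.264 level {level / 10}: beyond older UVD's 4K support")
else:
    print(f"{codec} level {level}: widely hardware-decodable")
```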
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
Do you know of any application that draws more power from the GPU than FurMark? If the answer is no, then by definition, FurMark shows the GPU's maximum power usage. What you're saying is that you don't care, because you don't consider it a "real-world" load. But that's a matter of opinion. I, personally, refuse to build a system that will not be stable under all operating conditions. If you don't test with FurMark, then how do you know that running it won't trip the breaker in your power supply, cause the VRMs to melt down, or set your house on fire? I consider no system stable until it has been tested with Prime95 and FurMark simultaneously for 24 consecutive hours.

So I take it you also run your car at max RPMs for a day before you consider it reliable for your daily commute?

You're doing something that your card wouldn't otherwise do to verify that it can do something you wouldn't otherwise do.
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
Yes, I know all this. But from an end-user perspective, it's all irrelevant. Buyers don't care how much the cards cost to manufacture (though they do care about the TDP, which is the primary reason why Hawaii has to be so heavily discounted). ...

I agree with you, but this thread is about speculation/prediction of AMD's next-gen cards. The R9 290 is clearly being flushed from the supply chain; it was not designed to be a $250 card. That TDP/power use also makes the R9 290/290X a non-starter for the vast majority of the PC market.

The R9 285 is probably the worst bin of Tonga wafers; the far higher perf/watt of R9 M295X chips indicates that Tonga is capable of much better. Fitting full Tonga into a single 6-pin connector (150W TDP) seems like a reasonable expectation. In fact, the FirePro W7100 (cut-down Tonga) already does this.

Exactly. The R9 285 uses the worst-binned Tongas, probably put out to stay relevant while fab yields ramp up and yield-improving revisions are made.

The performance could be improved by upping the core clock rates (R9 285 clocks at 918 MHz), but this would also risk increasing power consumption levels.

I would fully expect a revision of the die and fab improvements to offset this before the R9 3xx is released. We're not talking about massive clock ramps - a 10% bump would provide a significant boost in performance vs. the R9 280, given that Tonga already beats it by ~8% at 1080p (1.08 x 1.10 ≈ 1.19, so roughly 19% ahead overall).


This is true, and it's definitely a nice addition, but Tonga still lacks fixed-function HEVC decoding. I want that feature for my next card (I'll be doing a new build in a few months) and right now I have exactly one choice: the GTX 960. I was hoping AMD might bring some additional options to the table with regards to this.

This is true, and while I'm pointing out the improvements Tonga has made vs. the many prior gens of AMD GPUs, AMD still seems ~12 months behind Maxwell even looking at Tonga's features and perf/watt. The gap is much larger if we look at older chips like Hawaii.


On the W7100: it's an R9 285-class 256-bit Tonga with 8GB of RAM and the same number of SPs, and of course set up for stability rather than performance.

So yeah, I already thought about that. It's also part of why I think Tonga will be the new R9 280/280X.

Based on the few benchmarks one can piece together on this:

The W8100 is the R9 290 equivalent, at lower clocks.

The W7100 is only 2-3% slower than that, and more like 5-10% faster than the R9 280X.

Search for "FirePro" here :
http://www.videocardbenchmark.net/high_end_gpus.html

Now look at this :

[Image: 26-Gaming-Performance-2160p.png (gaming performance at 2160p)]


And here regarding the W7100, put up against an R9 290X in non-gaming rendering applications:

http://translate.google.com/transla...bestie-dedicata-profesionistilor/&prev=search

"We ran this test on a gaming video card Sapphire Trixx 290x R9, and I saw a huge difference in FPS: Siemens NX in the latter test scored only 17 FPS compared to the 59 FPS obtained The W7100, or R9 Solidworks 290x test had an average of 66 FPS and W7100 took an average of 85 FPS. "
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
We finally have specific dates! :thumbsup:

From the site that brought us the details about upcoming Skylake chips, they now have revealed availability/launch of R9 300 cards:

June 18th - R7 360 all the way up to R9 380 will be available to buy (retail cards I guess?)
June 24th - R9 390 and R9 390X (Fiji) will be announced and available to buy

Source: http://benchlife.info/amd-radeon-300-series-would-announce-in-june-18th-and-24th-05112015/
Translated: https://translate.google.com/translate?sl=auto&tl=en&js=y&prev=_t&hl=no&ie=UTF-8&u=http%3A%2F%2Fbenchlife.info%2Famd-radeon-300-series-would-announce-in-june-18th-and-24th-05112015%2F&edit-text=
 

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
Sadly, I feel absolutely no need to upgrade from a 2500K and an R9 290 at 1080p.

We need some better 4k monitors to come out to get people upgrading.
 

boozzer

Golden Member
Jan 12, 2012
1,549
18
81
We finally have specific dates! :thumbsup:

From the site that brought us the details about upcoming Skylake chips, they now have revealed availability/launch of R9 300 cards:

June 18th - R7 360 all the way up to R9 380 will be available to buy (retail cards I guess?)
June 24th - R9 390 and R9 390X (Fiji) will be announced and available to buy

Source: http://benchlife.info/amd-radeon-300-series-would-announce-in-june-18th-and-24th-05112015/
Translated: https://translate.google.com/transl...ce-in-june-18th-and-24th-05112015/&edit-text=
If your source and info are true regarding performance, you will become a god on this forum :cool:
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
I just hope for AMD's sake these cards aren't limited to 4GB on the top end (390/390X). I think this thread is a good example of what happens when there's a lack of excitement in a product. As soon as it started looking more and more like the 4GB rumor may be legit, the activity in here came to a grinding halt, with the exception of some bickering.
 

Serandur

Member
Apr 8, 2015
38
0
6
I just hope for AMD's sake these cards aren't limited to 4GB on the top end (390/390X). I think this thread is a good example of what happens when there's a lack of excitement in a product. As soon as it started looking more and more like the 4GB rumor may be legit, the activity in here came to a grinding halt, with the exception of some bickering.

I hope so too. If they manage a seamless 8GB solution, the 390X will look quite tempting. But 4 GBs... on something so powerful... nuh-uh, GM200 would be my choice in that case.:thumbsdown:
 

Sequences

Member
Nov 27, 2012
124
0
76
Holy cow, some of you guys are attached to the idea of 8GB of VRAM. As if hardware vendors are holding back the goodies for themselves. God forbid someone writes a game that mallocs 32GB of RAM; will you then be crying out against games that use only 1-2GB?
 

Shehriazad

Senior member
Nov 3, 2014
555
2
46
Notice people aren't saying we need X amount of RAM? Just that we need anything as long as it's more than 4GB.

That's because for the super high end area....the area that IS 4K+ gaming...4GB is just too close to the limit.

Just like how nowadays 2GB of VRAM will no longer be good enough for high end 1080p gaming. That's just a fact.

Sure...for average Joe with average Joe settings...this will be okay...but people that buy high end cards want to be able to crank settings to the highest without having to fear running out of video memory.

Like...when this card comes out @ 4GB only...at that point there will already be games that come ridiculously close to the 4GB mark. Why would anyone want to dish out $700 or so for a card that might not be able to run the highest texture settings anymore in like half a year...that just seems super bad. Because even enthusiasts generally expect their hardware to run everything they throw at it for a while...I know, right...shame on them.
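The arithmetic behind "too close to the limit" is easy to sketch: render targets alone scale with resolution and MSAA, before textures even enter the picture (rough lower-bound estimates only; real engines allocate far more):

```python
# Rough floor on render-target memory at various resolutions and MSAA levels.
# Real games pile textures, shadow maps, and G-buffers on top of this, so
# treat these numbers as a lower bound, not a VRAM requirement.
def render_target_mb(width, height, msaa=1, bytes_per_pixel=4):
    color = width * height * bytes_per_pixel * msaa  # multisampled color
    depth = width * height * 4 * msaa                # 32-bit depth/stencil
    resolve = width * height * bytes_per_pixel       # resolved back buffer
    return (color + depth + resolve) / 2**20

resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
for res, (w, h) in resolutions.items():
    for msaa in (1, 4):
        print(f"{res} {msaa}xMSAA: ~{render_target_mb(w, h, msaa):.0f} MB")
# 4K with 4xMSAA is ~285 MB for a single color+depth target, and engines
# use many render targets per frame - which is how 4GB gets tight fast.
```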
 
Feb 19, 2009
10,457
10
76
Notice people aren't saying we need X amount of RAM? Just that we need anything as long as it's more than 4GB.

You know Vaga, back during the 4800 series, I said 512MB wasn't enough. I got the 1GB version.

In the 5800 series, I KNEW 1GB wasn't enough for CF. I should have gotten the 2GB version, but they weren't available, so I got the 1GB version - and plenty of games later, my setup could have maxed them out if only it had more VRAM.

During the 680 vs. 7970 era, I got the 7970 specifically because of 3GB vs. 2GB (which I knew would not be enough). There are already quite a few games where the 7970 can put on higher-quality textures where 2GB cards cannot.

SLI 780 3GB vs. CF R9 290 4GB, same deal; the R9 290s have proven to be the ones with more longevity.

Now if it's 390X 4GB vs. 980 Ti 6GB, I know which one is at a disadvantage. History repeats itself.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
This was the exact same argument that went around on this forum 3 years ago with the 2GB 680 vs. the 3GB 7970. Fast forward 3 years and look where 2GB vs. 3GB is now.

The argument is not the same at all.

The 680 2GB and the 7970 cost more or less the same; the choice between 2GB and 3GB was not hard - we are talking +/-$50. Shortly after the 7970GHz launched, the 7970 cost about the same as a 670, while the 7970GHz cost less than the 680 2GB and way less than the 680 4GB. Since there was NO premium to get more VRAM with AMD over NV, the decision was different.

If a $449-500 R9 390 non-X has 4GB of VRAM and 87% of the performance of the Titan X, then stepping up to 6-12GB may mean paying $700-1000 for a GM200 6GB, an R9 390X 8GB, or the Titan X.

However, history proves time and time again that 15-20% extra performance within the same generation does absolutely nothing for future-proofing over 2-3 years. There are very few exceptions to this rule (if you bought an X1950XTX for DX9 games, then yes, it did outlast the 7900GTX). Otherwise, if you need good performance in the latest games, you are always better off buying, selling, and upgrading than holding on to last-gen flagship cards for a long time. There has been practically no exception to this upgrading strategy for GPUs in 20+ years.

Proof.

The person who bought GTX470s instead of 480s and upgraded sooner to 670s is better off than someone who held on to 480s for too long. The person who bought HD6950s instead of 580s and dumped those and got 7950s is better off than the person who held on to 580s. The person who got 970s today and will dump them in 2 years will be better off than someone who spent $400 extra on 980s and will hold on to them beyond 2 years while the ex-970 user will have long upgraded to something way faster.

Spending hundreds of dollars extra upfront today on the flagship cards of any generation is a fruitless future-proofing strategy because it never worked. The far superior future-proofing strategy that works is upgrading more often.
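A toy model of that buy/sell/upgrade argument (the prices and resale fractions are illustrative assumptions, not market data):

```python
# Toy model of the "buy 2nd tier, resell, upgrade" argument. The prices and
# resale fractions here are illustrative assumptions, not market data.
def rolling_net_cost(buy_price, resale_frac, cycles):
    """Net cash spent if you buy at buy_price and resell each cycle."""
    return cycles * (buy_price - buy_price * resale_frac)

flagship_held = 700                                         # one card, kept 4 years
rolling = rolling_net_cost(450, resale_frac=0.5, cycles=2)  # two 2-year cycles

print(f"Flagship held 4 years:        ${flagship_held}")
print(f"2nd tier, resold every 2 yrs: ${rolling:.0f}")
# ~$450 net vs. $700: similar or less money spent overall, and the rolling
# upgrader spends the back half of the period on a card a generation faster.
```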

I have 2GB 680's in SLI so I know exactly what it's like to be VRAM limited. It's not a mistake I plan on repeating.

The 'mistake' you made has nothing to do with 680 2GB SLI. You bought the wrong cards to begin with. The right upgrade strategy was GTX670 2GB SLI, then reselling them and getting something faster with the $200 saved. That way you would never have cared about the VRAM bottleneck. The 680's 10% performance advantage over the 670 never provided superior gameplay, and in SLI the premium made even less sense: with 80-90% scaling, that 10% per-card advantage translates into an even smaller real-world gap while the price premium doubles.

That's why 980 SLI for 1080p-1440p is a waste of money over 970 SLI by the exact same logic: 980 SLI won't be more future-proof than 970 SLI, because the 970 SLI owner will sell those cards sooner, lose less in resale value, and roll the proceeds over towards way faster cards (50-60% faster Pascal/14nm AMD cards).

[Image: perfrel_2560.gif (relative performance chart at 2560x1600)]


Because you bought $1000 of GPUs and held on to them for so long, you now feel you need to get your money's worth out of them. Had you bought 670s like many people did, then sold them and upgraded to 970s by now with the $200 saved back then, you wouldn't care about a 2GB VRAM bottleneck. Even if you had gone with HD7970 CF instead, it was still a worse decision than buying 670s or 7950s, reselling them, and getting 970s or 290Xs now, for example.

Now you might be asking how I can preach this advice while owning 7970s myself. That's because I paid $0 of my own money for them: Bitcoin paid for them, so it didn't matter whether they cost $500 or $3000. They were bought to make money over 7950s. Performance-wise, 7970 CF was a waste over 7950 CF, and anyone who ever used 7950s OC vs. 7970s OC would attest to the same.

Future-proofing beyond 2 years doesn't work, and never has. The best upgrading strategy is to resell old cards and get new ones, vs. overspending $200-300 extra on flagship cards NOW to keep them 3+ years. That's been a failed strategy every generation for the last 20 years (the ComputerBase article linked below shows the same).

You got your 4GB card a while ago. But if I were to pay big bucks for potentially Titan X or above performance, I'd like to think that in 2 years' time it can still handle games on ultra textures.

When was the last time you kept a GPU setup beyond 2 years? 6900 series, 7900 series, and now the 290 series -- looks like you upgraded every time at the 1.5-2 year mark, tops.

Let's say you buy dual R9 390X 8GB or GM200 6GB cards and pay $1200-1400. I can just buy dual R9 390s or 2nd-tier GM200 cards, dump them in 2 years, and get faster cards. I'll be better off in 2 years' time than your 390Xs or GM200 6GBs, while having 95% of your cards' performance until that point.

Flagship cards don't make sense unless you can upgrade them every generation, not every 3-4 years.

15-20% differences today get completely wiped out by next gen games. I can't believe with your extensive experience as a PC gamer you haven't learned this yet.

GeForce Ti 4200 & 4600
5900XT & 5950U
6800GT & 6800UE
GTX470 & GTX480
GTX570 & GTX580
GTX670 & 680 & 770
780 & 780Ti
970 & 980

Each of these pairs became obsolete at the same time. The same goes for AMD's top and 2nd-tier cards. It doesn't matter which you buy; this has been true for 20 years and continues to be true.

The only way to future-proof beyond 2 years is to spend LESS today, get 80-90% of the performance of the top cards, and buy a newer card in 2 years with the money saved. If you can't afford $700+ flagships every gen, don't bother with them, as they won't outlast their slightly slower brethren.
http://www.computerbase.de/2015-05/geforce-gtx-470-570-670-770-970-vergleich/2/





TL;DR:

Let's get one thing straight, as a lot of people here spin information to try to frame certain members as 'red' fans. If the R9 390X 4GB and GM200 6GB are priced and perform similarly in stock and overclocked states, GM200 6GB is the clear buy, assuming most other aspects are also similar (power usage, features, etc.). If the R9 390X has 8GB of VRAM and GM200 has 6GB, the extra VRAM of the R9 390X is not an advantage but a marketing bullet point. So no, contrary to some posters' claims on these boards, I do not favour AMD going with 4GB of VRAM over 6GB whatsoever, nor do I defend the choice of 4GB over 8GB in all situations.

My point is this:

It's way better to buy a $500 R9 390 non-X with 85-87% of the performance of the Titan X than a 15% faster R9 390X 8GB for $700 if all you are using is a 1080p-1440p monitor, because both cards will become "too slow" for next-gen games anyway, as their GPUs are too close to each other. That's why the 2nd-tier GM200 and R9 390 cards are actually the more interesting cards, but unfortunately on 'enthusiast forums' (where 'enthusiast' now means whoever spends more on PC hardware) they are largely ignored since they lack "flagship glamour."

However, I bet you that if you ask any GTX670, 970, R9 290, 6950, GTX470, 570, or HD7950 owner whether those cards were awesome, they will tell you that saving hundreds of dollars over the top card and putting those savings towards a next-gen upgrade was by far the better decision for those who can't afford $700+ cards every gen. From that point of view, a $450-500 R9 390 non-X (or GM200 competitor), even with 4GB of VRAM but 15-20% faster than the 980, would be a HUGE win. In fact, I'll go one step further and say that if the GTX980 falls to $399, it could become a sleeper "best buy" for those who view this gen as a stop-gap anyway ;)
 

Serandur

Member
Apr 8, 2015
38
0
6
You know Vaga, back during the 4800 series, I said 512MB wasn't enough. I got the 1GB version.

In the 5800 series, I KNEW 1GB wasn't enough for CF. I should have gotten the 2GB version, but they weren't available, so I got the 1GB version - and plenty of games later, my setup could have maxed them out if only it had more VRAM.

During the 680 vs. 7970 era, I got the 7970 specifically because of 3GB vs. 2GB (which I knew would not be enough). There are already quite a few games where the 7970 can put on higher-quality textures where 2GB cards cannot.

SLI 780 3GB vs. CF R9 290 4GB, same deal; the R9 290s have proven to be the ones with more longevity.

Now if it's 390X 4GB vs. 980 Ti 6GB, I know which one is at a disadvantage. History repeats itself.

It was only a few months after I got my 780 that Watch Dogs released, and I suddenly couldn't do ultra textures without hitching while driving around. The people who told me it was going to be fine with 3GB and that it wasn't worth taking the 290 over it were wrong, and my $500+ purchase felt intentionally crippled by Nvidia. There are many colorful expletives I would use to describe them after that, and after the way they locked 6GB exclusively to a $1000 Titan to justify the overpriced abomination's existence. And then, as a side remark, there's what became of Kepler's performance once Nvidia needed to justify its 5-10% faster mid-range Maxwell chips on top of that...

I was always uncomfortable with the 680/670's 2GB of VRAM versus the 7970/7950, and now it's obvious those fears were justified as well. Now I'm rocking a 1440p panel, I install high-res texture mods on any game that allows it, and I downsample even more often (in Skyrim, I do both).

I had SLI 970s for a while and the 3.5 GB limitation hit me hard. I was one of the first people reporting the problem before it was even officially unveiled because I was slamming into problems the moment I enabled 2xMSAA on AC: Unity and because Skyrim at 5K downsampled and SpaceEngine were also giving me strange VRAM-related issues. I have since gone through 290Xs (one of which was an 8 GB model) and a 980 out of curiosity and for testing purposes and they did not suffer like the 970s, but are similarly close to the limit with modern games, high-res textures, and any MSAA. Anecdotally, the 980 runs Shadow of Mordor with its max textures generally well, but occasionally it hitches in between area transitions whereas my 8 GB 290X did not. Similarly, my 980 does hit an MSAA cap in Unity the 8GB 290X didn't which results in some occasional frametime spikes (which are very brief and will not make a big dent in average FPS figures, but are annoying nonetheless).

Now, it's not a big problem for me with a single 290X or 980, because they can't run that at 60 FPS even if they had the VRAM, but my SLI 970s could have, and I'm confident the 390X/980Ti will be able to as well... except that the 4GB limit on the 390X is really cutting it close in the here and now for what I intend to do with the beast. I'm not buying any more mid-range parts and am going to try switching to a roughly two-year upgrade cadence in line with big-die parts on a mature node, and I'm not riding a card with the power of a 390X/Titan X into 2017 with 4GB of VRAM, period. The hypocritical denials of the same people who would no doubt have agreed Kepler was crippled by too little VRAM will not dissuade me. I refuse to spend another $500-$800 to be burned by the same limitation again.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
You know Vaga, back during the 4800 series, I said 512MB wasn't enough. I got the 1GB version.

In the 5800 series, I KNEW 1GB wasn't enough for CF. I should have gotten the 2GB version, but they weren't available, so I got the 1GB version - and plenty of games later, my setup could have maxed them out if only it had more VRAM.

During the 680 vs. 7970 era, I got the 7970 specifically because of 3GB vs. 2GB (which I knew would not be enough). There are already quite a few games where the 7970 can put on higher-quality textures where 2GB cards cannot.

SLI 780 3GB vs. CF R9 290 4GB, same deal; the R9 290s have proven to be the ones with more longevity.

Now if it's 390X 4GB vs. 980 Ti 6GB, I know which one is at a disadvantage. History repeats itself.

You are talking about situations where, "all else being equal," it was better to go with more VRAM. No doubt about it. When the 7970 and 290 were cheaper than the competing product and had additional VRAM, you would have been blind not to go with the 7970 or 290. Although lots of people did just that.

Historically it's been a tough sell to get people to spend an extra hundy for double the VRAM. Now though, if the 390 is 4GB and the 980 Ti is 6GB, then regardless of anything else the 390 is a failure. Never mind that for most people 4GB will be just fine. That it will have superior HBM compared to comparatively outdated GDDR5. That it might cost substantially less. None of this matters. Just 6GB > 4GB, so it's better and the 390 sucks.

I think we should wait and see what happens. We have every reason to believe an 8GB version is coming; we've seen the Hynix presentation talking about a "dual link interposer" offering 2x the RAM. If it doesn't come at first, it should come as soon as it's available. It's not the end of the world. Except for people who want to run 3K/4K with high AA and don't care about FPS being at or near 60, it's going to perform just fine.

I think it should be judged on perf/$ and the individuals needs. Not some arbitrary number where bigger is automatically better. Like I said, in situations where all else is equal? Sure. It's a no brainer.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I thought it was obvious from the factual information available, but since it's escaped you:

R9 380 = Tonga
R9 380X = Tonga XT

R9 390 and 390X = variations on Fiji

How in the world is that obvious? For starters, AMD has released no information on R9 380 desktop cards, but you assumed those are 1792- and 2048-shader Tongas?

AMD historically has a 30-40% gap between its flagship card and its mid-range cards (7970GHz vs. 7870, R9 290X vs. 280X, HD4870 vs. 4850). You realize a Tonga XT with 2048 shaders would get obliterated by 50%+ by a 3500-shader Fiji card? It makes no sense to have a Tonga XT that's barely faster than a 280X and an R9 390 that's 50%+ faster.

I think you really need to check where the R9 280X and Titan X stand - we are talking a 2X performance delta. AMD cannot have a line-up where Tonga XT is barely 10% faster than an R9 280X but the R9 390 is a whopping 50% faster and the R9 390X is 75-100% faster. Tonga XT doesn't work as a 380X. The 380X needs to be at least as fast as a 970, unless AMD has more than one tier above the 380X (i.e., 390/390X and 395/395X).

Unless you think 390X is only 10% faster than R9 290X, Tonga XT as a 380X would be a total fail and an illogical position.

But then comes the pricing. A Tonga XT with 2048 shaders should not cost more than $199-209, because the R9 280X is $199 today. And you said R9 390 = Fiji variants.

The R9 290X is 35% faster than the HD7970GHz, and based on the rumours we have, basically any Fiji card will be faster than an R9 290X. In that case, how are you going to price the R9 390 variants and your mythical Tonga 380X card? Have a $200 price gap between them?

I think it should be judged on perf/$ and the individuals needs. Not some arbitrary number where bigger is automatically better. Like I said, in situations where all else is equal? Sure. It's a no brainer.

:thumbsup::thumbsup: Exactly. No one in this thread is saying that one should buy a 4GB card vs. a 6GB card if "all else is equal." But once people start discussing future-proofing, it's even more vital to discuss buying 2nd-tier cards and upgrading more often with the money saved by not getting the top cards, because that strategy has been superior forever. Hyping up 6-8GB cards as more future-proof misses the point that they will cost $150-200 more per card against, say, a 4GB R9 390 non-X. The question then becomes: are you going to pay $150-200 extra for the potential that at some point 6-8GB becomes worth it?

One just has to look at the GTX570 1.28GB at $350 vs. the GTX580 3GB at $550 to conclude that the latter was a waste of money as far as future-proofing goes. The 570 user happily used his card for 2 years, then resold it for $175, took the $200 saved by not spending it on an over-hyped 580, added $25-50 more, and got the much faster HD7970/670 at $400-425. A 670 OC and 7970 OC beat a 580 OC by 40-80%. Oops. Checkmate, 580 3GB "future-proofing" user. :cool:
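Sanity-checking that math with the figures from the post:

```python
# Sanity-check of the GTX 570 vs. 580 math above (all figures from the post).
gtx570_price, gtx580_price = 350, 550
resale_570 = 175                 # 570 sold after ~2 years
upgrade_price = 425              # HD7970/GTX670-class card
net_570_path = gtx570_price - resale_570 + upgrade_price  # 350 - 175 + 425
print(f"570-then-upgrade net outlay: ${net_570_path} vs. ${gtx580_price} for the 580")
# $600 vs. $550: for ~$50 more over the whole period, the 570 buyer ends up
# on a card that beats an overclocked 580 by 40-80%, per the post's figures.
```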

[Benchmark charts: GTX580 vs. HD7970/GTX670 relative performance]


Since GPUs continue to improve in performance so fast every generation (35% faster per year on average, or at least 2X faster every 3 years), trying to spend $200-300+ extra to future-proof beyond 2 years is for the lulz. I guarantee that in July 2017, a $500 14nm HBM2 card will smash the GM200 6GB/R9 390X, and have more modern features too.
 

96Firebird

Diamond Member
Nov 8, 2010
5,749
345
126
4GB will be fine if you don't mind unchecking a few boxes and turning a few settings down in some games.

Now, if potential 390(X) buyers feel this is unacceptable, they shouldn't buy the card.

That simple.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
4GB will be fine if you don't mind unchecking a few boxes and turning a few settings down in some games.

Now, if potential 390(X) buyers feel this is unacceptable, they shouldn't buy the card.

That simple.

I would actually like to see whether, in practice, you can run playable settings that need more than 4GB of VRAM. 1440p is the new go-to gaming res. Newer gaming screens are 144Hz. I wouldn't buy a 27" 144Hz monitor to play below 60fps. I mean, what's the point of paying the premium for greater-than-60Hz performance and then running settings that tank the frame rate to 40fps or less? Even downsampling and running the equivalent of 4K without AA isn't using more than 4GB.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
4GB will be fine if you don't mind unchecking a few boxes and turning a few settings down in some games.

Now, if potential 390(X) buyers feel this is unacceptable, they shouldn't buy the card.

That simple.

People buy $500+ video cards specifically so that they don't have to uncheck boxes and turn settings down. At that price level, no compromises are expected.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
It was only a few months after I got my 780 that Watch Dogs released and I suddenly couldn't do ultra textures without hitching while driving around. The people who told me it was going to be fine with 3 GBs and it wasn't worth taking the 290 over it were wrong and my $500+ purchase felt intentionally crippled by Nvidia.

3GB vs. 4GB for the 780 vs. 290 wasn't even the crux of the purchase here. An R9 290 OC was faster than a 780 OC for $100 less. That made the 780 a worse buy regardless of VRAM. In a situation where the card that's as fast or faster also has more VRAM and costs less, you should never have purchased the 780 to start with. It's disappointing that some people actually gave you such horrible advice. On our forum, most people would have recommended an after-market 290 when the 780 was $500+. When gamers with those cards compared them stock vs. stock or OC vs. OC, the 780 couldn't win, which made its $100 premium largely a marketing/brand tax.

There are many colorful expletives I would use to describe them after that and the way they locked 6 GBs to a $1000 Titan exclusively to justify the overpriced abomination's existence. And then seeing what became of Kepler's performance after Nvidia needed to justify their 5-10% faster mid-range Maxwell chips on top of that as a side remark...

This is a very important point you bring up against future-proofing within any current gen. Whether it's AMD or NV, even if they optimize the last-gen architecture as best they can, their last-gen cards have inherent architectural weaknesses (tessellation in the 5850, 6950, 7970, or R9 290, or compute in Fermi and Kepler). That means once newer-gen games start using some of those graphical features more extensively, no amount of optimization will save an R9 280X/7970GHz from tanking when tessellated God Rays are enabled in FC4. That's all the more reason not to keep any gen beyond 2-3 years. Just dump it and get something faster.

I was always uncomfortable with the 680/670's 2GBs of VRAM versus the 7970/7950 and now it's obvious those fears were justified as well. Now I'm rocking a 1440p panel, I install high-res texture mods on any games that allow it, and downsample even more often (in Skyrim, I do both).

I had SLI 970s for a while and the 3.5 GB limitation hit me hard. I was one of the first people reporting the problem before it was even officially unveiled because I was slamming into problems the moment I enabled 2xMSAA on AC: Unity and because Skyrim at 5K downsampled and SpaceEngine were also giving me strange VRAM-related issues. I have since gone through 290Xs (one of which was an 8 GB model) and a 980 out of curiosity and for testing purposes and they did not suffer like the 970s, but are similarly close to the limit with modern games, high-res textures, and any MSAA. Anecdotally, the 980 runs Shadow of Mordor with its max textures generally well, but occasionally it hitches in between area transitions whereas my 8 GB 290X did not. Similarly, my 980 does hit an MSAA cap in Unity the 8GB 290X didn't which results in some occasional frametime spikes (which are very brief and will not make a big dent in average FPS figures, but are annoying nonetheless).

All great points. If you know for sure you play a lot of modded titles and games where more VRAM is 100% going to benefit you, get the card with more VRAM, since you will enjoy that benefit right away, not in 2-3 years. Also, if you can easily afford a $700-800 card over the cheaper $400-500 mid-range cards, enjoy it; just don't keep it too long, as its resale value will bomb (just look at the GTX480/580/780Ti or HD7970GHz/R9 290X)! :thumbsup:

The hypocritical denials of the same people who would have no doubt agreed Kepler was crippled by too little VRAM will not dissuade me. I refuse to spend another $500-$800 to be burned by the same limitation again.

There is no hypocrisy if you read the posts more carefully. Since you are new to our forums, you should get used to some people putting words in other people's mouths. There is good info on our forums, but you have to learn to compare different viewpoints and make your own decision. Of course, if a 4GB R9 390X costs roughly the same as a GM200 6GB and they perform similarly, pick the 6GB card. Pretty much no one in this thread disagrees with that. :cool:

We really need to see where the R9 390 non-X, 390X, and the various GM200 cards end up in price/performance before assessing what's actually worth $500-800. Right now everyone on this forum is guessing at where the R9 390 series will land, as there is no data from AMD on the R9 300 series and no one is confident enough to call any one leak credible.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
We finally have specific dates! :thumbsup:

From the site that brought us the details about upcoming Skylake chips, they now have revealed availability/launch of R9 300 cards:

June 18th - R7 360 all the way up to R9 380 will be available to buy (retail cards I guess?)
June 24th - R9 390 and R9 390X (Fiji) will be announced and available to buy

Source: http://benchlife.info/amd-radeon-300-series-would-announce-in-june-18th-and-24th-05112015/
Translated: https://translate.google.com/transl...ce-in-june-18th-and-24th-05112015/&edit-text=

I hope that site isn't correct, because the picture it paints is very bleak. It's hard to tell because of the translation, but the site seems to be saying that the R9 300 series will consist of only five cards, two of which are new Fiji cards and three of which are rebranded trash silicon (360, 370, and 380). Even the full Tonga doesn't seem to be mentioned. It seems outlandish that AMD would completely give up on everything but the ultra high end.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
People buy $500+ video cards specifically so that they don't have to uncheck boxes and turn settings down. At that price level, no compromises are expected.

Every GPU purchase is a compromise. There is no videocard purchase in the world that isn't a compromise of something. VRAM alone is not the end-all-be-all of GPU purchasing decisions. It certainly never stopped most people from buying GTX480/580 1.5GB vs. 6950/6970 2GB, or 670/680 2GB vs. 7950/7970/7970GHz. Also, for the last 7 months, millions of PC gamers bought 970 and 980 cards and put them in SLI. It's a very interesting position this forum is taking: that 4GB on a card slower than 970 SLI and WAY slower than 980 SLI is no longer sufficient, yet no such aversion to 4GB cards was voiced in any thread on our forums where someone was considering 980 SLI. It's not like 980 SLI's performance changes when the R9 390X or GM200 6GB launches; what changes its performance are NV's drivers and the games. Thus far, 980 OC SLI walks all over a Titan X OC in almost all games where SLI scales. You would be seriously hard-pressed to find a situation where the Titan X provides a better gaming experience than 980 SLI; we are talking rarities.

Do you honestly believe AMD's $500 card will be as fast as 980 SLI? There is no way that's happening.

Do you believe that, with the release of 6-12GB cards, PC games will out of the blue require 6-8GB of VRAM to run? Software is roughly 1.5-2 years behind hardware nowadays. Sure, a 680 2GB is technically inferior to an HD7970 today, but let's face it, it's not like you are getting 60 fps with everything maxed out on the 7970 either. I think the decision between 6-12GB and 4GB isn't as clear-cut as some people on this forum want to make it, because the 6-12GB cards will carry premiums - it's not like getting 'free' VRAM, as was the case with the 680 vs. 7970 or the 570 vs. 6950/6970. We know that 14nm/16nm GPUs with HBM2 will have monster performance and newer features, and that's all coming in 18 months. Knowing all this, how many people on this forum are really going to keep their GM200/Titan X/R9 390X for 3+ years?
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106

I'm not going to quote your entire novel. My mistake was certainly not 680s over 670s, as I'd have the same regret if I owned 670s. Cost wasn't a factor; I'm not regretting that I spent more for 680s over 670s. I regret not waiting until 4GB 680s became available. Your attempt at mind reading has failed you here. Back then, the mistake was getting a high-end card with 2GB of RAM. In 2015, the mistake is going to be getting a high-end card with 4GB of RAM. It's not going to happen with this consumer's money.

And games requiring more VRAM is about as "out of the blue" as the PS4 and XB1 suddenly appearing in people's households.