[VC][TT] - [Rumor] Radeon Rx 300: Bermuda, Fiji, Grenada, Tonga and Trinidad


Hauk

Platinum Member
Nov 22, 2001
Off topic, but I've just been looking over the R9 295X2's design, reviews, and pricing. I've never owned a dual-GPU board, opting for two cards instead, but I'm intrigued by the former. Would it be foolish to buy one right now?
 
Silverforce11

Feb 19, 2009
A) With respect to 390x and efficiency, you are talking about unconfirmed leaks. Neato. But I'll go with it.

B) You are trying to compare a full-fledged chip's (Fiji 390x) expected efficiency to a cut-down chip's efficiency (GM204 970). Niiiiiice. That makes absolutely no sense given that the 390x should trounce the 970, and especially when I very easily pointed out that the 980 is about 15-20% more efficient than the 970, since both end up using about the same power while having a 15-20% performance difference.

C) Tonga is still noticeably slower than the 280x, by 20% or more, in every game new and old except Wolfenstein and Watch Dogs - but Watch Dogs doesn't count because you said it doesn't in the other thread. Wait a second, you didn't think Far Cry 4 should count either, because GW and Ubisoft and Hardocp suck, but here you are grasping. Doesn't matter. Tonga is still crap. CRAP. CRAP. CRAP. It's a complete dud from an engineering standpoint, and AMD would have been better off not spending the R&D to bring it out in its current state on 28nm.

A) Yup, we're basing these discussions on assumed specs & performance.

B) If I compared it to the 980, I could say it comes close to matching it on efficiency (it beats the 970, it loses to the 980). Does that make Fiji XT bad? Not at all. Coming close to an efficient design while chasing raw performance should be seen as a good result (see the quick sketch after point C).

C) In the Tonga launch thread I made multiple statements that it's a bad product; 2GB VRAM automatically rules it out of that price/perf segment. But the chip itself you could only call bad if you look at it in isolation. If you see it as a test bed for bandwidth compression, a better front end and much-improved tessellators (which may not show themselves in today's games), then you see it has potential for future designs.
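A quick sketch of the efficiency arithmetic being argued over in point B; the board powers are assumed roughly equal, as the quoted post claims, and all figures are illustrative rather than measured:

```python
# Perf/watt follows directly from performance when power draw is equal.
# Assumed figures: custom 970s and 980s drawing roughly the same board
# power in games (per the quoted claim), with the 980 ~15-20% faster.
power_970_w = 180.0              # assumed gaming board power
power_980_w = 180.0              # "about the same power"
perf_970 = 1.00                  # performance index, 970 = baseline

for perf_980 in (1.15, 1.20):    # the claimed 15-20% performance gap
    ppw_970 = perf_970 / power_970_w
    ppw_980 = perf_980 / power_980_w
    print(f"980 perf/watt advantage: {ppw_980 / ppw_970 - 1:.0%}")
# Prints 15% and 20%: at equal power, the perf gap maps 1:1 onto perf/watt,
# so a chip that slots between the two on efficiency beats the 970 by default.
```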

The reason people think it's castrated is its SP count, exactly that of the castrated Tahiti in the 7950/280.

That, and the "full-fat" Tonga is the M295X in Macs. That may explain the lack of an R9 285X on the desktop: they are using it to supply Apple instead.

Note that its full-fat spec exactly matches the 7970 in SPs, ROPs and TMUs; only the bus was lowered to 256-bit. As for concrete proof? Only AMD could give you that. But if you think for yourself and analyze the evidence, you'll understand that desktop Tonga, the R9 285, is in fact a cut-down chip.



ps. I still think Ubisoft games are just that, crap, and shouldn't make up nearly half of the game suite on an enthusiast site such as [H]. Go read their WD, ACU and FC4 performance reviews and note how much hate they dished on these games for the broken messes they were and still are. Is it then fair of them to reward Ubi with recognition as a legitimate benchmark for the industry? There's a failure of logic at play. But that's beside the point of discussion here. Tonga's architecture is forward-looking: it shows that when games stress the front end of GCN, Tonga can shine, and it's a good foundation to improve upon.
 

MisterLilBig

Senior member
Apr 15, 2014
I'm interested in what GCN 1.3 brings to the table. As Silverforce11 mentioned, and I totally agree, GCN 1.2 was quite an improvement; AMD did well there. The problem was that the product wasn't beating the one below it most of the time, which is what people expect. So once again AMD fails at marketing. -.-

*Uses AMD products*

On topic though, I wouldn't even think of getting a non-GCN 1.3 card if I were going to buy a new one. That's part of the upgrade, IMO!
AMD should stop relabeling cards too!
 

hawtdawg

Golden Member
Jun 4, 2005
Fiji XT and Pro are too close together spec-wise for this to be real. That's not even a 10% increase in cores.
 

3DVagabond

Lifer
Aug 10, 2009
Off topic... I've never owned a dual-GPU board, but I'm intrigued. Would it be foolish to buy one right now?

The 295x2 is a great card. You just have to make sure your PSU doesn't have issues feeding it. But it runs cool and quiet, and XDMA CrossFire is the best multi-GPU setup. Considering earlier GCN designs seem to be aging quite well, it should stay viable at least as long as current Maxwell, and it's already a better choice than Kepler.
 

Grooveriding

Diamond Member
Dec 25, 2008
Any idea why the 280X is destroying every other card in the comparison?

Nvidia's driver support for anything but their newest architecture is garbage. They've abandoned optimizations for Kepler altogether, going by benchmarks comparing Kepler cards to Maxwell cards in games released since the 970/980 launched, contrasted against the same comparison in games that were already out at launch.

The 280X is faster than a 780 in Far Cry 4, an Nvidia TWIMTBP crapworks title; that says it all.
 

boozzer

Golden Member
Jan 12, 2012
Nvidia's driver support for anything but their newest architecture is garbage... The 280X is faster than a 780 in Far Cry 4; that says it all.
That's what I have noticed too. NV only supports the latest gen and that is it. You buy NV = 1 year of driver support and optimizations.
 

3DVagabond

Lifer
Aug 10, 2009
Nvidia's driver support for anything but their newest architecture is garbage... The 280X is faster than a 780 in Far Cry 4; that says it all.

That doesn't explain the 285, though. If it were just poor nVidia drivers, the 285 would be wiping the floor as well, but it's not.
 

boozzer

Golden Member
Jan 12, 2012
So um, when do we get to see benchmarks? If the 380x really is a rebrand, the 50% performance increase is just BS, right?
 

garagisti

Senior member
Aug 7, 2007
http://www.3dcenter.org/news/amd-ra...-als-380x-fiji-als-390x-und-bermuda-als-395x2

Confirmed, it seems. The 380x is not a new card; the 390, 390x, and 390x2 kinda are.

I don't understand why a lower-end card gets GCN 1.2 while the 380s will be 1.1. That would be a fairly silly thing to do. The other thing I don't understand is why delay the 395x2 till winter. They could just push it out on water (people did buy the 295x2, and the 390s are already on water), charge about 90-95% (maybe more, depending on what specification they sell it at), and pretty much kill the competition in the high-end segment. Later on, with updates in HBM, they could do 8GB 390s and 16GB 395x2s, of course for slightly more money.
 

cbn

Lifer
Mar 27, 2009
I wonder if Trinidad (360X, etc.) will be fabbed at GlobalFoundries like Fiji?
 

_UP_

Member
Feb 17, 2013
So um, when do we get to see benchmarks? If the 380x really is a rebrand, the 50% performance increase is just BS, right?
I believe that the benchmarks, if real, were of the higher-end card. It's just that the earlier rumours conflicted on the name: some said the top card would be called the 380x and some the 390x. So if the benchmarks are real, and this rumour is true as well, they were describing the 390x.
That said, I wouldn't mind if they described the 380x and the 390x ends up much faster.
Well, one can dream, right?
 

MisterLilBig

Senior member
Apr 15, 2014
I don't understand why a lower-end card gets GCN 1.2 while the 380s will be 1.1. That would be a fairly silly thing to do.

You mean GCN 1.3? The 360X? They also did it with the 260X, probably for a cheap solution with all the new features. I actually like that.
 

S.H.O.D.A.N.

Senior member
Mar 22, 2014
Stop the press. The 395X2 is supposed to release before the 390X proper? What am I missing?

Edit: Scratch that, my German is really rusty. Fall/Winter, not Spring/Winter.
 

RussianSensation

Elite Member
Sep 5, 2003
http://www.3dcenter.org/news/amd-ra...-als-380x-fiji-als-390x-und-bermuda-als-395x2

Confirmed it seems. 380x is not a new card, 390, 390x, and 390x2 kinda are.

Nothing is confirmed. It's just 3DCenter regurgitating 6-month-old rumours they found all over the Internet. There is literally nothing in the OP or on 3DCenter with any real new info. Even the cut-down Fiji CE is made up.

As has been mentioned, Fiji Pro and Fiji XT are way too close in specs. Also, if everything besides the 390 series is a rebrand, how is AMD going to compete in the mobile dGPU space? They can't fit Tonga, Hawaii or Fiji in there. Are we supposed to believe AMD will continue the HD7970 --> 8970 --> M290X treadmill for another 1.5 years until 14nm for laptops? I find this difficult to accept.

Secondly, AMD already cannot compete against the newer 960/970/980 in the eyes of the average consumer. You think AMD would release higher-clocked 290/290X cards at 270-280W once more to compete against the 970/980? What would they price them at? After-market cool-and-quiet 290s already go for $240-260, the 290X already goes for $260-330, and most people still buy the 970/980 over them in the US!

If AMD even tries to rebadge the 290 series, all NV needs to do is drop the 970 to $259 and it would annihilate both rebadged 380/380X cards. Thirdly, do people honestly believe it would have taken AMD 1.5 years to release a 5-10% faster 290X with similar power usage?! They could have strapped an AIO CLC onto a 290X and gotten 5-10% more performance at 270-280W six months ago. While the 380/380X might be barely faster than a 290/290X, I don't for a second believe they will be 260-280W parts.

That's why these rumours don't add up logically. The reason the same info is being regurgitated by 10 sites is that they have nothing to leak. This is one of the most secretive AMD card launches of all time. Pretty much the only legitimate leak is the 4096 SP, 256 TMU, 4096-bit HBM, 550mm2 3xx-series card. Everything else just seems straight up made up.

I would even accept identical on-paper specs for the 380/380X versus the 290/290X, but it's hard to accept that AMD's next-gen mid-range card will use almost the same amount of power as its flagship 390/390X card. That sounds absurd!!
 
Silverforce11

Feb 19, 2009
They won't need a Hawaii-class chip as filler, because it's likely true there are going to be 3 variants of Fiji; a very large die should make for additional harvesting/binning. The lowest-end variant will be >980-class performance. It's going to come down to how effective they will be at lowering the TDP to fit it into a mid-range product.

The R290/X isn't competitive with the 970/980 due to the huge efficiency gap; whether we enthusiasts agree or not, that's the way the market sees it.

The R390X is the GM200 competitor: high price, high performance, high power. The big boys will make the 970/980 look like the true mid-range chips that they are, though this shouldn't be a surprise.
 

RussianSensation

Elite Member
Sep 5, 2003
The 970 loses out noticeably on efficiency to the full-fat 980. http://www.techpowerup.com/reviews/MSI/GTX_970_Gaming/28.html Color me unimpressed if all GCN 1.3 can do is match a less efficient and castrated GM204 GTX970 with their highly, highly touted HBM.

I know you are a smart guy, so you surely understand that GPU vs. GPU perf/watt is completely different from System 1 (GTX980) vs. System 2 (390X) perf/watt in games.

i7 4770 + 970 = 256W (BioShock Infinite) / 279W (Metro LL) / 267W (TR) = avg. 267W

Let's just go nuts here and say an i7 4770 + 390X will use 400W. That's 50% higher power usage for possibly 45-50% more performance. Seems awesome to me. If we just compare perf/watt on a card-to-card basis, you'd expect a 300W card to be 2x faster than a 150W card, which is just wishful thinking; as you know, that's not how it works.
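Here's that arithmetic as a quick sketch; the 390X figures are the speculative ones above, not measurements:

```python
# System-level perf/watt, using the numbers from this post.
game_loads_970_w = (256, 279, 267)            # measured i7 4770 + 970 draws
avg_970_system_w = sum(game_loads_970_w) / 3  # ~267 W
assumed_390x_system_w = 400.0                 # speculative "go nuts" figure
perf_970, perf_390x = 1.00, 1.475             # midpoint of the +45-50% rumor

ppw_970 = perf_970 / avg_970_system_w
ppw_390x = perf_390x / assumed_390x_system_w
print(f"970 system:  {ppw_970:.5f} perf/W")
print(f"390X system: {ppw_390x:.5f} perf/W ({ppw_390x / ppw_970 - 1:+.1%})")
# The two rigs land within ~2% of each other at the wall, even though the
# GPUs alone would look far apart on a card-to-card perf/watt chart.
```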

You guys need to stop using GPU 1 vs. GPU 2 perf/watt as some ultimate benchmark of efficiency. That metric is primarily useful for engineers comparing GPU architectures, and for gauging top performance in TDP-constrained environments (laptops). As far as desktop gaming goes, perf/watt of the overall system in games is BY FAR the more relevant metric for a gamer, since that's what we actually see in games. It's not like I can game on a 980 all by itself, can I?

As for your example of a 980: the MSI Gaming 980 peaks at 205W and the Gigabyte G1 980 at 204W. There are exceptions like the Asus Strix, which peaks at 174W. But let's just go with it anyway and say a 390X will use 300W, about 125W more than the Asus Strix 980. Guess what: even if the 390X is only 20% faster and costs $650, almost no brand-agnostic buyer will look twice at a $550 980. The flagship market has always been about performance first. What are the chances that someone legitimately looking to buy a flagship GM200/390X for $600+ doesn't have a $50 550W PSU? Almost zero.

Even if the 970/980 beat the 390/390X on a GPU perf/watt basis, it makes no difference for flagship gaming except to NV's marketing department, the NV faithful and the uninformed average consumers who skip AMD anyway. As long as the 390/390X are at least 15% faster than a 980 and aren't priced at some ludicrous amount, a $550 980 will be irrelevant in the eyes of brand-agnostic PC gamers. NV will need to drop the 980's price even if the 390 uses 125W more power, provided the 390 series' performance is actually good. I mean, a warrantied AIO CLC alone easily costs $50 in the marketplace. Even if the 390X were just 10% faster than a 980 at $599 with an AIO CLC, brand-agnostic gamers would already pick the 390X. As you can see, I've basically put the 980 in the best light possible in my comparison, but chances are the 390X will be more than 10% faster. Why? The 980's performance advantage over the 290X keeps getting smaller and smaller:

10% more at 1440P

[Chart: relative performance summary at 2560x1440]


8% at 4K

[Chart: relative performance summary at 3840x2160]


It's pretty much a 99.9% done deal that the 390X will mop the floor with a $550 980. If I were a 980 owner, I would be dumping it by June 2015 at the latest. This is going to be a repeat of the $650 780 --> $399 R9 290. Obviously NV will release GM200, so I am not even remotely worried about them, but we are less than 6 months away from the 980 officially becoming mid-range.
 
Silverforce11

Feb 19, 2009
I know you are a smart guy, so you surely understand that GPU vs. GPU perf/watt is completely different from System 1 (GTX980) vs. System 2 (390X) perf/watt in games... If we just compare perf/watt on a card-to-card basis, you'd expect a 300W card to be 2x faster than a 150W card, which is just wishful thinking.

While your points are valid, it's still interesting to compare GPUs alone on efficiency and power use, just to compare architectures. Ultimately it matters less than total system power, because total system power is what costs people money in electricity.
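To put a rough number on that electricity cost (the gaming hours and power rate here are assumptions):

```python
# Yearly cost of the extra draw of the speculative 390X rig vs the 970 rig.
extra_watts = 400 - 267      # assumed 390X system minus measured 970 system
hours_per_day = 3            # assumed daily gaming time
usd_per_kwh = 0.12           # assumed electricity rate

extra_kwh_per_year = extra_watts / 1000 * hours_per_day * 365
print(f"~{extra_kwh_per_year:.0f} kWh/yr, about "
      f"${extra_kwh_per_year * usd_per_kwh:.0f}/yr extra")
# ~146 kWh/yr, about $17/yr: real money, but small next to a $550+ card.
```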

I was just going with the leaked numbers: the R390X is directly more efficient than the 970, GPU vs. GPU, and close to the 980, which is actually very good considering it's a monster die made for DP compute and tweaked for performance. I mean, if it's +50% in efficiency and performance over the R290X, that is VERY GOOD.

It's also what I would expect given its large die with Tonga+ architecture and HBM for a better memory subsystem and latency. If it's not around that level of performance, it's not going to be able to compete with the "full-fat" GM200.
 

RussianSensation

Elite Member
Sep 5, 2003
Ultimately, when you fire up a next-gen game and get 37 fps on a 970, 39 fps on a 290X and 43 fps on a 980, but 50-60 fps on a GM200/380X, no one is going to give a damn about power usage in an overclocked i5/i7 rig. Right now perf/watt is all the rage, but as soon as the 970/290X/980 are mid-range, it will be quickly forgotten. As far as I am concerned, with the 290X going for $260-300, all 3 of these cards are already mid-range. It's almost irrelevant what the 980 costs now, because it can hardly provide a next-gen experience compared to a 290X at high resolutions. What we need is more performance.
 

DiogoDX

Senior member
Oct 11, 2012
A rebranded 290X with GCN 1.1 makes no sense unless AMD doesn't want to sell mid-range cards.

Tonga already has better tessellation performance than the 290X, plus VSR 4K, Eyefinity PLP and VCE 3.0 with 4K support. But it's still missing HDMI 2.0 and some of the hardware DX12 features GM204 has (DirectX 11.3).

If Fiji is a high-end 4096-SP HBM chip, my guess is that the mid-high part will be a 3072-SP card on a 384-bit bus with 6GB of GDDR5, plus the architectural improvements of Tonga and the DX12 hardware features.
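For reference, the bandwidth math behind those two configurations; the per-pin data rates are assumed, typical-for-the-time values:

```python
# Peak memory bandwidth: bus width (bits) / 8 * per-pin data rate (Gbps).
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

# Assumed rates: ~6 Gbps GDDR5, ~1 Gbps per pin for first-gen HBM.
print(f"384-bit GDDR5 @ 6 Gbps: {bandwidth_gb_s(384, 6.0):.0f} GB/s")
print(f"4096-bit HBM @ 1 Gbps:  {bandwidth_gb_s(4096, 1.0):.0f} GB/s")
# 288 GB/s vs 512 GB/s: the rumored HBM flagship would have nearly double
# the bandwidth of the guessed 384-bit GDDR5 mid-high card.
```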
 

PPB

Golden Member
Jul 5, 2013
GCN 1.0, 1.1, 1.2 and 1.3 all together in the 3xx series. Jeeez AMD, just how much Tahiti/Hawaii/Pitcairn inventory is still out there to keep the rebranding going? I was hoping for a solid, all-GCN 1.2/1.3 lineup. No wonder they are falling behind in perf/watt: they are still trying to sell us 3-year-old tech as new releases!
 

Headfoot

Diamond Member
Feb 28, 2008
If we're going to resort to wishy-washy "the market" arguments, then at least let's stop acting like anyone in "the market" knows what the power consumption of their chip is, let alone cares about it...