[VC][TT] - [Rumor] Radeon Rx 300: Bermuda, Fiji, Grenada, Tonga and Trinidad


Paul98

Diamond Member
Jan 31, 2010
3,732
199
106
I already gave the evidence here, I can't help it if some people are just blinded by their AMD fanboyism and can't accept the truth.

http://www.anandtech.com/show/8544/microsoft-details-direct3d-113-12-new-features

So you have nothing other than "you don't know"... as expected.

How about waiting till the specs come out and we know what supports what before spreading misinformation? I am sorry for asking for real information when you clearly don't have any. But then again, with a name like nvgpu I expect nothing less than trying to derail this thread, since it's supposed to be about the next-gen AMD cards.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Just go read what those "features" are in the first place :sneaky:

Here zlatan explains some of them: http://forums.anandtech.com/showpost.php?p=37102322&postcount=50

On beyond3D, there's also a lot of information about this.

He doesn't explain how GCN supports these features in hardware. Emulating CR through software is not the same. Here, from nVidia:
Is it possible to achieve Conservative Raster without HW support?

Yes, it is indeed possible to do this, and there is a very good article describing it here.
Essentially it involves using the Geometry Shader stage to either:
a) Add an apron of triangles around the main primitive


b) Enlarge the main primitive


However both approaches add performance overhead, and as such usage of conservative rasterization in real time graphics has been pretty limited so far.
https://developer.nvidia.com/content/dont-be-conservative-conservative-rasterization

Microsoft clearly wants hardware support.
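
For what it's worth, "hardware support" here just means the driver has to report a conservative rasterization tier to the D3D12 runtime. A rough sketch of the cap check, going by the public D3D12 header names as I understand them (an assumption on my part, untested):

Code:
#include <d3d12.h>

// Sketch (untested): does this GPU/driver expose rasterizer-level CR?
bool HasHardwareConservativeRaster(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &opts, sizeof(opts))))
        return false;

    // NOT_SUPPORTED means you are back to geometry-shader/compute emulation.
    return opts.ConservativeRasterizationTier !=
           D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED;
}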
 

dacostafilipe

Senior member
Oct 10, 2013
804
305
136
Emulating CR through software is not the same

There's nothing to "emulate", you can write a compute kernel for this.

Even Nvidia knows it because it states that Kepler and console hardware support VXGI. And VXGI uses conservative rasterization.

We are way past T&L times, guys! Our GPUs are highly flexible and programmable.
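
To illustrate what a shader-side fallback actually has to do per pixel, here is my own CPU-side sketch (purely illustrative, not anyone's actual kernel): instead of testing the pixel centre against the triangle's edge equations, you push each edge outward by the pixel's half-extent, so any pixel whose square touches the triangle counts as covered.

Code:
#include <cmath>

// Sketch: overestimating conservative coverage of one pixel.
// Triangle vertices are in pixel space with counter-clockwise winding;
// (px, py) is the pixel centre and 0.5 is the pixel's half-extent.
static bool EdgeCovers(float ax, float ay, float bx, float by,
                       float px, float py, float half)
{
    // Edge function: positive on the interior side for a CCW triangle.
    float A = ay - by;
    float B = bx - ax;
    float e = A * (px - ax) + B * (py - ay);
    // Evaluate at the pixel corner most favourable to this edge: if even
    // that corner is outside, the pixel square cannot be covered.
    return e + half * (std::fabs(A) + std::fabs(B)) >= 0.0f;
}

bool ConservativelyCovered(const float v0[2], const float v1[2], const float v2[2],
                           float px, float py)
{
    const float half = 0.5f;
    return EdgeCovers(v0[0], v0[1], v1[0], v1[1], px, py, half) &&
           EdgeCovers(v1[0], v1[1], v2[0], v2[1], px, py, half) &&
           EdgeCovers(v2[0], v2[1], v0[0], v0[1], px, py, half);
}

The question in this thread is only whether doing that in shaders is fast enough compared to the rasterizer doing it in fixed-function hardware.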
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
We are way past T&L times, guys! Our GPUs are highly flexible and programmable.

Yet plenty of features are missing just in DX11 on both. Why aren't they added if it's so easy?

Is it because emulating them gives turtle speeds, or because it can't be done? Either way, that makes it meaningless for practical application.

Maybe GCN 1.3 will add some of the missing features. Just like Maxwell v2 did.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
There's nothing to "emulate", you can write a compute kernel for this.

Even Nvidia knows it because it states that Kepler and console hardware support VXGI. And VXGI uses conservative rasterization.

We are way past T&L times, guys! Our GPUs are highly flexible and programmable.

What do you think "writing" it involves?
With Maxwell v2 you flip the rasterizers into conservative mode.
Without that ability in the rasterizers, it is necessary to emulate it in the shaders by rewriting the software.

It should be clear that Microsoft is asking for hardware support within the rasterizers, which are dedicated units.

Note that conservative rasterization can also be achieved on older hardware using a combination of a special geometry and pixel shader as described in the GPU Gems 2 article below, but in many cases this incurs considerable performance overhead. Hardware conservative rasterization has roughly the same performance as normal rasterization (it generates more fragments, and so might be slightly slower).
http://docs.nvidia.com/gameworks/in...l_samples/conservativerasterizationsample.htm
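
For context, once the hardware tier is there, turning it on should be nothing more than a rasterizer-state field in the pipeline description; a rough sketch based on the public D3D12 header names (again an assumption on my part, untested):

Code:
#include <d3d12.h>

// Sketch (untested): ask the fixed-function rasterizer for conservative mode
// when building a graphics PSO. Shaders, root signature, RT formats etc. are
// assumed to be filled in elsewhere in psoDesc.
void EnableConservativeRaster(D3D12_GRAPHICS_PIPELINE_STATE_DESC& psoDesc)
{
    psoDesc.RasterizerState.ConservativeRaster =
        D3D12_CONSERVATIVE_RASTERIZATION_MODE_ON;   // default is _OFF
}

Presumably the runtime simply refuses to create such a PSO on hardware that reports no CR tier, which is exactly why the D3D11-era fallback has to go through the geometry shader instead.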
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
He doesn't explain how GCN supports these features in hardware. Emulating CR through software is not the same. Here, from nVidia:

https://developer.nvidia.com/content/dont-be-conservative-conservative-rasterization

Microsoft clearly wants hardware support.

Not once has it been said anywhere that GCN of any revision cannot do CR, or any of the other 11_3 or 12 features.

You only see Nvidia saying it is "new" and offering ways to use it on top of the current D3D11 feature set with custom API calls. That says nothing about what other manufacturers like AMD can provide once the API has native support for these features.


We will likely find out during GDC, so everyone should stop arguing about what WE DO NOT KNOW until that point in time.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Not once has it been said anywhere that GCN of any revision cannot do CR, or any of the other 11_3 or 12 features.

CR is a concept over two decades old.
Every modern architecture can support it through shaders.

What is new is actual hardware support for conservative rasterization within the rasterizer, something Microsoft has made mandatory for DX11.3 and DX12 as a hardware feature.

And this is the only information about hardware CR support from AMD:
http://videocardz.com/51021/amd-gcn-update-iceland-tonga-hawaii-xtx
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
DX12 has not even been finalized. I would expect that NO CURRENT GPU on the market has FULL HARDWARE SUPPORT for ALL the DX12 features.
 

caswow

Senior member
Sep 18, 2013
525
136
116
Will Nvidia "support" DX12 the way it supported DX10.1 and 11.x back then? I wouldn't want that kind of "support" :thumbsdown:
 

garagisti

Senior member
Aug 7, 2007
592
7
81
GCN 1.0-1.2 certainly does not support those features, otherwise they wouldn't have been called new.

Maxwell was designed for DX12 with top-to-bottom support; GM200, GM204 and GM206 support them in hardware.
So you know more than Nvidia?
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
It is clear.
Making CR a feature of the new DX API only makes sense with actual dedicated hardware support. Otherwise DX has effectively supported it since DX8 and the introduction of programmable shaders (vertex shaders).
 

Rvenger

Elite Member / Super Moderator / Video Cards
Apr 6, 2004
6,283
5
81
Stop the arguing now, or I will stop it for you. Any off-topic posts will be removed as well.


-Rvenger
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Remember how consoles hold everything back? Remember how we were stuck with DX9 for most games until the new console gen? If a feature won't run (well) on console, few developers are going to use it for the PC version unless they get a bucket of cash or something to do so. Even Crytek has shied away from "melt your PC" specs in practice.

Current consoles use DX11-era hardware.

Think about it.

I would not worry about the more exotic features of DX12 until the next console generation comes out in, what, 8 years or so?
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Remember how consoles hold everything back? Remember how we were stuck with DX9 for most games until the new console gen? If a feature won't run (well) on console, few developers are going to use it for the PC version unless they get a bucket of cash or something to do so. Even Crytek has shied away from "melt your PC" specs in practice.

Current consoles use DX11-era hardware.

Think about it.

I would not worry about the more exotic features of DX12 until the next console generation comes out in, what, 8 years or so?

To be fair, the Xbox One will be getting DX12 and already uses features from D3D11.3, which (as far as we know thus far) are the same ones exposed in D3D12 at a lower level of abstraction. And the Xbox One already offers lower abstraction than standard DX11.

I doubt there will be much missing from the actual hardware pipeline of GCN. Perhaps GCN 1.1 will be missing Conservative Raster at the hardware level, perhaps not... with Microsoft so heavily involved in designing the API and hardware of the XBONE, I feel like it would be weird for that GPU to be missing hardware features of the very API they were planning to bring to the console all along. And that GPU is GCN 1.0.

It may not be there at all, but that just seems to be a weird thing to leave out of your own console. Then again, Microsoft has been known to be a little boneheaded at times, especially with regards to the XBONE. That very question also remains unanswered. I very much believe we'll find out one way or the other at GDC the first week of March. I'm excited to see what is coming down the line from AMD, Nvidia, and Microsoft.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
While I will not denounce the rumour that the R9 380/380X could be close in performance or have similar specs on paper to the 290/290X, I personally cannot see how AMD spent 1.5 years developing only 2 new chips, with everything else being straight-up Tonga and Hawaii rebadges. Today a 290/290X cannot gain market share or desirability despite generally selling for $250-350 vs. $330-550 for the 970/980. Similarly, a 285 2GB doesn't seem to sell well at $180-200 either. What would it accomplish to relaunch the 285/285 unlocked, only as fast as a 280X/290/290X, as the 370/370X/380/380X at $179/$199/$249/$299? NV could just lower prices $20-30 on the 960/970 and AMD's entire 370/370X/380/380X line-up is neutralized.

If it really were the case that the 370/370X/380/380X are just rebadges, AMD could have launched all those products now. Why wait another 3-4 months if all you are going to do is rebadge 90% of your cards? Why would you need 1.5 years to do that? You would just stop printing 285/290/290X boxes as of December 31st and call those cards the 370/380/380X.

AMD could have easily just added $50 to 290X's price and launched it as a 380X aka HIS IceQ 290X:
http://www.techspot.com/review/892-his-r9-290x-hybrid-iceq/

...but they didn't.

The 6870/6970 were based on fundamentals set out by the 5870. The 290X was based on the fundamentals set out by the 7970. The 380/380X could be based on Hawaii, but that does not automatically mean a rebadge with the 390/390X being the only cards with an ounce of new in them. It simply doesn't make sense that AMD would spend 1.5 years working on just 1 chip, and that chip only being a high-end one. If true, that would imply no new products for laptops either, because you can't fit a 250-280W Hawaii into a laptop; and if the rumours are true, such a chip would become next gen's mid-range 380/380X. I won't believe such rumours unless AMD improved Hawaii's perf/watt.

Let's go back in time. If someone saw a rumour that the HD4870 would be rebadged as a 5770, it would appear so based on the 800 SPs, 40 TMUs and 16 ROPs of the 5770. With those 3 details and 850MHz clocks vs. 750MHz for the 4870, one would be almost assured that the 5770 was just a rebadged 4870 with a slight bump in clocks. If someone just gossiped the number of SPs, TMUs and ROPs, the products would appear identical! Yet these were vastly different chips in terms of features and perf/watt:
http://www.anandtech.com/show/2856

All these sites just use 1-2 sources and then spam rumours at different dates. For example, Sweclockers uses some random source for their article. There is no evidence whatsoever that their source is credible. Then other sites like Fudzilla, TPU, Videocardz, etc. create articles citing Sweclockers. They have done zero due diligence on the reliability of Sweclockers' original source, and yet they start posting these rumours while all but spinning them as facts. Remember TPU "leaked" the 980 as a 3200 CUDA core card, and shockingly this BS rumour was spread on August 1st, 2014, less than 2 months before the official launch!
http://www.techpowerup.com/mobile/203661/nvidia-to-launch-geforce-gtx-880-in-september.html

Imo most of these sites have no info. The minute one of them leaked that everything besides the 390 is a rebadge, everyone else posted the exact same article, slightly paraphrased, shortly after. They just regurgitate rumours from 1-2 sites for clicks.
 

96Firebird

Diamond Member
Nov 8, 2010
5,740
334
126
I just don't see how they could have that large a gap between the supposed 290X/380X and the 390(X) that is rumored to be 40% faster than the 980.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I just don't see how they could have that large a gap between the supposed 290X/380X and the 390(X) that is rumored to be 40% faster than the 980.

40% faster doesn't add up either. If 390X is 50% faster than a 290X, it's ~ 30-36% faster than a 980, less at 1080p. But you make a good point that if 390X were to be at least 50% faster than a 290X, there has to be more than just a 390 non-X to fill in such a large gap in performance. Also, if they literally just rebadge 285/285 unlocked/280/290/290X, NV drops 970 to $249, game over for 4 of their cards after 1.5 years of "putting finishing touches on 300 series." That would be the biggest flop ever if they straight rebadge everything sub-$450.
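
(Rough math, assuming a 980 is ~10-15% ahead of a 290X at 1440p: 1.50/1.15 ≈ 1.30 and 1.50/1.10 ≈ 1.36, hence the ~30-36% range.)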

Either the 390X is way less than 50% faster than a 290X, and/or the 380/380X will be enhanced Hawaii chips with improved perf/watt. AMD won't be able to compete with a 270W 380X unless it beats a 980, or unless they price a 290X rebadge at $229 or something ridiculous, because right now anecdotal evidence suggests that gamers are not buying a $280-300 290X over a 970/980.
 
Feb 19, 2009
10,457
10
76
My prediction is that such a large chip (the first for AMD) will be harvested twice for yields, for 2 additional lower SKUs.

If the R390X is 50% faster than the R290X, the R390 would be ~40% faster and the R385X ~25% faster.

25% gap is fine between top & middle.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Let's go back in time. If someone saw a rumour that the HD4870 would be rebadged as a 5770, it would appear so based on the 800 SPs, 40 TMUs and 16 ROPs of the 5770. With those 3 details and 850MHz clocks vs. 750MHz for the 4870, one would be almost assured that the 5770 was just a rebadged 4870 with a slight bump in clocks. If someone just gossiped the number of SPs, TMUs and ROPs, the products would appear identical! Yet these were vastly different chips in terms of features and perf/watt:
http://www.anandtech.com/show/2856

IIRC, the HD5770 was basically an updated HD4890 (not 4870) with DX11 features bolted on. The perf/watt gain was largely due to the node shrink from 55nm to 40nm.

I'm not commenting on the rest of your post; I just thought the above part stuck out.

I lost interest in video card upgrades because I typically only upgrade when there's a node shrink and there hasn't been one in forever.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Blastingcap, even if you argue that the 5770 was largely derived from a 4870/4890, the feature set and perf/watt were worlds apart. My point is that a 380X could very well be a 2816 SP, 176 TMU, 64 ROP part; this could mean a 290X rebadge, or something entirely different. Remember how Maxwell brought a ~35% increase in performance per CUDA core? Before we knew this, we would have thought a 980 would get run over by a 780Ti based on CUDA cores, memory bandwidth and TMUs.

You say you won't care to upgrade given the lack of node shrinks, but if, say, GM200 is 40% faster than a 980, it would be over 50% faster than your card. You should ask yourself what performance increase at what price you actually want. If you want 75-100% faster in a single-chip card, it sounds like you'll need to wait for 14nm.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
My guess is that the rebrands are coming because AMD can increase the prices again. Just like they did with the 7950 and 280.
 

SimianR

Senior member
Mar 10, 2011
609
16
81
I thought the only reason they reintroduced the 7950 (as the 280) was that at the time they still couldn't meet demand for their GPUs during the bitcoin craze. I could be wrong, though.