[VC][TT] - [Rumor] Radeon Rx 300: Bermuda, Fiji, Grenada, Tonga and Trinidad

Page 16 - AnandTech Forums

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
I thought it was frequency scaling and not power scaling. GPUs don't clock as high as CPUs...

It's the same for GPUs.

HDL, for that matter, isn't something new. ARM etc. have been using it for ages. There is a reason why AMD first started using it in sub-35W chips.
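The frequency-versus-power point being argued here can be illustrated with the standard first-order model of dynamic CMOS power, P ≈ C·V²·f: hitting a higher clock target usually requires extra voltage to close timing, so power rises much faster than frequency. A minimal sketch (all numbers are made-up illustrations, not real silicon data):

```python
# First-order dynamic CMOS power model: P = C * V^2 * f.
# Chasing a higher clock usually requires raising voltage,
# so power grows much faster than frequency does.

def dynamic_power(capacitance, voltage, freq_ghz):
    """Relative dynamic power from the C * V^2 * f model."""
    return capacitance * voltage ** 2 * freq_ghz

# Two hypothetical design points with identical logic (same C);
# the faster one needs a voltage bump to reach its clock target.
low_clock = dynamic_power(capacitance=1.0, voltage=0.90, freq_ghz=1.0)
high_clock = dynamic_power(capacitance=1.0, voltage=1.15, freq_ghz=1.4)

print(f"frequency ratio: {1.4:.2f}x")                     # 1.40x
print(f"power ratio:     {high_clock / low_clock:.2f}x")  # ~2.29x
```

A 40% clock bump costing well over 2x the power is why a library tuned for density and low voltage (HDL) and a design tuned for high clocks pull in opposite directions.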

An example:
[Image: CS794_Fig2.jpg]
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
You forgot to add that any benefit besides area is pretty much gone at 25W+ with HDL, and that power consumption gets very bad at higher power draw, while you can't reach the same frequency targets as without HDL. There is no magic saving from HDL for performance GPUs. So if that's your argument for AMD solving the performance/watt issues for GPUs, then you couldn't be more wrong.

Using HDL instantly killed any hopes of Carrizo on the desktop, in favour of a Kaveri refresh, for the same reasons. It would have been a performance regression.

My argument is that people who think only NV's engineers are special enough to improve perf/watt on the same node, while no one else in the world can, are incorrect. Carrizo proves that AMD can achieve improvements in many key areas on the same node in terms of perf/watt and die size. Since the first day you joined AT, there has never been one optimistic comment from you regarding anything from AMD, so it's impossible to take any of your opinions on AMD as anything other than doom and gloom.

AMD doesn't need to match or beat Maxwell's perf/watt to make R9 300 a great series. This idea that has been promoted since 970/980 dropped that AMD is done for in GPUs and will never come back will be proven wrong to the pessimists.

Many people here have a tendency to forget the history: HD4000-6000 were actually more efficient than NV's products, and when the R9 290 dropped at $399 it made a $650 780 look horrendously overpriced. History has a tendency of repeating itself. I am sure many counted AMD out when the 2900 series flopped, and this same crazy pessimism is being regurgitated. Less than 2 quarters are left until a $550 980 will seem terribly overpriced; it will get run over by a $550 300 series card.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
My argument is that people who think only NV's engineers are special enough to improve perf/watt on the same node, while no one else in the world can, are incorrect. Carrizo proves that AMD can achieve improvements in many key areas on the same node in terms of perf/watt and die size. Since the first day you joined AT, there has never been one optimistic comment from you regarding anything from AMD, so it's impossible to take any of your opinions on AMD as anything other than doom and gloom.

AMD doesn't need to match or beat Maxwell's perf/watt to make R9 300 a great series. This idea that has been promoted since 970/980 dropped that AMD is done for in GPUs and will never come back will be proven wrong to the pessimists.

*more price ramblings*

Carrizo uses completely different methods, and they can only be used for low-power devices, unlike nVidia with its entirely new uarch. Using GCN 1.2 in Carrizo is the main benefit there. But you couldn't be more wrong if you think using HDL is any kind of advancement to be used in performance chips. The Kaveri refresh is a direct result of the disadvantages of HDL. nVidia, for that matter, could also mix Maxwell and HDL and keep its lead in low-power devices by comparison.

AMD still desperately needs a new uarch to compete in performance/watt. The constant, massive market-share-bleed cycle they are in now is very dangerous.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Pointless to argue with someone who puts words in my mouth about things I didn't say. You can't seem to comprehend the idea of AMD improving performance and perf/watt on 28nm. Others know it will happen. Whether you admit it or not, AMD will improve perf/mm2, perf/watt and absolute performance enough that your 980 will be moved to mid-range status by AMD in less than 5 months.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Pointless to argue with someone who puts words in my mouth about things I didn't say. You can't seem to comprehend the idea of AMD improving performance and perf/watt on 28nm. Others know it will happen. Whether you admit it or not, your 980 will be moved to mid-range status by AMD in less than 5 months.

It's not me who is confusing low-power HDL designs with performance GPUs.

Even AMD indirectly says it can't be used for performance devices. Unless you've got another excuse for no desktop Carrizo and a Kaveri refresh instead. Plus AMD's own graphs comparing power/performance between Kaveri and Carrizo.

It's a new uarch that's needed.

Oh, and in 5 months my 980 will be almost a year old. And I never cared what it will be moved to or whatever price it will be at the time. But again, I am not desperate for some magic to happen.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Many people here have a tendency to forget the history: HD4000-6000 were actually more efficient than NV's products, and when the R9 290 dropped at $399 it made a $650 780 look horrendously overpriced. History has a tendency of repeating itself. I am sure many counted AMD out when the 2900 series flopped, and this same crazy pessimism is being regurgitated. Less than 2 quarters are left until a $550 980 will seem terribly overpriced; it will get run over by a $550 300 series card.

The difference is that was then, this is now. ATI used to have the personnel but after AMD's purchase of ATI, there have been so many rounds of layoffs and restructuring, I'm not sure how competitive the remaining talent is. If you were a good engineer why should you stay at AMD when you could leave for companies like NV, Intel, Qualcomm, Apple, etc.? I'm sure AMD still has some talent left, but I suspect it's diminished since ATI's heyday. Also, in the past AMD has used node shrinks and new memory to get its advantage, like HD4xxx GDDR5 and hopping onto new nodes faster than NV. What we're seeing is that if NV and AMD are on equal node and memory, NV is a very tough competitor.

24% market share is just plain bad. That's not that much more than AMD's x86 CPU market share vs Intel, and we all know how that turned out (other than a brief period when Intel shot itself in the foot with Netburst).

I don't think anyone is saying that AMD can't improve their efficiency. The question is whether they can improve enough, fast enough, to matter. Efficiency really is king, because mobile is king. Desktop discrete GPUs are transitioning from being the dog to being the tail (Quadro/Tesla excepted).

For the sake of consumers, I hope AMD remains competitive. As Jaydip said above, I don't want to see what happened in the x86 CPU space happen to the discrete desktop space.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
626
126
AMD designed a brand new Tonga chip, only used in R9 285 (desktop) and the Apple iMacs. Why would AMD introduce such dramatic improvements in Tonga in terms of colour & memory bandwidth compression, tessellation, pixel fill-rate efficiency and then do absolutely nothing with those architectural improvements for R9 370/380 series?
Because that requires more SKUs. I don't want to sound overly negative, but it looks to me like AMD simply does not have the resources to bring out a new line of GPUs. I honestly hope I'm wrong, and will be glad to be. But look at the pattern of the last few generations: we went from a top-to-bottom line of new chips (Evergreen) to AMD having mostly re-badges.
 

DownTheSky

Senior member
Apr 7, 2013
800
167
116
Since GPUs are highly parallel, is it really that costly to cut down an existing product and release smaller ones (with fewer units)?
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
AMD has released all-new top-to-bottom stacks. They've done it months and months ahead of nVidia. They've had nVidia beat at every price point with new tech while nVidia kept peddling old stuff. It didn't really matter. nVidia was able to hang in with just a single new chip (GF100), and got right back to dominating with a second (GF104), even though it was released ~10 months later than AMD's mid-range card. Efficiency didn't matter either. Their cards outsold AMD's.
 

SimianR

Senior member
Mar 10, 2011
609
16
81
That's the thing: even when AMD GPUs were more efficient (less power-hungry) than their NVIDIA counterparts, they still weren't selling as well. Brand perception/marketing is a pretty powerful thing. On the desktop side, I'm not sure having Maxwell-like efficiency is going to do AMD a lot of good when you have a very loyal group of customers, approaching Apple-like status, who it seems would never have considered AMD anyway. AMD definitely needs to improve perf/W on the mobile side though; that's where grabbing big OEM sales will matter.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
But you couldn't be more wrong if you think using HDL is any kind of advancement to be used in performance chips.

My post had little to do with HDL. It's about engineers taking 1.5-2 years to re-evaluate their designs and squeeze more performance per mm2 from the same node by enhancing the existing architectural foundation.

It's like everything AMD has done with Tonga, as explained by reviewers, flew completely over your head. The problem with Tonga is that it doesn't have enough TMUs and SPs to showcase these advantages in a general sense. However, there are games with bottlenecks elsewhere, and in those Tonga beats an R9 280X. This would not happen under any circumstances if Tonga's architectural improvements weren't real.

R9 285 beats HD7970 by a whopping 20% in Bioshock Infinite.

[Chart: Bioshock Infinite benchmark, 2560x1600]


Why is this important? Because on 'paper specs' 7970 matches or beats R9 285 in basically everything.

Why can't AMD's engineers take these improvements already in the R9 285 and incorporate them into the R9 380/390 series? These improvements will shine because the R9 380/390 series will have more SPs and TMUs, allowing the architectural improvements in Tonga to stretch their wings.
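The "colour & memory bandwidth compression" discussed in this thread is GCN 1.2's lossless delta colour compression. As a toy illustration only (my own sketch, not AMD's actual hardware algorithm): neighbouring framebuffer pixels are usually similar, so a block can be stored as one base value plus small deltas that fit in fewer bits, cutting memory traffic.

```python
# Toy delta-compression sketch (NOT AMD's actual algorithm):
# store the first pixel of a block in full, then only the small
# differences for the rest. Smooth image regions compress well.

def delta_compress(block):
    """Split a list of 8-bit channel values into (base, deltas)."""
    base = block[0]
    return base, [p - base for p in block[1:]]

def delta_decompress(base, deltas):
    """Rebuild the original block exactly (lossless)."""
    return [base] + [base + d for d in deltas]

# A smooth gradient, typical of real framebuffer content.
block = [200, 201, 201, 203, 204, 204, 206, 207]
base, deltas = delta_compress(block)

assert delta_decompress(base, deltas) == block  # lossless round-trip
# Every delta here fits in 4 bits instead of 8, so this block could
# be stored at roughly half size - saved space is saved bandwidth.
assert all(-8 <= d <= 7 for d in deltas)
```

The win is bandwidth, not capacity: the same memory bus moves more pixels per second, which is exactly the kind of same-node efficiency gain being argued about here.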

AMD still desperately needs a new uarch to compete in performance/watt. The constant, massive market-share-bleed cycle they are in now is very dangerous.

The statement would only be true if you had R9 300 series benchmarks at hand for the entire line-up.

People seem to not understand this. The R9 200 series is a GTX 700 competitor that is forced to compete against Maxwell because AMD is late. However, if we look at perf/watt, perf/mm2 and compute, the R9 200 series is very competitive against the NV architecture it was always meant to compete against.

This is akin to reading a review of an all-new 2015 Ford Mustang beating a 6-7 year old Chevrolet Camaro and Dodge Challenger, calling Chevy and Dodge a write-off, and insinuating that they'll never make a new Camaro or Challenger that will beat the new Mustang. All credit is due to NV for beating AMD to launch with a more modern architecture, but it's unbelievable how people use AMD's outdated GCN 1.0/1.1 R9 280X/290X parts as some pessimistic gauge of where the R9 300 will end up in perf/mm2 or perf/watt. Based on these comments, you'd think GPUs take only 12-15 months to design or something.

The difference is that was then, this is now. ATI used to have the personnel but after AMD's purchase of ATI, there have been so many rounds of layoffs and restructuring, I'm not sure how competitive the remaining talent is.

GPUs take 3-4 years to design. This isn't a 12-to-15-month deal. Do you honestly think AMD's engineers just started working on the R9 300 series one day after the R9 290 cards launched? That's not how it works. It's not like 5 guys at AMD sat down on December 1, 2013 and said, hmmm... forget GDDR5, let's design the R9 390 with HBM. These types of decisions require validation and a thought process that takes months.

Chances are AMD used Tonga as a test-bed for, let's call it, Team 1 to improve some aspects of GCN that AMD had noticed were lacking a long time ago. Another team was working on improving other aspects of GCN, but AMD couldn't pull all of these changes together at once in the 285. With the R9 300 series we should see the best of Tonga plus whatever else AMD's engineers were able to add in the 12-15 months leading up to manufacturing. Well, the designs are already finished, actually. The last 12-15 months could also have been used to focus on transistor-level optimization to reduce leakage, introduce finer-grained voltage control and GPU boost, etc.

AMD still has some talent left, but I suspect it's diminished since ATI's heyday. Also, in the past AMD has used node shrinks and new memory to get its advantage, like HD4xxx GDDR5 and hopping onto new nodes faster than NV. What we're seeing is that if NV and AMD are on equal node and memory, NV is a very tough competitor.

Were the HD7000/R9 290 series not competitive with the GTX 600/700 cards? This forum is making the critical mistake of comparing a brand-new modern architecture to an outdated line-up of AMD parts, and assuming that AMD has nothing for the R9 300 cards that's monumentally better than the 290X. This is akin to writing off NV when the HD5850/5870 launched and the GTX 275/280/285 got crushed. While AMD isn't going to bring a fundamentally different architecture like Maxwell, that's not to say they can't significantly improve upon the R9 200 series. I don't know why some of you who have followed GPUs for 15-20 years became so pessimistic lately. ATI/AMD and NV have traded blows for years. If the R9 300 series flops, then we have a real reason to worry. For now, the biggest issue is the delay of the R9 300 series, but making premature doom-and-gloom remarks without knowing the tech that underlies the R9 380X/390X seems odd to me.

24% market share is just plain bad. That's not that much more than AMD's x86 CPU market share vs Intel, and we all know how that turned out (other than a brief period when Intel shot itself in the foot with Netburst).

It's not even remotely comparable. The reasons why AMD's CPUs and AMD's GPUs are losing market share are different. On the CPU side, Intel has fundamental perf/mm2, IPC and perf/watt advantages due to a superior architecture and a manufacturing advantage. NV has no such massive architectural advantages to speak of in perf/watt, perf/mm2 or IPC when comparing like-for-like architectures (HD7000 vs. 600, R9 290 vs. 700 series). Right now it looks like NV is miles ahead because we are comparing Maxwell to the 200 series, but its true competitor is the 300 series. The only way we can say just how much better Maxwell is, is to compare it against its like competitor - that's the 300 series, not the 200.

As far as market share goes, there is way more to the story than perf/watt. AMD didn't gain market share from the HD4870 to the HD7970 despite leading in perf/mm2 and perf/watt throughout the entire HD4000-6000 generations.

I don't think anyone is saying that AMD can't improve their efficiency. The question is whether they can improve enough, fast enough, to matter. Efficiency really is king, because mobile is king. Desktop discrete GPUs are transitioning from being the dog to being the tail (Quadro/Tesla excepted).

For the sake of consumers, I hope AMD remains competitive. As Jaydip said above, I don't want to see what happened in the x86 CPU space happen to the discrete desktop space.

Consumers have already shown since the HD4000 series that they will pay more for NV products that are inferior in perf/watt and price/performance. Therefore, even if the R9 300 series bests NV in every metric possible, AMD won't get to 50/50 market share. The issues with AMD's GPU products lie beyond today's technical inferiority compared to Maxwell cards: poor OEM customer relationships, a tarnished brand and driver-issue image, poor marketing, etc. Even if the R9 390/390X is spectacular, there are already areas where reviewers and the mainstream segment could throw the card under the bus at will - spinning water cooling as a requirement while NV gets away with air cooling, focusing on the 300W TDP alone without considering the gaming system's overall perf/watt, 4GB vs. 6GB of VRAM.

I think what AMD needs to do is get design wins in the low- and mid-range desktop and laptop sectors, and that's only possible with significantly improved parts. Focusing on the 390/390X only is akin to spending hundreds of millions on <5% of the gaming market. AMD's GPU division can't be that incompetent - it would mean no new laptop chips from the HD7970M in May 2012 until late Fall 2016. It's amazing some people actually believe that. :awe:

Because that requires more SKUs. I don't want to sound overly negative, but it looks to me like AMD simply does not have the resources to bring out a new line of GPUs. I honestly hope I'm wrong, and will be glad to be. But look at the pattern of the last few generations: we went from a top-to-bottom line of new chips (Evergreen) to AMD having mostly re-badges.

Based on what? I could just as easily propose the hypothesis that AMD saved financial resources for the R9 300 series, which is why they only made the R9 285/290 cards - they knew the 200 series was a transition generation. Either theory is just a hypothesis. However, the biggest problem with the R9 300 re-badging theory is that it provides no explanation at all for how AMD would get laptop design wins with re-badged R9 200M parts, which themselves are basically HD8000M parts. As I said above, if the R9 390/390X will be the only 2 new SKUs, why would AMD spend hundreds of millions of dollars on <5% of the GPU market and neglect >50% of the entire laptop market?!

That's the thing: even when AMD GPUs were more efficient (less power-hungry) than their NVIDIA counterparts, they still weren't selling as well. Brand perception/marketing is a pretty powerful thing. On the desktop side, I'm not sure having Maxwell-like efficiency is going to do AMD a lot of good when you have a very loyal group of customers, approaching Apple-like status, who it seems would never have considered AMD anyway. AMD definitely needs to improve perf/W on the mobile side though; that's where grabbing big OEM sales will matter.

AMD has released all-new top-to-bottom stacks. They've done it months and months ahead of nVidia. They've had nVidia beat at every price point with new tech while nVidia kept peddling old stuff. It didn't really matter. nVidia was able to hang in with just a single new chip (GF100), and got right back to dominating with a second (GF104), even though it was released ~10 months later than AMD's mid-range card. Efficiency didn't matter either. Their cards outsold AMD's.

Agree with everything both of you said. I already did an in-depth analysis of how AMD's superior perf/watt and price/performance didn't even make a dent in NV's market share from the HD4870 all the way to HD7970 days! If that isn't proof of NV having more brand-loyal customers even when AMD practically beats NV in everything but the single-GPU performance crown, I don't know what is. During the 6 months of the HD5000 series head-start, AMD beat NV at nearly every desktop price segment in terms of perf/mm2, perf/watt, price/performance, absolute performance and features, by getting to DX11 first. Maxwell can't even accomplish that, because NV loses on price/performance in today's market, and for brand-agnostic gamers a $100-120 750 Ti or $200 960 aren't good gaming products compared to a $120-130 R9 270 or $200-220 R9 280X.

NV literally coasted with an outdated product line-up, top to bottom, for 6 months leading up to the GTX 470/480, and then another 3.5-4 months until the GTX 460 launched, which means NV had nothing worth buying for a brand-agnostic gamer below the $350 GTX 470 for 10 months straight. What's amazing is that the HD5850/HD5870 pummeled the GTX 275/GTX 285 by way more than the GTX 970/980 beat the R9 290/290X, and those AMD cards cost $259 and $369, way less than the $330 and $550 for the 970/980. Yet NV's market share was basically unaffected once that generation settled (in fact NV gained market share post-Fermi), but today, with R9 280/280X/290/290X cards that are far more competitive than HD5000 was against the GTX 200 series, AMD's market share has been devastated. It's more obvious than ever, based on buying patterns, that NV users hardly switch, and when AMD's market share plummets it's brand-agnostic gamers jumping to NV because they aren't waiting for AMD's cards. What's surprising is that NV has been able to get gamers to switch while raising prices. I think AMD paid a huge price for not showing up in laptops for 3 years in a row now, and for having tarnished its desktop GPU brand image with the reference blower HD7970 and poor early drivers during the first 3-4 months of Tahiti's launch. They haven't been able to recover since.

Therefore, the R9 300 series will not be some savior for AMD. Even if the R9 390X were to take the performance crown from GM200, and R9 300 cards matched NV's perf/watt and beat NV's 900 cards on price/performance, NV users would be reluctant to switch, as they would just wait for NV to drop prices on the 900 cards and buy those. We've seen this play out since 2008, when the HD4850/4870 revolutionized price/performance, and during the 6 months when AMD had uncontested absolute performance with the HD5850/5870 - people still bought NV in greater numbers. Today we see gamers buy a $200 GTX 960 over a $240-250 R9 290, and the performance difference there is 45-50%... so.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
626
126
Based on what? [re: AMD lack of resources]

I base it on how late AMD is to respond to Nvidia. It is remarkable how well GCN has held up, but AMD has had no response to Nvidia for a very long time. And we don't even have an actual date for the R3xx stuff; AMD has said nothing. Maybe they are playing things close to the vest, I don't know, but I can't help but be pessimistic. AMD needs to start generating some hype and try to stop people from migrating even more to Nvidia products.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
I base it on how late AMD is to respond to Nvidia. It is remarkable how well GCN has held up, but AMD has had no response to Nvidia for a very long time. And we don't even have an actual date for the R3xx stuff; AMD has said nothing. Maybe they are playing things close to the vest, I don't know, but I can't help but be pessimistic. AMD needs to start generating some hype and try to stop people from migrating even more to Nvidia products.

I'm not sure how you see this as late.

Their last product released in the fall of 2013 and held up well against the then-current Nvidia series. Nvidia likely wasn't ready to release its big chip, but took the time to one-up AMD in the fall of 2014. It is still winter right now; this is a typical turn-around for product releases.

They don't spin up these products a few months after a competitor simply to compete; these things take years from start to finish. I don't recall a time when either company was ever able to have a new generation of product available at the same time as the competitor's launch.

AMD is getting parts certified and preparing stock for shipment, and frankly, it has been a reasonable time frame, all things considered.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Did you pick that game because it shows the 285 in the best light with respect to the 7970? Because when combining all of TPU's games @ 1440p, the 7970 is actually faster than the 285...

[Chart: TPU relative performance, 2560x1440]

You caught him cherry-picking again. :awe:

And to add to it:

The 285 loses to the 7970 in performance/watt as well.
[Chart: TPU performance per watt, 2560x1440]
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I base it on how late AMD is to respond to Nvidia. It is remarkable how well GCN has held up, but AMD has no response to Nvidia for a very long time. And we don't even have any actual date for the R3xx stuff, AMD has said nothing. Maybe they are playing things close to the vest I don't know but I can't help but be pessimistic. AMD needs to start generating some hype, try and stop people from migrating even more to Nvidia products.

OK, and Fermi was 6-9 months late, while the Kepler top-to-bottom roll-out took 2.5-9 months! Based on how late NV was, why didn't we see NV doom and gloom threads for months leading up to their new series launches?

Even if AMD created hype for the R9 300 series, it wouldn't matter much, since the GTX 750 Ti and 970/980 have captured a huge part of the GPU market that won't upgrade again for another 2-3 years, while in laptops AMD lost 3 years of sales. It'll take half a decade or more for AMD to get back to the market-share position they held prior to the HD7000 series. Not showing up for laptop design wins since 2012 really hurt them much more than the R9 290/300 series being late to market. Once gamers started buying NV-powered laptops, an NV/Intel laptop was basically an automatic buy for most of the market from May 2012 to today. It's a miracle AMD even has 25% market share still, as their discrete laptop GPU market share should be approaching 0% soon. For that reason, I don't see how AMD would spend hundreds of millions on just the R9 390/390X desktop parts and neglect >50% of the laptop market, which is also growing faster than the desktop gaming market.

--

As a side note, some perspective on pricing for Canadians. R9 300/GM200 series will end up as some of the most expensive GPU card launches in 5-6 years!

When Linus reviewed after-market R9 290X cards on July 31, 2014, 1 USD = 1.09 CDN. He cited the following prices from NCIX Canada on that review date:

The AMD R9 290X reference card launched at $550 USD

vs.

Asus Direct CUII R9 290X = $599 CDN
Gigabyte Windforce 3X R9 290X = $599 CDN
Club 3D Royal Ace Edition R9 290X = $699 CDN
MSI TwinFrozr IV R9 290X = $799 CDN
MSI Lightning R9 290X = $849 CDN

With 1 USD = 1.25 CDN now, a $599 USD R9 390X would be $750 CDN, which after taxes is easily around $850 CDN. Holy cow, we Canadians are back to near 8800GTX/Ultra days. Suddenly a used R9 290X sounds like the bargain of 2015 in the Canadian GPU market. :D
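The Canadian pricing arithmetic above works out as follows. A quick sketch: the $599 USD R9 390X price is the post's hypothetical, and the 13% sales tax is my assumed Ontario HST rate (the post only says "around $850 CDN" after taxes):

```python
# Quick check of the CDN pricing arithmetic in the post above.
usd_price = 599.00   # hypothetical R9 390X launch price (USD)
usd_to_cad = 1.25    # exchange rate cited in the post
tax_rate = 0.13      # assumption: 13% Ontario HST

cad_pre_tax = usd_price * usd_to_cad
cad_after_tax = cad_pre_tax * (1 + tax_rate)

print(f"pre-tax:   ${cad_pre_tax:.2f} CDN")    # ~$750
print(f"after tax: ${cad_after_tax:.2f} CDN")  # ~$846, i.e. around $850
```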
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
626
126
OK, and Fermi was 6-9 months late, while the Kepler top-to-bottom roll-out took 2.5-9 months! Based on how late NV was, why didn't we see NV doom and gloom threads for months leading up to their new series launches?
I can't give a good answer to this. But the bottom line is that AMD is punished much more harshly for the same mistakes than Nvidia is. I can't see how anyone could argue otherwise.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Did you pick that game because it shows the 285 in the best light with respect to the 7970? Because when combining all of TPU's games @ 1440p, the 7970 is actually faster than the 285...

Total facepalm. No one said anything about R9 285 beating 7970 on average. :thumbsdown:

Go and read my comment again to understand what's actually being discussed: the improvements of the 28nm architecture in Tonga, in the context of the R9 300 series, and how this impacts future GPUs from AMD on the 28nm node.

If you are not going to take the time to read the comment from top to bottom and dissect the information for what's actually written, why even bother replying?

You caught him cherry-picking again. :awe:

I'm not cherry-picking to sell the 285 as some good product. It's about showing a situation where AMD improved the architectural design of GCN on a perf/mm2 basis to net a 20% performance increase, based on various improvements under the bonnet of GCN, in a situation where the SKU is not limited by shader or TMU bottlenecks. Under normal circumstances the 285 can't take full advantage of these refinements to GCN because it's bottlenecked elsewhere.

I figured that even when I linked a direct technical page on Tonga's improved 28nm GCN architecture, you would not read any of it and would not understand 99% of my post, because discussions of architectural improvements on AMD's side go way over your head, or you live in an alternative reality where facts are ignored based on brand preference. It's no wonder most of the CPU and GPU forum ignores anything you post related to technical discussions and projections. Based on your track record of predicting that no future consoles would have APUs in them, your lack of desire to read up on GPU architectures and their pertinent details (including not understanding for years how GCN was designed and why it proved superior to Kepler in DirectCompute), your ability to predict future trends regarding the R9 300 is basically non-existent. This further proves that you do not understand how AMD can improve perf/mm2, perf/watt and absolute performance dramatically in the 28nm-based R9 300 series, because you remain in denial that AMD used Tonga as a test-bed for various GCN architectural improvements that will find their way into the R9 300 series. It's not even surprising that you would use Tonga's perf/watt to project the R9 300 series' perf/watt, and once again fail to understand that Tonga's perf/watt means little because it's not a fully unlocked 384-bit, 2048-shader part.

You also don't seem to know that AMD's 512-bit memory controller in the R9 290X actually uses 20% less space than the R9 280X's/7970 GHz's, and that AMD achieved a 50% increase in memory bandwidth per mm2 with Hawaii. This already proves that AMD can and will improve various parts of its future chips in terms of perf/mm2 and bandwidth/mm2, and maximize die area and transistor density.

[Image: Hawaii vs. Tahiti memory controller die-area comparison]


Just because you can't comprehend that AMD does not incorporate all of these changes at once like NV did with Maxwell, you can't comprehend how AMD could improve perf/watt based on what they have learned from the R9 290X's GCN 1.1 and Tonga's GCN 1.2. Of course you don't know how AMD could make large improvements, because you don't read and/or don't care to learn the technical aspects of AMD's GPU products, to see what steps their engineers are taking each new generation. All you see is "Graphics Core Next," and you miss all the intricate details of what AMD has done so far to make it all come together in the R9 300 series through the various changes made along the way since the original HD7970 launched.

I can't give a good answer to this. But the bottom line is AMD is much more harshly punished for the same mistakes versus Nvidia. I can't see how anyone could argue otherwise.

It's natural for supporters of a winning sports team with championships in the last 2-3 years to think the status quo will continue indefinitely. It's pretty obvious in this thread that some people wrote off AMD a long time ago, but the correlation is that most of the commenters who have done so have not purchased an AMD product in 5-10 years. When AMD is leading, they downplay the advantages and wait for NV's superior products; when AMD is behind, they start their usual bashing. The cycle repeats every generation depending on the positioning of AMD vs. NV, but what remains the same is that these same AT members never buy AMD cards. Ironically, they participate in AMD CPU and GPU threads never intending to buy any of their products. Go figure.
 

96Firebird

Diamond Member
Nov 8, 2010
5,740
334
126
Total facepalm. No one said anything about R9 285 beating 7970 on average. :thumbsdown:

Go and read my comment again to understand what's actually being discussed: the improvements of the 28nm architecture in Tonga, in the context of the R9 300 series, and how this impacts future GPUs from AMD on the 28nm node.

If you are not going to take the time to read the comment from top to bottom and dissect the information for what's actually written, why even bother replying?

Then you did a terrible job of defending your argument. I know exactly what you were trying to say, and you failed at it. Just because one game shows improvements for Tonga over Tahiti, even though the specs might suggest otherwise, doesn't mean that Tonga as a whole is better than Tahiti. It is, in fact, not better on average, just as the specs show. Face it, you got caught cherry-picking a benchmark (again) to shine light on your side of the argument. It's sad, really, that you continue to do this. :thumbsdown:

So I ask again, why did you pick that game?
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Wow, RS just explained why he compared Tonga to Tahiti and Hawaii. Friend, what is wrong with you?

Don't bother with him. You haven't been around since the first day he joined AT. His very first posts were anti-AMD rhetoric from day 1, and that never changed.

He doesn't understand how GPU bottlenecks work in games. As a result, he spouts baseless claims that Tonga is a bad product (which hardly anyone is interested in arguing about), while failing to acknowledge that AMD has made real architectural improvements in Tonga as a foundation for future products.

The 1792-shader, 32-ROP Tonga is better than the 2816-shader, 64-ROP 290X in geometry and pixel fill-rate performance.

[Chart: synthetic geometry throughput benchmark]


[Chart: synthetic pixel fill-rate benchmark]


He can't understand how AMD can take all those improvements in Tonga, plus the experience and lessons learned designing a more efficient memory controller in Hawaii, and combine them, along with even more improvements to the L2 cache and the asynchronous compute engines that schedule work to the SPs, in an HBM-based, 3500-4000-shader R9 300 series card that would smash a 980.

Then you did a terrible job of defending your argument. I know exactly what you were trying to say, and you failed at it. Just because one game shows improvements for Tonga over Tahiti, even though the specs might suggest otherwise, doesn't mean that Tonga as a whole is better than Tahiti.

You cannot even comprehend your first language, because no such argument was ever presented by me. You compared the R9 285 SKU vs. the HD7970 SKU as a whole, when no such comparison was even mentioned; the discussion centered on showing the GCN improvements underneath, on the same 28nm node. I didn't fail with my argument; it is you who failed to understand the context of the discussion and the example presented. Since some gamers presented the view that AMD needs a brand-new architecture because they cannot really improve on the R9 200 series, I presented a counter-argument to that view. Additionally, Tonga is not a fully unlocked chip, and it does not use the R9 290's advanced memory controller. Therefore, trying to project Tonga's overall perf/watt as an indication of the R9 380/390 series' perf/watt is a total waste of time. Not sure in what language I need to spell this out.

It is, in fact, not better on average, just as the specs show. Face it, you got caught cherry-picking a benchmark (again) to shine light on your side of the argument. It's sad, really, that you continue to do this.

Maybe you should take an English reading comprehension class. Now you are defending your position by accusing me of spinning something that was never debated. Some other PC gamers understood the context just fine by following the discussion more closely and actually reading what was written. It's not my fault you cannot understand the juxtaposition of the R9 285 being a bad gaming product while simultaneously showing excellent architectural improvements to GCN as a foundation to build upon for GCN 1.3 products.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
You cherry-pick again and continue the personal attacks.

Again, it's proven that Tahiti (7970) is more power-efficient, smaller, has fewer transistors, and is faster than Tonga (R9 285).
 