
AMD R9 470X exposure: TDP is only 60W


3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
This is all guessing on my part:

I am wondering how many clock cycles a given GCN instruction needs to complete, compared across the different GCN generations.
I am still trying to find information about that, but it makes sense that AMD has already optimized GCN as much as possible.

So they cannot shave off many (if any) clock cycles per instruction to increase IPC that way, because higher clocks usually require a longer pipeline. I assume here that GCN is pipelined, because that makes sense for a GPU, which does the same iterative work over and over again. But then again, a new smaller process may allow fewer stages for some instructions, so that those instructions complete in fewer clock cycles. That may of course limit the maximum clock speed, but that would not matter if the reachable clock speed plus the IPC improvements are already enough.
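The clock-versus-IPC tradeoff above can be sketched as a toy calculation (all numbers invented for illustration, not real GCN figures):

```python
# Toy model: throughput is clock * IPC. A longer pipeline tends to allow a
# higher clock but can cost IPC (more hazard stalls, longer instruction
# latency), while fewer stages mean a lower clock but potentially better IPC.

def throughput(clock_ghz, ipc):
    return clock_ghz * ipc  # giga-instructions per second

short_pipe = throughput(clock_ghz=1.0, ipc=1.0)   # fewer stages, lower clock
long_pipe = throughput(clock_ghz=1.3, ipc=0.85)   # more stages, higher clock

# Whether the deeper pipeline wins depends entirely on the actual numbers.
print(round(short_pipe, 3), round(long_pipe, 3))
```

With these made-up numbers the deeper pipeline comes out slightly ahead, but shift the IPC penalty a little and it loses; that is exactly the balance AMD has to strike.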

Do you know if there is a guide to the GCN instruction set somewhere?

We could all compare the differences between GCN 1.0, 1.1, and 1.2, and make an educated guess at how much GCN has improved with each revision and how 1.3 may improve further.

edit:

While looking, I found something right here from AnandTech. :)

http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute/2

Don't get too hung up on the naming. AMD could have just as easily not called it GCN. Would that mean it's a more complete overhaul of the uarch? No. Just because nVidia uses different names of physicists between gens does that mean their uarch is a bigger update? Again, no. You know the saying, a rose by any other name?
 
May 11, 2008
22,565
1,472
126
Don't get too hung up on the naming. AMD could have just as easily not called it GCN. Would that mean it's a more complete overhaul of the uarch? No. Just because nVidia uses different names of physicists between gens does that mean their uarch is a bigger update? Again, no. You know the saying, a rose by any other name?

That is a good point. But from what I read in the AnandTech article, a lot of features will probably remain the same.
And I think I know why Microsoft is pushing Windows 10 64-bit: it might support the GCN IOMMUv2. I do not yet understand how much of an improvement this will give.
It might have something to do with the zero-copy technique through pointer passing (for HSA).

http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute/6

Now what’s interesting is that the unified address space that will be used is the x86-64 address space. All instructions sent to a GCN GPU will be relative to the x86-64 address space, at which point the GPU will be responsible for doing address translation to local memory addresses. In fact GCN will even be incorporating an I/O Memory Mapping Unit (IOMMU) to provide this functionality; previously we’ve only seen IOMMUs used for sharing peripherals in a virtual machine environment. GCN will even be able to page fault half-way gracefully by properly stalling until the memory fetch completes. How this will work with the OS remains to be seen though, as the OS needs to be able to address the IOMMU. GCN may not be fully exploitable under Windows 7.
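As a rough illustration of the translation step described in that quote, here is a made-up toy model (my own sketch, not how the actual hardware or driver is organized): the GPU receives x86-64 virtual addresses and resolves them to local addresses through a page table, "stalling" on a fault until the page is mapped.

```python
# Toy IOMMU sketch: translate x86-64 virtual addresses to device-local
# addresses via a page table, servicing faults on first touch.

PAGE_SIZE = 4096

class ToyIOMMU:
    def __init__(self):
        self.page_table = {}   # virtual page number -> local page number
        self.next_local = 0

    def map_page(self, vpn):
        # Stands in for the OS servicing a page fault by mapping the page.
        self.page_table[vpn] = self.next_local
        self.next_local += 1

    def translate(self, vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        if vpn not in self.page_table:
            self.map_page(vpn)  # the "stall until the fetch completes" part
        return self.page_table[vpn] * PAGE_SIZE + offset

iommu = ToyIOMMU()
print(hex(iommu.translate(0x7F0000001234)))  # → 0x234 (first mapped page)
```

The "GCN may not be fully exploitable under Windows 7" remark in the quote is presumably about that fault-servicing step: the OS has to cooperate with the IOMMU for it to work.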
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
So this is basically a 270X, with 4GB rather than 2GB, that doesn't need a 6-pin PCI-E? Sounds like a winner, sort of. Better than the 750Ti, maybe similar to the GTX950(A) boards that lack the additional power connector.

Still, yeah, price is probably a tiny bit too high.

I could see $150. Then again, it's got 4GB, not 2GB, so that's worth some money too.

The 270X is a great card! I've run a 270 and CrossFire 270s for a while, and a single 270 can play at default settings at 1080p just fine.

BF4
BF hardline
Witcher 3
etc...

With 4GB of VRAM, that should help. I hope they can increase the clocks a bit, or get a little better IPC, for future games though.
 
Feb 25, 2011
16,992
1,621
126
So this is basically a 270X, with 4GB rather than 2GB, that doesn't need a 6-pin PCI-E? Sounds like a winner, sort of. Better than the 750Ti, maybe similar to the GTX950(A) boards that lack the additional power connector.

Still, yeah, price is probably a tiny bit too high.

I could see $150. Then again, it's got 4GB, not 2GB, so that's worth some money too.

MSRP of $169 will likely mean street prices a bit lower (or a lot lower if you resell bundled games, etc.) 15% (the standard we-sell-everything-for-this-but-tell-you-the-higher-price-so-you-think-it's-a-sale discount) off of $169 is $144.
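Quick sanity check on that arithmetic (the 15% routine street discount is of course just an assumption):

```python
msrp = 169.00
discount = 0.15  # assumed we-sell-everything-at-this discount
street = msrp * (1 - discount)
print(round(street))  # → 144
```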

You should know that - you're, like, the shopping-for-sales king. ():) :thumbsup:

I eagerly await the VL thread where you benchmark your quad-CF 470X rig with a Celeron dualie in it for gaming and DC. :cool:

Personally, I like power-efficient cards in principle, but have given up on them for my own use.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
8,348
9,730
136
AMD is a bit of a wild card in the launch department. I'm *positive* that they will find some way to screw up the Polaris launch and subsequent launches (terrible pricing, clocked too low, crap stock cooler, horribly delayed, etc etc etc). The last clean-sweep launch they had was way back with the HD5xxx series.

Fact is, AMD is strapped for cash, has suffered some amount of brain drain, and is trying to launch an overhauled microarch at a foundry that's untested for the task (has GF actually made any chips of note?).

AMD demoed working Polaris silicon before we even knew what the 10xx series from Nvidia would be called. Nvidia has presented, demoed and will even "launch" their Pascal silicon before we have any concrete details about Polaris.

I think AMD fans should temper their expectations a bit; AMD just doesn't have the resources at the moment to execute like Nvidia does.
 
Feb 19, 2009
10,457
10
76
that 1GHz clock seems too low imo.

Seems fine for the low TDP. FinFET scales really well with clocks and volts, and also inversely if AMD wants to chase a low TDP.

Here's the thing nobody mentioned.

40 ROPs and 80 TMUs. Why does a 1280 SP part need such high counts for these units?

32 ROPs is more than enough.

Unless per shader, performance went up by a massive amount.

Or the rumor is total bust. lol
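For context, here are shader/ROP/TMU ratios for a few known GCN parts next to the rumor (specs quoted from memory, so double-check them):

```python
# name: (stream processors, ROPs, TMUs)
parts = {
    "270X (Pitcairn)": (1280, 32, 80),
    "380 (Tonga)":     (1792, 32, 112),
    "390X (Hawaii)":   (2816, 64, 176),
    "470X (rumor)":    (1280, 40, 80),
}

for name, (sp, rops, tmus) in parts.items():
    print(f"{name}: {sp / rops:.0f} SPs per ROP, {sp / tmus:.0f} SPs per TMU")
```

Every shipping part above sits at 16 SPs per TMU, so 80 TMUs is normal for 1280 SPs; it's only the 32 SPs per ROP (versus 40+ elsewhere) that stands out.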
 

coercitiv

Diamond Member
Jan 24, 2014
7,381
17,499
136
I think AMD fans should temper their expectations a bit; AMD just doesn't have the resources at the moment to execute like Nvidia does.
I'm *positive* that they will find some way to screw up the Polaris launch and subsequent launches (Terrible pricing, clocked too low, crap stock cooler, horribly delayed, etc etc etc). The last clean sweep launch they had was way back when with the HD5xxx series.
So you appeal to objectivity in order to temper fan expectations, yet dump said objectivity to relay your own view: AMD is set to botch GPU launches for the foreseeable future. Because they'll find a way™.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
No, he's being very objective indeed - the recent historical record definitely doesn't inspire a priori confidence.

Actually think this launch has already gone a little bit past truly ideal, which would have been launching a month or two ago. Hope it does well though :) I really like these low TDP things.
 

coercitiv

Diamond Member
Jan 24, 2014
7,381
17,499
136
No, he's being very objective indeed - the recent historical record definitely doesn't inspire a priori confidence.
This mantra really needs to be addressed, because it's mostly legend: past behavior can be used as a predictor of future behavior only under specific conditions, one of which is a lack of external feedback. Another is that the observed individual/group must remain essentially unchanged and consistent in their behavior.

Even if we admit these conditions are met for the next launch (for the sake of argument), going as far as putting the subsequent launches in the same basket shows complete disregard for any objective criteria.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
In general? Of course it's dangerous.

In this specific case there's a really very plausible hypothesised underlying reason - AMD are simply short of the resources required to ensure a smooth release roadmap.

That's obviously still very true now. This situation could change at some point, and if it did then you'd expect to see things working more smoothly again.

Until then, as the main point of that post you're objecting to was - cut them a bit of slack! They're trying to do something monumentally hard, and not doing a remotely bad job of it either.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
8,348
9,730
136
So you appeal to objectivity in order to temper fan expectations, yet dump said objectivity to relay your own view: AMD is set to botch GPU launches for the foreseeable future. Because they'll find a way™.

- My apologies.

I figured I was presenting my opinion on the matter (AMD is going to botch this), explaining why they're going to screw this up (fallaciously pointing out a history of poor launches, laying out the financial and technical challenges ahead of them, pointing out the complete lack of marketing hype coming from them), then advising a fan base that is slowly starting to consume itself with expectations that they should probably rein those expectations in for the aforementioned reasons.

I stand by my reasoning, even if it isn't up to debate club standards. AMD is going to drop the ball on this somehow, and in some respects already has by letting the competition go from rumors to launch on their product uncontested.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
- My apologies.

I figured I was presenting my opinion on the matter (AMD is going to botch this), explaining why they're going to screw this up (fallaciously pointing out a history of poor launches, laying out the financial and technical challenges ahead of them, pointing out the complete lack of marketing hype coming from them), then advising a fan base that is slowly starting to consume itself with expectations that they should probably rein those expectations in for the aforementioned reasons.

I stand by my reasoning, even if it isn't up to debate club standards. AMD is going to drop the ball on this somehow, and in some respects already has by letting the competition go from rumors to launch on their product uncontested.

I agree with your view. It's better to temper expectations and be pleasantly surprised rather than expect a $299 Polaris 10 card that = GTX1070/980Ti. Since AMD needs earnings, since GP106 is not here yet, since it's possible the competitor's cheapest card is $420+ in retail, AMD doesn't even need to start off with some ridiculous value, since either P10/P11 cards will be better than what the competition offers anyway in those price brackets ($100-$330 range). Once the competition responds, they can lower prices and/or bundle a strong AAA title to push P10/11 sales in the fall. I actually think fire-sale R9 390 cards with a TW: Warhammer bundle may be a great deal for 1080p 60Hz gamers once we start seeing them for $225-250 with rebates, etc.
 
May 11, 2008
22,565
1,472
126
Don't get too hung up on the naming. AMD could have just as easily not called it GCN. Would that mean it's a more complete overhaul of the uarch? No. Just because nVidia uses different names of physicists between gens does that mean their uarch is a bigger update? Again, no. You know the saying, a rose by any other name?

I had to think some more about this. AMD did not really have the resources to do a complete architecture overhaul with every GCN revision. It also needs a lot of architectural compatibility if we safely assume that a Polaris variant will be used in the PS4K. Just removing bottlenecks, adding architectural tweaks for higher performance, and making the design ready for higher clock speeds makes more sense.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I had to think some more about this. AMD did not really have the resources to do a complete architecture overhaul with every GCN revision. It also needs a lot of architectural compatibility if we safely assume that a Polaris variant will be used in the PS4K. Just removing bottlenecks, adding architectural tweaks for higher performance, and making the design ready for higher clock speeds makes more sense.

Considering we were stuck on DX11, and AMD's uarch was already designed for next gen, I can understand that they didn't do any major update. The hardware was already ahead of the software.
 
May 11, 2008
22,565
1,472
126
Considering we were stuck on DX11, and AMD's uarch was already designed for next gen, I can understand that they didn't do any major update. The hardware was already ahead of the software.

Now you mention it, that makes a lot of sense.
And since Microsoft is pushing windows 10 with DX12 a lot, the hardware will become mainstream.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
8,348
9,730
136
I guess the question still remains: does the 480 bring 28nm high-end performance to the mainstream (like the 7870 did coming from 40nm), or does it follow the more recent trend of just bumping the next-highest number down a notch?

We still know nothing, I guess...