[Rumor] R9 300 series will be manufactured in 20nm!


Hitman928

Diamond Member
Apr 15, 2012
6,754
12,500
136
That's not what TDP is about. TDP is about the maximum power the card can pull under any circumstances.

Absolutely false. TDP stands for thermal design power and refers to the cooling capacity the system needs under a typical load; what that typical load is, and at what frequencies, is entirely up to the vendor to decide. Therefore, the same rated TDP from AMD/Intel/Nvidia/whoever can correspond to different actual power consumption.

I can run it, therefore it is "real" power usage. It's really physically drawing that power (in fact, I was just running some tests on my 7870 using FurMark and a Kill-A-Watt tester).

I don't care what AMD and Nvidia think. They were upset that they got caught lying about TDP on older generations of video cards. The GTX 480 used 360W under FurMark, thus the card's real, physical TDP was 360W, whether Nvidia admitted that fact or not.

The eventual solution used by both vendors was to put hardware power monitoring so the card can't get above TDP. Which is why FurMark is a good test of what the card's real TDP actually is. (In some cases, it's lower than the official figure, as we saw with the 7850 and 7950.)
True, it is real power that you're reading from your meter, but relating it to TDP is false again. FurMark is not a typical load and therefore falls outside TDP considerations. Furthermore, to prevent damage to their chips (and probably bad PR), both AMD's and Nvidia's drivers detect when a program like FurMark loads and purposefully throttle back the frequency. The amount of throttling differs between AMD and Nvidia, and even from card model to card model, so it's not really a useful measure unless you plan on doing something like mining on your card.

TDP can be used as an estimate for power usage, but that's it. As has been shown, TDP and actual power can be drastically different, and actual power is very load dependent.
 
Aug 11, 2008
10,451
642
126
If one vendor claims TDP is average gaming power, the other vendor should do the same IMO.

What vendor ever claimed TDP was average gaming power?

Plenty of posters on this forum use TDP in a variety of ways to benefit whatever they are trying to show, but I don't think I've ever heard of a vendor calling it average gaming power.
 

Vaporizer

Member
Apr 4, 2015
137
30
66
But if they change the process and keep the layout, wouldn't it be possible to introduce the changes (scaler) needed to make the "new" R9 370 compatible with VSR and FreeSync?
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
If AMD wants to venture down this road to gain on Maxwell, which has a technological advantage in terms of a new architecture, it can certainly be much cheaper than spending millions and millions of dollars to engineer a new architecture for 28nm, since we are stuck with it until 2016 when 16nm is available.

I disagree that Maxwell has any tech advantage. It's more efficient for gaming, but that's really it.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
R9 270
1280 @ 925MHz - 150W

R9 270X
1280 @ 1050MHz - 180W

Makes extremely little sense to make the R9 370 rebrand slower than the R9 270.
Even then it's a 30-40W reduction. Most likely it's at least as highly clocked as the 270X, or at least very close, IMO.

TSMC claims a 25% power reduction with 20nm compared to 28nm. 180W * 0.75 is 135W. 150W * 0.75 is 112.5W. That falls exactly in line with the specs for the R9 370.
Ergo, the R9 370 is differently clocked models of the 270 and 270X, but on 20nm.

Boy this is exciting :)

Not really. In the new Mac Pro, the FirePro D300 is a Pitcairn chip with an 850 MHz core clock. Its TDP is 116W, with a Turbo TDP of 139W.

Also, Tahiti, which is the FirePro D700, is 109W with a max Turbo TDP of 129W at an 850 MHz core clock.

So it's possible to simply lower the clocks and voltage on a chip and lower its TDP on the same node.

Forgot to mention:
IMO the delay is because of demand from the biggest client AMD can have: Apple. They have the Mac Pro, which uses AMD GPUs. AMD sold all of its 2048-core GCN Tonga dies to Apple for the Retina iMac, and you know it very well, Cloudfire ;). I believe the delay is because Apple bought most of the supply of Fiji, and perhaps Bermuda, chips for their computers. Everything is on the market that would prompt Apple to update it. They didn't, because they are waiting for AMD and WWDC. That will be on June 8th, a few days after Computex, right?
Apple cannot launch a new Mac Pro without an announcement of new technology. And AMD cannot announce new technology if they don't also have at least some supply of it for the consumer market.
 
Last edited:

Shehriazad

Senior member
Nov 3, 2014
555
2
46
I can't cite my sources... but the R9 300 series will actually be manufactured in 0.2nm. AMD actually built a time machine, and their new GPUs will not only be made on this awesome 0.2nm process... they also use this new and awesome material called Bogusium.


No but for real....if recent news already say that AMD is gonna skip 20nm....why would this even be a rumor?

That's like saying AMD is gonna use GDDR6 (lelwat) in their R9 300 series...
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
I disagree that Maxwell has any tech advantage. It's more efficient for gaming, but that's really it.

Maxwell is more efficient for anything except Double Precision computing. Other GPGPU tasks work just fine, usually more efficiently than with GCN and Kepler. It's true that Nvidia cards have often had lower benchmark scores in OpenCL applications than corresponding AMD cards, but that's really a driver issue (and one that Nvidia is in no hurry to fix, because they want to push proprietary CUDA). It has nothing to do with the underlying architecture.

GCN isn't as far behind as some people seem to think (the gap is exacerbated by AMD's insistence on overclocking and overvolting its chips) but it is behind Maxwell in efficiency.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
No but for real....if recent news already say that AMD is gonna skip 20nm....why would this even be a rumor?

It's all rumors. No official source ever said that 20nm GPUs were cancelled; this came from alleged insider information and leaks. In 2014, AMD CEO Lisa Su made the following statement: "20nm is an important node for us. We will be shipping products in 20nm next year and as we move forward." That's about the only official, public statement we have from AMD on anything to do with 20nm. Of course, she didn't say GPUs so she could have been talking about other products, and it's also possible that things didn't work out and 20nm had to be cancelled. But there was never any official walk-back.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
There are some people here that really don't know what they are talking about.
"TDP isn't power consumption bla bla bla, here are some charts from a review I really don't understand bla bla bla".

R9 270:
TYPICAL BOARD POWER: 150W

R9 270X:
TYPICAL BOARD POWER: 180W

R9 370:
REFERENCE BOARD POWER: 110-130W

#2:
XFX R9 270: 1x6pin
XFX R9 270X: 2x6pin
XFX R9 370: 1x6pin

Why do they do this? Because the OEMs make power available for the TDP, which is worst-case-scenario power draw.

The PCIe slot can supply 75W. A 1x6-pin connector can supply another 75W: 150W in total, so 1x6-pin is the layout.
A 180W TDP for a card? You get 2x6-pin: 2x75W plus 75W from the PCIe slot.

Does it mean a card will draw 180W because it has a 180W TDP? In 99% of scenarios, no.
But it is still a power requirement OEMs have to design around. They really don't give a rat's behind about what Metro 2033 consumes. You cover all aspects and leave room in the power envelope for the card. Period.
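As a rough illustration of that budgeting logic (my own sketch, not anything from AMD or the board partners; the wattages are just the PCIe spec figures quoted above):

# Worst-case board-power envelope from the connector layout, per the PCIe spec figures above.
SLOT_W = 75
CONNECTOR_W = {"6pin": 75, "8pin": 150}

def max_board_power(connectors):
    # Slot power plus whatever the auxiliary connectors are specced to deliver.
    return SLOT_W + sum(CONNECTOR_W[c] for c in connectors)

print(max_board_power(["6pin"]))          # 150 -> covers a 150W R9 270
print(max_board_power(["6pin", "6pin"]))  # 225 -> covers a 180W R9 270X with headroom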


The R9 370 has a much lower board power requirement than the R9 270X and even the R9 270. Driver entries have shown the GPUs are the same, i.e. rebrands.
Where does this power reduction come from? A big reason to suspect 20nm, IMO. Unless 28nm SHP is so much better than 28nm from TSMC. But if XFX is saying up to 130W for the R9 370, I'd say that's a high-clocked model, which could perhaps be compared against the R9 270X in clocks, maybe higher. Meaning at least a 50W reduction. That's a lot, don't you think?
It's about a 28% reduction in power, which just happens to be what TSMC claims is the result of 20nm over 28nm:
http://www.tsmc.com/english/dedicatedFoundry/technology/20nm.htm
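A quick back-of-the-envelope check of that percentage (my own arithmetic, using the 180W and 130W board-power figures listed above; the ~25-30% number is TSMC's marketing claim, not a measurement):

# Percentage reduction from the 270X's typical board power to the quoted R9 370 ceiling.
r9_270x_w = 180.0
r9_370_w = 130.0
reduction = (r9_270x_w - r9_370_w) / r9_270x_w
print(f"{reduction:.1%}")  # 27.8%, roughly the power reduction TSMC claims for 20nm over 28nm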
 
Last edited:

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Not really. In the new Mac Pro, the FirePro D300 is a Pitcairn chip with an 850 MHz core clock. Its TDP is 116W, with a Turbo TDP of 139W.

Also, Tahiti, which is the FirePro D700, is 109W with a max Turbo TDP of 129W at an 850 MHz core clock.

So it's possible to simply lower the clocks and voltage on a chip and lower its TDP on the same node.

Forgot to mention:
IMO the delay is because of demand from the biggest client AMD can have: Apple. They have the Mac Pro, which uses AMD GPUs. AMD sold all of its 2048-core GCN Tonga dies to Apple for the Retina iMac, and you know it very well, Cloudfire ;). I believe the delay is because Apple bought most of the supply of Fiji, and perhaps Bermuda, chips for their computers. Everything is on the market that would prompt Apple to update it. They didn't, because they are waiting for AMD and WWDC. That will be on June 8th, a few days after Computex, right?
Apple cannot launch a new Mac Pro without an announcement of new technology. And AMD cannot announce new technology if they don't also have at least some supply of it for the consumer market.

I've seen 150W thrown around, but 139W may be the accurate power limit in the vBIOS; you could be right about that. But that's not really a power reduction through voltage reduction etc. The D300 runs at 850 MHz, the R9 270 at 925 MHz. That's where the 150W -> 139W reduction comes from.
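As a first-order sanity check of that clock-speed explanation (my own estimate, assuming dynamic power scales roughly linearly with frequency at a fixed voltage and ignoring leakage):

# Rough estimate: power ~ frequency at constant voltage (ignores static leakage and voltage changes).
r9_270_power_w = 150.0
r9_270_clock_mhz = 925.0
d300_clock_mhz = 850.0
estimate_w = r9_270_power_w * (d300_clock_mhz / r9_270_clock_mhz)
print(f"{estimate_w:.0f} W")  # ~138 W, close to the 139W Turbo figure mentioned for the D300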

Tonga exists on mobile too, you know. The M295X is featured in Dell's 15" Alienware notebooks.
According to Guru3D, the chip is manufactured at GlobalFoundries, which should be the 28nm SHP process, but I'm not 100% sure. If it is, it did not give any big benefits TDP-wise. The M295X has a TDP of 125W, which is a lot for mobile.

So AMD needs to get that TDP down if they are going to have any chance of getting that chip into more notebooks. Right now Dell is the only OEM that uses it in notebooks. Clevo and MSI usually use AMD chips in their high-end notebooks, but not this time; they probably skipped it now that the 970M is there with a 75W TDP and better performance, which means fewer problems cooling- and power-wise.
More reasons to manufacture Tonga (R9 380/M385/M385X?) on 20nm, I'd say.
 
Last edited:

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
Cloud, I believe that at a 125W TDP we will see full Tonga (2048 GCN cores) as the R9 370/X.
 

Kippa

Senior member
Dec 12, 2011
392
1
81
One of the things that makes me smile is how some users slag off other users when they make a comment about a rumour, and how quickly some users' "memory" seems to fade when that rumour turns out to be true.

As for the 20nm rumour, there might be a chance that it is true. How likely? Who knows. We'll probably find out very soon anyway. Personally I am agnostic on the issue. I don't believe or disbelieve the rumour, I just keep an open mind.

I'm keeping a keen eye on the benchmarks as the new gfx card is released, as I will probably want to invest in a 4K monitor and want some kind of adaptive sync, whether it be G-Sync or FreeSync, for both monitor and graphics card. I'm not jumping in until I get the lay of the land.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Cloud, I believe that at a 125W TDP we will see full Tonga (2048 GCN cores) as the R9 370/X.
Well, it's not the R9 370. The 370X perhaps, since the jump from 1280 to 1792 isn't that big?
http://forums.anandtech.com/showpost.php?p=37270321&postcount=1

One of the things that makes me smile is how some users slag off other users when they make a comment about a rumour, and how quickly some users' "memory" seems to fade when that rumour turns out to be true.

As for the 20nm rumour, there might be a chance that it is true. How likely? Who knows. We'll probably find out very soon anyway. Personally I am agnostic on the issue. I don't believe or disbelieve the rumour, I just keep an open mind.

I'm keeping a keen eye on the benchmarks as the new gfx card is released, as I will probably want to invest in a 4K monitor and want some kind of adaptive sync, whether it be G-Sync or FreeSync, for both monitor and graphics card. I'm not jumping in until I get the lay of the land.
Good for you for keeping an open mind. I really hope it's true. AMD deserves this. Imagine AMD throwing the "We will be launching the R9 390X with up to 8GB of HBM, and it is entirely built on 20nm" bomb on us. Plus "For the people that want the absolute best, we built the ultimate gaming beast, the R9 395X2, to replace the R9 295X2, also with HBM and on 20nm".

We shall find out very soon for sure. It's just a bit over a month until Computex. I'm sure AMD has something for us there. Hopefully we will see the 390X and finally put an end to months and months of speculating. Then the power/dollar/brand discussions take over lol :p
 
Last edited:

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
I don't think Lisa is unrealistic in setting goals, so the $1B for GF must happen more or less in some way.

As AMD's entire server and desktop line is, if not sinking, then already at the bottom of the ocean, the capacity can only come from consoles, Carrizo, or GPUs. IMO it's too early to change the consoles' process (MS and Sony would not bet that business on GF's solidity), and Carrizo is small and still a limited segment for laptops. That leaves GPUs taking a major part of GF capacity for 2015.

The new GPUs must be made at GF.

It's damn sure you don't make a GPU on a new node, be it 20 or 28nm, with any design earlier than Tonga. That's for sure. Why use an older design?

What would be the purpose of 28nm at GF? Well, Mubadala controls AMD, and AMD is meant to feed GF. That's reason enough for going from 28nm TSMC to 28nm GF. A newer design on a tweaked 28nm can do a lot, and if 28nm is cheap, then why go 20nm?

What favors 20nm, IMO, is that you get the same benefit going from 28nm to 20nm bulk as with every normal shrink. We have to remember that. Without FinFET cost and complexity. I simply think HP 20nm is quite ideal for GPUs, as you are not as dependent on the low leakage and low-power/high-performance characteristics that FinFET can give. And I guess GF's process is nowhere near ready for their own (Samsung) HP FinFET; 2016 is optimistic here.

I think some of the reservation about GPUs on 20nm stems from one of Charlie's earlier articles saying it was not fit for them. But IMO the argument against GPUs on 20nm (except that we haven't yet seen HP 20nm) is simply not there. On the contrary: 20nm could be the last cheap ($/transistor) node and as such a fit for large GPU dies. (And APUs.)
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
The R9 370 has a much lower board power requirement than the R9 270X and even the R9 270. Driver entries have shown the GPUs are the same, i.e. rebrands.
Where does this power reduction come from? A big reason to suspect 20nm, IMO. Unless 28nm SHP is so much better than 28nm from TSMC. But if XFX is saying up to 130W for the R9 370, I'd say that's a high-clocked model, which could perhaps be compared against the R9 270X in clocks, maybe higher. Meaning at least a 50W reduction. That's a lot, don't you think?
It's about a 28% reduction in power, which just happens to be what TSMC claims is the result of 20nm over 28nm:
http://www.tsmc.com/english/dedicatedFoundry/technology/20nm.htm

Maybe, but one thing you have to keep in mind is that the Pitcairn cards are seriously overvolted by default. Better binning, like what AMD did with the E-series FX chips, could get the power consumption of full Pitcairn down to 130W with no changes in silicon whatsoever.

In fact, I ran some experiments last night proving this. My video card is a Powercolor PCS+ 7870. I connected my PC's power plug to a Kill-A-Watt meter, which indicates that it consumes 68-72 watts while idling on the desktop (power usage fluctuates). When I set the card to stock 7870 settings (removing factory OC), total system power usage during FurMark (measured at the wall) was 228W-233W. We know from TechPowerUp that idle power consumption is about 12 watts, so this means the card is consuming about 170W-175W under FurMark - almost exactly what the TDP tells us. Then I started dropping the voltage. I adjusted the core clock slightly down, to 950 MHz, but increased the RAM speed to 1250 MHz (technically overclocking, but not really, since that's the actual rated speed of the GDDR5 chips). I ended up at 950 MHz core, 1250 MHz RAM, 1.050 volts. (I could probably have dropped the voltage more, but this was about where improvements seemed to taper off.) The result of this was that FurMark power consumption dropped to about 185W; once the non-GPU idle power is factored out, this means the GPU is pulling about 125W maximum. That's a huge difference. AMD could do this tomorrow, without any new silicon at all.
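For clarity, the arithmetic behind those card-only numbers (a sketch of the same at-the-wall subtraction; the idle figures are the ones quoted above, and PSU efficiency losses are ignored, so the results are approximate):

# Estimate GPU-only draw from wall measurements, as described in the post above.
idle_wall_w = 70.0     # whole system idling at the desktop (measured 68-72W)
card_idle_w = 12.0     # card's idle draw per TechPowerUp
rest_of_system_w = idle_wall_w - card_idle_w  # ~58W of non-GPU load, assumed roughly constant

def card_power(furmark_wall_w):
    return furmark_wall_w - rest_of_system_w

print(card_power(230.0))  # ~172W at stock 7870 settings (228-233W at the wall)
print(card_power(185.0))  # ~127W after dropping to 950 MHz / 1.050V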

By the way, I think we can be fairly sure AMD isn't going to port any GCN 1.0 parts to either 28nm SHP or 20nm without making some changes. Tahiti has already been superseded by Tonga, Cape Verde won't have a successor, and Pitcairn needs updating because it lacks FreeSync, TrueAudio, and other modern features.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Cloud, I believe that at a 125W TDP we will see full Tonga (2048 GCN cores) as the R9 370/X.

This is doable even at 28nm if you drop the clocks enough. The Mac Pro version of Tonga has a TDP of about this much. At 20nm, 125W full-fat Tonga should be possible without any clock speed compromise.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
JDG1980: In the Mac Pro there is no Tonga chip, only Tahiti and Pitcairn. In the iMac there is full Tonga.

Cloudfire: is there a possibility that there could be different names and device IDs in the drivers only for OEM versions of GPUs? I mean: a device ID and new name for a Pitcairn GPU that ends up being an OEM-only part?
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
This is doable even at 28nm if you drop the clocks enough. The Mac Pro version of Tonga has a TDP of about this much. At 20nm, 125W full-fat Tonga should be possible without any clock speed compromise.

Let's assume that TDP is power for the sake of the argument: which 20nm process will give you 125W from a full Tonga when the cut-down R9 285 is 190W???
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
Let's assume that TDP is power for the sake of the argument: which 20nm process will give you 125W from a full Tonga when the cut-down R9 285 is 190W???

In the Retina 5K iMac there is a full (2048 GCN core) chip that has a 125W TDP. And it's 28nm.

Also, the Mac Pro has a Tahiti chip with 2048 GCN cores and a wider memory bus, and it also has a 129W TDP.

It's not a problem. It's only a matter of voltage and clocks.
 

Hitman928

Diamond Member
Apr 15, 2012
6,754
12,500
136
There are some people here that really don't know what they are talking about.
"TDP isn't power consumption bla bla bla, here are some charts from a review I really don't understand bla bla bla".

Why do they do this? Because the OEMs make power available for the TDP, which is worst-case-scenario power draw.

For being so dismissive and condescending, you are the one who doesn't know what you are talking about. TDP is absolutely NOT worst-case-scenario power draw unless the IHV specifies it as such. There is no standard definition of how to set TDP, and it will vary by manufacturer and even between card models. I just explained it at the top of this page.


The PCIe slot can supply 75W. A 1x6-pin connector can supply another 75W: 150W in total, so 1x6-pin is the layout.
A 180W TDP for a card? You get 2x6-pin: 2x75W plus 75W from the PCIe slot.

Does it mean a card will draw 180W because it has a 180W TDP? In 99% of scenarios, no.
But it is still a power requirement OEMs have to design around. They really don't give a rat's behind about what Metro 2033 consumes. You cover all aspects and leave room in the power envelope for the card. Period.

False once again. Some OEMs might do this, but I guarantee you the big OEMs don't validate their consumer systems for heavy GPGPU-type loads, because that's not what those systems are for. You may disagree, but unless you have some kind of proof or professional experience here, it doesn't matter what your opinion is, as it is not factually based. The power limits you mentioned also aren't hard technical limits; they can easily be exceeded. Those are just the spec limits needed for PCIe certification, and not every card is PCIe certified.
 
Last edited:

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
234
106
When I set the card to stock 7870 settings (removing factory OC), total system power usage during FurMark (measured at the wall) was 228W-233W. We know from TechPowerUp that idle power consumption is about 12 watts, so this means the card is consuming about 170W-175W under FurMark - almost exactly what the TDP tells us. Then I started dropping the voltage. I adjusted the core clock slightly down, to 950 MHz, but increased the RAM speed to 1250 MHz (technically overclocking, but not really, since that's the actual rated speed of the GDDR5 chips). I ended up at 950 MHz core, 1250 MHz RAM, 1.050 volts. (I could probably have dropped the voltage more, but this was about where improvements seemed to taper off.) The result of this was that FurMark power consumption dropped to about 185W; once the non-GPU idle power is factored out, this means the GPU is pulling about 125W maximum. That's a huge difference. AMD could do this tomorrow, without any new silicon at all.
True. Pitcairn undervolts fairly well; in my experience, an undervolted 950-975 MHz on the core gives you the best performance per watt (which can challenge even the GTX 960).
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
I don't think Lisa is unrealistic in setting goals, so the $1B for GF must happen more or less in some way.

As AMD's entire server and desktop line is, if not sinking, then already at the bottom of the ocean, the capacity can only come from consoles, Carrizo, or GPUs. IMO it's too early to change the consoles' process (MS and Sony would not bet that business on GF's solidity), and Carrizo is small and still a limited segment for laptops. That leaves GPUs taking a major part of GF capacity for 2015.

The new GPUs must be made at GF.

It's damn sure you don't make a GPU on a new node, be it 20 or 28nm, with any design earlier than Tonga. That's for sure. Why use an older design?

What would be the purpose of 28nm at GF? Well, Mubadala controls AMD, and AMD is meant to feed GF. That's reason enough for going from 28nm TSMC to 28nm GF. A newer design on a tweaked 28nm can do a lot, and if 28nm is cheap, then why go 20nm?

What favors 20nm, IMO, is that you get the same benefit going from 28nm to 20nm bulk as with every normal shrink. We have to remember that. Without FinFET cost and complexity. I simply think HP 20nm is quite ideal for GPUs, as you are not as dependent on the low leakage and low-power/high-performance characteristics that FinFET can give. And I guess GF's process is nowhere near ready for their own (Samsung) HP FinFET; 2016 is optimistic here.

I think some of the reservation about GPUs on 20nm stems from one of Charlie's earlier articles saying it was not fit for them. But IMO the argument against GPUs on 20nm (except that we haven't yet seen HP 20nm) is simply not there. On the contrary: 20nm could be the last cheap ($/transistor) node and as such a fit for large GPU dies. (And APUs.)
If GlobalFoundries makes 20nm, that would be ideal for AMD. That's still up in the air. They might need to go 20nm to make the R9 390X and R9 395X2 because of heat, power and space requirements.

They could do it across all R9 300 cards, because it's cheaper to rebrand than to design new chips when we know both 20nm and 28nm are short-lived and pretty much at the end of the line. 16nm FinFET from TSMC is superior to 20nm, which is why both AMD and Nvidia are gunning for it with Greenland and Pascal.
I simply think AMD can catch up to Nvidia's efficient Maxwell by using the same GCN 1.x architecture but on 20nm, which will be cheaper for AMD than dishing out cash on a new architecture, IMO. We all heard the $700+ price for Fiji. HBM and 20nm could be the reason?

You could be right that AMD is tweaking the cores a bit. After all, the R9 370, which looks to be a 270X rebrand, is GCN 1.0. It's missing TrueAudio, XDMA, FreeSync etc. It would not surprise me if AMD has added those.
We know AMD has changed the names for the rebrands:
Tonga = Antigua
Grenada is there, Tobago etc. It seems strange to launch new names if the features haven't changed at least.

Maybe, but one thing you have to keep in mind is that the Pitcairn cards are seriously overvolted by default. Better binning, like what AMD did with the E-series FX chips, could get the power consumption of full Pitcairn down to 130W with no changes in silicon whatsoever.

In fact, I ran some experiments last night proving this. My video card is a Powercolor PCS+ 7870. I connected my PC's power plug to a Kill-A-Watt meter, which indicates that it consumes 68-72 watts while idling on the desktop (power usage fluctuates). When I set the card to stock 7870 settings (removing factory OC), total system power usage during FurMark (measured at the wall) was 228W-233W. We know from TechPowerUp that idle power consumption is about 12 watts, so this means the card is consuming about 170W-175W under FurMark - almost exactly what the TDP tells us. Then I started dropping the voltage. I adjusted the core clock slightly down, to 950 MHz, but increased the RAM speed to 1250 MHz (technically overclocking, but not really, since that's the actual rated speed of the GDDR5 chips). I ended up at 950 MHz core, 1250 MHz RAM, 1.050 volts. (I could probably have dropped the voltage more, but this was about where improvements seemed to taper off.) The result of this was that FurMark power consumption dropped to about 185W; once the non-GPU idle power is factored out, this means the GPU is pulling about 125W maximum. That's a huge difference. AMD could do this tomorrow, without any new silicon at all.

By the way, I think we can be fairly sure AMD isn't going to port any GCN 1.0 parts to either 28nm SHP or 20nm without making some changes. Tahiti has already been superseded by Tonga, Cape Verde won't have a successor, and Pitcairn needs updating because it lacks FreeSync, TrueAudio, and other modern features.
Good test, but you must remember that just because your chip endured a voltage drop in that particular test doesn't mean another person's 7870 silicon can. There are pretty strict guidelines on specifications to ensure no chip failure across many tests, not just FurMark. And the base specs from AMD for the chip are guaranteed for all chips they sell to AIBs.

I agree that a voltage drop helps, no doubt there. If they found a 28nm process that can be stable at a lower voltage, that's certainly one way. But in terms of denser, smaller and more stable chips, 20nm with existing specs is probably better.

JDG1980: In the Mac Pro there is no Tonga chip, only Tahiti and Pitcairn. In the iMac there is full Tonga.

Cloudfire: is there a possibility that there could be different names and device IDs in the drivers only for OEM versions of GPUs? I mean: a device ID and new name for a Pitcairn GPU that ends up being an OEM-only part?
I have no idea, but it seems strange to sell one 370 that is a rebrand for OEMs and another that is new. I know Nvidia has done it in the past, but the rebrand and the new chip have not shared the same device ID.
Take the GTX 860M for example. The Kepler 860M has ID 119A while the Maxwell 860M has ID 1392. So I doubt it.

Plus, VR-Zone says many existing 300 chips will be rebranded:
http://vr-zone.com/articles/amd-fij...dad-tobago-gpus-set-debut-computex/89325.html

For being so dismissive and condescending, you are the one who doesn't know what you are talking about. TDP is absolutely NOT worst-case-scenario power draw unless the IHV specifies it as such. There is no standard definition of how to set TDP, and it will vary by manufacturer and even between card models. I just explained it at the top of this page.

False once again. Some OEMs might do this, but I guarantee you the big OEMs don't validate their consumer systems for heavy GPGPU-type loads, because that's not what those systems are for. You may disagree, but unless you have some kind of proof or professional experience here, it doesn't matter what your opinion is, as it is not factually based. The power limits you mentioned also aren't hard technical limits; they can easily be exceeded. Those are just the spec limits needed for PCIe certification, and not every card is PCIe certified.

TDP is not typical load, dude, lol. Power measurements for AMD cards have shown that; tons of other tests for many Nvidia cards have shown it too. You didn't seem to read my post at all, where I proved it. Try reading it again.
TDP is the worst a card can come across under realistic scenarios, not including FurMark, which is as far from reality as you can come.

OEMs don't control TDP. The chip does. They design cooling and power delivery based on that. They can't take a 200W GPU and set 150W as the limit. Well, they can, but say goodbye to any potential customers once someone dumps the vBIOS and reads a power limit of 150W. If AMD markets the card as 200W, you don't put a 150W limit on it.

The PCIe slot can go over 75W, sure; the GTX 750 Ti miners are fresh in memory. But you are nitpicking details. Most AIBs go by the specifications and add pins based on the above.
 
Last edited:

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
I'm out of this discussion for a while. It's taking up way too much time.
Let's wait and see if my source was right on this one :)


I think these quotes are worth reposting. It seems that Nvidia spent money on 28nm and a new architecture because they didn't want to wait, while AMD seems to have waited it out for available capacity, which is what pushed the releases back from Feb/March to May/June:
Nvidia's newest chips, found in the GTX 980 and 970 cards, were made using the 28nm process instead of the 20nm Nvidia wanted. So the company decided to skip 20nm and go straight to 16nm for future designs. The problem there is 16nm is very, very new, and new process technologies mean some shakeout time.
AMD wanted to drop from 28nm to 20nm for its new GPUs but ran into the same capacity issue. This has impacted the delivery of AMD's 20nm R9 300 series graphics cards. They were supposed to show between February and March of this year but now they are at least two months behind and it's not AMD's fault.
And last November, at its financial analyst meeting, Senior Vice President and Chief Technology Officer Mark Papermaster said there would be 20nm and 28nm products in 2015 but no 14nm or 16nm products until 2016.
 
Last edited: