I don't think Lisa is unrealistic in setting goals, so the $1B for GF must happen more or less in some form.
As AMD's entire server and desktop line is, if not sinking, then already at the bottom of the ocean, the capacity can only come from consoles, Carrizo, or GPUs. IMO it's too early to change the consoles' process (MS and Sony would not bet that business on GF's reliability), and Carrizo is small and still a limited segment of the laptop market. That leaves GPUs taking a major part of GF capacity for 2015.
The new GPUs must be made at GF.
It's damn sure you don't make GPUs on a new node, whether 20nm or 28nm, with any design earlier than Tonga. That's for sure. Why use an older design?
What would be the purpose of 28nm at GF? Well, Mubadala controls AMD, and AMD is meant to feed GF; that's reason enough for moving from TSMC 28nm to GF 28nm. A newer design on a tweaked 28nm process can do a lot, and if 28nm is cheap, then why go 20nm?
What favors 20nm, IMO, is that going from 28nm to 20nm bulk gives you the same benefit as every normal shrink, and without FinFET cost and complexity. We have to remember that. I simply think HP 20nm is quite ideal for GPUs, since they are not as dependent on low leakage and the low-power/high-performance characteristics FinFETs can give. And I guess GF's process is nowhere near ready for their own (Samsung-licensed) HP FinFET; 2016 is optimistic here.
I think some of the reservations about GPUs on 20nm stem from one of Charlie's earlier articles saying it was not a good fit. But IMO the argument against GPUs on 20nm (except that we haven't yet seen HP 20nm) simply isn't there. On the contrary: 20nm could be the last cheap ($/transistor) node, and as such a good fit for large GPU dies (and APUs).
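To put the $/transistor argument in concrete terms, here's a toy calculation. Every number in it (wafer prices, die counts, the cost_per_mtransistor helper) is a made-up placeholder just to show the shape of the argument, not real foundry pricing.

```python
# Toy $/transistor comparison. All wafer costs and die counts below are
# made-up placeholders, NOT real foundry numbers; the point is only that
# if wafer cost rises faster than density, $/transistor stops falling.

def cost_per_mtransistor(wafer_cost_usd, good_dies_per_wafer, mtransistors_per_die):
    return wafer_cost_usd / (good_dies_per_wafer * mtransistors_per_die)

# The same hypothetical 5-billion-transistor GPU on three nodes.
nodes = {
    # node: (hypothetical wafer cost in USD, hypothetical good dies per wafer)
    "28nm":        (4500, 90),
    "20nm":        (6000, 170),   # denser die -> far more dies per wafer
    "16nm FinFET": (8500, 180),   # ~20nm density, but a pricier FinFET wafer
}

for node, (wafer_cost, dies) in nodes.items():
    print(f"{node}: {cost_per_mtransistor(wafer_cost, dies, 5000):.4f} USD per million transistors")
```

With these placeholder inputs, 20nm comes out cheapest per transistor and 16nm FinFET lands back between the two, which is the scenario that would make 20nm attractive for big dies.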
If GlobalFoundries does 20nm, that would be ideal for AMD, but that's still up in the air. They might need to go 20nm for the R9 390X and R9 395X2 because of heat, power and space requirements.
They could do it across all the R9 300 cards, because it's cheaper to rebrand than to design new chips when we know both 20nm and 28nm are short-lived and pretty much at the end of the line. TSMC's 16nm FinFET is superior to 20nm, which is why both AMD and Nvidia are gunning for it with Greenland and Pascal.
I simply think AMD can catch up to Nvidia's efficient Maxwell by using the same GCN 1.x architecture but on 20nm, which would be cheaper for AMD than dishing out cash on a new architecture, IMO. We have all heard the $700+ price for Fiji; HBM and 20nm could be the reason.
You could be right that AMD is tweaking the cores a bit. After all, the R9 370, which looks to be a 270X rebrand, is GCN 1.0; it's missing TrueAudio, XDMA, FreeSync, etc. It would not surprise me if AMD has added those.
We know AMD has changed the names for the rebrands:
Tonga = Antigua
and Grenada, Tobago, etc. are in there as well. It seems strange to launch new names if at least some of the features haven't changed.
Maybe, but one thing you have to keep in mind is that the Pitcairn cards are seriously overvolted by default. Better binning, like what AMD did with the E-series FX chips, could get the power consumption of full Pitcairn down to 130W with no changes in silicon whatsoever.
In fact, I ran some experiments last night proving this. My video card is a Powercolor PCS+ 7870. I connected my PC's power plug to a Kill-A-Watt meter, which indicates that it consumes 68-72 watts while idling on the desktop (power usage fluctuates). When I set the card to stock 7870 settings (removing the factory OC), total system power usage during FurMark (measured at the wall) was 228W-233W. We know from TechPowerUp that the card's idle power consumption is about 12 watts, so this means the card is consuming about 170W-175W under FurMark - almost exactly what the TDP tells us.

Then I started dropping the voltage. I adjusted the core clock slightly down, to 950 MHz, but increased the RAM speed to 1250 MHz (technically overclocking, but not really, since that's the actual rated speed of the GDDR5 chips). I ended up at 950 MHz core, 1250 MHz RAM, 1.050 volts. (I could probably have dropped the voltage more, but this was about where improvements seemed to taper off.)

The result of this was that FurMark power consumption dropped to about 185W; once the non-GPU idle power is factored out, this means the GPU is pulling about 125W maximum. That's a huge difference. AMD could do this tomorrow, without any new silicon at all.
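If anyone wants to redo the arithmetic with their own readings, here's a minimal sketch of the subtraction I did above. The gpu_load_power helper is just my own illustrative name, and it assumes the rest of the system draws roughly the same power at idle as it does while FurMark hammers the GPU (and it ignores the PSU efficiency shift between the two load points).

```python
# Back-of-the-envelope GPU power estimate from two wall readings.
# All numbers are the ones quoted above (my PCS+ 7870 setup); the helper
# name and structure are just illustrative, not any official method.

CARD_IDLE_W = 12  # card-only idle draw, per TechPowerUp's review

def gpu_load_power(wall_idle_w, wall_load_w, card_idle_w=CARD_IDLE_W):
    """Estimate card-only load power from wall-idle and wall-load readings."""
    rest_of_system_w = wall_idle_w - card_idle_w   # everything except the GPU
    return wall_load_w - rest_of_system_w

# Stock 7870 settings: ~230 W at the wall under FurMark, ~70 W idle
print(gpu_load_power(70, 230))   # ~172 W, right around the 175 W TDP

# Undervolted to 1.050 V @ 950 MHz: ~185 W at the wall
print(gpu_load_power(70, 185))   # ~127 W, i.e. roughly the 125 W figure above
```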
By the way, I think we can be fairly sure AMD isn't going to port any GCN 1.0 parts to either 28nm SHP or 20nm without making some changes. Tahiti has already been superseded by Tonga, Cape Verde won't have a successor, and Pitcairn needs updating because it lacks FreeSync, TrueAudio, and other modern features.
Good test, but you must remember that just because your chip endured a voltage drop on that particular test doesn't mean another person's 7870 silicon can. There are pretty strict guidelines on specifications to ensure no chip failure across many tests, not just FurMark, and the base specs from AMD are guaranteed for all chips they sell to AIBs.
I agree that a voltage drop helps, no doubt there. If they found a 28nm process that is stable at lower voltage, that's certainly one way. But in terms of denser, smaller and more stable chips, 20nm with existing specs is probably better.
JDG1980: In the Mac Pro there is no Tonga chip, only Tahiti and Pitcairn. In the iMac there is full Tonga.
Cloudfire: Is there a possibility that there could be different names and device IDs in the drivers just for the OEM versions of GPUs? I mean a device ID and new name for a Pitcairn GPU that ends up being an OEM-only part?
I have no idea, but it seems strange to sell one 370 that is a rebrand for OEMs and another that is new. I know Nvidia has done it in the past, but the rebrand and the new chip did not share the same device ID.
Take the GTX 860M for example: the Kepler 860M has ID 119A while the Maxwell 860M has ID 1392. So I doubt it.
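To illustrate the point, here's a toy lookup (not actual driver code) showing how one retail name can sit on two different chips, each with its own PCI device ID. The two IDs are the 860M ones quoted above; everything else is just my own illustrative naming.

```python
# Toy PCI device ID lookup. The two IDs are the GTX 860M IDs quoted above;
# this is an illustration of the idea, not how any real driver is written.

PCI_ID_TABLE = {
    0x119A: ("GeForce GTX 860M", "Kepler"),
    0x1392: ("GeForce GTX 860M", "Maxwell"),
}

def identify(device_id):
    name, silicon = PCI_ID_TABLE.get(device_id, ("unknown", "unknown"))
    return f"{device_id:#06x}: {name} ({silicon})"

print(identify(0x119A))   # same retail name...
print(identify(0x1392))   # ...but the driver sees different silicon behind it
```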
Plus, VR-Zone says many existing 300-series chips will be rebrands:
http://vr-zone.com/articles/amd-fij...dad-tobago-gpus-set-debut-computex/89325.html
For someone so dismissive and condescending, you are the one who doesn't know what you're talking about. TDP is absolutely NOT worst-case power draw unless the IHV specifies it as such. There is no standard definition of how to set TDP; it varies by manufacturer and even between card models. I just explained this at the top of this page.
False once again. Some OEMs might do this, but I guarantee you the big OEMs don't validate their consumer systems for heavy GPGPU-type loads, because that's not what those systems are for. You may disagree, but unless you have some kind of proof or professional experience here, your opinion isn't factually based and doesn't carry weight. The power limits you mentioned also aren't hard technical limits; they can easily be exceeded. They are just the spec limits required for PCIe certification, and not every card is PCIe certified.
TDP is not typical load, dude, lol. Power measurements for AMD cards have shown that, and tons of other tests for many Nvidia cards have shown it too. You didn't seem to read my post where I demonstrated it. Try reading it again.
TDP is the worst a card will hit under realistic scenarios, not including FurMark, which is about as far from real-world load as you can get.
OEMs don't control TDP; the chip does. They design cooling and power delivery around it. They can't take a 200W GPU and set a 150W limit. Well, they can, but say goodbye to any potential customers once someone dumps the vBIOS and reads a 150W power limit. If AMD markets the card as 200W, you don't put a 150W limit on it.
PCIe slot power can be pushed past 75W, sure; the GTX 750 Ti mining cards are fresh in memory. But you are nitpicking details. Most AIBs follow the specification and add power connectors accordingly.
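For reference, here's a quick sketch of the spec-level budget most AIBs design around, using the usual 75W slot / 75W 6-pin / 150W 8-pin numbers; the board_power_budget helper is just my own illustrative name.

```python
# Quick board-power budget from the PCIe spec numbers (75 W slot, 75 W per
# 6-pin, 150 W per 8-pin). These are certification limits, not hard electrical
# limits; as noted above they can be exceeded in practice.

SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def board_power_budget(six_pins=0, eight_pins=0):
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

print(board_power_budget())                          # 75 W: slot-only cards like the GTX 750 Ti
print(board_power_budget(six_pins=2))                # 225 W: typical dual 6-pin layout (e.g. many 7870s)
print(board_power_budget(six_pins=1, eight_pins=1))  # 300 W: 6+8-pin high-end cards
```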