Discussion in 'Video Cards and Graphics' started by blastingcap, Nov 3, 2012.
Not anymore, I think. JHH mentioned he's comfortable with the supply-demand balance.
EDIT: I looked at the wrong con call and blew it. My bad. Wish we had strike-through as a formatting option:
No, they are still supply-limited on 28nm. NV expected that to be true and has been pleased with TSMC's efforts to mitigate that particular problem, so he is comfortable insofar as NV is where it expected to be. I think they would be happier with more supply, because there is a question as to how much money NV left on the table due to lower supply.
That was last quarter.
^ That's the last quarter's con call.
Here's the latest:
"Q : Could you talk about 28-nanometer supply, if that was a limitation at all this quarter, and what impact that may have on your January quarter outlook?
A : Well, the 28-nanometer yield and 28-nanometer supply situation have both improved substantially. And so we feel pretty good about the balance of supply and demand at the moment."
There will be two versions of K20:
K20: 13 SMX, 1.13 TFLOPS and 5GB
K20X: 14 SMX, 1.31 TFLOPS and 6GB
This came from a news post that was pulled immediately, but another site has copied it:
The announcement from Nvidia should follow in the next two days with the start of Supercomputing 2012.
K20 specs work out to 705 MHz, 2496 CUDA cores, 5 GB GDDR5, 320-bit memory.
705 MHz × 2496 cores × 2 = 3.51 TFLOPS SP
3.51 / 3 = 1.17 TFLOPS DP
K20X specs work out to 732 MHz, 2688 CUDA cores, 6 GB GDDR5, 384-bit memory.
732 MHz × 2688 cores × 2 = 3.93 TFLOPS SP
3.93 / 3 = 1.31 TFLOPS DP
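For anyone who wants to check the arithmetic above, here's a minimal sketch: peak SP throughput is clock × cores × 2 (one fused multiply-add per core per cycle), and GK110 runs DP at 1/3 the SP rate. The function name is just for illustration.

```python
def peak_tflops(clock_mhz, cuda_cores, dp_ratio=1/3):
    """Peak throughput: clock * cores * 2 FLOPs/cycle; DP is a fixed ratio of SP."""
    sp = clock_mhz * 1e6 * cuda_cores * 2 / 1e12  # TFLOPS, single precision
    return sp, sp * dp_ratio

for name, mhz, cores in [("K20", 705, 2496), ("K20X", 732, 2688)]:
    sp, dp = peak_tflops(mhz, cores)
    print(f"{name}: {sp:.2f} TFLOPS SP, {dp:.2f} TFLOPS DP")
```

(The small differences from the figures quoted above are just rounding.)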
Looks more like the K20 will fit a 225 W TDP, while the K20X will be 300 W. The desktop GTX 780 could be based on an 800 MHz, 2880-CUDA-core chip with a TDP of 270-300 W. Nvidia will be stockpiling this chip for a Q1 2013 launch.
The K20X uses 225 watts. There is no need to make things up.
How do we know for sure?
The K20 is the mass product. The K20X is the binned product with more performance and more memory, a price tag way above the K20, and availability a few months later.
For example: the M2050 and M2070 have the same TDP, and yet the latter has 3 GB more memory.
Hm, so yields for 14 SMX are bad...
I hope that doesn't exclude a 14/15 SMX part for Geforce.
What do yields have to do with the binning process? Nvidia is limited by power, not by size. They started the ramp a few months ago. They must test all chips and check whether they are within the power envelope and reliable enough.
If K20 and K20X have the same TDP, why not use K20X from the beginning?
You have process yields and binning yields. You might get 100 functional dies on a wafer out of a possible 200 (50% process yield; the other 100 dies are completely dead and unusable), but not all of those 100 work with 14 or 15 SMX enabled at a given frequency. Larger dies normally have a higher defect probability, so by disabling one or two defective SMX you can salvage a chip you would otherwise have to throw away.
I believe you always get more partially defective chips than completely "healthy" chips from a wafer. Simple probability math.
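The "simple probability math" can be sketched with a basic Poisson defect model: the chance a die has zero defects falls exponentially with its area. The defect density and die area below are illustrative assumptions, not Nvidia/TSMC figures.

```python
import math

def die_yield(defects_per_cm2, die_area_cm2):
    # Poisson model: P(zero defects on a die) = exp(-D0 * A)
    return math.exp(-defects_per_cm2 * die_area_cm2)

D0 = 0.5   # assumed defects per cm^2 (illustrative)
A = 5.5    # GK110 is roughly 550 mm^2 = 5.5 cm^2

mean_defects = D0 * A
p_perfect = die_yield(D0, A)  # chance all 15 SMX are defect-free
# With redundancy, a die with 1-2 defects can often be salvaged
# by disabling the affected SMX (the 13- and 14-SMX bins):
p_le_2 = sum(mean_defects**k * math.exp(-mean_defects) / math.factorial(k)
             for k in range(3))
print(f"fully working: {p_perfect:.1%}, salvageable (<=2 defects): {p_le_2:.1%}")
```

Under these assumed numbers, salvageable dies heavily outnumber perfect ones, which is exactly the argument for harvested 13/14-SMX SKUs.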
You know this is mainly TDP-based binning, not defect-based?
Had they been sold as GeForce cards, they would almost all likely be 14 SMX, since there is no 225 W TDP limit there.
Because there are not enough chips after the ramp. Why do you think the mobile versions of the mid- and high-end chips come out much later? They are limited by power. Nvidia and AMD bin enough chips before they sell them to the OEMs.
Nvidia announced the M2090 seven months after the GTX 580, with less performance.
The M2050 came to market three months after the GTX 480 and had only 69% of its compute performance.
So do you really believe that in both cases yields were so bad for the 15-SM (GF100) and 16-SM (GF110) chips that Nvidia needed to sell them in the GeForce market first?
It is likely, isn't it? I just read that most if not all server vendors require a 225W TDP, so that would likely extend to K20X too.
The K20X may be a tiny bit higher than 225 W TDP, but there is no way (LOL) it has a 300 W TDP with such a small bump in clock speed. Keep in mind that core clocks affect TDP more than the number of active functional units when comparing the same chip.
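A back-of-envelope estimate shows why: dynamic power scales roughly with active_units × f × V². The ~2% voltage bump assumed for the higher clock is purely illustrative; the point is that the ratio lands far below 300/225.

```python
def rel_power(units, clock_mhz, volts, base=(13, 705, 1.0)):
    """Relative dynamic power vs a baseline config: units * f * V^2 scaling."""
    u0, f0, v0 = base
    return (units / u0) * (clock_mhz / f0) * (volts / v0) ** 2

# K20X vs K20: 14 vs 13 SMX, 732 vs 705 MHz, assume ~2% more voltage
ratio = rel_power(14, 732, 1.02)
print(f"estimated K20X power vs K20: {ratio:.2f}x -> ~{225 * ratio:.0f} W")
```

Under these assumptions the estimate lands well under 300 W, i.e. "a tiny bit higher than 225 W" is plausible, a 33% jump is not.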
I am sticking to my original prediction that GeForce will get a 14-SMX GK110 as its initial flagship (GTX 780), followed by a 15-SMX version in late summer/early fall (GTX 785). The 13-SMX version will be the GTX 770. I wonder if Nvidia will go with an asymmetric memory configuration if the 13-SMX version has a 320-bit memory interface.
This probably depends heavily on both die size and transistors per mm². But with dies like Tahiti, GK104, and most certainly GK110, yeah, you are most likely correct.
You forget that the M2050 and M2070 are identical in all aspects except VRAM size: core count, clocks, memory bus width, etc.
Between the K20 and K20X there is a 1-SMX difference, a 27 MHz core-speed increase, and an extra 64-bit memory controller, and yet you expect the same TDP. :thumbsdown:
Of course it can; look at the GTX 480 vs. the 580.
If you want to make guesses as to what a chip's TDP is, go for it. But when you state it as a matter of fact and provide no verifiable proof, then expect to be called out.
Cray, HP, Tyan, etc. won't change their server racks just for Nvidia. 225 W is usually the max.
The K20, with 2496 cores at 705 MHz and 5 GB GDDR5, was listed on a workstation vendor's website with a 225 W TDP before it was pulled. Here is another article.
Anyway, the information is going to be out on Monday at SC12, so we will get official specs from Nvidia to end all speculation.
Good point. So do you think the HPC-grade boards were released first this time because Nvidia had a deadline to meet (and likely with very high margins) for the ORNL supercomputer?
Though I understand you may not care to speculate, do you think GeForce (gaming) cards based on the GK110 GPU will be released?
This is nitpicking, but yields don't necessarily mean functional yields. Transistors running out of spec (parametric yield) most likely hit harder in this case. Still, this is definitely die harvesting, and whether the SMX outright don't work or need so much voltage that it's more feasible to cut them than to lower clock targets makes little difference. Arguing that TSMC's 28nm process is doing what Nvidia wants and everything is nice, pretty, and perfect is just silly.
Anyway, with Big Kepler taking the high end, are the 760/Ti SKUs expected to be pretty much underclocked 670s/680s? I'd love to get a decent 15" laptop under 900, and it would be nice if it came with a high-end GK106 or low-end GK104 GPU.