Isn't this how all previous ATi cards went, though?
1.) Shrink Die
2.) Double the Shaders
3.)???
4.) Win
Yes, but that leaves them open to being hammered by their rival's new arch. nV did not capitalize on this last time with GT200 (even though it was still the faster chip); we will see if they can do it with Kepler.
I thought Nvidia also built a "pathfinder"-type card: the GT 220/210, IIRC.
NVIDIA however picked a smaller die. While the RV740 was a 137mm² GPU, NVIDIA’s first 40nm parts were the G210 and GT220, which measured 57mm² and 100mm² respectively. The G210 and GT220 were OEM-only for the first months of their life, and I’m guessing the G210 made up a good percentage of those orders. Note that it wasn’t until the release of the GeForce GT 240 that NVIDIA made a 40nm die equal in size to the RV740. The GT 240 came out in November 2009, while the Radeon HD 4770 (RV740) debuted in April 2009 - 7 months earlier.
Was anyone expecting AMD to redo its architecture right after they just released a new one, one that hasn't even debuted in the mid-range and low-end/entry segments yet? I'd hope AMD would tweak the VLIW4 architecture a bit with the new release, as manufacturers do each release. As others mentioned, 28nm is more about power consumption, clock speeds, and putting more on the die.
Actually, that is being optimistic.
The market is inundated with midrange cards that perform at roughly GTX 280/GTX 285 levels, with DX11 support.
You need a reasonably sized chip for a pathfinder. The GT210/220 are tiny and don't tell you enough, really, AFAIK.
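To put rough numbers on that, here's a back-of-the-envelope sketch of why die size matters for a pipecleaner part, using the die areas from the AnandTech excerpt above and a simple Poisson yield model. The defect density is an assumption I picked for illustration, not a real TSMC figure:

import math

WAFER_DIAMETER_MM = 300.0   # standard 300 mm wafer
DEFECT_DENSITY = 0.5        # defects per cm^2 -- assumed, illustrative only

def gross_dies(die_area_mm2):
    # Crude gross-die count: wafer area / die area, minus an edge-loss term.
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    edge_loss = math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def poisson_yield(die_area_mm2):
    # Poisson model: yield = exp(-area * defect_density), area in cm^2.
    return math.exp(-(die_area_mm2 / 100.0) * DEFECT_DENSITY)

for name, area in [("G210", 57), ("GT220", 100), ("RV740", 137)]:
    good = gross_dies(area) * poisson_yield(area)
    print(f"{name} ({area} mm^2): ~{good:.0f} good dies/wafer, "
          f"yield {poisson_yield(area):.0%}")

The exact numbers are fiction, but the shape is the point: on an immature process, a 57mm² die gets you far more good dies per wafer than a 137mm² one, yet it tells you far less about how a big, hot GPU will behave on the new node, which is exactly the "too tiny to learn from" argument.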
Evolution of AMD's Graphics Core, and Preview of Graphics Core Next
Eric Demers, AMD Corporate Vice President and CTO, Graphics Division
GPU shader cores have been evolving frequently and significantly at AMD. We introduced our common shader core in 2007 with the HD 2000 series. This introduced the unified VLIW-5 instruction set that we've had since. In late 2010, we introduced the first significant departure from this core architecture, the symmetrical VLIW-4 used in the HD6900 series of products. In this presentation, we will review that evolution, but also present an overview of the next generation of AMD cores under development. This next generation of cores will propel forward its capabilities and continue this evolution.
http://developer.amd.com/afds/pages/keynote.aspx
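For those who haven't followed the shader-core details, the abstract's VLIW-5 to VLIW-4 shift is easiest to see with a toy packing model. This is my own simplification, not AMD's actual compiler/scheduler: VLIW-5 has four general slots plus a dedicated T-slot that alone handles transcendentals (sin, rcp, etc.), while Cayman's VLIW-4 has four symmetric slots and gangs three of them together for a transcendental.

def pack(ops, general_slots, t_slots, trans_cost):
    # Greedy bundle packer; returns (bundle count, average slot occupancy).
    # A "trans" op goes to a T-slot if one exists, otherwise it occupies
    # trans_cost general slots (how Cayman's VLIW-4 handles transcendentals).
    ops = list(ops)
    bundles = used = 0
    while ops:
        free, t_free = general_slots, t_slots
        for o in list(ops):
            if o == "trans":
                if t_free:
                    t_free -= 1
                elif trans_cost <= free:
                    free -= trans_cost
                else:
                    continue
            elif free:
                free -= 1
            else:
                continue
            ops.remove(o)
        bundles += 1
        used += (general_slots - free) + (t_slots - t_free)
    return bundles, used / (bundles * (general_slots + t_slots))

for name, mix in [("ALU-heavy  ", ["simple"] * 8),
                  ("trans-heavy", ["simple"] * 4 + ["trans"] * 4)]:
    b5, u5 = pack(mix, 4, 1, 99)   # VLIW-5: trans only fits the T-slot
    b4, u4 = pack(mix, 4, 0, 3)    # VLIW-4: trans gangs 3 general slots
    print(f"{name}: VLIW-5 {b5} bundles ({u5:.0%} occupied), "
          f"VLIW-4 {b4} bundles ({u4:.0%} occupied)")

On this toy mix, VLIW-4 keeps its slots busier on plain ALU work (the whole point of dropping the often-idle T-unit) but issues more bundles when transcendentals pile up, which matches the trade-off commonly described for Cayman.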
tincart said: New arch + new process technology. That worked really well with Fermi.
SickBeast said: I would be very surprised if this rumour is true, though. I've read that AMD initially wanted to release a "full" Cayman GPU on a smaller process but they ran into manufacturing problems. Really that's just marketing spin for "we didn't time it right".
It's sad too; the HD4770 "40nm pipecleaner" debuted two years ago. (It was April 2009, wasn't it?)
If 28nm were sticking to a 2-year node cadence, then we should have seen a 28nm pipecleaner product from AMD this spring.
The more 28nm becomes a 3-year cadence from 40nm, the less impressed I'm going to be with the whole "be teh excited cuz its HKMG y'all" process-tech angle.
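And for the process-tech angle itself, the ideal-scaling arithmetic (which real design rules never fully deliver, so treat it as an upper bound):

# Ideal shrink: area scales with the square of the linear feature size.
for old, new in [(55, 40), (40, 28)]:
    print(f"{old}nm -> {new}nm: ideally ~{(new / old) ** 2:.0%} of the original area")

Roughly half the area per full shrink is the prize, which is why a 3-year wait stings.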
Both Nvidia and ATI had to drop their 32nm designs due to foundry issues. That's what was supposed to come out instead of the HD 6000 and GTX 500 series.
We have to settle for 580s and 6870s for another year? Meh. I guess the die-shrink parade is over. We'll save a fortune on upgrades, I suppose.
Had the 32nm capacity been more mature, they would have had no problem at all releasing the "full" Cayman. No, the marketing spin really says: "We had the chip all set for the 32nm process, with 1920 shaders and all, but our chipmaker couldn't give us a decent 32nm process, so we were forced to use the current-generation 40nm process for our new-generation GPUs."
That's much more accurate...
