It's still bad. I don't know why AMD doesn't just die-shrink Thuban.
40 EUs tells us quite little. I wouldn't expect 150% more performance, just as I didn't expect 3x more performance going from the GTX 580 to the GTX 680, unless Intel actually almost triples the die size of the iGPU.
At least on the 3770K benchmark, it's faster than Llano: http://www.anandtech.com/show/5626/ivy-bridge-preview-core-i7-3770k/16

Not even worth mentioning how far behind Intel is in GPGPU on their iGPUs.
Sandy Bridge's GPU takes up 38mm² on the 32nm process. Ivy Bridge's is similar in size, but at 22nm. The 20EU version of Haswell is at ~60mm², and the 40EU version looks to be around 100mm².
Also, don't count out Ivy Bridge yet. There may yet be a positive side to Ivy Bridge arriving closer to May rather than January. Around the same time last year, the new Sandy Bridge driver that brought nice gains was released: http://www.intel.com/performance/desktop/2ndgencore/hdgraphics/index.htm
At least on the 3770K benchmark, it's faster than Llano: http://www.anandtech.com/show/5626/ivy-bridge-preview-core-i7-3770k/16
A very good situation for consumers in that area, although bad for me while I wait for desktop Piledriver, since AMD has to spend so many resources maintaining its iGPU lead that can't be spent awesomizing the CPU.
Nah, I think you should read my reply properly instead. My response was to Arzachel's comment that GPGPU is far behind on Ivy Bridge, when Anand's bench doesn't show that.
http://www.xbitlabs.com/news/cpu/di...nity_Fusion_APUs_for_Notebooks_on_May_15.html
If it's really 25% faster/better than Bulldozer it might be ok.
Oh noezzzzzzzzz AMD sukz sooo much lolzzzzzzzzz.
Sadly, by the time those May driver improvements arrive, Trinity should start to trickle out to retailers. What's worse, I hadn't read the preview on Anandtech before and was kinda considering the iGPU on Ivy to be within 5% of Llano. Seeing that the gap is actually 20-50% was a bit of a shock, to say the least. 10% is a reasonable performance delta over a generation due to better drivers, but even that wouldn't bring parity. It also confirms my fear that Ivy Bridge can't keep up in image quality. This is the first time I've felt disappointed in a hardware manufacturer, because I'd hoped for some nice competition when I go shopping for a laptop this summer.
This point gets overlooked all too often - is it any surprise that Llano performs so much better when its iGPU uses over twice the die area of SNB's? What is surprising is that SNB is actually pretty close to Llano in terms of area efficiency, given how it compares to the A4 line.
Despite the improvement in Ivy's GPU, I don't think anyone expected it to be a desktop video card replacement. Where Ivy will shine is in the notebook area. Remember, the HD 4000 GPU in desktop Ivy will be virtually identical to the HD 4000 GPU in the mobile version. That means the GPU in a 17W Ivy should perform the same as the 3770K Ivy that Anand tested (assuming it's not CPU-limited).
While desktop Llano may be faster than Ivy, that may not transfer over to the notebook segment. The desktop A8-3870K Llano that Anand used to compare against Ivy has 400 shaders at 600 MHz, while the top-of-the-line mobile A8 Llano runs at only 444 MHz. Factor that in, and mobile Ivy should perform about the same as the mobile A8 Llano and beat the mobile A6 and A4 Llanos, which have 320 and 240 shaders.
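The clock/shader reasoning above can be sanity-checked with a rough first-order model (throughput proportional to shaders × clock). This is only a sketch: it ignores memory bandwidth, drivers, and turbo behavior, and it assumes the mobile A8 keeps the desktop part's 400 shaders, which the post implies but doesn't state.

```python
# Rough first-order iGPU throughput model: shaders * clock (MHz).
# Ignores bandwidth, drivers, turbo, etc. - illustration only.
def throughput(shaders, clock_mhz):
    return shaders * clock_mhz

desktop_a8 = throughput(400, 600)  # A8-3870K (desktop Llano)
mobile_a8 = throughput(400, 444)   # top mobile A8 Llano (assumed 400 shaders)
mobile_a6 = throughput(320, 444)   # mobile A6 Llano
mobile_a4 = throughput(240, 444)   # mobile A4 Llano

for name, t in [("mobile A8", mobile_a8), ("mobile A6", mobile_a6),
                ("mobile A4", mobile_a4)]:
    print(f"{name}: {t / desktop_a8:.0%} of desktop A8-3870K")
```

Under this model the mobile A8 lands at about 74% of the desktop A8-3870K, the A6 at roughly 59%, and the A4 at around 44% - consistent with the claim that mobile Ivy only needs to close part of the desktop gap to match the mobile A8.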
It seems really soon for Piledriver/Trinity to be coming out. Didn't they say it would come out a year after Bulldozer? Didn't Bulldozer come out last October?
Yes, SB is lacking a few features in terms of what's actually enabled/fully implemented, but how much does that actually impact die size?

How exactly is it surprising? SB is missing a ton of features and is made on a mature 32nm process, compared to Llano, which is made on a process node that was new for both AMD and GF and plagued with a ton of issues.
Because Intel's 17W parts have that much lower graphics turbo frequency? Yeah, I guess they do - 1.2GHz instead of 1.35GHz. Yes, they also have some 17W SKUs with lower graphics turbo frequencies, the lowest I see being 800MHz.

Only if you expect the HD 4000 to perform as well at ~500MHz as it does at ~1000MHz. Conveniently enough, a 320-shader part at 444MHz has around half the performance of a 400-shader part at 600MHz.
I'll give you tessellation, seeing as it's not implemented at all in SNB, but neither OpenCL nor 'UVD' can be counted against SNB die space. OpenCL is more a matter of a few bug fixes than actual die size, and last I checked SNB decode was on par with both AMD and NVIDIA offerings. As for IQ, no question that it's behind there due to its lack of proper anisotropic filtering, but that's effectively fixed with IVB and again didn't affect area much, as all the logic was already present, just not working properly.

Saying SB is lacking a few features might be a slight exaggeration. UVD and a tessellator already take up non-zero die space, and SB supports neither OpenCL nor DirectCompute on the iGPU. ~3 times smaller for 40% of the performance, with worse IQ and fewer features, doesn't seem that stellar to me, diminishing returns considered.
It is? That must be why, according to notebookcheck.net, the 2557M in a MacBook Air gets 1360 on the 3DMark Vantage P GPU score while the 2620M in a MacBook Pro gets 1477... Oh wait, that's pretty much exactly the difference between their 1.2GHz vs 1.3GHz turbo speeds.

I'll be blunt: GPU turbo frequencies are totally meaningless for a 17W SKU, even if you enjoy gaming in a freezer.
AMD would be better off making good software frameworks for GPU compute and pushing that integration into as many applications as possible - like NVIDIA's "The Way It's Meant to Be Played," but for applications. Intel is still a few years away from matching AMD in OpenCL and DirectCompute performance.