Just posted by Anand on Twitter:
So what if the Llano graphics card OCs to 750MHz? That would be fairly interesting; I hope Anand will OC Llano extensively.
EDIT: BTW, can Anand benchmark this Llano in Bitcoin?
Just posted by Anand on Twitter:
The difference between DDR3 1333 and DDR3 1866 is around the same difference between 6450 and 5570.
More important the price difference between DDR3 1333 and DDR3 1600 is negligible, and that also provides a nice boost.
The price difference between 6450/5570-level and 5770-level performance is $50-70.
We are discussing future iterations of Fusion.
The cost differential between a Llano laptop and an i3/i5/i7 rig could very well be sufficient to cover an SSD.
Unfortunately you can't compare them that way, or I should say you can't attribute the results of such a comparison to the process integration as you are attempting.
Over half of the Llano die is composed of a transistor block that needs only operate at ~400-500MHz. When your clockspeed requirements are that low, your Idrive requirements are lowered as well.
GloFo's 32nm xtors are electrically two-dimensional; they have a length and a width (as do Intel's). "Gate density" is determined by the gate length, but you must make the xtors as wide as needed for Idrive purposes (hitting your clockspeeds).
This is the same basic device physics that are at play when you see sram cell sizes changing between full-speed L2$ cache versus 1/2 speed (or slower) L3$. The slower L3$ is more dense, it can be more dense because the clockspeed is intentionally reduced meaning the xtor widths can be reduced as well, leaving more room to pack in more xtors in the same area.
SB benefits from this as well, since its GPU likewise runs at lower clocks; xtors can be intentionally slower (i.e. smaller) in the GPU logic versus the CPU logic, but the relative GPU area is smaller than in Llano.
To get a feel for the normalized xtor density benefits of gate-first versus gate-last between these two processes we need to compare IC circuits that are nearly identical (including the clockspeeds and the operating voltages).
While IBM did not present any information on actual array density, it is possible to make some inferences. Comparing IBM’s eDRAM in the POWER7 to comparable SRAMs from Intel yields a roughly 2X density advantage at the same node. Equivalently, IBM’s 45nm eDRAM slightly exceeds the density of Intel’s 32nm SRAM. Based on the results demonstrated and IBM comments, the overall array area should scale by 60% at 32nm. This suggests that IBM can expect roughly a 2X advantage for their storage arrays and possibly some further upside with innovations in the overall array architecture.
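A quick back-of-the-envelope version of the scaling in that quote (normalized units; the 2X and 60% figures come straight from the quote, and nothing here is a real cell size):

```python
# Normalized density arithmetic for the eDRAM-vs-SRAM claims quoted above.
intel_sram_density = 1.0                       # baseline: Intel SRAM, same node
ibm_edram_density = 2.0 * intel_sram_density   # "roughly 2X density advantage"

# "the overall array area should scale by 60% at 32nm":
# the same array takes 0.6x the area, so density rises by 1/0.6.
scaling_32nm = 1.0 / 0.6
edram_density_32nm = ibm_edram_density * scaling_32nm

print(f"node-to-node density gain: {scaling_32nm:.2f}x")               # ~1.67x
print(f"vs the same-node SRAM baseline: {edram_density_32nm:.2f}x")    # ~3.33x
```

That is consistent with the quote's point that IBM's 45nm eDRAM already slightly exceeds Intel's 32nm SRAM density, with the 32nm shrink extending the gap further.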
You are right, but ~46% more transistors (995M vs. 1.45B) for only 5.5% more die area (216mm² vs. 228mm²) is a big difference.
http://www.realworldtech.com/page.cfm?ArticleID=RWT021511004545&p=3
(Last paragraph)
And from the following table, the IBM/AMD/Freescale 32nm process (IFA d) has an Lgate of 25nm, whereas Intel's 32nm process has an Lgate of 30nm.
http://www.realworldtech.com/includes/images/articles/iedm10-10.png
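Quick sanity check on those die stats (just the arithmetic from the numbers quoted above, 995M transistors/216mm² vs. 1.45B/228mm²):

```python
# Transistor density from the die stats quoted in the post above.
xtors_a, area_a = 0.995e9, 216.0   # 995M transistors, 216 mm^2
xtors_b, area_b = 1.45e9, 228.0    # 1.45B transistors, 228 mm^2

density_a = xtors_a / area_a       # ~4.61M xtors/mm^2
density_b = xtors_b / area_b       # ~6.36M xtors/mm^2

print(f"extra transistors: {xtors_b / xtors_a - 1:.1%}")     # ~45.7%
print(f"extra die area:    {area_b / area_a - 1:.1%}")       # ~5.6%
print(f"density ratio:     {density_b / density_a:.2f}x")    # ~1.38x
```

So the larger chip packs roughly 1.4x the transistors per mm², which is the density gap being debated here.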
Although a large portion of the die only operates at 444MHz (the fGPU domain and more), I still believe GloFo's 32nm gate-first SOI HKMG process played the bigger role in that high transistor density.
I could be wrong!
I expected a lot of sizzle, and what happened was a lot of fizzle. Leaves me a sad panda!
I must admit I was expecting a tad more, uhm, fanfare(?) from GloFo and AMD regarding the technological prowess of their 32nm process tech, given that this is the first product to come to market that was designed and birthed by not only the vision to buy ATI but also the vision to spin off the fabs. (Bobcat counts, but it wasn't done at GloFo, is what I mean.)
I expected a lot of sizzle, and what happened was a lot of fizzle. Leaves me a sad panda!
You're only getting ~10% more fps for each step up in RAM speed, so going from 1333 to 1866 only buys ~20% performance. Isn't that what we'd expect to see? I'm not sure how that equates to being bandwidth limited; it looks balanced to me.

Sure, we'd like it to be faster, but 1866 RAM is still rather expensive. Typical OEM pricing on 2x2GB 1066 DDR3 is about $30. It's an extra $5 to bump up to 1333, and about an extra $12 to bump from 1066 to 1600. To go to 1866, the price increase jumps all the way to $25. It's not worth it to go to 1866, but it definitely is worth it to go to 1600, and that is where OEMs should settle. It's up to the consumer to choose the 1600 systems over the 1333 systems. But I bet we're going to see plenty of 1333 systems on sale in a few months for under $400, and they'll be a hell of a deal.
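The marginal cost of each speed grade works out like this (prices are the OEM figures from the post, taking the post's "~10% more fps per step" at face value):

```python
# OEM 2x2GB DDR3 pricing from the post: $30 base plus the quoted increments.
prices = {1066: 30, 1333: 35, 1600: 42, 1866: 55}

speeds = sorted(prices)
for lo, hi in zip(speeds, speeds[1:]):
    step_cost = prices[hi] - prices[lo]
    # assuming ~10% more fps per speed grade, per the post
    print(f"DDR3-{lo} -> DDR3-{hi}: +${step_cost}, ~${step_cost / 10:.2f} per % of fps")
```

The cost per percent nearly doubles at the 1866 step ($13 vs. $7 for the 1600 step), which is exactly the "settle at 1600" argument.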
Here is a really funny video; this guy goes nuts over Intel's naming scheme.
http://www.youtube.com/watch?v=iiBkhw4ztck
I expected a lot of sizzle, and what happened was a lot of fizzle. Leaves me a sad panda!
I will remember this quote for a long time :awe:
I think Llano will do pretty well in the market place, as long as the machines with it are reasonably priced. A 14" machine with a 4 core/400 SP Llano would need to be under $600 for me to even consider it though. Even still I think I'd rather shoot for a dedicated graphics machine at $800 instead.
I notice the label says "2.9Ghz". Is that the stock clock, or the turbo clock?
I think that I've suggested this before, but perhaps CPU makers (with the advent of Turbo modes), will start selling CPUs on the basis of their "max clocks". So CPUs would be labeled, "2.9Ghz max", or "up to 2.9Ghz". Just like DVD burners.
The 3850 has four cores running at 2.9GHz and doesn't support Turbo Core.
From Anandtech preview article:
So which is it? Does Llano have Turbo Core, or doesn't it?
Hopefully, AMD will release a 35W Llano with some sort of Turbo Core.
I don't get it. That article says that the 2.9GHz Llano desktop chip doesn't have Turbo Core.
But this mobile review has a big page all about it:
http://www.anandtech.com/show/4444/amd-llano-notebook-review-a-series-fusion-apu-a8-3500m/4
So which is it? Does Llano have Turbo Core, or doesn't it?
Possibly. Anand mentioned BIOS immaturity on that ASRock A75 Extreme6 board he was using; perhaps that was what was limiting Turbo Core?
I'm kind of confused about the issue.
Hopefully, AMD will release a 35W Llano with some sort of Turbo Core.
TBH it is perplexing to me as well. They have a 100W TDP budget to play with, there are only four cores, and the GPU is not exactly a monster either, all on 32nm SOI with HKMG, and yet they aren't taking the core clocks over 2.9GHz?
For a Stars-core derivative shrunk to 32nm I expected a lot more clockspeed/watt than what these early indications are showing.
I'm starting to wonder if this isn't going to be another 90nm->65nm type of transition where the 65nm chips could hardly clock as high as their 90nm older siblings.
We already know Bulldozer was officially delayed because of lackluster clockspeed yields; maybe a 2.9GHz quad core with a low-end GPU bolted on really does suck up 100W on GloFo's 32nm process. It's a shame if true, because it would likely mean we're looking at another 12 months or so before GloFo tweaks 32nm to get clockspeeds up to where they need to be now.
I don't know; it would be nice to have something concrete to refute the picture that is shaping up from all the puzzle pieces we are collecting.
Between the Bulldozer delay, the Llano core clocks, and the timing of the BAPCo withdrawal, it's hard to see where the upside surprise is going to come from.
The only time these types of savings would be worth it is if you're buying in bulk for a company/school; even a $50 savings is nice if you're buying 1,000 laptops. I'm with you though: for personal use, I'd rather spend a little extra and get a dedicated graphics card with it.
