AMD reveals more Llano details


Falloutboy

Diamond Member
Jan 2, 2003
5,916
0
76
What I wonder is whether the GPU will be clocked differently than the CPU. If they can get 480 stream processors running at the processor's 2 GHz+ clock, or even at half that speed, that would be quite a bit of power
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
Sandy will be an awesome CPU, but to seriously think its on-die iGFX stands a chance against Llano's 200-400 SPs, vector 4+1 units with full DX11/OpenCL/DirectCompute capability, is simply naive. We're talking a 5650/5670-class GPU here. To each his own; AMD and NV try hard to differentiate from Intel, and their GPUs are the key differentiator for them, but for that to work, software needs to catch up and exploit them. Too early, infant stage, but promising.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
What is naive is for anyone to suppose that Intel is sleeping. None thought that Clarkdale IGP would be as good as it is . As many are aware Intel has found away to offload IGP to work the cpu. It is also naive to suppose Intel needs Dx11. Since they already said they don't. DX11 is a MS thing . Than one has to consider AVX to not do so is naive as it gets. Its going to be interesting . But I putting my money on SB to shock and awl.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Sandy/Ivy GPU will not be impressive. Intel will be relying on its 256-bit vectors for SIMD performance until it has a decent heterogeneous part. AMD is bringing tremendous potential to the entry level with Llano, but it does depend, to an extent, on OpenCL/DX GPGPU proliferation. Llano will provide much higher aggregate floating-point performance than AVX, provided that condition is met. Intel will eventually transition to a fully LRBni-enabled IGP by the 28nm node, because it is faster and more power-efficient than powering up an entire core simply to play a video or do a transcode-copy to a mobile device. These are the battery-draining jobs you can count on the mobile user to do at any time.

The usual rules will still apply, but they will slowly begin to operate at a finer grain. You still need to look at which applications and services you run (and the different kinds of hardware acceleration they support) to decide whether a ~5 GHz Clarkdale (or Sandy equivalent) or a ~3.5 GHz Llano is the better machine. Abundant power-gated cores with an on-die Radeon 5000 is a compelling solution for the mission that the majority of PCs share. The integration brings cost and power savings, and the size of the GPU they can bring on-die at 32nm is impressive.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
What usual rules are you speaking of. Intels sandy with 2 ondie Gpus will not = whatever AMD comes up with .SB with dual IGP and AVX looks very formidable. This is the same as Clarkdale befor release . I lot of talk about intel IGP without ever seeing what it could do . Than you have AMD PHII up against Intels sandy both with fusion and you suppose AMD has the upperhand . Thats not even good logic infact it lacks logic of any degree.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Nemesis 1 said:
What is naive is for anyone to suppose that Intel is sleeping. None thought that Clarkdale IGP would be as good as it is . As many are aware Intel has found away to offload IGP to work the cpu. It is also naive to suppose Intel needs Dx11. Since they already said they don't. DX11 is a MS thing . Than one has to consider AVX to not do so is naive as it gets. Its going to be interesting . But I putting my money on SB to shock and awl.

Nemesis 1 said:
What usual rules are you speaking of. Intels sandy with 2 ondie Gpus will not = whatever AMD comes up with .SB with dual IGP and AVX looks very formidable. This is the same as Clarkdale befor release . I lot of talk about intel IGP without ever seeing what it could do . Than you have AMD PHII up against Intels sandy both with fusion and you suppose AMD has the upperhand . Thats not even good logic infact it lacks logic of any degree.

You are the one saying that Intel is offloading work from the GPU to the CPU. You are the one who has your facts backwards and it is your unintelligible and sloppy use of language that makes it even more difficult for others to tailor a response. People can either ignore you or they have to somehow sift through your absurd misspellings to make any sense out of your clumsy statements, acquiesce to your bias while laying out facts that you will ultimately ignore. You must understand that it is not at all a pleasure to read your posts, let alone reply to them.

If you have kept track of any integration trends in the past ten years you will see that the general purpose of graphics integration is to move work from the CPU to the GPU and *not* the other way around. The CPU is large and power hungry and if you don't need it, you should keep it idle for as long as possible. You can't just use half a core if you need to get a job done. You have to power up the whole core and even your straightforward SIMD-like video acceleration runs down the entire pipeline of that core (which by itself has a greater TDP than most IGPs). If you had the slightest idea what you were talking about, you would have no objection to the obvious trends we are moving along because you would see that specialized integration is a higher performance, more efficient solution than software emulation on a gargantuan general purpose CPU.

Why do something at 35w when it can be done with only 10 or 15 watts? That's why we do it on the GPU. Integrating GPU hardware onto the die of the CPU does not change this fact (in fact it exaggerates it) because it is the very nature of having small/abundant execution units versus a handful of large ones.

Between the Phenom II cores and 480 shaders, the Llano device is looking at 700-800 SP gigaflops, or roughly 300 DP gflops. 8 sandy bridge cores @ 4 GHz will only hit 256 DP gflops. 256 gigaflops is not far from 300 gigaflops, but hopefully you will see that as this trend pans out over time, compute shaders will prove faster and more flexible than SSE registers, and they don't clog up the CPU and waste power. That's why everyone, including intel, has plans to incorporate coprocessing on the IGP. Both the vector acceleration from the Llano IGP and Sandy's AVX are non-native and will require software developers to include support.
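The peak numbers above can be sanity-checked with simple arithmetic. A quick sketch (Python); the ~0.75 GHz Llano shader clock is an assumption, since AMD hasn't published final clocks:

```python
# Peak GFLOPS = execution units * clock (GHz) * FLOPs per unit per cycle.
def peak_gflops(units, clock_ghz, flops_per_cycle):
    return units * clock_ghz * flops_per_cycle

# Llano IGP: 480 shaders, each issuing a MAD (2 SP FLOPs) per cycle.
# The 0.75 GHz clock is an assumed figure, not a confirmed spec.
llano_sp = peak_gflops(480, 0.75, 2)   # 720 SP GFLOPS

# 8 Sandy Bridge cores at 4 GHz with AVX: one 256-bit add plus one
# 256-bit multiply per cycle = 8 DP FLOPs per core per cycle.
sandy_dp = peak_gflops(8, 4.0, 8)      # 256 DP GFLOPS

print(llano_sp, sandy_dp)  # 720.0 256.0
```

Nudge the assumed shader clock up or down and you land anywhere in the quoted 700-800 SP gigaflops range; the 256 DP gflops figure falls straight out of the AVX issue width.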

You can speculate all you want about the performance you would like to see from intel, but the math is right there waiting for you if you can do it. Intel has the better CPU *per thread*, and that isn't going to change for at least the next few years, but intel will need 512-bit or kilobit vectors if they expect x86 to compete with GPGPU in aggregate floating-point throughput (with regard to GPGPU devices like Llano or larger: 480 to 3,200 shaders). The cost of double-precision FP on shaders is going down while the cost of widening your vector units is going up. Intel most certainly is not diving into kilobit vectors right away; it would not be a long-term solution as more shaders are brought onto the IGP. They are doing the smart thing and going for LRBni with the Haswell IGP. Accelerated GPGPU will proliferate slowly, and they can afford to sit out this round.

Sandy Bridge IGPs will not be LRBni-enabled (nor OpenCL/Dx11) and will only provide video acceleration like the current GMAs. They will not accelerate x86 as was envisioned with larrabee/haswell IGPs. And where did you read that Sandy Bridge will have dual IGPs? This is why people ignore your posts. They hardly pass even as talking points.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Well it is you who doesn't know the facts and for you to say what SB with AVX and Igp will not be good is guess work also . We been threw this befor . and you always end with egg on your face . Attacking grammer shows how weak your argument is . Intels Clarkdale does off load work to the cpu from the IGP. There is thread here on it . read it . It has a links also . As for the above links very weak . Link the entire article from lost circuits.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
No one is disputing that clarkdale's IGP offloads video decoding from the CPU (just like GMA4500 that came before it), but if you would go back and reread your post #29, you will find that you said the clarkdale cpu offloads video from the IGP which is incorrect. You need to watch out for those simple errors especially if you want people to take your wild speculation seriously.

Furthermore, the IGP uses fixed-function units to process the video. They are not programmable and they do not run any software (for instance, you can't accelerate flash with an intel IGP). They do not provide acceleration for anything except the supported video formats. Llano, on the other hand, will accelerate OpenCL and Dx11 and flash in addition to video. This will be a valuable upper hand over the lifetime of the product.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
alyarb said:
You are the one saying that Intel is offloading work from the GPU to the CPU. You are the one who has your facts backwards and it is your unintelligible and sloppy use of language that makes it even more difficult for others to tailor a response. People can either ignore you or they have to somehow sift through your absurd misspellings to make any sense out of your clumsy statements, acquiesce to your bias while laying out facts that you will ultimately ignore. You must understand that it is not at all a pleasure to read your posts, let alone reply to them.

If you have kept track of any integration trends in the past ten years you will see that the general purpose of graphics integration is to move work from the CPU to the GPU and *not* the other way around. The CPU is large and power hungry and if you don't need it, you should keep it idle for as long as possible. You can't just use half a core if you need to get a job done. You have to power up the whole core and even your straightforward SIMD-like video acceleration runs down the entire pipeline of that core (which by itself has a greater TDP than most IGPs). If you had the slightest idea what you were talking about, you would have no objection to the obvious trends we are moving along because you would see that specialized integration is a higher performance, more efficient solution than software emulation on a gargantuan general purpose CPU.

Why do something at 35w when it can be done with only 10 or 15 watts? That's why we do it on the GPU. Integrating GPU hardware onto the die of the CPU does not change this fact (in fact it exaggerates it) because it is the very nature of having small/abundant execution units versus a handful of large ones.

Between the Phenom II cores and 480 shaders, the Llano device is looking at 700-800 SP gigaflops, or about ~300 DP gflops. 8 sandy bridge cores @ 4 GHz will only hit 256 DP gflops. 256 gigaflops is not far away from 300 gigaflops, but hopefully you will see that as this trend pans out over time, compute shaders will prove to be faster and more flexible than SSE registers and they don't clog up the CPU and waste power. That's why everyone including intel has plans to incorporate coprocessing on the IGP. Both the vector acceleration from the Llano IGP and Sandy AVX are nonnative and will require software developers to include support.

You can speculate all you want about the performance you would like to see from intel, but the math is right there waiting for you if you can do it. Intel has the better CPU *per thread* and that isn't going to change for at least the next few years, but intel will need 512-bit or kilobit vectors if they expect x86 to compete with GPGPU in terms of aggregate floating point throughput (with regard to GPGPU devices like LLano or larger:480 to 3200 shaders). The cost of double-precision FP on shaders is going down while the cost of widening your vector units is going up. Intel most certainly is not diving into kilobit vectors right away; it would not be a long term solution as more shaders are brought onto the IGP. They are doing the smart thing and going for LRBni with the Haswell IGP. Accelerated GPGPU will proliferate slowly and they can afford to sit out this round.

Sandy Bridge IGPs will not be LRBni-enabled (nor OpenCL/Dx11) and will only provide video acceleration like the current GMAs. They will not accelerate x86 as was envisioned with larrabee/haswell IGPs. And where did you read that Sandy Bridge will have dual IGPs? This is why people ignore your posts. They hardly pass even as talking points.

What ever man . I can still see the PH1 talkspeak that AMD showed us befor its launch and now you want or exspect anyone to believe AMDs graph now on a product thats a year away LOL. Hype it up build false hopes . just to be let down again . Were is AMD going to cut all this power savings out of a 480 sp unit . I not seeing it today so why should I believe it will be that much better at 32nm. I believe in GOD based on Faith. But to believe AMD based on future products with pretty graphics and slides is for people who repeat history they never learn . You forget even tho AMDS power usage on graphics is good it doesn't come close to comparing with Cpu power usage not even close. Hype it up . Intel has not been given to making false claims for a very long time now . Thats something I can't say about AMD.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Read about power gating and how it works. HKMG with power gating is largely the reason behind intel's recent lead in power consumption (although a dual core K10.5 + 790GX consumes the same power as clarkdale). Learned behavior such as faith is what allows you to selectively acknowledge certain information as factual or impossible depending on how it complies with your expectations about the world. You are equally as liable to accept something insane based on trifling evidence as you are to indefinitely reject what is obvious to the educated. All you want to do is talk about how sandy bridge will be the best thing that ever happened to you, and all I'm trying to do is explain how GPGPU coprocessing is more efficient than SSE coprocessing. I'm sure you will eventually understand everything when intel makes the transition in 2012, but since AMD is making the move first I understand if you are obligated as a fanboy to object strongly to it.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
alyarb said:
No one is disputing that clarkdale's IGP offloads video decoding from the CPU (just like GMA4500 that came before it), but if you would go back and reread your post #29, you will find that you said the clarkdale cpu offloads video from the IGP which is incorrect. You need to watch out for those simple errors especially if you want people to take your wild speculation seriously.

Furthermore, the IGP uses fixed function units to process the video. They are not programmable and they do not run on any software (for instance, you can't accelerate flash with an intel IGP). They do not provide acceleration for anything except the supported video formats. LLano, on the other hand, will accelerate OpenCL and Dx11 in addition to this video. This will be a valuable upper hand over the lifetime of the product.

Again DX11 is MS and AMD and NV support it . That doesn't mean intel has to support it in the same manner . Check out gaming bencmarks with Intels clarkdale and note Cpu usage . Than do the same with AMDs solution . You will see Intels Cpu is very active weres as AMDs isn't.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
alyarb said:
Read about power gating and how it works. HKMG with power gating is largely the reason behind intel's recent lead in power consumption (although a dual core K10.5 + 790GX consumes the same power as clarkdale). Learned behavior such as faith is what allows you to selectively acknowledge certain information as factual or impossible depending on how it complies with your expectations about the world.

I have read it . Just like I read about PH I befor it was released . I already see intels power gating at work and on Sandy it will be second generation / It was the Israely team who worked on Dothan it is that same team working on SB . Their is little reason for me to think SB will disappoint considering the team working on it.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Nemesis 1 said:
Again DX11 is MS and AMD and NV support it . That doesn't mean intel has to support it in the same manner . Check out gaming bencmarks with Intels clarkdale and note Cpu usage . Than do the same with AMDs solution . You will see Intels Cpu is very active weres as AMDs isn't.

If you are running a dual-threaded game on a quad-core Phenom II, expect CPU usage to be lower than on a dual-core clarkdale. It does not affect performance above 1280x1024, where you're GPU bound, and it does not mean the game could run faster if you could magically use more CPU cycles. It doesn't work that way. Gaming performance on IGPs is a non-issue (but continues to improve), and gaming is not where IGPs compete. They compete on power and features, and AMD and intel are more or less equal here (except flash, which is a real bummer for intel-driven mobile devices). Power consumption is just about equal, and both AMD and intel support DXVA for video (dx11 is not required for hardware-accelerated playback). There are currently no DirectX 11 IGPs on the market, so I have no idea why you continue to bring that up.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I've had this mixed feeling about the whole Fusion thing as well. I suppose I can't fault AMD's vision and business strategy - it paid goodly amount for ATI - but as others have suggested this can accelerate the console-lization of PC.

What do you mean by the console-lization of PC?

Do you mean competent graphics PC prices dropping to the level of gaming consoles? Or do you mean something else?
 

Daedalus685

Golden Member
Nov 12, 2009
1,386
1
0
cbn said:
What do you mean by the console-lization of PC?

Do you mean competent graphics PC prices dropping to the level of gaming consoles? Or do you mean something else?


I think they are referring to the possibility that customization may one day go out the window, with the only options being various systems-on-chips... à la a prepackaged console.
 

Cogman

Lifer
Sep 19, 2000
10,286
147
106
You're suggesting if AMD could kill off NVIDIA, that they would then stop selling discrete cards? That makes no sense. Why would AMD not sell a product that people want to buy?
How does NVIDIA compete in the GPU market when the GPU becomes a subsection of the CPU? If the CPU/GPU is faster than, or reasonably close to, a discrete mid-range GPU, then you'll see far fewer people buying the discrete solution.

Again, this is all speculation. It may never happen, but I think we should be aware that there is a possibility of it happening.

A valid concern might be what happens if AMD becomes a monopoly on the GPU market, but if that happens it would be because NVIDIA can't stay competitive. However, at that point Intel might just buy NVIDIA if their in-house graphics isn't up to snuff.

Yes it is a valid concern.

Either way, I think you're just trolling for comments. Motherboard manufacturers not including "video card slots"? Do you mean PCI-e? The same slots used for every other type of expansion card? I think some silicon moving around on a motherboard scares you way too much.
Sorry, not trolling (at least I don't think I am). PCI-e is far from the expansion slot for everything; PCI itself is far from dead. As a consumer, the number of expansion cards I've purchased has gone down further and further over the years. With this most recent build, the only expansion slot that is filled is the PCI-E x16 slot, with a video card. I know there are other expansions, but there really aren't a whole ton of PCI-E slots.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Cogman said:
As a consumer, over the years the expansion cards that I've purchased have gone down further and further. With this most recent build, the only expansion slot that is filled is the PCI-E x16 slot with a video card. I know there are other expansions, but there really aren't a whole ton of PCI-E slots.

Is it possible that external SSD devices may ever need the massive bandwidth provided by PCI Express?
 

Cogman

Lifer
Sep 19, 2000
10,286
147
106
cbn said:
Is it possible that External SSD devices may ever need the massive bandwidth provided by PCI-Express?
It isn't likely. We already have 3 Gb/s transfer capability with SATA. I foresee SATA evolving at roughly the same pace as SSD transfer speeds (if not faster).
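For scale, here is the gap in rough numbers (Python; the per-lane and encoding figures are the commonly quoted ones for SATA II and PCIe 2.0):

```python
# SATA II: 3 Gb/s line rate with 8b/10b encoding, so 10 line bits
# carry each data byte -> usable throughput in MB/s.
sata2_mb_per_s = 3000 / 10        # ~300 MB/s usable

# PCIe 2.0: roughly 500 MB/s per lane after encoding overhead.
pcie2_x16_mb_per_s = 500 * 16     # ~8000 MB/s for an x16 slot

print(sata2_mb_per_s, pcie2_x16_mb_per_s)  # 300.0 8000
```

So an x16 slot has well over an order of magnitude more headroom than SATA II; the question is whether external storage will ever saturate even the SATA link.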
 

jinduy

Diamond Member
Jan 24, 2002
4,781
1
81
ah, one step closer to everything being on one chip... sound processor, GPU, CPU, RAM, all in one
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
alyarb said:
Furthermore, the IGP uses fixed function units to process the video. They are not programmable and they do not run on any software (for instance, you can't accelerate flash with an intel IGP). They do not provide acceleration for anything except the supported video formats. LLano, on the other hand, will accelerate OpenCL and Dx11 and flash in addition to video. This will be a valuable upper hand over the lifetime of the product.

No, the Intel IGP has a dedicated video pipeline, but it still sends that data to the EUs to execute. For AMD it's called UVD; for Intel there's no name, but they both have dedicated pipelines. The architectures from AMD/Intel/Nvidia are actually all very similar. The only difference is the implementation details, just like between Deneb and Nehalem.

Look, if you want to criticize someone, at least get your details correct:

http://tech.icrontic.com/news/flash-10-1-beta-hits-brings-gpu-acceleration/

"Intel G41, G43, G45, Q43 and Q45 chipsets"

Not only that, it also supports transcoding on the GPU, which is GPGPU: http://downloadmirror.intel.com/18557/eng/relnotes_win7vista_gfx.htm

Hey what do you know? Even the last generation supports flash acceleration. :)

At the moment they just lack software support for OpenCL and such.
 

RaiderJ

Diamond Member
Apr 29, 2001
7,582
1
76
Cogman said:
How does NVIDIA compete in the GPU market when the GPU becomes a subsection of the CPU? If the CPU/GPU is faster then or reasonably faster then a discrete mid-range GPU, then you'll see much less people buying the discrete solution.

Again, this is all speculation. While it may never happen, but I think we should be aware that there is a possibility of it happening.



Yes it is a valid concern.


Sorry, not trolling (at least I don't think I am.). PCI-e is far from the expansion slot for everything, PCI itself is far from dead. As a consumer, over the years the expansion cards that I've purchased have gone down further and further. With this most recent build, the only expansion slot that is filled is the PCI-E x16 slot with a video card. I know there are other expansions, but there really aren't a whole ton of PCI-E slots.

NVIDIA will definitely have trouble competing if the need for a low-to-mid-range standalone graphics card is removed by having a GPU on-die. That's why they're pushing Fermi as a GPGPU for gaming and scientific research. I'm not sure how big that niche market is, but it might pay off for them as software tools for using a GPGPU mature.

I can't think of a reason why PCI-e shouldn't replace PCI altogether, as it's a faster and more flexible interface. PCI might stick around for a while, though, just as PS/2 ports have alongside USB.