AMD reveals more Llano details


cbn

Lifer
Mar 27, 2009
12,968
221
106
NVIDIA will definitely have trouble competing if the need for a low-to-mid-range standalone graphics card is removed by putting a GPU on-die. That's why they're pushing Fermi as a GPGPU for gaming and scientific research. I'm not sure how big a niche market that is, but it might pay off for them as the software tools for using a GPGPU mature.

http://www.nvidia.com/object/cuda_in_action.html

Yep, the non-gaming side of CUDA looks very promising. In fact, I am sure learning how to harness this type of power could help a lot of people.
 
Last edited:

Kuzi

Senior member
Sep 16, 2007
572
0
0
No one is disputing that Clarkdale's IGP offloads video decoding from the CPU (just like the GMA 4500 that came before it), but if you go back and reread your post #29, you'll find that you said the Clarkdale CPU offloads video from the IGP, which is incorrect. You need to watch out for simple errors like that, especially if you want people to take your wild speculation seriously.

Furthermore, the IGP uses fixed-function units to process the video. They are not programmable and do not run any software (for instance, you can't accelerate Flash with an Intel IGP). They provide no acceleration for anything except the supported video formats. Llano, on the other hand, will accelerate OpenCL, DX11, and Flash in addition to video. That will be a valuable upper hand over the lifetime of the product.

For all we know, with the IGP in SandyBridge, Intel may catch up with AMD and Nvidia in terms of GPGPU capability and OpenCL driver support.

The IGP in Llano seems pretty powerful, around 3x to 5x faster than, say, a 790GX. But that performance gain is only possible if some dedicated video memory is added to the motherboard, or if the IMC in Llano supports quad-channel DDR3: four memory DIMMs running at 1600MHz can provide 51.2GB/s. Of course that's just guesswork on my part, and simply adding some dedicated memory to the mobo would be much easier.
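For reference, that 51.2GB/s figure falls out of the standard DDR3 arithmetic (a quick back-of-the-envelope sketch; the 64-bit channel width and transfer rate are just the usual DDR3 parameters):

```python
# Peak DDR3 bandwidth = transfers/s x bytes per channel x channel count.
# DDR3-1600 moves 1600 million transfers/s over a 64-bit (8-byte)
# channel, so each channel peaks at 12.8 GB/s.
def peak_bandwidth_gbs(mt_per_s, bytes_per_channel, channels):
    return mt_per_s * bytes_per_channel * channels / 1000  # MT/s -> GB/s

dual = peak_bandwidth_gbs(1600, 8, 2)   # typical desktop dual-channel
quad = peak_bandwidth_gbs(1600, 8, 4)   # the hypothetical quad-channel setup
print(dual, quad)  # 25.6 51.2
```

So a quad-channel IMC would exactly double what today's dual-channel boards offer, which is why the dedicated-sideport-memory option looks like the cheaper route.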

The Sandy Bridge IGP will reportedly be twice as fast as the one in Clarkdale, and I tend to believe that will be the case, but I'm not sure what Intel would do to improve memory bandwidth. Is Sandy Bridge going to use triple-channel memory like the i7?

Anyways, if my estimates are correct, SB's CPU performance would be around 25-40% faster than Llano's per core, but the IGP in Llano should be around twice as fast as the Sandy IGP.
 
Last edited:

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
the P6x integrated sandy socket is 1155, which implies dual-channel DDR3. The IGP is for basic hardware video acceleration such as AVC/VC-1 and supplemental SIMD computing. You do not need impressive memory performance for multimedia-driven GPGPU or basic gaming.

Sandy is a fast, new CPU with GMA. Llano is just another Propus with a huge IGP. One architecture is clearly banking on GPGPU more than the other.

The four-core 8-thread "mainstream" sandy is worth 128 gigaflops with SSE/AVX, which is more than bloomfield (and thuban). It is totally possible for intel to implement standard GPGPU capability in their IGP, but don't expect much from it. Intel isn't just throwing any die space around to experiment with something that isn't going to be fast out of the box. Don't get me wrong, OpenCL isn't a waste of die space, but from where intel's coming from, large IGPs are a waste of die space.
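For what it's worth, here is one way that 128-gigaflop figure can be reached (a rough sketch; the 2.0 GHz clock below is just the value that reproduces the quoted number, not a confirmed SKU speed):

```python
# Peak single-precision FLOPS = cores x FLOPs per cycle x clock (GHz).
# Sandy Bridge AVX is 8 SP lanes wide, with separate add and mul ports,
# giving 16 SP FLOPs per cycle per core. Bloomfield's 4-wide SSE gives 8.
# The clock speeds here are assumptions for illustration.
def peak_gflops(cores, flops_per_cycle, ghz):
    return cores * flops_per_cycle * ghz

sandy = peak_gflops(4, 16, 2.0)       # 128 GFLOPS with AVX
bloomfield = peak_gflops(4, 8, 3.2)   # 102.4 GFLOPS with SSE
print(sandy, bloomfield)
```

Either way, the point stands: the AVX-equipped quad beats Bloomfield's SSE peak even at a much lower clock.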

Even if you get OpenCL, you are still getting a small IGP (whereas most of Llano's xtor/area is the radeon). It'll probably be something that is just good enough for what the segment dictates. Intel's been talking about AVX for years so it wouldn't make sense for them to start marketing OpenCL (which isn't their baby) capability in their IGP. that's just a sideshow within a sideshow.
 
Last edited:

Cogman

Lifer
Sep 19, 2000
10,286
147
106
NVIDIA will definitely have trouble competing if the need for a low-to-mid-range standalone graphics card is removed by putting a GPU on-die. That's why they're pushing Fermi as a GPGPU for gaming and scientific research. I'm not sure how big a niche market that is, but it might pay off for them as the software tools for using a GPGPU mature.

nVidia's best hope, IMO, of making GPGPU work is to get AMD/Intel/etc. to improve their OpenCL drivers. CUDA was first, but it is nVidia-only, which puts developers off using it. If nVidia can encourage the use and advancement of OpenCL, it will be better for them in the long run. It could make getting a GPGPU card a real performance boost, which would save them from getting kicked out of the mid-range market (the low end might still give them the boot).

I can't think of a reason why PCI-e shouldn't replace PCI altogether, as it's a faster and more flexible interface. PCI might stick around for awhile, just like PS2 ports with USB.
Honestly, the reason PCI slots have stuck around for as long as they have is that they are relatively easy to write drivers and programs for. PCI is a parallel connection, which means that interfacing with it is as easy as saying "in portnum, inVar" and "out portnum, output". PCI-E, on the other hand, uses serial connections, and that in and of itself makes it harder to work with. 16 serial connections all transferring data at the same time is a lot to manage efficiently.

There are similar reasons why PS2 stuck around for so long: USB is a NIGHTMARE to write drivers and build hardware for. It's a wonder any mice and keyboards actually went from PS2 to USB, to be honest.

That's not to say PCI-E couldn't replace PCI in the future; I'm just trying to give the reasons PCI has stuck around for as long as it has (and may persist further into the future).
 
Last edited:

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,227
126
Honestly, the reason PCI slots have stuck around for as long as they have is that they are relatively easy to write drivers and programs for. PCI is a parallel connection, which means that interfacing with it is as easy as saying "in portnum, inVar" and "out portnum, output". PCI-E, on the other hand, uses serial connections, and that in and of itself makes it harder to work with. 16 serial connections all transferring data at the same time is a lot to manage efficiently.
I thought that the software interface to PCI-E was backwards compatible with PCI?
 

Cogman

Lifer
Sep 19, 2000
10,286
147
106
I thought that the software interface to PCI-E was backwards compatible with PCI?

It sort of is, but not really. It is backwards compatible in the same sense that an OS supporting DX10 is backwards compatible with DX9 software. Sure, most of the stuff is the same, but you lose the benefits of going to PCI-Express in the first place (plus the hardware still has to be there).
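To illustrate the "sort of" part: PCI-E keeps the legacy PCI configuration header in its first 256 bytes, so PCI-era enumeration code still works, even though it can't see the new PCI-E extended capabilities. A minimal sketch (the device ID and class code values here are invented for illustration):

```python
import struct

# The first 256 bytes of PCI-E config space keep the legacy PCI header
# layout: vendor ID at offset 0x00, device ID at 0x02, base class at 0x0B,
# all little-endian. The device ID and class code below are made up.
config = bytearray(256)
struct.pack_into("<HH", config, 0x00, 0x8086, 0x1234)  # vendor ID, fictional device ID
config[0x0B] = 0x03                                    # base class 0x03 = display controller

# Legacy PCI software reads these fields exactly the same way on PCI-E.
vendor, device = struct.unpack_from("<HH", config, 0x00)
print(hex(vendor), hex(device), hex(config[0x0B]))
```

Anything PCI-E-specific (extended config space, link capabilities, and so on) lives beyond what legacy software knows to look for, which is where the "but not really" comes in.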
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,227
126
PCI-E, on the other hand, uses serial connections, and that in and of itself makes it harder to work with. 16 serial connections all transferring data at the same time is a lot to manage efficiently.
I guess what I was trying to point out is that the implications of the above are wrong.

The multi-lane serialization/de-serialization is transparent to software. You made it sound like software has to manage each PCI-E lane.
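A toy model of that transparency: the link hardware stripes bytes across the lanes and reassembles them on the far side, so software only ever sees the original byte stream. (This is an illustration of the idea, not the actual PCI-E framing or encoding.)

```python
# Toy model of PCI-E lane striping: the link splits a byte stream
# round-robin across N lanes and reassembles it on the other side.
# Software never does this itself; it just reads and writes
# memory-mapped registers, same as on parallel PCI.
def stripe(data: bytes, lanes: int):
    """Split a byte stream round-robin across `lanes` lanes."""
    return [data[i::lanes] for i in range(lanes)]

def reassemble(striped):
    """Interleave the per-lane byte streams back into the original order."""
    lanes = len(striped)
    out = bytearray(sum(len(s) for s in striped))
    for i, lane in enumerate(striped):
        out[i::lanes] = lane
    return bytes(out)

payload = b"example packet bytes"
assert reassemble(stripe(payload, 16)) == payload  # round trip is lossless
```

The round trip is lossless for any lane count, which is the point: the lane management lives entirely in the link hardware.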
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,227
126
Anyways, does anyone know if the individual cores are power-gated in Llano? Could I have one core doing something, with the other three cores idle and not sucking down power?