Sandy Bridge & Llano bad for gamers?


Kuzi

Senior member
Sep 16, 2007
572
0
0
If I were to give a very rough idea of the power draw of Llano, this is how it would look:

On the CPU side, you get a shrink from 45nm to 32nm (~30%), HKMG, and power-gating improvements. Those things combined could cut power draw by at least half (50%) compared to, say, an Athlon X4, which has a 95W TDP. The CPU in Llano shouldn't go over 45W TDP unless AMD pushes the clocks too high.

Now for the GPU side, check out the Radeon 5570: it has 400 SPs running at 650MHz. Llano has 480 SPs, probably running in the 500-600MHz range. If we take into account the shrink from 40nm to 32nm (20%) and the slightly lower clocks, the Llano IGP may draw 20-30% less power than a 5570.

The TDP of the 5570 is 42.7W; lower that by 30% and you get a nice, cool ~30W IGP in Llano :)
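
For what it's worth, here is that back-of-envelope math as a tiny C program. The 50% CPU cut and 30% IGP cut are my own rough assumptions from the shrink and clock reasoning above, not measured numbers:

#include <stdio.h>

int main(void) {
    /* Assumed scaling factors from the rough estimate above -- not measured. */
    const double athlon_x4_tdp = 95.0;  /* 45nm Athlon X4 TDP in watts       */
    const double cpu_scale     = 0.50;  /* 32nm + HKMG + power gating: ~half */
    const double hd5570_tdp    = 42.7;  /* 40nm Radeon 5570 TDP in watts     */
    const double igp_scale     = 0.70;  /* 32nm shrink + lower clocks: -30%  */

    double cpu_w = athlon_x4_tdp * cpu_scale;  /* ~47.5 W, near the 45W guess */
    double igp_w = hd5570_tdp * igp_scale;     /* ~29.9 W, the "30W IGP"      */

    printf("CPU side: ~%.1f W\n", cpu_w);
    printf("IGP side: ~%.1f W\n", igp_w);
    printf("Ballpark APU total: ~%.1f W\n", cpu_w + igp_w);
    return 0;
}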
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
How about Llano as a laptop gamer?

Or will power management/battery life be the major downfall? (assuming sideport GDDR is able to solve issues surrounding memory bandwidth)

Llano is an SoC (System on a Chip). It should fit many roles and also make a good chip for mobile gaming, considering the advances AMD has made in power gating and in implementing (it seems, until proven in the real world) a sophisticated power-management circuit, ditching "Cool'n'Quiet" and coming almost to parity with Intel and the PCU of the Nehalem generation.

It's an interesting chip, tied to AMD's Stream/OpenCL SDK.

http://developer.amd.com/gpu/atistreamsdk/pages/default.aspx
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
No, AVX is the next evolutionary development of SSE for vector operations on x86, up to 256 bits wide (SSE is 128-bit). AMD will have something like AVX for Bulldozer, but it's not a capability of GPUs.
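
To make the width difference concrete, here's a minimal sketch with compiler intrinsics; the SSE version adds 4 floats per instruction, the AVX version 8 (assumes a compiler with AVX support, e.g. gcc -mavx):

#include <immintrin.h>  /* SSE and AVX intrinsics */

/* SSE: a 128-bit register holds 4 packed floats */
void add4_sse(const float *a, const float *b, float *out) {
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_add_ps(va, vb));
}

/* AVX: a 256-bit register holds 8 packed floats */
void add8_avx(const float *a, const float *b, float *out) {
    __m256 va = _mm256_loadu_ps(a);
    __m256 vb = _mm256_loadu_ps(b);
    _mm256_storeu_ps(out, _mm256_add_ps(va, vb));
}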

If anything, you could say Intel's "equivalent" to Stream/CUDA/OpenCL is LRBni, which presumably will still make an appearance in the Haswell IGP, but it should also support OpenCL because of its ubiquity. LRBni is analogous to CUDA in that it was adapted in-house specifically for the architecture, and is probably what will run best.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
If anything, you could say Intel's "equivalent" to Stream/CUDA/OpenCL is LRBni, which presumably will still make an appearance in the Haswell IGP, but it should also support OpenCL because of its ubiquity. LRBni is analogous to CUDA in that it was adapted in-house specifically for the architecture, and is probably what will run best.

Do you think we could see a real battle here as far as what programmers favor?

Would programming in one language affect how other APUs (like AMD's) perform?
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
OpenCL code is largely hardware agnostic; performance depends on each IHV's OpenCL ICD and compiler.
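
As a concrete illustration, the little C host snippet below just asks the OpenCL runtime which vendor platforms (ICDs) are installed; whichever one you pick, your kernels are then built by that vendor's own compiler (a minimal sketch, error checking omitted):

#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint count = 0;

    /* Each installed vendor ICD (AMD, Nvidia, ...) appears as one platform */
    clGetPlatformIDs(8, platforms, &count);

    for (cl_uint i = 0; i < count; ++i) {
        char name[128], version[128];
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                          sizeof name, name, NULL);
        clGetPlatformInfo(platforms[i], CL_PLATFORM_VERSION,
                          sizeof version, version, NULL);
        printf("Platform %u: %s (%s)\n", i, name, version);
    }
    return 0;
}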

AMD bets on OpenCL for the Unix/Apple platforms and big-iron servers, and on DirectCompute for Windows desktops. Nvidia supports all of these and also has its own proprietary, tuned CUDA. As of now, from the experience of developers already coding on GPUs, AMD's OpenCL implementation is superior to Nvidia's.

Quoted.
//
"And as someone that has done GPU development almost exclusively on NV hardware over the last 6 years because of the driver and tooling quality, my recent switch to ATI for my OpenCL work hasn't seen me shitting the bed in disgust. ATI are making very real progress in these areas to the point where I'm mostly vendor agnostic now.

If you'd have heard me even a few months ago, I wouldn't be singing the same tune"
//

With a head start of 2 years for both companies, I doubt Intel can do much about it unless it performs miracles in driver support and capabilities for LRB... and even then it would almost certainly lose its vendor lock and the interest of devs coding LRB native code. The vast majority will have embraced OpenCL/CUDA/DC by then.

AMD aims for GPUs in mainstream servers starting 2012.

http://www.computerworld.com.au/article/335351/amd_aims_gpus_mainstream_servers_starting_2012/
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Intel has stated that Larrabee will NOT have to run native LRB-NI code unless desired; the DirectX/OpenGL and OpenCL driver stacks will be developed by the software team. Of course, those who want to code directly may do so, and given the enormous amount of support available for x86 architectures, it'll probably be cheaper to do so.

It's a question of how long "enthusiast gaming" with crazy Tri-Fire and SLI setups will last, given that demand for 3D hasn't increased since Crysis and an increasing number of developers are targeting consoles and then doing a poor port to PC. Whether this is a sign of a general depression of the economy or the night before the dawn of innovation is yet to be seen.
 

Kuzi

Senior member
Sep 16, 2007
572
0
0
Do you think we could see a real battle here as far as what programmers favor?

Would programming in one language affect how other APUs (like AMD's) perform?

OpenCL and DirectCompute are just APIs, like OpenGL and Direct3D. What developers choose to use shouldn't make a difference to the end user as long as the GPU has supporting drivers.

Nvidia and AMD already support both APIs, so no worries there. My guess is that in the long run DirectCompute may see wider support just because of the backing of M$.
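
To illustrate the point, here's a trivial OpenCL C kernel (a hypothetical vadd.cl); the exact same source runs on NV or AMD hardware because each vendor's driver compiles it at runtime:

/* vadd.cl -- plain OpenCL C; the same source builds and runs on any
   vendor's GPU whose driver ships an OpenCL compiler */
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *c,
                   const unsigned int n)
{
    size_t i = get_global_id(0);  /* one work-item per array element */
    if (i < n)
        c[i] = a[i] + b[i];
}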

Another thing is that while an APU like Llano is a great idea, when it is released in 2011 its GPGPU performance will look lethargic compared to the discrete cards NV and AMD will be releasing at the time.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Another thing is that while an APU like Llano is a great idea, when it is released in 2011 its GPGPU performance will look lethargic compared to the discrete cards NV and AMD will be releasing at the time.

You are right.

As far as gaming goes, even if the Llano APU has sufficient bandwidth, it will only be good enough to play less demanding games at modest detail settings. By 2011 we have to believe full-on DX11 (i.e., tessellation) will require more GPU.

Too bad AMD couldn't have perfected 32nm sooner and released Llano in 2010 to fight Core i3. It probably would have cleaned up big time. In fact, I would think "process technology" is high on AMD's priority list for this very reason.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Did you know that, had Intel succeeded in launching Prescott without delay, we might have seen 32nm Arrandale/Clarkdale in the fall of last year and Sandy Bridge before the holidays this year?

Process technology launches are tied to product launches and therefore can't be done sooner. Intel might actually extend its process technology lead a bit further at 32nm.
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
AMD's BD & the Fusion family of products were scheduled for GloFo's 45nm process and a 2009 release. That schedule got scrapped, and rightfully so. Now, with the renewed cross-licence agreement between Intel & AMD and the dust settled, the path is clear for the company to move on on its own. I highly doubt it stands a chance of surpassing Intel in Intel's own territory and by Intel's standards, so for now all they have to do is execute perfectly and accept that they will play second fiddle in x86 for quite some time to come.
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
Another thing is that while an APU like Llano is a great idea, when it is released in 2011 its GPGPU performance will look lethargic compared to the discrete cards NV and AMD will be releasing at the time.

I think you're missing the point of Llano. The chip, by definition and common sense, is not able to surpass powerful discrete GPUs. It stands on its own as an SoC: a cheap mainstream chip for budget builds, a laptop chip for mobile gaming, a great chip for developing GPGPU code without breaking the bank. It fits the above purposes perfectly and stands as a gateway and experiment in AMD's broader vision for this family, which I believe, if all goes well, will fuse the GPU and a performance x86 core such as BD under a new common instruction set.
 

Kuzi

Senior member
Sep 16, 2007
572
0
0
Llano will definitely be a nice chip that performs very well for the price. What I meant, though, is that people requiring top compute/3D performance will choose to go with a discrete card.
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
We're talking about a small minority here. The current sad state of enthusiast PC gaming is that development is dictated by the consoles. More and more people are coming to realize the futility of shelling out big money for expensive CPUs and GPUs for gaming, only to be greeted with poor, dumbed-down gameplay, sad textures/graphics compared to what current powerful PC hardware can do, horrible optimizations, etc. This needs to change. Crysis resurrected a healthy amount of enthusiasm for the PC and upped the sales of high-performance cards for Nvidia and AMD, but that was over 3 years ago.

Simply put, the PC needs an exclusive killer game title every year to sustain its halo, its raison d'être, and to justify the high cost of running high-powered, expensive rigs. Eyefinity from ATI and GPU physics from Nvidia (and soon ATI) are good attractors for developers and major differentiators for the PC, but they're not yet fully appreciated or exploited to their fullest potential.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
No, AVX is the next evolutionary development of SSE for vector operations on x86, up to 256 bits wide (SSE is 128-bit). AMD will have something like AVX for Bulldozer, but it's not a capability of GPUs.

If anything, you could say Intel's "equivalent" to Stream/CUDA/OpenCL is LRBni, which presumably will still make an appearance in the Haswell IGP, but it should also support OpenCL because of its ubiquity. LRBni is analogous to CUDA in that it was adapted in-house specifically for the architecture, and is probably what will run best.


Why do you say the things you say? It's just talkspeak. AVX, and how Intel will use it with its GPUs, isn't known to you. As for LRB, I have seen you use it a lot to spread FUD.

This is more in line with what you guys are talking about, and for anyone to say NV or AMD is ahead of, or started before, Intel on OpenCL is pure BS.

Intel has been working on this for a long time, well before Apple came up with OpenCL. Who was Apple working with? Intel, that's who. Seriously, your FUD is amusing.

http://software.intel.com/en-us/data-parallel/
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
LRB wasn't temporarily scrapped for no reason. The press at the LRB event wasn't excited: the chip was hot and power hungry, a small microwave furnace; the drivers sucked; the software for it was miles behind. Intel wisely shelved it, learned from its mistakes, and will very carefully implement it in its next iteration. They got cocky about LRB for 5 years, making big claims, and then realized their futility. That's smart of Intel, and I bet they will have a perfectly adequate product in place when it's ready, competing with NV and AMD.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
The reference to "adding a throughput computing development platform based on Larrabee" could mean that Intel has decided to formally link the Ct research language and Larrabee development together. Ct, which stands for "C/C++ for throughput computing," is a new parallel programming language that the chipmaker is developing for manycore processing. While Ct is meant to be a general-purpose, data-parallel programming language, it always seemed to be particularly well-suited to the Larrabee architecture.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
OpenCL was initially developed by Apple Inc., which holds trademark rights, and refined into an initial proposal in collaboration with technical teams at AMD, IBM, Intel, and Nvidia. Apple submitted this initial proposal to the Khronos Group. On June 16, 2008 the Khronos Compute Working Group was formed[1] with representatives from CPU, GPU, embedded-processor, and software companies. This group worked for five months to finish the technical details of the specification for OpenCL 1.0 by November 18, 2008.[2] This technical specification was reviewed by the Khronos members and approved for public release on December 8, 2008.[3]

OpenCL 1.0 has been released with Mac OS X v10.6 ("Snow Leopard"). According to an Apple press release
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
Very nice of them; Ct is high quality, as usual from Intel. But with millions of GPUs from NV and AMD out there ready for programming, I doubt Intel will stand a chance with Ct any time soon. Ct will serve its purpose for high-end servers with Becktons/Itaniums and LRB.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
I just don't like all this BS talk about OpenCL and how everyone is saying Intel is behind. From the above post we know when Apple submitted OpenCL: it was in June of '08. OpenCL is a C-based language. Here is a date Intel had for Ct, and the work started before this paper. Apple may have come up with OpenCL, but Intel and Apple were working together for a long time before '08, try '06 if not earlier. So much FUD here it's unbelievable.

Note the date on this paper.

http://techresearch.intel.com/userfiles/en-us/File/terascale/Ct-appnote-option-pricing.pdf
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Very nice of them; Ct is high quality, as usual from Intel. But with millions of GPUs from NV and AMD out there ready for programming, I doubt Intel will stand a chance with Ct any time soon. Ct will serve its purpose for high-end servers with Becktons/Itaniums and LRB.

Ct is in beta. Are you a programmer? You can get on the beta program.

http://software.intel.com/en-us/blo...on-for-ct-beta-program-now-available-on-line/ Did you watch the video at the link I posted? Slow it down and educate yourself. Stop it and read. Post 66
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
Of course. Now try to find someone with LRB to code for it, in contrast, as I said, to the millions of capable GPUs out there from NV and AMD.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
What language is Brook, and whose is it? AMD's. When did AMD start working on OpenCL? You're spreading FUD, buddy.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Of course. Now try to find someone with LRB to code for it, in contrast, as I said, to the millions of capable GPUs out there from NV and AMD.

You didn't read the links, or you would have known that your question here was pure nonsense.