Sandy bridge & Llano bad for gamers?


cbn

Lifer
Mar 27, 2009
12,968
221
106
On GPUs, the overwhelmingly important characteristic in memory is bandwidth. It really doesn't care about latency. I guess overclocked memory modules will matter, if the GPU performance becomes relevant enough for that. Then again, overclocking memory won't offer you 4-5x the bandwidth, but more like a 10% bump.

Yep,

I just looked up the memory bandwidth of the HD4670 and it lists 32 GB/s. (The HD5670 @ 400 stream processors has 64 GB/s.)

Dual-channel DDR3-1600 (according to Wikipedia) runs 25.6 GB/s, or 12.8 GB/s per channel.

Hmmm....but then I look at these results and much higher bandwidth numbers are listed under the Sandra test.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Well, we're talking about published, i.e. theoretical, bandwidth here. The Toms link you have lists 34GB/s measured using DDR3-2072 in 3 channels, which works out to a theoretical 49.7GB/s of bandwidth.

Llano with dual-channel DDR3-1600 will achieve 25.6GB/s of bandwidth in theory. It's unlikely it'll reach that, plus it's sharing it with the CPU. On Sandra it'll likely do 12-13GB/s.
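The arithmetic behind those theoretical figures is simple enough to sketch (a back-of-the-envelope helper, not anything from the thread's benchmarks): each 64-bit DDR3 channel moves 8 bytes per transfer.

```python
# Peak theoretical bandwidth = transfer rate (MT/s) x 8 bytes per 64-bit channel x channels
def peak_bw_gbs(mts, channels):
    return mts * 8 * channels / 1000  # GB/s

print(peak_bw_gbs(2072, 3))  # triple-channel DDR3-2072 -> ~49.7 GB/s
print(peak_bw_gbs(1600, 2))  # dual-channel DDR3-1600 -> 25.6 GB/s
```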

That shouldn't be too much of a problem though. It should do pretty well at "sane" resolutions like 1280x1024. It's only when you run something like 1920x1600 with 4xAA that performance tanks. The 5670 will probably be faster, but the Llano IGP will obviously end up better than low-end dedicated cards if the 480 SP rumors come true. Plus they can always do a special driver to share resources with the CPU.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
The bigger problem with Llano's GPU might not be memory bandwidth, but other factors like die size and power usage. The Radeon 5570 has a TDP of 42.7W and the 5670 has a TDP of 61W on the 40nm process. 32nm won't bring that down drastically; it is generally believed that a full process shrink cuts power by about 30%, and 40nm to 32nm is not a full shrink, more like a half node. And the 5570 and 5670 "only" have 400 SPs each.

See why it won't really get the 1GHz+ speeds people are expecting? It's possible they might also charge a bit extra for the GPU to make up for the loss in discrete card sales. 5570+ performance in 2011 is still formidable in that price range. From the die shot and transistor numbers, the GPU is significantly larger than a CPU core.
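As a rough sketch of that scaling argument (the ~15% half-node figure is my assumption, extrapolated from the ~30% full-shrink rule of thumb above, not AMD data):

```python
# Assumed numbers, not AMD data: a full-node shrink is said to cut power ~30%;
# 40nm -> 32nm is closer to a half-node, so assume roughly half that benefit.
tdp_5570_40nm = 42.7          # W, Radeon HD 5570 on 40nm
half_node_scaling = 0.85      # assumed ~15% reduction for a half-node shrink
print(tdp_5570_40nm * half_node_scaling)  # ~36 W for the GPU portion alone at 32nm
```

Even under that optimistic assumption, the GPU alone would eat most of a mainstream CPU's power budget, which is the point about clock speeds above.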
 

biostud

Lifer
Feb 27, 2003
20,181
7,304
136
I said gamers in my title, so I wasn't talking about all those who are content with IGP performance, and I wasn't interested in software other than games. If I was, I wouldn't have called the thread "...bad for gamers".

The GPU part might be used for physics calculations, but once again you have to ask what would be fastest for games, just as you need a certain amount of processing power from an Nvidia card for it to be able to handle PhysX. It would be great if the SPs could be used for physics, but by the time these CPUs launch, the gaming GPUs will be next-gen video cards.

The question is: would a dedicated CPU be better than a combined CPU/GPU?

examples:
A) CPU next generation quad core
B) CPU next generation hex core
C) CPU next generation quad + GPU

I'm just wondering if it would be better to strengthen the CPU part than to add a GPU part.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Yes, but at this point you'll be deciding between Llano and Bulldozer derivatives. The significant die size and transistor investment also suggests they might want to profit somewhat, adding probably $30-40 extra to the CPU rather than $5-7 as with today's IGPs. So why would enthusiasts EVER consider this one anyway?

Didn't AMD claim that Nvidia might be cheating to make CPUs look worse in PhysX, which would mean CPUs are well capable of handling physics? Still, a lot of games don't benefit from more than 3 threads; by 2011 we'll have 6- and 8-core CPUs, and games will still lag behind.
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
The performance/high end will, as it always has been, be served by the monster multicore chips from both companies: 8 cores with powerful discrete GPUs from NV and ATI. So there's no question about Sandy or Llano; high-end gamers with multimonitor setups know where they will put their money.

Llano and Sandy, in their quad-core SKUs, are targeted at the mainstream/low-end segment, with excellent performance and speedy graphics for casual gaming beyond what is available today in IGPs. Not bad at all.

Llano is the more promising APU though, due to its superior GPU capabilities, but it needs the software to catch up in order to exploit it, most probably via DirectCompute from MS, which is more stable and mature (at least on the ATI side) than OpenCL. AMD claimed in their slides that Llano will come with 1 teraflop of theoretical GPU performance. I seriously doubt that, but it will fare high enough and provide adequate horsepower when and if the software exploits it. Nevertheless, it's the first generation of a long line of products from AMD. Intel of course rides the same train, and at least according to the roadmaps will include some very beefy vector units from LRB in their Haswell uarch. Both companies have concluded that "graphics" on-die is a thing that must be done, sooner or later.
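A quick sanity check on that 1-teraflop claim, assuming the rumored 480 SPs and the usual Radeon counting convention of one multiply-add (2 FLOPs) per SP per clock:

```python
# GFLOPS = stream processors x 2 FLOPs (one multiply-add) x clock in GHz
def gflops(sps, clock_ghz):
    return sps * 2 * clock_ghz

print(gflops(480, 1.0))  # 960 GFLOPS -- 480 SPs would need ~1GHz to approach 1 TFLOP
print(gflops(480, 0.6))  # 576 GFLOPS at a more plausible ~600MHz IGP clock
```

So the claim only holds up at clocks that, per the power discussion earlier in the thread, look optimistic for an on-die GPU.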

Chas Boyd, DirectX architect, explains it better: Microsoft's take on GPGPU through DirectX11 DirectCompute, why it matters, and where it's applicable.

http://microsoftpdc.com/Sessions/P09-16

Some DX11/DirectCompute 5.0 demo apps.

http://users.skynet.be/fquake/
 

zsdersw

Lifer
Oct 29, 2003
10,505
2
0
I said gamers in my title, so I wasn't talking about all those who are content with IGP performance, and I wasn't interested in software other than games. If I was, I wouldn't have called the thread "...bad for gamers".

You can't talk about what's good/bad for gamers in a vacuum. Gamers have access to exactly the same CPUs/GPUs as every other segment of the computing market, so they have to take what they can get out of what's available. In that respect, going forward, CPUs with an IGP/GPU on package or on-die are going to be what they have to live with, like it or not. There's no economic reason for Intel or AMD to make a CPU-only chip out of a design that has an IGP/GPU built in when they can just add a setting in the BIOS to turn the IGP off.

I'm just wondering if it would be better to strengthen the CPU part than to add a GPU part.

It's not an either-or situation. They improve the CPU while adding/improving the GPU.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
The question is: would a dedicated CPU be better than a combined CPU/GPU?

examples:
A) CPU next generation quad core
B) CPU next generation hex core
C) CPU next generation quad + GPU

I'm just wondering if it would be better to strengthen the CPU part than to add a GPU part.

According to one Llano article I read, AMD claims they decided to use the legacy architecture (Phenom II with the L3 cache removed) rather than "Bulldozer" as a way to ease into the manufacturing transition of "Fusion".

So at the moment it looks like a separate CPU and GPU is stronger for more than one reason.

But then I wonder at what point does having a Fused CPU/GPU start to make more sense? How many years before that happens? Wouldn't it be better in some cases to have a shorter distance from the CPU to GPU?
 

Jovec

Senior member
Feb 24, 2008
579
2
81
But then I wonder at what point does having a Fused CPU/GPU start to make more sense? How many years before that happens? Wouldn't it be better in some cases to have a shorter distance from the CPU to GPU?

I doubt we'll ever see it because they'll still have to make discrete GPUs and CPUs anyway, and at that point, there is no point. Llano, Sandy, and other on-chip/die IGPs are really just a way to reduce costs and make more money.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I doubt we'll ever see it because they'll still have to make discrete GPUs and CPUs anyway, and at that point, there is no point. Llano, Sandy, and other on-chip/die IGPs are really just a way to reduce costs and make more money.

480 stream processors on the Llano die is quite a bit of processing power.

Surely there must be some computing advantage to having all those resources right on top of the x86 cores? Latency reductions?

In the meantime we have to hope the gaming performance is respectable enough until proper general-purpose programs get written for the on-die stream processors.
 

SHAQ

Senior member
Aug 5, 2002
738
0
76
IGPs are irrelevant for gamers. If the 295 and 5870 still aren't enough horsepower, then there is no point discussing an IGP....probably ever. There will always be new effects added that require more powerful video cards. Not to mention ray tracing, which is still another 5-10 years from being doable.

So yes, get a CPU by itself, unless by games you mean card and adventure games....nothing 3D.

Hex core won't be relevant for PC games for a while. It depends what the next-gen consoles will have; they will either have 4 or 6 cores. A 32nm quad will be pretty nice. They should all do about 4.5GHz or more.
 

Tab

Lifer
Sep 15, 2002
12,145
0
76
On GPUs, the overwhelmingly important characteristic in memory is bandwidth. It really doesn't care about latency. I guess overclocked memory modules will matter, if the GPU performance becomes relevant enough for that. Then again, overclocking memory won't offer you 4-5x the bandwidth, but more like a 10% bump.

Interesting, why is memory bandwidth the most important?
 

frostedflakes

Diamond Member
Mar 1, 2005
7,925
1
81
I didn't think about it until recently, but AMD does have Side-port memory for their current IGPs. It's possible they'll allow mobo manufacturers to do something similar for Llano CPUs, that way the GPU could have access to some high-speed GDDR and not have to rely on the slower system RAM.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
IGPs are irrelevant for gamers. If the 295 and 5870 still aren't enough horsepower, then there is no point discussing an IGP....probably ever. There will always be new effects added that require more powerful video cards. Not to mention ray tracing, which is still another 5-10 years from being doable.

Turning on tessellation for DX11 drops performance around 75% according to the Unigine "Heaven" benchmark. So there definitely is a place for stronger DX11 gaming GPUs in the future.

However, at lower resolutions and/or lower detail settings 480 stream processors could be quite potent.

I didn't think about it until recently, but AMD does have Side-port memory for their current IGPs. It's possible they'll allow mobo manufacturers to do something similar for Llano CPUs, that way the GPU could have access to some high-speed GDDR and not have to rely on the slower system RAM.

What you are saying makes a lot of sense. No doubt AMD has long anticipated the need for "add-on" GDDR. Hopefully this can be done cost-effectively: 512 MB of slower "add-on" GDDR for lower-resolution applications, and maybe 1 GB of faster GDDR for higher-resolution scenarios.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Interesting, why is memory bandwidth the most important?

If the memory bandwidth drops too low it can "bottleneck" the power of the GPU for gaming applications.
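As a toy illustration of how resolution drives that bandwidth demand (the 4x overdraw figure is made up, and texture/Z traffic is ignored entirely, so real numbers are far higher):

```python
# Estimate raw framebuffer write traffic at a given resolution and frame rate:
# width x height x bytes per pixel x overdraw x frames per second.
def fb_traffic_gbs(width, height, fps, bytes_per_pixel=4, overdraw=4):
    return width * height * bytes_per_pixel * overdraw * fps / 1e9  # GB/s

print(fb_traffic_gbs(1280, 1024, 60))  # ~1.26 GB/s of color writes alone
print(fb_traffic_gbs(1920, 1200, 60))  # higher resolution eats proportionally more
```

The point is the scaling: add texture fetches, Z traffic, and AA, and a shared system-memory bus fills up quickly at higher resolutions.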
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,227
126
I used to think that memory bandwidth mattered for IGPs, but then I tested my friend's AMD dual-core AM2 and 780G motherboard. With dual-channel RAM, it scored 10500 in 3DMark01. With single-channel RAM, it scored 10000.

Now, supposedly, the HT speeds matter more than the RAM bandwidth with 780/785G. Something I don't fully understand.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I used to think that memory bandwidth mattered for IGPs, but then I tested my friend's AMD dual-core AM2 and 780G motherboard. With dual-channel RAM, it scored 10500 in 3DMark01. With single-channel RAM, it scored 10000.

How strong is that IGP? When I looked up the specs for the 790GX chipset it said "40 unified shaders".

If 40 unified shaders = 40 stream processors then I wouldn't imagine system bandwidth holding things back too much.

480 stream processors is quite a leap forward from 40 stream processors. In this case having proper (ie, fast) sideport memory might really make a difference.
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
Llano, paired with a powerful discrete GPU, should be useful for next-gen DX11 games or apps that make use of GPGPU capabilities. AMD's OpenCL kit runs on both GPU and CPU cores, and DirectCompute-enabled apps/games could always leverage the on-die GPU along with the discrete one: Llano's GPU doing physics processing while the discrete GPU pumps through tessellation and heavy shaders. Many possibilities here. For the application space, there are already apps in the works that would take advantage of the GPU: face tagging in images, video effects, audio processing. Lots of cool things to do. Just like Nvidia with its CUDA environment and ecosystem, AMD capitalizes on Microsoft's DX11/DC and OpenCL to develop the software ecosystem necessary for their GPUs and the Fusion family of processors' long-term potential.

It's my firm belief, and I'll say it again: Nvidia and AMD want to differentiate from Intel, and their GPUs and GPGPU computing are their ticket to this. They try hard to convince developers to jump on their bandwagon, start migrating and developing new code, and reap the accelerated benefits. It's a rough ride as of now; even with CUDA/OpenCL/DC, developers are lazy and parallel programming is tough, but all things point to that. Now, imagine what will happen if ready-made libraries/algorithms for programmers, commercial or opensource/free, emerge on the GPU....imaging/physics/video/audio/biology/weather/seismic/oil/finance etc...

It's a long-term development and we're just at the start of all of this; by my estimation things will start to stabilize and mature around the 2012-2013 timeframe. Intel of course are no idiots; they know the inherent potential danger to their interests and plan accordingly. Larrabee will hit the market around 2011.

Some information on Nvidia's vision of the future of GPU computing and their potential market leadership.

http://alienbabeltech.com/main/?p=11825&page=2
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Interesting, why is memory bandwidth the most important?

Well, I said this comparing latency and memory bandwidth, not GPUs in general, but it is true: bandwidth is very important.

I used to think that memory bandwidth mattered for IGPs, but then I tested my friend's AMD dual-core AM2 and 780G motherboard. With dual-channel RAM, it scored 10500 in 3DMark01. With single-channel RAM, it scored 10000.

Now, supposedly, the HT speeds matter more than the RAM bandwidth with 780/785G. Something I don't fully understand.

Well, Hypertransport link speeds matter because of this:

(CPU + memory controller)----(graphics and PCI Express controller)

where --- is the Hypertransport link. If your memory bandwidth exceeds the link speed, increasing the Hypertransport speed will help. Of course, on the Phenom II, that link is 8GB/s. I don't know if your friend's single-channel RAM does a theoretical 8GB/s.

Plus, 3DMark01 is already becoming CPU-bound. Test more modern benches like 3DMark05 or 06, and take a look at the "GPU" portion.
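The bottleneck described above can be sketched as taking the lower of the two figures (the DDR2-800 and 8GB/s HT numbers here are just illustrative):

```python
# The IGP's traffic to system RAM crosses the Hypertransport link, so the
# effective ceiling is whichever of the two paths is slower.
def igp_bw_ceiling_gbs(mem_bw_gbs, ht_bw_gbs):
    return min(mem_bw_gbs, ht_bw_gbs)

# dual-channel DDR2-800 (12.8 GB/s) vs an 8 GB/s HT link: HT is the limit
print(igp_bw_ceiling_gbs(12.8, 8.0))  # 8.0
# single-channel DDR2-800 (6.4 GB/s): now RAM is the limit, faster HT won't help
print(igp_bw_ceiling_gbs(6.4, 8.0))   # 6.4
```

Which is why raising HT speed helps in the dual-channel case but not the single-channel one, matching the small 3DMark delta reported above.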
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Llano, paired with a powerful discrete GPU, should be useful for next-gen DX11 games or apps that make use of GPGPU capabilities.

How about Llano as a laptop gamer?

Or will power management/battery life be the major downfall? (assuming sideport GDDR is able to solve issues surrounding memory bandwidth)
 

LoneNinja

Senior member
Jan 5, 2009
825
0
0
Why are you worried that Llano will have reduced performance for gamers because it'll have a built-in GPU? Llano is AMD's budget solution for 2011; gamers looking for top performance won't even want to buy Llano, they'll want Bulldozer. It's really no different from how Intel has the i3 right now and the i5/i7 for higher performance.
 

Kuzi

Senior member
Sep 16, 2007
572
0
0
How about Llano as a laptop gamer?

Or will power management/battery life be the major downfall? (assuming sideport GDDR is able to solve issues surrounding memory bandwidth)

I believe Llano used in laptops will have a better performance/power-efficiency ratio than anything AMD has out right now, and it should perform well enough in games, especially if sideport memory is used.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I believe Llano used in laptops will have a better performance/power efficiency ratio than anything AMD has out right now, and it should perform well enough in games especially if sideport memory is used.

Yep, I am thinking Llano and a budget laptop screen of 13x7 (or 14x9) resolution should go pretty nicely together for a gamer.

The power management strategy for the CPU side of things looks like it is shaping up pretty nicely. Hopefully the idle/2D power consumption for the strong IGP comes through as well.
 