Sandy Bridge & Llano bad for gamers?

biostud

Lifer
Feb 27, 2003
19,952
7,049
136
With Sandy Bridge and Llano appearing as the first 32nm quads you will have to ask:

How will it benefit me (as a gamer)?

I can only see negative aspects of putting a GPU on the CPU if you're a gamer.

1. The GPU/IGP is not very powerful, so you will still need a video card. (480 SPs for Llano vs. 1600 on the 5870, not to mention next-gen video cards)
2. The GPU/IGP then becomes redundant, but will still take up die space, making the chip use more power and run hotter, which will make it more difficult to o/c.
3. Larger die = higher price

I'm sure the CPU parts will be faster than PII/i7, but I would rather have the CPU without the GPU part.

I might have overlooked something like hybrid CF on Llano or power-saving features, but otherwise I can't see how this move is a good thing.

Discuss :p
 

zsdersw

Lifer
Oct 29, 2003
10,505
2
0
Well, numbers 1 and 3 on your list are dissonant. You cannot complain about inadequate horsepower and needing a separate video card and yet also complain that the larger die (needed to accommodate any IGP) increases the price. What if the price/performance is better with the IGP than a separate video card? Not likely, but it could happen for some gamers and some games.

Other than that, I find no reason to believe that the GPU/IGP won't be disable-able.
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Llano is AMD's analog of the Core i3. It is not their performance 32nm architecture, which is called Bulldozer. Llano is the low-end, low-power, value integrated platform, sitting where the Athlon II X4 sits today: a quad-core Phenom II on 32nm. Bulldozer is a larger 32nm 8-core with 256-bit vectors, similar to Sandy Bridge. It will have something like hyper-threading in that the operating system will see 16 "cores," but the approach is nothing like Intel's. The dual schedulers, L1, and pipeline clusters (see Anand's article) appear to be a far more robust approach to threading out a single core; Nvidia and RV870 have employed similar encapsulation to great effect.

You'd also hope for decent clocks, seeing as it's 32nm SOI with IBM's high-k. We'll have to see. We have no reason to expect any dazzling advances in single-threaded performance, so high frequencies are the best you can hope for.

I don't understand all the panic about integrated GPUs making discrete performance devices obsolete. There's just no way. Bigger CPUs are still faster than little CPUs, and big GPUs are still faster than little integrated GPUs.

If you need exceptional performance in one area or another, then you pick your part appropriately. If you are encoding x264 video, then you get the 8-core CPU, because we have no hope of encoding that in OpenCL any time soon. Likewise, if you're playing high-def games, you should probably go for more than 480 shaders. Llano doesn't presume to be a performance part for any 32nm-generation segment, but it does offer more than enough performance and flexibility (even enough for games below 1680x1050, go figure!) for its power envelope, and it is interesting in its own right because it represents the most integrated and capable IGP we are going to see any time soon.
 

frostedflakes

Diamond Member
Mar 1, 2005
7,925
1
81
The problem is that you are looking at it as a GPU, when modern GPUs have become so much more. I think the idea that Intel and AMD are trying to push is that these modern stream processors are not GPUs, but rather very powerful floating-point processors that can be used for many computing tasks other than rasterisation. This seems to be the direction that AMD is heading in, at least; I think they're hoping that in the future all software will be able to use the programmable shaders in GPUs. So you shouldn't really look at it as wasted silicon: it could be used for physics calculations in your games or other tasks. In the near future, though, it probably will be of limited use to most people, as software support for GPGPU isn't that great.
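
To make that concrete, here's roughly what using the shaders as plain floating-point hardware looks like. This is just a minimal sketch (my own illustration, using the third-party pyopencl bindings; any OpenCL stack would do the same job): a vector add runs on the GPU's stream processors with no graphics pipeline involved.

```python
# Minimal GPGPU sketch: use the GPU's stream processors as plain
# floating-point hardware (vector add), no rasterisation involved.
# Assumes the third-party pyopencl package and an OpenCL-capable device.
import numpy as np
import pyopencl as cl

a = np.random.rand(1000000).astype(np.float32)
b = np.random.rand(1000000).astype(np.float32)

ctx = cl.create_some_context()   # grab whatever OpenCL device is available
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# Each work-item (running on one "shader") handles one array element.
program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out)
{
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)
```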
 

biostud

Lifer
Feb 27, 2003
19,952
7,049
136
Well, numbers 1 and 3 on your list are dissonant. You cannot complain about inadequate horsepower and needing a separate video card and yet also complain that the larger die (needed to accommodate any IGP) increases the price. What if the price/performance is better with the IGP than a separate video card? Not likely, but it could happen for some gamers and some games.

Other than that, I find no reason to believe that the GPU/IGP won't be disable-able.

I'm not complaining about the GPU performance; I'm just stating that the GPU will never be fast enough for gamers, so there's no need to include it.
 

zsdersw

Lifer
Oct 29, 2003
10,505
2
0
I'm not complaining about the GPU performance; I'm just stating that the GPU will never be fast enough for gamers, so there's no need to include it.

Sure there's a need: a significant majority don't need anything beyond a competent IGP. Gamers are not the only segment of the consumer side that drives CPU development.

There's no economy of scale for Intel or AMD in making an IGP-less version of their next CPUs just for gamers when there's a clear alternative: add a feature to disable the IGP in the BIOS.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I can only see negative aspects of putting a GPU on the CPU if you're a gamer.

You are basically getting a 32nm Athlon II x4 (with turbo mode and other features) alongside a 480 stream processor GPU.

Don't you think a lot of people will be able to game on that alone?

I'm sure the CPU parts will be faster than PII/i7, but I would rather have the CPU without the GPU part.

The Llano CPU may not end up being faster than Phenom II (due to the lack of L3 cache), but a person will be able to buy "Bulldozer" without a fused GPU.
 

Ben90

Platinum Member
Jun 14, 2009
2,866
3
0
I think the OP is talking about something a little different. If I understand correctly, he is basically just worried about the extra costs and architectural performance hits of forcing users to buy a chip with something gamers are just going to turn off.

Personally, I don't believe there is too much to fret about. In the event of an absolute worst-case scenario, we just switch over to Xeon processors and eat the 10% price premium; life goes on and we get to complain about spending an extra 30 dollars.

However, eventually GPU compute (or whatever it's called) will take over. Once it becomes seamless enough, e.g., Optimus, I see a huge benefit in having that extra SOI GPU crunching along.
 

Hyperlite

Diamond Member
May 25, 2004
5,664
2
76
I'm not complaining about the GPU performance; I'm just stating that the GPU will never be fast enough for gamers, so there's no need to include it.

ORLY? And what portion of AMD's or Intel's quarterly sales do you presume to make up?
 

frostedflakes

Diamond Member
Mar 1, 2005
7,925
1
81
You are basically getting a 32nm Athlon II x4 (with turbo mode and other features) alongside a 480 stream processor GPU.

Don't you think a lot of people will be able to game on that alone?



The Llano CPU may not end up being faster than Phenom II (due to the lack of L3 cache), but a person will be able to buy "Bulldozer" without a fused GPU.
Yeah, 480 stream processors, especially if they're clocked at a pretty high speed, should be very potent. That would be more than adequate for mainstream gamers. The HD 4670 only has 320 and is fine even for modern games as long as you don't start cranking up the AA and stuff like that. Of course, there's more to GPU performance than just that; a dedicated card probably has a ton more memory bandwidth available than Llano will end up having.

And as others have pointed out, the main idea behind integrating the CPU and GPU currently is to reduce costs, so I doubt AMD's and Intel's high-end processors in the near future will do this. But as I kind of alluded to in my last post, I think once GPGPU starts to catch on, all CPUs will start to include high-performance FPUs onboard that could be used as a GPU or for general computation.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Yeah, 480 stream processors, especially if they're clocked at a pretty high speed, should be very potent. That would be more than adequate for mainstream gamers. The HD 4670 only has 320 and is fine even for modern games as long as you don't start cranking up the AA and stuff like that. Of course, there's more to GPU performance than just that; a dedicated card probably has a ton more memory bandwidth available than Llano will end up having.

Good point about the bandwidth.

Hmmm... checking out the reviews should be pretty interesting, then. I wonder how well dual-channel system DDR3 will work for this purpose?

P.S. By the time this Llano APU gets released, ATI will no doubt be very close to the HD 6xxx series (which essentially doubles the standard). Still, if the memory bandwidth scales well enough, this GPU/CPU combination might end up being a well-balanced option for gamers.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
21,080
3,582
126
I thought the whole point of making the GPU on the same die as the CPU was to make the pipelines shorter so it wouldn't need all that bandwidth.

I remember reading a long time ago about a polymorphic die. Basically, it's a universal die that can act as CPU + GPU + cache.

The proposal was having 6-8 of these guys, and they would switch on the fly to whatever was needed. If it was a 3D application, you'd have most of the morphic dies be GPUs + cache, while if it was a CPU-bound workload, they would turn into CPUs + cache.

It was a pretty neat concept, but I think it died out...
 

RaiderJ

Diamond Member
Apr 29, 2001
7,582
1
76
With Sandy Bridge and Llano appearing as the first 32nm quads you will have to ask:

How will it benefit me (as a gamer)?

I can only see negative aspects of putting a GPU on the CPU if you're a gamer.

1. The GPU/IGP is not very powerful, so you will still need a video card. (480 SPs for Llano vs. 1600 on the 5870, not to mention next-gen video cards)
2. The GPU/IGP then becomes redundant, but will still take up die space, making the chip use more power and run hotter, which will make it more difficult to o/c.
3. Larger die = higher price

I'm sure the CPU parts will be faster than PII/i7, but I would rather have the CPU without the GPU part.

I might have overlooked something like hybrid CF on Llano or power-saving features, but otherwise I can't see how this move is a good thing.

Discuss :p

1) Certainly, anytime you want more performance, you need more silicon. That's true of any hardware component. Certain improvements can be made with architectural changes, and moving the GPU onto the die with the CPU is huge.

2) I don't see why the IGP/GPU would be redundant. Power isn't an issue if the unneeded cores are throttled properly, which it sounds like AMD is doing. All that's needed is for software to be available to utilize the IGP/GPU cores. Someone mentioned physics calculations; those would be perfect to run on a GPU/CPU die, leaving gaming graphics to an add-in card.

3) Chips are pretty damn cheap. I can't remember the last time I paid over $200 for a CPU... if I ever did.


I really don't understand all the concern about an on-die GPU and how it might be bad. Lots of stuff has moved on-die: the FPU, memory controller, cache, etc. All were improvements, and I see no reason why this would be different. The fact that Intel and AMD are both doing it should be a sign we're going in the right direction. Heck, even Sony's Cell processor is basically doing the same thing.
 

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
With Sandy Bridge and Llano appearing as the first 32nm quads you will have to ask:

How will it benefit me (as a gamer)?

I can only see negative aspects of putting a GPU on the CPU if you're a gamer.

1. The GPU/IGP is not very powerful so you will still need a videocard. (480sp for Llano vs 2000 on the 5870, not to mention next gen. videocards)
2. The GPU/IGP then become redundant, but will still take up die space making it use more power, run hotter which will make it more difficult to o/c.
3. Larger die = higher price

I'm sure the CPU parts will be faster than PII/i7, but I would rather have the CPU without the GPU part.

I might have overlooked something like hydrid CF on the Llano, powersaving features, but otherwise I can't see how this move is a good thing.

Discuss :p


Well, why can't you? AMD will still carry both a top-to-bottom discrete CPU line and a GPU line. If you don't want or need accelerated computing, buy the hardware without that option. On the other hand, if you need acceleration for something like, say, physics with OpenCL, then you have that option too... :) I see this as a win-win situation for AMD, and they look to be taking full advantage of their ATI acquisition. Now it's time to watch the naysayers sit back and suck it up.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
21,080
3,582
126
I don't think the on-die GPU will ever replace something like this:
[attached image: IMG_1309.jpg]


So it's not a win for everyone.

And only a gamer would have that much GPU power...

Or a major folder, but then it would be Nvidia.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
2) I don't see why the IGP/GPU would be redundant. Power isn't an issue if the unneeded cores are throttled properly, which it sounds like AMD is doing. All that's needed is for software to be available to utilize the IGP/GPU cores. Someone mentioned physics calculations; those would be perfect to run on a GPU/CPU die, leaving gaming graphics to an add-in card.

Speaking of GPU physics, is DirectCompute the Microsoft version of Nvidia PhysX?

Briefly looking at the description, it looks like the CUDA architecture can run DirectCompute... but I wonder how different the two are from each other?
 

waffleironhead

Diamond Member
Aug 10, 2005
7,066
571
136
Speaking of GPU physics, is DirectCompute the Microsoft version of Nvidia PhysX?

Briefly looking at the description, it looks like the CUDA architecture can run DirectCompute... but I wonder how different the two are from each other?

IIRC, DirectCompute is Microsoft's answer to CUDA. PhysX should be able to run off of CUDA or DirectCompute (as long as it is written for it).
 

Ben90

Platinum Member
Jun 14, 2009
2,866
3
0
I thought the whole point of making the GPU on the same die as the CPU was to make the pipelines shorter so it wouldn't need all that bandwidth.

I remember reading a long time ago about a polymorphic die. Basically, it's a universal die that can act as CPU + GPU + cache.

The proposal was having 6-8 of these guys, and they would switch on the fly to whatever was needed. If it was a 3D application, you'd have most of the morphic dies be GPUs + cache, while if it was a CPU-bound workload, they would turn into CPUs + cache.

It was a pretty neat concept, but I think it died out...

An HD 5670 has the equivalent bandwidth of a quintuple DDR3-1600 setup. This is where all IGPs fall short (among other things), as they have to use system RAM. http://en.wiktionary.org/wiki/quintuple
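
For anyone who wants to check my math, here's the back-of-envelope version (my own numbers, assuming the 5670's stock 128-bit bus with GDDR5 at 1000 MHz):

```python
# HD 5670: 128-bit bus, GDDR5 at 1000 MHz (4 data transfers per clock)
hd5670_bw = (4 * 1000e6) * (128 // 8) / 1e9
print("HD 5670:      %.1f GB/s" % hd5670_bw)         # 64.0 GB/s

# One 64-bit channel of DDR3-1600 moves 1600 MT/s * 8 bytes
ddr3_channel = 1600e6 * 8 / 1e9
print("DDR3-1600 x1: %.1f GB/s" % ddr3_channel)      # 12.8 GB/s

print("channels to match: %.0f" % (hd5670_bw / ddr3_channel))  # 5
```

So a dual-channel IGP platform starts with well under half the bandwidth of even a low-end discrete card, and it has to share that with the CPU.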
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
According to Fudzilla there will be 2 IGP cores on SB. Which I find a little odd.

This seems extremely redundant. On a CPU there's a difference, because single-threaded applications have a hard time extracting IPC, but on a GPU there's no such problem: one die with 2 GPU cores of 300 SPs each is basically the same thing as 1 GPU with 600 SPs. Fudzilla is confused, IMO.

I think once GPGPU starts to catch on, all CPUs will start to include high-performance FPUs onboard that could be used as a GPU or for general computation.

In the further future, maybe, but not in the Llano generation. In order to be relevant to general computation, it'll have to support the IEEE SP and DP standards properly. I don't see an on-die iGPU's FPU outperforming a high-end CPU's FP units, like Sandy Bridge's or Bulldozer's, even though the theoretical FLOPS might be higher. For a proper replacement, the CPU pipeline will have to be modified to support GPU code so it's fully integrated, not the GPU-on-die + fast interconnect that next-gen CPUs will have.

Real integration of GPUs as part of the CPU pipeline is probably happening in the Haswell timeframe. In the meantime, it just makes things more INTERESTING, no more.
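
As a quick illustration of why proper IEEE DP support matters for general computation (my own toy example, plain numpy, nothing GPU-specific): naively accumulating in single precision, the kind of arithmetic a graphics-oriented part is built around, drifts visibly, while double precision holds up.

```python
# Naive running sum of 0.1, one million times: the access pattern of a
# long simulation loop. Single precision drifts; double stays on target.
import numpy as np

t32 = np.float32(0.0)
t64 = 0.0                      # Python float = IEEE double
for _ in range(1000000):
    t32 += np.float32(0.1)
    t64 += 0.1

print(t32)  # noticeably off from 100000 (rounding error accumulates)
print(t64)  # ~100000.000001 -- essentially exact
```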
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
An HD 5670 has the equivalent bandwidth of a quintuple DDR3-1600 setup. This is where all IGPs fall short (among other things), as they have to use system RAM. http://en.wiktionary.org/wiki/quintuple

The bright side is that this might spark some more interest in system memory (i.e., what are the bang-for-the-buck sticks that increase gaming performance?).

Or maybe the bandwidth situation is so bad it really doesn't matter?
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
On GPUs, the overwhelmingly important memory characteristic is bandwidth; latency hardly matters. I guess overclocked memory modules will matter, if the IGP's performance becomes relevant enough for that. Then again, overclocking memory won't offer you 4-5x the bandwidth, but rather about a tenth of that gain.
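
Rough numbers behind that (my assumptions: a dual-channel DDR3-1600 baseline pushed to DDR3-2000):

```python
# Dual-channel DDR3 bandwidth, stock vs. a healthy memory overclock
stock = 2 * 1600e6 * 8 / 1e9   # DDR3-1600 x2: 25.6 GB/s
oc    = 2 * 2000e6 * 8 / 1e9   # DDR3-2000 x2: 32.0 GB/s
print("gain: %.2fx" % (oc / stock))   # ~1.25x -- nowhere near 4-5x
```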