Nvidia GPUs soon a fading memory?


GaiaHunter

Diamond Member
Jul 13, 2008
Actually you need a lot more than that, since the memory is SHARED between CPU and GPU.
This means two things:
1) There will be bus contention issues.
2) There will be a lot of non-localized memory accesses, as the CPU and GPU work in different memory areas.

These two factors mean that efficiency quickly drops well below the maximum theoretical bandwidth (IGPs rarely performed anywhere near the theoretical maximum bandwidth either).
Since AMD is going to use the existing Stars architecture and therefore its memory controller... where exactly is there room for any 'magic' to increase memory performance anyway? It just doesn't add up.

As opposed to current IGPs, which require chip-to-chip crossing between the memory controller and either the GPU or CPU, in Llano the SIMDs, the UVD and the x86 cores will be attached to the same High Speed Bus that will connect directly to the memory.

Now you think, shared and non-localized memory access.

The thing is the memory will be divided by regions. One managed by the OS running on the x86 cores and the other managed by the software programs running on the SIMD engines.

So, as you can see (all this information is in the white paper) AMD is looking at the memory access department.

All in all I get a deja-vu feeling about all this. It sounds like as big a pipe dream as Barcelona was... "40% faster than Clovertown"... yeah right... in reality it was pretty much the other way around.
Don't get your hopes up.

Except in that case AMD, even at its best, was always behind Intel (actually they were a bit ahead, because Intel insisted on the wrong architecture when they already had the foundation of the Core 2 architecture).

Concerning the graphics department, that is a completely different story.

I guess that remark means you also understand why, IF AMD can deliver what it promised, this is potentially a market-changing product.

This also shows why AMD isn't too concerned with increasing the GPGPU power of their graphics cards atm.
 

Scali

Banned
Dec 3, 2004
As opposed to current IGPs, which require chip-to-chip crossing between the memory controller and either the GPU or CPU, in Llano the SIMDs, the UVD and the x86 cores will be attached to the same High Speed Bus that will connect directly to the memory.

That 'high speed bus' was already available anyway: HyperTransport.
We saw with the introduction of the first dual-cores from AMD that the performance wasn't really any better than two single-core Opterons on a two-socket board, using HyperTransport.
So I see no reason to believe that it's any different for GPUs and CPUs sharing a memory controller via HyperTransport.

Now you think, shared and non-localized memory access.

The thing is the memory will be divided by regions. One managed by the OS running on the x86 cores and the other managed by the software programs running on the SIMD engines.

So, as you can see (all this information is in the white paper) AMD is looking at the memory access department.

Doesn't change the argument one bit.
You can only get maximum memory bandwidth if you use burst transfers. As soon as the CPU accesses even a single byte of memory, your GPU's burst transfer will be interrupted, and a new burst transfer will have to be set up.
CPUs themselves already suffer heavily from this, as applications rarely rely only on burst transfers. You see an almost exponential dropoff in memory performance as soon as you need to read/write to two memory streams at a time or more. And AMD has always suffered even more from this than Intel, since AMD's speculative prefetching and cache associativity have never been quite as good as Intel's.
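The effect described here can be sketched with a toy model (all numbers below are illustrative assumptions, not measurements of any real memory controller): the longer a burst runs uninterrupted, the better the setup overhead is amortized.

```python
# Toy model: effective DRAM bandwidth when every burst pays a fixed
# setup penalty after being interrupted. Peak bandwidth, burst sizes
# and the penalty are assumed, illustrative numbers.

def effective_bandwidth(peak_gbs, burst_bytes, setup_ns):
    """Effective bandwidth in GB/s for bursts of `burst_bytes`
    separated by a `setup_ns` re-setup penalty."""
    transfer_ns = burst_bytes / peak_gbs  # 1 GB/s == 1 byte/ns
    return peak_gbs * transfer_ns / (transfer_ns + setup_ns)

# One long streaming access: the setup cost is well amortized.
print(round(effective_bandwidth(21.3, 4096, 20), 1))  # 19.3
# CPU and GPU interleaving chops bursts short: every access by the
# other client forces a new setup.
print(round(effective_bandwidth(21.3, 64, 20), 1))    # 2.8
```

The model is crude, but it shows why interleaved access patterns can cost far more than their raw byte count suggests.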

Except in that case AMD, even at its best, was always behind Intel

That's not the point, is it?
AMD hailed native quadcore as vastly superior, having great advantages for performance... Nothing of that ever materialized.
Why would this case be any different?

I guess that remark means you also understand why, IF AMD can deliver what it promised, this is potentially a market-changing product.

Well, if AMD actually manages to copy-paste a Radeon 5570 GPU into a CPU while maintaining the performance level, then yes that means the low-end discrete graphics market has just gotten smaller.
However, I would find it remarkable if AMD would even manage to get Radeon 5450 performance out of a 5570 GPU copy-pasted into a CPU, given the obvious bottlenecks.
And a 5450 would not be THAT spectacular, as it is about twice as fast as current IGPs, and still only about half as fast as the 5570.

The gap from current IGPs to a Radeon 5570 is just VERY large, and not something you can do in a single leap. You would need a small revolution, and Llano is anything but revolutionary. Neither the GPU nor the CPU is different from what we have on the market today, and they're not doing much more than copy-paste to get them on a single chip either. It's just evolution... the same evolution that Intel has gone through with their current i3/i5 series with integrated IGP.
 

GaiaHunter

Diamond Member
Jul 13, 2008
That 'high speed bus' was already available anyway: HyperTransport.
We saw with the introduction of the first dual-cores from AMD that the performance wasn't really any better than two single-core Opterons on a two-socket board, using HyperTransport.
So I see no reason to believe that it's any different for GPUs and CPUs sharing a memory controller via HyperTransport.

On the other hand you've never seen a GPU+CPU doing that. Or anyone, for that matter, except the AMD engineers and some partners that saw Llano samples.


Doesn't change the argument one bit.
You can only get maximum memory bandwidth if you use burst transfers. As soon as the CPU accesses even a single byte of memory, your GPU's burst transfer will be interrupted, and a new burst transfer will have to be set up.
CPUs themselves already suffer heavily from this, as applications rarely rely only on burst transfers. You see an almost exponential dropoff in memory performance as soon as you need to read/write to two memory streams at a time or more. And AMD has always suffered even more from this than Intel, since AMD's speculative prefetching and cache associativity have never been quite as good as Intel's.

On the other hand ATi memory performance has been top notch, hasn't it?

I don't know what the solution is - AMD hasn't disclosed the GPU specs yet. Is it an on-die cache? A side buffer? Some tweak on the memory controller?

It would be speculation.

That's not the point, is it?
AMD hailed native quadcore as vastly superior, having great advantages for performance... Nothing of that ever materialized.
Why would this case be any different?

And Intel said that Netburst was the shit and was all about core speed. That didn't prevent them from creating the Core 2 and the i series since then.

NVIDIA said the GF FX was the shit and it was a load of crap. And lo and behold, the 6 and 7 series were quite good, and the 8 series dominated the scene for like 2 years and you still have products based on it.

So since AMD had a failure in the past that means they will never make a good product again?

The gap from current IGPs to a Radeon 5570 is just VERY large, and not something you can do in a single leap. You would need a small revolution, and Llano is anything but revolutionary. Neither the GPU nor the CPU is different from what we have on the market today, and they're not doing much more than copy-paste to get them on a single chip either. It's just evolution... the same evolution that Intel has gone through with their current i3/i5 series with integrated IGP.

The Intel design is really just an IGP glued to the CPU - CPU+GPU on the same package. AMD's approach puts both on the same die, connecting both components directly to the memory instead of needing an external bus for the IGP.

I'm simply saying that considering what has been leaked about Llano transistor count and the die size shot, that Llano seems to be quite a big chip.

If all of that isn't true, then my argument won't be the same.

Again I don't see any reason for AMD to use such a large number of SIMDs if they can't extract performance from it.

While AMD has quite big CPUs compared to the competition, ATi chips are quite small compared to the competition.

From my point of view you can't just say "look, AMD's CPU memory controller can't extract that much bandwidth, so the GPU won't be able to either" while disregarding that ATi, and its know-how, is part of AMD.

So while you prefer to think that AMD is being idiotic in this case, I prefer to think (if the GPU portion of Llano really is 400-480 SP and is using around 600 million transistors) that AMD might know something we don't.
 

Scali

Banned
Dec 3, 2004
On the other hand you've never seen a GPU+CPU doing that. Or anyone, for that matter, except the AMD engineers and some partners that saw Llano samples.

As I say, CPU+CPU or GPU+CPU... what's the difference? In the end it's just pumping data over a bus.

On the other hand ATi memory performance has been top notch, hasn't it?

That's because they don't have to share with a CPU. A GPU has a VERY simple memory subsystem. Everything is VERY predictable and very linear. No random access or anything.
Except with GPGPU, and that's where ATi isn't top notch at all. nVidia's architecture handles random access considerably better, because of their shared memory architecture.

So since AMD had a failure in the past that means they will never make a good product again?

That's not the point.
The point is that AMD claimed that putting the cores on a single die would make them communicate a lot faster. Which couldn't be further from the truth.
AMD is making the exact same claims now: that putting a GPU and a CPU on one die will make them a lot faster. I don't see why it would work this time around.

The Intel design is really just an IGP glued to the CPU - CPU+GPU on the same package. AMD's approach puts both on the same die, connecting both components directly to the memory instead of needing an external bus for the IGP.

I hate to keep repeating myself, but AMD has never managed to prove that their direct connection was any faster than their HyperTransport bus.

I'm simply saying that considering what has been leaked about Llano transistor count and the die size shot, that Llano seems to be quite a big chip.

All of a sudden big chips are a good thing?

Again I don't see any reason for AMD to use such a large number of SIMDs if they can't extract performance from it.

Just because AMD does something doesn't guarantee it's going to work.
The Radeon 2900, nVidia Fermi, Intel Larrabee... they all used a large number of SIMDs, but none of them managed to extract the performance from them convincingly.

From my point of view you can't just say "look, AMD's CPU memory controller can't extract that much bandwidth, so the GPU won't be able to either" while disregarding that ATi, and its know-how, is part of AMD.

And I've continuously been trying to explain that a GPU is a completely different beast, since it doesn't have to share its memory controller with other cores, and it is highly predictable and linear in nature. When you glue a GPU to a quadcore, you suddenly have 4 extra cores running threads that do completely different things with the memory. THAT is where the problem is, and THAT is where ATi has no experience at all (their GPGPU architecture has quite a big weak spot there as well).

So while you prefer to think that AMD is being idiotic in this case, I prefer to think (if the GPU portion of Llano really is 400-480 SP and is using around 600 million transistors) that AMD might know something we don't.

As I say, wishful thinking. Technical facts indicate otherwise.
 

GaiaHunter

Diamond Member
Jul 13, 2008
All of a sudden big chips are a good thing?

Quite the contrary.

Why would they glue a 5570 to their CPU to perform like a 5450 or worse, if they can just sell an Athlon II X4 and a 5570 and make more money?

Why not just use Intel's approach?

And I've continuously been trying to explain that a GPU is a completely different beast, since it doesn't have to share its memory controller with other cores, and it is highly predictable and linear in nature. When you glue a GPU to a quadcore, you suddenly have 4 extra cores running threads that do completely different things with the memory. THAT is where the problem is, and THAT is where ATi has no experience at all (their GPGPU architecture has quite a big weak spot there as well).

And why are you assuming the memory controller is going to be exactly the same as in the STARS architecture when AMD said it is going to be different?

As I say, wishful thinking. Technical facts indicate otherwise.

Technical facts?

Basically you are looking at the current CPU memory controller and saying that it doesn't work with GPUs, so we are limited to current IGP performance.

AMD, on the other hand, is saying that they can have the GPU and the CPU use the memory controller and the memory independently, which doesn't occur with today's IGPs nor with Intel's implementation.
 

Dribble

Platinum Member
Aug 9, 2005
Memory bandwidth of a gpu on its own card can be something like 10 times the speed of a cpu, due to it generally using more advanced memory formats and wider buses. If you make a gpu use cpu memory it's going to be very slow, not even factoring in the fact that the cpu already needs most of the bandwidth.

Combine that with the heat/size/cost restrictions and you come up with this basically being a gpu to run windows, video and perhaps flash. Microsoft have already done this, AMD is just playing catchup. Sure they have ati so I expect their gpu might manage full HD video better than microsoft, and their drivers will suck less, but it'll still be a puny little thing and naff for gaming.
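The rough arithmetic behind that "10 times" claim can be sketched like this (the clock and bus figures are approximations of parts from that era, assumed for illustration):

```python
# Peak memory bandwidth = bus width in bytes * transfer rate.
def bandwidth_gbs(bus_bits, rate_mts):
    return bus_bits / 8 * rate_mts / 1000  # MT/s -> GB/s

# High-end discrete card: 256-bit GDDR5 at ~4.8 GT/s (HD 5870-class).
print(bandwidth_gbs(256, 4800))            # 153.6
# Dual-channel DDR3-1333 system memory, shared by CPU and GPU.
print(round(bandwidth_gbs(128, 1333), 1))  # 21.3
```

That's roughly a 7x gap before the CPU takes its share, which is the right order of magnitude for the claim.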
 

GaiaHunter

Diamond Member
Jul 13, 2008
Memory bandwidth of a gpu on its own card can be something like 10 times the speed of a cpu, due to it generally using more advanced memory formats and wider buses. If you make a gpu use cpu memory it's going to be very slow, not even factoring in the fact that the cpu already needs most of the bandwidth.

Combine that with the heat/size/cost restrictions and you come up with this basically being a gpu to run windows, video and perhaps flash. Microsoft have already done this, AMD is just playing catchup. Sure they have ati so I expect their gpu might manage full HD video better than microsoft, and their drivers will suck less, but it'll still be a puny little thing and naff for gaming.

For that you already have IGPs.

And btw it is Intel - not Microsoft.
 

Scali

Banned
Dec 3, 2004
Why would they glue a 5570 to their CPU to perform like a 5450 or worse, if they can just sell an Athlon II X4 and a 5570 and make more money?

Why not just use Intel's approach?

Why didn't they use Intel's approach for Barcelona?
Only the AMD people can tell you why.
We'll have to wait and see. Just because AMD decided to go down this path doesn't mean it's the right one. AMD/ATi have had so many failures that I certainly don't trust their judgement blindly.

And why are you assuming the memory controller is going to be exactly the same as in the STARS architecture when AMD said it is going to be different?

What AMD says and what AMD does is not always the same.

AMD, on the other hand, is saying that they can have the GPU and the CPU use the memory controller and the memory independently, which doesn't occur with today's IGPs nor with Intel's implementation.

We'll have to wait and see. AMD has not given me ANY indication that they solved the problems AT ALL. They just say "yea Fusion is going to be great", but they did the same with Barcelona, and that ended up AMD's biggest failure in history.
With Barcelona I also said the same thing long before it was launched: AMD has not given me ANY indication that they're going to get the architecture right for a native quadcore. And I was right. Barcelona turned out to be little more than four Athlon64 cores glued together with some poorly performing L3 cache (not even taking the TLB bug and its slow workaround into account).
It's going to be the same again this time around.... Llano is not going to be anywhere near as fantastic as AMD wants you to believe. Where is the magic technology that's going to make it a success? I haven't seen or heard a thing about it... just like with Barcelona. It isn't there. What they say it's going to do, and what their technology is actually capable of, are not the same thing.
 

Fox5

Diamond Member
Jan 31, 2005
Quite the contrary.

Why would they glue a 5570 to their CPU to perform like a 5450 or worse, if they can just sell an Athlon II X4 and a 5570 and make more money?

Why not just use Intel's approach?



And why are you assuming the memory controller is going to be exactly the same as in the STARS architecture when AMD said it is going to be different?



Technical facts?

Basically you are looking at the current CPU memory controller and saying that it doesn't work with GPUs, so we are limited to current IGP performance.

AMD, on the other hand, is saying that they can have the GPU and the CPU use the memory controller and the memory independently, which doesn't occur with today's IGPs nor with Intel's implementation.

Fusion isn't here yet; by the time it arrives, 5570 performance isn't going to be so hot. Fusion will likely be a speed grade beyond what igps usually are, but it's not going to be amazing or high end once it launches.

Also, AMD selling an all in one chip for around $100 (quad core plus graphics) could very well mean getting a sale versus selling nothing at all, or a lower margin product instead.
 

GaiaHunter

Diamond Member
Jul 13, 2008
Fusion isn't here yet; by the time it arrives, 5570 performance isn't going to be so hot. Fusion will likely be a speed grade beyond what igps usually are, but it's not going to be amazing or high end once it launches.

Isn't IGP performance in like the 6600 GT to 7600 GT range at best?

Also, AMD selling an all in one chip for around $100 (quad core plus graphics) could very well mean getting a sale versus selling nothing at all, or a lower margin product instead.

They could do the same with a far smaller GPU, which would bring Llano's manufacturing cost down. Especially if those SPs aren't being used.

Unless AMD is just adding those SPs there for physics and other OpenCL-accelerated applications.

Although considering the recent AMD/ATi philosophy of going for the smallest/least complex approach possible (not in some of the current CPU projects, which are legacy from years back, but in projects like BD and Bobcat), that doesn't seem right.
 

GaiaHunter

Diamond Member
Jul 13, 2008
AMD has not given me ANY indication that they solved the problems AT ALL. They just say "yea Fusion is going to be great", but they did the same with Barcelona, and that ended up AMD's biggest failure in history.

Before you posted in this thread you didn't even know that Llano is supposed to be a 400-480 SP part; you thought it was simply a low-end IGP+CPU on the same die, very similar to the Intel approach. So it isn't surprising that AMD hadn't given you any indication it had solved a problem you didn't even know existed.
 

Scali

Banned
Dec 3, 2004
Before you posted in this thread you didn't even know that Llano is supposed to be a 400-480 SP part; you thought it was simply a low-end IGP+CPU on the same die, very similar to the Intel approach. So it isn't surprising that AMD hadn't given you any indication it had solved a problem you didn't even know existed.

Oh please. I never said it was going to be the same low-level IGP as Intel. I just said that I hadn't seen an official statement from AMD about the 400-480 SP (and still haven't).
Aside from that, you can go attack me all you want, but that doesn't change the fact that YOU don't know anything about how AMD would possibly solve those problems either.
All you've said so far is "I have faith that AMD has solved it". That doesn't convince an atheist such as myself, obviously.
So, if you can come up with the cold hard facts on how AMD is going to do this, we can talk... else it's just your word against mine. And then the discussion ends here. You've dragged it out long enough with your pathetic fanboy wishful thinking. And now you go for personal attacks? Get lost.
But I'll sweeten the deal... $5 says that Llano will NOT deliver the same performance as a discrete Radeon 5570.
 

Fox5

Diamond Member
Jan 31, 2005
Isn't IGP performance in like the 6600 GT to 7600 GT range at best?



They could do the same with a far smaller GPU, which would bring Llano's manufacturing cost down. Especially if those SPs aren't being used.

Unless AMD is just adding those SPs there for physics and other OpenCL-accelerated applications.

Although considering the recent AMD/ATi philosophy of going for the smallest/least complex approach possible (not in some of the current CPU projects, which are legacy from years back, but in projects like BD and Bobcat), that doesn't seem right.

IGP performance is typically on par with the lowest add-in card of the generation. It's generally the exact same chip, and low-end cards aren't equipped with fast memory. Generally, AMD's and nvidia's igps perform on par with the absolute low-end cards targeted for HTPC use. By the time fusion launches (2011), it should only be targeting the next step up of cards, the very beginning of the cards that are at all useful for gaming.

I think the top IGPs may have already surpassed 7600gt performance though; nvidia's top igp offers roughly 8500gt to 8600gt performance.
There are a lot of graphics tasks that aren't that bandwidth limited, and AMD may share the cache with the igp (not likely in the initial version of fusion), add dedicated memory on the motherboard (very likely since they've done it before), or use additional bandwidth compression techniques (limited effectiveness).
A DDR3-equipped AM3 system can already offer up to 21GB/s of memory bandwidth. That's 7600gt level, with much more processing power. If they embed GDDR5 onto the package they could easily provide a high level of performance. The performance level of fusion should be about on par with cards that are typically equipped with 50-60GB/s of memory bandwidth. A single 64-bit high-speed GDDR5 chip could accomplish that; if they can split the memory up (high-speed framebuffer, then system memory for textures), that would work well.
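The numbers in that paragraph roughly check out (the GDDR5 data rate used here is an assumption):

```python
def gbs(bus_bits, rate_mts):
    # Peak bandwidth: bus width in bytes times transfer rate.
    return bus_bits / 8 * rate_mts / 1000

ddr3_dual = gbs(128, 1333)  # dual-channel DDR3-1333 system memory
gddr5_64 = gbs(64, 5000)    # a single 64-bit GDDR5 device at 5 GT/s
print(round(ddr3_dual, 1))             # 21.3 -- the quoted "21GB/s"
print(round(gddr5_64, 1))              # 40.0
print(round(ddr3_dual + gddr5_64, 1))  # 61.3 -- around the 50-60GB/s class
```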

My guess is:
Most fusion setups will simply use system memory. The performance of fusion will be limited, but one of the most important parts of dx11's tessellation (and procedural content in general) is that it greatly reduces memory bandwidth requirements. Fusion's low memory bandwidth but comparatively high processing power will make it useful for both OpenCL and DX11, while not performing as well at dx9 games.

'High end' fusion setups will use high speed DDR3 (21GB/s memory bandwidth) and perhaps have an extra hypertransport link dedicated to slow speed GDDR. Combined memory bandwidth will be 40GB/s to 50GB/s, rather than actually using a GDDR5 chip that can obtain that bandwidth on its own.

Crossfire-X will make a return, and ATI will introduce dynamic load balancing. ATI's choice of equipping all its cards with the same tessellation unit will pay off, anyone who buys an AMD fusion system will double tessellation performance. The IGP will be too slow to be utilized beyond that, unless using an entry level gaming card (maybe 5600 level performance), but it will help AMD's systems compete with nvidia's lower end cards in dx11 performance.

Although I doubt it will be used, tile based deferred rendering would also solve fusion's bandwidth problem. AMD did use it in its mobile parts and xbox 360, and Intel's IGPs use it. It would also fit well with the compute power of llano, but most likely weak fillrate and memory bandwidth.

AMD's big push of fusion will hurt their margins somewhat (~400 sps is about 4x too many for a 2011 IGP, and if it hits 1Ghz+ clock speeds it's going to beat last gen's mid range cards), but they'll try to grab the low end laptop gaming market, the desktop gaming market, and the media center market. Fusion is their attempt to differentiate themselves from Intel, "our processors may not be as fast, but we offer a better and more capable overall experience."


Edit: I'd like to add that phone SOCs are closing in on PC igps. The top end snapdragon and powervr cores (along with nvidia's tegra 2) may not be in any shipping products, but they offer performance well beyond the igps of yesteryear (think geforce 6150). I'm not sure about Intel's latest IGP, but there are cell phone gpus faster than all their previous options. CPUs are also reaching the point of good enough, so having much better cpu and gpu performance may be necessary to keep people from migrating to lower cost alternatives. (tablets, smartbooks, smartphones)
 

GaiaHunter

Diamond Member
Jul 13, 2008
All you've said so far is "I have faith that AMD has solved it".

I've no faith in either outcome - I just quoted tomshardware:

Information from AMD suggests that Llano's integrated graphics core may perform on par with the discrete Radeon HD 5570.

From there you simply said that it is impossible because the current STARS memory controller won't allow enough bandwidth.

I simply pointed out that there is documentation that the memory controller will be different.

That doesn't convince an atheist such as myself, obviously.
So, if you can come up with the cold hard facts on how AMD is going to do this, we can talk...

Basically you have two arguments. The bandwidth one and that AMD are liars.

I already told you that if AMD can't solve the problem then their product is illogical.

Compared to the Barcelona situation, where they "misled", to use a softer term, in this case they have nothing to gain and everything to lose by misinformation.

else it's just your word against mine. And then the discussion ends here. You've dragged it out long enough with your pathetic fanboy wishful thinking. And now you go for personal attacks? Get lost.

It is AMD's word and belief against your own belief.

Sincerely, a quad-core with 5570 performance isn't a very exciting product for my uses.

I was just using logic and assuming AMD people aren't complete morons. If that is pathetic fanboy wishful thinking, then I'm one.


But I'll sweeten the deal... $5 says that Llano will NOT deliver the same performance as a discrete Radeon 5570.

So this is a matter of belief for you? A $5 belief, to be more precise.

Or is it a matter of logic?

Your argument of the bandwidth is quite valid. If the GPU is bandwidth starved there will be no miracles.

Your argument of "Liars! AMD is all liars! Barcelona! Lies! Liars!" fits quite well in the realm of pathetic fanboys.

Once the specs of Llano's GPU are disclosed, I'll tell you if I want the $5.
 

psoomah

Senior member
May 13, 2010
We'll have to wait and see. AMD has not given me ANY indication that they solved the problems AT ALL. They just say "yea Fusion is going to be great", but they did the same with Barcelona, and that ended up AMD's biggest failure in history.
With Barcelona I also said the same thing long before it was launched: AMD has not given me ANY indication that they're going to get the architecture right for a native quadcore. And I was right. Barcelona turned out to be little more than four Athlon64 cores glued together with some poorly performing L3 cache (not even taking the TLB bug and its slow workaround into account).
It's going to be the same again this time around.... Llano is not going to be anywhere near as fantastic as AMD wants you to believe. Where is the magic technology that's going to make it a success? I haven't seen or heard a thing about it... just like with Barcelona. It isn't there. What they say it's going to do, and what their technology is actually capable of, are not the same thing.

Your reasoning contains a fundamental error, as it is based on existing sockets and platforms.

But Fusion will be a completely new socket and completely new platform. AMD has a blank slate on which to design socket and platform architecture able to achieve their Fusion goals.

You appear to be stuck in a repeating loop based on a few fixed ideas.
 

GaiaHunter

Diamond Member
Jul 13, 2008
Crossfire-X will make a return, and ATI will introduce dynamic load balancing. ATI's choice of equipping all its cards with the same tessellation unit will pay off, anyone who buys an AMD fusion system will double tessellation performance. The IGP will be too slow to be utilized beyond that, unless using an entry level gaming card (maybe 5600 level performance), but it will help AMD's systems compete with nvidia's lower end cards in dx11 performance.

I guess you mean hybrid crossfire by crossfire-x? Yeah I agree with you.

I think you might be too optimistic with the IGP performance. I think I've seen some IGP benches recently because of the 800 chipset series. I'll try to dig them out.

Here 890GX,

http://www.anandtech.com/show/2952/2

and 780G (almost the same thing).

http://www.anandtech.com/show/2630/4

Still think it is around 6600GT-7600GT.

Actually, based on the die shots the die isn't that large. Taking the 32nm die shrink into account, a Llano should be about the same size as Propus (an estimate of 169mm^2 here: http://www.anandtech.com/show/2933).

True.

But no GPU at all, or simply a smaller GPU that wouldn't be limited by bandwidth, would be even smaller.
 

Martimus

Diamond Member
Apr 24, 2007
True.

But no GPU at all, or simply a smaller GPU that wouldn't be limited by bandwidth, would be even smaller.

Llano has been shown in their roadmap to be up to 4 cores.
[Image: AMD roadmap slide]


So they may have a separate 2-core version with half the SPs as well. The problem with this is that less than 50% of the GPU portion of the die contains SPs.
[Image: Llano die shot]

Cutting that in half would only save you 25% of the die area of the GPU, or about 20mm^2. They may be able to cut other portions of the GPU as well though.
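As a sketch of that arithmetic (the GPU area here is an assumption inferred from the quoted ~20mm^2 figure, not an official number):

```python
gpu_area_mm2 = 80.0  # assumed area of the GPU portion of the Llano die
sp_fraction = 0.5    # "less than 50% of the GPU portion contains SPs"

# Halving the SP count only removes half of the SP region,
# not half of the whole GPU:
saved_mm2 = gpu_area_mm2 * sp_fraction * 0.5
print(saved_mm2)                 # 20.0
print(saved_mm2 / gpu_area_mm2)  # 0.25 -> 25% of the GPU die area
```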

I don't doubt that they may do something along those lines to compete in the Netbook arena, but it would be more for power consumption reasons than to get more sellable chips on the wafer.
 

Fox5

Diamond Member
Jan 31, 2005
I guess you mean hybrid crossfire by crossfire-x? Yeah I agree with you.

I think you might be too optimistic with the IGP performance. I think I've seen some IGP benches recently because of the 800 chipset series. I'll try to dig them out.

Here 890GX.

http://www.anandtech.com/show/2952/2

Still think it is around 6600GT-7600GT.



True.

But no GPU at all, or simply a smaller GPU that wouldn't be limited by bandwidth, would be even smaller.

The 890gx is just a rehashed 790gx though. Nvidia's latest IGP is still faster.
You might be overestimating a 6600gt.
Edit: Actually probably not.
http://www.tomshardware.com/charts/gaming-graphics-charts-2008-q1/Overall-all-Games-fps,572.html
The 790GX is HD2400XT level; it may have reached 6600gt level, but the 7600gt looks solidly out of range.

Still, Llano has 10x the shader processors of the current 890gx, and likely a much higher clock speed; that implies at least 4670/4770 performance level IF it's not killed by bandwidth. And the 4670 is already a $60 card, and probably a $40 card by the time Llano actually launches, with the 4770 dropping down to the $60 range. The 3450 (equivalent to current IGPs) is a $30 graphics card. Llano competing with the $40-$60 cards only puts it a grade above typical IGPs of their time (excluding the initial nforce chips, which were quite fast for their time); there's just a big performance jump from the $30 cards to the $40-$60 cards.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Your reasoning contains a fundamental error, as it is based on existing sockets and platforms.

But Fusion will be a completely new socket and completely new platform. AMD has a blank slate on which to design socket and platform architecture able to achieve their Fusion goals.

You appear to be stuck in a repeating loop based on a few fixed ideas.

No, my reasoning is based on the fact that AMD hasn't released any information about any NEW technology, all their information indicates that they use EXISTING technology.
Let's face it, unless they completely revamp the memory controller AND motherboard/memory module technology, they are NOT going to solve the memory bottleneck.
The fastest memory controller with DDR3 on a motherboard is Intel's triple channel controller, which barely reaches 28 GB/s, and that is the CPU alone.
AMD does NOT use a triple channel controller, but a dual channel controller, and they need to share this between the CPU and the GPU.
Clearly this means that they cannot get anywhere near the 28 GB/s mark this way, no matter WHAT they try to do with the chip itself. Dual channel and existing DDR3 technology pretty much seal Llano's fate beforehand.
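For reference, theoretical DDR3 bandwidth is just channels times bus width times transfer rate, which is where the figures in this argument come from (a sketch; real sustained bandwidth is lower, as the post notes):

```python
# Theoretical peak DDR3 bandwidth: channels * bus width (bytes) * MT/s.
def ddr3_bw_gb_s(channels, mt_per_s, bus_bits=64):
    return channels * (bus_bits // 8) * mt_per_s / 1000.0

print(ddr3_bw_gb_s(2, 1333))  # dual-channel DDR3-1333  -> 21.328 GB/s
print(ddr3_bw_gb_s(3, 1333))  # triple-channel DDR3-1333 -> 31.992 GB/s
```

So a dual-channel controller tops out around 21 GB/s theoretical before any sharing between CPU and GPU is accounted for.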
What is so hard to understand about this that I have to repeat myself post after post?
 

Scali

Banned
Dec 3, 2004
2,495
0
0
I simply pointed out that there is documentation that the memory controller will be different.

As I said, that doesn't solve the problem... but you ignored that.

I already told you that if AMD can't solve the problem then their product is illogical.

Yes, and you realize that this is a real possibility, don't you?
Or are you trying to say that it is impossible for AMD to make a wrong/illogical design choice, and their products have to be successful by default?

I was just using logic and assuming AMD people aren't complete morons. If that is pathetic fanboy wishful thinking, then I'm one.

It is wishful thinking at the very least (you 'assume'... that's the word you used... that the AMD people can perform what would technically be a miracle). You will agree with me on that, won't you?
'Complete morons' isn't quite accurate... not at all actually.
If they aren't 'complete morons', but just decent engineers, then they can do what Intel did: copy-paste a GPU onto a CPU. But they'd have to be geniuses to get more bandwidth out of a DDR3 memory system than what is theoretically possible. And you just 'assume' that...

Your argument of the bandwidth is quite valid. If the GPU is bandwidth starved there will be no miracles.

Don't forget the additional arguments that memory on a motherboard is less efficient because of signaling issues from the socket/module configuration... and the fact that breaks in burst transfers (inevitable on a shared CPU+GPU architecture) greatly reduce effective bandwidth.

Your argument of "Liars! AMD is all liars! Barcelona! Lies! Liars" fits quite well the realm of pathetic fanboys.

Truth hurts, doesn't it? :)
It's a simple fact that AMD's predictions about Barcelona's performance were not even remotely realistic. In other words, they were lies.
So that is an indication to take what AMD (or any company for that matter) says with a grain of salt.
 
Last edited:

markdvdman

Junior Member
Apr 30, 2010
16
0
0
Scali WHAT is this agenda?

You talk about fanboy rubbish. This is a WORLDWIDE market where consumers matter. YOU are the Intel fanboy and it is PATHETIC.

Grow up.

The Fusion technology is indeed, as I agree with you, difficult to see working. It is not going to work well unless AMD actually does something unlike anything we have yet seen from their track record. Yet AMD, supposedly so bad, currently has the fastest graphics card in the world, so they must be doing something right!!!!

I am saying this nonsensical argumentative claptrap is all about defending one platform over another. You and others have NOT given a balanced argument.

Fusion is MEANT to transform the market - yet it may not. I am not totally convinced until I see it. I have an intel laptop. My PC is a phenom 965 overclocked with a 5850 not yet overclocked on a 1920x1200 display. It is FAST.

The OP's suggestion of NVidia going away is not happening, though. They have been poor of late, as ATI are far ahead, simply because the products are and have been here for a while.

Fermi I hope will get ahead of them for the good of the consumer!

HAIL technology be it intel/amd ati/nvidia or even Arm bless them. This pathetic fanboy stuff is kidology.

Wake up and stop utilising bias.

Let us hear FACTS.

I am not just accusing you man there are many others.

I prefer ATI currently but when I need an upgrade I look at what is there I care not one iota for fanboy nonsense!
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
That's not the point, is it?
AMD hailed native quadcore as vastly superior, having great advantages for performance... Nothing of that ever materialized.
Why would this case be any different?

AMD said that a monolithic dual core would hammer Intel's glued-together Pentium D, and it did. So that's one lie and one truth according to you; it stands to reason that Llano will be awesome, yes?
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,697
397
126
As I said, that doesn't solve the problem... but you ignored that.
Or at least we don't see how it solves the problem.


Yes, and you realize that this is a real possibility, don't you?
Or are you trying to say that it is impossible for AMD to make a wrong/illogical design choice, and their products have to be successful by default?

It is possible but unlikely in this case.

Why?

Because there is really no competition pressing them to have an IGP with 5570 performance level.

There is no Core 2, there is no 8800GTX out there. There is Clarkdale, but they can't really respond to the x86 cores and any silly GPU they glue onto the CPU can give similar or better performance.

It is wishful thinking at the very least. You will agree with me on that, won't you?

To expect that AMD will realize their 400-480 SP GPU would be bandwidth limited is wishful thinking?

Don't forget the additional arguments that memory on a motherboard is less efficient because of signaling issues from the socket/module configuration... and the fact that breaks in burst transfers (inevitable on a shared CPU+GPU architecture) greatly reduce effective bandwidth.

That is what happens if they make no changes. If AMD wants their APUs to have a future, the memory controller will be very important. In the APU marketing slides, that was a point that was clearly emphasized.

Truth hurts, doesn't it? :)
It's a simple fact that AMD's predictions about Barcelona's performance were not even remotely realistic. In other words, they were lies.
So that is an indication to take what AMD (or any company for that matter) says with a grain of salt.

Probably to the guys running AMD at the time it did.

Are you telling me that if Llano's GPU reaches 5570 performance level you'll be in pain?

I can guarantee you that I won't be in pain if Llano is a POS. I'll be a bit disappointed, like I was when the FX5800 was a flop, Phenom turned out to be a turd, the 2900XT was bad, and the GTX480 failed to make GPU prices go down. I enjoy competition as a consumer and wish that Intel, AMD and NVIDIA would provide great products at the lowest possible prices.


But then again, people said it was impossible for Conroe to reach as much performance as the initial information claimed. And it reached that performance.

People claimed it was impossible for AMD to fit 800 SPs in RV770 and achieve performance comparable to GT200, and it did.

So that argument is a bit silly.

Again, everyone looking at Llano's GPU specs will immediately see the bandwidth problem. It is so obvious that it is almost asinine to think AMD engineers wouldn't catch it.

Also, at this time, according to the rumour, Llano is already sampling, and this time AMD is comparing the GPU performance with their own products.
 
Last edited:

Scali

Banned
Dec 3, 2004
2,495
0
0
AMD said that a monolithic dual core would hammer Intel's glued-together Pentium D, and it did. So that's one lie and one truth according to you; it stands to reason that Llano will be awesome, yes?

If you can tell me the secret to how they will overcome the bandwidth problems with a DDR3 motherboard, maybe...
 