Mobile Intel Ivy Bridge vs AMD Trinity?


LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
Yes, thanks to a voluntarily crippled 6620...

From the pages you quote:

If we're to believe that, it confirms what I said earlier: the current CPUs are so crippled in single-threaded performance that a boost there also means a boost in IGP performance. This is also why I'm saying AMD will need to balance CPU and GPU improvements for Trinity; if they focus only on the IGP, the CPU bottleneck will keep them from seeing much of an overall gain.

Trinity, and by extension Piledriver, should improve IPC somewhat, finally bringing AMD to the IPC they had 3 years ago with Stars/K10.5.

Given the existing bottleneck AMD has to cope with, I'd expect the overall improvement to be more balanced than what Intel did with Ivy Bridge, where they improved the CPU very little and the IGP a lot; the combined gain, though, should end up similar. A 15-20% CPU improvement and a 15-20% IGP improvement sounds like what AMD should be aiming at if their APU engineers are smart. This contrasts with Intel, who already had an incredibly powerful architecture and could therefore get away with small (5-10%) CPU improvements but needed bigger IGP improvements (35-40%).

BTW, manufacturers won't use DDR3-1600 modules, so don't expect an improvement there on their part. AMD will need big improvements in CPU memory bandwidth and in cache latency and bandwidth if they also want to take that bottleneck out of the equation.
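
To put the bottleneck point in toy-model terms, here's a minimal sketch; the frame-rate numbers are invented purely for illustration:

```python
# Toy bottleneck model: achieved frame rate is capped by whichever
# of the CPU and GPU is the slower component.
def fps(cpu_limit, gpu_limit):
    """Frame rate achievable given per-component fps ceilings."""
    return min(cpu_limit, gpu_limit)

base     = fps(cpu_limit=30, gpu_limit=35)              # CPU-bound baseline
gpu_only = fps(cpu_limit=30, gpu_limit=35 * 1.4)        # +40% IGP, same CPU
balanced = fps(cpu_limit=30 * 1.2, gpu_limit=35 * 1.2)  # +20% on each

print(base, gpu_only, balanced)  # 30 30 36.0: the GPU-only boost is wasted
```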
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
BTW, manufacturers won't use DDR3-1600 modules, so don't expect an improvement there on their part. AMD will need big improvements in CPU memory bandwidth and in cache latency and bandwidth if they also want to take that bottleneck out of the equation.

I think this has more to do with saturation than it does with memory speed. AMD has a lot of work to do on the IMC and can't just rely on pushing up the supported DDR frequency. SB does really well in memory-intensive tasks despite the 1333MHz limit (excluding XMP), so it's clearly attainable. This is also the reason DDR4 isn't really seen as a massive boost in performance; it's all about the IMC. I think Trinity will keep Llano's 128-bit bus width across the APU. Increasing bus width might become a necessity later on, but that will come at a higher cost as well. Considering there's still plenty of room for improvement on the IMC side, I highly doubt the 128-bit bus will have saturation issues, and the memory improvements will not stem directly from increased DDR frequency.

Put more succinctly: AMD can get around that issue with IMC improvements.
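
For reference, the raw peak-bandwidth arithmetic behind the 128-bit bus discussion; these are theoretical ceilings only, and real throughput depends on the IMC, which is exactly the point:

```python
# Theoretical peak bandwidth of a DDR3 interface:
# effective transfer rate (MT/s) * bus width in bytes.
def peak_bandwidth_gb_s(transfers_mt_s, bus_bits=128):
    return transfers_mt_s * (bus_bits // 8) / 1000.0

print(peak_bandwidth_gb_s(1333))  # 128-bit DDR3-1333: ~21.3 GB/s
print(peak_bandwidth_gb_s(1600))  # 128-bit DDR3-1600: ~25.6 GB/s
```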
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
I think this has more to do with saturation than it does with memory speed. AMD has a lot of work to do on the IMC and can't just rely on pushing up the supported DDR frequency. SB does really well in memory-intensive tasks despite the 1333MHz limit (excluding XMP), so it's clearly attainable. This is also the reason DDR4 isn't really seen as a massive boost in performance; it's all about the IMC. I think Trinity will keep Llano's 128-bit bus width across the APU. Increasing bus width might become a necessity later on, but that will come at a higher cost as well. Considering there's still plenty of room for improvement on the IMC side, I highly doubt the 128-bit bus will have saturation issues, and the memory improvements will not stem directly from increased DDR frequency.

Put more succinctly: AMD can get around that issue with IMC improvements.

If they're smart, of course. And from what they've shown us over the past two years, they're nowhere near as intelligent as Intel's folks. Trinity will still be a huge percentage away from Ivy Bridge when it comes to CPU performance. AMD can make improvements in IGP performance, but even if you're one of the few who game on laptops while on the go, Ivy Bridge can play fairly graphically demanding titles like DiRT 3 at Medium settings and native 1366x768 resolution. HD 4000 is just a hair better than the HD 6620G. But again, IGP performance has been the only thing AMD has been banking on, and not that many people care about it. Intel also has better brand recognition, so it's not a pretty situation for AMD if they want these products to sell.
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
By "nowhere near as intelligent as Intel's folks," do you mean as far as CPU architecture goes? If that's the argument, it's pretty weak considering the "nowhere near" part. I don't need to remind you that AMD has only a fraction of the funds Intel can invest in R&D, so you're overreaching with that statement. If you're claiming AMD doesn't have the engineers and money to beat Intel on architecture, then I absolutely agree.

You forget AMD makes fantastic GPUs, and IMO the best GPUs on the market -- driver issues aside. They understand bus width and saturation, and they're still light years ahead of anything Intel has at the moment, HD 4000 included (which, I don't know if you've been following, but Anand's benches aren't accurate, as they compared a gimped Llano to an ungimped, much more expensive Intel build). But if I were to say something like...

Intel's engineers are nowhere near as intelligent as AMD's folks with respect to GPUs -- would that be fair? No. Quit getting uppity. Remarks like that only make you look less intelligent.

You're neglecting price here. The mobile Llanos can be had at a fraction of the price and perform notably well, and even better in GPU scenarios, compared to the Intel IB HD 4000 chips. Intel is well aware they've got a lot of catching up to do in GPU performance, and that's why Haswell is so GPU-centered -- this goes directly against your argument that GPU performance doesn't matter, btw. I've mentioned this before but it bears repeating...

Intel's biggest improvements from SB > IB > Haswell will all be on the GPU side, not CPU performance.

I understand your argument, but if you consider that we've had "good enough" performance since the C2Quads, then the GPU and other coprocessors are the only other way to go. It's something both AMD and Intel have realized but you're purposely glossing over.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
By "nowhere near as intelligent as Intel's folks," do you mean as far as CPU architecture goes? If that's the argument, it's pretty weak considering the "nowhere near" part. I don't need to remind you that AMD has only a fraction of the funds Intel can invest in R&D, so you're overreaching with that statement. If you're claiming AMD doesn't have the engineers and money to beat Intel on architecture, then I absolutely agree.

You forget AMD makes fantastic GPUs, and IMO the best GPUs on the market -- driver issues aside. They understand bus width and saturation, and they're still light years ahead of anything Intel has at the moment, HD 4000 included (which, I don't know if you've been following, but Anand's benches aren't accurate, as they compared a gimped Llano to an ungimped, much more expensive Intel build). But if I were to say something like...

Intel's engineers are nowhere near as intelligent as AMD's folks with respect to GPUs -- would that be fair? No. Quit getting uppity. Remarks like that only make you look less intelligent.

You're neglecting price here. The mobile Llanos can be had at a fraction of the price and perform notably well, and even better in GPU scenarios, compared to the Intel IB HD 4000 chips. Intel is well aware they've got a lot of catching up to do in GPU performance, and that's why Haswell is so GPU-centered -- this goes directly against your argument that GPU performance doesn't matter, btw. I've mentioned this before but it bears repeating...

Intel's biggest improvements from SB > IB > Haswell will all be on the GPU side, not CPU performance.

I understand your argument, but if you consider that we've had "good enough" performance since the C2Quads, then the GPU and other coprocessors are the only other way to go. It's something both AMD and Intel have realized but you're purposely glossing over.

Yes. It doesn't matter that AMD has a lot less R&D money; the fact is their current products are, overall, sub-par compared to Intel's. Do you think the average consumer cares that AMD doesn't have 1/5th the money Intel has? I read an article some months ago claiming that AMD fired the small group of senior, experienced CPU engineers who made gems such as the Athlon XP and Athlon 64 and replaced them with a bigger group of less experienced, lower-paid engineers. That could have a lot to do with it, if it's true. The thing is, even with limited resources their graphics cards are excellent and their GPU engineers are commendable, so I see the R&D argument as just an excuse. It's probably that their CPU engineers are idiots compared to the people Intel hired, because AMD's GPU engineers seem to have few, if any, problems delivering consistently good or great products.

Also, mobile Llano isn't as cheap as you make it seem. You can get a laptop with a Core i5 and a GT 540M for $600-650. For $100 less you can get a laptop with an A8 APU, but that's a huge step down in both CPU and GPU performance. In terms of bang-for-buck, then, it's not that clear cut.

Intel is only focusing on IGP performance now because they already have very fast CPUs and it's much easier for them to develop faster IGPs than faster CPUs. In other words, when it comes to CPU performance Intel is hitting diminishing returns: what they've already developed is very fast, but they can still improve the IGP easily--so they do that.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Intel's engineers are nowhere near as intelligent as AMD's folks with respect to GPUs -- would that be fair? No. Quit getting uppity. Remarks like that only make you look less intelligent.

I wouldn't say either side's engineers are smarter than the other. They both hire from more or less the same pool of similarly educated people, and the whole industry shares its insights and knowledge (except for the few secrets they keep). The difference in education between engineers is probably trivial compared to how effectively management can get them working towards a goal.

Also, even if the hardware and drivers are competent, Intel graphics still carry the stigma from back when they were everyone's laughing stock. That makes game developers reluctant to work with Intel.

Intel's biggest improvements from SB > IB > Haswell will all be on the GPU side, not CPU performance.
For Haswell there will be a big improvement in power usage too: http://forums.anandtech.com/showthread.php?t=2241480

Also, improvements to the CPU and GPU can't be compared equally. 10% on the CPU might be just as big as 30-40% on the GPU. The former speeds up everything, while the latter only helps 3D-graphics-like programs (extremely parallel, memory-bandwidth-sensitive, non-dependent instructions). At the same percentage, special-purpose improvements are not as impressive as gains in everything.
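
A quick sketch of that weighting; the assumption that a quarter of total time is GPU-bound is made up purely for illustration:

```python
# A speedup that applies only to a fraction f of total work
# translates into a much smaller overall gain (generalized Amdahl).
def overall_speedup(f, local_speedup):
    return 1.0 / ((1.0 - f) + f / local_speedup)

print(round(overall_speedup(1.00, 1.10), 3))  # +10% on everything -> 1.10x
print(round(overall_speedup(0.25, 1.40), 3))  # +40% on 25% of work -> ~1.077x
```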
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
That's true, but the biggest gains will be had on the GPU side. We've been getting closer to that proverbial wall of CPU performance, and as far as Intel is concerned they can take their sweet ass time considering the staggering IPC lead they have over their only x86 competitor. Intel's trying to rid themselves of that stigma, which is well deserved if we're honest, by concentrating on their GPU side. I've even heard rumors of a Larrabee 2.0, though if the rumors are true I'd highly suggest Intel use another name :p

The Bulldozer project was a Ruiz agenda, so you're right in the sense that management decides what direction the company takes. It's odd that so many years after he tanked the company they're still feeling the aftershocks. Intel and AMD have been swapping engineers for years, so the argument that one company has better talent is a difficult one to make.

He's right about the two divisions within AMD. Their CPU division was generally the only one making any money, and it was their GPUs that barely floated above water. During that same time frame, while Intel was spanking them on x86, AMD was stealing nVidia's thunder. So while their CPUs have been more profitable, it was their GPU division making the bigger strides. I guess the "Fusion" idea is looking like a gamble that's going to pay off for them, but it's going to require shifting their weight in such a way that the boat doesn't tip over to one side.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
We've been getting closer to that proverbial wall of CPU performance, and as far as Intel is concerned they can take their sweet ass time considering the staggering IPC lead they have over their only x86 competitor.

I wouldn't leave GPUs out of that, either. People increasingly feel that lower-end CPUs and iGPUs are enough, so we're seeing new GPUs from AMD and Nvidia (7970 and GTX 680) focus on lower power and smaller performance gains than ever. For the same performance gain, it's taking longer for GPUs too.

It's odd that so many years after he tanked the company they're still feeling the aftershocks.
Without delays, CPUs take 4-5 years to go from scratch to production.
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
Throw in the delays and you've got your underwhelming 2011 Bulldozer :p

I have a feeling that CPU and GPU performance will both require a bump as the transition towards higher-pixel-count displays takes effect. Driving those displays (like 4K), particularly where gaming is concerned, is going to need both CPU and GPU performance, with the weight shifting towards the GPU.

I'm in the same boat as you here. I think it's going to be perf-per-watt, lower power consumption, and heftier GPU increases annually, with a meager increase in CPU performance and a majority of the promised gains attributed to new instruction sets that won't pay off until years later.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
That's true, but the biggest gains will be had on the GPU side. We've been getting closer to that proverbial wall of CPU performance, and as far as Intel is concerned they can take their sweet ass time considering the staggering IPC lead they have over their only x86 competitor. Intel's trying to rid themselves of that stigma, which is well deserved if we're honest, by concentrating on their GPU side. I've even heard rumors of a Larrabee 2.0, though if the rumors are true I'd highly suggest Intel use another name :p

The Bulldozer project was a Ruiz agenda, so you're right in the sense that management decides what direction the company takes. It's odd that so many years after he tanked the company they're still feeling the aftershocks.

The biggest gains right now will come from the GPU, that's true, but it's simply because Intel has largely perfected what's possible with CPU architectures ever since Nehalem. There won't be any Conroe moments from here on until we get a breakthrough in CPU architecture, perhaps quantum computing or something else. That doesn't change my argument that the market cares more about CPU than GPU performance overall, however. But given that Intel has largely perfected CPU architecture with the technology currently available, the only other way they could improve CPU performance is by adding moar corez. That only helps in multi-threaded workloads, lowers efficiency, raises power consumption, runs into ever bigger diminishing returns, and lowers yields. This, in turn, translates into higher manufacturing costs because of the bigger die needed to accommodate the additional CPU cores. They don't want to go through that since they're focusing on power efficiency first and foremost. That's why they're so aggressive about new process nodes and ramping up production as fast as they can. Intel also wants to compete very aggressively with ARM for the smartphone and tablet market, so Atom is in their best interest.

Basically, given what's possible with current technology, it makes more sense to improve GPU performance, because Intel won't be able to get much more CPU performance for now; they've already implemented most of the tricks in the book.
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
Basically, given what's possible with current technology, it makes more sense to improve GPU performance, because Intel won't be able to get much more CPU performance for now; they've already implemented most of the tricks in the book.

That still doesn't agree with your general sentiment, though. If the market wanted more perf-per-watt and lower power consumption over GPU performance, then SB wouldn't have had the HD 3000 at all, and consequently neither would IB have the HD 4000. These trade die size, TDP, CPU performance, and price for hefty GPU performance gains. The logical conclusion is that the market DOES want better GPU performance along with lower prices and lower power consumption. This is exactly what Intel's been saying and doing. As has AMD, actually. The difference between the two companies at the moment seems to be their respective starting positions -- Intel slowing down on CPU for GPU while AMD sacrifices GPU increases to make up ground in CPU performance.

There are diminishing returns on adding cores, as described by Amdahl's law. Limited by the serial code, the gains are drastically bogged down by how parallel the code the CPUs are tasked with actually is, and they come at the expense of die size and clock speed.

[Image: Amdahl's law -- speedup vs. number of cores]
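
For anyone who wants the law behind the chart, a minimal sketch:

```python
# Amdahl's law: speedup from n cores when a fraction p of the work is
# parallelizable. The serial remainder (1 - p) caps the total gain.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 4, 8, 16, 64):
    print(n, round(amdahl_speedup(0.8, n), 2))
# 2 -> 1.67, 4 -> 2.5, 8 -> 3.33, 16 -> 4.0, 64 -> 4.71 (limit 5x at p = 0.8)
```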


I personally favor Intel's Hyper-Threading over the module-based CMT approach AMD has at the moment. I think CMT is absolutely the wrong way to go and I've made no qualms about stating that...repeatedly =P Even for most server workloads, moar coars is a pipe dream. Extra CPU cores, much like instruction sets, are things users won't benefit from for years down the line.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I don't agree that Intel is defocusing on CPU gains at all. Mobile Ivy Bridge CPUs are a decent 15-20% faster, probably much like what Trinity brings over Llano, and what Core 2 Duo brought over Core Duo.

The only outlier is Sandy Bridge, and only on quad-core chips, because Nehalem quads didn't make sense on mobile at 45nm and stagnated, which made Sandy Bridge look better than usual.


Hell, the dual-core Arrandale beat quad-core Clarksfields in multi-threaded apps!

I think what they're realizing is that increasing stock clocks on a desktop CPU doesn't matter. The AIO desktops use low-TDP CPUs (which Intel is still focusing on, as with mobile), and the rest of the buyers are enthusiasts, a significant number of whom go over the base clock and overclock anyway.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
That still doesn't agree with your general sentiment, though. If the market wanted more perf-per-watt and lower power consumption over GPU performance, then SB wouldn't have had the HD 3000 at all, and consequently neither would IB have the HD 4000. These trade die size, TDP, CPU performance, and price for hefty GPU performance gains. The logical conclusion is that the market DOES want better GPU performance along with lower prices and lower power consumption. This is exactly what Intel's been saying and doing. As has AMD, actually. The difference between the two companies at the moment seems to be their respective starting positions -- Intel slowing down on CPU for GPU while AMD sacrifices GPU increases to make up ground in CPU performance.

There are diminishing returns on adding cores, as described by Amdahl's law. Limited by the serial code, the gains are drastically bogged down by how parallel the code the CPUs are tasked with actually is, and they come at the expense of die size and clock speed.

[Image: Amdahl's law -- speedup vs. number of cores]


I personally favor Intel's Hyper-Threading over the module-based CMT approach AMD has at the moment. I think CMT is absolutely the wrong way to go and I've made no qualms about stating that...repeatedly =P

That doesn't contradict what I said. The choice was either to go for IGP improvements or to add more CPU cores, diminishing returns and efficiency be damned. If Intel had gone for more CPU cores, it would have defeated their effort to make traditional CPUs more like SoCs by integrating onto the die everything that would typically sit in separate ICs on the motherboard, saving manufacturing costs and lowering power consumption. As we've already seen, "MOAR CORES" isn't really the way Intel wants to go for added CPU performance, for the aforementioned reasons. It's not that Intel doesn't want to improve CPU performance; it's that they don't want to do it at the cost of efficiency and higher manufacturing costs. Intel wants better CPU performance through improved IPC and clock speeds, and only if it raises power efficiency at the same time. They're not gonna get that right now given the available technology, so they focus on improving the IGP instead.

Intel isn't really slowing down on CPUs; it's just that they can't do much more to improve IPC, clock speeds, and efficiency all at the same time with currently available technology. They've perfected nearly everything they can with what's available to them. So: focus on the IGP, which is something Intel can improve much more easily without negatively impacting power efficiency.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
I don't agree that Intel is defocusing on CPU gains at all. Mobile Ivy Bridge CPUs are a decent 15-20% faster, probably much like what Trinity brings over Llano, and what Core 2 Duo brought over Core Duo.

The only outlier is Sandy Bridge, and only on quad-core chips, because Nehalem quads didn't make sense on mobile at 45nm and stagnated, which made Sandy Bridge look better than usual.


Hell, the dual-core Arrandale beat quad-core Clarksfields in multi-threaded apps!

I think what they're realizing is that increasing stock clocks on a desktop CPU doesn't matter. The AIO desktops use low-TDP CPUs (which Intel is still focusing on, as with mobile), and the rest of the buyers are enthusiasts, a significant number of whom go over the base clock and overclock anyway.

AIO desktops barely sell. People that want a desktop typically buy... a traditional desktop and monitor.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
AIOs are supposed to be 10% of desktop sales this year, and another 30% is SFF. Full-tower cases are about 5%, and while smaller tower cases still take the majority at 50%, that figure is decreasing while AIOs and SFF are increasing.

Yes, I guess the focus isn't solely on CPU performance anymore. People care about thermals and power usage just as much as performance nowadays, and that obviously puts a cap on how much performance they can squeeze out.

Trade-offs had to be made between power usage, performance, and cost. For the last few decades, performance came not just from Moore's Law but from trading off the other metrics as well.

Imagine Moore's Law gives you 10 >'s to spend on whatever metrics you want. More >'s indicate more focus on that area.

Beginning - ~2005:
-Performance: >>>>>>
-Cost: >>>
-Power usage: >

Now
-Performance: >>>>
-Cost: >>>>
-Power usage: >>

We can add mobile chips to the mix: the mobile 3920XM performs almost like the 3770K, but the sacrifice is cost.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
AIOs are supposed to be 10% of desktop sales this year, and another 30% is SFF. Full-tower cases are about 5%, and while smaller tower cases still take the majority at 50%, that figure is decreasing while AIOs and SFF are increasing.

Yes, I guess the focus isn't solely on CPU performance anymore. People care about thermals and power usage just as much as performance nowadays, and that obviously puts a cap on how much performance they can squeeze out.

Trade-offs had to be made between power usage, performance, and cost. For the last few decades, performance came not just from Moore's Law but from trading off the other metrics as well.

Imagine Moore's Law gives you 10 >'s to spend on whatever metrics you want. More >'s indicate more focus on that area.

Beginning - ~2005:
-Performance: >>>>>>
-Cost: >>>
-Power usage: >

Now
-Performance: >>>>
-Cost: >>>>
-Power usage: >>

We can add mobile chips to the mix: the mobile 3920XM performs almost like the 3770K, but the sacrifice is cost.

Right... I mentioned AIO and then you piled on different products I never mentioned.

SFF is a traditional desktop computer, just in a small form factor, like the name says. Depending on the manufacturer, size, cost, and power supply, it may house either a mobile or desktop CPU and integrated or dedicated graphics. AIO is a whole different ballgame, and definitely not a traditional desktop computer, because of the integrated display and the reduced user accessibility of components.

Yes, the 3920XM delivers the same performance as a 3770K.

Also, I never said the sole focus was performance now. Don't know why you'd use that as an argument.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Right... I mentioned AIO and then you piled on different products I never mentioned.

I don't mean to create conflict, merely to extend my original post. Form factors have no explicit rules about what components can go in them; you can put a desktop 3960X in a "laptop". But generally, smaller form factors lean towards lower-power processors, for example the S and T series. AIOs are even smaller and might even use mobile CPUs, which push power management further than the S/T desktop chips.

Also, I never said the sole focus was performance now. Don't know why you'd use that as an argument.
I guess that was a misunderstanding, sorry about that. :p
 

Arzachel

Senior member
Apr 7, 2011
903
76
91
Intel isn't really slowing down on CPUs; it's just that they can't do much more to improve IPC, clock speeds, and efficiency all at the same time with currently available technology. They've perfected nearly everything they can with what's available to them. So: focus on the IGP, which is something Intel can improve much more easily without negatively impacting power efficiency.

When most people, hell, even most enthusiasts, are content with how the product performs, chances are more performance isn't going to sell that well. Both Intel and AMD are focusing on efficiency not because they can't improve performance; they're catering to the majority of their customers, who are fine with how their older desktops perform and are much more likely to trade them in for laptops.

That said: extensions to the ISA. I don't expect Intel to catch up to AMD's iGPU performance in just a few generations, so I don't care much about improvements in that area; AVX2, FMA3, and transactional memory are why I'm interested in Haswell. If CPU performance is critical for your application, you won't mind recompiling; if it's not... well, no loss. Unless you're Bethesda and are above such silly things as compiler flags, I guess.
 

Abwx

Lifer
Apr 2, 2011
10,940
3,441
136
Yeah, only for the AnandTech review that has IVB only on par with Llano.

Only an avid Intel supporter would say that IB's IGP is on par with Llano's IGP...

I checked the setup...

They took a 35W 1.6/2.0GHz Llano and compared it to a 45W IB with at least 50% higher CPU frequency than the AMD APU, yet this mobile and expensive IB struggles to approach the GPU performance of the slower 4C Llano...

Why didn't Anand use a 45W 2.0/2.7GHz mobile Llano?

The reasons are too obvious..
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Nobody is going to know for certain until they are both released. However based on leaked benchmarks we can make some educated guesses.

1. Early numbers put Trinity's CPU IPC at about the same as Llano's in integer and 20-30% faster in FPU, while IB's IPC is about 5-10% improved over SB's. SB is already 50-100% faster than Llano on the CPU end, so IB will just continue this trend.

2. Early numbers put desktop Trinity's GPU about 50% faster than desktop Llano's GPU, while desktop IB numbers put it around desktop Llano's GPU. So Trinity should be faster on the GPU end. However, on the mobile side the gap will probably be significantly smaller: Intel tends to use the same GPU on its desktop and mobile chips (same number of EUs, though running frequency changes), while AMD reduces shaders and frequency as you move from the desktop A8 APU to the mobile A8, then the A6 and A4 APUs. If this trend continues, we could see a performance gap similar to that between SB and Llano, where desktop Llano is faster by about 50% (not exact numbers), mobile Llano A8 is faster than the mobile i5 by about 40%, and the mobile A4 is faster than the mobile i3 by about 20%. Since you're looking at 17W CPUs, I doubt Trinity's graphics will be that much faster than IB's.

3. Battery life: I'd say IB will be more efficient. Intel just has too much of a manufacturing edge, and at 17W Intel has enough of a CPU performance lead to reduce frequency if needed to increase battery life, while AMD doesn't have that luxury, as even their current 35/45W Llano CPU performance is already on the low side. The 17W IB i7-3667U is supposed to be a 2C/4T @ 2.0/2.9/3.1 GHz (base frequency / dual-core turbo / single-core turbo). That already easily beats the fastest 45W Llano CPU out now; I just don't see AMD matching that while cutting power by 2.5 times.

I'm just going to point out that for AMD a 20% increase in CPU performance is equal to a 10% increase for IB. Also, Llano in a mobile APU isn't going to be much faster than, say, mobile IB. On the desktop, yeah, it will be faster. But mobile is the numbers I want to see.
 
Aug 11, 2008
10,451
642
126
Only an avid Intel supporter would say that IB's IGP is on par with Llano's IGP...

I checked the setup...

They took a 35W 1.6/2.0GHz Llano and compared it to a 45W IB with at least 50% higher CPU frequency than the AMD APU, yet this mobile and expensive IB struggles to approach the GPU performance of the slower 4C Llano...

Why didn't Anand use a 45W 2.0/2.7GHz mobile Llano?

The reasons are too obvious..

I thought he was actually being rather generous to AMD. He could have compared it to the A6-3400, which is quite a bit slower than the A8 but much more common.
 

Abwx

Lifer
Apr 2, 2011
10,940
3,441
136
I thought he was actually being rather generous to AMD. He could have compared it to the A6-3400, which is quite a bit slower than the A8 but much more common.

Surely it would have been even more "balanced" to compare it to a Bobcat APU, preferably the 1.0GHz C-50 variant....
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
They took a 35W 1.6/2.0GHz Llano and compared it to a 45W IB with at least 50% higher CPU frequency than the AMD APU, yet this mobile and expensive IB struggles to approach the GPU performance of the slower 4C Llano...

And I've heard multiple AMD fans say the CPU is not important to graphics, only to turn around and say it is when the argument merits it. There's no difference in graphics between the 35W and 45W Llano chips, only in the CPU. For Sandy Bridge/Ivy Bridge, using a lower-end CPU is sometimes favorable because of the TDP sharing.
 

Arzachel

Senior member
Apr 7, 2011
903
76
91
And I've heard multiple AMD fans say the CPU is not important to graphics, only to turn around and say it is when the argument merits it. There's no difference in graphics between the 35W and 45W Llano chips, only in the CPU. For Sandy Bridge/Ivy Bridge, using a lower-end CPU is sometimes favorable because of the TDP sharing.

Total performance is what matters; if you can get more gains by beefing up the GPU more than the CPU, then that's what I want. Why did they compare Ivy to the second-lowest A8 SKU? A 3550MX would be a much more valid comparison, and it fits in the same TDP, even ignoring prices.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Why did they compare Ivy to the second-lowest A8 SKU? A 3550MX would be a much more valid comparison, and it fits in the same TDP, even ignoring prices.

It would be ideal to compare the highest versions, but that's not always how it goes. Reviewers generally review what they have on hand. With Clarksfield, Intel sent the top 920XM to test. With Sandy Bridge it was the second-highest, the 2820QM, and with Ivy Bridge it's the third one down, the 3720QM. For AMD, they decided to send the A8-3500M.

Now I wouldn't be surprised if Anand gets a higher SKU and tests it.

Here's the reason why Llano won't be affected by CPU as much as SNB/IVB chips:
http://www.hardware.fr/articles/815-10/intel-hd-graphics-cpu-vs-igp.html
http://www.hardware.fr/articles/863-7/hd-graphics-cpu-vs-igp-quicksync.html

In the first review, look at the Core i5 661. Changing the number of cores/threads doesn't change gaming performance at all. That's because the graphics core clock is fixed and Turbo Mode isn't allowed to exceed TDP.

Look at Sandy Bridge and Ivy Bridge: more active cores/threads reduce gaming performance when CPU- and GPU-intensive applications run together. That's for two reasons: the GPU clock has Turbo, and Turbo Mode allows exceeding the TDP limit for a brief period.

Having more cores active while running the GPU kills the thermal headroom really fast, and the GPU's Turbo turns off just as fast. You can see games like Civ 5 perform really poorly; that's because it's both CPU- and GPU-demanding.

In most cases, the solution is rather easy: just change the algorithm to favor the GPU more. Since the CPU's power usage is dominant, only a minor sacrifice in CPU power gives a lot of headroom for GPU performance, which you can see by comparing the two links. The second review's HD 3000 performs far better than the first one's, yet despite having the 2700K it's slightly slower on the CPU side than the 2600K.

Llano is like Clarkdale; maybe the variation is even less than Clarkdale's. The Turbo Mode on Llano chips barely works, and there's no GPU Turbo at all.
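
A minimal sketch of the TDP-sharing behavior described above; the wattages are invented for illustration, and real power management is far more sophisticated:

```python
# Toy model of a shared CPU+GPU power budget (TDP sharing): whatever
# the CPU draws is subtracted from the headroom left for the GPU.
TDP = 35.0  # shared package budget in watts (hypothetical)

def gpu_headroom(cpu_watts, tdp=TDP):
    """Power left for the GPU once the CPU has taken its share."""
    return max(0.0, tdp - cpu_watts)

# Favoring the CPU starves the GPU; a small CPU sacrifice nearly
# doubles the GPU budget, which is the "change the algorithm" fix.
print(gpu_headroom(cpu_watts=28.0))  # 7.0 W left for the GPU
print(gpu_headroom(cpu_watts=22.0))  # 13.0 W left for the GPU
```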