
Mobile Intel Ivy Bridge vs AMD Trinity?

Price and battery life matter more than anything else, imo. Furthermore, I'm gaming on my laptop far more often than I am encoding video so a heftier GPU makes more sense. This is also true in the handheld market. Laptops that are used as desktop replacements would be a different story, but they too would have discrete GPUs and a hybrid crossfire implementation would make more sense to me still.

Intel will win the CPU performance race, but quite frankly I don't care about that. What interests me in IVB lappies is the battery life advantage, but it's going to come at a higher cost. The deciding factor will be just how much higher each of those two is compared to the Trinity alternatives.

I love my SB laptop. The battery life is fantastic. I can get somewhere around 6 hours on a Toshiba Satellite L775, and that's with a 1600x900 17.3" screen. The issue I have with it is that I occasionally run into hiccups on the GPU side, particularly with GPU-accelerated apps or gaming, which is still dreadful. IVB improves upon those, but only slightly, and still not enough to warrant an upgrade unless the price is great. Trinity, on the other hand, should be cheaper and should meet both of those criteria. The Llano dv6 laptops were exceptional at undervolting, which contributed to their battery life; with this SB I don't have that option. Frankly, I'm a bit upset that I dove in early and bought this thing rather than the Llano alternative, so I don't think I'll be making that same mistake again.

Gaming on the go and battery life don't go hand in hand. Not to mention that the CPU and IGP won't run at full power on some occasions when you're on battery, making the point moot.

I don't really get the appeal of gaming on a laptop. You have to be tied to the power brick and you need an external mouse if you want a good experience. Frankly, a PS Vita seems like a better proposition no matter how you look at it, except for the smaller but higher quality screen.

Having higher CPU performance helps in that the CPU can complete a task faster and therefore go to idle/power-gate sooner, enabling longer battery life. There are also the usual tasks where it helps, like video encoding and file compression.
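That "race to idle" argument can be sketched with toy numbers. Everything below is an illustrative assumption; the wattages and durations are made up, not measurements of any real chip:

```python
# Toy model of "race to idle": over a fixed window of wall-clock time,
# a CPU is active until its task finishes, then sits in a low-power
# idle state. All wattages and durations below are invented for
# illustration; they are not measurements of any real part.

def battery_energy_joules(task_seconds, active_watts, idle_watts, window_seconds):
    """Total energy drawn over the window: active phase plus idle phase."""
    return task_seconds * active_watts + (window_seconds - task_seconds) * idle_watts

WINDOW = 60.0  # one minute of wall-clock time

# Slower CPU: 20 s of work at 15 W, then idling at 2 W.
slow = battery_energy_joules(20.0, 15.0, 2.0, WINDOW)  # 380 J
# Faster CPU: same task done in 10 s at 20 W, then idling at 2 W.
fast = battery_energy_joules(10.0, 20.0, 2.0, WINDOW)  # 300 J

print(slow, fast)  # the faster chip wins despite the higher peak power
```

Of course, this only pays off when the workload is actually CPU-bound; a machine that idles at the same power either way gains nothing from the faster chip.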
 
There's a 35W quad-core version, or more accurately a 2-module one, and the benchmarks we've seen are from exactly that processor. The base clock is just above 2GHz, which puts it above Llano, but the turbo boost clocks are still unknown. The figures you're using are, at the moment, unsubstantiated from what we've seen, at least the hypothetical turbo clocks and the IPC. We don't know any clock speeds other than the base clock of the single 35W part and the desktop parts, and the latter's clock speeds were known months ago.

The IPC also depends on the workload (derp), but what's even more confusing is that in the single benchmark that compared perf-per-GHz, the IPC varied from equal to less to more depending on which part you compared against. There were 2-3 Llanos, all with the same model number, along with 2 Trinity parts with the same model number, and we didn't have clock speeds for the Trinity part. Basically, it's impossible to tell what the IPC is just yet, but it's safe to assume it's close to Llano's. There's also the added complexity of single- or double-module turbo boost, because as we've learned from Bulldozer, that too can differ depending on how the threads are packed within the two modules. The 17W parts are also supposed to come in 2-module variations, but I'm guessing availability and clock speeds on those are going to be less than impressive.
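The model implicit in that IPC discussion is just "performance is roughly IPC times clock". A toy sketch of the arithmetic, where both the IPC figures and the Trinity clock are purely hypothetical placeholders (only the base clock of one 35W part was known at the time):

```python
# Back-of-the-envelope model behind the IPC discussion:
# relative performance ~ IPC x clock speed. Every number below is a
# hypothetical placeholder, not a leaked or measured figure.

def relative_perf(ipc, clock_ghz):
    """Throughput proxy: instructions per cycle times cycles per second."""
    return ipc * clock_ghz

llano   = relative_perf(1.00, 1.9)  # baseline IPC, assumed 1.9 GHz part
trinity = relative_perf(0.95, 2.1)  # assume slightly lower IPC, higher base clock

print(f"Trinity vs Llano at base clocks: {trinity / llano:.2f}x")
```

The point being that with unknown turbo behaviour either factor can swing the result, which is why the leaked numbers are so hard to interpret.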

And what 20% improvement? From the leaked benchmarks, the most significant improvements are actually coming from the 35W Trinity vs. 35W Llano parts, which makes sense given the way the RCM tech works: the clock-speed boosts are bigger at comparatively lower clock speeds than at higher ones. Past 4GHz, according to AMD's own papers, the technology makes far less sense, and that same slide showed this in the 5800K numbers. The leaked benchmarks and slides show 30% improvements for those 35W TDP parts, which is in line with what AMD said when they actually increased their performance expectations for Trinity.

I'm not trying to paint it as an Intel-killer. Trinity will still get slaughtered by SB and IB, but if its performance has caught up to Nehalem and the GPU is as awesome as they say it is, which I think few people will doubt, it's going to kick some major ass. Frankly, this shouldn't be shocking either: AMD makes awesome APUs that are the better option for most users in the mobile world. But we have to be careful about assuming what we know about it, because at the moment we know almost nothing, and what we do know seems to coincide with what AMD has said, which would be a nice change after the clear bullshit they fed us with Bulldozer.

May 15th is the release date and I'm guessing we'll have even more leaked benchmarks and slides as the OEMs start fiddling with these chips in the coming weeks.

I can tell you personally that I haven't once done any encoding on this SB laptop since I bought it a year ago. Gaming, on the other hand, is something I would do, and I don't see why I should be attached to a wire to do it. The argument that a faster CPU finishes tasks faster is only true for people who use their CPUs to their max potential, and in a laptop you're almost always limited by the GPU or by the hard drive, more likely the latter. I use this lappy for browsing, PDFs, and the occasional drafting work, none of which would benefit at all from a faster CPU. In short, I'm never CPU limited, nor do I ever get close to it. I get 6 hours on the battery purely because I'm saving power by decreasing the CPU's clock speeds, so I don't understand why in the hell I'd need a more powerful CPU :/

My laptop isn't my desktop, and it's not where I do a majority, or any, of my work. If it were, I wouldn't be buying a dual-core+HT laptop without a discrete GPU anyway. You shouldn't assume I'm at the extreme end of the spectrum here. In fact, what I use it for is generally what most people use their laptops for, and a better GPU + battery life + cheaper price always win out. I can assure you most people don't encode and will never interact with QuickSync. If I'm buying a desktop CPU, it's almost certainly an Intel, but if I need a laptop, an AMD chip makes more sense.
 
Yeah, the dual-core version. I was talking about the quad-core, and that one won't see a base clock speed of more than 2GHz. With Turbo they can get to 2.5GHz or even higher when a program is using a single thread, but I was never talking about Turbo OR the dual-core 35W version.
No, it's not. In terms of memory bandwidth it's in many cases a sidegrade from Stars, and the only thing AMD is banking on is higher-speed memory instead of building a better IMC. Cache latency is another issue, too.


[Image: AMD Bulldozer review, SiSoft Sandra 2011 memory bandwidth chart]


IGPs on the APU don't use a single bit of the cache... actually, the memory controller is worked hard precisely to keep data from "ping-ponging" the way normal IGPs do.
 
Olikan, the developer kit for Trinity has the decode info for the OPN and device ID so you can find the model number for the above chips.

2.5GHz is quite nice 😉 I'm guessing those would be the 45W chips? Or also 35W? Everything under 3020 looks to be desktop.
 
@OP

Well, I'd recommend watching out for both the 17W and 35W models of Trinity. Trinity should give Intel some serious headaches here; Haswell was redone for a reason (according to some chap with connections at Intel). If you're looking for an almighty Intel chip, wait till Haswell. On the other hand, if you're looking to buy something that suits your needs now, you could/should consider Trinity-powered laptops, as low-end/basic Intel-based solutions won't perform as well overall (wherever graphics play a part), at least in the same power envelope. Trinity will give you reasonable graphics power at a low power envelope. The A10, supposedly the new top-end part, is also supposed to have a 35W model (according to a leaked HP slide) with a base clock above 2GHz and turbo above 3GHz.

Of course, this being AT... the colour blue mood is rather unsurprising.

EDIT:
http://semiaccurate.com/forums/showpost.php?p=160403&postcount=1931

http://item.gmarket.co.kr/detailvie...cd=111111111&pos_class_kind=T&search_keyword=

Here's the link to leaked HP slides...
 
Pic screams "fake!" to me. And if it WERE true, it goes against AMD's estimates of 15% higher CPU performance. This looks like a table for the desktop version of Trinity running FM2, not the laptop version.

That's because sometime early this year they increased the performance estimates from ~15% to 20-30%, which those clock speeds seem to agree with. They also increased the GPU performance estimates to 50%.

http://www.xbitlabs.com/news/cpu/di...ce_Projections_for_Next_Gen_Trinity_APUs.html
 
Pic screams "fake!" to me. And if it WERE true, it goes against AMD's estimates of 15% higher CPU performance. This looks like a table for the desktop version of Trinity running FM2, not the laptop version.

That was for Bulldozer -> Piledriver.

Even then, I don't believe in 10-15%; AMD probably doesn't even have an ES Vishera chip... things could be better (or worse) than that.
 
Pic screams "fake!" to me. And if it WERE true, it goes against AMD's estimates of 15% higher CPU performance. This looks like a table for the desktop version of Trinity running FM2, not the laptop version.

AMD never said 15% higher CPU performance. AMD said a 15% performance/watt increase. Essentially, Piledriver cores can be the same speed as Bulldozer cores, just using less power. Try calculating how much better Ivy is than Sandy by those metrics.
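The distinction matters because a perf/W gain can be cashed in two ways. A small sketch: the 15% figure is from the post above, while the 100-point score and 35 W budget are made-up baseline numbers for illustration:

```python
# "15% performance/watt" is a ratio claim, not a speed claim: it can be
# spent on more speed at the same power, or the same speed at less power.
# Baseline score and wattage below are invented for illustration.

baseline_perf = 100.0
baseline_watts = 35.0

ppw = baseline_perf / baseline_watts   # baseline performance per watt
improved_ppw = ppw * 1.15              # the claimed +15% perf/W

# Option A: keep the 35 W budget -> 15% more performance.
same_power_perf = improved_ppw * baseline_watts
# Option B: keep the 100-point performance -> ~13% less power.
same_perf_watts = baseline_perf / improved_ppw

print(same_power_perf, same_perf_watts)  # 115.0 and ~30.4 W
```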

[Image: AMD Piledriver roadmap slide]
 
@OP

"The fact of the matter is that everything Charlie has said on the big H is correct. Haswell will be a significant step forward in graphics performance over Ivy Bridge, and will likely mark Intel's biggest generational leap in GPU technology of all time."

The above excerpt is from AT's review of IB 🙂

So it is settled then... Llano is still better than IB, and Haswell will fit between Llano and Trinity in muscle power when it comes to GPU. However, AMD will very likely trounce it (again!) with whatever comes after Trinity...
 
That's because sometime early this year they increased the performance estimates from ~15% to 20-30%, which those clock speeds seem to agree with. They also increased the GPU performance estimates to 50%.

http://www.xbitlabs.com/news/cpu/di...ce_Projections_for_Next_Gen_Trinity_APUs.html

Then AMD is clearly talking crap. CPU performance will not increase by anywhere near 30% with Piledriver, as it's still the Bulldozer architecture with some tweaks, unless they found some magical unicorn fairy dust and GlobalFoundries somehow solved all their issues overnight, delivering a CPU capable of much higher clock speeds. Going from VLIW5 to VLIW4 won't net you 50% more GPU performance at the same power envelope, either. It's pretty hard to trust any corporation nowadays. Amazingly, it's only Intel that doesn't pull the shenanigan of presenting the highest theoretical performance increase as the average one.

If there are two things I hate as an enthusiast right now, they're AMD's CPU department with their marketing crap, and NVIDIA with their marketing BS.
 
Then AMD is clearly talking crap. CPU performance will not increase by anywhere near 30% with Piledriver, as it's still the Bulldozer architecture with some tweaks, unless they found some magical unicorn fairy dust and GlobalFoundries somehow solved all their issues overnight, delivering a CPU capable of much higher clock speeds. Going from VLIW5 to VLIW4 won't net you 50% more GPU performance at the same power envelope, either. It's pretty hard to trust any corporation nowadays. Amazingly, it's only Intel that doesn't pull the shenanigan of presenting the highest theoretical performance increase as the average one.

If there are two things I hate as an enthusiast right now, they're AMD's CPU department with their marketing crap, and NVIDIA with their marketing BS.

Going from VLIW5 to VLIW4 and then clocking your GPU 33% higher will give you that 50% or more. 😀
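For what it's worth, the arithmetic behind that claim is multiplicative. The 33% clock figure is from the post; the per-clock VLIW5-to-VLIW4 gain is purely an assumed placeholder:

```python
# An architectural (per-clock) gain and a clock bump multiply.
# arch_gain is a hypothetical placeholder; clock_gain is the 33%
# figure from the post above.

arch_gain = 1.15    # assumed per-clock VLIW5 -> VLIW4 improvement
clock_gain = 1.33   # "clocking your GPU 33% higher"

total = arch_gain * clock_gain
print(total)  # ~1.53, i.e. roughly the 50%+ being claimed
```

Whether a 33% clock bump is achievable in the same power envelope is, of course, a separate question.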
 
No shenanigans here... just comparing our worst against their future best...
[Image: Intel Ivy Bridge performance slide]

Except you mentioned only one of the many slides they provided. Many of the other slides state just the CPU performance difference, and they put it at ~10% faster, which is close to the truth. And that chart is accurate, unlike what NVIDIA did with CoreMark by using different version numbers for each product. It was also their only marketing slide.

Perhaps you should focus on the argument instead of trying, unsuccessfully, to twist it.
 
Going from vliw 5 to vliw 4 and then clocking your GPU at 33% higher will give you that 50% or more.😀

Except they won't be able to clock it 33% higher. The HD 6970 has a 250W TDP and a core clock of 880MHz, while the HD 5870 has a 188W TDP and a core clock of 850MHz. The HD 6970 is also "only" 15-20% faster.

Sorry to break it to all of you, but AMD is pulling their new numbers out of where the sun don't shine, and unless they have magical unicorn dust there's zero way Trinity is going to have 50% higher IGP performance than Llano on average. Zero. You don't get that huge of a boost out of a slightly more efficient architecture on the same process node with the same foundry.
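Plugging the quoted numbers into a quick perf-per-watt check (using TDP as a rough proxy for power draw, and the midpoint of the quoted 15-20% speedup) actually supports the skepticism: the HD 6970's efficiency came out lower, not higher:

```python
# Perf/W comparison from the figures quoted above. The 1.175 speedup is
# the midpoint of the quoted "15-20% faster"; TDPs are as given in the
# post (250 W HD 6970 vs 188 W HD 5870). TDP is a proxy, not measured draw.

ppw_5870 = 1.000 / 188.0   # normalized performance per watt
ppw_6970 = 1.175 / 250.0

ratio = ppw_6970 / ppw_5870
print(f"HD 6970 vs HD 5870 perf/W: {ratio:.2f}x")  # ~0.88x: efficiency dropped
```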
 
They clearly say "up to", as in a ceiling. AKA "don't expect more than this". My guess for the average GPU performance gain in games is 30%.

No wonder Intel is pouring tons of $ into Haswell; IB is behind Llano more than I thought it would be. Seems Trinity may actually get some decent form factors from OEMs.

Except they won't be able to clock it 33% higher. The HD 6970 has a 250W TDP and a core clock of 880MHz, while the HD 5870 has a 188W TDP and a core clock of 850MHz. The HD 6970 is also "only" 15-20% faster.

Sorry to break it to all of you, but AMD is pulling their new numbers out of where the sun don't shine, and unless they have magical unicorn dust there's zero way Trinity is going to have 50% higher IGP performance than Llano on average. Zero. You don't get that huge of a boost out of a slightly more efficient architecture on the same process node with the same foundry.
 
They clearly say "up to", as in a ceiling. AKA "don't expect more than this". My guess for the average GPU performance gain in games is 30%.

No wonder Intel is pouring tons of $ into Haswell; IB is behind Llano more than I thought it would be. Seems Trinity may actually get some decent form factors from OEMs.

I'd expect the gain to be more like 20% over Llano. VLIW4 just isn't THAT much more efficient than VLIW5.
 
Zero. You don't get that huge of a boost out of a slightly more efficient architecture on the same process node with the same foundry.

Ohhh... I think you didn't get the memo on resonant clock mesh... here

Another thing, LOL_wut: you have to remember that Llano could barely reach 3.6GHz even with the IGP disabled... the GF 32nm node is worse than the 45nm one; things can change a lot in one year.

BTW, no, I was trying to point out that even Intel uses shenanigans... is it that hard to see that?
 
Ohhh... I think you didn't get the memo on resonant clock mesh... here

Another thing, LOL_wut: you have to remember that Llano could barely reach 3.6GHz even with the IGP disabled... the GF 32nm node is worse than the 45nm one; things can change a lot in one year.

BTW, no, I was trying to point out that even Intel uses shenanigans... is it that hard to see that?

Resonant mesh is for the CPU portion only, from what I've read.

GlobalFoundries' problems all come down to one thing: after you reach a certain range of voltages, leakage becomes too high and power consumption rises tremendously because of it. AMD also has this problem with Bulldozer, so I doubt it's completely fixed. Given that AMD's CPU department is in shambles, and has been for a few years, I don't see them pulling a rabbit out of a hat. If they wanted to save face a few years ago, they should've ended, or never started, their exclusivity agreement with GF and instead manufactured everything at TSMC, even if it caused initial delays. Look at Bobcat and you can see a product where AMD succeeded in every way.

Intel does shenanigans... on a very small number of occasions. Nothing compared to what you see NVIDIA doing; they even manage to make AMD look like angels in comparison. There's just no other very big/influential technology company I can think of that lies to its customers like NVIDIA. Well, except Apple... but that's a topic for another day.
 
Intel does shenanigans... on a very small number of occasions. Nothing compared to what you see NVIDIA doing; they even manage to make AMD look like angels in comparison. There's just no other very big/influential technology company I can think of that lies to its customers like NVIDIA. Well, except Apple... but that's a topic for another day.

Well... I don't believe in a 56% GPU increase either; that's way too high. I'd bet on ~35%.

But for the 29% CPU figure I don't see a problem: the clocks are way higher, and Piledriver has tons of small changes over Bulldozer.
 
@Lol_wut

I'll bet you'll see a bigger jump compared to what IB got over SB 🙂

As someone else mentioned... Haswell had better be good, as it has already been redone once. AMD is not kidding around with APUs; they mean business. Kaveri will most likely be another slap in Intel's face... even with Haswell in hand.

On the server side, Intel has AMD beat right now, but I wouldn't be too surprised to see AMD catch up with Steamroller in both perf/$ and perf/W.

Intel's shenanigans? Well, as part of the antitrust settlement, Intel can still develop exclusive lines with OEMs and exclude AMD from them. Translation: Intel could feed OEMs money and it would all be fine; nobody would bother them if the OEMs happened to do a shoddy job with AMD. How's that for tricky little so-and-so's.
 