Bradtech519
Had an E6850 and loved it. Wiped the floor with the X2 3800+ overclocked. It shows what happens when you wake a sleeping giant like Intel.
First of all, Kabini appears to just have an arbitrary 15W catch-all TDP thrown at it. Anand found it to consume much less.
AnandTech said: The CPU performance testing of x264 HD 5.x and Cinebench confirm the CPU deficit AMD faces with Kabini. In heavily threaded workloads, Ivy Bridge ULV is 50-100% faster, but the real problem is in the single-threaded workloads. A single Jaguar core in Cinebench manages to score just 0.39 compared to IVB ULV’s score of 1.24, so worst-case Kabini is one third the speed of Ivy Bridge.
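The "one third" figure is just the ratio of the two quoted single-thread scores; a quick sanity check:

```python
# Single-threaded Cinebench scores quoted above
jaguar = 0.39    # one Kabini Jaguar core
ivb_ulv = 1.24   # Ivy Bridge ULV

ratio = jaguar / ivb_ulv
print(f"Kabini single-thread speed: {ratio:.0%} of IVB ULV")  # 31%, i.e. roughly one third
```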
Your slides are outdated, yet you keep using them as if the info written is the holy grail...

True, the slides are outdated as we have more accurate information now. But the slides are still technically correct: 4.5W is in the 4-6.5W range, while 7.5W is in the <=10W category. So all the more accurate current information tells us is that Intel is coming in on the low end of their projections for Bay Trail M.
From the forum:
http://wenku.baidu.com/view/678a3a36dd36a32d73758148.html
4.5W is the minimum, at 1.46GHz and for a 2C part.
I'm glad you asked! http://www.androidauthority.com/snapdragon-800-battery-benchmark-galaxy-s4-lte-a-237448/ Impressed yet? Or just plain worried?

Not particularly, it's about where I expected it to be. Why is that, when I'd previously stated my expectation that the Snapdragon 800 MDP was probably using somewhere between 6 and 7 watts to provide that level of performance? Simple - take a look at the actual Playwares review of the Samsung S4 LTE-A, which shows GLBenchmark 2.7.2 performance at under half that of the MDP. That most likely means the GPU was run at half the MDP's frequency, with lower voltage as well, probably resulting in around 1.5W of power usage.
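The ~1.5W guess is consistent with the usual first-order model in which dynamic power scales linearly with clock and quadratically with voltage. A sketch under purely illustrative assumptions (the 6.5W MDP baseline and the 0.7 voltage ratio are guesses for illustration, not measured values):

```python
def scaled_power(p_base_w, f_ratio, v_ratio):
    """First-order dynamic CMOS power model: P scales with f * V^2."""
    return p_base_w * f_ratio * v_ratio ** 2

# Illustrative inputs only: MDP GPU at ~6.5 W, S4 LTE-A at half
# the clock and an assumed ~30% lower voltage.
p_s4 = scaled_power(6.5, f_ratio=0.5, v_ratio=0.7)
print(f"Estimated S4 LTE-A GPU power: {p_s4:.1f} W")  # ~1.6 W
```

With a less aggressive voltage drop the estimate lands closer to 2-3 W, so the exact figure depends heavily on the assumed voltage scaling.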
Kabini has nothing to do with this thread, but now that you mentioned it... Anand also shows a Kabini notebook barely matching an IB ULV notebook in light workloads with normalized battery life, and up to 18% more battery life under high CPU usage. Battery life should be much better for the kind of CPU/GPU performance it offers (the Haswell ULT MacBook Air 2013 has 35-65% more battery life - normalized for battery capacity - than the IB ULV Air 2012). If we assume Bay Trail is even more aggressive than Haswell on the power front, you kinda get where some of this hype comes from.
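"Normalized battery capacity" just means dividing runtime by battery size so differently sized packs can be compared. A minimal sketch with hypothetical runtimes and capacities (not the review's actual numbers):

```python
def minutes_per_wh(runtime_min, battery_wh):
    """Battery life normalized by pack capacity."""
    return runtime_min / battery_wh

# Hypothetical figures for illustration only
ivb_air = minutes_per_wh(7 * 60, 50)    # e.g. 7 h on a 50 Wh pack
hsw_air = minutes_per_wh(12 * 60, 54)   # e.g. 12 h on a 54 Wh pack
print(f"Normalized Haswell advantage: {hsw_air / ivb_air - 1:.0%}")  # 59%
```

With those made-up inputs the advantage comes out inside the 35-65% band quoted above, which is the point of normalizing: a bigger battery alone can't produce that gap.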
It all depends on the capacity of the battery; Haswell in most cases has much improved idle power over Ivy Bridge, among other things. On paper Haswell could be better, but most people don't count all the factors, so they get the wrong impression.
Personally, from that "Bay Trail crushes ARM" thread and this thread that seems to be a direct consequence of that, I have no strong feelings either way - I won't be surprised if the new Atom outperforms any ARM competitor, but I can't say I'm exactly hoping or counting on it either. I'm just purely wait-and-see, no expectations either way.
My only real opinion on the matter is that whether Bay Trail outperforms ARM or not, AnTuTu is, without a doubt in my mind, not the benchmark to indicate it. AnTuTu is neither reliable nor consistent, and can be gamed. Its technical flaws prevent it from being any sort of bellwether. And that's the crux of the issue. Had there been multiple benchmarks made public - Kraken, Octane, Browsermark, 3DMark, Sunspider, anything at all aside from AnTuTu - then that would be a more solid base upon which to shout "Woohoo, Intel finally beat ARM! My company is number 1 and I am somehow a better person for it! Take that, you ARM fanboys!".
So the real issue at heart here (for me) isn't whether people believe or don't believe that Intel can catch/overtake ARM. (Everything is just a matter of money in engineering - how much one party decides to throw at the problem - so anything is possible*, and I believe Intel should in fact allocate as much resources as they need in order to succeed here, even to the detriment of their desktop line.) The real issue at heart is that only one benchmark has been "leaked", and it is the worst possible benchmark, and yet some decidedly pro-Intel parties take it as irrefutable evidence already. And then, when it is pointed out to them that AnTuTu is a terri-bad benchmark, so hold off on the proclamation until we get more reliable and consistent benchmarks, these same pro-Intel parties brand their opponents "AMD fans", "ARM fans", "Qualcomm fans", or just generic "Intel haters". I find that rather disturbing. You, OP, do not wish to be branded an Intel fanboy (even going so far as to explain the nickname with a completely harmless origin story), yet you seem to have no qualms about branding the people who disagree with your conclusions and/or interpretations to be Intel haters. It is very disconcerting, and I seriously hope you can begin discussions in the future without unnecessary rhetoric like that. You essentially invited a fanboy flamewar to happen when you opened your thread like that. If that truly wasn't your intent, then you may be well served by exercising better judgment in your future opening posts.
TL;DR: I've no doubt in my mind that if Intel actually spent several billions in the design of Bay Trail with the express intent of overthrowing ARM, it is very probable to overtake ARM in performance and efficiency. However, the question of whether they have now accomplished it completely, partially, or not at all, is not something that the AnTuTu benchmark can determine.
*Engineering results scale pretty well with money. If Apple, for example, handed AMD $50B to build the necessary fab and tools and design a chip to embarrass the highest-end Intel products, it would happen (that's a lot of money). But if Google would then hand Intel a bigger amount of money to one-up that resulting Apple/AMD product, then it would also happen. It really all comes down to money, hence the more successful a company becomes, the more muscle they have to flex since engineering issues are mostly caused by - and also mitigated/cured by - money.
Are you honestly trying to tell me that these forums are not rife with posts from amd fans doing exactly the same thing, sometimes basing their claims on nothing more than marketing slides or statements by persons with a vested interest in the product they are making claims for? It is really sad actually to see the forums so polarized that it seems sometimes impossible to get any useful information at all.
You're really not giving intel enough credit here, the MBA isn't getting double the battery life (over IVB) merely by virtue of a larger battery. Some of the external factors (OS power management, battery size, etc) can be manipulated to improve things slightly, but the Haswell ULV is just that much better than Ivy Bridge in terms of battery life IMO. Many review websites have tested battery life to be 12-13 hours on the MBA. Anyway, intel has implemented many of the same burst and power management schemes into Silvermont that Haswell uses, and I think the Silvermont will be equally impressive in that respect. Of course, we won't know for sure until next month...it should be interesting in any case.
Most of the problem here stems from the fact that mobile benchmarking is, to put it as kindly as I can, in the dark ages. This is especially true if you're trying to figure out how the CPU performs (as opposed to graphics). At least with graphics benchmarks, a real image is produced for the run. For most of the CPU benchmarks, it's a random snippet of code compiled in an undocumented way that produces an unknown result. That result is then turned into a score and published.
There are so many different mobile CPU architectures (A7, A8, A9, A15, Krait2, Krait3, Swift, etc) that run at so many different claimed frequencies, yet sit on so many different form factors that produce different throttling behaviors that it's almost impossible to do an apples to apples comparison. Then you have things like different OS types (Android, IOS, Win8) with so many different versions... it's all a mess.
Something like Geekbench tries to close that gap, but again, that's another closed source benchmark with an unknown workload with unknown compiler settings. Has anybody like Exophase done a breakdown to see what instructions are used? For example, do the FP components use SSE and NEON across architectures and OS versions?
Then there are the browser benchmarks (Sunspider, Octane, Kraken, etc). Nobody ever believes these are right. There are always claims of browser cheats or unfair optimization. Other people say that the benchmarks "don't run long enough" and that if they were longer, this would lead to more throttling.
Does anybody here have a mobile benchmark that they think is a reasonable indication of CPU performance? Is there any "one" benchmark that could leak that people would actually believe? I don't think there is, and honestly, there shouldn't be.
Until someone steps up and produces a better mobile benchmark, I'm afraid that the screams and yells of the forum posters will continue unabated.
Exactly. Even if you adjust the slides and say performance is only 1.5x better and power is only 2x lower, it's still pretty good. I mean, if it performs even worse than those "halved numbers", it's a much bigger marketing failure than JF-AMD with BD.
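Even with the slide claims halved, the two factors multiply when you look at efficiency:

```python
perf_gain = 1.5   # "only" 1.5x the performance
power_cut = 2.0   # at "only" half the power
print(f"Perf-per-watt advantage: {perf_gain * power_cut:.1f}x")  # 3.0x
```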
As I mentioned in another thread, you can't compare notebooks to get the real performance and power draw of the chips. Intel has rules for ensuring only the best, power-sipping components are used in their boutique ultrabook reviews. The rest of them are the usual crap with much worse battery life.
Since when has intel dictated to manufacturers what silicon/products they are able to send out for reviews? So what you're essentially stating is that intel tells Apple, Samsung, Asus, etc. which chips and components they can use in their review samples.
Sorry, but no.
Intel's role in the industry has started to change. It worked very closely with Acer on bringing the W510, W700 and S7 to market. With Haswell, Intel will work even closer with its partners - going as far as to specify other, non-Intel components on the motherboard in pursuit of ultimate battery life.
AnandTech Kabini Review said: AMD shipped hardware sites special prototype laptops, similar to what we’ve seen in the past with Sandy Bridge, Llano, Ivy Bridge, and Trinity. These systems typically aren’t intended to hit retail outlets, though in some cases they may be very similar to production laptops.
I rooted whatever ARM devices I had, put Debian on them, compiled Povray 3.6 on them, and ran the benchmark scene. NEON vs. VFP didn't make much difference due to the nature of the code itself.
SiliconWars,
You do realize that the same type of design optimization happens with phones and tablets, right, and that Ultrabooks are now just becoming more like those devices in terms of design process?
Intel's role in the industry has started to change. It worked very closely with Acer on bringing the W510, W700 and S7 to market. With Haswell, Intel will work even closer with its partners - going as far as to specify other, non-Intel components on the motherboard in pursuit of ultimate battery life.
I don't see anything here stating that intel is intentionally trying to mislead and manipulate reviewers.
Of course intel has full control over the ultrabook specification; therefore they can specify various components for any manufacturer producing a form factor that they created. That's not surprising.

Surprising or not, it gives them an advantage that you and others are unfairly attributing solely to Haswell. It isn't; the fact is Intel has the whole ecosystem behind them where it matters, i.e. in AnandTech reviews of high-end hardware.
I think the more likely scenario is that intel is trying to create a user experience which is somewhat even across various products -- they want all of the products released under their ultrabook spec to have good battery life. That means OEMs can't use cheap components which may compromise that. That does not, however, mean that intel is trying to mislead reviewers and/or strongarm OEMs. I highly doubt that many would play along, especially Apple.
It is not uncommon within the silicon industry for a producer to set standards by which products are released. In the dGPU world, nvidia and AMD do the same thing - they both want even consumer experiences across a broad range of products; that doesn't mean reviewers are being misled.

The reviewers know exactly what is going on; it's the readership who are sadly ignorant of it. Anand spells these things out to you frequently under difficult reviewing circumstances, but you are far too busy cheerleading your favourite vendor to notice it.