What are you talking about? What does the Dalvik JIT have to do with V8?
The point is that Dalvik performance is more fundamental to Android than V8, and yet Google saw fit to ship Dalvik without even having a JIT for quite a long time.
I think you underestimate Google. Here is a small history lesson for you:
http://www.niallkennedy.com/blog/2008/09/google-chrome.html
Clearly V8 development started from scratch with x86 and ARM optimization in mind.
Mentioning ARM and x86 in the same sentence there doesn't mean they got equal attention.
In 2011 V8 got some very basic optimizations that massively improved ARM performance:
http://blogs.arm.com/software-enablement/456-googles-v8-on-arm-five-times-better/
And they came from ARM employees, not Google. You can't possibly say Google was aggressively optimizing for ARM when they didn't even have VFP code generation for a float-heavy language. I don't think you really understand the implications here. Was JS performance on V8 improving 5x on x86 in 2011? No.
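To put the "float-heavy language" bit in context (my own toy illustration, not anything from the linked post): JavaScript has exactly one number type, a 64-bit IEEE-754 double, so essentially all of the arithmetic an engine generates code for is floating-point arithmetic. Without VFP code generation, that means falling back to software float routines on ARM.

// JavaScript/TypeScript has a single number type: a 64-bit IEEE-754 double.
// Even "integer" code is specified in terms of doubles, so an engine that
// can't emit VFP instructions ends up in software floating-point routines
// for almost everything.
const a = 0.1 + 0.2;
console.log(a);                       // 0.30000000000000004 (double rounding)
console.log(Number.isInteger(3));     // true, but 3 is still stored as a double
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991 = 2^53 - 1, the double mantissa limit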
Using JS benchmarks as processor benchmarks started with phones. On the PC, JS benchmarks were used as ammunition in the "browser wars" but never as a CPU speed test.
But anyway, back then Chrome was faster not because of specific x86 optimizations, but because of the different approach it took (compiling JavaScript to native code instead of interpreting bytecode).
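Roughly, the difference looks like this; a deliberately simplified TypeScript sketch of the two approaches, not how V8 is actually structured:

// Interpretation: walk a bytecode array and dispatch on every instruction.
type Op = { code: "PUSH"; value: number } | { code: "ADD" } | { code: "MUL" };

function interpret(program: Op[]): number {
  const stack: number[] = [];
  for (const op of program) {
    switch (op.code) {
      case "PUSH": stack.push(op.value); break;
      case "ADD":  stack.push(stack.pop()! + stack.pop()!); break;
      case "MUL":  stack.push(stack.pop()! * stack.pop()!); break;
    }
  }
  return stack.pop()!;
}

// "Compilation": the same expression as a directly executable function with
// no per-instruction dispatch overhead. A real JIT emits machine code, which
// is exactly where architecture-specific (x86 vs. ARM) code gen comes in.
const compiled = (a: number, b: number, c: number) => (a + b) * c;

console.log(interpret([
  { code: "PUSH", value: 2 },
  { code: "PUSH", value: 3 },
  { code: "ADD" },
  { code: "PUSH", value: 4 },
  { code: "MUL" },
])); // 20
console.log(compiled(2, 3, 4)); // 20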
It makes absolutely zero difference whether people were using them as CPU benchmarks; the fact is that in 2008 people suddenly became very interested in JavaScript performance in browsers. That meant the browser with the best JS performance won, which meant the best x86 code gen. ARM wasn't on the radar at all.
Chrome was the first browser to use a JIT for its JS engine, yes. But the others (Firefox, IE, Safari, Opera, etc.) quickly followed suit because of all the attention Google was getting over it. I never claimed that Chrome's advantage was due to targeting x86 more aggressively than the others; that would be silly. Rather, they were all working on code gen, and they were all focusing on x86 because that's what people were using. ARM performance wasn't taken seriously until at least 2010.
Try running any JS benchmark (Octane, Kraken, V8...) and see for yourself.
Many already have.
http://codehenge.net/blog/2012/08/javascript-performance-rundown-2012/
Of the three browsers tested, V8 only has a big win in the benchmark Google developed themselves; big surprise there, right?
And MS is still aggressively pursuing improved JS performance, like everyone else.
http://encosia.com/interesting-details-on-ie10s-javascript-performance-tweaks/
Let me fix it for you:
The 4-core Calxeda nodes were shown beating the D525, sometimes at much lower clocks, but with two more cores.
I hope you understand the difference between a core and hyper-threading.
Of course I do.
The point is that the test is extremely apples-to-oranges as a CPU uarch comparison, because you're looking at very different core counts and clock speeds. And Phoronix doesn't report things like CPU utilization, which would give some idea of how much threading each task actually required.
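To show why the core/clock mismatch matters, here's a back-of-the-envelope Amdahl's-law sketch with made-up numbers (a hypothetical 4-core 1.4 GHz A9 vs. a hypothetical 2-core 1.8 GHz Atom, identical per-clock performance assumed), just to illustrate that throughput results say little about per-core uarch strength unless you know how parallel each workload was:

// Amdahl's law: the serial fraction runs on one core, the rest scales with cores.
function estimatedThroughput(
  cores: number,
  clockGHz: number,
  parallelFraction: number, // fraction of the work that scales across cores
): number {
  const singleCore = clockGHz; // per-clock performance assumed equal for both
  return singleCore / ((1 - parallelFraction) + parallelFraction / cores);
}

for (const p of [0.0, 0.5, 0.95]) {
  const quadA9   = estimatedThroughput(4, 1.4, p); // hypothetical Calxeda-like quad A9
  const dualAtom = estimatedThroughput(2, 1.8, p); // hypothetical D525-like dual core
  console.log(`parallel=${p}: A9=${quadA9.toFixed(2)} Atom=${dualAtom.toFixed(2)}`);
}
// p=0.0:  A9=1.40 Atom=1.80  (single-threaded: clock decides)
// p=0.5:  A9=2.24 Atom=2.40
// p=0.95: A9=4.87 Atom=3.43  (highly parallel: core count decides)
// Same per-clock performance assumed in both cases, yet the "winner" flips with the workload.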
And you think 1.6 GHz for the A9 is an instant clock? And can you provide any proof that a 4-core ARM under load (at full clock speed) consumes less power than Medfield at turbo? The only comparison I've seen is this:
http://www.anandtech.com/show/6330/the-iphone-5-review/12
Medfield at 2 GHz consumes a little bit more than the dual-core Krait, while Tegra 3 consumes a lot more.
What do you mean, "instant clock"? There are SoCs that clock the A9 that high. Where have I ever claimed that FOUR Cortex-A9 cores consume less power than ONE Atom core at similar clocks? Now you're being ridiculous.
Let's focus on what I did say: Cortex-A9 tends to perform a little better clock for clock than Atom in single-threaded workloads, and the two reach similar clocks (although Atom in Medfield can turbo; this could possibly be done in software on A9 SoCs, but no one does it, and it's not really a uarch feature, IMO). I haven't said anything about power consumption, and it varies a lot by implementation, i.e. not just which manufacturing process is used but how the design is laid out and optimized. But I would easily expect the Cortex-A9s on Samsung's 32nm process, as found in the A5r2 and Exynos 44xx for instance, to consume a lot less power than Saltwell at the same clock speed. So yes, much better perf/W for the CPU core, but if the L2 cache and/or memory controller suck, that can change things.
I know people like to claim that single-core Medfield beats 2- to 4-core Cortex-A9s and Kraits, but that's ridiculous; they only look at single-threaded tests when making these claims...