Originally posted by: GoHAnSoN
i know it's a different architecture, but are there any reviews comparing them against each other?
Just thought it would be interesting to know.
You would not think this is such a difficult issue, but it is.
All you would have to do is have each perform the same task and measure the time. But they never can perform quite the same task.
Most people operate their computers interactively most of the time. You click on something and wait for it to happen. There is very little "compute-bound" operation. Almost all of the activity that takes a noticeable time is the OS manipulating the graphical interface and the video card rendering the result. I gather that people familiar with both Apples and PCs agree that Apples are much slower in this respect, and that is primarily due to the OS.
You could have both processors run the same apps and time a "compute-bound" activity, something like encoding video into DivX. (I don't know if DivX is even supported on Apples.) But apps that run on both platforms are not really the same apps. In part, they are compiled from the same source into different object code, the instructions a processor actually executes. We don't know how well each compiler optimizes, or how much special handling the most time-consuming parts received to use each processor to best advantage. In part, there are customized sections that hook into the OS's API (application programming interface). The app itself does not directly operate the hard drive or the video. A lot of time that appears compute bound is actually spent reading and writing the HD, so it is inconclusive whether the comparison measures the processor or something else. Some of the well-known benchmarks are actually scripts that drive an app interactively, so the speed of the video card ends up being a big part of the time measured.
Then there are a few old-time benchmarks from way back that were originally intended to rate a mainframe's calculation prowess. There is some source code, probably in C or Fortran, that is compiled to run on your specific processor. You are allowed to provide customized versions of key routines if you wish. Those customized sections, and how good your C/Fortran compiler is at optimizing for the target processor, make a huge difference. x86 users are quite sure that Apple stacked its G5 comparison against them: Apple ran these benchmarks on Linux and used a universal, open-source compiler to generate object code for both P4s and G5s, claiming this was an unbiased method. In fact, the times for these benchmarks improve drastically when Intel's compiler and custom routines are used. That may be unfair, but Apple is entitled to do the same for the G5, and it is perfectly obvious that Apple chose the supposedly unbiased method because it hobbled the P4 so badly. Without that, the G5 would have been crushed. In any case, these old-time benchmarks are as close as you can get to directly comparing totally different CPUs, but even then you still do not know how much the result reflects the tasks users actually perform.
Once we get to this point, Apple proponents fall back to saying speed doesn't matter because both environments are fast enough and the Apple environment is far superior. Whatever. There is little doubt that any compute-bound task, given a modest amount of programming effort, will run substantially faster on a P4.
Apple could in fact run its present OS on P4s if it were so inclined. It is a flavor of UNIX, and UNIX is written to be processor-independent. Apple makes its old apps run on new and different processors by recompiling them, and the same would work for the P4. That would not quite make Apples into PCs, because the way Apples handle all the I/O and peripherals would still be different. Apple would probably not care to deal with the chaos of the PC world, but if it went that far, it would become obvious how sparse the driver support is compared to Windows on PCs.