Originally posted by: grant2
Originally posted by: Wingznut
I don't suppose you'd mind explaining to us what part of Sohcan's FAQ is inaccurate? Or exactly how 64-bit will speed up your desktop experience?
I think he downplays the value of performing 33- to 64-bit integer & FP math in one operation, vs. the current multi-step approach
I also remember reading an article 13 years ago where the author discussed the speed and program-size increase MERELY from recompiling 16-bit code as 32-bit. Now what does 16-bit -> 32-bit offer that 32-bit -> 64-bit doesn't?
I wasn't trying to downplay anything. It has long been recognized (among engineers and architects, not PR types) that 64-bit microprocessors add functionality and a larger flat memory space rather than an explicit performance increase from bit-level parallelism. The FAQ I wrote a while back explains the diminishing performance returns from the additional bit-level parallelism of 64-bit arithmetic.
I was merely trying to dispel the myth that microprocessors somehow "operate" on some fixed-sized dataset, that 64-bit microprocessors can somehow "churn" through the data at twice the rate, and that this will somehow lead to a direct speedup in desktop applications. There are certainly big-iron applications where arithmetic on 64-bit datatypes is more common, but this hardly translates to a proportional speedup on a 64-bit microprocessor. For example, 186.crafty is a chess-playing program in SPECint (an industry-standard workstation benchmark) that heavily uses 64-bit datatypes. Despite this, the performance of the P4 and Athlon in crafty is in line with their total SPECint score (base) with respect to 64-bit microprocessors. In 186.crafty, the 3.06 GHz P4 scores 1160, the 2800+ Athlon XP scores 1311, and the 1.45 GHz POWER4+ (IBM's 64-bit server microprocessor) scores 941. Their respective total SPECint scores are 1099 (P4), 898 (Athlon XP), and 909 (POWER4+).
For a supporting opinion, here's an excerpt from the introduction of the textbook for my graduate class in parallel computer architecture (Parallel Computer Architecture, David Culler and Jaswinder Pal Singh):
"The period up to about 1986 is dominated by advancements in bit-level parallelism, with 4-bit microprocessors replaced by 8-bit, 16-bit, and so on. Doubling the width of the datapath reduces the number of cycles required to perform a full 32-bit operation. Once the 32-bit word size is reached in the mid-1980s, this trend slows, with only partial adoption of 64-bit operation obtained a decade later. Further increases in word width will be driven by demands for improved floating-point representation [not an issue, since x86 has supported 64-bit and 80-bit FP modes for two decades] and a larger address space rather than performance (emphasis added). With address space requirements growing by less than a bit per year, the demand for 128-bit operation appears to be well in the future."
Here's HP's page on 64-bit computing. Note, under the benefits section, the emphasis on increased functionality, precision, and performance due to increased memory addressability, rather than an explicit performance increase from lower-latency 64-bit datatype operations. Also note the emphasis on the usefulness for database systems (OLTP), decision support systems, and high-performance technical computing. Finally, note how at the end it is made clear that programs that do not need 64-bit datatypes should be compiled with 32-bit datatypes even on 64-bit microprocessors.