ModestGamer
- Jun 30, 2010
I have heard that 64 bit architecture does not outperform 32 bit because 32-bit applications do not take advantage of the oversized processors since they run under a backward compatibility mode. To take advantage of the 64-bit the old 32-bit apps should be rewritten or at least recompiled.
In other words, buying a 64-bit laptop, configured with Win 7-64, is just a waste of money if I don't buy 64-bit apps. Is that true?
In computing, word is a term for the natural unit of data used by a particular computer design. A word is simply a fixed sized group of bits that are handled together by the system. The number of bits in a word (the word size or word length) is an important characteristic of computer architecture.
The size of a word is reflected in many aspects of a computer's structure and operation; the majority of the registers in the computer are usually word sized, and the amount of data transferred between the processing part of the computer and the memory system, in a single operation, is most often a word. The largest possible address size, used to designate a location in memory, is typically a hardware word (in other words, the full-sized natural word of the processor, as opposed to any other definition used on the platform).
Modern computers usually have a word size of 16, 32 or 64 bits, but many other sizes have been used, including 8, 9, 12, 18, 24, 36, 39, 40, 48 and 60 bits. Several of the earliest computers used the decimal base rather than binary, typically having a word size of 10 or 12 decimal digits, and some early computers had no fixed word length at all.
The size of a word is sometimes defined to be a particular value for compatibility with earlier computers. The most common microprocessors used in personal computers (for instance, the Intel Pentiums and AMD Athlons) are an example of this; their IA-32 architecture is an extension of the original Intel 8086 design, which had a word size of 16 bits. The IA-32 processors still support 8086 (x86) programs, so the meaning of "word" in the IA-32 context was kept the same and is still said to be 16 bits, despite the fact that they at times (especially when the default operand size is 32 bits) operate largely like a machine with a 32-bit word size. Similarly, in the newer x86-64 architecture a "word" is still 16 bits, although 64-bit ("quadruple word") operands may be more common.
Now this only matters if you have a CPU that can execute a 64-bit instruction as fast as a 32-bit instruction, i.e. it can do more work per word. This can improve computer efficiency.
http://en.wikipedia.org/wiki/Word_(computing)
You also get better floating-point performance and more address space. There are a whole host of differences.
The big issue, again, is poor OS implementation and sloppy code that doesn't take advantage of these features often enough.
To answer your question: properly coded software using certain instruction groups can in fact be faster than 32-bit or 16-bit code, and so on ad nauseam. It just depends on the CPU pipeline and execution design.
