sort of a newb question regarding basic CPU tech...

Turbonium

Platinum Member
Mar 15, 2003
2,157
82
91
So we have basically nothing but 64-bit (x64) CPUs nowadays, replacing the previous generation of 32-bit (x86) CPUs.

What determines what instruction set CPUs and software use for a given timeframe? I mean, why not just skip straight to 128-bit for example?

Go ahead and be technical if you like.

The only thing I can think of to explain it is that our ability to pack "parts" onto a CPU (i.e. our current technology) literally limits the instruction sets we are capable of dealing with on consumer-level CPUs. Also: once the hardware determines the instruction set capabilities, software settles in and invests in it, and so it sits for a generation. Then later, we get better tech and can pack more stuff onto the same size CPU, and it reaches a point where the industry as a whole shifts over, software and all.

Am I even remotely close?
 

anongineer

Member
Oct 16, 2012
25
0
0
I view 64-bit instruction sets primarily in terms of how memory is addressed, and how much of it can be accessed. The ability to do 64-bit integer arithmetic, or store a 64-bit double-precision floating point value in a register, is a bonus.

As mentioned in that 64-bit Wikipedia article, memory got inexpensive, and being able to view it as (mostly?) flat, without segment registers and extended memory modes, made life easier on software developers. 64-bit addressing sidesteps the 3 GB barrier caused by other memory-mapped devices eating into addresses for physical memory. The expanded range of addresses also makes ASLR more robust.

SSE does offer 128-bit registers (and AVX goes wider), but that's SIMD packed arithmetic, I think. It's difficult to make a case for a general purpose 128-bit instruction set though. Only in ultra high-precision simulation and modeling (LHC?) might you want to do arithmetic with that many bits. We also haven't crammed so much memory into systems that we're pushing 64-bit addressing limits.
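
If it helps to see the distinction, here's a minimal sketch of packed arithmetic, assuming a compiler that ships the standard SSE intrinsics header (nothing vendor-specific). The 128-bit register holds four 32-bit floats side by side; it is not doing one 128-bit calculation:

Code:
#include <emmintrin.h>  /* SSE/SSE2 intrinsics */
#include <stdio.h>

int main(void)
{
    /* One 128-bit XMM register = four packed 32-bit floats. */
    __m128 a   = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
    __m128 b   = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);
    __m128 sum = _mm_add_ps(a, b);  /* four independent additions in one instruction */

    float out[4];
    _mm_storeu_ps(out, sum);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);  /* 11 22 33 44 */
    return 0;
}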

And you're right that implementation gets trickier with increasing numbers of bits. Area goes up, hitting a target clock speed gets harder, holding the line on power gets harder, yields may take a hit initially. All of these challenges were present when going from 32 to 64, but the costs were deemed worth it for more memory, easier programming, and better security. I don't think there would be nearly as enthusiastic a reception for, say, an x86-128.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
So we have basically nothing but 64-bit (x64) CPUs nowadays, replacing the previous generation of 32-bit (x86) CPUs.

What determines what instruction set CPUs and software use for a given timeframe? I mean, why not just skip straight to 128-bit for example?
Data becomes larger, instructions typically become larger, and on-chip buses and registers must either get wider or more complicated.

For example, most x86-64 code is ~10-15% larger than fast IA32 code (optimized for P6 or newer; the size difference varies, just not as much as for data), most pointers get bigger (4 -> 8 bytes), and each item on the stack typically gets bigger (4 -> 8 bytes). The increased sizes can mean greater cache pressure, reducing the performance of some code.
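
A quick way to see the pointer/stack growth for yourself (just a sketch; the exact sizes depend on the compiler and ABI, e.g. long stays 4 bytes on 64-bit Windows):

Code:
#include <stdio.h>

int main(void)
{
    /* Typical IA32 build: 4, 4, 4.  Typical x86-64 (LP64) build: 8, 8, 8. */
    printf("sizeof(void *) = %zu\n", sizeof(void *));
    printf("sizeof(size_t) = %zu\n", sizeof(size_t));
    printf("sizeof(long)   = %zu\n", sizeof(long));
    return 0;
}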

Once we got to being able to make good use of 1-2 GB of RAM, and >=4 GB files, we really needed to get beyond 32 bits. That's one of the sleeper issues, and it was typically only a show-stopper for audio people and server people (except, of course, that they didn't consider Itanium to be the best solution :)). If there is enough physical RAM for all the memory in the OS kernel to be mapped from/to user processes, memory management is a ton easier. Over time, using files, messaging, and shared memory has become more common as memory and CPUs have become cheaper. If there is much less memory to do that with, the kernel has to unmap and remap all the time, which wastes time that your application(s) could be using. With current x86-64 CPUs/OSes, we've got until about 2 PB of RAM before we run into that kind of issue again.
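
On the >=4 GB file point, here's a minimal POSIX sketch (the file name is made up, and _FILE_OFFSET_BITS only matters on 32-bit builds) showing the kind of offset that simply doesn't fit in a 32-bit off_t:

Code:
#define _FILE_OFFSET_BITS 64  /* ask for a 64-bit off_t even on a 32-bit build */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    off_t five_gib = (off_t)5 * 1024 * 1024 * 1024;    /* well past the 4 GiB mark */
    int fd = open("big.bin", O_RDWR | O_CREAT, 0644);  /* hypothetical file name */
    if (fd < 0) return 1;

    /* Seek beyond the 32-bit limit and write one byte -> sparse ~5 GiB file. */
    if (lseek(fd, five_gib, SEEK_SET) == (off_t)-1) return 1;
    if (write(fd, "", 1) != 1) return 1;

    close(fd);
    return 0;
}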
 

bononos

Diamond Member
Aug 21, 2011
3,928
186
106
So we have basically nothing but 64-bit (x64) CPUs nowadays, replacing the previous generation of 32-bit (x86) CPUs.

What determines what instruction set CPUs and software use for a given timeframe? I mean, why not just skip straight to 128-bit for example?

Go ahead and be technical if you like.

The only thing I can think of explaining it is that our ability to pack "parts" onto CPU (i.e. our currently technology) .......
We are already at 128-bit or even 256-bit computing, as the links in post #2 show, so it's not a matter of being able to pack more hardware into a small amount of space. The main impetus for 64-bit was getting over the 4 GB memory barrier.
 

BenchPress

Senior member
Nov 8, 2011
392
0
0
So we have basically nothing but 64-bit (x64) CPUs nowadays, replacing the previous generation of 32-bit (x86) CPUs.

What determines what instruction set CPUs and software use for a given timeframe? I mean, why not just skip straight to 128-bit for example?
We already have support for 256-bit (vector) data types. But we still call these CPUs 64-bit because that's the maximum size of memory addresses.

With 32-bit, we were limited to 4 gigabyte of memory. But with 64-bit, up to 16 exabyte could be addressed in theory. Even with an optimistic growth in memory capacity, that's enough till the end of this century. So there really isn't any need to look beyond 64-bit in our lifetime. In fact internally today's CPUs are limiting the address calculations to 48 or 52 bit. That can gradually be increased if the need arises, but even that won't happen for several decades. Keeping things more narrow for now makes it faster, consume less power, and slightly cheaper.