128 Bit CPUs

Dec 30, 2004
12,553
2
76
and the Xbox 1 with its glorified 32-bit Celeron completely pooped all over the PS2 in terms of hardware capability

as others have explained, counting how many "bits" a console had died with the Nintendo 64, when it became a joke

It mattered from 8-bit to 16 and then to 32, but after that it was purely marketing PR that simply went too far

but dude, the Xbox 360 has 360 bits. actually it has 3 CPU cores though, so that's only 120 bits per CPU core, and they're dual threaded (crochet style), so it's only 60 bits per thread, which is still plenty. :colbert:
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
Even many 8-bit CPUs were capable of calculation beyond a byte with the addition of carry information. The distinction between 8/16/32/64-bit CPUs has become even less important as time has gone on, and now it's mostly meaningless. High-level languages have all gone and either defined the exact size of their primitives (like Java) or completely hidden the whole problem behind infinite number ranges (Ruby). Only the C guys still even care.

The memory addressable has been the 'bitness' for a while, but as rightly pointed out that isn't actually right, as 64-bit CPUs don't actually support a memory space that large in practice. All it really represents is the maximum size of a two's complement number handled in a single instruction, and the registers that go along with that. Which, while giving some indication of performance, doesn't really say much that is useful.
 

bronxzv

Senior member
Jun 13, 2011
460
0
71
If your program has and uses enough 64-bit pointers that the extra 4 bytes each make a difference, you will already have been facing major performance issues related to memory, and that will be the least of your worries

not really. You can have small data structures accessed in your hotspots that carry 64-bit pointers (all your pointers must be the same size in 64-bit mode). If these structures are full of pointers, as trees and linked lists are, the pointers in 64-bit mode can significantly increase your memory footprint and thus your cache miss ratio. To optimize such cases you typically replace the pointers with 2-byte or 4-byte indices into arrays for small structures, keeping 64-bit pointers for big areas (for example, array base addresses). The performance impact may be huge (>10% speedup)
 
Last edited:

Gundark

Member
May 1, 2011
85
2
71
It would actually likely make computers slower due to 128bit pointers.

Programs actually tend to grow in size when they are recompiled from 32-bit to 64-bit, because x64 needs 64 bits for pointers; pieces of the program are then less likely to fit in cache and thus more likely to be slower.

Is this true? Even when the size does increase when compiled in x64, I still think that in many cases the speedup is very noticeable. Also, an increase in memory footprint is not an issue anymore due to cheap and plentiful RAM. Also, there are quite a few examples where x64 is faster than SSE.
 

bronxzv

Senior member
Jun 13, 2011
460
0
71
Is this true? Even when the size does increase when compiled in x64, I still think that in many cases the speedup is very noticeable. Also, an increase in memory footprint is not an issue anymore due to cheap and plentiful RAM. Also, there are quite a few examples where x64 is faster than SSE.

64-bit code is roughly 10% bigger due to the REX encoding; this generally has no measurable effect on performance

on the other hand, 64-bit vs. 32-bit pointers change the data set size, not the code size. That can have a very sizable performance impact, since some data structures may become 1.5x bigger; it increases the cache miss ratio, and if these structures aren't fitting well in your last-level cache it will increase your memory bandwidth requirements. If you are memory bandwidth bound, the impact is huge

now, 64-bit code is generally faster thanks to the doubled number of logical registers; I have measured up to 8% speedup from 64-bit mode with AVX code, for example, and the average speedup is 6%
 
Last edited:

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
http://www.shell-storm.org/papers/files/768.pdf
Registers: 32-bit.
Address space: 32-bit.

I believe at the time Sega advertised it as being 128-bit, probably because the FP bus is 128 bits wide.

A 16/32/64/128 bit CPU?
The SH-4 is a multiple bit CPU.
Floating Point Bus 128-bits 3.2 GB/sec transfer rate from the data cache for matrix data aligned together as four separate 32-bit values like this: [32-bit][32-bit][32-bit][32-bit]
 
Last edited:

Mr. Pedantic

Diamond Member
Feb 14, 2010
5,027
0
76
None have been sent back in time to kill John Connor yet.

Seriously though, they don't exist for consumers because 32-bit can address 4 gigabytes of memory, and 64-bit CPUs are supposed to be able to address 16 exabytes, which is 17,179,869,184 gigabytes. As you can see, the need for 128-bit is a long way off.
For HPC I think the desire for 128-bit processors (and some HPC clusters have proper 128-bit CPUs) is due to a need for greater precision rather than a need for more memory.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I believe at the time Sega advertised it as being 128bit. Probably because the FP bus is 128bits wide.
The point is that by the same kind of metric we use, it was a 32-bit processor. Using the same kind of metric they used to market it, we would be calling the latest x86 CPUs 256-bit, and the OP's question could have been answered with a snide, "nearly six years ago." (edit: corrected, more like 12-13 years ago)

For HPC I think the desire for 128-bit processors (and some HPC clusters have proper 128-bit CPUs) is due to a need for greater precision rather than a need for more memory.
Practically any CPU today can do double-double fast enough to be useful, should more FP bits be desired over faster processing (though hardware double-double, or actual quadruple, would be preferred, and some specialized CPUs support one or the other directly, double-double being far more common).
 
Last edited:

intangir

Member
Jun 13, 2005
113
0
76
The point is that by the same kind of metric we use, it was a 32-bit processor. Using the same kind of metric they used to market it, we would be calling the latest x86 CPUs 256-bit, and the OP's could have been answered with a snide, "nearly six years ago."

All true, but in point of fact, x86 got 128-bit XMM registers with the original SSE instruction set in the Pentium 3, back in 1999. So, not just 6 years ago, but 12!

http://en.wikipedia.org/wiki/Pentium_III

Hm, introduced Feb 26, 1999. In another 6 weeks or so, it would be 13 years!
 

MrTeal

Diamond Member
Dec 7, 2003
3,908
2,665
136
A 64-bit address space is so much more than enough for the foreseeable future, and even if it weren't, addressing more runs into problems with physics, given how we currently manufacture memory.

Imagine even that you could store a byte of RAM in 1nm x 1nm: fitting 2^64 bytes of it onto a wafer would take 4.3 x 4.3 meters. If your 4.3 m x 4.3 m Si wafer was only 10um thick, just the mass of the silicon in 2^64 bytes of this tiny, impossible-to-produce DRAM would be about a pound.

It's like people wondering how long IPv6 will last, since we ran out of IPv4 addresses in a few decades. The difference is that IPv4 gives less than one unique IP address per person on Earth. IPv6 could assign nearly 5x10^28 unique addresses to every person on Earth. Or, to think of it another way, with IPv6 you could break the Earth up into 2^64 pieces of roughly 320 tonnes each, and still have 2^64 unique IPs for each piece.
 

Concillian

Diamond Member
May 26, 2004
3,751
8
81

Did you even read the specs on that page? The only thing that's 128 bit on that CPU is the bus width to the FPU. That's purely the width of the bus, which doesn't really mean anything concrete in terms of CPU and FPU performance.

If you call that 128 bit, then the SB-E platform memory bandwidth is 4x 64-bit = 256-bit, in the same way that the Dreamcast had a portion of its bus width at 4x 32-bit = 128-bit.
So, we've already passed 128 bit and are on 256 bit!!! Party!!! When's the 512 bit processor coming? More importantly, why has Intel missed this golden marketing opportunity? The public must know that SB-E is 256 bit, they clearly will spend more just because of it.

Too many people believe the crap the marketing people shovel at them.
 
Last edited:

Maximilian

Lifer
Feb 8, 2004
12,604
15
81
<3 how this thread went off the rails because nobody really knows what the OP means when he says 128 bit CPU.

I smashed my CPU into 128 pieces, does that count?
 

Schmide

Diamond Member
Mar 7, 2002
5,693
934
126
In terms of addressing, you're not going to be seeing 128bit CPUs ever.

The reason? Physical size.

A 128 bit number is on the order of 10^38.

The number of atoms in a carat of diamond is 10^22.

The difference is 10^16.

Since a carat weighs 0.2 grams, doing the math you get about 2 trillion kilos (or 2.2 billion tons) of diamond for which you can address every atom of it.

Considering modern battleships weigh in at 50k tons, you're looking at addressing the atoms of about 44 thousand of them. Give or take a few.

Good luck multiplexing all those address lines.
 
Last edited:

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Did you even read the specs on that page? The only thing that's 128 bit on that CPU is the bus width to the FPU. That's purely the width of the bus, which doesn't really mean anything concrete in terms of CPU and FPU performance.

If you call that 128 bit, then the SB-E platform memory bandwidth is 4x 64-bit = 256-bit, in the same way that the Dreamcast had a portion of its bus width at 4x 32-bit = 128-bit.
So, we've already passed 128 bit and are on 256 bit!!! Party!!! When's the 512 bit processor coming? More importantly, why has Intel missed this golden marketing opportunity? The public must know that SB-E is 256 bit, they clearly will spend more just because of it.

Too many people believe the crap the marketing people shovel at them.

Somebody swipe your Twinkie this morning?

http://letmegooglethat.com/?q=dreamcast+128+bit+cpu

At the time it was called a 128-bit CPU.
 

Absolution75

Senior member
Dec 3, 2007
983
3
81
Is this true? Even when the size does increase when compiled in x64, I still think that in many cases the speedup is very noticeable. Also, an increase in memory footprint is not an issue anymore due to cheap and plentiful RAM. Also, there are quite a few examples where x64 is faster than SSE.

Any increase in performance is likely due to the increased number of registers (2x) rather than the instructions actually being executed in 'x64' mode.

But as I said, ram is cheap, but you can't upgrade your CPU cache without a full CPU upgrade.


Also, I believe SSE2 is part of the baseline on x64 while on x86 it is not, so there could be some performance boost there if you aren't compiling for SSE by default. There are a lot of other things that affect performance, but check the benchmarks: there is very little difference between x64 and x86 builds in the majority of cases.

The biggest increase in performance with regards to x86/x64 is encoders, and this is probably due to the added required instructions and increased registers.
 
Last edited:

Concillian

Diamond Member
May 26, 2004
3,751
8
81
Somebody swipe your Twinkie this morning?

http://letmegooglethat.com/?q=dreamcast+128+bit+cpu

At the time it was called a 128-bit CPU.

The point is that this definition of 128 bit is different from what AMD or Intel would label a 128 bit CPU.

By the Dreamcast definition of 128 bit, then all modern CPUs are at least 128 bit CPUs, since they support dual channel memory (2x 64 bit channels = 128 bit).

If the OP understood what the console marketing meant by 128 bit then he would see that existing CPUs are already 128 bit, and LGA2011 CPUs could be considered to be 256 bit. By this definition of 128 bit, we have been using 128 bit CPUs for a long time... however comparing to what AMD or Intel call a 64 bit CPU is comparing apples and bongo drums.

If you compare apples to apples then either:
A dreamcast CPU is 32 bit and modern AMD / Intel CPUs are 64 bit
OR
a Dreamcast CPU is 128 bit and modern AMD / Intel CPUs are either 256 bit (LGA2011) or 128 bit (everything else).

It's all just semantics on the terminology.
 

bronxzv

Senior member
Jun 13, 2011
460
0
71
But as I said, ram is cheap, but you can't upgrade your CPU cache without a full CPU upgrade.

cache capacities haven't increased a lot lately, when you consider that 45 nm Wolfdale had a 6MB last-level cache and 22 nm Ivy Bridge will have an 8MB last-level cache

with the computation density offered by AVX, memory footprint optimizations are of paramount importance
 

LiuKangBakinPie

Diamond Member
Jan 31, 2011
3,903
0
0
there is currently no 128-bit CPU in the sense of having a 128-bit SISD ALU. In every other sense we've been there for years: 128-bit SIMD ALUs, 128-bit FPUs, and 128-bit registers have been in every x86 processor available, along with 128-bit data paths, and CPUs with 128-bit buses also exist
 
Jul 10, 2007
12,041
3
0
He is talking about instructions. Like how the N64 was a "64-bit" processor, the Dreamcast was a "128-bit" processor (and the PS2).

As far as I know, the "bits" of a system or its CPU was purely marketing gibberish. Like geebees.

more like the MHz wars of the early 2000s.
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
Considering modern battleships weigh in at 50k tons, you're looking at addressing the atoms of about 44 thousand of them. Give or take a few.

There is no such thing as a modern battleship. And yes, I'm being a dick :p

and the Xbox 1 with its glorified 32-bit Celeron completely pooped all over the PS2 in terms of hardware capability

as others have explained, counting how many "bits" a console had died with the Nintendo 64, when it became a joke

It mattered from 8-bit to 16 and then to 32, but after that it was purely marketing PR that simply went too far

As a single piece of silicon, I'd say the EE was more interesting than the P3-based X-CPU, and it certainly destroyed the X-CPU in vector-related GFLOPS. But of course, the VUs were primarily there to handle TnL along with physics and whatnot, and in unison they could outdo the Xbox's NV2A in straight-up polygon throughput. It would be amazing to see what the EE could do with a proper TnL-capable 3D GPU operating in unison, in terms of physics and environmental interaction, though I would think the Xbox's P3 would still be better at AI and branch prediction.
 
Last edited:

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
There is no such thing as a modern battleship. And yes, I'm being a dick :p



As a single piece of silicon, I'd say the EE was more interesting than the P3-based X-CPU, and it certainly destroyed the X-CPU in vector-related GFLOPS. But of course, the VUs were primarily there to handle TnL along with physics and whatnot, and in unison they could outdo the Xbox's NV2A in straight-up polygon throughput. It would be amazing to see what the EE could do with a proper TnL-capable 3D GPU operating in unison, in terms of physics and environmental interaction, though I would think the Xbox's P3 would still be better at AI and branch prediction.

What was interesting about the EE? It was just another typical RISC-based FLOP monster, like every other design out there. Not smart, just wide. And the VUs were essentially T&L units, just on the CPU instead of the GPU. VU0 wasn't terribly fast, but was extremely flexible (more so than the DirectX 8.1-level hardware the Xbox was sporting); VU1 was quite fast but not very flexible (probably closer to the fixed-function T&L unit of the GeForce 2). And from what I remember, the two were difficult (or nearly impossible?) to use together.

Still, at the time, VU0's performance about matched the fastest non-fixed-function T&L hardware available (Sega's boards used in Virtua Fighter 4), and VU1 about matched the GeForce 2, while the fillrate and bandwidth of the PS2's GPU were on par with the Radeon 9700 Pro, which wouldn't launch until a few years later. I'd imagine the two systems were separated more in performance by software support than hardware, because the PS2 was fairly beefy in quite a few individual areas; the sum of the parts just didn't seem to work out.
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,624
2,399
136
There is no such thing as a modern battleship. And yes, I'm being a dick :p

I know a bunch of navy people who would be willing to classify the Zumwalt-class destroyer as a BB. The consensus seems to be that it's called a DD only because BBs are presently the butt of the joke about following the old paradigm too long, when everyone should have been building aircraft carriers instead.

It's only 15k tons, which is a far cry from the Iowa, but it's still heavier than many of the pre-dreadnought battleships. It's certainly not a destroyer, because DDs are traditionally used for killing small ships, submarines, and aircraft, each of them a task that the Zumwalt does not do. Instead, the Zumwalt is meant for engaging surface and ground targets with missiles and guns. It only has two guns, but they are water-cooled, fully automated, and have fire rates good enough that it can land as many projectiles per minute on a target as the Iowa. The projectiles themselves are, of course, much lighter.
It's only 15k tons, which is a far cry from the Iowa, but it's still heavier than many of the Pre-DN battleships. It's certainly not a destroyer, because DD's are traditionally used for killing small ships, submarines and aircraft, each of them a task that the Zumwalt does not do. Instead, Zumwalt is meant for engaging surface and ground targets with missiles and guns. It only has 2 guns, but they are water-cooled, fully automated and have fire rates good enough that it can land as many projectiles per minute on the target as the Iowa. The projectiles themselves are, of course, much lighter.