I bet people said the same thing when the Pentium was first made... "what in the world would you need THAT much processing power for?"
I think the better analogy would be the introduction of the 386. "Who in the world would need to use 32-bits?"
On the same note, if history repeats itself, we won't see much benefit from 64-bit for another 5-10 years. That's actually some time after Longhorn (2005), but I wouldn't be surprised if Longhorn got delayed. Again. And again. And again. Goddamnit...
So you don't agree with Moore's Law huh?
Heck, even I don't agree with Moore's Law. I'm more inclined to think computer performance follows a population growth model for a closed population (basically a logistic curve). The only difference is that the boundary conditions keep changing thanks to advances in manufacturing processes.
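Just to make the analogy concrete, here's a toy C sketch (my own illustration, nothing rigorous, made-up numbers): logistic growth where the ceiling gets bumped every so often, the way a new process node raises the limit.

/* Toy sketch of the analogy: logistic ("closed population") growth,
 * except the carrying capacity K jumps every few steps to stand in
 * for a new manufacturing process raising the ceiling.
 * The numbers are made up and purely illustrative. */
#include <stdio.h>

int main(void)
{
    double perf = 1.0;   /* stand-in for "performance"           */
    double K    = 10.0;  /* current ceiling (carrying capacity)  */
    double r    = 0.5;   /* growth rate                          */

    for (int year = 1; year <= 30; year++) {
        if (year % 10 == 0)
            K *= 4.0;    /* new process node: the ceiling moves up */

        perf += r * perf * (1.0 - perf / K);   /* one logistic step */
        printf("year %2d: perf %6.2f  (ceiling %.0f)\n", year, perf, K);
    }
    return 0;
}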
Better AI requires better models of human behavior, not bigger numbers.
That may be true, I don't know. But doesn't it make sense that more advanced models of human behavior MIGHT require more computing power?
More computing power doesn't always equate with more number space.
The Pentium was the improved processor, not the one that changed the whole thing, right?
If I remember correctly, Pentium was the superscalar version of the 486.
an increased number of registers, which in turn requires less use of cache memory and system memory.
Sort of correct. Registers are only used to do calculations; they're not for storing data the way cache and RAM are. I've programmed in assembly, and infinite registers aren't the solution, but 32-bit x86 did need more registers. Maybe we could even eliminate the stack altogether. If you don't know, the stack is an area of RAM where you can hold many items stacked one on top of the other. Slower than a register, but a failsafe.
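Here's a minimal sketch of the idea (just to illustrate the behavior, not how the CPU actually implements it): push things on top, pop them back off in reverse order.

/* Minimal stack sketch: values are "pushed" on top and "popped"
 * back off in reverse order (last in, first out). The real CPU
 * stack lives in RAM and is tracked by the stack pointer register,
 * but the behavior is the same. Illustrative only. */
#include <stdio.h>

#define STACK_SIZE 16

static int stack[STACK_SIZE];
static int top = 0;                  /* index of the next free slot */

static void push(int value) { stack[top++] = value; }
static int  pop(void)       { return stack[--top];  }

int main(void)
{
    push(10);
    push(20);
    push(30);
    printf("%d\n", pop());           /* prints 30 */
    printf("%d\n", pop());           /* prints 20 */
    printf("%d\n", pop());           /* prints 10 */
    return 0;
}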
I think you sort of contradicted yourself there. Depending on the code, more registers may equate to less use of main memory. Currently, of the 8 GPRs on x86, something like 4 are actually available for calculations. Plus, you have the problem of x86 being a 2-operand ISA, which means more registers let you keep a copy of the first source operand around rather than fetching it from memory again. However, over the long run, I don't think that's much of a benefit.
On the other hand, MIPS, which every engineer seems to learn these days, has 32 registers, of which something like 15 or 20 are used for calculations. It's also a 3-operand ISA. For some reason (marketing) MIPS doesn't do quite as well as x86.
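To make the 2-operand vs. 3-operand point concrete, here's a trivial C function with the likely register-level behavior sketched in the comments. The exact instructions are up to the compiler, so treat this as an illustration, not real compiler output.

/* Illustration of 2-operand (x86) vs. 3-operand (MIPS) form.
 * The C is real; the assembly lives only in the comments because
 * the actual output depends on the compiler. */
int sum_and_keep(int b, int c)
{
    int a = b + c;
    /* 3-operand (MIPS):  add $t0, $t1, $t2   # t0 = t1 + t2, sources kept */
    /* 2-operand (x86):   mov eax, ebx        # copy b first, because the  */
    /*                    add eax, ecx        # destination is also a      */
    /*                                        # source operand             */
    /* With only 8 GPRs, that extra copy is more likely to force a spill
     * to memory if b is still needed later; with more registers it isn't. */
    return a;
}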
weather simulation systems such as the Earth Simulator are more focused on floating point capabilities than integer
You're gonna need that register width for better precision instead of having to round it up.
If weather simulators are almost entirely floating point, then why would they even need to touch the integer registers? Floating point calculations are done with floating point registers, which have been anywhere from 64 to 128 bits wide since the early '80s. Better precision would mean 256-bit floating point. Integer width is unrelated, except when you're doing integer/floating point conversions, and even then 12345.6 rounds to 12346 whether the target is 32-bit or 64-bit.
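A quick C sketch of that point: the integer width makes no difference to the converted value, while the floating point format is what actually sets the precision.

/* Converting a floating point value to an integer gives the same
 * answer whether the target is 32-bit or 64-bit; wider integers buy
 * range, not precision. Extra precision comes from a wider FP format
 * (float vs. double), not from wider integer registers. */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

int main(void)
{
    double value = 12345.6;

    int32_t as_i32 = (int32_t)llround(value);
    int64_t as_i64 = (int64_t)llround(value);
    printf("32-bit: %d   64-bit: %lld\n", (int)as_i32, (long long)as_i64);
    /* prints: 32-bit: 12346   64-bit: 12346 */

    float  f = 0.1f;   /* roughly 7 significant decimal digits       */
    double d = 0.1;    /* roughly 15 or 16 significant decimal digits */
    printf("float : %.17f\n", f);
    printf("double: %.17f\n", d);
    return 0;
}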
A64 motherboards typically have 3 DIMM slots. With 1GB DIMMs that tops out at 3GB; to exceed 4GB of memory you'd need 2GB DIMMs, which may or may not work with current motherboards. More RAM is better, but the rate of increase isn't so great that it would be necessary even for power gamers in the next three years.
The rate of increase is definitely fast. I remember having 128MB of RAM in late 2000, and that was a decent amount. Now 1GB is what I recommend for gamers and multimedia people, not counting editors, who might need a lot more. That's a pretty big increase in a couple of years.
Funny you should mention 2000. The computer industry of the late 20th century and the one today are quite different. Back then, even spreadsheets and word processors could benefit from extra processing power. What has changed is that the majority of the user base is now quite satisfied with the current level of performance and sees literally no benefit in upgrading. That's a major change.
Better AI requires better models of human behavior, not bigger numbers.
I'm sorry, but that's the dumbest thing I've ever heard, since computers talk in numbers. Bigger numbers mean more options, more precision, more detailed information.
No, integers don't get more precision with more bits, only bigger numbers. The exception is if you're bound to the same number range (for example, fixed point where the full scale is 1.0, so 2^32 and 2^64 both represent 1.0 and the extra bits just make the steps finer). Depending on the AI routine, bigger numbers could mean faster, better AI or a seriously bloated piece of software. From what I understand of AI, the goal is to increase performance by either doing each calculation faster or doing more calculations in parallel. My bet is on massively parallel, and it seems Intel would agree.
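For what it's worth, here's a small C sketch of that range-vs-precision distinction (the fixed point mapping is just an example):

/* More integer bits give you bigger numbers, not finer ones: the
 * spacing between neighboring integers is still exactly 1. Only if
 * both widths are pinned to the same range (fixed point, where the
 * full scale represents 1.0) do the extra bits buy smaller steps. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Range grows with width, spacing stays at 1. */
    printf("32-bit max: %lu\n",  (unsigned long)UINT32_MAX);
    printf("64-bit max: %llu\n", (unsigned long long)UINT64_MAX);

    /* Fixed point: map the whole range onto 1.0 and the step shrinks. */
    double step32 = 1.0 / 4294967296.0;             /* 2^-32 */
    double step64 = 1.0 / 18446744073709551616.0;   /* 2^-64 */
    printf("smallest step, 32-bit fixed point: %g\n", step32);
    printf("smallest step, 64-bit fixed point: %g\n", step64);
    return 0;
}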
Does anyone know what the difference is going to be? Since 64-bit can calculate larger numbers, wouldn't that imply better graphics?
Graphics, I don't think so; all the graphics calculations are handled by the GPU. But AI will definitely be improved, as well as physics. More precision is the advantage there, and keeping performance up while you have that precision is the advantage of more registers.
If I understand correctly, graphics hardware does rendering only. It doesn't do movement or positioning, or determine base colors. In other words, graphics processors work on data the CPU has already set up. At the very least, the CPU always has to load your data.
It's just too bad AMD couldn't split the 64-bit registers in half to make more 32-bit registers and use renaming to make use of them in current 32-bit software.
Not this late in the 32-bit arena; why bother changing them? They have to be coded for, remember.
If I remember correctly, x86-64 supports a 32-bit mode that allows use of the extended registers. Obviously, you'd be using 32-bit numbers in 64-bit registers, but the performance enhancement is there.
Not registers that can be renamed by the CPU... Intel already does this, I believe. Not sure about AMD, but I'm 95% sure I read that Intel has the normal 8 GPRs plus some additional ones that can be renamed (no, I'm not talking about SSE2 registers).
I believe register renaming is used for pipelining; it removes false dependencies inside the CPU rather than exposing extra registers to software.
AMD and Intel both use much of the same technology. The main differences between the Athlon and the Pentium 4 lie in their original microarchitecture design goals. Intel could easily design a processor to match the Athlon in CPI, much the same as AMD could just as easily design a superpipelined CPU.