Modelworks
Lifer
- Feb 22, 2007
It would be stupid to assume someone can't build a CPU with add-ons that would benefit physics processing and other operations currently slated for GPGPU. Not that I'm saying this is the best approach, but let's not write off the CPU because of what it currently can't do. As we build better and smaller CPUs, we can add more to them. Granted, the same will be true for GPUs.
I think we are about to come full circle with the CPU. Back in the late '80s and early '90s you had a lot of home computers that consisted of a microprocessor for things like program execution, with everything else handled by specialized processors. For example, the Amiga had a 68K processor, and the rest of the work was performed by multiple chips, each handling what it was best at. CPUs became faster, and for a while the idea was that if we needed to do something faster we would just make a faster general-purpose CPU, but that doesn't work so well once you can't make a single core faster. There is also a limit to how far you can divide the tasks in a program, so that's now two limits on how fast a program can run.
I think companies like ARM have the right idea. They know the core is not fast enough to do everything else and still decode Flash video and run Java. So they add hardware modules to the core that do nothing but Java or Flash. It's like adding more cores in the x86 world, but at lower cost, and it's more efficient at the task. It's also easier for developers, because they know the specialized module takes X amount of time on every system regardless of other variables. From a developer's perspective it's great: I can send a piece of work to the module knowing it will process it in .02 seconds every time, on every computer. I don't have to worry about threads, or cores, or what's in the pipeline.
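To make that developer-facing contract concrete, here is a minimal sketch of the idea in Python. The `Accelerator` class, its cycle count, and the request names are all invented for illustration; real fixed-function blocks (ARM's Jazelle for Java bytecode, for instance) expose vendor-specific interfaces, and the point here is only the contract: every request costs the same, regardless of what else the system is doing.

```python
# Toy model of offloading to a fixed-function module with
# deterministic latency, as described above. All names and the
# cycle constant are hypothetical.

class Accelerator:
    """Fixed-function unit: same cost per request, every time."""

    CYCLES_PER_REQUEST = 200_000  # invented constant latency

    def process(self, request):
        # Cost does not depend on system load, other threads,
        # or what else is in the pipeline -- that's the whole
        # appeal from the developer's perspective.
        return ("done", request, self.CYCLES_PER_REQUEST)

accel = Accelerator()
costs = {accel.process(req)[2] for req in ("decode_frame", "run_bytecode")}
print(costs)  # a single cost value: every request took the same cycles
```

Contrast this with the general-purpose path, where the same work item's latency varies with core count, scheduling, and pipeline state, and the developer has to reason about all three.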
x86 is used to doing everything on a general-purpose processor. I think it will have to move away from that or take a back seat to newer tech.