Hello all,
We've been discussing this topic in a thread here in DC. Basically, I offered the idea that with the ever-expanding programmability and new floating-point capabilities of DX9-compatible video cards, we should soon be able to run some CPU-type tasks on our GPUs. One such application is SETI@Home, since it loves memory bandwidth (which graphics cards have a lot of) and is heavily floating-point biased. So my question is this: how hard would it be to code a program like SETI@Home to run most of its calculations on the GPU? In even simpler terms: how hard is it (or is it even possible) to turn a GPU into a CPU, either partially or totally? You can read through the linked thread to see some of the other questions asked.
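To make the idea a bit more concrete, here is a minimal sketch of the kind of data-parallel floating-point work I have in mind, written as a modern GPU compute kernel (the kernel name, array sizes, and dummy data are purely for illustration, and this is not actual SETI@Home code; SETI@Home's real work units involve FFTs and more, but the power-spectrum step below shows how each GPU thread could handle one element, much like a pixel shader handles one pixel):

    // power_spectrum.cu -- hypothetical sketch, not actual SETI@Home code.
    // Each GPU thread computes the power of one frequency bin from the
    // real and imaginary parts of an FFT output.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void powerSpectrum(const float* re, const float* im,
                                  float* power, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            power[i] = re[i] * re[i] + im[i] * im[i];
    }

    int main()
    {
        const int n = 1 << 20;               // 1M bins, arbitrary size
        size_t bytes = n * sizeof(float);

        float *re, *im, *power;
        cudaMallocManaged(&re, bytes);
        cudaMallocManaged(&im, bytes);
        cudaMallocManaged(&power, bytes);

        for (int i = 0; i < n; ++i) {        // fill with dummy data
            re[i] = 1.0f;
            im[i] = 2.0f;
        }

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        powerSpectrum<<<blocks, threads>>>(re, im, power, n);
        cudaDeviceSynchronize();

        printf("power[0] = %f\n", power[0]); // expect 5.0
        cudaFree(re); cudaFree(im); cudaFree(power);
        return 0;
    }

The point is just that any calculation you can break into many independent per-element operations maps naturally onto the GPU's parallel hardware, which is why the memory bandwidth and floating-point throughput of these cards looked so tempting to me.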
Thanks,
Swan