I was just reading about current graphics cards on Anandtech, and one question came to my mind: why are current GPU clock speeds so much lower than current CPU clock speeds?
For example, the fastest processors from AMD run at about 2.6 GHz, and those from Intel at nearly 4 GHz. Compared to that, the chips from ATI or Nvidia run at only 400-600 MHz.
I took a course in Digital Circuit Design, and I know that you can build slow and fast transistors and that it depends very much on the process technology you use. But I can't imagine that the process technology ATI and Nvidia (or the foundries that produce their chips) use differs that much from what AMD and Intel are using.
Another point could be power consumption, which depends strongly on clock speed and on the number of transistors. But the transistor counts of CPUs and GPUs are of the same order of magnitude.
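Just to make that power argument concrete, the textbook first-order relation for dynamic CMOS power is P ≈ α·C·V²·f, so power scales linearly with clock frequency (and roughly quadratically with the voltage needed to reach it). Here is a tiny back-of-envelope sketch; every number in it is a made-up ballpark figure of mine, not data from any real chip:

def dynamic_power(switched_capacitance_f, voltage_v, frequency_hz, activity=0.1):
    """Classic first-order CMOS dynamic power: P = alpha * C * V^2 * f."""
    return activity * switched_capacitance_f * voltage_v**2 * frequency_hz

# Hypothetical "CPU-like" chip: high clock, moderate switched capacitance.
cpu_w = dynamic_power(switched_capacitance_f=1e-7, voltage_v=1.4, frequency_hz=3.8e9)

# Hypothetical "GPU-like" chip: more transistors switching (higher C),
# but a much lower clock and voltage keep the power in a similar range.
gpu_w = dynamic_power(switched_capacitance_f=5e-7, voltage_v=1.2, frequency_hz=0.5e9)

print(f"CPU-like: {cpu_w:.0f} W, GPU-like: {gpu_w:.0f} W")

The point of the sketch is only that a chip with more switching transistors can stay inside a similar power budget if it runs at a lower clock and voltage.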
Just some thoughts of mine; I would like to hear yours! Maybe we can find an answer to this question.
later,
Ron