Originally posted by: myocardia
Originally posted by: Viditor
It will never be close to that high...
For example, on-board chipset graphics like the Mobility 1150 run at 400 MHz and have a TDP well under 10w...can you think of any discrete card that has come close to that in the last 7 years?
Sure, take any of the lowest-performance video cards of the last few years and underclock them until they're down to the performance level of the Mobility 1150, and they'd all be ~10 watts.
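[A back-of-envelope sketch of why underclocking (and especially undervolting) cuts power so quickly: dynamic CMOS power scales roughly as C·V²·f. Every figure below is an illustrative assumption, not a measurement of any real card.]

```python
# Back-of-envelope: dynamic CMOS power scales as P ~ C_eff * V^2 * f.
# All numbers here are illustrative placeholders, not measured values.

def dynamic_power(c_eff_farads, volts, freq_hz):
    """Switched-capacitance model of dynamic power, in watts."""
    return c_eff_farads * volts**2 * freq_hz

# Hypothetical low-end discrete GPU at stock clocks and voltage.
stock = dynamic_power(25e-9, volts=1.2, freq_hz=500e6)   # ~18 W

# The same chip underclocked to 400 MHz and undervolted to 1.0 V.
slowed = dynamic_power(25e-9, volts=1.0, freq_hz=400e6)  # ~10 W

print(f"stock ~{stock:.0f} W, underclocked ~{slowed:.0f} W")
```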
Please list for me the number of graphics cards at 400 MHz with a TDP well under 10w...
When you integrate the GPU, it drastically changes the power requirements.
No, it doesn't. Would it lower it by some degree? Undoubtedly, but the word drastic would be nowhere in the description, at least by anyone with a functioning brain stem.
Hmmm...firstly, the brain stem controls autonomic functions and not reasoning. So do you mean the term "drastic" would be nowhere in the description only by those for whom reason doesn't come into play?
Second, since it seems to be a subjective term, could you give your own guesstimate as to what degree YOU mean?
Think about it...by integrating, you eliminate the need for another memory controller (like the one on the graphics card) and for the PCIe signalling device, and the distances you need to send any signal are measured in microns, not inches.
Memory controllers aren't what makes video cards power hogs; transistors are, at least that's where the majority of the power goes. BTW, it sure sounds to me like you're talking about a laptop with roughly the same performance as a K6-2 300/Celeron 300 (non-A), along with a 16MB TNT video card. If that's what you're talking about, of course its TDP could be as low as ~10 watts @ 45nm.
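[These two claims aren't actually in conflict: a memory controller or a PCIe PHY is itself made of transistors, and both switching and leakage power scale with transistor count. A toy model of what removing such duplicated blocks might save; every coefficient here is an assumed placeholder, not a datasheet number.]

```python
# Toy model: total chip power = dynamic (switching) + static (leakage).
# Both terms scale with transistor count, so a duplicated block like a
# second memory controller or a PCIe PHY carries its own share of both.
# All per-transistor coefficients are assumed placeholders.

def chip_power(transistors, activity, dyn_per_t_w, leak_per_t_w):
    """Rough total power (watts) for a block of transistors."""
    dynamic = transistors * activity * dyn_per_t_w
    static = transistors * leak_per_t_w
    return dynamic + static

# Hypothetical discrete-card extras that integration would eliminate.
mem_ctrl = chip_power(10e6, activity=0.1, dyn_per_t_w=2e-6, leak_per_t_w=2e-8)
pcie_phy = chip_power(5e6, activity=0.2, dyn_per_t_w=2e-6, leak_per_t_w=2e-8)

print(f"memory controller ~{mem_ctrl:.1f} W, PCIe PHY ~{pcie_phy:.1f} W")
```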
Did you think that the cache, memory controllers, signalling devices, and RAM were made of something other than transistors?
And I'm talking about a processor that is much closer to a supercomputer on a chip...
1. The reason that Brisbane has a higher latency in L2 cache is so that AMD can drastically increase cache sizes on it.
"AMD has given us the official confirmation that L2 cache latencies have increased, and that it purposefully did so in order to allow for the possibility of moving to larger cache sizes in future parts."
AT Article
2. A very large shared cache on Fusion could eliminate the need for the discrete caches found on graphics cards, as well as significantly reduce the memory latency for graphics inherent in moving to on-die (see the sketch after this list).
3. Since many of the parts required in a graphics card are already present in the CPU, when you combine and integrate the two you get a drastic net reduction in power requirements.
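[A minimal sketch of the latency-versus-size trade-off behind points 1 and 2, using the standard average-memory-access-time relation AMAT = hit time + miss rate × miss penalty. The cycle counts and miss rates are illustrative assumptions, not Brisbane's real numbers.]

```python
# AMAT = hit_time + miss_rate * miss_penalty (all in cycles).
# Illustrative numbers only; real Brisbane latencies/miss rates differ.

def amat(hit_cycles, miss_rate, miss_penalty_cycles):
    """Average memory access time in cycles."""
    return hit_cycles + miss_rate * miss_penalty_cycles

# Smaller, faster L2: low hit latency, but more misses go to main memory.
small_fast = amat(hit_cycles=12, miss_rate=0.10, miss_penalty_cycles=200)

# Larger, slower L2: a couple of extra hit cycles buy a lower miss rate.
large_slow = amat(hit_cycles=14, miss_rate=0.06, miss_penalty_cycles=200)

print(f"small/fast L2: {small_fast} cycles, large/slow L2: {large_slow} cycles")
# small/fast L2: 32.0 cycles, large/slow L2: 26.0 cycles
```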
Some more food for thought...
Compare the AMD San Diego 3700+ single core with the Toledo 4200+ dual core...both on 90nm SOI.
San Diego = 2200 MHz clockspeed, 105 million transistors, 89w TDP
Toledo = 2200 MHz clockspeed, 155 million transistors (about 50% more!), 89w TDP