- Jun 21, 2003
Sorry, I don't work for nVidia, hence I have no way to disable all but one pipeline on one of their later model cards, downclock its core to 400 MHz, then find its TDP, minus the power required by the RAM. You do realize that with video cards, a considerable amount of their power is consumed by the RAM, don't you?

Originally posted by: Viditor
Please list for me the number of graphics cards at 400 MHz with a TDP well under 10 W...
Sorry, I didn't mean to confuse you. See, without a brain stem, you'll have no pulse and no respiration, among other things. That was my (bad, obviously) way of saying "anyone with a pulse." As for the percentage of power used by a video card's memory controller, it's low. The core (the GPU, as it's called) and VRAM use a minimum of 90% of the power the card consumes. But that doesn't mean the memory controller consumes the other ~10%; it means that all the other devices on the card, of which the memory controller is one, together consume that ~10%. My best guess for the memory controller itself would be ~5-7%, depending on the actual card in question, but that is just a somewhat educated guess.

Hmmm...firstly, the brain stem controls autonomic functions, not reasoning. So do you mean the term "drastic" would be nowhere in the description only for those for whom reason doesn't come into play?
Second, since it seems to be a subjective term, could you give your own guesstimate as to what degree YOU mean?
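The power-budget guess above can be put into back-of-envelope arithmetic. A minimal sketch -- the fractions are my own illustrative assumptions, not measurements of any real card:

```python
# Back-of-envelope split of a graphics card's power budget, using the
# rough percentages from the post (illustrative numbers, not measurements).
def power_split(card_tdp_w, core_plus_vram_frac=0.90, mem_ctrl_frac=0.06):
    """Return estimated watts for core+VRAM, memory controller, and the rest."""
    core_vram = card_tdp_w * core_plus_vram_frac   # the "minimum 90%" claim
    mem_ctrl = card_tdp_w * mem_ctrl_frac          # the "~5-7%" guess
    other = card_tdp_w - core_vram - mem_ctrl      # remaining glue logic
    return core_vram, mem_ctrl, other

# Example: a hypothetical 50 W card
core_vram, mem_ctrl, other = power_split(50.0)
print(f"core+VRAM: {core_vram:.1f} W, memory controller: {mem_ctrl:.1f} W, "
      f"other: {other:.1f} W")
```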
I'm not sure what they've been telling you guys in these shareholder meetings/phone calls, but that doesn't seem to be the case, at least not yet. BTW, "SuperComputers" are never one processor/chip. Well, they haven't been in at least a decade, and I highly doubt anyone would start trying to build a 1P supercomputer these days.

Did you think that the cache, memory controllers, signalling devices, and RAM were made of something other than transistors?
And I'm talking about a processor that is much closer to a SuperComputer on a chip...
So, why is it that all AMD processors now, including the latest/fastest Opterons, have only 512KB of L2 cache (per core, obviously)? Oh that's right, because it's easier to make it reach higher speeds, and slower L2 cache is also considerably cheaper. We all knew why they went to slower L2 the day they announced it-- less expense. By "we", I meant us hacks who build and overclock our own computers, at least those of us who've been around for a while.

1. The reason that Brisbane has a higher latency in its L2 cache is so that AMD can drastically increase cache sizes on it.
"AMD has given us the official confirmation that L2 cache latencies have increased, and that it purposefully did so in order to allow for the possibility of moving to larger cache sizes in future parts."
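The latency/size trade-off being argued here can at least be felt with a crude pointer-chasing sketch -- my own illustrative example, not from the thread. In CPython, interpreter overhead dominates, so the gap is far smaller than the hardware one (a C version shows it starkly), but the principle is the same: chase over a working set that spills past the cache and each dependent access gets slower.

```python
# Crude latency sketch: each hop depends on the previous one, so the loop is
# bound by access latency, not bandwidth. Sizes/timings are illustrative.
import random
import time

def chase(n, hops=200_000):
    """Time random pointer-chasing over a shuffled ring of n slots; ns/hop."""
    ring = list(range(n))
    random.shuffle(ring)          # random permutation -> unpredictable hops
    idx = 0
    t0 = time.perf_counter()
    for _ in range(hops):
        idx = ring[idx]           # each hop depends on the last one
    return (time.perf_counter() - t0) / hops

small = chase(1 << 10)   # working set likely cache-resident
large = chase(1 << 21)   # working set likely spills past L2
print(f"small set: {small*1e9:.1f} ns/hop, large set: {large*1e9:.1f} ns/hop")
```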
Umm, you don't seem to know much at all about video cards. Having to go to system RAM, whether it's from the CPU or from the video card, is extremely slow compared to a video card's onboard RAM. Add to that the fact that video cards have RAM that's much, much faster than system RAM, and it only compounds the problem.

2. A very large shared cache on Fusion could eliminate the need for the discrete caches found on graphics cards, as well as significantly reduce the memory latency for graphics inherent in moving on-die.
Only if you also have a drastic net reduction in performance. See above for the reasons why.

3. Since many of the parts required in a graphics card are already present in the CPU, when you combine and integrate the two, you have a drastic net reduction in power requirements.
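To put some rough numbers on the onboard-RAM-vs-system-RAM point: peak bandwidth is just clock x bus width x transfers per clock. The figures below are my own era-appropriate guesses (a 256-bit GDDR card vs. dual-channel DDR400), not the specs of any particular card:

```python
# Rough peak-bandwidth comparison; input figures are illustrative guesses.
def peak_bw_gb_s(clock_mhz, bus_bits, transfers_per_clock):
    """Peak theoretical bandwidth in GB/s."""
    return clock_mhz * 1e6 * (bus_bits / 8) * transfers_per_clock / 1e9

vram_bw = peak_bw_gb_s(500, 256, 2)   # e.g. 256-bit GDDR at 500 MHz, DDR
sys_bw  = peak_bw_gb_s(200, 128, 2)   # e.g. dual-channel DDR400
print(f"VRAM ~{vram_bw:.1f} GB/s vs system RAM ~{sys_bw:.1f} GB/s "
      f"(~{vram_bw/sys_bw:.1f}x), before even counting latency overhead")
```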
Umm, I'm assuming that you're just now finding out that whenever you use less vcore, a processor consumes less electricity? Add to that the fact that those are two different steppings (E4 vs. E6), and that AMD improved upon the (electrical) efficiency of the E6s, and it will all make sense to you. They obviously couldn't have two 89-watt cores sitting under one heatsink, which is why they had to improve upon the E6's efficiency.

Some more food for thought...
Compare the AMD San Diego 3700+ single core with the Toledo 4200+ dual core...both on 90nm SOI.
San Diego = 2200 MHz clockspeed, 105 million transistors, 89 W TDP
Toledo = 2200 MHz clockspeed, 155 million transistors (about 50% more!), 89 W TDP
BTW, the actual consumed power of a Toledo 4200 is higher than the consumed power of the San Diego 3700, assuming my memory hasn't completely failed me. I'll see if I can find a link to a system power test that a reputable site has done.
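The vcore point above is just the standard dynamic-power relation, P ~ C x V^2 x f: roughly 50% more switching transistors can still land at (or near) the same TDP if the core voltage drops, since power scales with voltage squared. A minimal sketch -- the voltages below are illustrative assumptions, not AMD's actual figures:

```python
# Relative dynamic power, P = k * C * V^2 * f. Inputs are illustrative.
def dynamic_power(rel_capacitance, vcore, freq_mhz, k=1.0):
    """Relative dynamic power for a given switched capacitance, vcore, clock."""
    return k * rel_capacitance * vcore**2 * freq_mhz

p_single = dynamic_power(1.0, 1.40, 2200)   # single-core baseline
p_dual   = dynamic_power(1.5, 1.35, 2200)   # ~50% more transistors, lower vcore
print(f"dual/single consumed-power ratio: {p_dual/p_single:.2f}")
```

With these made-up numbers the dual core still draws roughly 40% more than the single core despite the lower vcore, which is consistent with the point that actual consumed power can differ even when the rated TDP is identical.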
edit: Sorry, I didn't realize you had already addressed the TDP issue, with dmens.