AMD Announces High-Performance Chip Set


myocardia
Diamond Member · Joined Jun 21, 2003
Originally posted by: Viditor
Please list for me the number of graphics cards at 400 MHz with a TDP well under 10W...

Sorry, I don't work for nVidia, hence I have no way to disable all but one pipeline on one of their later model cards, downclock its core to 400 MHz, then find its TDP, minus the power required by the RAM. You do realize that with video cards, a considerable amount of their power is consumed by the RAM, don't you?

Hmmm...firstly, the brain stem controls autonomic functions, not reasoning. So do you mean the term "drastic" would be absent from the description only for those for whom reason doesn't come into play? :)

Second, since it seems to be a subjective term, could you give your own guesstimate as to what degree YOU mean?

Sorry, I didn't mean to confuse you. See, without a brain stem, you'll have no pulse and no respiration, among other things. That was my (bad, obviously) way of saying "anyone with a pulse". As far as the percentage of power used by a video card's memory controller, it's low. The core (GPU, as it's called) and VRAM use a minimum of 90% of the power that the card consumes. But that doesn't mean the memory controller consumes the remaining ~10%; it means that all of the other devices on the card, of which the memory controller is one, consume that ~10%. My best guess for the memory controller itself would be ~5-7%, depending on the actual card in question, but that is just a somewhat educated guess.
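To put rough numbers on that, here's a minimal back-of-the-envelope sketch in Python. The total wattage and the exact shares are hypothetical, chosen only to illustrate the split described above, not measurements of any real card:

    # Back-of-the-envelope split of a video card's power budget.
    # All numbers are hypothetical illustrations, not measurements.
    card_power_w = 50.0            # total board power under load (made up)
    gpu_and_vram_share = 0.90      # core + VRAM: a minimum of ~90%, per the post
    mem_controller_share = 0.06    # a guess within the ~5-7% range above

    print(f"GPU + VRAM:        {card_power_w * gpu_and_vram_share:.1f} W")        # 45.0 W
    print(f"Everything else:   {card_power_w * (1 - gpu_and_vram_share):.1f} W")  # 5.0 W
    print(f"Memory controller: {card_power_w * mem_controller_share:.1f} W")      # 3.0 W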

Did you think that the cache, memory controllers, signalling devices, and RAM were made of something other than transistors?
And I'm talking about a processor that is much closer to a SuperComputer on a chip...

I'm not sure what they've been telling you guys in these shareholder meetings/phone calls, but that doesn't seem to be the case, at least yet. BTW, "SuperComputers" are never one processor/chip. Well, they haven't been in at least a decade, and I highly doubt anyone would start trying to build a 1P supercomputer these days.

1. The reason that Brisbane has a higher latency in L2 cache is so that AMD can drastically increase cache sizes on it.
"AMD has given us the official confirmation that L2 cache latencies have increased, and that it purposefully did so in order to allow for the possibility of moving to larger cache sizes in future parts."
AT Article

So, why is it that all AMD processors now, including the latest/fastest Opterons, have only 512KB of L2 cache (per core, obviously)? Oh, that's right: because slower L2 is easier to make able to reach higher speeds, and it's also considerably cheaper. We all knew why they went to slower L2 the day they announced it-- less expense. By "we", I meant us hacks who build and overclock our own computers, at least those of us who've been around for a while.
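If you want to see that trade-off in numbers, the standard average-memory-access-time formula makes it concrete. A minimal Python sketch; the cycle counts and miss rates below are made up for illustration, not real K8 figures:

    # AMAT = L1 hit time + L1 miss rate * (L2 hit time + L2 miss rate * memory penalty)
    def amat(l1_hit, l1_miss_rate, l2_hit, l2_miss_rate, mem_penalty):
        return l1_hit + l1_miss_rate * (l2_hit + l2_miss_rate * mem_penalty)

    # Faster-but-small L2 (Windsor-style); all numbers hypothetical:
    fast_small = amat(3, 0.05, 12, 0.30, 150)   # 5.85 cycles
    # Slower L2 (Brisbane-style), which only pays off if a bigger cache cuts misses:
    slow_big = amat(3, 0.05, 14, 0.20, 150)     # 5.20 cycles
    print(fast_small, slow_big)

The slower cache only comes out ahead if a larger size actually drops the miss rate; at the same 512KB, it's just slower, which is the point above.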

2. A very large shared cache on Fusion could eliminate the need for the discrete caches found on graphics cards as well as significantly reduce the memory latency for graphics inherent with moving to on-die.

Umm, you don't seem to know much at all about video cards. Having to go to system RAM, whether it's the CPU or the video card doing it, is extremely slow compared to a video card's onboard RAM. Add to that the fact that video cards have RAM that's much, much faster than system RAM, and the problem only compounds.
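A rough Python sketch of the gap; the bandwidth and per-frame traffic figures are ballpark guesses for hardware of this era, purely illustrative:

    # Why spilling graphics traffic to system RAM hurts: raw transfer time alone.
    frame_data_mb = 128        # texture + framebuffer traffic per frame (hypothetical)
    vram_gb_s = 32.0           # e.g. GDDR3 on a 256-bit bus (ballpark)
    system_ram_gb_s = 6.4      # dual-channel DDR2-400, which the CPU also shares

    for name, bw in [("VRAM", vram_gb_s), ("system RAM", system_ram_gb_s)]:
        ms = frame_data_mb / 1024 / bw * 1000
        print(f"{name}: {ms:.2f} ms just moving the data")   # ~3.9 ms vs ~19.5 ms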

3. Since many of the parts required in a graphics card are already present in the CPU, when you combine and integrate the 2 you have a drastic net reduction in power requirements.

Only if you also have a drastic net reduction in performance. See above for the reasons why.

Some more food for thought...
Compare the AMD San Diego 3700+ single core with the Toledo 4200+ Dual core...both on 90nm SOI.

San Diego = 2200 MHz clock speed, 105 million transistors, 89W TDP
Toledo = 2200 MHz clock speed, 155 million transistors (about 50% more!), 89W TDP

Umm, I'm assuming that you're just now finding out that whenever you use less vcore, a processor consumes less electricity? Add to that the fact that those are two different steppings (E4 vs E6), and that AMD improved upon the (electrical) efficiency of the E6s, and it will all make sense to you. They obviously couldn't have two 89-watt cores sitting under one heatsink, which is why they had to improve upon the E6's efficiency.
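The vcore point follows from the usual first-order rule that dynamic power scales with C × V² × f, so a small voltage drop buys an outsized saving. A quick Python sketch; the voltages are illustrative, not actual E4/E6 vcore specs:

    # Dynamic power ~ C * V^2 * f, so power scales with the square of vcore.
    def relative_power(v_new, v_old, f_new=1.0, f_old=1.0):
        return (v_new / v_old) ** 2 * (f_new / f_old)

    # Hypothetical: dropping from 1.40 V to 1.30 V at the same 2200 MHz
    print(f"{relative_power(1.30, 1.40):.0%} of the original dynamic power")  # 86%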

BTW, the actual consumed power of a Toledo 4200 is higher than the consumed power of the San Diego 3700, assuming my memory hasn't completely failed me. I'll see if I can find a link to a system power test that a reputable site has done.

edit: Sorry, I didn't realize you had already addressed the TDP issue, with dmens.
 

dmens
Platinum Member · Joined Mar 18, 2005
Originally posted by: Viditor
As to my explanation, I'll try to use a car analogy to make it simpler...
If you have a car with a V6 and add a second V6 engine to make it a V12, the combined fuel use and power/weight ratio doesn't double. Because the two engines are sharing a single chassis and body, the efficiency of the engines for performance is increased.

In this case, because the hybrid chip will utilize shared components such as the cache, RAM, signaling circuits, and memory controller with the CPU, the combination will have a net reduction in power usage and heat for the system.

yeah, but like you said, TDP doesn't mean actual power consumption. it is a recommended thermal design specification. so your car analogy has zero relevance to TDP.

also, the more complicated the design, the more likely there is to be a bigger gap between the TDP and the maximum power scenario.
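One way to picture that distinction: TDP is a cooling budget handed to the heatsink designer, while actual draw moves with the workload. A tiny Python sketch with invented numbers:

    # TDP is a thermal design target, not a power-meter reading.
    tdp_w = 89.0
    measured_w = {"idle": 22.0, "gaming": 61.0, "synthetic max": 84.0}  # made up

    for workload, watts in measured_w.items():
        print(f"{workload}: {watts:.0f} W ({watts / tdp_w:.0%} of TDP)")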
 

DrMrLordX
Lifer · Joined Apr 27, 2000
Originally posted by: myocardia

First, I thought it was somewhat obvious that I meant something other than a forum post.

"Anyone" does cover forum posters, so no, it wasn't. At this point every leaked Phenom benchmark seems to have been run on a B1 revision core, and tech journalists are saying nothing about which core revision will be showing up as Phenoms in a few days, so I consider Pederv's post to be logical at the very least.

Let me try again: do you have any links with that same speculation, written by anyone who gets their paycheck for writing about or testing computer hardware?

No, unless you count Viditor who seems to think they'll all be B2 cores in '07 and B3 cores by January of '08 (provided things go well). I'd like to think he's correct, and for AMD's sake I hope that he is, but until someone gets their hands on some Phenom chips through retail channels we'll still be in the dark.

I actually emailed TankGuys asking them about the core revision on the X4-9500s they have available for pre-order and have not yet received a reply. Maybe I should have PMed him here on AnandTech instead . . .