AMD Announces High-Performance Chip Set


Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
Originally posted by: Phynaz
The only problem is you contradict yourself.

You first link to a slide that says mobile and mainstream, and then you talk about a 2900xt.

2900xt is neither mobile nor mainstream.

But anyway, I meant how about proof that putting something on a cpu die automatically lowers its power consumption.

Transistors are transistors, no matter where they reside.

BTW, the first version of fusion, if it ever happens, will be an MCM.

As far as "expected" total TDP...Expected by whom?

Note my sig, AMD has been "expecting" many things.

Me loves the sig...there is no denying AMD did nothing short of blowing smoke up the industry's proverbial ass regarding performance/power expectations of Barcelona.

Now they have pie on their face, lost their VP of marketing (hmmm, surely a coincidence) and are quite quiet about Phenom performance/power expectations.

Everything AMD has done this year is confidence building...if you own INTC. (which I don't)
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: Idontcare
Originally posted by: Phynaz
The only problem is you contradict yourself.

You first link to a slide that says mobile and mainstream, and then you talk about a 2900xt.

2900xt is neither mobile nor mainstream.

But anyway, I meant how about proof that putting something on a cpu die automatically lowers its power consumption.

Transistors are transistors, no matter where they reside.

BTW, the first version of fusion, if it ever happens, will be an MCM.

As far as "expected" total TDP...Expected by whom?

Note my sig, AMD has been "expecting" many things.

Me loves the sig...there is no denying AMD did nothing short of blowing smoke up the industry's proverbial ass regarding performance/power expectations of Barcelona.

Now they have pie on their face, lost their VP of marketing (hmmm, surely a coincidence) and are quite quiet about Phenom performance/power expectations.

Everything AMD has done this year is confidence building...if you own INTC. (which I don't)

It's personal opinion posts like this that degenerate an excellent thread into a sledge fest...
Can we please keep it ON-Topic?
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: Phynaz
The only problem is you contradict yourself.

You first link to a slide that says mobile and mainstream, and then you talk about a 2900xt.

2900xt is neither mobile nor mainstream.

I think you don't quite understand the concept of an analogy...

But anyway, I meant how about proof that putting something on a cpu die automatically lowers its power consumption.

Transistors are transistors, no matter where they reside.

I'll try again...
The reason for the analogies is to try and illustrate to you that most of the transistors you find on a graphics card aren't required when you integrate the GPU into the die. Therefore, it's obvious that the power requirement will be far less.

BTW, the first version of fusion, if it ever happens, will be an MCM.

AMD has never even considered an MCM version of Fusion AFAIK...could you back that up?
Perhaps you are just confused by diagrams like this.
You need to look closer at the diagram...note that the multiple cores are interconnected by Direct Connect, which is the crossbar switch. This necessarily means that it HAS to be on the same die and not an MCM.



 

jones377

Senior member
May 2, 2004
450
47
91
Originally posted by: Viditor
Originally posted by: Phynaz
The only problem is you contradict yourself.

You first link to a slide that says mobile and mainstream, and then you talk about a 2900xt.

2900xt is neither mobile nor mainstream.

I think you don't quite understand the concept of an analogy...

But anyway, I meant how about proof that putting something on a cpu die automatically lowers its power consumption.

Transistors are transistors, no matter where they reside.

I'll try again...
The reason for the analogies is to try and illustrate to you that most of the transistors you find on a graphics card aren't required when you integrate the GPU into the die. Therefore, it's obvious that the power requirement will be far less.

BTW, the first version of fusion, if it ever happens, will be an MCM.

AMD has never even considered an MCM version of Fusion AFAIK...could you back that up?
Perhaps you are just confused by diagrams like this.
You need to look closer at the diagram...note that the multiple cores are interconnected by Direct Connect, which is the crossbar switch. This necessarily means that it HAS to be on the same die and not an MCM.

Pretty much all of the transistors on a GPU are logic, compared to a CPU, where the vast majority are in the cache. So yes, if you want 0.5 TFLOP on a CPU-GPU you need hundreds of millions of transistors, which is going to consume lots of power. Not 150W, mind you, since some of that is consumed by the onboard RAM, but still quite a lot.

A CPU core excluding caches is on the order of 20-30 million transistors. A GPU with that few won't have very high performance.
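
A rough back-of-envelope check on that, using approximate Radeon HD 2900 XT (R600) class figures as the yardstick; every number in this sketch is a ballpark assumption for illustration, not an official spec:

```python
# Rough sketch: how many transistors a ~0.5 TFLOP shader array implies,
# using approximate Radeon HD 2900 XT (R600) class figures as the yardstick.
# All numbers here are ballpark assumptions for illustration only.

STREAM_PROCESSORS = 320      # shader ALUs (assumed)
FLOPS_PER_CLOCK   = 2        # one multiply-add per ALU per clock (assumed)
CORE_CLOCK_GHZ    = 0.74     # ~740 MHz core clock (assumed)
TOTAL_TRANSISTORS = 700e6    # ~700 million transistors for the whole GPU (assumed)

peak_gflops = STREAM_PROCESSORS * FLOPS_PER_CLOCK * CORE_CLOCK_GHZ
print(f"Peak throughput: ~{peak_gflops:.0f} GFLOPS")                       # ~474
print(f"Transistors per GFLOPS: ~{TOTAL_TRANSISTORS / peak_gflops / 1e6:.1f} million")

# Contrast with a CPU core at roughly 20-30 million logic transistors
# (excluding cache): reaching half a teraflop takes an order of magnitude
# more logic, which is the point being made about power.
```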
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: jones377
Originally posted by: Viditor
Originally posted by: Phynaz
The only problem is you contradict yourself.

You first link to a slide that says mobile and mainstream, and then you talk about a 2900xt.

2900xt is neither mobile nor mainstream.

I think you don't quite understand the concept of an analogy...

But anyway, I meant how about proof that putting something on a cpu die automatically lowers its power consumption.

Transistors are transistors, no matter where they reside.

I'll try again...
The reason for the analogies is to try and illustrate to you that most of the transistors you find on a graphics card aren't required when you integrate the GPU into the die. Therefore, it's obvious that the power requirement will be far less.

BTW, the first version of fusion, if it ever happens, will be an MCM.

AMD has never even considered an MCM version of Fusion AFAIK...could you back that up?
Perhaps you are just confused by diagrams like this.
You need to look closer at the diagram...note that the multiple cores are interconnected by Direct Connect, which is the crossbar switch. This necessarily means that it HAS to be on the same die and not an MCM.

Pretty much all of the transistors on a GPU are logic, compared to a CPU, where the vast majority are in the cache. So yes, if you want 0.5 TFLOP on a CPU-GPU you need hundreds of millions of transistors, which is going to consume lots of power. Not 150W, mind you, since some of that is consumed by the onboard RAM, but still quite a lot.

A CPU core excluding caches is on the order of 20-30 million transistors. A GPU with that few won't have very high performance.

Fair enough...but I believe that instead of a discrete cache, Fusion will utilize a very large cache shared between the CPU and GPU cores. AMD has stated (IIRC) that they will need some new x86 instructions, and I wonder if part of that will be to help separate and utilize the L3?

As to the setup engine, it should be possible to set that up on an off-board card connected via a cHT (Torrenza) interconnect.
 

myocardia

Diamond Member
Jun 21, 2003
9,291
30
91
Originally posted by: Viditor
It will never be close to that high...
For example, on-board chipset graphics like the Mobility 1150 run at 400 MHz and have a TDP well under 10w...can you think of any discrete card that has come close to that in the last 7 years?

Sure, take any of the lowest-performance video cards of the last few years, and underclock them until they're down to the performance level of the Mobility 1150, and they'd all be ~10 watts.

When you integrate the GPU, it drastically changes the power requirements.

No, it doesn't. Would it lower it by some degree? Undoubtedly, but the word drastic would be nowhere in the description, at least by anyone with a functioning brain stem.

Think about it...by integrating, you eliminate the need for another memory controller (like the one on the graphics card), the PCIe signalling device, and the distances you need to send any signal are measured in microns and not inches.

Memory controllers aren't what make video cards power hogs; transistors are, at least that's where the majority of the power goes. BTW, it sure sounds to me like you're talking about a laptop with roughly the same performance as a K6-2 300/Celeron 300 (non-A), along with a 16MB TNT video card. If that's what you're talking about, of course its TDP could be as low as ~10 watts at 45nm.
 

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
Originally posted by: myocardia
Yeah, but my point all along was that it's almost impossible to keep a 175-watt cpu from throttling, even with the best high-end heatpipe. So, let's say that at 32nm, they're able to get this CGPU down to ~150 watts. Are AMD and/or Intel planning on shipping TR Ultra 120 Extremes with these little house heaters? Because anything less than an Ultra 120 Extreme or a Tuniq Tower, and you're gonna end up with a pile of smoldering silicon that used to be a CGPU.

I'm sure they'll ship with a cooling solution configured for their target market. Who knows, maybe it'll wind up using something like the infamous OCZ Hydrojet or some other never-before-seen cooling device utilizing carbon nanotubes.

If they can't lower the TDP of the device to the point that they can cool it effectively, they'll just have to lower clock speeds and voltages until the beast is tamed (and hope nobody notices).

Originally posted by: Viditor

It will never be close to that high...
For example, on-board chipset graphics like the Mobility 1150 run at 400 MHz and have a TDP well under 10w...can you think of any discrete card that has come close to that in the last 7 years?
When you integrate the GPU, it drastically changes the power requirements.

Think about it...by integrating, you eliminate the need for another memory controller (like the one on the graphics card), the PCIe signalling device, and the distances you need to send any signal are measured in microns and not inches.

275W or what have you was just a number some other folks pulled out of thin air, so I reacted to it. I would hope they would never try to ship an on-card chipset with a TDP that high (or even one with a TDP of 150W) but you never know.
 

myocardia

Diamond Member
Jun 21, 2003
9,291
30
91
Originally posted by: DrMrLordX
275W or what have you was just a number some other folks pulled out of thin air, so I reacted to it. I would hope they would never try to ship an on-card chipset with a TDP that high (or even one with a TDP of 150W) but you never know.

Actually, AMD has raised the TDP of their soon-to-be-released Phenoms to 125 watts, it seems, even though the fastest is only going to be 2.4 GHz. The Phenom FX, to be released sometime in Q1 '08, is supposed to have a 140-watt TDP. And the QX6850 is a 130-watt TDP chip. So, it isn't like they aren't already close to 150 watts, and that's at stock speeds, of course. :)
 

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
Originally posted by: myocardia

Actually, AMD has raised the TDP of their soon-to-be-released Phenoms to 125 watts, it seems, even though the fastest is only going to be 2.4 GHz. The Phenom FX, to be released sometime in Q1 '08, is supposed to have a 140-watt TDP. And the QX6850 is a 130-watt TDP chip. So, it isn't like they aren't already close to 150 watts, and that's at stock speeds, of course. :)

That doesn't necessarily mean they're going to ship a card with a chipset affected by those TDP figures. Hell, the TDP on lower-end Phenom FXs is lower than 125W, and we all know how conservative AMD has been with TDP figures in the past.

It also seems pretty clear that the Phenom chips we'll be seeing in '07 will be rebadged "throwaway" Barcelonas using early/buggy cores, so using TDP figures from early Phenom FX releases doesn't really seem to make sense.
 

RanDum72

Diamond Member
Feb 11, 2001
4,330
0
76
The price for the 'chipset' is $1,999. It has the option of being an add-on card for PCs with PCI Express x16 slots. I wonder if that would bring my 3DMark scores to world-record levels?
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Originally posted by: Viditor
Originally posted by: Idontcare
Originally posted by: Phynaz
The only problem is you contradict yourself.

You first link to a slide that says mobile and mainstream, and then you talk about a 2900xt.

2900xt is neither mobile nor mainstream.

But anyway, I meant how about proof that putting something on a cpu die automatically lowers its power consumption.

Transistors are transistors, no matter where they reside.

BTW, the first version of fusion, if it ever happens, will be an MCM.

As far as "expected" total TDP...Expected by whom?

Note my sig, AMD has been "expecting" many things.

Me loves the sig...there is no denying AMD did nothing short of blowing smoke up the industry's proverbial ass regarding performance/power expectations of Barcelona.

Now they have pie on their face, lost their VP of marketing (hmmm, surely a coincidence) and are quite quiet about Phenom performance/power expectations.

Everything AMD has done this year is confidence building...if you own INTC. (which I don't)

It's personal opinion posts like this that degenerate an excellent thread into a sledge fest...
Can we please keep it ON-Topic?

You were the one who took it off topic by posting about fusion.

What you really want is to avoid any topic where you can't post more AMD propaganda.
 

myocardia

Diamond Member
Jun 21, 2003
9,291
30
91
Originally posted by: DrMrLordX
Originally posted by: myocardia

Actually, AMD has raised the TDP of their soon-to-be-released Phenoms to 125 watts, it seems, even though the fastest is only going to be 2.4 GHz. The Phenom FX, to be released sometime in Q1 '08, is supposed to have a 140-watt TDP. And the QX6850 is a 130-watt TDP chip. So, it isn't like they aren't already close to 150 watts, and that's at stock speeds, of course. :)

That doesn't necessarily mean they're going to ship a card with a chipset affected by those TDP figures.

Oh, I'm sure they won't. I'm pretty sure they'll be shipping something with roughly the performance of today's slower desktop X2's, along with the video equivalent of the aforementioned Mobility 1150. It'll be perfect for grandpa's new laptop.;)

It also seems pretty clear that the Phenom chips we'll be seeing in '07 will be rebadged "throwaway" Barcelonas using early/buggy cores, so using TDP figures from early Phenom FX releases doesn't really seem to make sense.

Got any links to anyone even speculating about that? This is the first I've heard of such, but if it's true, it will surely be the death knell for AMD, at least as a cpu manufacturer. I'll be sure to warn anyone even considering buying a Phenom about that.
 

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
Originally posted by: myocardia

Oh, I'm sure they won't. I'm pretty sure they'll be shipping something with roughly the performance of today's slower desktop X2's, along with the video equivalent of the aforementioned Mobility 1150. It'll be perfect for grandpa's new laptop.;)

Doubt it; this product is aimed at HPCs that will probably be doing number-crunching and other FP-intensive tasks.

Got any links to anyone even speculating about that? This is the first I've heard of such, but if it's true, it will surely be the death knell for AMD, at least as a cpu manufacturer. I'll be sure to warn anyone even considering buying a Phenom about that.

Originally posted by: Pederv
Originally posted by: 21stHermit
What we know from today's DailyTech article:

Phenom X4 9500 .... 95-Watt ..... 2.2GHz .... $280 .... HD9500WCGDBOX
Phenom X4 9600 .... ............. 2.3GHz .... $320 .... HD9600WCGDBOX
Phenom X4 9700 .... 125-Watt .... 2.4GHz .... $330 .... HD9700XAGDBOX

I think we know why there is no 3.0GHz!!!

Because the new stepping works better and all the dies have been diverted to where the money is, the Opterons. That's why Supermicro and Cray are announcing new systems. What the Phenom ends up with is the dies made from the previous stepping.

From http://forums.anandtech.com/me...=2115258&enterthread=y

Seems pretty logical to me. Early Phenoms will probably be B1 stepping chips.
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: Phynaz
Originally posted by: Viditor
Originally posted by: Idontcare
Originally posted by: Phynaz
The only problem is you contradict yourself.

You first link to a slide that says mobile and mainstream, and then you talk about a 2900xt.

2900xt is neither mobile nor mainstream.

But anyway, I meant how about proof that putting something on a cpu die automatically lowers its power consumption.

Transistors are transistors, no matter where they reside.

BTW, the first version of fusion, if it ever happens, will be an MCM.

As far as "expected" total TDP...Expected by whom?

Note my sig, AMD has been "expecting" many things.

Me loves the sig...there is no denying AMD did nothing short of blowing smoke up the industry's proverbial ass regarding performance/power expectations of Barcelona.

Now they have pie on their face, lost their VP of marketing (hmmm, surely a coincidence) and are quite quiet about Phenom performance/power expectations.

Everything AMD has done this year is confidence building...if you own INTC. (which I don't)

It's personal opinion posts like this that degenerate an excellent thread into a sledge fest...
Can we please keep it ON-Topic?

You were the one who took it off topic by posting about fusion.

What you really want is to avoid any topic where you can't post more AMD propaganda.

Don't be daft...this IS fusion (on a PCIe card).

I should say (more correctly) that it's Fusion's graphics engine...
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: myocardia
Originally posted by: Viditor
It will never be close to that high...
For example, on-board chipset graphics like the Mobility 1150 run at 400 MHz and have a TDP well under 10w...can you think of any discrete card that has come close to that in the last 7 years?

Sure, take any of the lowest-performance video cards of the last few years, and underclock them until they're down to the performance level of the Mobility 1150, and they'd all be ~10 watts.

Please list for me the number of graphics cards at 400 MHz with a TDP well under 10w...

When you integrate the GPU, it drastically changes the power requirements.

No, it doesn't. Would it lower it by some degree? Undoubtedly, but the word drastic would be nowhere in the description, at least by anyone with a functioning brain stem.

Hmmm...firstly, the brain stem controls autonomic functions and not reasoning. So do you mean the term "drastic" would be nowhere in the description only by those for whom reason doesn't come into play? :)

Second, since it seems to be a subjective term, could you give your own guesstimate as to what degree YOU mean?

Think about it...by integrating, you eliminate the need for another memory controller (like the one on the graphics card), the PCIe signalling device, and the distances you need to send any signal are measured in microns and not inches.

Memory controllers aren't what make video cards power hogs; transistors are, at least that's where the majority of the power goes. BTW, it sure sounds to me like you're talking about a laptop with roughly the same performance as a K6-2 300/Celeron 300 (non-A), along with a 16MB TNT video card. If that's what you're talking about, of course its TDP could be as low as ~10 watts at 45nm.

Did you think that the cache, memory controllers, signalling devices, and RAM were made of something other than transistors?
And I'm talking about a processor that is much closer to a supercomputer on a chip...

1. The reason that Brisbane has a higher latency in L2 cache is so that AMD can drastically increase cache sizes on it.
"AMD has given us the official confirmation that L2 cache latencies have increased, and that it purposefully did so in order to allow for the possibility of moving to larger cache sizes in future parts."
AT Article

2. A very large shared cache on Fusion could eliminate the need for the discrete caches found on graphics cards, as well as significantly reduce graphics memory latency, which is an inherent benefit of moving on-die.

3. Since many of the parts required in a graphics card are already present in the CPU, when you combine and integrate the two you have a drastic net reduction in power requirements.


Some more food for thought...
Compare the AMD San Diego 3700+ single-core with the Toledo 4200+ dual-core...both on 90nm SOI.

San Diego = 2200 MHz clockspeed, 105 million transistors, 89w TDP
Toledo = 2200 MHz clockspeed, 155 million transistors (about 50% more!), 89w TDP
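
As a quick arithmetic check on that comparison (using only the transistor counts and TDP quoted above; TDP here is a rating, not measured draw):

```python
# Arithmetic check on the San Diego vs. Toledo comparison quoted above.
san_diego_transistors = 105e6
toledo_transistors    = 155e6
tdp_watts             = 89.0   # same rated TDP quoted for both parts

ratio = toledo_transistors / san_diego_transistors
print(f"Toledo has ~{(ratio - 1) * 100:.0f}% more transistors")            # ~48%
print(f"Rated watts per million transistors: "
      f"San Diego ~{tdp_watts / 105:.2f}, Toledo ~{tdp_watts / 155:.2f}")  # 0.85 vs 0.57

# (As the next reply points out, TDP is a thermal rating, not measured draw,
#  so this only shows that the rating didn't scale with transistor count.)
```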
 

dmens

Platinum Member
Mar 18, 2005
2,271
917
136
AMD has given us the official confirmation that L2 cache latencies have increased, and that it purposefully did so in order to allow for the possibility of moving to larger cache sizes in future parts.

pure 100% fud. there is absolutely no reason to settle on a read time longer than what the cache structure can give you. the only logical explanation was that the L2 didn't scale like the rest of the design. that explanation implies the latency is lower but the choice was made to use a longer latency. so it is a marketing lie.

Some more food for thought...
Compare the AMD San Diego 3700+ single-core with the Toledo 4200+ dual-core...both on 90nm SOI.

San Diego = 2200 MHz clockspeed, 105 million transistors, 89w TDP
Toledo = 2200 MHz clockspeed, 155 million transistors (about 50% more!), 89w TDP

two completely meaningless numbers. TDP has little relevance to actual power draw during operation. also, what is the VID? how is the sorting done? at that frequency, is dynamic power even that big of an issue? i doubt it. if you look at the top bins of those two cores, both the voltages and the TDP's are different. the datasheet says only 6W difference, but in reality, the actual difference is far higher.
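
To put the VID/binning point in concrete terms, here is a minimal sketch of the standard dynamic-power relation P ≈ α·C·V²·f; the capacitance, activity factor, and VID values below are made-up illustrative numbers, not measurements of either chip:

```python
# Minimal sketch of why supply voltage (VID) and binning dominate power,
# using the standard dynamic-power approximation P_dyn ~ alpha * C * V^2 * f.
# All parameter values below are made up purely for illustration.

def dynamic_power(alpha, c_eff_farads, volts, freq_hz):
    """Switching power: activity factor * effective capacitance * V^2 * f."""
    return alpha * c_eff_farads * volts ** 2 * freq_hz

FREQ  = 2.2e9    # 2.2 GHz, as in the San Diego/Toledo example
C_EFF = 1.4e-7   # hypothetical effective switched capacitance (farads)
ALPHA = 0.15     # hypothetical average activity factor

for vid in (1.30, 1.35, 1.40):
    watts = dynamic_power(ALPHA, C_EFF, vid, FREQ)
    print(f"VID {vid:.2f} V -> ~{watts:.0f} W dynamic power")

# Roughly +4% on VID alone gives roughly +8% dynamic power, and leakage grows
# even faster with voltage, so two parts sharing one TDP rating can draw very
# different power in practice.
```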
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: dmens
AMD has given us the official confirmation that L2 cache latencies have increased, and that it purposefully did so in order to allow for the possibility of moving to larger cache sizes in future parts.

pure 100% fud. there is absolutely no reason to settle on a read time longer than what the cache structure can give you. the only logical explanation was that the L2 didn't scale like the rest of the design. that explanation implies the latency is lower but the choice was made to use a longer latency. so it is a marketing lie.

Well, firstly it isn't FUD...that stands for Fear, Uncertainty, and Doubt (first coined by the IBM marketing team many years ago), and I don't see that as anything like this.

I didn't get the same implication from the explanation that you did...my impression was that they had redesigned the cache so that it was more scalable, but that the cost of the redesign was increased latency.

Some more food for thought...
Compare the AMD San Diego 3700+ single-core with the Toledo 4200+ dual-core...both on 90nm SOI.

San Diego = 2200 MHz clockspeed, 105 million transistors, 89w TDP
Toledo = 2200 MHz clockspeed, 155 million transistors (about 50% more!), 89w TDP

two completely meaningless numbers. TDP has little relevance to actual power draw during operation. also, what is the VID? how is the sorting done? at that frequency, is dynamic power even that big of an issue? i doubt it. if you look at the top bins of those two cores, both the voltages and the TDP's are different. the datasheet says only 6W difference, but in reality, the actual difference is far higher.

We are agreed here, actually...TDP is probably the most useless metric there is. Unfortunately, it has become a standard for reviewers (God knows why) and most people don't know how to read anything else when it comes to power use or heat generation.
In fact, one big reason I said it was "food for thought" was to make exactly that point.

The origin of this portion of the thread was that some poster assumed that since the graphics card had a TDP of 150w and the CPU a TDP of 125w, then a fusion of the CPU and GPU would have a TDP of 275w...my point was that this is absolute rubbish (a point I would be willing to bet you agree with).
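
To illustrate that argument with numbers, here is a deliberately crude sketch; the breakdown of the card's board power and the scaling factor for the integrated GPU block are guesses for illustration only, not AMD figures:

```python
# Crude sketch of why "150W card + 125W CPU = 275W fused chip" doesn't follow.
# Every number below is a hypothetical guess for illustration, not a spec.

CARD_TDP = 150.0   # board-level TDP of the discrete graphics card (W)
CPU_TDP  = 125.0   # CPU package TDP (W)

# Hypothetical breakdown of where the card's 150W board power goes:
card = {
    "gpu core":         95.0,  # the shader/texture logic itself
    "onboard GDDR3":    25.0,  # dedicated graphics memory
    "VRM/board losses": 20.0,  # voltage regulation, PCIe interface, misc.
    "fan":              10.0,
}
assert sum(card.values()) == CARD_TDP

# In an integrated part only the GPU core logic moves on-die; the dedicated
# memory, board VRMs, fan, and PCIe signalling aren't duplicated. Also assume
# the on-die GPU block is clocked/volted down to ~60% of the discrete core's
# power (a guess, not an AMD figure).
integrated_gpu_block = card["gpu core"] * 0.60

print(f"Naive sum of TDPs:         {CARD_TDP + CPU_TDP:.0f} W")              # 275 W
print(f"Rough integrated estimate: {CPU_TDP + integrated_gpu_block:.0f} W")  # ~182 W
```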
 

dmens

Platinum Member
Mar 18, 2005
2,271
917
136
Originally posted by: Viditor
Well, firstly it isn't FUD...that stands for Fear, Uncertainty, and Doubt (first coined by the IBM marketing team many years ago), and I don't see that as anything like this.

I didn't get the same implication from the explanation that you did...my impression was that they had redesigned the cache so that it was more scalable, but that the cost of the redesign was increased latency.

your impression is wrong, the scalability argument makes zero sense to me personally. but plenty of people bought it, and that is why the statement is a good piece of fud. perhaps not in the traditional sense of dissuading the customers from a rival product, but just a bullshit explanation to explain away a sore point. same idea.

The origin of this portion of the thread was that some poster assumed that since the graphics card had a TDP of 150w and the CPU a TDP of 125w, then a fusion of the CPU and GPU would have a TDP of 275w...my point was that this is absolute rubbish (a point I would be willing to bet you agree with).

the TDP equivalence may be crap, but imho a hybrid CPU/GPU design would warrant a higher TDP than the TDP sum of the separate, equivalent components, so cooling requirements are met. such a design is going to have an extremely high thermal density.
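
A quick sketch of that thermal-density argument, using hypothetical round numbers for die areas, powers, and integration savings (none of these are real figures for any shipping part):

```python
# Sketch of the thermal-density argument: similar total power spread over a
# smaller combined die means more W/mm^2 for the cooler to deal with.
# Die areas, powers, and savings percentages are hypothetical round numbers.

cpu_power, cpu_area = 125.0, 285.0   # watts, mm^2 (hypothetical quad-core die)
gpu_power, gpu_area = 95.0, 190.0    # watts, mm^2 (hypothetical GPU core logic)

# Discrete case: two separate dies, each spreading heat into its own cooler.
print(f"CPU die:   {cpu_power / cpu_area:.2f} W/mm^2")
print(f"GPU die:   {gpu_power / gpu_area:.2f} W/mm^2")

# Fused case: assume integration saves 15% of the combined power (shared
# resources) but 30% of the combined area (shared cache, one memory
# controller, no PCIe PHY on the GPU side) -- both percentages are guesses.
fused_power = (cpu_power + gpu_power) * 0.85
fused_area  = (cpu_area + gpu_area) * 0.70
print(f"Fused die: {fused_power / fused_area:.2f} W/mm^2")

# Under these guesses the fused die runs hotter per unit area than either
# discrete part, which is why a vendor might rate its TDP with extra headroom
# even if total power drops.
```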
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: dmens
Originally posted by: Viditor
Well, firstly it isn't FUD...that stands for Fear, Uncertainty, and Doubt (first coined by the IBM marketing team many years ago), and I don't see that as anything like this.

I didn't get the same implication from the explanation that you did...my impression was that they had redesigned the cache so that it was more scalable, but that the cost of the redesign was increased latency.

your impression is wrong, the scalability argument makes zero sense to me personally. but plenty of people bought it, and that is why the statement is a good piece of fud. perhaps not in the traditional sense of dissuading the customers from a rival product, but just a bullshit explanation to explain away a sore point. same idea.

Then we shall have to agree to disagree...
As a nitpick, the difference between FUD and bullshit is that FUD is used specifically to discredit a competitor...

The origin of this portion of the thread was that some poster assumed that since the graphics card had a TDP of 150w and the CPU a TDP of 125w, then a fusion of the CPU and GPU would have a TDP of 275w...my point was that this is absolute rubbish (a point I would be willing to bet you agree with).

the TDP equivalence may be crap, but imho a hybrid CPU/GPU design would warrant a higher TDP than the TDP sum of the separate, equivalent components, so cooling requirements are met. such a design is going to have an extremely high thermal density.

Ummm...if I'm reading this right, you think that in this case a GCPU along the lines of Fusion would require a TDP higher than 275w?!? That's just silly...
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Originally posted by: Viditor
Originally posted by: Phynaz
Originally posted by: Viditor
Originally posted by: Idontcare
Originally posted by: Phynaz
The only problem is you contradict yourself.

You first link to a slide that says mobile and mainstream, and then you talk about a 2900xt.

2900xt is neither mobile nor mainstream.

But anyway, I meant how about proof that putting something on a cpu die automatically lowers its power consumption.

Transistors are transistors, no matter where they reside.

BTW, the first version of fusion, if it ever happens, will be an MCM.

As far as "expected" total TDP...Expected by whom?

Note my sig, AMD has been "expecting" many things.

Me loves the sig...there is no denying AMD did nothing short of blowing smoke up the industry's proverbial ass regarding performance/power expectations of Barcelona.

Now they have pie on their face, lost their VP of marketing (hmmm, surely a coincidence) and are quite quiet about Phenom performance/power expectations.

Everything AMD has done this year is confidence building...if you own INTC. (which I don't)

It's personal opinion posts like this that degenerate an excellent thread into a sledge fest...
Can we please keep it ON-Topic?

You were the one who took it off topic by posting about fusion.

What you really want is to avoid any topic where you can't post more AMD propaganda.

Don't be daft...this IS fusion (on a PCIe card).

I should say (more correctly) that it's Fusion's graphics engine...


Wrong again.

This is an X38xx video card minus the video output. Just like its predecessor was an X1900 series card minus the video output.

This has absolutely ZERO to do with fusion, and never will have anything to do with it.

"AMD says the FireStream 9170 is based around a GPU built on 55nm process technology (presumably the same RV670 GPU expected to appear in Radeon HD 3800 cards later this month) and that it features double-precision floating point technology, 320 "stream cores", 500 gigaFLOPS of floating-point computing power, 2GB of onboard GDDR3 memory, a PCI Express 2.0 interface, and a 150W power envelope. The new GPU is launching at $1,999, which AMD says is "competitively priced.""

Let me know when AMD is putting those kinds of specs on-die with a cpu. Mark your calendar for about 10 years out.
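
As a quick sanity check on those quoted specs (assuming one multiply-add, i.e. 2 FLOPs, per stream core per clock, which the article doesn't state):

```python
# Quick check on the quoted FireStream 9170 numbers. The "2 FLOPs per clock"
# (one multiply-add per stream core) is an assumption, not from the article.

STREAM_CORES  = 320
PEAK_GFLOPS   = 500.0
BOARD_TDP_W   = 150.0
FLOPS_PER_CLK = 2       # assumed multiply-add per stream core per clock

implied_clock_mhz = PEAK_GFLOPS * 1e3 / (STREAM_CORES * FLOPS_PER_CLK)
print(f"Implied core clock: ~{implied_clock_mhz:.0f} MHz")              # ~781 MHz
print(f"Peak efficiency:    ~{PEAK_GFLOPS / BOARD_TDP_W:.1f} GFLOPS/W") # ~3.3
```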


You really, really should give up the kool-aid.
 

dmens

Platinum Member
Mar 18, 2005
2,271
917
136
Originally posted by: Viditor
Then we shall have to agree to disagree...

sure, as in you nod your head at the marketing explanation without giving it a second thought, but i view it with skepticism.

Ummm...if I'm reading this right, you think that in this case a GCPU along the lines of Fusion would require a TDP higher than 275w?!? That's just silly...

oh yeah, and why is that? i gave a possible reason (thermal density), you have no explanation on the contrary other than quoting another popular marketing soundbite, namely, integration always requires less power.
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: dmens
Originally posted by: Viditor
Then we shall have to agree to disagree...

sure, as in you nod your head at the marketing explanation without giving it a second thought, but i view it with skepticism.

No, as in it makes perfect sense to me (and evidently to Anand as well, or he obviously would have joined you in your skepticism).

Ummm...if I'm reading this right, you think that in this case a GCPU along the lines of Fusion would require a TDP higher than 275w?!? That's just silly...

oh yeah, and why is that? i gave a possible reason (thermal density), you have no explanation on the contrary other that quoting another popular marketing soundbite, namely, integration always requires less power.

I actually gave several reasons, including pointing out that your theory of thermal density doesn't always apply.
If the TDP of 2 different parts on the same process and at the same clockspeed can be the same even though one has a 50% greater transistor count, then the idea that integrating 2 devices and sharing resources yields a TDP higher than the sum of the 2 discrete parts is plain silly.
 

dmens

Platinum Member
Mar 18, 2005
2,271
917
136
Originally posted by: Viditor
No, as in it makes perfect sense to me (and evidently to Anand as well, or he obviously would have joined you in your skepticism).

the fact that anand agrees says nothing about the technical correctness of the statement.

I actually gave several reasons, including pointing out that your theory of thermal density doesn't always apply.
If the TDP of 2 different parts on the same process and at the same clockspeed can be the same even though one has a 50% greater transistor count, then the idea that integrating 2 devices and sharing resources yields a TDP higher than the sum of the 2 discrete parts is plain silly.

you haven't demonstrated anything at all regarding thermal density, because the very metric you're using to "disprove" what I said is based on TDP, which in itself is a bullshit metric, as you've said so repeatedly. your single/dual core example is meaningless (vid, bins, all that stuff). food for thought, or is that what you really believe? and i merely stated that a higher thermal density is likely to lead to a higher TDP rating, and a hybrid design is likely to have both.
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: dmens
Originally posted by: Viditor
No, as in it makes perfect sense to me (and evidently to Anand as well, or he obviously would have joined you in your skepticism).

the fact that anand agrees says nothing about the technical correctness of the statement.

True, but the fact that you are skeptical says nothing about it being incorrect either.
As I said before, we should just agree to disagree...

I actually gave several reasons, including pointing out that your theory of thermal density doesn't always apply.
If the TDP of 2 different parts on the same process and at the same clockspeed can be the same even though one has a 50% greater transistor count, then the idea that integrating 2 devices and sharing resources yields a TDP higher than the sum of the 2 discrete parts is plain silly.

you haven't demonstrated anything at all regarding thermal density, because the very metric you're using to "disprove" what I said is based on TDP, which in itself is a bullshit metric, as you've said so repeatedly. your single/dual core example is meaningless (vid, bins, all that stuff). food for thought, or is that what you really believe? and i merely stated that a higher thermal density is likely to lead to a higher TDP rating, and a hybrid design is likely to have both.

The reason for that is that your statement was also based on TDP estimates...

"a hybrid CPU/GPU design would warrant a higher TDP than the TDP sum of the separate, equivalent components"

As to my explanation, I'll try to use a car analogy to make it simpler...
If you have a car with a V6 and add a second V6 engine to make it a V12, the combined fuel use and power/weight ratio doesn't double. Because the two engines are sharing a single chassis and body, the overall efficiency for a given level of performance is increased.

In this case, because the hybrid chip will utilize shared components such as the cache, RAM, signalling circuits, and memory controller with the CPU, the combination will have a net reduction in power usage and heat for the system.
 

myocardia

Diamond Member
Jun 21, 2003
9,291
30
91
Originally posted by: DrMrLordX
Originally posted by: Pederv
Originally posted by: 21stHermit
What we know from today's DailyTech article:

Phenom X4 9500 .... 95-Watt ..... 2.2GHz .... $280 .... HD9500WCGDBOX
Phenom X4 9600 .... ............. 2.3GHz .... $320 .... HD9600WCGDBOX
Phenom X4 9700 .... 125-Watt .... 2.4GHz .... $330 .... HD9700XAGDBOX

I think we know why there is no 3.0GHz!!!

Because the new stepping works better and all the dies have been diverted to where the money is, the Opterons. That's why Supermicro and Cray are announcing new systems. What the Phenom ends up with is the dies made from the previous stepping.

From http://forums.anandtech.com/me...=2115258&enterthread=y

Seems pretty logical to me. Early Phenoms will probably be B1 stepping chips.

First, I thought it was somewhat obvious that I meant something other than a forum post. BTW, why not just link back to this page, since you had already speculated that? :D Let me try again: do you have any links with that same speculation, written by anyone who gets their paycheck for writing about or testing computer hardware?