About tri-cores...

Pelu

Golden Member
Mar 3, 2008
1,208
0
0
Sup all.. I know this is going to get bad.. but...

I hear that almost all tri-core processors aren't really meant to be 3 cores at all... they are simply quad cores where one of the cores got screwed up in manufacturing!!!... so the manufacturer made a simple modification to allow it to run with the rest of the working cores, in this case 3...
 

Flipped Gazelle

Diamond Member
Sep 5, 2004
6,666
3
81
Originally posted by: Pelu
Sup all.. I know this is going to get bad.. but...

I hear that almost all tri-core processors aren't really meant to be 3 cores at all... they are simply quad cores where one of the cores got screwed up in manufacturing!!!... so the manufacturer made a simple modification to allow it to run with the rest of the working cores, in this case 3...

Welcome to last year! :laugh:

This is fairly common knowledge around here.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
I don't see how this is "bad", especially considering the fact that in many cases you can enable the 4th core without issue.

The AMD triple-core is probably the best bang for the buck on the market right now.

Based on the OP, he was baiting and he knew it.
 

Cogman

Lifer
Sep 19, 2000
10,286
147
106
Yeah, this is way old news. I don't exactly see how it is a bad thing either. The testing for these processors is pretty stringent, so there isn't really much chance of getting a lemon.
 

aatf510

Golden Member
Nov 13, 2004
1,811
0
0
Originally posted by: Pelu
Sup all.. I know this is going to get bad.. but...

I hear that almost all tri-core processors aren't really meant to be 3 cores at all... they are simply quad cores where one of the cores got screwed up in manufacturing!!!... so the manufacturer made a simple modification to allow it to run with the rest of the working cores, in this case 3...

What's the difference between disabling a defective core and speed-binning parts?

In case you don't already know, here is an example,
The Core 2 Duo E6300 might originally be manufactured as the top-of-the-line X6800. Unfortunately, it did not pass QC at 2.93GHz and might even have 1MB of its L2 cache defective. Therefore, Intel labeled, packaged, and sold it as a 1.86GHz, 2MB-L2 E6300.
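That binning flow can be sketched as a toy model. SKU names, clock cutoffs, and cache requirements here are illustrative placeholders, not Intel's actual test flow:

```python
# Toy model of speed binning: each tested die is sold under the highest
# SKU it qualifies for. Numbers are illustrative only.
SKUS = [
    ("X6800", 2.93, 4),   # (name, min stable GHz, required good L2 in MB)
    ("E6600", 2.40, 4),
    ("E6300", 1.86, 2),
]

def bin_die(max_stable_ghz, good_l2_mb):
    """Return the best SKU this die qualifies for, or None (scrap)."""
    for name, min_ghz, min_l2 in SKUS:
        if max_stable_ghz >= min_ghz and good_l2_mb >= min_l2:
            return name
    return None

# A die that can't hold 2.93GHz and has 1MB of bad L2 still sells:
print(bin_die(2.1, 2))   # -> E6300
print(bin_die(3.1, 4))   # -> X6800
```

Same idea as the X3: a partially impaired part still ships, just under a cheaper label.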
 

ilkhan

Golden Member
Jul 21, 2006
1,117
1
0
The only downside is AMD's design isn't good enough to shut off power to the fourth core, so it's still putting out heat.
But as above, this is old news.
 

Flipped Gazelle

Diamond Member
Sep 5, 2004
6,666
3
81
Originally posted by: ilkhan
The only downside is AMD's design isn't good enough to shut off power to the fourth core, so it's still putting out heat.
But as above, this is old news.

Yeah, it would have been really nice if the inactive 4th core wasn't sucking up the juice.

Originally posted by: SickBeast
Based on the OP, he was baiting and he knew it.

I think the OP has a PhII.

It seems to me that he makes plenty of posts where he sounds awfully confused... so it is possible he thought he was "breaking" the story.
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: ilkhan
The only downside is AMD's design isn't good enough to shut off power to the fourth core, so it's still putting out heat.
But as above, this is old news.

Ummm...that's only partially true. While the voltage is locked to the core, the clockspeed can be set to 0-100% on each individual core. Therefore the temps are greatly reduced...
 

Stumps

Diamond Member
Jun 18, 2001
7,125
0
0
hello....last year just called and they want their news back....

AMD X3's are pretty kick ass....especially on AM2+ boards ;)
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Viditor
Originally posted by: ilkhan
The only downside is AMD's design isn't good enough to shut off power to the fourth core, so it's still putting out heat.
But as above, this is old news.

Ummm...that's only partially true. While the voltage is locked to the core, the clockspeed can be set to 0-100% on each individual core. Therefore the temps are greatly reduced...

Viditor, ilkhan is referring to the fact that on X3's of both the 65nm and 45nm varieties, the fourth core on the die, which has been deactivated, still consumes power by way of static leakage pathways (the same static leakage the other three cores have).

So while only three cores contribute to dynamic leakage (the kind that comes from xtors switching) all four cores contribute to static leakage even though one core is "fused off". AMD can't fuse off the power distribution to the fourth core, so all the xtors on the disabled core still get Vcc applied to them.

To avoid this (static leakage on disabled die) they need a type of powergate xtor the likes of which Intel uses in Nehalem for the very purpose of eliminating static leakage on unused cores.

To my understanding there is nothing about ilkhan's post which renders it merely partially true...it's all true.
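The dynamic-vs-static split described above can be put into rough numbers. This is a back-of-envelope sketch: Vcc, effective capacitance, activity factor, and leakage current below are made-up illustrative values, not AMD's actual figures. Dynamic power follows alpha*C*V^2*f and vanishes when a core stops switching; static leakage persists as long as Vcc is applied, unless a power gate cuts Vcc entirely (the Nehalem approach):

```python
# Per-core power, toy numbers only.
def core_power(vcc, freq_ghz, active, power_gated=False,
               c_eff_nf=1.0, alpha=0.5, i_leak_a=2.0):
    """Watts dissipated by one core under a simplified CMOS power model."""
    if power_gated:                    # power gate: Vcc cut off entirely
        return 0.0
    static = vcc * i_leak_a            # leakage flows whenever Vcc is applied
    dynamic = (alpha * c_eff_nf * 1e-9 * vcc**2 * freq_ghz * 1e9
               if active else 0.0)     # switching power only when clocked
    return static + dynamic

vcc = 1.3
active    = core_power(vcc, 2.6, active=True)                    # clocked core
fused_off = core_power(vcc, 2.6, active=False)                   # X3's disabled core
gated     = core_power(vcc, 2.6, active=False, power_gated=True) # Nehalem-style

print(round(active, 2), round(fused_off, 2), gated)
```

With these toy numbers the fused-off core still burns its full static share; only the power-gated core drops to zero, which is the distinction being made about the disabled fourth core.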
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
Originally posted by: Idontcare
Originally posted by: Viditor
Originally posted by: ilkhan
The only downside is AMD's design isn't good enough to shut off power to the fourth core, so it's still putting out heat.
But as above, this is old news.

Ummm...that's only partially true. While the voltage is locked to the core, the clockspeed can be set to 0-100% on each individual core. Therefore the temps are greatly reduced...

Viditor, ilkhan is referring to the fact that on X3's of both the 65nm and 45nm varieties, the fourth core on the die, which has been deactivated, still consumes power by way of static leakage pathways (the same static leakage the other three cores have).

So while only three cores contribute to dynamic leakage (the kind that comes from xtors switching) all four cores contribute to static leakage even though one core is "fused off". AMD can't fuse off the power distribution to the fourth core, so all the xtors on the disabled core still get Vcc applied to them.

To avoid this (static leakage on disabled die) they need a type of powergate xtor the likes of which Intel uses in Nehalem for the very purpose of eliminating static leakage on unused cores.

To my understanding there is nothing about ilkhan's post which renders it merely partially true...it's all true.

That is the design of the PhII as a whole, not just the tri-core, which is what ilkhan seemed to be pointing out.
 

rudder

Lifer
Nov 9, 2000
19,441
86
91
The 3-core CPU makes everything cheaper all around. AMD doesn't waste the chips, so they don't have to eat the cost of throwing them in the trash. Consumers get a 3-core for less than a 4-core.
 

HOOfan 1

Platinum Member
Sep 2, 2007
2,337
15
81
BTTTS: (Aaron) Brooks Traded to the Saints

that's an inside joke probably no one around here would get...maybe Amberclad. :D

It means news that isn't exactly breaking.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Zstream
Originally posted by: Idontcare
Originally posted by: Viditor
Originally posted by: ilkhan
The only downside is AMD's design isn't good enough to shut off power to the fourth core, so it's still putting out heat.
But as above, this is old news.

Ummm...that's only partially true. While the voltage is locked to the core, the clockspeed can be set to 0-100% on each individual core. Therefore the temps are greatly reduced...

Viditor, ilkhan is referring to the fact that on X3's of both the 65nm and 45nm varieties, the fourth core on the die, which has been deactivated, still consumes power by way of static leakage pathways (the same static leakage the other three cores have).

So while only three cores contribute to dynamic leakage (the kind that comes from xtors switching) all four cores contribute to static leakage even though one core is "fused off". AMD can't fuse off the power distribution to the fourth core, so all the xtors on the disabled core still get Vcc applied to them.

To avoid this (static leakage on disabled die) they need a type of powergate xtor the likes of which Intel uses in Nehalem for the very purpose of eliminating static leakage on unused cores.

To my understanding there is nothing about ilkhan's post which renders it merely partially true...it's all true.

That is the design of the PhII as a whole, not just the tri-core, which is what ilkhan seemed to be pointing out.

You're going to have to spell out what you are talking about here in a few more words if you want me to understand what you are trying to communicate.

How is the PhII design different from the X3 design?

To preemptively avoid any misunderstanding stemming from the use of the term design, I am referring to it in the technical IC layout/design sense of the term (as I am assuming ilkhan was using it too): http://en.wikipedia.org/wiki/Integrated_circuit_layout
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Originally posted by: rudder
The 3-core CPU makes everything cheaper all around. AMD doesn't waste the chips, so they don't have to eat the cost of throwing them in the trash. Consumers get a 3-core for less than a 4-core.

Since when is disabling a core "preventing it from being thrown away"? This doesn't make any sense with regard to disabling a core from a quad and selling it as a tri-core.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: ExarKun333
Originally posted by: rudder
The 3-core CPU makes everything cheaper all around. AMD doesn't waste the chips, so they don't have to eat the cost of throwing them in the trash. Consumers get a 3-core for less than a 4-core.

Since when is disabling a core "preventing it from being thrown away"? This doesn't make any sense with regard to disabling a core from a quad and selling it as a tri-core.

In those cases where they are harvesting, i.e. disabling a partially functional (or completely dysfunctional) core, thus enabling the chip to be sold as an X3 when it otherwise would have been thrown away (recycled, actually; the Si is still worth money) because it could not be sold as an X4.
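The harvesting economics can be made concrete with a toy yield model: if each of the four cores is good with some independent probability p (both the independence and the single per-core yield figure are simplifying assumptions, and the p used here is made up for illustration), the binomial distribution gives the fraction of dies sellable as X4, as X3, or scrapped:

```python
from math import comb

def sku_mix(p_core_good, n_cores=4):
    """P(exactly k good cores) for a die with n independent cores."""
    return {k: comb(n_cores, k) * p_core_good**k * (1 - p_core_good)**(n_cores - k)
            for k in range(n_cores + 1)}

mix = sku_mix(0.85)          # assume 85% per-core yield (illustrative)
x4 = mix[4]                  # fully good: sell as a quad
x3 = mix[3]                  # exactly one bad core: harvest as an X3
scrap_without_x3 = 1 - x4    # no X3 SKU: every non-quad die is scrapped
scrap_with_x3 = 1 - x4 - x3  # X3 SKU rescues the one-bad-core dies

print(f"X4: {x4:.1%}  X3: {x3:.1%}  "
      f"scrap: {scrap_with_x3:.1%} (with X3) vs {scrap_without_x3:.1%} (without)")
```

Under these assumptions the one-bad-core bucket is a large slice of total output, which is why selling it as an X3 instead of scrapping it matters so much to the bottom line.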
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Originally posted by: Idontcare
Originally posted by: ExarKun333
Originally posted by: rudder
The 3-core CPU makes everything cheaper all around. AMD doesn't waste the chips, so they don't have to eat the cost of throwing them in the trash. Consumers get a 3-core for less than a 4-core.

Since when is disabling a core "preventing it from being thrown away"? This doesn't make any sense with regard to disabling a core from a quad and selling it as a tri-core.

In those cases where they are harvesting, i.e. disabling a partially functional (or completely dysfunctional) core, thus enabling the chip to be sold as an X3 when it otherwise would have been thrown away (recycled, actually; the Si is still worth money) because it could not be sold as an X4.

I guess what I was trying to get at (not very well) was why the 4th core would be intentionally disabled when it works? Does one of the cores "mostly" work, but isn't quite up to QA? Will we see a lot of issues with the CPUs where people have enabled the core? The debate seems to be if this stems from defective cores or if it is just another form of "binning" in some sense of the word.
 

ShawnD1

Lifer
May 24, 2003
15,987
2
81
Originally posted by: ExarKun333
Originally posted by: Idontcare
Originally posted by: ExarKun333
Originally posted by: rudder
The 3-core CPU makes everything cheaper all around. AMD doesn't waste the chips, so they don't have to eat the cost of throwing them in the trash. Consumers get a 3-core for less than a 4-core.

Since when is disabling a core "preventing it from being thrown away"? This doesn't make any sense with regard to disabling a core from a quad and selling it as a tri-core.

In those cases where they are harvesting, i.e. disabling a partially functional (or completely dysfunctional) core, thus enabling the chip to be sold as an X3 when it otherwise would have been thrown away (recycled, actually; the Si is still worth money) because it could not be sold as an X4.

I guess what I was trying to get at (not very well) was why the 4th core would be intentionally disabled when it works? Does one of the cores "mostly" work, but isn't quite up to QA? Will we see a lot of issues with the CPUs where people have enabled the core? The debate seems to be if this stems from defective cores or if it is just another form of "binning" in some sense of the word.

QA isn't checking to see if it works or not. They're checking to see if it will work for a reasonable period of time. If enabling the 4th core will only work for 1 year, then AMD isn't going to risk bad PR by enabling it.

The same thing applies to overclocking. Sure you can get your Core 2 to 4GHz or whatever, but will it last for 5 years of continuous use? Probably not, and that's why Intel didn't sell it as a 4GHz part. It'll last that long at 2.8GHz, so that's what they sell it as.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Astrallite
OP hasn't responded lol I think this is a telling message.

OP has a reputation of being a "drive-by" thread creator. Nothing wrong with that, just saying I wouldn't read much into their absence in this particular instance.
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Originally posted by: Idontcare
Originally posted by: Astrallite
OP hasn't responded lol I think this is a telling message.

OP has a reputation of being a "drive-by" thread creator. Nothing wrong with that, just saying I wouldn't read much into their absence in this particular instance.

Yea, this OP has started some interesting threads in the past.

I don't see how this is a 'bad' thing AMD is doing. They still get to sell the part for something; this is much better than throwing away the silicon. Radeon 4850's are just 4870's that didn't run stable at 4870 speeds. GTX260's are just GTX280 silicon that had defective parts and/or couldn't run at the GTX280 speed. OP, if you only knew what hard drive manufacturers did with drives with bad clusters too... this is just very common practice in the hardware world. My guess is AMD isn't making a whole lot of money on the tri-cores, but they're doing a whole lot better than if they just threw away every PhII that had a single bad core.
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
Originally posted by: Idontcare
Originally posted by: Zstream
Originally posted by: Idontcare
Originally posted by: Viditor
Originally posted by: ilkhan
The only downside is AMD's design isn't good enough to shut off power to the fourth core, so it's still putting out heat.
But as above, this is old news.

Ummm...that's only partially true. While the voltage is locked to the core, the clockspeed can be set to 0-100% on each individual core. Therefore the temps are greatly reduced...

Viditor, ilkhan is referring to the fact that on X3's of both the 65nm and 45nm varieties, the fourth core on the die, which has been deactivated, still consumes power by way of static leakage pathways (the same static leakage the other three cores have).

So while only three cores contribute to dynamic leakage (the kind that comes from xtors switching) all four cores contribute to static leakage even though one core is "fused off". AMD can't fuse off the power distribution to the fourth core, so all the xtors on the disabled core still get Vcc applied to them.

To avoid this (static leakage on disabled die) they need a type of powergate xtor the likes of which Intel uses in Nehalem for the very purpose of eliminating static leakage on unused cores.

To my understanding there is nothing about ilkhan's post which renders it merely partially true...it's all true.

That is the design of the PhII as a whole, not just the tri-core, which is what ilkhan seemed to be pointing out.

You're going to have to spell out what you are talking about here in a few more words if you want me to understand what you are trying to communicate.

How is the PhII design different from the X3 design?

To preemptively avoid any misunderstanding stemming from the use of the term design, I am referring to it in the technical IC layout/design sense of the term (as I am assuming ilkhan was using it too): http://en.wikipedia.org/wiki/Integrated_circuit_layout

I was referring to the fact that it's the overall design of the chip. Meaning his complaint isn't valid just for the X3; it's the design all around.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
I got it now :thumbsup: Yeah, only Nehalem with its powergate transistors is able to control core power dissipation. Phenom showed promise, but without the PCU it really fell short.
 

Zap

Elite Member
Oct 13, 1999
22,377
7
81
Originally posted by: ExarKun333
I guess what I was trying to get at (not very well) was why the 4th core would be intentionally disabled when it works? Does one of the cores "mostly" work, but isn't quite up to QA? Will we see a lot of issues with the CPUs where people have enabled the core? The debate seems to be if this stems from defective cores or if it is just another form of "binning" in some sense of the word.

It may very well have started as a form of being able to sell a quad with one defective core, but as the process matures and yields go up, chances go up of the fourth core being good-just-disabled.

There may be any number of reasons for that. Perhaps a distributor had a standing order and AMD just needs to fill them? This is similar to binning. For instance, if pretty much all Wolfdale cores can do 3.2GHz, why are there still "slow" Wolfdales being sold? Another thing is that the manufacturer needs products at every price/performance point. If AMD knows from past sales that 14% of all CPU buyers have a target price of $125, then they better have a competitive product at that price. Intel probably knows that if they got rid of the E5XXX chips they would just be handing sales to AMD, unless they drop their pricing on higher speed chips. BUT... if they do that then their ASP goes down because there ARE some people willing to pay more for a better product.

Anyways, harvesting is like binning in some respects, and they all do both: AMD with both CPUs and GPUs, Intel, NVIDIA. I don't understand why everyone else doesn't understand that. :confused: