
Why doesn't Intel use on-die controllers?

There is no known 😉 Intel processor that beats the AMD price equivalent. That is why AMD, simply and with no bias, kills Intel. Some of it is the shorter pipelines, which are very efficient in gaming, some is the integrated memory controller, and not much else. And comparing Intel DDR2 systems against AMD DDR systems may be comparing apples to oranges, but if those are the only types of fruit, that will happen. Really, if Intel does come out with better hardware but does not include the ability to use it to its maximum capability, that is a failure on their part. The other reason AMD is better is overclocking. I love my FX-62 that I got for $280. 🙂

In short, I am not ashamed to admit that I am an AMD fanboy, but then, AMD only has my loyalty as long as they have the fastest processors for the money. And they sure do, last time I checked any review, from any site.
 
Originally posted by: themusgrat
There is no known 😉 Intel processor that beats the AMD price equivalent. That is why AMD, simply and with no bias, kills Intel.

Hmm, I did a quick check and the 3800 costs around the same as a 920.

DivX = Intel
CloneCD = AMD
Gaming = AMD
LAME MP3 = AMD
MainConcept encoder = Intel
Media Encoder streaming = Intel
Multitasking = Intel
Ogg 1.1.0 = AMD
WinRAR = Intel

Fact is that in some apps the Intel is much superior to the 3800. Gaming is not one of them, but for workstation computers they are worth considering at the least.
 
Originally posted by: Bobthelost
Originally posted by: themusgrat
There is no known 😉 Intel processor that beats the AMD price equivalent. That is why AMD, simply and with no bias, kills Intel.

Hmm, I did a quick check and the 3800 costs around the same as a 920.

DivX = Intel
CloneCD = AMD
Gaming = AMD
LAME MP3 = AMD
MainConcept encoder = Intel
Media Encoder streaming = Intel
Multitasking = Intel
Ogg 1.1.0 = AMD
WinRAR = Intel

Fact is that in some apps the Intel is much superior to the 3800. Gaming is not one of them, but for workstation computers they are worth considering at the least.

Sorry, but which reviews are you referring to?
 
Originally posted by: Viditor
Originally posted by: Bobthelost
Originally posted by: themusgrat
There is no known 😉 Intel processor that beats the AMD price equivalent. That is why AMD, simply and with no bias, kills Intel.

Hmm, I did a quick check and the 3800 costs around the same as a 920.

DivX = Intel
CloneCD = AMD
Gaming = AMD
LAME MP3 = AMD
MainConcept encoder = Intel
Media Encoder streaming = Intel
Multitasking = Intel
Ogg 1.1.0 = AMD
WinRAR = Intel

Fact is that in some apps the Intel is much superior to the 3800. Gaming is not one of them, but for workstation computers they are worth considering at the least.

Sorry, but which reviews are you referring to?

THG benchmarks. But every other set of reviews backs up the fact that the Pentium range is better than the A64 range for some roles. Not many, but some.
 
Originally posted by: Bobthelost
Originally posted by: themusgrat
There is no known 😉 Intel processor that beats the AMD price equivalent. That is why AMD, simply and with no bias, kills Intel.

Hmm, I did a quick check and the 3800 costs around the same as a 920.

DivX = Intel
CloneCD = AMD
Gaming = AMD
LAME MP3 = AMD
MainConcept encoder = Intel
Media Encoder streaming = Intel
Multitasking = Intel
Ogg 1.1.0 = AMD
WinRAR = Intel
Stability = Intel

Fact is that in some apps the Intel is much superior to the 3800. Gaming is not one of them, but for workstation computers they are worth considering at the least.

 
Originally posted by: BrownTown
They own the rights to use HyperTransport, so they wouldn't have to copy it; they could put the exact same thing in their CPUs if they wanted.

Like calling the A64 extensions "EM64T"?

 
lmao hans007 obviously doesn't know the improvement from AMD's 90nm transition. It dramatically increased efficiency. Dramatically is the keyword. Many are undervolting Venice cores to achieve mobile-like power consumption figures. You're still saying "AMD hasn't done anything since 2003" while glorifying the much-forgotten hyperthreading's multitasking ability. Let's face it, it was a patch for netburst's inefficiency. Don't hate the AMD fans and review sites for siding with AMD. How about just giving AMD credit and waiting til Conroe/Merom is out to show your internet ego. Credit to the on-die memory controller, because without it AMD might not have the performance advantage today.

lol, good catch: "Stability = Intel". That's a classic.
 
Originally posted by: BrownTown
An integrated memory controller isn't a universally good thing. Money is probably a big reason why Intel keeps the northbridge, but there are others. Having an integrated memory controller means that you are pretty much tied to a specific type of memory. If new innovations come out, the core has to be redesigned instead of just the northbridge. So it could take longer to adapt to new memory technologies, and it would likely be more expensive, since you have to put the whole core through the validation process instead of just the northbridge. Also, you cannot take an old processor, put it in a new motherboard, and have it work with new memory. So when new memory tech comes out you have to upgrade the motherboard, memory, and CPU all at the same time.

With Intel you already pretty much have to get a new processor every time they transition to a new chipset or memory.
 
Munky, I'm not going to flame you, but in that post I don't see any brand names used. I don't think there is a need to try to justify a limitation of the on-board memory controller. He was just stating that the northbridge can be modified to work with existing technology, whereas with an on-board memory controller you have to scrap the whole CPU. But you have a point that occasionally that problem happens, although it's certainly less frequent than with the other type of memory controller. I'm going to brand-drop now: IBM also uses on-board memory controllers, as do many other chip makers. While it is popular, it has the above drawbacks; you gain some latency performance while you lose the ability to support new memory technologies, so you make the call.
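To put the latency side of that tradeoff in rough numbers, here is a back-of-envelope sketch in Python. All figures are invented for illustration only; real latencies vary by platform, bus speed, and memory type:

```python
# Hypothetical round-trip latency components, in nanoseconds.
# These numbers are made up to show the shape of the tradeoff,
# not measured from any real Athlon 64 or Pentium system.
FSB_HOP_NS = 20       # CPU <-> northbridge trip over the front-side bus
CONTROLLER_NS = 15    # memory controller logic itself
DRAM_ACCESS_NS = 50   # the actual DRAM access

# Chipset (northbridge) controller: every request crosses the FSB first.
off_die_latency = FSB_HOP_NS + CONTROLLER_NS + DRAM_ACCESS_NS   # 85 ns

# On-die controller: the FSB hop disappears, but the controller is now
# welded to the CPU core, so a new memory type means a new CPU.
on_die_latency = CONTROLLER_NS + DRAM_ACCESS_NS                 # 65 ns

print(off_die_latency, on_die_latency)
```

With these made-up numbers, the on-die design saves the bus hop on every single access, which is the latency win people credit to the A64; the cost is exactly the upgrade coupling described above.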
 
Trying to explain away the on-die memory controller due to the failure of Rambus is silly. While Rambus briefly owned a performance advantage over open-standard DRAM, it was never executed as anything more than an attempt to create a proprietary, highly profitable memory standard. We are all lucky that the Athlon came along and trounced Intel's offerings at that time, or we might be paying double or more what we do now for RAM, with little or none of the selection available now (from cheap value RAM to high-performance but nonstandard offerings for enthusiasts). Getting burned on Rambus was one thing, but it was the direct result of arrogance on Intel's part.

The troublesome thing is that if Intel had been able to execute things like Rambus and Itanium just a year or two earlier, when AMD was still a solidly second-tier manufacturer, they probably would have succeeded, and we as consumers would have lost.

I say this as someone who has owned, and will continue to own, both AMD and Intel systems; whatever does what I need, reliably and at the lowest cost, is what I purchase. Right now, depending on the application, that would be Sempron, Turion or P-M (depending on priorities), low-end P-D, and X2 once you get past the 920 (which has no X2 competitor).

 
while glorifying the much-forgotten hyperthreading's multitasking ability. Let's face it, it was a patch for netburst's inefficiency

If you're going to criticize something, at least understand what it is, how it works, why it was done, and its performance characteristics.
 
Originally posted by: 3chordcharlie
Trying to explain away the on-die memory controller due to the failure of Rambus is silly. While Rambus briefly owned a performance advantage over open-standard DRAM, it was never executed as anything more than an attempt to create a proprietary, highly profitable memory standard. We are all lucky that the Athlon came along and trounced Intel's offerings at that time, or we might be paying double or more what we do now for RAM, with little or none of the selection available now (from cheap value RAM to high-performance but nonstandard offerings for enthusiasts). Getting burned on Rambus was one thing, but it was the direct result of arrogance on Intel's part.

Learn your hardware history.

Rambus royalties never amounted to more than around 1% of the total cost of memory; your "doubling" of memory prices is just ludicrous. Moreover, if you've been following any kind of hardware news at all, you know that the reason Rambus was expensive was that the major DRAM makers (Samsung, Micron, Elpida, Hynix, etc.) artificially inflated the price of Rambus memory while purposely losing money on SDRAM variants to drive Rambus out of the market.

Rambus memory was indeed the high performer. The ancient PC800 RDRAM outperformed DDR400 in dual-channel configurations. RDRAM was also better than SDRAM because it could scale to higher clock speeds (while lowering latency), whereas DDR to DDR2 massively increased latency. The fact of the matter is that RDRAM is superior technology and Intel went with it. It was the DRAM manufacturers that purposely snubbed it, and they are now in the process of litigation (Samsung has already forked over $300M and pleaded guilty to DRAM price fixing).
 
Originally posted by: dmens
I don't give a crap when people insult intel, or any other entity. I just laugh at puerile fanboi opinions (e.g. yours). Of course, you are entitled to hold such opinions, just don't get pissy when others mock them, LOL.


I didn't get mad or upset at all with what you said. I just didn't quite understand your manner.

I would like to know how I am the fanboy. Honestly? Show me, out of all of my thousands of posts, where you can support your claim that I'm a fanboy. What about you, though?

I simply stated my opinion and how some of Intel's decisions could seem to support what I said about them. I am sorry it touched you so deeply that you have to act the way you are about it.

Anyways, I answered the guy that asked the question, simple as that.


Jason
 
Originally posted by: dexvx
Originally posted by: 3chordcharlie
Trying to explain away the on-die memory controller due to the failure of Rambus is silly. While Rambus briefly owned a performance advantage over open-standard DRAM, it was never executed as anything more than an attempt to create a proprietary, highly profitable memory standard. We are all lucky that the Athlon came along and trounced Intel's offerings at that time, or we might be paying double or more what we do now for RAM, with little or none of the selection available now (from cheap value RAM to high-performance but nonstandard offerings for enthusiasts). Getting burned on Rambus was one thing, but it was the direct result of arrogance on Intel's part.

Learn your hardware history.

Rambus royalties never amounted to more than around 1% of the total cost of memory; your "doubling" of memory prices is just ludicrous. Moreover, if you've been following any kind of hardware news at all, you know that the reason Rambus was expensive was that the major DRAM makers (Samsung, Micron, Elpida, Hynix, etc.) artificially inflated the price of Rambus memory while purposely losing money on SDRAM variants to drive Rambus out of the market.

Rambus memory was indeed the high performer. The ancient PC800 RDRAM outperformed DDR400 in dual-channel configurations. RDRAM was also better than SDRAM because it could scale to higher clock speeds (while lowering latency), whereas DDR to DDR2 massively increased latency. The fact of the matter is that RDRAM is superior technology and Intel went with it. It was the DRAM manufacturers that purposely snubbed it, and they are now in the process of litigation (Samsung has already forked over $300M and pleaded guilty to DRAM price fixing).

If Rambus had become the de facto standard, it would not have remained at 1% royalties. Intel and Rambus tried to corner a market, and their anti-competitive behaviour was met with anti-competitive retaliation. If Rambus was so good, they should have been hiring out technical consultants to get memory manufacturers up to speed, rather than trying to simply 'cash in'.

I happen to think DDR2 was the wrong memory at the wrong time; process improvements suggest that traditional DDR could have expanded at least to 250-300MHz system clocks, and DDR latencies are definitely superior to DDR2 latencies; it's going to take a lot of clock speed before DDR2 is a clear performance improvement.
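The DDR-vs-DDR2 latency point can be made concrete with simple arithmetic. A minimal Python sketch, using typical published CAS figures of the era (illustrative, not measurements of any particular module):

```python
# Absolute CAS latency in nanoseconds: cycles divided by the I/O clock.
# DDR transfers twice per clock, so DDR400 runs a 200MHz I/O clock and
# DDR2-800 a 400MHz I/O clock.
def cas_ns(cas_cycles, io_clock_mhz):
    return cas_cycles / io_clock_mhz * 1000.0

ddr400_cl2_5 = cas_ns(2.5, 200)   # 12.5 ns
ddr2_533_cl4 = cas_ns(4, 266)     # ~15 ns: more cycles, clock not yet high enough
ddr2_800_cl5 = cas_ns(5, 400)     # 12.5 ns: DDR2 needs 800MHz just to break even
```

On these assumed timings, early DDR2 parts actually regress in absolute latency, which matches the complaint that DDR2 only wins once clocks climb well past DDR's range.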
 
Originally posted by: dmens
while glorifying the much-forgotten hyperthreading's multitasking ability. Let's face it, it was a patch for netburst's inefficiency

If you're going to criticize something, at least understand what it is, how it works, why it was done, and its performance characteristics.


You know, one of the big reasons for HT was to increase Netburst efficiency. There were wasted CPU cycles, and Intel figured out a way to put them to use. This could very well be one reason why some areas got more than a 15% increase in performance, there being quite a few wasted cycles that are now being used.


Jason
 
Originally posted by: 3chordcharlie
If Rambus had become the de facto standard, it would not have remained at 1% royalties. Intel and Rambus tried to corner a market, and their anti-competitive behaviour was met with anti-competitive retaliation. If Rambus was so good, they should have been hiring out technical consultants to get memory manufacturers up to speed, rather than trying to simply 'cash in'.

Anti-competitive behavior? The only anti-competitive behavior came from the DRAM manufacturers. And pray tell, how would Rambus corner the market? They don't even MAKE anything. They are just an IP company with technical know-how.

Originally posted by: 3chordcharlie
I happen to think DDR2 was the wrong memory at the wrong time; process improvements suggest that traditional DDR could have expanded at least to 250-300MHz system clocks, and DDR latencies are definitely superior to DDR2 latencies; it's going to take a lot of clock speed before DDR2 is a clear performance improvement.

Guess who pushed DDR2? The DRAM OEMs.
 
Originally posted by: formulav8
Originally posted by: dmens
while glorifying the much-forgotten hyperthreading's multitasking ability. Let's face it, it was a patch for netburst's inefficiency

If you're going to criticize something, at least understand what it is, how it works, why it was done, and its performance characteristics.

You know, one of the big reasons for HT was to increase Netburst efficiency. There were wasted CPU cycles, and Intel figured out a way to put them to use. This could very well be one reason why some areas got more than a 15% increase in performance, there being quite a few wasted cycles that are now being used.

Explain to me how the short-pipelined IBM POWER series uses SMT technology.
 
Originally posted by: openwheelformula1
lmao hans007 obviously doesn't know the improvement from AMD's 90nm transition. It dramatically increased efficiency. Dramatically is the keyword. Many are undervolting Venice cores to achieve mobile-like power consumption figures. You're still saying "AMD hasn't done anything since 2003" while glorifying the much-forgotten hyperthreading's multitasking ability. Let's face it, it was a patch for netburst's inefficiency. Don't hate the AMD fans and review sites for siding with AMD. How about just giving AMD credit and waiting til Conroe/Merom is out to show your internet ego. Credit to the on-die memory controller, because without it AMD might not have the performance advantage today.

lol, good catch: "Stability = Intel". That's a classic.


It "dramatically increased efficiency"? What are you, some sort of wannabe engineer?

Any process shrink will do that. The Prescott shrink did NOT do that because it was not a shrink, it was an entirely new chip. Had they shrunk Northwood to 90nm it would have had similar power drops. So you are crediting AMD with the "transition" to 90nm, which basically every semiconductor company has now completed?

The Dothan puts out less heat than a Banias, with more cache to boot. You don't understand anything more than what the 5000 internet AMD fan sites are telling you. Seriously, go educate yourself with something that isn't just PR garbage.

Hyperthreading is a far more important feature. It is way more efficient at using pipeline cycles, and other companies have also implemented it in their non-super-long-pipeline CPUs. Sun has it in all their Niagara chips, it is in the Xbox 360 CPU, the PS3, etc. I can't even understand how you can disparage hyperthreading as a non-achievement.
 
Originally posted by: formulav8
You know, one of the big reasons for ht was to increase Netburst efficiency. There were wasted cpu cycles and figured out a way to put them to use. This could very well have been one reason why some area's in performance got more than 15% increase in performance, there being quite a few wasted cycles that are now being used.


Jason

What do you mean by wasted netburst cycles? Are you talking about a jump mispredict wiping the pipeline? That bogus argument just never dies. First of all, it is a rare event. Secondly, with a trace cache bypassing the decode stages, the front-end to back-end latency is not that bad. So saying that SMT was a "hack" to keep the machine busy during the jump restart is flat out wrong.

On the plus side, SMT does a great job bypassing pipestage stalls, which is far more common, and occurs on machines of all pipeline lengths. That's probably why lots of companies are doing it.

Now, there's plenty of real netburst quirks, but SMT was not designed to address any of them. SMT was on silicon before a lot of these issues were discovered.
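The stall-hiding argument can be illustrated with a toy scheduling model. This is a deliberately crude sketch (one issue slot per cycle, invented stall lengths, no real microarchitecture), just to show why interleaving two threads recovers cycles a single thread would waste:

```python
def run_single(ops):
    """ops: list of stall lengths; each op issues in 1 cycle and then
    stalls the pipeline for that many cycles (e.g. a cache miss)."""
    return sum(1 + stall for stall in ops)

def run_smt(t0, t1):
    """Two threads share one issue slot per cycle. A thread waiting out
    a stall cannot issue, so the slot goes to the other thread and the
    stall bubble overlaps with useful work."""
    threads = [list(t0), list(t1)]   # remaining ops per thread
    wait = [0, 0]                    # remaining stall cycles per thread
    cycles = 0
    while any(threads) or any(wait):
        issued = False
        for i in (0, 1):
            if wait[i] == 0 and threads[i] and not issued:
                wait[i] = threads[i].pop(0)  # issue; its stall begins
                issued = True
            elif wait[i] > 0:
                wait[i] -= 1                 # stall resolves in background
        cycles += 1
    return cycles

# Two threads of two instructions each, every instruction followed by a
# 2-cycle stall: run back-to-back they cost 12 cycles, interleaved only 7.
print(run_single([2, 2]) * 2, run_smt([2, 2], [2, 2]))
```

Note the model is agnostic about pipeline length, which is the point being made: stalls occur on machines of all pipeline depths, so SMT pays off well beyond netburst.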
 
lmao, I learn something new every day from hardcore fans. I didn't know Prescott was an "ENTIRELY NEW" chip. Maybe they abandoned netburst and gave us more IPC. To both Intel fanatics, hans007 and dmens: I was showing a big discrepancy between each company's development and improvement. Don't get offended if you thought I discredited hyperthreading. I loved my 2.8C, but it was short-lived and quickly displaced by a Newcastle. BTW hans007, not "any process shrink" will "dramatically increase efficiency" the way Winchester did after Newcastle, especially near the ceiling of an architecture such as netburst. There are VERY good reasons why Intel is abandoning netburst, you know. I bet you haven't seen the power consumption differences from reputable sites such as SPCR. At least dmens' passion for Intel is backed up by knowledge. Enjoy your Prescott/Smithfield/Presler, and I'll probably be joining you once Conroe hits the shelves. In the meantime I'll stick to my efficient and powerful X2 and Venice (relative to netburst, that is). Ciao.


Hypothetically, if Intel suddenly decided to implement an integrated memory controller and HyperTransport, can you honestly tell me you wouldn't like it?

AMD PR hype? What? AMD has a PR department?

Am I a wannabe engineer? LMAO, hell yeah: graduate student at UCSB, Materials.
 
Originally posted by: dexvx
Originally posted by: 3chordcharlie
If Rambus had become the de facto standard, it would not have remained at 1% royalties. Intel and Rambus tried to corner a market, and their anti-competitive behaviour was met with anti-competitive retaliation. If Rambus was so good, they should have been hiring out technical consultants to get memory manufacturers up to speed, rather than trying to simply 'cash in'.

Anti-competitive behavior? The only anti-competitive behavior came from the DRAM manufacturers. And pray tell, how would Rambus corner the market? They don't even MAKE anything. They are just an IP company with technical know-how.

By creating a closed format, like WinZip tried to do with one of its recent versions.

Rambus created a technology and realized they could make more money by extracting licensing fees than by actually producing something or selling their expertise. Now, maybe the DRAM manufacturers objected because the loss of an open standard is damaging to the benefits of a free market, but that's unlikely; more likely is that they would be tied for the long term to being pure commodity producers, with all innovation (and associated profits) accruing to the owners of RDRAM IP.

Originally posted by: 3chordcharlie
I happen to think DDR2 was the wrong memory at the wrong time; process improvements suggest that traditional DDR could have expanded at least to 250-300MHz system clocks, and DDR latencies are definitely superior to DDR2 latencies; it's going to take a lot of clock speed before DDR2 is a clear performance improvement.

Guess who pushed DDR2? The DRAM OEMs.

For the same reasons that Intel bet the farm on netburst: they didn't expect DDR to scale as well as it did (meanwhile, Intel expected netburst to scale much higher).

If DDR had genuinely stalled at 400MHz (instead of being artificially stalled at that level), then DDR2 at 600+ MHz would be a real benefit.

As it stands, we should be getting systems with stock DDR500 RAM, but we're not. The two issues are separate, though: DDR2 was a miscalculation; RDRAM was an excellent technology that was co-opted for a grab at market power.
 
Prescott was definitely entirely new, considering the amount of changes made over Northwood. See, you really do learn something new every day. Just to set the record straight, process shrinks really are automatic goodness... unless the process is totally fuxed, but as Dothan demonstrated, the process was fine.

Enough with the obvious hypotheticals already. No need to give yourself back-pats over 20/20 hindsight.

By the way, I don't know why you call me a "passionate Intel fanatic". I don't get paid enough to give a crap. But working here has allowed me to develop a finely tuned bullshit detector. 🙂
 
Every post of yours is dedicated to defending Intel, to the point that you are willing to say Prescott was "entirely new". You know very well that Prescott was a further investment in scaling the very same netburst technology higher. You know the rest of that story.
 