Case Proven: People who think X2 > Core2 clock for clock


Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: zsdersw
Oh please. Netburst was not an abomination. Its competition did better on most of the benchmarks, but it was hardly anything close to what you're characterizing it as. It's the same thing with C2D versus K8. Just because C2D is better on most of the benchmarks doesn't make K8 an "abomination".

How much would a CPU have to suck before you'd admit that it sucks? The Netburst architecture lagged behind the older K7 in performance for the first half of its life. The modern equivalent would be if the new C2D lagged behind the older K8. The fact that the newer C2D is faster is normal technological progression, nothing spectacular about it.
 

zsdersw

Lifer
Oct 29, 2003
10,505
2
0
Originally posted by: munky
Originally posted by: zsdersw
Oh please. Netburst was not an abomination. Its competition did better on most of the benchmarks, but it was hardly anything close to what you're characterizing it as. It's the same thing with C2D versus K8. Just because C2D is better on most of the benchmarks doesn't make K8 an "abomination".

How much would a CPU have to suck before you'd admit that it sucks? The Netburst architecture lagged behind the older K7 in performance for the first half of its life. The modern equivalent would be if the new C2D lagged behind the older K8. The fact that the newer C2D is faster is normal technological progression, nothing spectacular about it.

There's nothing spectacularly horrible about Netburst either.
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: zsdersw
Originally posted by: Viditor
I don't think you get it here Z...
The ODMC of the K8 decreases the # of clock cycles burned in memory read/write by ~35-40%.
Anandtech article
So, no...it's not similar to using lower latency ram.

Perhaps it's you who doesn't get it. We're talking about the impact on C2D.. not K8.

Is there a reason you think it would be different on C2D?
 

zsdersw

Lifer
Oct 29, 2003
10,505
2
0
Originally posted by: Viditor
Is there a reason you think it would be different on C2D?

Yes. The fact that C2D was designed to work within the framework of the FSB approach.
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: zsdersw
Originally posted by: Viditor
Is there a reason you think it would be different on C2D?

Yes. The fact that C2D was designed to work within the framework of the FSB approach.

That doesn't really answer the question...what part of that design would preclude C2D from taking advantage of the lower latency given by a theoretical ODMC?
 

dexvx

Diamond Member
Feb 2, 2000
3,899
0
0
Originally posted by: munky
Originally posted by: zsdersw
Oh please. Netburst was not an abomination. Its competition did better on most of the benchmarks, but it was hardly anything close to what you're characterizing it as. It's the same thing with C2D versus K8. Just because C2D is better on most of the benchmarks doesn't make K8 an "abomination".

How much would a CPU have to suck before you'd admit that it sucks? The Netburst architecture lagged behind the older K7 in performance for the first half of its life. The modern equivalent would be if the new C2D lagged behind the older K8. The fact that the newer C2D is faster is normal technological progression, nothing spectacular about it.

At the time of release, Netburst was designed to use the SSE2 FPU instead of the traditional x87 FPU. Obviously, it was a huge gamble because no one really knew how to use the SSE2 FPU. Intel's compiler technology was behind their hardware curve, so a simple recompile wouldn't have done much until years later.

Nowadays, it's a totally different story. x86-64 basically *requires* you to use SSE2 as the main FPU instead of x87. What this means is that older Willamettes see pretty large performance gains in newer software compared to the older K7. Right now, with SSE2 code (which the majority of new apps use), even old Willamettes regularly outperform Athlon XPs of the same rating or higher, something unheard of at release.

This is similar to the GeForce2 Ultra/GeForce3 story. At release the GF3 was barely on par with the GF2 Ultra, but starting around 2-3 years ago the GF3 became the minimum baseline for pretty much any game, while the GF2 Ultra was left in the dark altogether.

---

BTW, for those people who think Netburst is a total failure due to its 20-stage (and later 30-stage) pipeline design, the CPU architects at Intel think 30 is the "optimal" number. The CPU architects at IBM think 50 is the "optimal" number.
 

zsdersw

Lifer
Oct 29, 2003
10,505
2
0
Originally posted by: Viditor
That doesn't really answer the question...what part of that design would preclude C2D from taking advantage of the lower latency given by a theoretical ODMC?

The fact that memory latency (either the memory's specific latency or the latency of the FSB) doesn't have a significant effect on its performance.
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: zsdersw
Originally posted by: Viditor
That doesn't really answer the question...what part of that design would preclude C2D from taking advantage of the lower latency given by a theoretical ODMC?

The fact that memory latency (either the memory's specific latency or the latency of the FSB) doesn't have a significant effect on its performance.

If you still believe that after reading the links I posted, then I guess you will not be convinced. I can only say that you should study the difference between RAM latencies and the ODMC-vs-FSB latencies a little closer...
Note here that the difference in latency using low-latency RAM is only 1-2%, while, if you remember, the difference in latency between K7 and K8 (FSB vs ODMC) is more like 35-40%...
 

imported_Crusader

Senior member
Feb 12, 2006
899
0
0
Originally posted by: zsdersw
Originally posted by: Viditor
That doesn't really answer the question...what part of that design would preclude C2D from taking advantage of the lower latency given by a theoretical ODMC?

The fact that memory latency (either the memory's specific latency or the latency of the FSB) doesn't have a significant effect on its performance.

Your statement is incorrect; yes it does, just not with today's memory speeds.

In the future, Intel needs to move to an ODMC (probably around the time AMD pushes out the K8L) or they'll eventually get passed up.

Intel knows C2D would benefit from an ODMC.. which is why they are implementing one. As I've said.. if you are right you should get on the horn and let Intel know so they don't screw up. ;) I'm guessing they know better than -you-.
 

zsdersw

Lifer
Oct 29, 2003
10,505
2
0
Originally posted by: Viditor
If you still believe that after reading the links I posted, then I guess you will not be convinced. I can only say that you should study the difference between RAM latencies and the ODMC-vs-FSB latencies a little closer...
Note here that the difference in latency using low-latency RAM is only 1-2%, while, if you remember, the difference in latency between K7 and K8 (FSB vs ODMC) is more like 35-40%...

What affects K8 won't necessarily affect C2D.. or affect it in the same way. The two are very different in how everything related to memory impacts their overall performance.

 

zsdersw

Lifer
Oct 29, 2003
10,505
2
0
Originally posted by: Crusader
Your statement is incorrect; yes it does, just not with today's memory speeds.

In the future, Intel needs to move to an ODMC (probably around the time AMD pushes out the K8L) or they'll eventually get passed up.

Intel knows C2D would benefit from an ODMC.. which is why they are implementing one. As I've said.. if you are right you should get on the horn and let Intel know so they don't screw up. ;) I'm guessing they know better than -you-.

They're implementing an IMC with a future chip.. not Conroe. Intel "knows" C2D would benefit from an IMC? And just how do -you- know that?
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: zsdersw
Originally posted by: Viditor
If you still believe that after reading the links I posted, then I guess you will not be convinced. I can only say that you should study the difference between RAM latencies and the ODMC-vs-FSB latencies a little closer...
Note here that the difference in latency using low-latency RAM is only 1-2%, while, if you remember, the difference in latency between K7 and K8 (FSB vs ODMC) is more like 35-40%...

What affects K8 won't necessarily affect C2D.. or affect it in the same way. The two are very different in how everything related to memory impacts their overall performance.

The designs are indeed different, but the concepts behind them really aren't...we are talking about 3 basic concepts here:

1. Bandwidth - how much data can we get to and from the core per clock
2. Latency - how quickly can we make it travel (the corollary to this is how far it must travel)
3. Core speed and efficiency - how well does the core deal with the data

C2D is a testament to Intel's engineers...they have created a chip with an amazingly fast and efficient core.
The bandwidth issue they have handled by increasing the FSB and internal throughput to an admirable level (though in greater than 2P scenarios this may yet prove to be an issue...we will see).
Latency has now become more of an issue for Intel, but only because they have raised the bar on the other 2 so high.

AMD is coming from the other direction...
They haven't had a bandwidth issue for some time, nor will they, thanks to HT.
Their latency is far superior to Intel's, but it's still an important metric for performance on K8.
With the release of K8L (theoretically), they will have a core that should prove close to equal to C2D's...

One of the reasons I have recently purchased some INTC shares is the 2009 release of CSI. My thinking is that (judging by roadmaps) Intel and AMD should remain neck and neck in core technology until then, though AMD will have the faster chip overall because of the lower latency inherent in both HT and the ODMC. However, when CSI is incorporated into Intel's designs, they should have the same or better latency and the same or better bandwidth. (I often invest well in advance of technology releases.)

With that said, if you can think of any specific reason why this is in error, or why C2D will not be able to utilize lower latency (according to all roadmaps I've seen, the CSI cores will be roughly the same as C2D) I would dearly love to hear them!
 

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
according to all roadmaps I've seen, the CSI cores will be roughly the same as C2D

c3d?

in regards to latency, if the prefetcher does a good job, then latency is hidden. and in general, it performs well. lower latency is nice, but the question is if exposed latency is the common case, and how badly the machine is hung up while waiting.

also, the large cache helps by giving the prefetcher more room to play with.
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: dmens
according to all roadmaps I've seen, the CSI cores will be roughly the same as C2D

c3d?

in regards to latency, if the prefetcher does a good job, then latency is hidden. and in general, it performs well. lower latency is nice, but the question is if exposed latency is the common case, and how badly the machine is hung up while waiting.

also, the large cache helps by giving the prefetcher more room to play with.

Good points dmens, and I believe harpoon84 touched on that as well...
However (keeping in mind that we are dealing with a hypothetical) K8L is reported to be matching C2D in much of this (though I don't know how well the large L3 will work...).

Given that scenario, and the fact that all designs must have SOME weakest link, what would you say is C2D's weakest link?
 

zsdersw

Lifer
Oct 29, 2003
10,505
2
0
Originally posted by: Viditor
With that said, if you can think of any specific reason why this is in error, or why C2D will not be able to utilize lower latency (according to all roadmaps I've seen, the CSI cores will be roughly the same as C2D) I would dearly love to hear them!

C2D doesn't have a latency problem. Additionally, the cores that will use CSI don't have to be all that different to make the presence of an IMC more relevant than it is with C2D.
 

myocardia

Diamond Member
Jun 21, 2003
9,291
30
91
Originally posted by: zsdersw
C2D doesn't have a latency problem.
Where do you come up with this crap? Every processor ever manufactured has a latency problem. Now, the C2D is, IMO, the least affected by latency. But that doesn't mean it doesn't have a latency problem.
 

OcHungry

Banned
Jun 14, 2006
197
0
0
I think he means Conroe was not late in its release, thus no "latency problem".
But I kind of agree with you on Conroe's latency problem. It should have been out 2 years ago, and now it's kind of late to have an impact on saving Intel from BK.
 

Viditor

Diamond Member
Oct 25, 1999
3,290
0
0
Originally posted by: myocardia
Originally posted by: zsdersw
C2D doesn't have a latency problem.
Every processor ever manufactured has a latency problem. Now, the C2D is, IMO, the least affected by latency. But that doesn't mean it doesn't have a latency problem.

I would agree with that completely...
While the large cache and excellent prefetch do minimize the effect (as dmens and harpoon point out), they are still just a workaround with limited future impact.
I still see latency (and to a lesser extent the FSB) as one of Intel's next big hurdles...
 

imported_Crusader

Senior member
Feb 12, 2006
899
0
0
Originally posted by: zsdersw
Originally posted by: Crusader
Your statement is incorrect; yes it does, just not with today's memory speeds.

In the future, Intel needs to move to an ODMC (probably around the time AMD pushes out the K8L) or they'll eventually get passed up.

Intel knows C2D would benefit from an ODMC.. which is why they are implementing one. As I've said.. if you are right you should get on the horn and let Intel know so they don't screw up. ;) I'm guessing they know better than -you-.

They're implementing an IMC with a future chip.. not Conroe. Intel "knows" C2D would benefit from an IMC? And just how do -you- know that?

Did I stutter son?
I said implement an IMC on C2D, not Conroe. But nice try pulling at that straw.

How do I know? I read, and most importantly, unlike you, I think. You mindless little twit.
I knew what you don't have a clue about, yet rambled on about for 5 pages... about 2 1/2 months ago-

Originally posted by: Wesley Fink
That just means as Memory Speed increases AM2 will benefit more and Intel will eventually need to move to an on-processor controller.


Originally posted by: zsdersw
C2D doesn't have a latency problem.

LOL Oh sweet merciful Jesus... you're lucky someone else got to this before I did.
You really are the fool, and an amusing one at that. Keep your dunce hat on and go sit in the corner before you continue to embarrass yourself.
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: dexvx
Originally posted by: Crusader
My point was, if you did buy the AMD (or Intel) either way you wont get a crap chip.

For years now, with the exception of Northwood, Intel has been selling crap chips.
There's nothing pathetic about using an X2 5000+ unless you are a rabid Intel fanboy elitist. Still a great, fast chip..

I'm with Crusader on this one -- an X2 5000+ for $500 is not a good deal these days, but if you're one of those weirdos who don't overclock ( ;) ), it's not a terrible chip. It's a terrible deal - go with a much lower-priced 3800+/4200+ and overclock it - but you see where I'm going with this... For the layperson who just wants to build a PC and has lots of cash to spend, a 5000+ is a decent chip to run at stock. Of course an E6600 is a better chip (even at stock, and especially when overclocking)...

Originally posted by: dexvx
You're saying its ok to overpay for a chip that:

1) Does less in the performance/watt category (upwards of 40-60% less).

That is a totally bogus statement. In the past, the A64/X2 was 50%+ better in performance/watt than the P4/P-D lineup of chips, but Conroe does not have that kind of lead in performance per watt - not even 65nm vs 90nm, especially against 512K-cache AMD chips, and especially not at idle. If you're not using the PC as a server or doing something that keeps the CPU at 100% usage all the time, the AMD might actually be a bit stingier on power consumption for general use, where CPU usage is typically <5% once you've got Word/internet/etc. open.

Intel having the kind of performance/Watt lead with C2D over X2 as AMD had with X2 over Pentium D is just FUD.

Hey look - AMD's still (marginally) better at idle and doesn't lose by much at load. Certainly nowhere near 40-60%.

The ONLY place you could make a case for Intel having a significant performance-per-watt lead over AMD right now (>25%) is at the very top - an X6800 @ 2.93 GHz vs an FX-62 - partly because the FX-62 runs at a higher voltage, has double the cache of the X2s, and is a bleeding-edge chip, while 2.93 GHz is a walk in the park for Conroe. And the performance of an X6800 spanks the FX-62 (25%+).

But if you're comparing the middle of the pack or below, say a 2.4 GHz 4600+ X2 with a 2.4 GHz E6400, performance per watt is not going to be that much better on the E6400.


2) Does less general performance (upwards of 20-30% less than a E6600 at stock).

That's true in a 5000+ vs an E6600. 5000+ is not a good chip to buy for those of us who know what overclocking is ;) .

Yet, how is this different from when you were buying "crap" Intel chips? I mean, P4 Prescotts/Cedar Mills made *ok* machines (they work well and are reasonably fast compared to previous-generation Athlon XPs and S754 A64s), but they fell into the same categories as #1 and #2, though NEVER #3 (except for the EEs).

Because the Intel P4/Pentium D chips were crap vs the AMD Athlon 64 939 and X2 chips they fought for some time. Power consumption was astronomical! Performance per watt (Intel's new favourite stat, yet something they wouldn't dare speak of before C2D) was also abysmal on the Pentium 4s/Ds vs the A64s/X2s.

The "last stop for Netburst" 65nm P4s weren't too bad, but the architecture was already way past its prime by that point. The X2 had been out for some time, too.


Originally posted by: atom
So, since when is AMD getting a cut from retailer markups? And how is this different from Intel? Intel makes a killing on the EE chips, AMD makes a killing on their FX chips, no surprises here.

I think Viditor can answer this, but it's also based on distributor demand. Obviously, since retailers like Newegg are selling out at inflated prices, the distributors are taking notice, and hence can mark it up. Since AMD sells directly to the distributors, they'd be idiots not to notice the series of markups as well. Also, I'm talking about the AM2 5000+, not even an FX chip.

So what's the deal, you're happy when Intel sells high margin chips but not AMD?
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: zsdersw
Idiot or not, name calling is pretty pathetic and juvenile. Try attacking the product and opinions.
Oh wait, you are wrong.. which is why you attack me instead of my post.
Forgot.

"Idiot" is name-calling.. and "pathetic and juvenile"? What do you suppose telling someone, as you did earlier, to "eat sh*t" is?

Telling someone to "eat sh*t" is juvenile. But who instigated the comment?


I don't know what the performance situation would look like if AMD's chips didn't have an IMC.. but that's not the point. The point is that the IMC and HyperTransport haven't been able to push AMD's chips ahead of Conroe/Merom/Woodcrest. So since Conroe/Merom/Woodcrest, with its FSB, beats AMD's chips.. what does that say about your proclamations of the FSB being "inferior"? It says they're crap. The FSB is only an inferior application in the 4P and 4P+ systems. Otherwise, it's definitely an example of "quality engineering".

You're totally ignoring factors of release date and internal architecture.

AMD's HyperTransport and IMC have been featured on A64s for two good years now, and gave AMD a tremendous boost in the IPC category over their previous chips.

Yes, AMD can't beat Intel clock-for-clock anymore with their 90nm chips and 512K or 1MB per-core X2 caches vs brand-new Intel 65nm chips with humongous 2MB and 4MB unified caches. But they put up a surprisingly good fight considering the age of the architecture and the fact that AMD is still on 90nm.

When AMD moves to 65nm, they might be able to bump up the cache, and/or clockspeeds (clockspeeds will go up eventually, but probably not dramatically at first).
 

zsdersw

Lifer
Oct 29, 2003
10,505
2
0
Originally posted by: myocardia
Originally posted by: zsdersw
C2D doesn't have a latency problem.
Where do you come up with this crap? Every processor ever manufactured has a latency problem. Now, the C2D is, IMO, the least affected by latency. But that doesn't mean it doesn't have a latency problem.

"Now, the C2D is, IMO, the least affected by latency."

Yup.. that's what I meant.
 

zsdersw

Lifer
Oct 29, 2003
10,505
2
0
Originally posted by: Crusader
I said implement an IMC on C2D, not Conroe. But nice try pulling at that straw.

How do I know? I read, and most importantly, unlike you, I think. You mindless little twit.
I knew what you don't have a clue about, yet rambled on about for 5 pages... about 2 1/2 months ago-

And what makes you so sure future chips will be called Core 2 Duo, especially ones with an IMC and CSI?

Sorry, I can't be a "mindless little twit" either. That's your job on OcHungry's days off.

 

zsdersw

Lifer
Oct 29, 2003
10,505
2
0
Originally posted by: jiffylube1024
Telling someone to "eat sh*t" is juvenile. But who instigated the comment?

It doesn't matter who instigated it. The choice was made to respond with a juvenile comment.

You're totally ignoring factors of release date and internal architecture.

AMD's HyperTransport and IMC have been featured on A64s for two good years now, and gave AMD a tremendous boost in the IPC category over their previous chips.

Yes, AMD can't beat Intel clock-for-clock anymore with their 90nm chips and 512K or 1MB per-core X2 caches vs brand-new Intel 65nm chips with humongous 2MB and 4MB unified caches. But they put up a surprisingly good fight considering the age of the architecture and the fact that AMD is still on 90nm.

When AMD moves to 65nm, they might be able to bump up the cache, and/or clockspeeds (clockspeeds will go up eventually, but probably not dramatically at first).

And if the shoe were on the other foot, I doubt you'd be making that claim. Since when does it matter that the newest from Intel is on a smaller process than AMD's? If we'd had 65nm chips from AMD to compare to 65nm chips from Intel at the time C2D made its debut, we'd have done so.. but we didn't, so we didn't. Comparisons are made between what's available at the time.
 

Accord99

Platinum Member
Jul 2, 2001
2,259
172
106
Originally posted by: jiffylube1024

That is a totally bogus statement. In the past, the A64/X2 was 50%+ better in performance/watt than the P4/P-D lineup of chips, but Conroe does not have that kind of lead in performance per watt - not even 65nm vs 90nm, especially against 512K-cache AMD chips, and especially not at idle. If you're not using the PC as a server or doing something that keeps the CPU at 100% usage all the time, the AMD might actually be a bit stingier on power consumption for general use, where CPU usage is typically <5% once you've got Word/internet/etc. open.

Intel having the kind of performance/Watt lead with C2D over X2 as AMD had with X2 over Pentium D is just FUD.

Hey look - AMD's still (marginally) better at idle and doesn't lose by much at load. Certainly nowhere near 40-60%.
Depends on the review and the motherboard used. Anandtech used an AMD-selected microATX motherboard for the AM2 processors while using a high-end 975X board for the C2Ds. If the AM2s had used an nForce 5 motherboard, then the C2D does have a 40%+ performance/watt advantage.

http://www.techreport.com/reviews/2006q3/core2/power-load.gif

But if you're comparing the middle of the pack or below, say a 2.4 GHz 4600+ X2 with a 2.4 GHz E6400, performance per watt is not going to be that much better on the E6400.

XBitlabs' measurements of CPU power usage suggest the 2.4GHz E6600 consumes 54% of the power of the 2.4GHz 4600+.

http://www.xbitlabs.com/images/cpu/core2duo-shootout/power-2.png


Originally posted by: jiffylube1024
Yes, AMD can't beat Intel clock-for-clock anymore with their 90nm chips and 512K or 1MB per-core X2 caches vs brand-new Intel 65nm chips with humongous 2MB and 4MB unified caches. But they put up a surprisingly good fight considering the age of the architecture and the fact that AMD is still on 90nm.
What's so humongous about a 1x2MB cache versus a 2x1MB cache?