Case Proven: People that think X2 > Core2 clock for clock

Page 7

dexvx

Diamond Member
Feb 2, 2000
3,899
0
0
^

Thanks Accord99 for saving me the time. But yeah, CPU-wise, a high-end Core2 vs a high-end X2/FX chip is on the same level of comparison as a Pentium-D Smithfield to an earlier-revision X2. Yet for some reason, people are saying that if you buy an X2/FX now you're still getting a "finely engineered chip." That comparison doesn't hold up. The good news, however, is that the X2-5000 has dropped to slightly over the $400 mark; people are obviously getting a clue now.

---

In regards to larger caches being a "work-around" to the IMC, I think it's another solution rather than a work-around. There is more than one way to solve a problem. And as for the 2p+ commentary, it's been proven by the folks at XS that the quad-core Kentsfield scales very well (in parallel applications) on a modest 1066FSB.
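
To put that "modest 1066FSB" in perspective, here is the back-of-the-envelope bandwidth arithmetic. It assumes the standard 64-bit FSB data path, and the per-core split is purely illustrative, not a measurement:

```python
# Peak bandwidth of a 1066 MT/s front-side bus shared by a quad-core Kentsfield.
# Assumes the standard 64-bit (8-byte) FSB data path; figures are theoretical
# peaks, not benchmark results.

FSB_TRANSFERS_PER_SEC = 1066e6   # 266 MHz bus, quad-pumped
BUS_WIDTH_BYTES = 8              # 64-bit data path

peak_gbs = FSB_TRANSFERS_PER_SEC * BUS_WIDTH_BYTES / 1e9
print(f"Peak FSB bandwidth: {peak_gbs:.1f} GB/s")   # ~8.5 GB/s

for cores in (1, 2, 4):
    print(f"  split across {cores} core(s): {peak_gbs / cores:.1f} GB/s each")
```

Presumably the large shared L2 caches keep most traffic off the bus in those parallel workloads, which is why the sharing hurts less than the raw per-core split would suggest.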

---

As for NetBurst, it suffered from two problems:

1) Rambus got thrown out of the memory business by the memory cartel (you know all the price fixing? It was actually done to force Rambus out of the market). Simply put, DDR/DDR2 is a joke compared to Rambus technology. Whereas DDR2's latency went up as its clock speed increased, RDRAM's latency got lower as clock speed increased (rough latency arithmetic follows after this list). The expensive ICs in earlier RDRAM are a moot issue now, as DDR2 is approaching the same level (500-ish MHz).

2) The Prescott wall. I know some people from the Prescott design team, and there were some technical issues that weren't thought through as a whole. Add to that the lackluster 90nm transition, and there's your problem. They speculated that if Intel had just shrunk Northwood to 90nm (gaining a few hundred MHz) and done Prescott later on a more mature 90nm process, things would have gone a lot more smoothly.
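
To make the latency arithmetic from point 1 concrete: first-word CAS latency in nanoseconds is just CAS cycles divided by the I/O clock. The timings below are typical retail values of that era used purely as an illustration; they are assumptions, not figures from this thread:

```python
# First-word CAS latency in ns = CAS cycles / I/O clock (MHz) * 1000.
# The CAS timings below are typical retail values of the era, used only to
# illustrate the point made above; they are assumptions, not quoted data.

def cas_latency_ns(cas_cycles, io_clock_mhz):
    return cas_cycles / io_clock_mhz * 1000

examples = [
    ("DDR-400,   CL2.5", 2.5, 200),   # 200 MHz I/O clock
    ("DDR2-800,  CL5  ", 5.0, 400),   # 400 MHz I/O clock
    ("DDR2-1000, CL5  ", 5.0, 500),   # the '500-ish MHz' parts mentioned above
]

for name, cl, clock in examples:
    print(f"{name}: {cas_latency_ns(cl, clock):.1f} ns")

# DDR-400 CL2.5 and DDR2-800 CL5 both work out to 12.5 ns: the clock doubled,
# but so did the cycle count, so absolute latency barely moved until the
# I/O clock pushed toward 500 MHz.
```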
 

palindrome

Senior member
Jan 11, 2006
942
1
81
Originally posted by: dexvx
^

Thanks Accord99 for saving me the time. But yeah, CPU-wise, a high-end Core2 vs a high-end X2/FX chip is on the same level of comparison as a Pentium-D Smithfield to an earlier-revision X2. Yet for some reason, people are saying that if you buy an X2/FX now you're still getting a "finely engineered chip." That comparison doesn't hold up. The good news, however, is that the X2-5000 has dropped to slightly over the $400 mark; people are obviously getting a clue now.

---

In regards to larger caches being a "work-around" to the IMC, I think it's another solution rather than a work-around. There is more than one way to solve a problem. And as for the 2p+ commentary, it's been proven by the folks at XS that the quad-core Kentsfield scales very well (in parallel applications) on a modest 1066FSB.

---

As for NetBurst, it suffered from two problems:

1) Rambus got thrown out of the memory business by the memory cartel (you know all the price fixing? It was actually done to force Rambus out of the market). Simply put, DDR/DDR2 is a joke compared to Rambus technology. Whereas DDR2's latency went up as its clock speed increased, RDRAM's latency got lower as clock speed increased. The expensive ICs in earlier RDRAM are a moot issue now, as DDR2 is approaching the same level (500-ish MHz).

2) The Prescott wall. I know some people from the Prescott design team, and there were some technical issues that weren't thought through as a whole. Add to that the lackluster 90nm transition, and there's your problem. They speculated that if Intel had just shrunk Northwood to 90nm (gaining a few hundred MHz) and done Prescott later on a more mature 90nm process, things would have gone a lot more smoothly.

I agree with most of what you said, although you give Intel a little more credit than due and bash AMD a bit more than needed.

OK, for one, why is this even a discussion? Almost every 3D app is GPU-limited right now, and isn't DDR3 compatibility slated for release in about a year? As far as I know, AMD is skipping this generation of "Core 2"-type CPUs and focusing on the K8L (or whatever you want to call it now). Is that not what Intel did for like 5 or so years with the Pentium 4? It's a little embarrassing that a 100-billion-dollar company is struggling to compete with a 15-billion-dollar company...

Clock for clock at stock speeds Intel > AMD
But when you overclock, the gap closes (I'd assume due to the shared cache not being as beneficial at higher clock speeds)

I really don't see the point of debating old news when the 3D community should be talking about DX10 and 10.1 compatible video cards, not whose overpriced high-end CPU is the best... :confused:

Plus, why would any self-proclaimed expert buy high-end when you can just OC to the same speeds (for example: the old Opty 165 vs FX-60 K8-series CPUs)? Yeah, yeah, I realize the cache is larger on some of the higher-priced ones, but why do you need the most expensive CPU when overclocking has become so common now?
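
To put the Opty 165 versus FX-60 example into rough numbers: both are dual-core K8s with 2x1MB of cache, the FX-60 runs 2.6GHz stock, and a 165 commonly overclocked to around that speed. The prices below are rough placeholders, not actual quotes:

```python
# Rough dollars-per-attained-GHz for the Opteron 165 vs FX-60 example above.
# Prices are rough placeholders, not real market data; clocks assume the 165
# is overclocked to the FX-60's stock 2.6 GHz, which was a common result.

parts = [
    # (name,                       approx. price USD, attained clock GHz)
    ("Opteron 165 overclocked",     330, 2.6),
    ("Athlon 64 FX-60 at stock",   1000, 2.6),
]

for name, price, ghz in parts:
    print(f"{name}: ~${price / ghz:.0f} per GHz")
```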
 

atom

Diamond Member
Oct 18, 1999
4,722
0
0
Originally posted by: palindrome

I agree with most of what you said, although you give Intel a little more credit than due and bash AMD a bit more than needed.

OK, for one, why is this even a discussion? Almost every 3D app is GPU-limited right now, and isn't DDR3 compatibility slated for release in about a year? As far as I know, AMD is skipping this generation of "Core 2"-type CPUs and focusing on the K8L (or whatever you want to call it now). Is that not what Intel did for like 5 or so years with the Pentium 4? It's a little embarrassing that a 100-billion-dollar company is struggling to compete with a 15-billion-dollar company...

There's more to processing power than games. Look at some of the encoding and rendering benchmarks (as in professional CGI, not games): the Conroe clearly leads. As far as AMD goes, there's always gonna be a "chase and be chased" situation between the two, so I don't think Intel is any better or worse in that respect.

Clock for clock at stock speeds Intel > AMD
But when you overclock, the gap closes (I'd assume due to the shared cache not being as beneficial at higher clock speeds)

As you overclock, the performance gap only widens in Intel's favor...

I really don't see the point of debating old news when the 3D community should be talking about DX10 and 10.1 compatible video cards, not whose overpriced high-end CPU is the best... :confused:

Plus, why would any self-proclaimed expert buy high-end when you can just OC to the same speeds (for example: the old Opty 165 vs FX-60 K8-series CPUs)? Yeah, yeah, I realize the cache is larger on some of the higher-priced ones, but why do you need the most expensive CPU when overclocking has become so common now?

Again, games aren't the be-all and end-all for everyone.

But I have to agree that some people need to calm down; it's just a CPU. AMD isn't that bad today, just as the Pentium D wasn't that bad six months ago. The always-amusing thing about CPU wars is that the most fanatical people are the least likely to actually need all that processing power. They are all benchmark whores. Most professionals I've met who actually need the best usually don't give a crap about who makes which chip.
 

harpoon84

Golden Member
Jul 16, 2006
1,084
0
0
Originally posted by: palindrome
Clock for clock at stock speeds Intel > AMD
But when you overclock, the gap closes (I'd assume due to the shared cache not being as beneficial at higher clock speeds)

Got any numbers to back that up, or are you just like OcHungry, who claims C2D 'doesn't scale with overclocking'?

If you think AMD 'closes the gap' when overclocked, I suggest you take a good look at this article: http://www.xbitlabs.com/articles/cpu/display/core2duo-e6300.html

If anything, Intel's lead grows once it is overclocked.
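
For anyone who wants to settle the scaling question with data rather than assertions, the check is simple: normalise a benchmark score by clock at stock and when overclocked and see whether points-per-MHz holds up. A minimal sketch follows; the scores and clocks are made-up placeholders, not results from the xbitlabs article:

```python
# Does a chip "scale with overclocking"? Compare points-per-MHz at stock and
# overclocked: if the ratio stays near 1.0 the core scales cleanly; if it
# drops, something else (memory, FSB, cache) is the limiter.
# Scores and clocks are made-up placeholders, not figures from the article.

def per_mhz(score, clock_mhz):
    return score / clock_mhz

runs = [
    ("stock,       2400 MHz", 100.0, 2400),
    ("overclocked, 3200 MHz", 130.0, 3200),
]

baseline = per_mhz(runs[0][1], runs[0][2])
for name, score, clock in runs:
    ratio = per_mhz(score, clock) / baseline
    print(f"{name}: {score:.0f} pts, {ratio:.2f}x stock points-per-MHz")
```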
 

dmens

Platinum Member
Mar 18, 2005
2,275
965
136
Originally posted by: Viditor
Given that scenario, and the fact that all designs must have SOME weakest link, what would you say is C2D's weakest link?

The whole point of uarch is to balance it out so there are no perf-curve anomalies due to any single factor. There is no single weakest link in a good design.
 

imported_Lothar

Diamond Member
Aug 10, 2006
4,559
1
0
Originally posted by: palindrome
Clock for clock at stock speeds Intel > AMD
But when you overclock, the gap closes (I'd assume due to the shared cache not being as beneficial at higher clock speeds)

Complete rubbish.
 

sxr7171

Diamond Member
Jun 21, 2002
5,079
40
91
Originally posted by: Crusader
AMD makes a great chip.
It's Intel that finally caught up.

Before, it was insane to consider buying Intel.
Now the situation is merely that you could buy either one and really not go wrong.


Intel more than caught up. Last year I would buy nothing other than AMD; this year I would buy nothing other than Intel. The performance increase going from my A64 3200+ (overclocked to 2600MHz) to an X2 5000+ is so small that I would rather buy a new motherboard and get a real jump in performance by way of Core 2 Duo.

BTW, today's AMD marketing numbers are way out of whack. 5000+ has no meaning anymore; the numbers seem arbitrary and based on nothing. They may as well increase the clock speed by 200MHz, add a little more cache, and call it an 8000+.
 

F1shF4t

Golden Member
Oct 18, 2005
1,583
1
71
Seriously, people are taking this too seriously; it's a darn CPU, who cares?

Out of the 20+ people I know who actually know things about computers (can build 'em, etc.), I'm the only one who does any sort of overclocking. So what difference would it make for them to have an E6300 over a 4200+, which perform about the same at stock speeds? Just pick whichever is on sale at the time or has a better deal on a mobo.

Now, if I were building a new comp I would certainly get a Conroe, because it overclocks like mad. Then again, I'm still perfectly happy with my 3800+, which spent the last 6 months at 2.7GHz (it was at stock for about 2 hours :p after I bought it) and is now at 2.8GHz since I got better RAM :)
 

imported_Lothar

Diamond Member
Aug 10, 2006
4,559
1
0
Originally posted by: sxr7171
BTW, today's AMD marketing numbers are way out of whack. 5000+ has no meaning anymore; the numbers seem arbitrary and based on nothing. They may as well increase the clock speed by 200MHz, add a little more cache, and call it an 8000+.

Their number ratings have been out of whack ever since the Athlon XP days.