Price notice.


ArchAngel777

Diamond Member
Dec 24, 2000
5,223
61
91
If we cut through the bullchit in this thread, one can easily say that an X2 3800+ is MORE than sufficient for gaming and that an E2160 O/C is a waste of money... That is reality. So we can argue a 3.6GHz quad core versus an E2160 @ 3.2GHz and you can tell me there is no tangible difference, and I can just say the same about an X2 3800+ and an E2160. They can both be overclocked and the difference is extremely small. So what is the deal?

Well, here is the deal. Not everyone uses their CPU for games only. So even if the benefit of a higher clocked CPU doesn't seem too great in games, you still have a much faster system overall.

Take it from forum users. I believe Zap mentioned that the E2160 O/C just 'felt' slower in his system than the Conroe counterpart. Why? Because it is.

So, the argument is somewhat pointless, IMO... Buy whatever you want, but the fact is, cache size DOES make a rather large difference, as apoppin pointed out.

Anyway, thanks OP for pointing out that the faster E4500 is actually cheaper than the E4400. :thumbsup:
 

ArchAngel777

Diamond Member
Dec 24, 2000
5,223
61
91
Originally posted by: SerpentRoyal
Originally posted by: harpoon84
SerpentRoyal,

In regards to the cache argument, you are clearly wrong. If anything, as clockspeed increases, the dependency on cache size increases as well.

http://xtreview.com/addcomment...tium-E2140-@-3ghz.html

There is a clear stepping in performance between 1MB -> 2MB -> 4MB @ 3GHz.

I hope that clears things up once and for all.

I never said that both would have the same performance at 3.0GHz! The extra cache is worth roughly an additional 200MHz (2MB) to 400MHz (4MB) of core speed. At higher game resolutions, the GPU is still the main bottleneck if you can get the CPU speed north of 3.0GHz.

If you can get an X2 to 2.8GHz+, you have the same solution, so I don't see your point. Plus, that is with current GPU hardware... Will it be sufficient when nVidia releases their next high-end chip in a few months? Probably not!
 

harpoon84

Golden Member
Jul 16, 2006
1,084
0
0
Originally posted by: SerpentRoyal
Originally posted by: harpoon84
SerpentRoyal,

In regards to the cache argument, you are clearly wrong. If anything, as clockspeed increases, the dependency on cache size increases as well.

http://xtreview.com/addcomment...tium-E2140-@-3ghz.html

There is a clear stepping in performance between 1MB -> 2MB -> 4MB @ 3GHz.

I hope that clears things up once and for all.

I never said that both would have the same performance at 3.0GHz! The extra cache is worth roughly an additional 200MHz (2MB) to 400MHz (4MB) of core speed. At higher game resolutions, the GPU is still the main bottleneck if you can get the CPU speed north of 3.0GHz.

If the game happens to be GPU bound and you game at high resolutions, then certainly. However, the latest games are really starting to become cache size dependent as well, so there should certainly be some sort of balance between CPU and GPU power.

For 8800GTS/2900XT or better GPUs I would recommend at least an E4x00 over the E21x0.
 

SerpentRoyal

Banned
May 20, 2007
3,517
0
0
The major factor that influences the responsiveness of a PC is the I/O performance and rpm of the HDD. Under controlled conditions, one should not be able to distinguish an E4300 from an E2160 in general office work and web browsing.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: SerpentRoyal
The major factor that influences the responsiveness of a PC is the I/O performance and rpm of the HDD. Under controlled conditions, one should not be able to distinguish an E4300 from an E2160 in general office work and web browsing.

that is quite a backdown from what you were saying earlier

IF you are *only* going to do general office and web browsing, a 3.0Ghz P4 is fine ... maybe a 1.4Ghz Tualatin Celeron is plenty if you don't have too much going on at once. :p

Fact is, CPU cache size is *now* becoming a factor to consider in gaming performance as is multicore.
 

SerpentRoyal

Banned
May 20, 2007
3,517
0
0
Simply responding to this statement:

"I believe Zap mentioned that the E2160 O/C just 'felt' slower in his system than the Conroe counterpart".
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
ah, the infamous "Felt-Like" Benchmark Suite ... widely used and trusted everywhere in important buying decisions.
:D

--but i don't see where Zap mentioned it ... just AA mentioned it [from another thread?] :p
 

Zap

Elite Member
Oct 13, 1999
22,377
7
81
Originally posted by: ArchAngel777
Take it from forum users. I believe Zap mentioned that the E2160 O/C just 'felt' slower in his system than the Conroe counterpart.

Eh? [checks stash of parts in garage] I've never owned an E2160 and my E6750 that showed up a couple of days ago is still in a sealed retail box. Must be a ZAP at some other forum. :confused: My "main" rig is a socket 939 x2, "gaming" rig is socket 939 single core, HTPC is socket AM2 x2. I have had some other C2D and Pentium Dual Core chips (and even one Conroe-L) but none has been in a "regular use" machine.
 

SerpentRoyal

Banned
May 20, 2007
3,517
0
0
Originally posted by: Zap
Originally posted by: ArchAngel777
Take it from forum users. I believe Zap mentioned that the E2160 O/C just 'felt' slower in his system than the Conroe counterpart.

Eh? [checks stash of parts in garage] I've never owned an E2160 and my E6750 that showed up a couple of days ago is still in a sealed retail box. Must be a ZAP at some other forum. :confused: My "main" rig is a socket 939 x2, "gaming" rig is socket 939 single core, HTPC is socket AM2 x2. I have had some other C2D and Pentium Dual Core chips (and even one Conroe-L) but none has been in a "regular use" machine.


Thanks for your clarification! Sounds too off-the-wall to be credible.
 

tno

Senior member
Mar 17, 2007
815
0
76
This thread's getting a little snarky, isn't it?

Pertinent to the topic: the reason CPU comparison tests are done at low resolution is so that performance is CPU bound; raising the resolution makes performance GPU bound, which renders the CPU comparison data pointless. So, yes, with a good video card (8800 class) the amount of cache in a CPU will make little difference, all else being equal. As such, and as noted in most cheap-build stories, if you're looking for the best bang-for-your-buck build, buy a cheap proc (E21x0) and a good video card (8800GTS).

It's telling, then, that the biggest difference between a cheap build and a medium build in most of those stories is usually the processor, because right behind video cards, CPUs are terribly important to gaming performance. And as apoppin said, cache size will play a larger role in all computer performance in the future: as more cores become able to process more and more data, the speed with which they can access memory (dictated by the FSB) becomes a limiting factor (see quad core comparisons for evidence), so having gobs of cache will speed things up considerably.
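The low-resolution argument above boils down to a simple bottleneck model: a frame can't finish faster than its slowest stage. A minimal sketch (the frame times below are invented for illustration, not measured from any of the CPUs discussed in this thread):

```python
# Toy model of CPU-bound vs GPU-bound frame rates. All numbers are
# hypothetical; the point is the shape of the result, not the values.

def frame_time_ms(cpu_ms, gpu_ms):
    # A frame cannot complete faster than its slowest stage.
    return max(cpu_ms, gpu_ms)

def fps(cpu_ms, gpu_ms):
    return 1000.0 / frame_time_ms(cpu_ms, gpu_ms)

# Two hypothetical CPUs: the larger-cache part does its per-frame
# work 15% faster.
fast_cpu, slow_cpu = 8.0, 9.2    # ms of CPU work per frame

low_res_gpu = 4.0     # low resolution: GPU finishes quickly
high_res_gpu = 16.0   # high resolution: GPU dominates either way

# CPU bound: the cache/clock difference shows up directly.
print(fps(fast_cpu, low_res_gpu), fps(slow_cpu, low_res_gpu))
# GPU bound: both CPUs produce identical frame rates.
print(fps(fast_cpu, high_res_gpu), fps(slow_cpu, high_res_gpu))
```

At low resolution the two CPUs diverge (125 vs ~109 fps in this toy example); at high resolution they tie at 62.5 fps, which is exactly why low-resolution tests are used to isolate CPU differences.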
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
What is snarky?
:confused:

--and i thought we finally agreed :p

i found this interesting from Valve's Gabe Newell ... perhaps you will also [quoted in very small part]:

http://www.next-gen.biz/index....view&id=7422&Itemid=61

Essentially Intel, about the time they were talking about the 10GHz Pentium 4, were focused on clockspeed over everything. They thought single thread of execution was the way to go. And because of that, processor scaling was not increasing linearly with transistors - it was sort of the square root: if you quadrupled the number of transistors you were only doubling the components of the CPU. The problem they ran into there were thermal issues. They weren't able to manage heat. They weren't going to be able to reach 10GHz without doing Freon cooling or something like that.

At the same time the GPU guys were essentially writing CPUs - there's no real difference, the GPU is just a CPU with a specific function: it runs graphics code. They were going in this different direction; they weren't trying to run it at incredibly high clock rates, they just had lots and lots of execution units - lots of cores, essentially. And Intel, because it could just throw tens of billions of dollars at its processor technology, was able to get a lot further with the single-thread direction than anyone else - but even they eventually said, 'We have to throw away this single thread of execution model. We have to go to multiple - we have to make this a software problem'.

Performance and scaling has stopped being a hardware problem and instead it's been turned into a software problem. That's bad news for us software guys - but for hardware it's good news, because it shifts the value proposition towards software developers. What it also means is clock rates will stay pretty much the same, but the number of execution units you have is going to explode. The good news is that we're going to spend an era of growing linearly for a while, so transistor budgets will translate directly to improvements.

At that point we said, we understand we have to make these investments in multi-core. We have to worry about not just two cores, but 64 threads; 512 threads - how are we going to reorganise it? What does that look like? But the more we look at it, the more excited we get. This current era is one of heterogeneous computing: you've got this one big chunk of code doing physics and AI, character animation and facial systems talking through this strange interface called DirectX to another chunk of your code which you write to run on GPUs. That's just going to go away. And either Nvidia or Intel is going to win the battle for whose array of cores is taken up.

So that's the backdrop behind us making these investments in multicore. Once you've made that decision, then adapting it to the 360 is fine, but we wouldn't have made this investment if it were just to garner those benefits for the 360 - it's because of the current and future investments on Intel's side that we can get really excited about it, because that's where AI and physics are going to experience the rapid performance increase that we've been seeing over the last several years exclusively reserved for 3D graphics. You look at how fast Nvidia and ATI have been increasing graphics performance in the last ten years - that's how much faster our physics and AI are going to improve.

Edge: What's the timescale for this boom?

GN: We're going to start seeing it now. We're going to be releasing multicore versions of Counter-Strike and Day of Defeat and Half-Life 2 after we ship The Orange Box. The challenge is going to be going forward. Right now we just have to deal with an order of magnitude of difference between DirectX 9 and DirectX 7 in terms of fill-rate and number of polygons. That's a set of scaling issues that we've managed to adapt to. Soon we will have to answer the question of how do you design a game experience that could go from ten characters on screen to 1,000 characters on the screen. And how do you turn that into something worth purchasing? Is having 100 persons on the screen really ten times as fun as having just ten people on the screen?

In 2008 and 2009 we're going to do stuff that's optimised for the new high-end that doesn't scale down, and use Steam to reach those customers, so we can start to learn what to do with 1,000 smart creatures on the screen at once. Then hopefully we can backfill and do more scaleable experiences.
The Video guys ignored it ... their loss
:D
it's *all* multicore ... very soon ... 2009 is the year of the QuadCore
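The square-root scaling Newell describes (quadruple the transistors, only double the single-thread performance; often cited as Pollack's rule) can be sketched numerically. This is an idealized model with assumed perfect parallel speedup and no FSB or memory bottleneck, which the thread itself notes is not realistic:

```python
import math

# Pollack's rule sketch: single-core performance grows roughly with the
# square root of the transistor budget, while splitting the same budget
# across N cores scales (ideally) linearly with N.

def single_core_perf(transistors):
    # One big core: perf ~ sqrt(transistor budget).
    return math.sqrt(transistors)

def multicore_perf(transistors, cores):
    # N smaller cores, each built from budget/N transistors,
    # assuming perfectly parallel work and no memory bottleneck.
    per_core = math.sqrt(transistors / cores)
    return cores * per_core

budget = 4.0  # 4x a baseline transistor budget
print(single_core_perf(budget))   # one big core: only 2x the baseline
print(multicore_perf(budget, 4))  # four baseline-sized cores: 4x (ideal)
```

The gap between those two numbers is the whole argument for going multicore: once heat caps clock rates, spending transistors on more cores beats spending them on one bigger core, provided the software can actually use the threads.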
 

ArchAngel777

Diamond Member
Dec 24, 2000
5,223
61
91
Originally posted by: SerpentRoyal
Originally posted by: Zap
Originally posted by: ArchAngel777
Take it from forum users. I believe Zap mentioned that the E2160 O/C just 'felt' slower in his system than the Conroe counterpart.

Eh? [checks stash of parts in garage] I've never owned an E2160 and my E6750 that showed up a couple of days ago is still in a sealed retail box. Must be a ZAP at some other forum. :confused: My "main" rig is a socket 939 x2, "gaming" rig is socket 939 single core, HTPC is socket AM2 x2. I have had some other C2D and Pentium Dual Core chips (and even one Conroe-L) but none has been in a "regular use" machine.


Thanks for your clarification! Sounds too off-the-wall to be credible.

Since when does misremembering who said something make it less credible? Additionally, how would being off-the-wall reduce its credibility?
 

ArchAngel777

Diamond Member
Dec 24, 2000
5,223
61
91
Originally posted by: SerpentRoyal
So you're too lazy to check? Why not say "someone mentioned..." instead of typing "Zap mentioned"?

-1 for sidestepping the question. We're done here. There is no reasoning with you, as seen in countless other threads with other users. Whatever, carry on.