32nm Quad's and Hexa's coming in March?

Page 3 - AnandTech Forums

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Why do you think HD6870 will not use any extra bandwidth compared to HD5870?

Because in the grand scheme of things the raw performance of the actual video card has much more impact than the bandwidth of the pci-e bus it's sitting on. So even if you put a 6870 on a 8x bus, I predict it will still run circles around a 5870 on a 16x bus. It's just not worth worrying about IMO.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
I realize Intel improves their products, but that doesn't mean people don't hold onto old hardware.

A good example is LGA 1366/1156. Even though those boards have been out for some time, lots of people are still using LGA 775. Therefore upgradability still matters.

Yes, but how many people would upgrade from a C2D to a C2Q instead of switching over to an I7 platform? To me it seems like people would rather hold out for a new generation/architecture instead of upgrading a 2 year old platform.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
27,256
16,113
136
Yes, but how many people would upgrade from a C2D to a C2Q instead of switching over to an I7 platform? To me it seems like people would rather hold out for a new generation/architecture instead of upgrading a 2 year old platform.
I am probably one example. I have a LOT of 775 boards, memory, etc. If I have one box that needs the abilities of a quad, blowing another $200 or more to replace the motherboard and RAM is a waste. And selling the old hardware is a pain, and I don't have the time to hassle with it. I sell it off to friends or on local craigslist, and even that takes forever.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Yes, but how many people would upgrade from a C2D to a C2Q instead of switching over to an I7 platform? To me it seems like people would rather hold out for a new generation/architecture instead of upgrading a 2 year old platform.

Yeah, but this has nothing to do with choosing X58 platform vs p55 platform.

What you are talking about is a CPU swap.

I still say X58 platform is a better choice for people wanting to keep the same mainboard 4+ years for the reasons I outlined in previous posts.
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Yeah, but this has nothing to do with choosing X58 platform vs p55 platform.

What you are talking about is a CPU swap.

I still say X58 platform is a better choice for people wanting to keep the same mainboard 4+ years for the reasons I outlined in previous posts.

Of course it does, unless one absolutely demands triple-channel memory and multiple x16 pci-e slots right now. Other than those 2 things, the only real advantage 1366 has is broader support for future cpus - something one might care about if they wanted a drop-in replacement with some 32nm 6-core version of i7 down the road.
 
Dec 30, 2004
12,553
2
76
No, it does not bode well for them... but fully utilized, an overclocked Ph2 can't lose to an i3. We see that "full" utilization in many apps currently that actually crunch information through 4 threads.

I stand by my verdict-- if i3 is _ever_ faster _now_ (and Hey Zeus' thread was proof it almost always wasn't), when full threading comes to gaming, the Ph2 will outperform it.

In the off scenario that an i3 beats the Ph2 today, the Ph2 is still capable of keeping the game nicely above 60fps.
Tomorrow, that i3 will not be able to keep up, just like it can't keep up in encoding.
Yep, we never saw the full overclocked comparisons (ie, 4.6 Ghz Core i3 vs 4 Ghz Phenom II x4).

Thankfully Anandtech did do a 4 Ghz Core i3 vs 3.4 Ghz Phenom II x4 gaming comparison though--->http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3724&p=5

I think it is absolutely ridiculous that an Intel dual core beats the flagship AMD quad core in the most quad-optimized game to date, Dragon Age: Origins. What would have happened if Intel hadn't purposely crippled the memory controller on Core i3?



Well yeah, the fact that the AMD Phenom II x4 does much better in encoding points to game engines lagging pretty badly in multi-threaded utilization 3+ years after the introduction of quad core, right?

In fact, the way programming is going, I even wonder if full-size mainboards will become less popular for gaming in the future. My newbie (non-IT-industry) guess is that SSD and video card technology will predominate instead (a small, compact, cheap mainboard, full-size video card, and nice SSD being the most cost-effective gaming experience).

That being said, I still think x58 is the way to go at the moment.

Hm. We'll have to wait and see. I didn't look at those numbers closely before. It's beating the AMD quad in 3 games. WoW I don't count, because you have to edit config.wtf to get more than 2 threads used. But still, it's... beating the Ph2. :(

I have to wonder if some of those games were built with the Intel compiler and some with a non-Intel compiler, because the performance difference is huge in some games (like those) and not huge in others.
FWIW, the Ph2 965 he used does not have its CPU-NB overclocked at all. I think bumping that up to 2.6GHz and the core frequency up to 3.8GHz would handle the i3 at 4.6GHz quite nicely.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Of course it does, unless one absolutely demands triple-channel memory and multiple x16 pci-e slots right now.

But isn't the PCI-E slot advantage one of the major advantages of going X58 over P55 if SATA 3/USB3 devices are used?

Like I was saying earlier, to the best of my knowledge P55 can only handle PCI-E 2.0 x8 if it is using the necessary AIB (for SATA 3, etc.).

PCI-E 2.0 x8 = PCI-E 1.x x16. <----I am not so sure how long that bandwidth will be adequate for future video cards.
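The equivalence being claimed here checks out arithmetically. A quick sketch (assuming the standard per-lane rates of 2.5 GT/s for PCI-E 1.x and 5 GT/s for PCI-E 2.0, both generations using 8b/10b encoding):

```python
# Rough check of the "PCI-E 2.0 x8 = PCI-E 1.x x16" claim.
# Per-lane signaling rates in GT/s; both generations use 8b/10b encoding,
# so only 8 of every 10 bits on the wire are payload.
GT_PER_LANE = {"1.x": 2.5, "2.0": 5.0}

def bandwidth_gb_s(gen: str, lanes: int) -> float:
    raw_gbit = GT_PER_LANE[gen] * lanes  # raw Gbit/s across all lanes
    payload_gbit = raw_gbit * 8 / 10     # strip 8b/10b encoding overhead
    return payload_gbit / 8              # bits -> bytes

print(bandwidth_gb_s("2.0", 8))   # 4.0 GB/s
print(bandwidth_gb_s("1.x", 16))  # 4.0 GB/s
```

So a 2.0 x8 link and a 1.x x16 link both top out around 4 GB/s of payload each way.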
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
But isn't the PCI-E slot advantage one of the major advantages of going X58 over P55 if SATA 3/USB3 devices are used?

Like I was saying earlier, to the best of my knowledge P55 can only handle PCI-E 2.0 x8 if it is using the necessary AIB (for SATA 3, etc.).

PCI-E 2.0 x8 = PCI-E 1.x x16. <----I am not so sure how long that bandwidth will be adequate for future video cards.

That depends on the individual board. For example, on my Asus P55 the second PCI-E slot is actually a 4x electrical slot running off the SB. That's not ideal for Crossfire, which I don't plan on using, but it also means I get a full 16x to my video card no matter what.

It also depends on whether SATA 3 or USB 3 becomes a big deal within the expected lifetime of current P55 boards. Personally, I don't expect a lot of people to get add-in boards for them, but rather to hold out until their next upgrade to a future chipset.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
It IS 32nm. Fuad just wonders if it'll be 4 or 6 cores. From the naming it won't be 6.

Apparently the Core i7 970 won't be 32nm, but 45nm Bloomfield core instead. Here are some new CPUs coming out this year.

Core i7 980X: Gulftown/32nm/3.33GHz/3.60GHz Turbo Mode
Core i7 970: Bloomfield/45nm/3.33GHz/3.60GHz Turbo Mode
Core i7 880: Lynnfield/45nm/3.06GHz/3.73GHz Turbo Mode
Core i5 680: Clarkdale/32nm/3.60GHz/3.86GHz Turbo Mode
Core i3 550: Clarkdale/32nm/3.20GHz/No Turbo
 

lopri

Elite Member
Jul 27, 2002
13,314
690
126
I still want to know whether these rumored 4-core 32nm chips are native quad-cores or disabled hex-cores.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
I still want to know whether these rumored 4-core 32nm chips are native quad-cores or disabled hex-cores.

I would lean heavily to a cut-down hexacore. This would enable Intel to basically produce the same 32nm chip, and just disable cores as needed.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
I would lean heavily to a cut-down hexacore. This would enable Intel to basically produce the same 32nm chip, and just disable cores as needed.

Possible. They will probably "delete" the 2 cores rather than just fusing them off. They seem to have done that with the extra memory controller channel on Lynnfield.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I don't care, as long as they clock well :)

The midrange Xeons come with an 18x, 19x, or 20x multiplier.

It will be interesting to see how much cooler is really needed for 32nm when the mainboard/RAM may be the limiting factor.

Also I wonder with 32nm if the mounting interface (contact pressure) to an extent becomes more important than the physical size of the cooler itself?
 

Gillbot

Lifer
Jan 11, 2001
28,830
17
81
The midrange Xeons come with an 18x, 19x, or 20x multiplier.

It will be interesting to see how much cooler is really needed for 32nm when the mainboard/RAM may be the limiting factor.

Also I wonder with 32nm if the mounting interface (contact pressure) to an extent becomes more important than the physical size of the cooler itself?
That's my suspicion as well, especially with the lower end ones like the L5609. At 200 bclk, that 14x multi limits you to 2.8GHz, so you are really going to need to push up bclk to get anything out of it. Heck, even some of the better ones need more multi if you are really going to push them.
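The multiplier math here is straightforward: core clock is just bclk times the multiplier, so a locked low multi like the L5609's 14x forces a very high bclk for any meaningful overclock. A trivial sketch:

```python
# Core clock = base clock (bclk) x multiplier.
def core_clock_mhz(bclk_mhz: int, multi: int) -> int:
    return bclk_mhz * multi

print(core_clock_mhz(200, 14))  # 2800 MHz: the L5609's 14x multi at 200 bclk
print(core_clock_mhz(200, 20))  # 4000 MHz: a 20x midrange Xeon at the same bclk
```

In other words, even at an aggressive 200 bclk the 14x chip is capped at 2.8GHz, while a 20x part reaches 4GHz without pushing bclk any further.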
 

AdamK47

Lifer
Oct 9, 1999
15,782
3,606
136
Hey! I've been on this forum for over 10 years, where is my free 6 core CPU?