Anand Sandy Bridge performance preview is up

Page 4

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
where is the overclocking???

"While multipliers were locked, Intel left FSB overclocking open."

I'm guessing Anand failed, lol.

Remember what I said about that FSB OCing... LOL...

That quote is talking about the history of overclocking.

Anand had a locked CPU, IIRC, and since FSB overclocking is dead... yeah, I bet he failed.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
As put forward on several business sites... yes and yes. Cross-licensing was about the only resolution that made sense for motherboards and other third parties. Imagine if only one of AMD or Intel got rights to DDR4; that wouldn't be fair competition. Several recent settlements on both sides involved cross-licensing for four- or five-year periods (a couple of generations), and newer agreements will be put in place depending on future developments. Standard legal blah-blah stuff, with hands forced by the FTC and other entities.

At their levels (Intel's and AMD's), you can try to enforce patents all you want, but if all (or most of) the mobo manufacturers decide they don't want the legal hassles and drop one side, kinda makes you want to rethink sharing, doesn't it? Insane amounts of "licensing proceeds" swap sides at ridiculous rates daily; you just don't normally hear about it. Right now, because a few suits were settled, thereby fixing dollar amounts to some, many on the sidelines have jumped in too, Paul Allen's group among several others. There have been flurries on the legal front that have investor crowds keeping a peeled eye on legal liabilities and the expense of fending off said suits. That's why it's usually cheaper and less of a headache to settle with "hush money" and make some just go away; others get cross-licensing. Stockholders get kinda cranky if they hear from too many lawyers with lots of digits attached.

Imagine if the only company that could make x86 motherboards was Intel due to patents :p oh wait.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I don't give a FUCK about how much these chips cost if they'll run starcraft 2, finally

HOWEVER, if they try to anal me with a $400 motherboard... I WILL kill kittens in anger. :mad:
Yeah, 'cause those 3+ year old CPUs that, you know, a ton of us run, because the new ones just aren't worth the $ as upgrades, don't do the job. No sirree. This acceptable performance must be a figment of our imaginations!
 

khon

Golden Member
Jun 8, 2010
1,318
124
106
Also, I'm curious as to how "unlocked" the K-series will be. And do we get to set individual multipliers for 1/2/4 cores active? 3.3 with 4 active and 4.0 with 2 would satisfy me quite well.

With the right MB you can do this with the current K-series CPUs, and I see no reason Intel would change that with these upcoming ones.

I'll be trying for the same thing myself, though I was thinking more like 4GHz with 4 cores active and 4.5GHz with 2.
 
Last edited:

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Yes... since the P67 chipset doesn't support the IGP, you might as well disable it and lower the thermals...

By the way, this might be the generation where the iGPU never needs to be disabled completely, even if it's not displaying results to the monitor.

Llano hasn't shown enough evidence of close enough work between the two to aid the CPU beyond what the GPU is normally used for, but with Sandy Bridge there is. This point also applies to "offloading FP". It's more than that.

The opposite might be true as well: the GPU can use CPU resources to help accelerate graphics code. That's something that would never be doable without having the two on the same die. Quite a twist on "the GPU is so weak it needs the CPU to assist", isn't it? :)
 

ilkhan

Golden Member
Jul 21, 2006
1,117
1
0
With the right MB you can do this with the current K-series CPUs, and I see no reason Intel would change that with these upcoming ones.

I'll be trying for the same thing myself, though I was thinking more like 4GHz with 4 cores active and 4.5GHz with 2.
Oh, good. I've never played with a Core i K-series. Your goals are better, of course; I was just trying to be conservative/realistic.

By the way, this might be the generation where the iGPU never needs to be disabled completely, even if it's not displaying results to the monitor.

Llano hasn't shown enough evidence of close enough work between the two to aid the CPU beyond what the GPU is normally used for, but with Sandy Bridge there is. This point also applies to "offloading FP". It's more than that.

The opposite might be true as well: the GPU can use CPU resources to help accelerate graphics code. That's something that would never be doable without having the two on the same die. Quite a twist on "the GPU is so weak it needs the CPU to assist", isn't it? :)
I know they interact with thermals because of separate turbo modes, and share cache, but what else do we know they share?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
I know they interact with thermals because of separate turbo modes, and share cache, but what else do we know they share?

I could be completely wrong about this, but I think we know very little about the ISA organization of Intel's GPU (unlike the case with their Larrabee, NV's CUDA, or AMD's Evergreen)... the info may be out there, but I have never come across it. (I haven't actively sought it out, either...)
 

Cogman

Lifer
Sep 19, 2000
10,286
145
106
Well, Intel has finally produced a GPU that doesn't totally suck. The fact that it competes with the AMD 5450 is pretty heartening (a huge step from where they were).

Not saying this is some sort of awesome GPU, just that it looks to be far better than the utter garbage they have been releasing for years.

It would be nice to see Intel start to put some high-performance GPUs in their future CPUs.

The other thing that will be interesting to see is how OpenCL/DirectCompute performance is affected by an on-die GPU solution. Given that the CPU and GPU share the same memory and have a pretty short communication path, it seems to me like OpenCL and the like might see some pretty decent latency improvements, making it more worthwhile to move operations onto the GPU.
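A back-of-the-envelope sketch of that tradeoff (every number is invented for illustration, and `offload_wins` is a hypothetical helper, not a real API):

```python
# Toy model: offloading to the GPU only wins when the compute savings
# outweigh the round-trip transfer/launch overhead. All numbers here
# are made up purely for illustration.
def offload_wins(cpu_time_ms: float, gpu_time_ms: float, overhead_ms: float) -> bool:
    """True if running on the GPU (including overhead) beats the CPU."""
    return gpu_time_ms + overhead_ms < cpu_time_ms

cpu_time = 10.0  # ms to run some kernel on the CPU
gpu_time = 2.0   # ms to run the same kernel on the GPU

# With discrete-card style overhead (say 12 ms over PCIe), offload loses;
# with on-die style overhead (say 0.5 ms), the same kernel is worth moving.
print(offload_wins(cpu_time, gpu_time, overhead_ms=12.0))  # False
print(offload_wins(cpu_time, gpu_time, overhead_ms=0.5))   # True
```

The point is just that shrinking the fixed overhead lowers the size of work item for which offload pays off.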
 
Last edited:

khon

Golden Member
Jun 8, 2010
1,318
124
106
Oh, good. I've never played with a Core i K-series. Your goals are better, of course; I was just trying to be conservative/realistic.

Seems kind of pointless to me.

3.3 GHz is the stock speed with 4 cores active, and 4GHz is only 8% above the stock turbo speed. Why even bother with such a small overclock?
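The arithmetic here checks out, assuming an i5-2500 at 3.3GHz base with all cores active and 3.7GHz peak turbo (the 3.7 figure is my assumption for illustration):

```python
# Verify the overclock percentages quoted above.
base_ghz = 3.3    # stock speed with 4 cores active
turbo_ghz = 3.7   # assumed peak turbo speed
target_ghz = 4.0  # proposed overclock

over_base = (target_ghz / base_ghz - 1) * 100
over_turbo = (target_ghz / turbo_ghz - 1) * 100

print(f"4.0GHz vs. base:  +{over_base:.0f}%")   # +21%
print(f"4.0GHz vs. turbo: +{over_turbo:.0f}%")  # +8%
```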
 

Cogman

Lifer
Sep 19, 2000
10,286
145
106
Seems kind of pointless to me.

3.3 GHz is the stock speed with 4 cores active, and 4GHz is only 8% above the stock turbo speed. Why even bother with such a small overclock?

Kicks and giggles?

I agree, overclocking really only makes sense with the low end parts that have low clock speeds.
 

khon

Golden Member
Jun 8, 2010
1,318
124
106
I agree, overclocking really only makes sense with the low end parts that have low clock speeds.

That is not what I was saying at all. I was just saying that for the K-series to be at all worthwhile, you should probably be looking at a clock-speed bump of 20%+.

Hell, even the non-K parts could probably go about that high.
 

ilkhan

Golden Member
Jul 21, 2006
1,117
1
0
Seems more like pointless to me.

3.3 GHz is the stock speed for 4 cores, and 4GHz is only 8% above the stock turbo speed. Why even bother with such a small overclock ?
4GHz with all cores active is still a 21% increase over stock. And adding another $60 to the cost of the upgrade for a new heatsink doesn't fit the goals of the build either; the goals are cheap and powerful. Adding another couple hundred MHz with a better HS isn't worth the added cost.

Either way, chill. Not everybody wants to have world record overclocks.
 

khon

Golden Member
Jun 8, 2010
1,318
124
106
4GHz with all cores active is still a 21% increase over stock. And adding another $60 to the cost of the upgrade for a new heatsink doesn't fit the goals of the build either; the goals are cheap and powerful. Adding another couple hundred MHz with a better HS isn't worth the added cost.

Either way, chill. Not everybody wants to have world record overclocks.

Fair enough. BTW, I'm sorry if I came off as hostile; that was not the intent.

Why go for the K version in that case, though? It'll probably be another $30-40, and the overclock you're aiming for would be possible with the non-K version anyway, since those allow the multiplier to be raised a bit above stock turbo, which for the i5-2500 would probably mean around 4GHz.
 
Last edited:

deanx0r

Senior member
Oct 1, 2002
890
20
76
Are there any plans for a dual-core counterpart to Sandy Bridge? 4 cores seem like overkill and needlessly taxing on power for mobile computing.
 

kalniel

Member
Aug 16, 2010
52
0
0
You will be able to run up to 4 monitors in tandem with the discrete graphics and the integrated graphics, at least for laptops. I don't know about desktops.
I'm not sure if you meant to quote me when you put this reply just below my quoted text, but if you did: I'm not sure the ability to run extra monitors in tandem would make up for locked memory multipliers.

I would choose the unlocked memory frequencies and use a Radeon or GeForce card to transcode video :p
Good point :p But those cards need to offer high enough quality or free codecs, respectively; I'm not happy with the efforts so far.


By the way, this might be the generation where the iGPU never needs to be disabled completely, even if it's not displaying results to the monitor.

Llano hasn't shown enough evidence of close enough work between the two to aid the CPU beyond what the GPU is normally used for, but with Sandy Bridge there is. This point also applies to "offloading FP". It's more than that.
What evidence is this? The improved FP performance? I would be amazed if we were at that point yet.
 

ilkhan

Golden Member
Jul 21, 2006
1,117
1
0
Yeah, Sandy Bridge will have dual-core versions. The i3s are both duals, as the Pentium versions will be.
 

iamgenius

Senior member
Jun 6, 2008
826
113
106
I have two things to say:

1- All the fun is in overclocking... why would you do this, Intel? AMD, you now have a chance, I think. Make good overclockers' CPUs and I'll be happy to switch.

2- Who would benefit from the integrated graphics? I mean, if you are already a hardcore gamer and you always buy high-end cards, this won't help you... right? Only those who buy systems with no dedicated video card may benefit and get better video performance, but then again, those who don't spend $$$ on video cards don't really play games... I don't really get it.

Thanks.
 

khon

Golden Member
Jun 8, 2010
1,318
124
106
Something that occurred to me: for the Lynnfield -> SB replacements (i5-750 -> i5-2400, i5-760 -> i5-2500, and i7-870 -> i7-2600), the performance gains look to be around 30% (a 15-20% clock-speed bump plus a 10% clock-for-clock gain), which is pretty good.

However, some of the SB parts will be replacing Westmere (i3-550 -> i3-2100 and i3-560 -> i3-2120), and for those the improvement looks decidedly less positive, at maybe 5-10% (with a small clock-speed decrease).

So is this kind of anemic improvement what we can expect for LGA-2011 as well, since it too will be replacing Westmere parts?
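The ~30% figure is just the clock bump and the clock-for-clock gain multiplied together; a quick sketch using the estimates from the post:

```python
# Combined speedup = clock ratio x IPC ratio (estimates from the post).
clock_gain = 0.175  # midpoint of the 15-20% clock-speed bump
ipc_gain = 0.10     # ~10% clock-for-clock improvement

total_gain = (1 + clock_gain) * (1 + ipc_gain) - 1
print(f"combined gain: {total_gain:.0%}")  # prints "combined gain: 29%"
```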
 

Cogman

Lifer
Sep 19, 2000
10,286
145
106
I have two things to say:

1- All the fun is in overclocking... why would you do this, Intel? AMD, you now have a chance, I think. Make good overclockers' CPUs and I'll be happy to switch.

2- Who would benefit from the integrated graphics? I mean, if you are already a hardcore gamer and you always buy high-end cards, this won't help you... right? Only those who buy systems with no dedicated video card may benefit and get better video performance, but then again, those who don't spend $$$ on video cards don't really play games... I don't really get it.

Thanks.

1. Overclocking enthusiasts have never been a large portion of the market, so Intel really isn't losing any customers by making it harder to do.

2. HTPCs, grandmas, light gamers, kitchen computers, business workstations, large servers using GPGPU capabilities, mobile devices, etc. Gamers aren't the only people in the world who buy CPUs. There are lots of good uses for a powerful SoC sort of design (which is really where Intel seems to be going). Truth be told, hardcore gamers make up a very small portion of the market, and having a video card on the CPU doesn't hurt them either.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
So is this kind of anemic improvement what we can expect for LGA-2011 as well, since it too will be replacing Westmere parts?

Yeah, but the dual-core Westmere parts lack the low-latency memory controller, so the gains should be similar.

What evidence is this? The improved FP performance? I would be amazed if we were at that point yet.

Intel didn't specifically target the FP units; the overall IPC improved, hence FP performance went up as well. Some of the architectural changes might be more favorable to media, too. Makes sense?

Also, we are not at the point where the GPU can replace the CPU for FP ops. That will only happen in the speculative post-2015 era, when the two are indistinguishable.

There's a paper which says that with a fast internal interconnect you can use a GPU core for prefetching. An expansion of the current approach, where a software VS on the CPU can be used in place of the hardware VS, could be there as well, to a degree that off-die CPU/GPU solutions can't match.

I'm not saying it's concrete. But a fast interconnect such as a ring bus isn't there just to make adding cores easy.
 
Last edited:

Cogman

Lifer
Sep 19, 2000
10,286
145
106
Also, we are not at the point where the GPU can replace the CPU for FP ops. That will only happen in the speculative post-2015 era, when the two are indistinguishable.

I'm not so convinced this will ever happen. GPUs are great for highly parallel FP calculations; however, I don't think I could ever see a GPU being used for serial FP calculations.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
I'm not so convinced this will ever happen. GPUs are great for highly parallel FP calculations; however, I don't think I could ever see a GPU being used for serial FP calculations.

What percentage of apps represent the case where (a) serial FP calcs are performed, and (b) the app itself is performance-limiting?

I couldn't care less whether the serial FPU instructions in MS Word are handled any faster on today's hardware versus next year's... MS Word is not a performance-limited app.

My FPU-intensive, performance-limited apps (computational chemistry and finance modeling) already entail matrix math, which is readily amenable to parallelization.

So I'm thinking: what is the worst that can happen if my serial-FPU calcs are offloaded to a GPU and the execution latency increases 10x? Is my user experience with MS Word going to noticeably deteriorate?
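That question can be put in Amdahl's-law terms: if the serial-FP fraction of runtime is small, even a 10x slowdown on it can be swamped by speedups on the parallel bulk. The fractions below are illustrative, not measurements:

```python
# Amdahl-style sketch: relative runtime when a small serial-FP slice
# slows down 10x while the parallel bulk speeds up on a GPU.
def relative_runtime(serial_frac: float, serial_slowdown: float,
                     parallel_speedup: float) -> float:
    """Runtime relative to the all-CPU baseline (1.0 = unchanged)."""
    parallel_frac = 1.0 - serial_frac
    return serial_frac * serial_slowdown + parallel_frac / parallel_speedup

# 5% serial FP made 10x slower, remaining 95% made 5x faster:
t = relative_runtime(serial_frac=0.05, serial_slowdown=10.0, parallel_speedup=5.0)
print(f"runtime vs. baseline: {t:.2f}x")  # prints "runtime vs. baseline: 0.69x"
```

Of course, if the serial fraction dominates (say 50%), the 10x penalty wrecks the total, which is the objection above.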
 

Edrick

Golden Member
Feb 18, 2010
1,939
230
106
OK, newbie question (after reading this thread and the S2011 thread):

How does one get these Intel CPUs well before release? I assume the people who have them are some sort of beta testers, but what qualifies someone as a beta tester?

Also, why do some of these people release info that I am sure Intel does not want released, without getting into any trouble?

Not that I mind, of course. I love reading about it. :)
 

Cogman

Lifer
Sep 19, 2000
10,286
145
106
What percentage of apps represent the case where (a) serial FP calcs are performed, and (b) the app itself is performance-limiting?

I couldn't care less whether the serial FPU instructions in MS Word are handled any faster on today's hardware versus next year's... MS Word is not a performance-limited app.

My FPU-intensive, performance-limited apps (computational chemistry and finance modeling) already entail matrix math, which is readily amenable to parallelization.

So I'm thinking: what is the worst that can happen if my serial-FPU calcs are offloaded to a GPU and the execution latency increases 10x? Is my user experience with MS Word going to noticeably deteriorate?

:) Well, let's face it, the chances of the FPU going away are only slightly greater than zero (in the next 10 years). I really doubt that Intel will drop FPU instructions from their CPUs or, worse, turn all the FPU instructions into some weird interpreted language to be shuffled off to the GPU.

Thinking about it, the only good example of serial FP execution I can think of would be video games. A little more thought produces video encoding as well (yes, there are GPU video encoders, but they quite frankly suck, for multiple reasons; one is that they don't generally do deterministic calculations, something that is pretty critical for a good encoder).

I guess one more area where it would be tough to do effective FP calculations on the GPU would be databases. Some database queries would work well with GPGPU (multiple rows doing math operations). However, the most common types of queries deal with one row from a table; those would take pretty big performance hits.

You MIGHT be able to get away with total/near-total GPU FP calculations in games using something akin to DirectCompute, but I won't hold my breath for it.
 

Cogman

Lifer
Sep 19, 2000
10,286
145
106
OK, newbie question (after reading this thread and the S2011 thread):

How does one get these Intel CPUs well before release? I assume the people who have them are some sort of beta testers, but what qualifies someone as a beta tester?
* Become an Intel engineer
* Become a trusted reviewer
* Steal one.

Of the three, the first is probably the easiest route.

Also, why do some of these people release info that I am sure Intel does not want released, without getting into any trouble?

Not that I mind, of course. I love reading about it. :)

To be a reviewer, you have to sign some pretty hefty NDAs and have a pretty good relationship with Intel. Trust me, no site is releasing info that Intel does not want released. And if one did, all that would happen is Intel contacting them and telling them to take it down. Most will comply, since they like getting new products before release.