4-Core Ivy Bridge VS my 12-core MacPro

dsc106

Senior member
May 31, 2012
320
10
81
SHORT VERSION: How does a 4-core Ivy Bridge compare to 2x 6-core Xeon Westmeres performance-wise, in apps such as FCPX & Adobe CS6?

LONG VERSION:

I currently own an Apple MacPro desktop (that was purchased for me from my previous company). I do a lot of video editing, currently using FCP7.0/FCPX but am thinking of moving to Adobe CS6 as it will likely fit my needs better. This is my primary concern as it revolves around my WORK. That said, I like fun, and I do use my system for high-end gaming, so that is a factor as well.

I would sell my MacPro tower and build a PC, selling the MacPro for around $4,000 and building a PC for under $2,000. I want to put in the latest MSI Thunderbolt motherboard just released, 32GB of RAM, and a GTX 670 or 680. For the processor, I was disappointed not to see a 6-core Ivy Bridge yet. The 2x 2.66GHz 6-core Intel Xeon "Westmere" processors (12 cores) in my MacPro have served me well... but I'm not sure how much use I am getting out of the 12 cores, even in these multi-core apps. Is there a way to really tell?

Would I be losing a lot of performance to build a replacement system (running Windows & possibly hackintosh) at 4 cores, but on the upgraded Ivy Bridge platform? Or for my work, would I be giving up a lot by moving away from the 12-core platform and dropping down to the Ivy Bridge quad-core?
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Mac or not, you are substituting a dual-socket workstation board for a single-socket consumer board. From a general standpoint I would disagree with this move because you are comparing two entirely different types of systems and you really can't do that (certainly not in the context of an "upgrade"), and I think it's too early to make a good decision on whether a 4-core Ivy Bridge system would be better for you in all cases.

From what you have described, all the heavy lifting you do is in Final Cut, during import/decode/encode operations, which have been demonstrated to use 12+ threads. If you have other scenarios that use many cores, describe them to us and we will try and determine how they should be affected by a smaller, higher-frequency ivy bridge part.

Traditionally, you could say all of the single-threaded gains you would expect from ivy bridge will not make up for the sheer number of cores that Westmere offers, but this tradeoff only applies to Final Cut based on what we know now. And a number of new technologies are going to throw wildcards into a time-honored assumption like that.

First, Premiere CS6 supports both CUDA and Quick Sync, so you could be looking at substantial hardware acceleration that you wouldn't get with your current system. Quick Sync is no joke, and has been shown to be comparable to up to 32 x86 cores in one test, or 100% snake oil in another. Personally I think Quick Sync is too new and too rigid to rely on for a production encoding machine, but it has tons of potential, and if you *do* make the switch to Ivy Bridge, you will definitely want to explore those capabilities with Premiere CS6. If it were my system, though, I would stick with x86 encoding for another generation or two.

CUDA however is a bit more flexible than QuickSync and Adobe has been developing with it for years. Look at these documents and see if any of the Premiere capabilities you might use are accelerated:

http://blogs.adobe.com/premiereprot...ocessing-and-the-mercury-playback-engine.html

http://blogs.adobe.com/premiereprot...y-playback-engine-and-adobe-premiere-pro.html

http://blogs.adobe.com/premiereprotraining/2012/05/opencl-and-premiere-pro-cs6.html


Then we'll have to find someone who has done the work and run the tests to see how fast premiere runs with a Kepler GPU and desktop CPU.

In general, I would say 4 cores at 3.5 GHz will absolutely be faster than 12 cores at 2.6 GHz in almost all of your use cases except encoding. If you can mitigate some of those tradeoffs with Quick Sync or CUDA, then sell that Mac while it's worth some money and make the switch, but don't do anything until you've seen reputable, real-world performance figures from an independent 3rd party (not published by Intel, Apple, Adobe, or Nvidia). If you have excess funds available, you could build a nice Ivy Bridge workstation and sell whichever machine you like the least, but you will probably not notice any difference casually, so you'll have to develop a standardized battery of tests and measure the two systems yourself.

Also understand that a 6-core ivy bridge CPU for the desktop is never going to happen, and you will never see a dual-socket 1155 board, so holistically you will have gone through much effort to exchange one non-upgradeable system for a slightly newer non-upgradeable system. As long as you are buying small socket desktop boards, you will be limited to 4 cores for many years, and GPUs are not going to make up for everything, but they are good for some of it and slowly getting better.

I'm intrigued by GPU acceleration as much as the next guy, but we have not yet come to a Rubicon where I can, with confidence, tell someone to ditch their big CPUs for good.
 
Last edited:

greenhawk

Platinum Member
Feb 23, 2011
2,007
1
71
If you have the money, the best solution is to buy the PC, run your work tests on it, and sell off the slower one (even if that turns out to be the PC).

Otherwise you will be looking at benchmarks and numbers without being able to fully work out which is better, short of some hands-on time.

Going from 12 cores to 4 cores, though, I expect power savings (electricity) but about half the performance overall in well-threaded applications. (Gut feeling, nothing to back it up with, unfortunately.)
 

dsc106

Senior member
May 31, 2012
320
10
81
Part of the reason I am looking at this switch is for the following:

* I never paid for the MacPro; it was basically "gifted". I would be gutting it of 6TB of HDD space and a 512GB solid state drive, selling it for $4,350, and then spending $2,000 on a high-end PC that I *could* hack to a Mac if I needed (though I'd probably just chill on Win7 + CS6). Effectively, I would make about $2,300 and possibly retain most of the performance, and/or GAIN some performance with the faster-clocked cores and the CUDA abilities in Premiere/AE (the Mac Pro has an AMD 5770, so the jump to a GTX 670/680 would be nice).

* I would be using the CUDA engine in Premiere. I am wondering how much performance in the encoding department I would be losing... with CUDA and a fast 4-core Ivy Bridge, might my "in-the-moment" working speeds/on-the-fly renders be around the same speed (or possibly faster)? Really curious how this compares.

* In the encoding department (compressing for web, sending to Blu-Ray, etc.) I am wondering how much performance I would lose. If I render out a 10-minute video and it takes 20 minutes on 12 cores and 26 minutes on a quad-core Ivy Bridge, with comparable performance for my while-working speeds, it seems smart to make $2,300.

* other gains by switching would be: thunderbolt, USB 3.0, GTX 670/680, up from 16gb RAM to 32gb RAM

* granted, I could get a GTX680 for the MacPro once drivers are released (probably this summer) and purchase 32gb of RAM for this platform, but that would COST money rather than earn $2300...


Hmm, hard to say. Really curious what the actual numbers say for a 2-year-old Westmere Xeon platform VS the new Ivy Bridge... how much performance would I really lose dropping from 12 medium-speed cores to 4 high-speed cores?
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Looking at just your financial figures, I would advise you to make the switch. It seems like you would lose a little performance in encoding, and gain substantial performance everywhere else.

You can definitely manage a 4.2-4.5 GHz overclock that is stable enough for encoding, and like I said before, whether you overclock or not, you definitely should develop a burn test in Premiere, run it on both systems, and if the ivy bridge machine comes within 80% of the Westmere, sell that Mac.

You have not thrown in any additional caveats to suggest that you need more than 8 threads at a time.

Also, you essentially need a Quadro card to get CUDA encoding on a Mac, which is not going to make financial sense for you. Do not upgrade that Mac in any way if you are looking at recouping losses via resale.
 
Last edited:

dsc106

Senior member
May 31, 2012
320
10
81
link
link2
link3
well, does this answer anything to you?

Sort of... I read through all of those that you linked. It says the Ivy Bridge core is 34% faster per core than Westmere... so I'm losing 66% of my cores (12 to 4), but gaining 34% speed... so would that make Ivy Bridge 32% slower at best? Meaning a 30 minute encode on my 12-core MacPro would take 40 minutes on a 4-core Ivy Bridge?

But that doesn't account for software not utilizing all 12-cores to the max; or to bandwidth/latency issues on a dual socket motherboard... It'd be really great to see Adobe CS6 render times/FCPX render times on a 12-Core Westmere VS a 4-core Ivy Bridge (or even something similar from the Sandy Bridge series?)

Looking at just your financial figures, I would advise you to make the switch. It seems like you would lose a little performance in encoding, and gain substantial performance everywhere else.

You have not thrown in any additional caveats to suggest that you need more than 8 threads at a time.


Also, you essentially need a Quadro card to get CUDA encoding on a Mac, which is not going to make financial sense for you. Do not upgrade that Mac in any way if you are looking at recouping losses via resale.

Yeah, depending on how I look at it... if I can sell the system for that much $$, could take it from the flip side - should I pay $2300 to "upgrade" from an Ivy Bridge 4-core/32gb RAM/GTX 680 (which I could own) to a 12-core Xeon/16GB RAM/5770 MacPro computer... would that really make sense for Adobe CS6 Premiere Pro & After Effects video editing? And maybe for making it a Hackintosh with FCP X editing?

I don't mind losing a little encoding time, especially if it means my speeds while working feel snappier. I can hit render and surf the web, leave the house, watch a movie. I really just care about while-editing speeds, and perhaps GTX 670/680 CUDA would blow it out of the water, in conjunction with more RAM and maybe faster per-core performance?

Ah, decisions decisions...
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
The financials are tempting, I will give you that.

However, to get a handle on "while-editing speeds," you will need to see it firsthand or sample tons of anecdotal evidence from people who use both CS6 and FCPX.


You aren't going to get the data you need from a Deep Fritz benchmark or from comparing hardware, I can tell you that.
 

dsc106

Senior member
May 31, 2012
320
10
81
Bummer, yeah that is frustrating... I can't even seem to find a benchmark of encode times to help give me a baseline unfortunately...
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
That data would not be any good either; you have to do the same encode on both machines (same source material, same encoder params).
 

dsc106

Senior member
May 31, 2012
320
10
81
True, though I'd be content with a ballpark. What's your gut feeling on the performance differences here? What would YOU do?
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
My gut feeling tells me that "while-editing speed" is less threadable than encoding. Since this is your primary concern, we should focus on that. Can you do some worst-case editing in FCPX, look at your RAM and CPU usage while editing, and try to describe it for me?

If it were me, I would sell the mac, prepare a well-equipped, ~4.5 GHz 3770K machine with a fast SSD, and pocket the money. If I was not satisfied with the change, I would live with it for a few months and then sell it when 10-core Xeons are available later this year and do a new build. Either way, I would come out with a profit from losing the Mac and almost double the performance per thread.
 
Last edited:

ALIVE

Golden Member
May 21, 2012
1,960
0
0
True, though I'd be content with a ballpark. What's your gut feeling on the performance differences here? What would YOU do?
An E5-1650, something like that?
You will get 6 cores / 12 threads, and given the improvement over the older generations, this single-socket solution will not be that far from what you have CPU-wise.
Fewer, faster cores are better than more, slower ones, given that programs do not scale linearly, and the more cores there are, the harder they are to keep busy.
That CPU is around $600, but I do not think you will find a mobo with Thunderbolt for that platform.
You can still save money by making the switch to these, but a Quadro card will set you back a lot. Gamer cards are not supported for serious work and their performance really suffers; even a cheap Quadro will perform better due to better driver support.

Well, as always, it is up to you:
balance the gains and losses,
decide what is more valuable to you, and go that way.
 

dsc106

Senior member
May 31, 2012
320
10
81
Alive, I don't think I'd be looking at the E5-1650 Xeon, unless perhaps I should be? I was thinking of the 3770K (i7 Ivy Bridge). Also, I must have Thunderbolt.

Alyarb - I inserted a couple of images of my Activity Monitor. It shows the thread usage (24 threads from my 12 cores) under some stress testing in FCPX. I don't know how well Adobe CS6 takes advantage of multi-core, and CS6 has the Mercury Playback Engine, which seems to utilize the GPU better if I were to get a GTX 680.

FCP X is one of the best-programmed apps out there for multi-core, and it does seem to be using all the cores pretty well when I really load it up...

The milder picture is editing pre-transcoded footage (ProRes) and putting a lot of effects on; I do usually transcode. The more intense one is when it is rendering NON-ProRes (compressed MPEG-4 H.264 AVC footage) in a couple of layers with effects, AND background-transcoding the H.264 files to ProRes.

Here are the pics:

http://www.flickr.com/photos/79771260@N03/?uploaded=3&magic_cookie=c2a9876d0317ab7ab0cb883b112ce52f
 

alyarb

Platinum Member
Jan 25, 2009
2,425
0
76
Without knowing exactly what you're doing at each time in each of those images, all I can say is this doesn't look like an app that would overwhelm a 3770K during operations in the user interface.

Yes I know you will more than likely transcode after you edit some original material, and that activity will without a doubt be faster on the Mac, but everything else, despite the brief periods of moderate or little vertical alignment, should be comparable, if not faster, on a much higher clocked ivy bridge.

I see the large chunk of activity in the image cores3, which I will assume represents encoding activity, but also note in each picture how Core 0 is uniquely more active over the entire interval while the other cores are not. This implies to me that a faster ivy chip will only be slower than the Mac Pro during a transcode, and I would expect a similar profile from similar activity in Premiere CS6.

Of course, you can *almost certainly* expect that encoding in CS6 will be supported by Quick Sync, as an earlier-referenced document indicates, and the Quick Sync encoder will be unbelievably faster than the x86 encoder on the Mac. So I'm not too worried about just encoding being slower on the CPU, because you will have Quick Sync. As long as everything ELSE is faster, I'm not too worried about you migrating to a much smaller system. Quick Sync performance in CS6 is something I would like to see verified independently, but see this page to get a ballpark on Quick Sync quality and performance in other apps:

http://www.anandtech.com/show/5771/the-intel-ivy-bridge-core-i7-3770k-review/21

also read this

http://forums.adobe.com/thread/849891

As for the build that you linked to in that other thread, I agree with most of the criticism. You do not need an 800-watt power supply for a 100-watt CPU, a large GPU, and a few disks. I would not spend more than $80 on a power supply. The case manufacturer and style are 100% preference, and I am a huge fan of Lian Li, so I'm not going to tell anyone what case they should get. I would not spend hundreds on exotic RAM, and I would not pay double for a motherboard simply because it has Thunderbolt. I know you mentioned Thunderbolt in your OP, but this thread quickly turned into a CPU discussion, so I avoided it.

He is right that there aren't enough Thunderbolt peripherals on the market to justify going out of your way for a Thunderbolt host. If you already own Thunderbolt devices, then I would understand, but cheap PCIe adapters will come and they will be cheap when they do. I would try and maximize my profit from selling the Mac. If you need Thunderbolt then whatever, get it. I don't care about that, my curiosity is how the performance in things you do would vary between these two distinct systems.
 
Last edited:

philipma1957

Golden Member
Jan 8, 2012
1,714
0
76
... Also, I must have thunderbolt...

Don't do this until at least one more T-bolt mobo comes along. I would not want everything to hinge on the one and only T-bolt board; once there are two or three T-bolt boards I would do it. MSI (like other mobo builders) has had its share of board problems over the last decade, so depending entirely on that one board is a risk I would not take.
 

greenhawk

Platinum Member
Feb 23, 2011
2,007
1
71
It says the Ivy Bridge core is 34% faster per core than Westmere... so I'm losing 66% of my cores (12 to 4), but gaining 34% speed... so would that make Ivy Bridge 32% slower at best?


Meaning a 30 minute encode on my 12-core MacPro would take 40 minutes on a 4-core Ivy Bridge?

But that doesn't account for software not utilizing all 12-cores to the max; or to bandwidth/latency issues on a dual socket motherboard...

should I pay $2300 to "upgrade" from an Ivy Bridge 4-core/32gb RAM/GTX 680 (which I could own)


I don't mind losing a little encoding time, especially if it means my speeds while working feel snappier.

1st, your numbers are off.

Starting with 1 as the baseline: going to 4 cores (losing 66%) leaves 0.33, and the 34% increase from Ivy (I'm not sure if you are counting the gains the iGPU brought, which is where the ~20% figure comes from; in the real world the increase of Ivy over SB was closer to 5-10%) gives 0.33 * 1.34, or about 0.44.

In short, less than half the performance of your 12-core setup. My earlier 50% guess is close enough.

2nd issue: at 44%, the encode would take over an hour vs. the current 30 min (assuming all cores are in use and there are no other bottlenecks).

3rd issue: memory should not be an issue, as the memory controller is on the CPU, and CPU-to-CPU communication uses a separate link (so no sharing of an FSB as with older dual-CPU setups).

4th: not sure about the "I will own" part. Do you not own the existing 12-core system?

5th and lastly: it might feel snappier, but since you already have an SSD, any increase in snappiness will not be as visible as the original "adding an SSD" feel.

$2,000 is a lot to worry about, but you are doing this for work and the current system still works. Overclocking, as mentioned above, to close the gap does not seem wise for a work machine, and from my reading, the biggest issue is that you want more memory and a faster video card for gaming. I don't see that as worth losing processing time over, when both could be added to the current system.
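The back-of-the-envelope numbers above can be checked in a few lines of Python. This is a sketch only: the 34% per-core figure and the assumption of perfect 12-way scaling both come from earlier in the thread, not from measurements.

```python
# Rough scaling estimate: relative throughput of a 4-core Ivy Bridge
# vs. a 12-core Westmere, assuming perfect multi-core scaling.
westmere_cores = 12
ivy_cores = 4
per_core_speedup = 1.34  # the "34% faster per core" figure quoted in the thread

relative = (ivy_cores / westmere_cores) * per_core_speedup
print(f"Ivy Bridge relative throughput: {relative:.2f}")  # ~0.45

# A 30-minute encode on the 12-core box would then take roughly:
encode_min = 30 / relative
print(f"Estimated encode time: {encode_min:.0f} min")  # ~67 min
```

Which is where the "over an hour vs. the current 30 min" estimate comes from.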
 

dsc106

Senior member
May 31, 2012
320
10
81
greenhawk, I own the MacPro, the "could own" was referring to the Ivy Bridge.

With the 44% number you calculated, that's assuming the same clock speed. At 3.5GHz that's a ~33% speed increase on top of that, bringing it to ~58%... and if I overclock to 4.4GHz, that's a ~60% speed increase, bringing it to ~70% of the speed of the 12-core Westmere...

Am I doing that math right? I know it's more hypothetical than anything, just trying to clarify. While memory or other bandwidth issues between having a dual-CPU system might not come into play, software optimization DOES... if all cores aren't being used fully and it's not scaling linearly, the gap closes more... and if that's just for ENCODING times, not on-the-fly edits, it wouldn't bother me too much. Thoughts?
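The "not scaling linearly" effect is just Amdahl's law. As a rough sketch (the parallel fractions below are made-up illustrations, not measurements; clocks are 2.66 vs. 3.5 GHz, stacking the 34% per-core figure the way the thread does):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: speedup on `cores` cores when only `parallel_fraction`
    of the work can run in parallel."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# If less of the workload parallelizes, 12 slow cores lose much of
# their advantage over 4 fast ones.
for p in (1.0, 0.9, 0.8):
    westmere = amdahl_speedup(p, 12) * 2.66         # 12 cores at 2.66 GHz
    ivy = amdahl_speedup(p, 4) * 3.5 * 1.34         # 4 cores at 3.5 GHz, +34% IPC
    print(f"p={p:.1f}: Ivy/Westmere throughput ratio = {ivy / westmere:.2f}")
    # p=1.0 -> 0.59, p=0.9 -> 0.95, p=0.8 -> 1.18
```

So once as little as 20% of the work is serial, the quad-core actually pulls ahead; a perfectly parallel encode is the Westmere's best case.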

Seems the extra performance of more RAM and GTX 6xx CUDA would be huge. It would cost me $800 to add those to the MacPro; I would SAVE $2,300 by doing it via a PC, plus gain faster clock speeds/Ivy Bridge performance when fewer cores are in use...

Thoughts?

I'm also sort of looking at this just from the financial. If Apple releases a new MacPro in June/July, this one is going to degrade in value quite a bit, and is already going on 2 years old. If I can maximize my profit from it NOW and get a system that is newer in many ways, saving $2300+ sure ain't a bad proposition...

Any change in thoughts, or what would you advise with that in mind? Not at all trying to argue for what I want (I don't know for sure what I want!)... BUT... trying to challenge this angle to get better clarity.

Also - is relying on a moderately OC'ed system really all that bad an idea? I don't mind a little risk there, as I'm pretty tech savvy... a moderate OC doesn't really seem like that big of a deal.

I would probably end up hacking this system for a hackintosh at a later date as well, once driver support is there, and putting FCPX back on it as well...
 

dsc106

Senior member
May 31, 2012
320
10
81
Also, regarding Thunderbolt - I thought Intel said it would *NOT* be available via PCIe add-in cards, because at 10Gbps+ of bandwidth it would need to be built into the motherboard? I thought I read something where Intel said "no way" to Thunderbolt add-in cards.

-------

alyarb, thanks for the GREAT feedback, advice, and insight. Appreciate it. I'll read through all those links and take that into consideration. You make some really great points.


This forum and the posters here are amazing!
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
ASUS has a TB header on some of their boards (mine has one) you just need to buy the adapter separately.
 

kernelc

Member
Aug 4, 2011
77
0
66
www.ilsistemista.net
Mac or not, you are substituting a dual-socket workstation board for a single-socket consumer board. [...]

I'm intrigued by GPU acceleration as much as the next guy, but we have not yet come to a Rubicon where I can, with confidence, tell someone to ditch their big CPUs for good.

Great post, I agree.
 

kernelc

Member
Aug 4, 2011
77
0
66
www.ilsistemista.net
if I overclock to 4.4ghz that's a 60% speed increase bringing to ~70% of the speed of the 12-core Westmere...

In a workstation, I strongly advise against overclocking.

While I have overclocked more or less every desktop system to date :biggrin:, when it comes to professional work I would run the CPU at its default clock, as a rule.

Regards.
 

greenhawk

Platinum Member
Feb 23, 2011
2,007
1
71
with the 44% number you calculated, that's assuming the same clock speed. At 3.5ghz that's a 33% speed increase on top of that, bringing it 58%... if I overclock to 4.4ghz that's a 60% speed increase bringing to ~70% of the speed of the 12-core Westmere...

software optimization DOES... if all cores aren't being used fully and it's not scaling linearly, the gap closes more... and if that's just for ENCODING times, not on-the-fly edits, it wouldn't bother me too much. Thoughts?

I'm also sort of looking at this just from the financial. If Apple releases a new MacPro in June/July, this one is going to degrade in value quite a bit, and is already going on 2 years old. If I can maximize my profit from it NOW and get a system that is newer in many ways, saving $2300+ sure ain't a bad proposition...


Also - is relying a moderately OC'ed system really all that bad of an idea? I don't mind a little risk with that as I'm pretty tech savvy... it doesn't seem like moderate OC's is really that big of a deal.

once driver support is there, and putting FCPX back on it as well...

The 44% was based on the details posted up-thread about the performance increase, so take it for what it is. Looking around at the following

http://www.hpcwire.com/hpcwire/2012-05-08/chips_on_the_table:_sandy_bridge_versus_westmere.html

and

http://www.tomshardware.com/gallery/Intel-Xeon-E5-2690-004,0101-320463-0-2-3-1-jpg-.html

(just pulling something that seems usable) and assuming the programs you are using take advantage of the newer features Sandy Bridge brought (seeing as Ivy Bridge is mostly a die shrink with about a 5% performance-per-clock increase), it would appear that, ballpark, a 3.4GHz 6-core Westmere is half the performance of an 8-core SB-E, so the dual setup you have would equal an 8-core SB-E. But then that CPU is somewhere around $2K anyway, I think.

I do not factor in the clock-speed difference of the official CPUs, for a few reasons. First, they have different base clocks but similar top turbo speeds (first link, 200MHz difference), which makes the "@ 3.5GHz" a little pointless, as the CPU will already be running at that via turbo to hit the benchmarked speed.

Second, Ivy Bridge is considered about 5% faster than Sandy Bridge per clock (most of the reported bigger gains are in the iGPU, which I suspect you will not be using), and 4.4GHz is a high overclock for Ivy Bridge. IIRC, due to its design, heat becomes a major issue, and 4.5GHz is considered "as good as it gets" on air cooling; 4.2GHz is the "should be fine getting to" figure from my reading, for someone who is not a heavy overclocker. The fastest Ivy Bridge turbos to 3.9GHz, I think (single core, though), so 4.2GHz might get you a 7-10% increase, maybe a little more if you can hold multiple cores at 4.2GHz.

Which means: taking 12-core Westmere -> 8-core SB gives a 4-core SB at half your current speed. Add 5% for Ivy Bridge and 10% for overclocking, and the best case is 1 * 0.5 * 1.05 * 1.10 = 57.75% (assuming Linpack-style testing). Looking at the Tom's Hardware image, integer performance scales worse, so it would be more like needing 9.5 SB cores to match (call it 10), which puts Ivy Bridge at 1 * (4/10) * 1.05 * 1.10 = 46.2% of your current system.

Still hovering around 50% as I see it. Expecting 70% will, I think, end in major disappointment.
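Spelling out the two scenarios in code (again a sketch; the "half of 8 SB cores" baseline and the 5%/10% bumps are the estimates from the links above, not measurements):

```python
# Best case (Linpack-style scaling): 4 SB cores = half the 12-core Westmere,
# then stack the Ivy Bridge IPC bump (+5%) and a modest overclock (+10%).
best = 0.5 * 1.05 * 1.10
print(f"best case: {best:.4f}")      # 0.5775

# Integer workloads scale worse: ~10 SB cores needed to match, so 4/10 as base.
integer = (4 / 10) * 1.05 * 1.10
print(f"integer case: {integer:.3f}")  # 0.462
```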

As to software using threads: if it works well with 12 threads, it will have no issue working with only 4 threads (plus the 4 virtual threads, if you get the better Ivy Bridge).

Re selling now: it being June already, I think you have already missed the boat. If I were looking at a $4K outlay and knew something newer was coming in 4 weeks, I would hold off buying. If you had wanted to sell a month or two ago it might have been different; that boat is already leaving the bay. That said, coming out $1K ahead instead of $2K might still be attractive to you.

Of course, waiting might be needed anyway, as IIRC that Thunderbolt board was only recently announced; it might be another 4 weeks before your local shop has stock, which definitely makes selling the old system harder (getting the full $4K, etc.).


Re the risk of overclocking: it is not really a risk of damaging the hardware (assuming you do not go nuts with the voltage). The risk, for a machine you make your livelihood with, is that while the system might be mostly stable, it could introduce an undetected error into whatever it is working on. For some workloads that would not matter; for others, it invalidates the whole run. That is assuming the computer does not reboot or lock up 5 hours into a processing session and cause you to miss a deadline. For a work machine, I would want to limit the chances of anything negatively affecting my income.

Lastly, the Hackintosh idea: I do not follow the scene, but last I checked, waiting on drivers from code hackers is a gamble. A Hackintosh can be hard to get going, since Apple only wrote drivers for the hardware it wanted in its machines; getting your Thunderbolt board working might never happen. And that gets back to the point that you would be placing your income-generating system in the hands of people who code and hack away when they feel like it.