What comes after 28nm GPUs?

EliteRetard

Diamond Member
Mar 6, 2006
6,490
1,021
136
22nm? Just curious...and when might it happen?

With the SNAFU we had at 32nm and the delay here on 28nm, is there any chance we will see a smaller process tech on the next GPUs?

I keep hoping for some big breakthrough in GPUs and a fantastic bang for the buck. Hasn't happened in half a decade. Maybe it just can't happen anymore...but I'm keeping my fingers crossed and hoping that the next process tech will finally move things along.

I hear people say that all GPUs are massive overkill today...a lame excuse from people with garbage displays. 1080p 60Hz, WTF BBQ?! I had 1440p 85Hz 10 years ago (still do). Graphics have stagnated simply because the hardware has (especially consoles). If we could get another GPU breakthrough and finally move the performance chart up a notch or two, we would see new games come out to take advantage of it. So far, though, we are still stuck with the same performance as 2008.

You can't counter with your $1k GPU setup either; that's bogus. Look at the mainstream GPUs that people actually have and can afford...they struggle with years-old games on these low-res 1080p (or even lower) screens. Even the $200+ mid-range GPUs of the latest generation struggle to run modern games on these cheap screens...hovering in the 30FPS range with jerky minimum framerates. How can we expect improved looks if we can't handle what we have now?

Back in '08 the top-end cards were $300 and could get 50-100FPS at 1080p in the best games, and the $200 units typically got 40-60FPS. Our current $100 cards average 20FPS in a 5-year-old game like Crysis. We should have 40FPS in games that old or the graphics can't move forward...and we can't get back to nice displays either. Our current hardware is just far too slow, and again, the speed of mainstream GPUs hasn't moved since '08. That is why graphics have been stalled since then.

I honestly believe the price of all current GPUs should drop $100, or we need new GPUs at these price points that move performance up to that level. We need at least 7850/7870 performance for $100-150 before we can really start moving along again. That kind of mainstream performance is required to push 1440p as well, especially if we want to see improvements in graphics and increased refresh rates like 120Hz.

Gaming is already a small market, and these sky-high ($300+) prices required to get decent performance are killing it for everybody. It shouldn't be surprising or unrealistic to expect 2-3x the performance at the budget and mainstream price levels after 5 years, and it's those markets specifically that expand and move the market forward...not the niche extreme high-end buyers.

Sorry, going off on a rant here...still, with our process improvements and advances in technology we should expect much faster hardware at all price levels. I really hope the next generation brings that to us, and it would be even easier to do with another process/die shrink. Here's to hoping for a revival in 2014.
 

EliteRetard

Diamond Member
Mar 6, 2006
6,490
1,021
136
How many atoms thick are these transistors anyway?
I guess I don't really know how big a nanometer is (never seen one).

The oldest process I can actually recall was the 9700pro on 150nm.
Wonder what kind of process was used in my Kaypro 10.
 

Galatian

Senior member
Dec 7, 2012
372
0
71
How many atoms thick are these transistors anyway?
I guess I don't really know how big a nanometer is (never seen one).

The oldest process I can actually recall was the 9700pro on 150nm.
Wonder what kind of process was used in my Kaypro 10.

An atom is about 0.1 nm, so we still have a way to go, but Intel predicts that quantum tunneling effects will render current transistor designs unusable at sizes smaller than 5 nm.

Also there are still issues with lithography at such small sizes.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I won't bother linking reviews, specs, or comparisons of 2008 vs. 2012/2013 cards and their performance differences, etc. The GPU landscape hasn't stalled. What's happening is that the tail end of a very long PS360 generation is holding back the graphics industry. Additionally, there is another phenomenon: as we get closer and closer to realistic graphics, the amount of GPU power we need grows exponentially.

Compare this:
[Screenshot: NVIDIA's original Dawn tech demo]


To this:
[Screenshot: NVIDIA's A New Dawn tech demo]


How much better would you say the graphics in the 2nd picture look? 2-3x, maybe 4x better? It's difficult to quantify.

The Environment Complexity in the 2nd picture is 571x greater!!

The 1st demo is rendered on a 5800U at just 1024x768.
The 2nd demo needs GTX670 SLI at 1920x1080.

GTX670 SLI is roughly as fast as GTX690. GTX690 is 83x more powerful than the FX5800 Ultra.

That gives you an idea. What you are asking for is next-generation graphics rendered on $200 video cards in the span from 2008 to 2013. The next real breakthrough in graphics will require GPUs 10x more powerful than what we have today. I am not talking about incremental increases like The Witcher 3, Crysis 3 and so on. To make the same leap/impact as going from Pic 1 to Pic 2, we'll likely need GPUs with 200-300x the power of GTX670 SLI because of the exponential requirements that more advanced graphical effects bring.

What's happening is that every little additional graphical effect that gets us closer to realistic graphics is exponentially more costly on GPU hardware. Yet GPU hardware increases are still limited by Moore's Law. Things like tessellation and global illumination are very taxing on graphics cards, and those are just the tip of the iceberg for next-gen graphics.
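Rough back-of-the-envelope using the figures above (the only new assumption here is a Moore's-Law-style doubling of GPU throughput every 2 years, which is being generous):

```python
import math

# Figures quoted above (taken as given, not measured by me):
complexity_ratio = 571                       # environment complexity, New Dawn vs. Dawn
gpu_ratio = 83                               # GTX690 vs. FX5800 Ultra throughput
pixel_ratio = (1920 * 1080) / (1024 * 768)   # resolution increase, ~2.6x

print(f"Scene complexity grew {complexity_ratio}x while GPU power grew {gpu_ratio}x "
      f"and pixel count grew {pixel_ratio:.1f}x")

# If GPU throughput doubled every 2 years (assumption), how long until
# GPUs are 200-300x faster than today?
for target in (200, 300):
    doublings = math.log2(target)
    print(f"{target}x needs ~{doublings:.1f} doublings, roughly {2 * doublings:.0f} years")
```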

Lazy, unoptimized, poorly coded console ports that take advantage of 2, barely 4, CPU cores aren't helping either. It's not just about the resolution increase, but graphical effects/polygon count, higher-resolution textures, more realistic physics effects, etc.

The extra detail in games will show up on characters as we zoom in closer to their faces, for instance.

[Screenshot: Dawn character face close-up]

[Screenshot: A New Dawn character face close-up]


It's going to be harder and harder to move beyond this last picture (I predict it will take GPUs 200-300x faster than a GTX690), but hopefully with the PS4/720 around the corner, we should see a significant jump in graphics in the next 5 years. Perhaps real-time graphics will reach that New Dawn (Pic 2) level in 5 years.
 
Last edited:

EliteRetard

Diamond Member
Mar 6, 2006
6,490
1,021
136
We used to get 50% or more performance for the same price each year.

Now we get 20-30% more performance for a 50% increase in cost, even though die sizes are smaller and production cycles are longer (so R&D is cheaper per unit).

So the performance of an $80 card years ago still costs $80-120. $200 performance still costs $200. All they have done is add a little extra performance in a high-priced bracket ($300-1000). It's terrible. And when IGPs are as fast as $60 cards, that doesn't make the IGP look good...it just makes the GPUs look bad.

Based on the total cost of manufacture and proper movement of performance in the market, the prices on cards should now be something like the following:

7750/7770 $50-70, 7850/7870 $100-150, 7950/7970 $200-250. GHz edition could be $300.

Nvidia 650/650 Ti $50-70, 660/660 Ti $100-150, 670/680 $200-250
 

Kippa

Senior member
Dec 12, 2011
392
1
81
Do you think ray tracing will become standard in games within the next decade or so?
 

jimhsu

Senior member
Mar 22, 2009
705
0
76
And yet people in the HardOCP thread comment on how the first Dawn is more aesthetically pleasing, even though the 2nd one has a hundred times the graphical complexity. You see, when it comes down to it, having million-polygon tree trunks, subsurface scattering, etc. will only take you so far -- the important thing is whether your audience actually likes the scene.

And that's before I get into how they could have used parallax mapping (which came out in 2005) on the tree trunks to achieve almost the same effect while saving millions of polys, or done proper backface culling, or reduced the number of light sources with hardly any aesthetic impact... Yes, I know I'm talking about a tech demo, but still, efficiency matters.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
We used to get 50% or more performance for the same price each year.

That hasn't happened in a long time. Since 8800GTX, we started to get 50-60% more performance every 2 years.

http://www.computerbase.de/artikel/grafikkarten/2011/bericht-grafikkarten-evolution/3/

Your unhappiness with the GPU market should have started in 2007, because after the 8800GTX we never again got a GPU 50% faster every single year.

Now we get 20-30% more performance for a 50% increase in cost, even though die sizes are smaller and production cycles are longer (so R&D is cheaper per unit).

Physical limits mean that it's harder and harder to shrink transistors to lower nodes using silicon. Fabrication facilities cost more money than ever. The end result is that 28nm wafers are far more expensive than 40nm and earlier nodes.

Things are MUCH worse in the CPU space. At least every 2 years we can count on a 50-60% increase in GPU speed.

The baddest Intel CPU is barely faster than a 2008 1st generation Core i7 @ 4.0ghz in games. That means in 4+ years, CPU performance for games has barely moved. Prices didn't move either as i7 920 was $284. All we got was a reduction in power consumption and maybe a 35% boost in general apps from an i7 3770K @ 4.7ghz.

[Graph: Far Cry 3 CPU benchmark, overclocked 1st-gen Core i7 vs. overclocked 3rd-gen Core i7]

http://www.guru3d.com/articles_pages/far_cry_3_graphics_performance_review_benchmark,7.html

So the performance of an $80 card years ago still costs $80-120. $200 performance still costs $200.

June 16, 2008 - GTX280 was $649

Today HD7770 beats that card for $89. It's also single slot, and comes with a free FC3 game.

That means you have to pay 7.2x less for the same performance. :biggrin:
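Quick sanity check on that ratio, using only the two prices above (treating the two cards as equal performers and picking a ~4.5-year window are my own simplifications):

```python
# Prices quoted above; the ratio is just simple division.
gtx280_launch = 649      # USD, June 16, 2008
hd7770_today = 89        # USD, early 2013

ratio = gtx280_launch / hd7770_today
print(f"Same-or-better performance for {ratio:.1f}x less money")   # ~7.3x

# Spread over the ~4.5 years between the two prices, that compounds to
# roughly 55% better performance-per-dollar each year.
years = 4.5
annual = ratio ** (1 / years) - 1
print(f"~{annual:.1%} better perf/$ per year")
```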

All they have done is add a little extra performance in a high priced bracket ($300-1000). It's terrible.

Nov 11, 2010 - GTX580 was $499

HD7850 2GB OC matches that card for $165. Comes with 2 free games that can be resold to minimize the upgrade cost.

the prices on cards should now be something like the following:

7950/7970 $200-250. Ghz edition could be $300.

HD7950 is $259 CDN with 2 free games. That's close.

March 26, 2010, GTX480 was $499. HD7950 OC >>>> HD7970GE/GTX680. That means in LESS than 2 years you are now getting 50% more performance for half price!:biggrin:

Another way to look at it: an HD7950 OC for $260-280 gets you faster performance than a stock $499 GTX680 or a 925mhz HD7970 that was $549 delivered slightly more than a year ago. So that means that level of performance is HALF as expensive after just 12 months.:biggrin:

1Ghz HD7970 is $380 with 5 free games. Sell those games, and you can have it for $300.

Last year there were deals on the HD7970 for $299-320, GTX670 for $300, and HD7950 for $270.
 
Last edited:

aaksheytalwar

Diamond Member
Feb 17, 2012
3,389
0
76
A 7950 OC is about as fast as a stock 7970 GHz. Not light years ahead. And it is just 5-15% faster than a stock 680 and on par with a 680 OC.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,601
2
81
Things are MUCH worse in the CPU space. At least every 2 years we can count on a 50-60% increase in GPU speed.

The baddest Intel CPU is barely faster than a 2008 1st generation Core i7 @ 4.0ghz in games. That means in 4+ years, CPU performance for games has barely moved. Prices didn't move either as i7 920 was $284. All we got was a reduction in power consumption and maybe a 35% boost in general apps from an i7 3770K @ 4.7ghz.

I'm sorry, but this is a load of horse shit :)

Sure, it is worse than GPUs, but you cannot compare an OC'd CPU with a stock CPU, although you seem to love these kinds of skewed comparisons. Not everyone overclocks, and not everyone reaches the same good results.

Let's have a look at the i7-920 vs. the i7-3770K. Both cost about the same at launch. All higher-clocked Nehalems came significantly later and/or were significantly more expensive.

50%:
http://www.pcgameshardware.de/Battl...ld-3-Multiplayer-Tipps-CPU-Benchmark-1039293/

60%:
http://www.pcgameshardware.de/Far-C...y-3-Test-Grafikkarten-CPU-Benchmarks-1036726/

65%:
http://www.pcgameshardware.de/The-E...-128680/Tests/Skyrim-im-Test-mit-Mods-908710/

way over 60%, i7-920 not represented, but the i5-760 is:
http://www.pcgameshardware.de/Need-...-Wanted-Prozessor-CPU-Test-Benchmark-1034023/

So what you're saying is not true. And that is not even including 6-core CPUs, which didn't exist in 2008.
 
Last edited:

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
Can this New Dawn demo be run on AMD hardware? I remember that you could modify some files and run the old Dawn demo on ATi cards, and what's even funnier is that they were considerably faster than Nvidia's cards running Nvidia's own demo :D But the GeForce FXes were the biggest flop ever as far as graphics cards are concerned.
 

Midwayman

Diamond Member
Jan 28, 2000
5,723
325
126
Do you think ray tracing will become standard in games within the next decade or so?

It seems inevitable. Ray tracing is the answer to needing 200x the GPU power to make a noticeable difference in rendering. However, it is a change in paradigm.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I'm sorry, but this is a load of horse shit :)

Sure, it is worse than GPUs, but you cannot compare an OC'd CPU with a stock CPU, although you seem to love these kinds of skewed comparisons.

Are you serious? Jeez, this forum continues to amaze me with people's inability to read what's being written.

What graph did I link and what does it depict? A 1st generation Core i7 overclocked against a 3rd generation Core i7 overclocked. Please re-read my post if you aren't seeing the context of what's being compared exactly:

"The baddest Intel CPU is barely faster than a 2008 1st generation Core i7 @ 4.0ghz in games. That means in 4+ years, CPU performance for games has barely moved. Prices didn't move either as i7 920 was $284. All we got was a reduction in power consumption and maybe a 35% boost in general apps from an i7 3770K @ 4.7ghz."

Seriously, next time please take 15 minutes to read what I am posting if you don't understand before questioning what I am saying. I am clearly comparing OC vs. OC. Even the graph I linked shows this. Facepalm. Please, link any gaming benchmarks you want of IVB OCed to 4.7ghz outperforming a Core i7 920 @ 4.0ghz in games by 50-60%.

Again, I didn't say a word about 6-core CPU OC for general apps against an i7 920 OC. Talking about games. BTW, the i7 920 @ 4.0ghz is from 2008.

The GPU market has slowed down but it's moving at a very respectable pace. The CPU market is now focused on performance/watt and shrinking die sizes. Whatever large growth is happening in the die size is mostly being allocated towards the graphics side in the APU.

Haswell's 10% increase in IPC is nothing worth talking about when an HD7970 OC brought 70-80% over an HD6970 OC and a Titan OC will probably be 50% faster than a GTX680 OC, if not more. Is Haswell going to overclock to 6-7Ghz on air? Not a chance. Haswell's 10-15% increase is garbage compared to what GPU makers can net out year after year. CPUs haven't been exciting since Nehalem.

If you look at CPU performance progress in games for a PC enthusiast vs. GPUs since 2008, it's pathetic in comparison. GTX280 $649-level performance can be had for $90, a 7.2x reduction. Let me know where I can buy Core i7 920-level CPU performance for 7.2x less, or for $39. Even if you spend $1000 on the i7 3970X and overclock it to 5.0ghz, you'll be lucky to see anything worth talking about in games over an i7 920 @ 4.0ghz without $800 worth of GPUs. For 99% of users, this is not a valid comparison.

A 2012 3rd generation Core i7 @ 3.5ghz is just 1% faster in games on average than a 2008 1st generation i7 @ 3.3ghz with a single GTX670:

[Graph: average gaming FPS, 3rd generation Core i7 @ 3.5GHz vs. 1st generation i7 @ 3.3GHz with a single GTX670]


Take that same 1st generation i7 @ 3.3ghz, add a Titan to it, and it would mop the floor with a 5.0ghz $1000 i7 3970X paired with a GTX670. And it just so happens that i7 920s hit 4.0ghz on stock voltage and 4.4ghz overvolted, which shows just how anemic the progress in the CPU space for games has been since 2008.
 
Last edited:

uclaLabrat

Diamond Member
Aug 2, 2007
5,537
2,834
136
An atom is about 0.1 nm, so we still have a way to go, but Intel predicts that quantum tunneling effects will render current transistor designs unusable at sizes smaller than 5 nm.

Also there are still issues with lithography at such small sizes.

Silicon bond length is something like 0.22 nm if I remember correctly, and according to wiki the lattice spacing of the crystal is like 0.54 nm; not sure how that affects the domain spacing for transistors. But yeah, at 5 nm you're looking at 10-15 silicon atoms per feature. That's pretty damn small.
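A quick sketch of that estimate, using the textbook values for crystalline silicon (order-of-magnitude only):

```python
# Back-of-the-envelope: how many silicon lattice spacings / bond lengths
# fit across a 5 nm feature.
si_lattice_nm = 0.543    # cubic unit cell edge of crystalline Si
si_bond_nm = 0.235       # Si-Si bond length

feature_nm = 5.0
print(f"{feature_nm} nm is about {feature_nm / si_lattice_nm:.0f} unit cells "
      f"or {feature_nm / si_bond_nm:.0f} bond lengths across")
# ~9 unit cells / ~21 bond lengths, in the same ballpark as the
# "10-15 atoms per feature" estimate above.
```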
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
June 16, 2008 - GTX280 was $649

Today HD7770 beats that card for $89. It's also single slot, and comes with a free FC3 game.

That means you have to pay 7.2x less for the same performance. :biggrin:

I liked what you were saying / posting right up until this part, which is very misleading.

Case in point : GTX 260 vs Radeon 7770.

http://www.anandtech.com/bench/Product/664?vs=536

It all depends on resolution. The GTX 260 beats the 7770 in StarCraft 2 at 1680x1050, 79 FPS to 64 FPS, with MSAA turned OFF.

Turn on MSAA and up the resolution to 1920x1200 and the tables turn.

Some other numbers, percent increase of the 7770 over the GTX 260:

Crysis Warhead 1920x1200 4XAA +15%
Metro 2033 1680x1050 HQ 16xAF +29%
Elder Scrolls Skyrim 1680x1050 4x MSAA 16x AF +19.1%

Radeon 7770 release date: February 15, 2012
GTX 260 release date: June 26, 2008

3.5 years, and unless you're running above 1920x1080 and absolutely gotta have 4xAA, the difference is meager at best.

And the GTX 285 beats the 7770 in about 75% of the comparison benchmarks.

By most measures I would say the GPU market has stalled, definitely in comparison to pre-8800GTX times, as you noted.

Addendum to this: The 260 was a competitor to the 4850 and 5770 at the time. Today, the 650 / 650 TI are price competitors to the 7770 / 7770 Ghz edition. Nvidia and ATI/AMD like to play games with moving their nomenclature around. The 260 was a $300 card at release, and less than a year later could be had for well under $200. Basically I look at these as being ~$150 cards within 12 months of release. Can't say that for the 560 / 660 series. The older 560 (non Ti) is still $180+ unless you find a clearance deal.
 
Last edited:

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
We used to get 50% or more performance for the same price each year.

Now we get 20-30% more performance for a 50% increase in cost, even though die sizes are smaller and production cycles are longer (so R&D is cheaper per unit).

So the performance of an $80 card years ago still costs $80-120. $200 performance still costs $200. All they have done is add a little extra performance in a high-priced bracket ($300-1000). It's terrible. And when IGPs are as fast as $60 cards, that doesn't make the IGP look good...it just makes the GPUs look bad.

Based on the total cost of manufacture and proper movement of performance in the market, the prices on cards should now be something like the following:

7750/7770 $50-70, 7850/7870 $100-150, 7950/7970 $200-250. GHz edition could be $300.

Nvidia 650/650 Ti $50-70, 660/660 Ti $100-150, 670/680 $200-250

Dude, what are you fricken talking about? Everything you're saying is just whiny demands. Too bad reality does not take requests. GPU designs have matured, and big performance jumps from improved designs are declining. Moore's law has already hit its first wall and clock speed increases have ENDED. It will hit a scaling wall too once the quantum tunneling effects you mention eventually get here. Get used to stagnation because it's here to stay.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,601
2
81
Are you serious? Jeez, this forum continues to amaze me with people's inability to read what's being written.

What graph did I link and what does it depict? A 1st generation Core i7 overclocked against a 3rd generation Core i7 overclocked. Please re-read my post if you aren't seeing the context of what's being compared exactly:

"The baddest Intel CPU is barely faster than a 2008 1st generation Core i7 @ 4.0ghz in games. That means in 4+ years, CPU performance for games has barely moved. Prices didn't move either as i7 920 was $284. All we got was a reduction in power consumption and maybe a 35% boost in general apps from an i7 3770K @ 4.7ghz."

Seriously, next time please take 15 minutes to read what I am posting if you don't understand before questioning what I am saying. I am clearly comparing OC vs. OC. Even the graph I linked shows this. Facepalm. Please, link any gaming benchmarks you want of IVB OCed to 4.7ghz outperforming a Core i7 920 @ 4.0ghz in games by 50-60%.

Again, I didn't say a word about 6-core CPU OC for general apps against an i7 920 OC. Talking about games. BTW, the i7 920 @ 4.0ghz is from 2008.

The GPU market has slowed down but it's moving at a very respectable pace. The CPU market is now focused on performance/watt and shrinking die sizes. Whatever large growth is happening in the die size is mostly being allocated towards the graphics side in the APU.

Haswell's 10% increase in IPC is nothing worth talking about when an HD7970 OC brought 70-80% over an HD6970 OC and a Titan OC will probably be 50% faster than a GTX680 OC, if not more. Is Haswell going to overclock to 6-7Ghz on air? Not a chance. Haswell's 10-15% increase is garbage compared to what GPU makers can net out year after year. CPUs haven't been exciting since Nehalem.

If you look at CPU performance progress in games for a PC enthusiast vs. GPUs since 2008, it's pathetic in comparison. GTX280 $649-level performance can be had for $90, a 7.2x reduction. Let me know where I can buy Core i7 920-level CPU performance for 7.2x less, or for $39. Even if you spend $1000 on the i7 3970X and overclock it to 5.0ghz, you'll be lucky to see anything worth talking about in games over an i7 920 @ 4.0ghz without $800 worth of GPUs. For 99% of users, this is not a valid comparison.

A 2012 3rd generation Core i7 @ 3.5ghz is just 1% faster in games on average than a 2008 1st generation i7 @ 3.3ghz with a single GTX670:

[Graph: average gaming FPS, 3rd generation Core i7 @ 3.5GHz vs. 1st generation i7 @ 3.3GHz with a single GTX670]


Take that same 1st generation i7 @ 3.3ghz, add a Titan to it, and it would mop the floor with a 5.0ghz $1000 i7 3970X paired with a GTX670. And it just so happens that i7 920s hit 4.0ghz on stock voltage and 4.4ghz overvolted, which shows just how anemic the progress in the CPU space for games has been since 2008.

You always have to be careful with CPU benchmarks. CPU benchmarks with a GPU bottleneck present make no sense, yet you post them. CPU benchmarking is difficult and most sites do it wrong. I posted a link where the difference is 60% in FC3 since PCGH is one of the few sites that go about this very professionally and they know what they're doing. For CPU benchmarks, they are the #1 site to go to.
And you really need to be less over-optimistic about your OC results. 4.4ghz overvolted is an extreme result that not everyone gets or wants to get, since the CPU probably dissipates 300+W and you need very, very good cooling to be okay with that. Let's include a 6-7 GHz i7 3rd gen OC with liquid nitrogen then, and let those CPUs consume 300W, to make a fair comparison.

A reasonable assumption would be 4 GHz i7 1st gen and 4.6 GHz i7 3rd gen (with 4-6 cores depending on your wallet). 15% higher clock, 15% higher IPC and a little higher fps due to the 2 extra cores would get you around 40% more performance. Not exceptionally good, but not too shabby either.
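Spelling out that arithmetic (the 15% figures are the assumptions above; the 5% for the extra cores is just a rough guess):

```python
# Compounding the assumed gains of a 4.6 GHz Ivy Bridge over a 4.0 GHz Nehalem.
clock_gain = 4.6 / 4.0 - 1   # 15% higher clock (4.6 GHz vs. 4.0 GHz)
ipc_gain = 0.15              # ~15% higher IPC, as assumed above
core_gain = 0.05             # small benefit from 2 extra cores (rough guess)

total = (1 + clock_gain) * (1 + ipc_gain) * (1 + core_gain) - 1
print(f"Estimated combined gain: ~{total:.0%}")   # ~39%, i.e. "around 40%"
```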

Btw the GPU/CPU comparison doesn't make sense. Graphics is an extremely parallel workload, and therefore GPUs can take full advantage of more units and thus node shrinks. CPUs cannot, unless you completely disregard single-threaded performance and put 60 small cores onto a die. But that's not how it works, since game code cannot be perfectly multithreaded, and CPU makers don't make CPUs only for games but have to cater to many different workloads, most of them still basically single-threaded compared to graphics. Dedicated and specialized hardware that is tailored to a specific workload, like GPUs, will always be better utilized and progress faster; that's just logical. To complain about the lack of progress in the CPU space is therefore illogical. Run some serial code on your GPU @ 1.2 GHz and you'll be shocked what a stinker that will be ;)
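One way to put a number on the "game code cannot be perfectly multithreaded" point is Amdahl's law; the serial fractions below are made up purely for illustration, not measured from any game:

```python
# Amdahl's law: speedup from N cores when a fraction s of the work stays serial.
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for serial in (0.5, 0.2, 0.01):        # hypothetical serial fractions
    speedups = {n: amdahl_speedup(serial, n) for n in (4, 8, 60)}
    print(f"serial={serial:.0%}: " +
          ", ".join(f"{n} cores -> {s:.1f}x" for n, s in speedups.items()))
# A half-serial game barely gains past 4 cores, while a ~1%-serial workload
# (graphics rendering) keeps scaling with more units -- which is why GPUs get
# so much more out of every node shrink than CPUs do.
```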
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I posted a link where the difference is 60% in FC3 since PCGH is one of the few sites that go about this very professionally and they know what they're doing.

That's stock vs. stock. I was always discussing it from an enthusiast perspective, hence the FC3 graph I linked showed Nehalem OC vs. IVB OC.

A reasonable assumption would be 4 GHz i7 1st gen and 4.6 GHz i7 3rd gen (with 4-6 cores depending on your wallet). 15% higher clock, 15% higher IPC and a little higher fps due to the 2 extra cores would get you around 40% more performance.

That's theoretical. Show me at least 10 games where IVB at 4.6ghz is 40% faster than Nehalem at 4.0ghz. Heck, show me 1 game!

Btw the GPU/CPU comparison doesn't make sense. Graphics is an extremely parallel workload, and therefore GPUs can take full advantage of more units and thus node shrinks.

I never said it's "fair". My point is that the OP is complaining about graphics, but graphics performance and price/performance have improved exponentially. The improvements on the CPU side for games have been more or less non-existent since Nehalem/Lynnfield @ 4.0ghz, unless all you play are CPU-limited games like Skyrim/SC2/WoW, where you might get 25% more FPS on a 4.7ghz IVB. Since the i7 920 came out in 2008, that's sad. Why didn't the OP start ripping on CPUs first? The stagnation there is unbelievable.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,601
2
81
Your FC3 benchmark is GPU bottlenecked. You can see that already from the fact that there is no difference between Intel CPUs with different IPC and clock speeds, even at only 1280x1024. Thus this graph is useless for comparing CPU performance. They should have selected a more CPU-intensive game or scene, or used a stronger GPU solution.

Sure it's theoretical, since as I said, good CPU benchmarks are few and far between, so it is nigh impossible to find comparable numbers without GPU bottlenecks.
But as you well know, if you're CPU bottlenecked, higher clocks give a 1:1 fps increase, and higher IPC will most of the time as well:
http://www.computerbase.de/artikel/prozessoren/2011/test-intel-sandy-bridge/47/
I rounded up a bit since, as mentioned, you can see a small benefit from 6 cores already. So while I cannot provide hard numbers as evidence, I still think my assumption is perfectly reasonable.
 
Last edited: