Old 12-11-2012, 03:46 PM   #76
mikk
Senior Member
 
Join Date: May 2012
Location: Germany
Posts: 934
Default

Quote:
Originally Posted by NTMBK View Post

There are already definite gains from faster memory on Trinity: its graphics performance scales with memory bandwidth up to DDR3-2133, whereas IB's graphics seems bandwidth-saturated at ~DDR3-1600.

Depends on the game.

Trine 2, 3770K HD 4000:

DDR3-1600 = 34.1
DDR3-2400 = 39.7

Or here some more: http://www.hardwarecanucks.com/forum...review-21.html


A 95W TDP was expected, so I'm surprised that people are upset about an 84W TDP. The CPU core itself is ~15% bigger: AVX2, FMA, TSX, more execution units, two new ports, and an integrated FIVR. On top of that there is a bigger iGPU with a new VQE in Quick Sync and DX11.1 support. In fact, this 84W is lower than expected.
mikk is offline   Reply With Quote
Old 12-11-2012, 03:51 PM   #77
TuxDave
Lifer
 
TuxDave's Avatar
 
Join Date: Oct 2002
Posts: 10,442
Default

Quote:
Originally Posted by Fjodor2001 View Post
What is the reason the CPU frequency has topped out at current levels? Is there any physical explanation?
If you want to look at physical barriers, you can start by looking at the turbo frequency of chips to see what's possible without touching the design. Mind you, the absolute physical barrier is higher if you could go back into the chip and actually change the physical device sizing etc. (and the worst case is to go into the uArch). So there's no real physical barrier explaining why the frequency is where it is.

As I mentioned earlier, it's just that there are better places to spend your thermal budget (and power delivery budget!).
__________________
post count = post count + 0.999.....
(\__/)
(='.'=)This is Bunny. Copy and paste bunny into your
(")_(")signature to help him gain world domination.
TuxDave is offline   Reply With Quote
Old 12-11-2012, 04:28 PM   #78
Fjodor2001
Golden Member
 
Join Date: Feb 2010
Posts: 1,482
Default

Quote:
Originally Posted by TuxDave View Post
If you want to look at physical barriers, you can start by looking at the turbo frequency of chips to see what's possible without touching the design. Mind you, the absolute physical barrier is higher if you could go back into the chip and actually change the physical device sizing etc. (and the worst case is to go into the uArch). So there's no real physical barrier explaining why the frequency is where it is.

As I mentioned earlier, it's just that there are better places to spend your thermal budget (and power delivery budget!).
Well, as mentioned in my previous post, the CPU frequency increased from 233 MHz to 2.0 GHz in the 4 years from 1997 to 2001, i.e. it increased by about 1000%.

Do you really think we could repeat the same journey in the next 4 years if the thermal budget were spent differently? That is, we'd be at 35 GHz in 2016. I don't think we'll be anywhere near that, regardless of thermal budget decisions or TDP increases.

So clearly the CPU frequency has topped out compared to how things progressed during 1985-2005. And clearly there must be a physical reason for it, otherwise we would have seen much larger CPU frequency increases from one CPU generation to the next during the last 4-6 years.

Last edited by Fjodor2001; 12-11-2012 at 05:46 PM.
Fjodor2001 is offline   Reply With Quote
Old 12-11-2012, 04:49 PM   #79
Concillian
Diamond Member
 
Join Date: May 2004
Location: Dublin, CA
Posts: 3,653
Default

Quote:
Originally Posted by Fjodor2001 View Post
Well, as mentioned in my previous post, the CPU frequency increased from 233 MHz to 2.0 GHz in the 4 years from 1997 to 2001, i.e. it increased by about 1000%.
And what happened to TDP? What's the increase if it's normalized to the CPU-only power budget?

That was an era in which the CPU-only power budget exploded and architectures regressed in any measurable computations-per-MHz metric. We're now in an era where the CPU power budget shrinks every generation to allow for bigger GPU power budgets and more integration of motherboard components, while compute performance per MHz is improving. Of course the frequency growth in your example time-frame is huge, but compute efficiency (computational output per unit of power input) is a significantly more meaningful metric, and I'm willing to bet my house that by that metric the growth in your example time-frame looks significantly less favorable than the last (or next) 4 years.

Last edited by Concillian; 12-11-2012 at 05:07 PM.
Concillian is offline   Reply With Quote
Old 12-11-2012, 05:12 PM   #80
TuxDave
Lifer
 
TuxDave's Avatar
 
Join Date: Oct 2002
Posts: 10,442
Default

Quote:
Originally Posted by Fjodor2001 View Post
Well, as mentioned in my previous post, the CPU frequency increased from 233 MHz to 2.0 GHz in the 4 years from 1997 to 2001, i.e. it increased by about 1000%.

Do you really think we could repeat the same journey in the next 4 years if the thermal budget were spent differently? That is, we'd be at 35 GHz in 2016. I don't think we'll be anywhere near that, regardless of thermal budget decisions or TDP increases.

So clearly the CPU frequency has topped out compared to how things progressed during 1985-2005. And clearly there must be a physical reason for it, otherwise we would have seen much larger CPU frequency increases from one CPU generation to the next during the last 4-6 years.
You could repeat that journey to 20 GHz if you wanted. A 50 ps cycle time? I don't know how much logic you'd get between each stage, but devices definitely work at that speed. So if you want clock speed (and had no thermal concerns), you do exactly the opposite of what I mentioned: you start removing performance features to gain performance via clock speed. I'm pretty sure at the 20 GHz range you'd be far worse off than before.

So as I mentioned, there is no physical barrier (as in it being impossible to physically build something at that speed) up to 20 GHz. Sure, the dies would have to shrink by a ton to get the clock grid churning at that speed, but yes, you can physically build something really dumb at 20 GHz. Instead, when thermal budgets crashed the party, the whole idea of performance per watt came into play, and scaling up frequency turned out to be one of the worst trade-offs you can make.
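As a rough back-of-the-envelope sketch of that 50 ps point (the gate delay and flop overhead below are purely illustrative assumptions, not real process figures):

Code:
# Rough cycle-time arithmetic for the 20 GHz thought experiment above.
target_freq_hz = 20e9                  # 20 GHz
cycle_time_ps = 1e12 / target_freq_hz  # = 50 ps per cycle

assumed_gate_delay_ps = 10.0           # hypothetical logic-gate delay (illustrative)
flop_overhead_ps = 15.0                # hypothetical flip-flop + clock-skew overhead

usable_ps = cycle_time_ps - flop_overhead_ps
logic_levels_per_stage = usable_ps / assumed_gate_delay_ps

print(f"Cycle time: {cycle_time_ps:.0f} ps")
print(f"Logic levels per pipeline stage: ~{logic_levels_per_stage:.1f}")
# With only a handful of logic levels per stage, you end up with a very deep,
# very simple pipeline - the "really dumb at 20 GHz" design described above.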
__________________
post count = post count + 0.999.....
(\__/)
(='.'=)This is Bunny. Copy and paste bunny into your
(")_(")signature to help him gain world domination.

Last edited by TuxDave; 12-11-2012 at 05:15 PM.
TuxDave is offline   Reply With Quote
Old 12-11-2012, 05:21 PM   #81
Lonyo
Lifer
 
Lonyo's Avatar
 
Join Date: Aug 2002
Posts: 21,633
Default

Quote:
Originally Posted by Fjodor2001 View Post
Well, as mentioned in my previous post, the CPU frequency increased from 233 MHz to 2.0 GHz in the 4 years from 1997 to 2001, i.e. it increased by about 1000%.

Do you really think we could repeat the same journey in the next 4 years if the thermal budget were spent differently? That is, we'd be at 35 GHz in 2016. I don't think we'll be anywhere near that, regardless of thermal budget decisions or TDP increases.

So clearly the CPU frequency has topped out compared to how things progressed during 1985-2005. And clearly there must be a physical reason for it, otherwise we would have seen much larger CPU frequency increases from one CPU generation to the next during the last 4-6 years.
There have been various articles on AT (the website) about cores vs speed.
The search function is such that I can't think of how I'd find them, but the basic gist from what I remember is that adding more cores is a much easier way of improving performance (albeit only in multi-threaded situations).

Increasing clock speed increases power consumption a lot more than adding more cores does (e.g. 5 GHz vs 2x2.5 GHz: in an ideal world with perfect scaling they're equal from a performance standpoint, but 2x2.5 GHz uses less power).

Also, while clock speed may not have increased, IPC/performance per clock has, so 4 GHz now is faster at tasks than 4 GHz was 5 years ago.

As to why GHz makes power use go up, I'm not 100% sure on the specifics; IDC might be a good bet, or hunting through old AT articles.


Found what might be the main ones (based on titles...)
http://www.anandtech.com/show/1611
http://www.anandtech.com/show/1645

Quote:
The Quest for More Processing Power, Part One: "Is the single core CPU doomed?"



Another aspect of clock speed is pipeline length, which also impacts IPC:
http://www.anandtech.com/show/495/2
Longer pipeline = higher frequencies = more penalty for misses = lower IPC.
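A rough sketch of how that trade-off works, using the classic effective-CPI formula (all the numbers are illustrative assumptions, not measurements of any real core):

Code:
# Effective CPI with branch mispredictions:
#   CPI = base_CPI + branch_fraction * mispredict_rate * flush_penalty
# The flush penalty grows roughly with pipeline depth.

def effective_ipc(base_ipc, branch_fraction, mispredict_rate, flush_penalty_cycles):
    cpi = 1.0 / base_ipc + branch_fraction * mispredict_rate * flush_penalty_cycles
    return 1.0 / cpi

short_pipe = effective_ipc(base_ipc=2.0, branch_fraction=0.2,
                           mispredict_rate=0.05, flush_penalty_cycles=12)
long_pipe = effective_ipc(base_ipc=2.0, branch_fraction=0.2,
                          mispredict_rate=0.05, flush_penalty_cycles=30)

print(f"Shorter pipeline: ~{short_pipe:.2f} IPC")
print(f"Longer pipeline:  ~{long_pipe:.2f} IPC")
# The longer pipeline can clock higher, but every mispredicted branch costs more
# cycles, so instructions per clock drop - the trade-off described above.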
__________________
CPU: Q3570K @ 4.1GHz 1.23v // Mobo: Asus P8Z77-V // GFX: Sapphire Tri-X 290 @ 1000/5200 // RAM: Corsair DDR3 @ 1600MHz 9-9-9-24 // SSD: Samsung 830 128GB
Video cards: TNT2, Ti4400, 9800, 7800GT(+7200GS), HD4850(+HD2400), HD6850, HD7950 (Laptops: GF6150, HD3200, GMA500)

Last edited by Lonyo; 12-11-2012 at 05:30 PM.
Lonyo is offline   Reply With Quote
Old 12-11-2012, 05:25 PM   #82
TuxDave
Lifer
 
TuxDave's Avatar
 
Join Date: Oct 2002
Posts: 10,442
Default

Quote:
Originally Posted by Lonyo View Post
As to why GHz makes power use go up, I'm not 100% sure on the specifics; IDC might be a good bet, or hunting through old AT articles.
Assuming all else is equal, the dumb (but accurate) way of answering that is to look at the activity factor. Higher frequency means more electrons travel from power to ground per second, so without changing anything else except frequency: higher frequency --> more electrons moving around --> more energy used per second.

There are (sadly) other effects that cause the power-vs-clock-frequency relationship to be less linear than that.
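A minimal sketch of that relationship using the textbook dynamic-power formula P = alpha * C * V^2 * f (the capacitance, voltages, and frequencies below are made-up illustrative values, not real chip parameters):

Code:
# Dynamic switching power: P = alpha * C * V^2 * f
# alpha = activity factor, C = switched capacitance, V = supply voltage, f = clock.
# At a fixed voltage, power is linear in frequency; in practice higher clocks also
# need a higher voltage, which is one of the "other effects" mentioned above.

def dynamic_power_w(alpha, cap_farads, volts, freq_hz):
    return alpha * cap_farads * volts ** 2 * freq_hz

ALPHA, CAP = 0.1, 1e-8  # hypothetical activity factor and switched capacitance

# One fast core vs two slower cores (the 5 GHz vs 2 x 2.5 GHz example earlier),
# assuming the slower cores can also run at a lower voltage:
one_fast = dynamic_power_w(ALPHA, CAP, volts=1.3, freq_hz=5.0e9)
two_slow = 2 * dynamic_power_w(ALPHA, CAP, volts=1.0, freq_hz=2.5e9)

print(f"1 x 5.0 GHz @ 1.3 V: ~{one_fast:.1f} W")
print(f"2 x 2.5 GHz @ 1.0 V: ~{two_slow:.1f} W")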
__________________
post count = post count + 0.999.....
(\__/)
(='.'=)This is Bunny. Copy and paste bunny into your
(")_(")signature to help him gain world domination.

Last edited by TuxDave; 12-11-2012 at 05:29 PM.
TuxDave is offline   Reply With Quote
Old 12-11-2012, 05:36 PM   #83
Idontcare
Administrator
Elite Member
 
Idontcare's Avatar
 
Join Date: Oct 1999
Location: 台北市
Posts: 20,411
Default

Quote:
Originally Posted by Concillian View Post
And what happened to TDP? What's the increase if it's normalized to the CPU-only power budget?

That was an era in which the CPU-only power budget exploded and architectures regressed in any measurable computations-per-MHz metric. We're now in an era where the CPU power budget shrinks every generation to allow for bigger GPU power budgets and more integration of motherboard components, while compute performance per MHz is improving. Of course the frequency growth in your example time-frame is huge, but compute efficiency (computational output per unit of power input) is a significantly more meaningful metric, and I'm willing to bet my house that by that metric the growth in your example time-frame looks significantly less favorable than the last (or next) 4 years.
Pentium II 233MHz reportedly used 34.8W on 350nm.

Pentium 4 2.0GHz reportedly used 71.8W on 180nm.

So power basically doubled while clockspeeds increased 8.6x, all from a mere 2 nodes of shrinking.

That is actually rather remarkable considering how little is gained from a process node nowadays.

For comparison, on 22nm if you triple the power budget for the 3770k it buys an extra 1.3GHz of clockspeed (3.5GHz -> 4.8GHz).
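Putting those figures side by side, just to make the arithmetic explicit (nothing here beyond the numbers quoted above):

Code:
# Clock speed gained per factor of power, using only the figures quoted above.

clock_ratio_old = 2000 / 233   # Pentium II 233 MHz -> Pentium 4 2.0 GHz, ~8.6x
power_ratio_old = 71.8 / 34.8  # 34.8 W -> 71.8 W, ~2.1x

clock_ratio_new = 4.8 / 3.5    # 3770K, 3.5 GHz -> 4.8 GHz, ~1.37x
power_ratio_new = 3.0          # ~3x the power budget, as stated above

print(f"1997-2001: {clock_ratio_old:.1f}x clock for {power_ratio_old:.1f}x power")
print(f"3770K OC : {clock_ratio_new:.2f}x clock for {power_ratio_new:.1f}x power")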


Last edited by Idontcare; 12-11-2012 at 05:39 PM.
Idontcare is offline   Reply With Quote
Old 12-11-2012, 06:40 PM   #84
Lepton87
Golden Member
 
Lepton87's Avatar
 
Join Date: Jul 2009
Location: Poland(EU)
Posts: 1,709
Default

Quote:
Originally Posted by boxleitnerb View Post
Most 2500K and 2600K could hit 4.5 GHz, too. What was the model name of that 4.5 GHz Xeon btw? Just because Intel only released one CPU of that kind, doesn't mean anything.
If you want to stay at the same process, take Yorkfield -> Nehalem. Not higher clocks, but a hefty 30% higher IPC (in games).

10% more performance is disappointing in my opinion. I don't doubt Intel could have put 10% higher clocks into a 95W TDP envelope if they had wanted to.
The Xeon X5698. It was actually 4.4 GHz, I remembered it wrong, but it's still the fastest stock-clocked x86 CPU that has ever been released. From what I remember it was OEM-only, so you couldn't just buy it.
__________________
i5 2600K@4778MHz(47x101.7MHz) 1.45V,Noctua NH-D14, Asus Maximus IV Extreme, 8GB Corsair 1866MHz, Gigabyte GTX Titan SLI, 2x Corsair MX100 256 in Raid 0, 2xSeagate 3TB 7200RPM in RAID 0, Sandforce 2 120GB + 2TB WD Caviar Green, Seagate 1TB 7200RPM, BE Quiet 1200W, dell u2711
Lepton87 is offline   Reply With Quote
Old 12-11-2012, 06:44 PM   #85
Torn Mind
Platinum Member
 
Torn Mind's Avatar
 
Join Date: Nov 2012
Location: Maryland
Posts: 2,311
Default

Quote:
Originally Posted by Fjodor2001 View Post
All the way from 1980 to 2005 we saw steady increases in CPU frequency with each new CPU generation. This was responsible for much of the performance increase during that period.

For example we went from:

Pentium II 233 MHz (May 7, 1997)
Pentium II 450 MHz (August 24, 1998)
Pentium III 800 MHz (December 20, 1999)
Pentium 4 2.0 GHz (August 27, 2001)

So looking at the CPU frequency alone, the performance increased by almost 1000% in about 4 years!

Sure, there may be bottlenecks and uArch differences, so it's just an approximation, but still...

What is the reason the CPU frequency has topped out at current levels? Is there any physical explanation?
No. Hz measures "pulses", not speed; its unit is 1/second. To get a speed, you need an actual quantity in the numerator, e.g. distance or instructions.

Yeah, that Pentium 4 2.0 GHz gets ABSOLUTELY destroyed by the paltry-looking Celeron G4xx series.

That Pentium 4 at about 3.0 GHz would also offer performance comparable to Dothan processors clocked at 1.86 GHz. The Pentium 4 is actually the reason processor makers realized there is a physics wall when it comes to increasing clock speed.


The point is, using clock speed as a measure of performance ONLY works with processors that have the exact same microarchitecture. Start comparing processors with different microarchitectures, and you gain absolutely NO information about which processor performs better.
A Prescott at 3.2 GHz or a Celeron G550 at 2.6 GHz? The answer is the Celeron, because it offers superior instructions per second, which is derived from multiplying IPC by clock speed.

http://en.wikipedia.org/wiki/Clock_rate
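A toy illustration of that instructions-per-second point (the IPC values are hypothetical placeholders chosen purely for illustration, not measured figures for either chip):

Code:
# Instructions per second = IPC * clock frequency.
# The IPC values are hypothetical placeholders; only the structure matters.

def instructions_per_second(ipc, freq_ghz):
    return ipc * freq_ghz * 1e9

deep_pipeline_chip = instructions_per_second(ipc=0.7, freq_ghz=3.2)  # "Prescott-like"
wide_modern_chip = instructions_per_second(ipc=1.8, freq_ghz=2.6)    # "G550-like"

print(f"3.2 GHz, low IPC : {deep_pipeline_chip / 1e9:.2f} billion instructions/s")
print(f"2.6 GHz, high IPC: {wide_modern_chip / 1e9:.2f} billion instructions/s")
# Despite the lower clock, the higher-IPC core retires more instructions per
# second - which is why clock speed alone tells you nothing across uArchs.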
__________________
SR061| Asrock H77M | 2x2GB G.Skill 1333Mhz NS RAM | PowerSpec TX-606 Case| 500GB 7200RPM Seagate Drive| Antec Eartwatts EA-500 (2006) | Asus DVD Burner | parallell and COM port header | Old Dell Keyboard
http://www.heatware.com/eval.php?id=93090

Last edited by Torn Mind; 12-11-2012 at 06:47 PM.
Torn Mind is offline   Reply With Quote
Old 12-11-2012, 06:44 PM   #86
Concillian
Diamond Member
 
Join Date: May 2004
Location: Dublin, CA
Posts: 3,653
Default

Quote:
Originally Posted by Idontcare View Post
Pentium II 233MHz reportedly used 34.8W on 350nm.

Pentium 4 2.0GHz reportedly used 71.8W on 180nm.

So power basically doubled while clockspeeds increased 8.6x, all from a mere 2 nodes of shrinking.

That is actually rather remarkable considering how little is gained from a process node nowadays.

For comparison, on 22nm if you triple the power budget for the 3770k it buys an extra 1.3GHz of clockspeed (3.5GHz -> 4.8GHz).

You need to show the other version... the one with the arrows for +14% at the same power consumption and the -2x% power at the same clock speed.

That's still not the whole story though...

P4 < P3 in IPC.
A P4 at 2 GHz performs more like a P3 at 1.5 GHz or so, so there's another chunk of percentage lost.
By the same token, for any reasonable performance metric IB is (slightly) better than SB at the same clock speed. Clock speed is a red herring; what matters is usable performance relative to the power required.

So in terms of actual performance you go from 266 to roughly 1500 (~5.7x) while power approximately doubles, so your metric of performance gain over TDP increase is really only ~2.8x.

This is far less impressive than the initial 266 vs. 2000 MHz comparison. It's world-class, marketing-level manipulation to make current CPU advancements look bad compared to "the good ol' days."

The same comparison of 4 years ago vs. IB at release:
i7 940 vs. i5 2400 (Cinebench R10 single-thread, from Bench) = 1.3x on the newer core at a very slightly higher clock
TDP: 130W vs. 77W => 0.6x
1.3 / 0.6 => ~2.2x
That's with very little MHz difference between the two CPUs, and the newer chip has a large chunk of its die dedicated to the GPU, so 77W is probably too high a TDP to use in that comparison. Suddenly we're really close to the 2.8x we saw from 1997 to 2001, maybe even higher.

CPU advancements haven't slowed as much as people want to believe. The issue is that "we" (as a society) don't need faster CPUs for most tasks, so the "advancement budget" is going somewhere other than CPUs. If faster CPUs were as necessary as they were in 2000 for normal tasks, you can bet we'd still be seeing huge advancements in CPU capability.
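The ratio arithmetic from this post spelled out, using only the figures quoted above:

Code:
# Performance gain divided by TDP growth, per the figures in this post.

old_gain = (1500 / 266) / 2.0    # ~266 -> ~1500 "effective MHz", power ~doubled
new_gain = 1.3 / (77 / 130)      # 1.3x Cinebench R10 1T, TDP 130 W -> 77 W

print(f"1997-2001 era    : ~{old_gain:.1f}x performance per unit of TDP growth")
print(f"i7 940 -> i5 2400: ~{new_gain:.1f}x performance per unit of TDP change")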

Last edited by Concillian; 12-11-2012 at 07:07 PM.
Concillian is offline   Reply With Quote
Old 12-11-2012, 06:54 PM   #87
Torn Mind
Platinum Member
 
Torn Mind's Avatar
 
Join Date: Nov 2012
Location: Maryland
Posts: 2,311
Default

Quote:
Originally Posted by Fjodor2001 View Post
Well, as mentioned in my previous post, the CPU frequency increased from 233 MHz to 2.0 GHz in the 4 years from 1997 to 2001, i.e. it increased by about 1000%.

Do you really think we could repeat the same journey in the next 4 years if the thermal budget were spent differently? That is, we'd be at 35 GHz in 2016. I don't think we'll be anywhere near that, regardless of thermal budget decisions or TDP increases.

So clearly the CPU frequency has topped out compared to how things progressed during 1985-2005. And clearly there must be a physical reason for it, otherwise we would have seen much larger CPU frequency increases from one CPU generation to the next during the last 4-6 years.
Hardcore clock-speed increases began in the Pentium 4 era, and Intel was even trying to reach 7 GHz with Tejas and Jayhawk. They stopped and came to their senses, though. If those had been released, Intel would have been seriously burned.

http://en.wikipedia.org/wiki/Tejas_and_Jayhawk

Quote:
Tejas was slated to operate at frequencies of 7 GHz or higher, twice the clock speed of the fastest-clocked Core 2 processor, which is clocked at 3.5 GHz. However, Tejas would likely have performed worse, as it would have executed fewer instructions per clock cycle, and it would have run hotter as well with a TDP much higher than the Prescott core of Pentium 4. The CPU was cancelled late in its development after it had reached its tapeout phase.[1]
__________________
SR061| Asrock H77M | 2x2GB G.Skill 1333Mhz NS RAM | PowerSpec TX-606 Case| 500GB 7200RPM Seagate Drive| Antec Eartwatts EA-500 (2006) | Asus DVD Burner | parallell and COM port header | Old Dell Keyboard
http://www.heatware.com/eval.php?id=93090
Torn Mind is offline   Reply With Quote
Old 12-11-2012, 07:10 PM   #88
lopri
Elite Member
 
lopri's Avatar
 
Join Date: Jul 2002
Posts: 9,765
Default

Quote:
Originally Posted by boxleitnerb View Post
If we look at the desktop, the fastest CPU in the performance segment was the i7-880 at 3.06 GHz. The 2600K, its successor, clocked 10% higher and had 10-15% higher IPC, too.

My point is, we haven't seen these sizeable jumps since Sandy Bridge.
Correct. Northwood -> Hammer -> Conroe -> Sandy are the only meaningful jumps in IPC for me. Admittedly there was a multi-core revolution during the Hammer/Conroe era, so I can cut some (large) slack for that. But there hasn't been anything interesting to me since the original Nehalems and Thubans. Sandy is fast, to be sure, but kind of boring. Ivy is... ugh. Bulldozer? LOL.
lopri is offline   Reply With Quote
Old 12-11-2012, 07:24 PM   #89
hokies83
Senior Member
 
hokies83's Avatar
 
Join Date: Oct 2010
Posts: 835
Default

Quote:
Originally Posted by moonbogg View Post
It will be mine as well, so long as I can get 70fps instead of 60fps (that would be siiiiick).
No, that sux XD, I have a 120Hz monitor XD

I can resell my 1155 stuff and 3770K for top dollar, so why not go 4770K?
__________________
MM Ascension
Gigabyte G1 Sniper 3
I7 3770k 5.1ghz 24/7 with H100
G Skill Trident X series 2500mhz 2x4gb 2x Gtx 680 1350mhz/+500 mem
G19 kb m57 mouse
Bose companion 3 speakers Yamakasi Catleap 2560x1440 Ips
hokies83 is offline   Reply With Quote
Old 12-11-2012, 10:12 PM   #90
IntelUser2000
Elite Member
 
IntelUser2000's Avatar
 
Join Date: Oct 2003
Posts: 3,494
Default

Quote:
Originally Posted by Concillian View Post
P4 < P3 in IPC.
A P4 at 2 GHz performs more like a P3 at 1.5 GHz or so, so there's another chunk of percentage lost.
This isn't true, because the 266 MHz chip is a Pentium II. The Pentium 4 falls behind the Pentium IIIs with on-die 256KB L2, but I doubt it'd be much behind a Pentium II with off-die L2 cache and a 66 MHz FSB, which is half that of the Pentium III!

Here's what I think Intel did the past couple of years with frequency and power.

Intel's tricks to clock speed increase at "same power"

Penryn to Nehalem:

Trick #1: Notebooks went from a 25W standard-voltage chip + 10W MCH to 35W combined. Because the MCH has a harder time reaching its TDP than the CPU cores do, the CPU gained "free" clock speed from the thermal headroom of the extra few watts the MCH doesn't use.

#2: Turbo! While peak power usage stayed the same, Turbo increased average power use at higher (but not peak) loads. Since the chip can clock down to base when necessary, it's "free" again.

Nehalem to Sandy Bridge:

-Power tricks: things like the physical register file and the uop cache were used to decrease power rather than to increase performance. That allowed the base frequency to go up while Turbo stayed nearly the same.

Sandy Bridge/Ivy Bridge to Haswell:

-My prediction is that the TDP increase is being used to push the Turbo frequencies closer to the maximum Turbo frequency. Right now the 3770K goes 3.5/3.6/3.7/3.8/3.9 GHz. The 4770K might be 3.5/3.8/3.8/3.8/3.9 GHz. That means there may be frequency gains in scenarios where lots of cores and threads are active.
__________________
Core i7 2600K + Turbo Boost | Intel DH67BL/GMA HD 3000 IGP | Corsair XMS3 2x2GB DDR3-1600 @ 1333 9-9-9-24 |
Intel X25-M G1 80GB + Seagate 160GB 7200RPM | OCZ Modstream 450W | Samsung Syncmaster 931c | Windows 7 Home Premium 64-bit | Microsoft Sidewinder Mouse | Viliv S5-Atom Z520 WinXP UMPC
IntelUser2000 is offline   Reply With Quote
Old 12-11-2012, 10:56 PM   #91
IntelUser2000
Elite Member
 
IntelUser2000's Avatar
 
Join Date: Oct 2003
Posts: 3,494
Default

Quote:
Originally Posted by inf64 View Post
All summed up for GT2 vs HD 4000: ~10%(?) more IPC, 4% higher clock and 25% more EUs. In performance numbers: 1.1 x 1.04 x 1.25 = 1.43, or 43% faster than HD 4000. GT3 in turn would be 1.43 x 1.33 = 1.9x, or 90% faster than HD 4000 (if it hits the desktop).

The question: will 43% more GPU performance than the HD 4000 be enough to trigger the memory BW bottleneck?
I think even the HD 4000 can benefit from faster memory and things like the on-package DRAM. But early documents suggest the absolute performance improvement may be slightly less than you suggest. The pure FLOPs per EU don't change, meaning the changes elsewhere are there to shore up its weaknesses rather than to raise the peak. For example, Intel states that in Haswell the graphics unit's texture performance may go up by as much as 4x in some cases, which will help in texture-bound games.

Early leaks suggested 15-25% for GT2 and 2x for GT3. If that's for mobile, desktop is probably better, since mobile Ivy Bridge clocks relatively better against mobile Haswell than the desktop parts do against each other.

Think 20-30% instead. The bottleneck isn't all or nothing though, because it varies from application to application, scenario to scenario, and even frame by frame! The fact that Ivy Bridge's HD 4000 scales when going to DDR3-1866 means it's somewhat bottlenecked, but not that much. Perhaps it'll become as sensitive as the fastest desktop Llano is.
__________________
Core i7 2600K + Turbo Boost | Intel DH67BL/GMA HD 3000 IGP | Corsair XMS3 2x2GB DDR3-1600 @ 1333 9-9-9-24 |
Intel X25-M G1 80GB + Seagate 160GB 7200RPM | OCZ Modstream 450W | Samsung Syncmaster 931c | Windows 7 Home Premium 64-bit | Microsoft Sidewinder Mouse | Viliv S5-Atom Z520 WinXP UMPC

Last edited by IntelUser2000; 12-11-2012 at 11:00 PM.
IntelUser2000 is offline   Reply With Quote
Old 12-11-2012, 11:21 PM   #92
Khato
Senior Member
 
Join Date: Jul 2001
Location: Folsom, CA
Posts: 886
Default

Quote:
Originally Posted by IntelUser2000 View Post
Think 20-30% instead. The bottleneck isn't all or nothing though, because it varies from application to application, scenario to scenario, and even frame by frame! The fact that Ivy Bridge's HD 4000 scales when going to DDR3-1866 means it's somewhat bottlenecked, but not that much. Perhaps it'll become as sensitive as the fastest desktop Llano is.
It is indeed a question of what the actual bottlenecks in HSW graphics are. I'm still hoping it might fix whatever bottleneck causes IVB to go from being competitive with Trinity in some games to roughly half as fast in others. Because if that bottleneck is gone, it's a lot more than a 20-30% performance gain that could be seen... Sadly, I doubt that will be the case.
Khato is offline   Reply With Quote
Old 12-12-2012, 01:17 AM   #93
boxleitnerb
Platinum Member
 
Join Date: Oct 2011
Posts: 2,514
Default

Quote:
Originally Posted by lopri View Post
Correct. Northwood -> Hammer -> Conroe -> Sandy are the only meaningful jumps in IPC for me. Admittedly there was a multi-core revolution during the Hammer/Conroe era, so I can cut some (large) slack for that. But there hasn't been anything interesting to me since the original Nehalems and Thubans. Sandy is fast, to be sure, but kind of boring. Ivy is... ugh. Bulldozer? LOL.
Nehalem also improved IPC considerably if you look at today's applications and games, by about 30% or so.
boxleitnerb is offline   Reply With Quote
Old 12-12-2012, 03:15 AM   #94
meloz
Senior Member
 
meloz's Avatar
 
Join Date: Jul 2008
Posts: 260
Default

What happened to this forum? Why are so many people (erroneously) correlating Haswell's 84 watt TDP with the 'stagnant' clock it shares with IVB, and then acting as if the sky is falling?

By themselves, neither the TDP nor the clock speed says anything about performance unless we know the IPC, and we do not know Haswell's IPC. This is all the more true since Haswell is a 'tock', a new microarchitecture, and not a shrink-plus like IVB.

With a new arch, it is entirely possible to increase performance while staying 'stagnant' at a given frequency and core count.

And some people are even complaining about a paltry 7 watt increase in TDP on a desktop part? What happened to performance per watt? If system performance has increased correspondingly, we have nothing to complain about.

Besides, with Haswell Intel is moving more and more functionality from the motherboard onto the CPU, so it is better to compare platform power consumption, which will be at least 20% lower (for desktops) according to Intel's own presentation at IDF. So again, why the tears?

I would be more than happy with a 'mere' 10% performance increase on the CPU side; for us Linux users Haswell is all about the iGPU anyway. That's where I have higher expectations. 10% better CPU and 25% better iGPU along with an overall 20% lower energy consumption is a solid upgrade (as long as Intel does not jack up the prices).

I strongly urge members to read: Intel's Haswell Architecture Analyzed: Building a New PC and a New Intel




Quote:
Originally Posted by boxleitnerb View Post
The one thing that raises some doubt about the accuracy of this chart is the presence of the "4600" series iGPU on all chips. This is unlike Intel; they like to segment. Could they be using the "4600" name as a placeholder until they are prepared to reveal more?

But then again, I note that the 'K' SKUs have VT-d disabled in typical Intel fashion. So maybe this chart is authentic after all! (har har)
meloz is offline   Reply With Quote
Old 12-12-2012, 05:07 AM   #95
ShintaiDK
Lifer
 
ShintaiDK's Avatar
 
Join Date: Apr 2012
Location: Copenhagen
Posts: 10,738
Default

Same with the BGA socket rumour. It seems a lot of people on this forum search for drama that ain't there.
__________________
Anandtech forums=Xtremesystems forums
ShintaiDK is offline   Reply With Quote
Old 12-12-2012, 06:57 AM   #96
Idontcare
Administrator
Elite Member
 
Idontcare's Avatar
 
Join Date: Oct 1999
Location: 台北市
Posts: 20,411
Default

Quote:
Originally Posted by meloz View Post
What happened to this forum?
Folks are bored, so the mundane and the absurd get equal billing as bona fide content.
Idontcare is offline   Reply With Quote
Old 12-12-2012, 02:46 PM   #97
Fjodor2001
Golden Member
 
Join Date: Feb 2010
Posts: 1,482
Default

Quote:
Originally Posted by meloz View Post
What happened to this forum? Why are so many people (erroneously) correlating Haswell's 84 watt TDP with the 'stagnant' clock it shares with IVB, and then acting as if the sky is falling?
Turn it around. Why are so many people here vigorously defending the low CPU performance increases we've seen during the last 4 years, compared to how things progressed all the way from 1985 to around 2003 (plus a one-time jump in 2006 with the transition to the new Conroe uArch)?

Do you have to create some hallelujah spirit to motivate purchasing Haswell? I don't think so. It has plenty of benefits, such as a better iGPU, an integrated VRM, and lower power consumption for some use cases (mainly applicable to ultrabooks/laptops/etc.).

But when it comes to pure CPU performance increases... sorry, it's nowhere near what we saw going from one CPU generation to the next during the golden years. Is that so hard to admit?
Fjodor2001 is offline   Reply With Quote
Old 12-12-2012, 03:42 PM   #98
Torn Mind
Platinum Member
 
Torn Mind's Avatar
 
Join Date: Nov 2012
Location: Maryland
Posts: 2,311
Default

Quote:
Originally Posted by Fjodor2001 View Post
Turn it around. Why are so many people here vigorously defending the low CPU performance increases we've seen during the last 4 years, compared to how things progressed all the way from 1985 to around 2003 (plus a one-time jump in 2006 with the transition to the new Conroe uArch)?

Do you have to create some hallelujah spirit to motivate purchasing Haswell? I don't think so. It has plenty of benefits, such as a better iGPU, an integrated VRM, and lower power consumption for some use cases (mainly applicable to ultrabooks/laptops/etc.).

But when it comes to pure CPU performance increases... sorry, it's nowhere near what we saw going from one CPU generation to the next during the golden years. Is that so hard to admit?
So, you're still clinging to the megahertz myth? There's a reason benchmarks are used to gauge performance instead of clock speed: benches are more accurate. Even the inconsistent PassMark is more useful than looking at "1.6 GHz", except when talking about processors with identical microarchitectures.

The gains of the Pentium 4's NetBurst architecture were not astronomically high compared to its Pentium III predecessors, and lower-clocked Pentium Ms would provide the same performance as a highly clocked Pentium 4.

Four years ago, Nehalem had just been released in November and offered substantial performance increases over Conroe along with lower power consumption. Now we are at Ivy Bridge, where the i5s are unequivocally faster than the QXXXX series.
__________________
SR061| Asrock H77M | 2x2GB G.Skill 1333Mhz NS RAM | PowerSpec TX-606 Case| 500GB 7200RPM Seagate Drive| Antec Eartwatts EA-500 (2006) | Asus DVD Burner | parallell and COM port header | Old Dell Keyboard
http://www.heatware.com/eval.php?id=93090
Torn Mind is offline   Reply With Quote
Old 12-12-2012, 04:18 PM   #99
Smartazz
Diamond Member
 
Join Date: Dec 2005
Posts: 6,128
Default

We didn't see the biggest jump with Sandy Bridge, yet it's considered a great architecture.
__________________
i5 2500K@4.6GHz, 16GB G.SKILL 1600MHz, R9 290x, Seasonic X850, X-Fi Fatal1ty, Samsung 830 and 840 with Antec 1200.
Retina MacBook Pro 15", 2.6GHz, 16GB, 512GB SSD
Achieva Shimian, Das Keyboard, Logitech G400 and Razer Scarab.
Smartazz is offline   Reply With Quote
Old 12-12-2012, 04:24 PM   #100
Fjodor2001
Golden Member
 
Join Date: Feb 2010
Posts: 1,482
Default

Quote:
Originally Posted by Torn Mind View Post
So, you're still clinging to the megahertz myth? There's a reason benchmarks are used to gauge performance instead of clock speed: benches are more accurate. Even the inconsistent PassMark is more useful than looking at "1.6 GHz", except when talking about processors with identical microarchitectures.

The gains of the Pentium 4's NetBurst architecture were not astronomically high compared to its Pentium III predecessors, and lower-clocked Pentium Ms would provide the same performance as a highly clocked Pentium 4.

Four years ago, Nehalem had just been released in November and offered substantial performance increases over Conroe along with lower power consumption. Now we are at Ivy Bridge, where the i5s are unequivocally faster than the QXXXX series.
You don't need to tell me Hz isn't everything. But still, it matters, a lot. And going from the PII 233 MHz to the P4 2.0 GHz we did see an IPC increase too, in addition to an 8.6x CPU frequency increase.

Going from Yorkfield to IB we've perhaps seen a bigger IPC increase. But the frequency only went from 2.83 GHz (Q9550) to 3.5 GHz (3770K), about a 1.24x increase. So the IPC increase from Yorkfield to IB is nowhere near enough to compensate for the much larger frequency increase we saw going from the PII to the P4.

To sum it up: show me benchmarks where the relative CPU performance increase going from the PII 233 MHz (1997) to the P4 2.0 GHz (2001) is lower than that going from the Q9550 (2007/2008) to the 3770K (2012), and then I'll believe you.
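Just to make the arithmetic in this comparison explicit (the IPC ratios below are hypothetical placeholders to show the structure of the argument, not measured values):

Code:
# Relative performance ~= frequency ratio * IPC ratio.
# The IPC ratios are hypothetical placeholders, only there to show the structure.

def perf_ratio(freq_ratio, ipc_ratio):
    return freq_ratio * ipc_ratio

# PII 233 MHz -> P4 2.0 GHz: 8.6x frequency, even assuming zero IPC gain:
old_jump = perf_ratio(freq_ratio=2000 / 233, ipc_ratio=1.0)

# Q9550 -> 3770K: ~1.24x frequency, even with a generous assumed 1.5x IPC gain:
new_jump = perf_ratio(freq_ratio=3.5 / 2.83, ipc_ratio=1.5)

print(f"PII -> P4     : ~{old_jump:.1f}x")
print(f"Q9550 -> 3770K: ~{new_jump:.1f}x")
# The IPC gain would have to be roughly 7x for the modern frequency bump to
# match the older jump - which is the point being made above.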
Fjodor2001 is offline   Reply With Quote