the era of overclocking is ending


RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Resident Evil 5 and arguably Dragon Age: Origins were also both developed primarily for the consoles. So I think consoles have actually helped push developers into utilizing multiple CPU cores.

Actually, all previous Capcom console games were single-threaded: Lost Planet, Dead Rising, the Devil May Cry series, etc.
The Resident Evil 5 engine was developed separately for the PC from the ground up. It even has DX11. You can read more here:
ftp://download.intel.com/products/processor/corei7/ResidentEvil.pdf

The article specifically discusses how the MT Framework engine for consoles was single-threaded, and that it was only after Capcom worked directly with Intel that they were able to reprogram the engine to take advantage of multiple cores, i.e., eight threads.
 
Last edited:

VirtualLarry

No Lifer
Aug 25, 2001
56,574
10,211
126
More or less 400-430 FSB, with an insane FSB termination voltage of 1.45V+, which is too dangerous for a 45nm Penryn. The exception was the Abit IP35!!!

Thanks. I'll stick to 400 FSB then. I'm not even sure I have any control over VTT/FSB termination voltage on my Gigabyte P35-DS3R board. It has limited voltage controls compared to my DFI X48 board(s).
 

combust3r

Member
Jan 2, 2011
88
0
0
The last time I o/c'ed anything was an AMD K6-II, from 300 to 417MHz (450 was too much; I got garbled glyphs under DOS).

These days, except for o/c'ing the GPU to get a playable framerate, there isn't much sense in o/c'ing your CPU/RAM. I do compile a ton of code every day, but I don't mind waiting five minutes longer; most of the time I'm not even sitting in front of my laptop until it finishes the job.

I do, however, like to hit the sweet spot when purchasing components for a new build (I do that every 3-4 years), and I want them to last at least 2 years beyond that, so that whoever buys the old gear from me is happy too.

It's like tuning cars: you want to race and show that you can tune yours better than the others, and you like to drive faster than all the rest. Or you want to get more than what you paid for. The last reason is the most sane one...
 
Last edited:

Nintendesert

Diamond Member
Mar 28, 2010
7,761
5
0
Sure, we can still get K parts and go for 5GHz on water or whatever, but is there any need to overclock anymore? Frankly, I don't see any difference in day-to-day tasks between a stock and an overclocked SB anymore, even in games. Personally I feel the Core 2 Duo/Quad era was the last hurrah of overclocking, where the gains were huge and perceivable.



Some of us do more than just web browse.
 

PreferLinux

Senior member
Dec 29, 2010
420
0
0
Not really free if you're looking at the Intel-K models. You can't really overclock on SB without buying a K-model.

So if you want to have fun on a budget, get an AMD setup. Otherwise get a 2500K or 2600K and that's it for now on the 1155 platform.
I'm guessing that AMD will follow pretty quickly if they can compete with Intel on performance.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
I was thinking that I wasn't even going to bother overclocking when I upgraded my desktop rig to a Q9300. But that didn't last long. A quick change in the BIOS, FSB from 333 to 400, and I got 0.5GHz of free performance. I didn't even have to touch the default vcore.

Sorry, overclocking is addicting, and I'm not getting over it anytime soon. :)

Edit: I'm wondering if I can push it higher. I thought P35 mobos topped out at around 400MHz FSB, but I saw a comment from someone regarding an IP35-E they had at 485MHz FSB with a Q8200.

450 is typically the most that people can get on a P35 mobo with a quad, at least for 24/7 stability. I think that in testing I got my X3350/IP35 Pro up to ~463 into Windows, but no amount of voltage kept it stable over ~455. 450 was my sweet spot, and from all the people I've talked to over the years I haven't seen anybody reliable claim much higher than that. BTW, my two IP35-Es also get up pretty close to 450, but I've never been able to get either stable higher than that.
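For anyone wondering where the "free" half-gig in the quote above comes from, it's just FSB times the CPU multiplier. A rough sketch of the napkin math (the 7.5x multiplier is my assumption for the Q9300, so double-check your own chip):

```python
# Core 2 era napkin math: core clock = FSB (MHz) x CPU multiplier.
# The 7.5x multiplier below is an assumption for the Q9300; check your own CPU.
def core_clock_mhz(fsb_mhz, multiplier):
    return fsb_mhz * multiplier

stock = core_clock_mhz(333, 7.5)  # ~2500 MHz at the stock 333 MHz FSB
oced  = core_clock_mhz(400, 7.5)  # ~3000 MHz at 400 FSB, i.e. ~0.5 GHz "for free"
print(stock, oced)
```

The same math is why chasing 450+ FSB on a quad was such a big deal back then.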
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
More or less 400-430 FSB, with an insane FSB termination voltage of 1.45V+, which is too dangerous for a 45nm Penryn. The exception was the Abit IP35!!!

But be careful with the FSB termination voltage. I believe the stock is 1.2V on P35. My Q6600 could work stably at 433 FSB on a P35-UD3R but required +0.3V on FSB termination. That equates to 1.5V, which was way too high for my liking.

http://www.anandtech.com/weblog/showpost.aspx?i=428

"(Editor- Raja in case the police get involved) made the mistake of using a very high VTT termination voltage of 1.51V (VTT is used to terminate data lines between the MCH and CPU).

We know users are running VTT voltages even higher than ours on 45nm processors and probably have not had a problem yet. We will run high VTT voltages in short bursts to test the limits of the board and CPU. However, this is the first time we have tried anything over 1.45V on a 24+-hour basis to test application stability.

Let this be a warning – do not go over 1.4V maximum for 24/7 use!

This is our first 45nm Quad core processor we managed to kill outright during testing. We hope it is the last one too. The problem is that we also have a Q9300 that is on life support after experiencing a 36-hour run at 435FSB with VTT set to 1.45V. While our experiences might not represent results elsewhere, we thought our advice to just, “Say no to high VTT"."


I think I'm at 1.31V VTT on the Pro. VirtualLarry, PM me if you want all my settings.

I'm still pissed that Abit went kaput. That's the best mobo I've ever owned.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
I'm guessing that AMD will follow pretty quickly if they can compete with Intel on performance.

I don't think so. Intel has left the door wide open for AMD here. They'd be foolish to close out OC'ing unless they have a very strong design-related reason to do it. Intel, OTOH, has a strong reason to charge extra to allow OC'ing due to their dominant market position. Hopefully AMD will put enough pressure on Intel that they don't block OC'ing entirely in future generations.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Thanks. I'll stick to 400 FSB then. I'm not even sure I have any control over VTT/FSB termination voltage on my Gigabyte P35-DS3R board. It has limited voltage controls compared to my DFI X48 board(s).

Yes, you do have it. I had this board, Revision 2.0. There is an FSB voltage setting (along with PCIe voltage and MCH voltage). Stock FSB voltage on that board was 1.2V, and it allows for another +0.3V increase. So for a 45nm processor, make sure not to increase the FSB voltage by more than +0.2V if you're going beyond 400 FSB. 45nm Penryns might behave differently than my Q6600 did, though. I just remember that the Gigabyte P45-UD3R was really what you wanted for those 450-485 FSB 45nm Q9550 overclocks. Knowing my board would crap out near 434-435, I never went with Penryns.
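If it helps, here's the arithmetic behind that advice as a tiny sketch; the 1.2V stock figure and the 1.4V 24/7 ceiling are the numbers quoted in this thread, and everything else is just for illustration:

```python
# FSB termination (VTT) on these Gigabyte boards is set as an offset over stock.
# Stock is ~1.2 V on the P35 boards discussed here, and AnandTech's advice was
# 1.4 V max for 24/7 use on 45nm quads, so anything past +0.2 V is pushing it.
STOCK_VTT = 1.2      # volts (assumed typical for the P35 boards in this thread)
MAX_24_7_VTT = 1.4   # volts, per the quoted AnandTech warning

for offset in (0.1, 0.2, 0.3):
    vtt = STOCK_VTT + offset
    verdict = "fine for 24/7" if vtt <= MAX_24_7_VTT else "too high for 24/7"
    print(f"+{offset:.1f} V offset -> {vtt:.1f} V VTT ({verdict})")
# +0.3 V lands at 1.5 V, the territory where AnandTech killed a chip outright.
```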
 
Last edited:

jimhsu

Senior member
Mar 22, 2009
705
0
76
If the era of overclocking is over, then why is "Grandma" doing it now?

http://forums.anandtech.com/showthread.php?t=2134687

Seems like overclocking is finally going mainstream, actually.

Rephrasing the OP:

The era of enthusiast overclocking -- LN2 setups, insane voltages, scoreboards, vapor-phase cooling, etc. -- is coming to a close. The era of mainstream overclocking -- Turbo Boost, Sandy Bridge, overclocking 1GHz on stock coolers -- is just beginning.

I'm sure enthusiasts will find something to compete on, as always -- a passively cooled OC competition, anyone?
 
Last edited:

PUN

Golden Member
Dec 5, 1999
1,590
16
81
I stopped extreme overclocking (I only overclock at the default voltage now) because:

1) I don't need much horsepower for the work that I do (gaming, documents, internet, email, pictures, and occasional encoding).
2) To be green.
3) Fewer fans, less noise.

I just upgrade the video cards to run the games as smoothly as possible.
 

PreferLinux

Senior member
Dec 29, 2010
420
0
0
Rephrasing the OP:

The era of enthusiast overclocking -- LN2 setups, insane voltages, scoreboards, vapor-phase cooling, etc. -- is coming to a close. The era of mainstream overclocking -- Turbo Boost, Sandy Bridge, overclocking 1GHz on stock coolers -- is just beginning.

I'm sure enthusiasts will find something to compete on, as always -- a passively cooled OC competition, anyone?
I think enthusiast overclocking may well be back with LGA 2011.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
It's only the end of an era if you exclusively purchase Intel CPUs. There was a time, before AMD gained prominence, when Intel was the only game in town when it came to overclocking (there was Cyrix, but they were a bit of a joke IMO).

Bulldozer is not far off, and if you want to overclock something low-end you can do quite well with a cheap AMD setup. I haven't felt the need for more performance than my Phenom II at 3.8GHz gives me.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Sure, we can still get K parts and go for 5GHz on water or whatever, but is there any need to overclock anymore? Frankly, I don't see any difference in day-to-day tasks between a stock and an overclocked SB anymore, even in games. Personally I feel the Core 2 Duo/Quad era was the last hurrah of overclocking, where the gains were huge and perceivable.

Ah, the "fast enough theory" (the positing that CPUs are fast enough already such that faster ones don't matter). The thing is, with faster processing you can do more impressive things. Software will continue to evolve, and every year software demands more. And besides, people don't like waiting. Sure, surfing the web is fast (although it could still be faster!), but how long does it take to install a big program? how long does it take to encode a 2 hour HD movie? Many areas have huge room for improvement.

However, software demands don't increase as fast as hardware speed improves. This is why some of that extra speed has been traded away to cut down on size and cost, and recently power consumption as well.

Consider that computers were once room-sized, took a very long time to perform the simplest of tasks, and were extremely expensive. Motherboards are shrinking to uATX, HDDs are shrinking to 2.5" SSDs, and components are being integrated into the CPU... But there is still so much more room for growth. Software will continue to grow in its demands, hardware will continue to outpace it, and hardware will go down in cost, size, and power consumption as a result. But we are a long, long way from a cellphone-sized device running high-definition games at photo-realistic quality.

And it is quite possible we will never get there; eventually miniaturization cannot progress any further. That will drastically reduce the rate of progress, and it will get harder and harder to design better architectures. Software will probably continue to grow in its demands, and we may see computers increasing in size again.

Beyond that point it becomes very hard to predict, but that point is already many decades into the future.
 
Last edited:

sxr7171

Diamond Member
Jun 21, 2002
5,079
40
91
In a sense it has already hit us, in that higher clock speeds are getting harder and harder to obtain. We're not doubling clock speed per generation like in the P2/P3 era, so the solution was to add cores. Now software has to catch up, and the future of software is all about parallelization. We want our software to run on 4, 8, or 16 cores, and massively in parallel on GPUs. The sorts of media processing that form the bulk of today's CPU usage are more easily run in parallel.

I have no idea how this will work for software/uses that are inherently serial in nature, though. Ultimately I think we may rely on massively parallel biological computers.
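Just to illustrate the first point -- why that kind of media work maps so naturally onto extra cores -- here's a toy sketch; the file names and the "encode" step are placeholders, not any real pipeline:

```python
# Media processing is often embarrassingly parallel: each clip (or frame, or tile)
# can be handled independently, so throughput scales with core count until I/O
# becomes the bottleneck.
from multiprocessing import Pool, cpu_count

def encode_clip(path):
    # Stand-in for a real transcode/encode step.
    return f"{path} -> {path}.encoded"

if __name__ == "__main__":
    clips = [f"clip_{i:03d}.raw" for i in range(32)]  # hypothetical input files
    with Pool(processes=cpu_count()) as pool:          # one worker per core
        results = pool.map(encode_clip, clips)
    print(results[:3])
```

The inherently serial stuff is exactly the part that doesn't fit this pattern, which is the open question.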
 

mv2devnull

Golden Member
Apr 13, 2010
1,526
160
106
The thing is, with faster processing you can do more impressive things. Software will continue to evolve, and every year software demands more.
Very true. However, there are two paths. The first is brute force: the software attempts to bluntly make use of the new hardware while still clinging to old methods. Actually, OC does help that path. The other path devises new algorithms to make the most of the new hardware's features. Even that benefits from additional cycles. The latter path is more demanding for the developer.

Either way, every last cycle and bit will get used sooner or later, and more remains more.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
In a sense it has already hit us, in that higher clock speeds are getting harder and harder to obtain. We're not doubling clock speed per generation like in the P2/P3 era, so the solution was to add cores. Now software has to catch up, and the future of software is all about parallelization. We want our software to run on 4, 8, or 16 cores, and massively in parallel on GPUs. The sorts of media processing that form the bulk of today's CPU usage are more easily run in parallel.

I have no idea how this will work for software/uses that are inherently serial in nature, though. Ultimately I think we may rely on massively parallel biological computers.

We will never have a biocomputer in the home. We might see a hybrid AI with biological components providing the intelligence and traditional chips for mathematical capability, but that is certainly not going to be something for the home user.

While brains are massively parallel, they are not built for running arbitrary code. And I don't see how in the world you would get ECC running on something like that. Even sub-sentient animals have personalities, and bio-computers would likewise be subject to moods.

Besides all that, an array of modern GPUs the same size as a hypothetical bio-computer would likely be capable of more FLOPS, massively distributed and all.

Very true. However, there are two paths. The first is brute force: the software attempts to bluntly make use of the new hardware while still clinging to old methods. Actually, OC does help that path. The other path devises new algorithms to make the most of the new hardware's features. Even that benefits from additional cycles. The latter path is more demanding for the developer.

Either way, every last cycle and bit will get used sooner or later, and more remains more.

Those are some astute observations. The limits of the brute-force approach are another reason why demands on hardware grew more slowly than hardware speed, which is why cost, size, and now power consumption get improved instead.

It should be noted, though, that I do consider new algorithms "a speed increase"; nowhere have I indicated that speed = MHz. Clock speed hasn't increased in years, but speed has exploded. Multi-core processors are theoretically capable of much, much greater overall computation per second.

wikipedia said:
the fastest six-core PC processor has a theoretical peak performance of 107.55 GFLOPS (Intel Core i7 980 XE) in double precision calculations. GPUs are considerably more powerful. For example, NVIDIA Tesla C2050 GPU computing processors perform around 515 GFLOPS[14] in double precision calculations, while the AMD FireStream 9270 peaks at 240 GFLOPS.[15] In single precision performance, NVIDIA Tesla C2050 computing processors perform around 1.03 TFLOPS, while AMD FireStream 9270 cards peak at 1.2 TFLOPS. Both NVIDIA's and AMD's consumer gaming GPUs may reach higher FLOPS. For example, AMD's Hemlock XT 5970[15] reaches 928 GFLOPS in double precision calculations with two GPUs on board, while the NVIDIA GTX 480 reaches 672 GFLOPS[14] with one GPU on board.
FLOPS continue to increase at a great pace, and parallel software is required to wring performance out of them. Developers do this not "just because", but because software needs to take advantage of it to remain competitive. The extra performance has a use.
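For context on where those numbers come from: theoretical peak is just cores x clock x FLOPs per core per cycle. The per-cycle figures in this sketch are my own rough assumptions (vendors count FMA, turbo, and SIMD width differently, which is why published peaks don't always line up):

```python
# Theoretical peak throughput: cores * clock (GHz) * FLOPs per core per cycle.
# The per-cycle figures below are rough illustrative assumptions, not vendor specs.
def peak_gflops(cores, clock_ghz, flops_per_core_per_cycle):
    return cores * clock_ghz * flops_per_core_per_cycle

# A 6-core 3.33 GHz CPU doing 4 double-precision FLOPs/cycle/core (128-bit SSE,
# one add + one multiply per cycle) lands in the ~80 GFLOPS ballpark:
print(peak_gflops(6, 3.33, 4))     # ~79.9

# A GPU gets there through width rather than clocks: ~448 stream processors at
# ~1.15 GHz averaging 1 DP FLOP/cycle each works out to ~515 GFLOPS, roughly the
# Tesla C2050 figure quoted above:
print(peak_gflops(448, 1.15, 1))   # ~515.2
```

Point being, none of that peak is reachable by a single thread; you only see it if the software is parallel.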
 
Last edited: