Comparison of Pentium G3258 Overclocking Results

know of fence

Senior member
May 28, 2009
Gleaned from AnandTech and condensed into one graph are the results from Crashtech's G3258 overclocking notes, Ian Cutress's write-up on the Pentium Anniversary Edition, and my own testing.
All three used aftermarket cooling. Measured Vcore may differ by ±5 mV from the set values used here.
Future updates will likely add more charts of temperature and power consumption, as well as sources.

[Chart: combined G3258 overclocking results (p2eH80f.png)]


I will also gladly include your data to get an even more representative sample. (Even a single fine-tuned stable voltage data point would be appreciated. Fine-tuned means within 5 mV of the point where the CPU becomes unstable; 4.2 GHz seems like a reasonable target.)

Update 1.1

One of the constant questions that comes up during overclocking is by how much to increase the voltage when going from one multiplier to the next. After some rather blind trial-and-error testing, it seems that even 5 mV steps are still a little wide, especially at lower clocks. Values are given in millivolts [mV] rather than volts (0.005 V = 5 mV) to avoid writing out all the zeros. This second chart shows every voltage jump I had to set, starting from 0.860 V @ 32x and going up by 20 mV to 0.880 V @ 33x.
[Chart: Vcore increase per 100 MHz multiplier step (6FVetPx.png)]


It is apparent that the voltage jumps need to keep increasing to keep the overclock stable. With finer-grained testing the graph would likely resemble a perfect stairway. To get an idea of the incline, a trend line is drawn. Its equation suggests that, starting from 15 mV, each jump should grow by exactly 3 mV, so the steps should be 18 mV, 21, 24, 27, 30 and so forth. 3 mV adjustments seem definitely preferable to the somewhat arbitrary 5 and 25 mV steps, at least for my rig. CPUs as well as cooling solutions have a range of properties and will thus produce different graphs/slopes. Still, further testing shows that Haswell's FIVR doesn't allow Vcore adjustments finer than 4 or 5 mV; the voltage always ends up at the closest default value. Future graphs should therefore use measured Vcore rather than set values.

Side note: The trend line equation treats the X-axis as simple integer steps rather than the actual clock rate of 3,200,000,000 Hz.

Second side note: In terms of differential calculus, working out the voltage delta over 100 MHz intervals is a crude way of approximating the first derivative of the original graph. If this first derivative is linear, as the trend line assumes, then the original chart is likely a parabola of sorts, with a formula along the lines of: voltage = something × [frequency]² + something.
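
To make the second side note concrete, here is a small sketch, assuming the trend-line reading above (first jump 18 mV, growing by 3 mV per 100 MHz step, starting from the stable 0.860 V @ 32x): summing linearly growing steps gives a Vcore curve that a second-order polynomial fits essentially exactly, i.e. a parabola.

```python
# Sketch: if each 100 MHz step needs a linearly growing voltage bump (per the trend
# line: 18, 21, 24, ... mV), the cumulative Vcore curve is quadratic in frequency.
# The 0.860 V @ 3.2 GHz starting point is from the post; the step sizes are the
# trend-line values, not individual measurements.
import numpy as np

steps = np.arange(1, 15)              # 14 steps of 100 MHz: 3.3 ... 4.6 GHz
dv = (15 + 3 * steps) / 1000.0        # 18, 21, 24, ... mV, converted to volts
freq = 3.2 + 0.1 * steps
vcore = 0.860 + np.cumsum(dv)

a, b, c = np.polyfit(freq, vcore, 2)  # the 2nd-order fit is essentially exact here
print("V(f) ≈ a·f² + b·f + c with (a, b, c) =", np.round([a, b, c], 3))
```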

Power Consumption

Higher clock rates and voltages directly raise power consumption, which is measured here with a simple and imprecise watt-meter. Getting up past 4.4 GHz, it takes a considerable power consumption increase of more than 10% to gain a modest 2.2% in performance. However, power consumption barely matters when using Speedstep and adaptive voltage, as these high states are rarely reached during normal use; but running the highest possible overclock will force the CPU fan to audibly speed up, which may be annoying. So 4.3 GHz looks like a good middle ground.
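
As a rough cross-check of why the top end gets expensive, the usual first-order CMOS approximation P ≈ C·V²·f can be applied to two of crashtech's data points from further down the thread (x43 at 1.125 V, x45 at 1.200 V). This ignores leakage and uncore power, so treat it as a ballpark sketch rather than a measurement.

```python
# Ballpark estimate using the classic dynamic-power relation P ~ C·V²·f.
# Vcore values are taken from crashtech's table below (x43 -> 1.125 V, x45 -> 1.200 V);
# leakage and uncore power are ignored.
def relative_dynamic_power(f1, v1, f2, v2):
    """Power at (f2, v2) relative to (f1, v1), assuming P is proportional to V²·f."""
    return (f2 / f1) * (v2 / v1) ** 2

ratio = relative_dynamic_power(4.3, 1.125, 4.5, 1.200)
print(f"~{(ratio - 1) * 100:.0f}% more power for "
      f"~{(4.5 / 4.3 - 1) * 100:.1f}% higher clock")   # roughly 19% vs. 4.7%
```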

[Chart: power consumption vs. frequency (GEqHrrb.png)]


A Simple Experiment

I had always wanted to do this little experiment, and I was surprised by just how neat the results were. Maybe you can figure it out.

[Chart: the experiment's result (uLaW38X.png)]


Update 1.2 - Offset Undervolting

Having looked at Vcore and at the Vcore differential above base frequency, the only remaining unknowns are the voltages between 800 and 3200 MHz. During testing I was surprised to see Vcore go way, way down to 0.477 V in a completely steady fashion.
Stock Speedstep voltages have also been recorded by forcing lower power states with Windows Power Management. Intel's Speedstep, which is basically power management by way of reducing voltage and frequency, only uses 15 of the 25 available multipliers.

[Chart: stock Speedstep voltages across multipliers (eMglRbp.png)]


If an overclock of, say, [4.3 GHz] is set using [variable frequency] and [adaptive voltage], this overclocked state becomes the (100%) CPU state while the stock speed of 3.2 GHz becomes (99%); I guess "%" only means that the scale goes up to 100.
There is a huge gap between stock voltages and the smallest stable voltage the CPU actually requires, and this gap only widens as the CPU speed is reduced.

[Chart: stock voltage vs. minimum stable voltage (rTDK2dM.png)]


So for people who were worried that setting a big negative offset might make their system unstable: you needn't fret. The margins only grow at lower speeds, and the lowest power states are very much overvolted. My magic number for the offset is "minus 0.183 V", which is exactly the difference between stock and stable at 3.2 GHz.
Applying this offset across the full frequency range is probably more important than the overclock voltage; it saves power while the CPU is idle, during video playback and other generally predictable, less demanding tasks, which is most of the time.
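
A minimal sketch of how a single fixed offset shifts every Speedstep state by the same amount. The -0.183 V offset and the 1.041 V stock reading at 3.2 GHz are from this thread; the lower P-state voltages in the example are placeholders, not measurements.

```python
# A fixed negative offset shifts every Speedstep state by the same amount.
# The -0.183 V offset and the 1.041 V stock value at 3.2 GHz are from this thread;
# the other stock voltages are placeholders, NOT measured values.
OFFSET_V = -0.183

stock_vid = {0.8: 0.750, 1.6: 0.850, 2.4: 0.950, 3.2: 1.041}  # GHz -> V

for freq, vid in sorted(stock_vid.items()):
    print(f"{freq:.1f} GHz: {vid:.3f} V stock -> {vid + OFFSET_V:.3f} V with offset")
```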

There is also a whole lot more going on in this graph: voltage rises linearly across the stock frequencies, while in the overclocking range the curve is visibly bent. The kink is somewhat exaggerated here because only 4.3 GHz is used in this graph, which otherwise shows just the actual Speedstep states.
So the default and overclocking clock ranges show different voltage behavior. How it changes and what makes it change will be the subject of the next update.

Update 1.3

It took about two A4 pages of tightly written stress-testing results to record the minimal voltages across the entire achievable frequency range. This is the natural CPU voltage-frequency curve for a 4th-generation Haswell Pentium G3258 Anniversary Edition. The curve is unedited except for a highlighted spot at which it stops being linear and starts to bend upwards, which is discussed in a follow-up topic.
[Chart: minimum stable voltage across the full frequency range (D0F3N6Y.png)]




Rather than measuring power at the wall wart, it's probably much easier to rely on the Package Power reported by the CPU. Also, isn't it quite cool to see a desktop CPU consume 24 W during heavy OCCT stress testing? Thanks to an 80 Plus Platinum rated (230 VAC) power supply, a mini-ITX board, 1.25 V DDR3 and undervolting, this is probably the lowest idle power (20 W) you can get on a desktop PC with discrete graphics (750 Ti). Granted, lowest idle power and silence were the primary goals of the component selection. This rig also includes an additional 2.5" HDD, because unlike 3.5" drives they are nearly inaudible during operation.
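
For anyone who wants to log Package Power themselves: on Linux the counter behind that reading is exposed through the intel_rapl powercap interface in sysfs. A minimal sketch, assuming the usual intel-rapl:0 package domain path (it can differ per system and typically needs root); on Windows, monitoring tools such as HWiNFO report the same value.

```python
# Sketch: derive package power from the CPU's RAPL energy counter instead of a
# wall-wart meter. Uses the Linux powercap sysfs interface; the path below is the
# common default and may differ on other systems (reading it usually needs root).
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"   # package domain, microjoules

def read_energy_uj():
    with open(RAPL) as f:
        return int(f.read())

e0 = read_energy_uj()
time.sleep(1.0)
e1 = read_energy_uj()
print(f"Package power: {(e1 - e0) / 1e6:.1f} W")      # µJ over 1 s -> watts
```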

[Chart: package power and idle power readings (3SRBo63.png)]

 

know of fence

Senior member
May 28, 2009
Ian's data is from the AT review; he included temps, power consumption and scores, though he skipped a few lower frequencies.
I used the set voltage (CPU Volts) to show that testing was done in big 5 mV adjustments or even bigger steps (25 mV in Ian's testing). Load voltage isn't all that different, plus/minus a couple of mV; the Haswell FIVR does a decent job of maintaining voltage under load.

That said, when testing with adaptive voltage, which I prefer for day-to-day operation, certain measured voltages can be quite different and up to 15 mV higher.
[Chart: G3258 OC results (G3258%20OC%20Results_575px.png)]

Brief AT Review of the Pentium: http://www.anandtech.com/show/8232/...ary-edition-review-the-intel-pentium-g3258-ae
 

crashtech

Lifer
Jan 4, 2013
User know of fence asked me to repost my G3258 results. I think it's a good idea to aggregate them here in one thread, and I would encourage others to do the same!

Test setup:

Gigabyte Z97X-UD5H
GSKILL 2x4GB F3-12800CL9D-8GBXL
be quiet! Straight Power 10 800W
Corsair H110
Samsung 850 EVO 500GB
Windows 10 Build 10525
Intel Burn Test V2
CPU-Z 1.73
Real Temp GT 3.70

VRIN was set to 1.900 for all tests; higher settings were attempted at x50, to no avail.
Above 4.6 GHz, LLC was set from "Auto" to "Extreme." Vring needed to be adjusted up manually above x40 to avoid BSODs, which may be an idiosyncrasy of my mobo. Vring was not determined as precisely as Vcore due to lack of time, so less voltage may or may not work at multipliers over 47. Observed voltage readings were never more than 2 mV from the set value, so I have only included the set values, which also looks tidier.


G3258 Overclocking Results:

Multi _ Vcore _ Vring __ Tcore(C)

32 ___ 1.080 _ Auto ___ 50 (Stock)
32 ___ 0.875 _ Auto ___ 44
33 ___ 0.875 _ Auto ___ 45
34 ___ 0.900 _ Auto ___ 45
35 ___ 0.925 _ Auto ___ 46
36 ___ 0.950 _ Auto ___ 48
37 ___ 0.975 _ Auto ___ 49
38 ___ 1.000 _ Auto ___ 50
39 ___ 1.025 _ Auto ___ 50
40 ___ 1.050 _ 1.050 __ 50
41 ___ 1.075 _ 1.050 __ 51
42 ___ 1.100 _ 1.050 __ 51
43 ___ 1.125 _ 1.050 __ 52
44 ___ 1.175 _ 1.050 __ 55
45 ___ 1.200 _ 1.050 __ 57
46 ___ 1.275 _ 1.075 __ 62
47 ___ 1.300 _ 1.100 __ 65
48 ___ 1.350 _ 1.125 __ 71
49 ___ 1.400 _ 1.200 __ 77
50 ___ 1.525 _ 1.250 __ BSOD

 

know of fence

Senior member
May 28, 2009
Thanks again. Testing stability is usually a steady and drawn-out process, yet it suddenly turned out to be really exciting once I noticed that the line gradually smoothes out the finer the adjustments become. In fact, if you see a bump in your line, that probably means shaving off a couple of millivolts won't hurt stability; a dip, on the other hand, means those settings may not be quite stable.

But more importantly, this demonstrates that the electrical properties of a CPU are very much proportional and predictable. All it takes is determining the lowest stable voltage at a single, not-too-high frequency (say 4.1 GHz) to know where your CPU stands in the silicon lottery: is it more of a dud or a golden sample?
I've read in an OC guide (citation needed) that this is exactly what motherboard manufacturers do in their own quick testing, so this may not be news to some people.

In the chart you can see that it is around 3.8 to 4.1 GHz that the lines start to diverge. Just from these three data sets you could already predict whether a CPU will reach a working 4.6, 4.7 or 4.9 GHz overclock, depending on its minimum stable voltage at 4.1 GHz.
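
As a quick illustration of that kind of prediction, one can fit a quadratic to the mid-range points of crashtech's table above and extrapolate toward the top multipliers. This is only a rough sketch; on this particular data the extrapolation lands a few hundredths of a volt below what x46-x49 actually required.

```python
# Sketch of "predict the top end from the midrange": fit a quadratic to crashtech's
# x38-x45 points (from the table above) and extrapolate. Treat the output as a rough
# guide only; here it comes out a few hundredths of a volt low at the very top.
import numpy as np

mult  = np.array([38, 39, 40, 41, 42, 43, 44, 45])
vcore = np.array([1.000, 1.025, 1.050, 1.075, 1.100, 1.125, 1.175, 1.200])

fit = np.poly1d(np.polyfit(mult, vcore, 2))
for m in (46, 47, 48, 49):
    print(f"x{m}: predicted ≈ {fit(m):.3f} V")
# e.g. x47 comes out around 1.28 V versus the 1.300 V crashtech actually needed.
```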
 

VirtualLarry

No Lifer
Aug 25, 2001
Thank you for this thread. I knew something wasn't quite right with my G3258 in my GA-H81M-DS2V v1.0 board. I was running the F5 BIOS, which came with the board. At the time I bought it, that was the newest.

It maxed at 1.2v, VRIN @ 1.72 approx (supposed to be 1.8V, I think). Fastest I could get it to run Windows at was 38 multi, although the BIOS said it was at 4.11Ghz with a 38 multi.

I couldn't get it to POST at 40 multi, and couldn't get it to boot Windows 7 64-bit at 39 (BSOD).

So I had it at 38 multi, but doing DC would cause it to crash / BSOD, so I cranked it back down to 36 multi.

Well, I went to Gigabyte.com, and they had a new BIOS for this board (Fix Win10 OC issue). Thankfully, it didn't take away my OC ability, but instead, it greatly enhanced it.

First, I found out that enabling XMP was what was causing the 36 multi to show a 4.11Ghz CPU clock. If I left XMP unconfigured, it showed DRAM at 1400 instead of 1599. (The H81 chipset can't OC RAM past 1400 anyway.)

Second, I was able to adjust the vcore past 1.2v, finally.

I was able to set 42 multi, and it booted Windows 7!

Then I got greedy, and tried 44 multi. It would POST, sort of, but then it BSODed, and then it wouldn't even boot BIOS or let me into setup, so I had to clear CMOS.

So I have it set to 1.3v and 43 multi right now, and I'm doing DC (WCG) on both cores of my G3258.

CPU-Z 1.73 (newest) shows that my CPU is at 4289, 43x, and 1.300v (Edit: It just rebooted, and Waterfox recovered this post when I restarted.) I bumped the vcore to 1.310v in BIOS. (Edit: Darn, crashed again. I dropped the multi to 42x.)
Edit: dropped vcore to 1.25v, still BSODed.

All of my BSODs are of type 0x124. Any suggestions? Just not enough vcore?

Right now, I'm at 42x multi, 1400 RAM, and 1.300v. (Edit: 32x NB.)
 

crashtech

Lifer
Jan 4, 2013
I know that on the Z97X-UD5H, any attempt to set the multiplier over x40 would instantly BSOD unless the ring voltage was set manually. I don't remember that being the case with other CPUs, but with this particular combo it was a repeatable phenomenon. Trouble is that I don't think the B or H series boards are given the ability to adjust Vring, afaik.
 

crashtech

Lifer
Jan 4, 2013
know of fence said: "...the line gradually smoothes out the finer the adjustments become. In fact if you see a bump in your line, that probably means shaving off a couple of millivolts won't hurt stability..."

Yes, in fact it seemed clear that a granularity of 25mV was not fine enough to capture the true voltage the CPU would run at; but going to, say, 10mV increments would have turned an already time consuming process into a near all-nighter.
 

Flapdrol1337

Golden Member
May 21, 2014
Did you guys all do a similar stability test?

I thought I was stable after 20 runs of LinX and 8 hours of Prime, but then the system hardlocked in MechWarrior Online within a couple of minutes.
 

crashtech

Lifer
Jan 4, 2013
I can only speak for myself when I say that extensive stability testing is just not feasible when collecting data on eighteen different multipliers which might each require three test runs. My suggestion would be that a 24/7 overclock should probably use voltage settings closer to those of one multiplier higher; for instance, the setting for x47 on my G3258 is 1.3 Vcore and 1.1 Vring, but my 24/7 setting turned out to be x46 with 1.29 Vcore and 1.08 Vring.
 

know of fence

Senior member
May 28, 2009
VirtualLarry said: "All of my BSODs are of type 0x124. Any suggestions? Just not enough vcore?"

Can you set a fixed Core Input voltage of 1.9 V? I certainly had some trouble getting variable VRIN to work. I also used maximum Load Line Calibration for it, which causes CPU Input to jump to 1.966 V occasionally.
 

VirtualLarry

No Lifer
Aug 25, 2001
crashtech said: "My suggestion would be that a 24/7 overclock should probably use voltage settings closer to those of one multiplier higher..."

What is Vring, exactly? The internal cache ring-bus? How is that different from "NB" or "uncore"?

I have two Vring settings in my BIOS, a voltage setting, and an offset setting. The offset setting is completely greyed out, cannot set it. I bumped the voltage setting (Vring) to 1.080V, and Vcore down to 1.260V.

Interestingly, there must be some LLC in effect by default, because whatever I set for Vcore is spot-on when I check with CPU-Z, even under full load.

Edit: Bummer, it crashed again. Going to stick with 42x, 1.300V Vcore, Auto Vring.
 

crashtech

Lifer
Jan 4, 2013
The ring bus is the core-to-L3-cache bus; it's decoupled from the cores in Haswell and is also known as the uncore.
 

VirtualLarry

No Lifer
Aug 25, 2001
I will say this: for $90 for both CPU + board, this was a heck of a deal for a monster ST web-browser machine.

Edit: Meant to also say that unless SKL has an unlocked or otherwise overclockable low-end part, I doubt SKL will provide as much value at the low end.
 

know of fence

Senior member
May 28, 2009
My first update to the original topic is finished.

Now I can say with certainty that it makes a lot of sense to write down overclocking results in a table, and maybe even plot them in a chart. Testing the full frequency range is also useful, as it allows you to adjust your overclock to any desired frequency later and to predict what voltage further OC'ing will require. Furthermore, it makes sense to fine-tune your voltage to at least a certain degree, rather than rapidly jumping multipliers; it's helpful to have a point of reference.

This whole thing started from an idea to compare stock voltages of the G3258. So far those seem completely arbitrary. My stock Auto voltage (1.041 V) wasn't very high, yet it was still a whole 181 mV above what the CPU needed, adding an estimated 5 W to power consumption and heat. Crashtech's stock voltage was even higher, and others have reported values as high as 1.090 V.
Next I will have to look into negative offset to undervolt the full frequency range down to 800 MHz, as well as test the stability of L3 cache / ring overclocking, and maybe also chart temperatures.
 

VirtualLarry

No Lifer
Aug 25, 2001
Even at 1.300V and 4.2Ghz, it BSODed twice on me, STOP 0x124 again. Temps were hitting 85-86C at worst, and power was 65-70W.

I was able to drop down to 4.0Ghz, 1.200V, and now it has been stable, doing DC on both cores for nearly 24h. Temps 75C, power under 50W.

Why would 4.0 @ 1.200V be totally stable, when 4.2 @ 1.300V wasn't? Could this be a vring issue?
 

sm625

Diamond Member
May 6, 2011
Well, mine only runs at 4.4GHz, but I can tell you one thing: my PortfoliobossX tool runs about 4 times as fast on my G3258 compared to a 4C/8T 3GHz Nehalem. I was absolutely floored by how much faster it runs. It takes about 4 minutes on the Nehalem, and less than one minute on the G3258.
 

know of fence

Senior member
May 28, 2009
Flapdrol1337 said: "Did you guys all do a similar stability test? I thought I was stable after 20 runs of LinX and 8 hours of Prime, but then the system hardlocked in MechWarrior Online within a couple of minutes."

To test the properties of the CPU it's just important to be consistent: first, 10 minutes of Prime95 is run for each data point. Later, the lowest determined value is confirmed in day-to-day testing, which includes running Prime95 or Realbench. I had to bump the voltage for several unstable data points, especially for 4.6 GHz, and have already updated the original graph several times. After a while one gets a sense of the repeating patterns. Generally, if the system locks up after having run Realbench for half an hour, a 5 mV bump will make it uncrashable. For a final setting it would probably be prudent to raise the value by another 10 mV to compensate for unusual circumstances like summer heat, stress, dust buildup and general degradation.
I'm currently testing the full range of multipliers, which means that for every one of the 39 data points from 0.8 to 4.6 GHz the system needs to crash at least once. At low voltages and clocks this becomes more difficult; small-FFT Prime95 has become rather insufficient, so I've been relying on Realbench lately, though it makes the system almost unusable. For this kind of testing a BSOD is a happy occasion, because it means I've hit bottom and can move on. :D

VirtualLarry said: "Even at 1.300V and 4.2Ghz, it BSODed twice on me, STOP 0x124 again. Why would 4.0 @ 1.200V be totally stable, when 4.2 @ 1.300V wasn't? Could this be a vring issue?"

IMO you shouldn't OC the L3 cache; overclocking two values at once makes it impossible to determine the culprit. Test one, then the other. The L3 cache barely has an influence on performance in most tasks anyway.

So now that you have found a stable value, have you tried fine-tuning the voltage? I would lower Vcore to 1.190 and run a stress test for 10 min, lower it to 1.180 and run the test again, and repeat until it crashes, then slowly (in 5 mV steps) increase Vcore again until the system is stable and ready for long-term testing.
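
For what it's worth, here is that fine-tuning loop written out as Python-flavored pseudocode. run_stress_test() is a stand-in for a manual 10-minute Prime95/Realbench run after changing the value in the BIOS, and the 1.175 V "true minimum" it simulates is purely hypothetical.

```python
# Pseudocode for the fine-tuning loop described above: step Vcore down until the
# stress test fails, then walk it back up in 5 mV steps until a run passes again.
# In practice each "run" is a manual BIOS change plus ~10 min of Prime95/Realbench.

def run_stress_test(vcore):
    """Placeholder for a manual stress run; pretends 1.175 V is the true minimum."""
    return vcore >= 1.175

def find_min_stable_vcore(start_v, coarse=0.010, fine=0.005):
    v = start_v
    while run_stress_test(v):        # step down until it finally crashes
        v -= coarse
    while not run_stress_test(v):    # then climb back up in fine steps
        v += fine
    return round(v + 0.010, 3)       # ~10 mV of extra margin for heat, dust, aging

print(find_min_stable_vcore(1.200))  # start from the last known-stable value
```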
 

VirtualLarry

No Lifer
Aug 25, 2001
AFAIK, the "uncore" multiplier is still at 32x, so I don't believe that I'm overclocking the L3 cache. My comment about the vring voltage possibly needing to be increased was in response to a prior comment in this thread about having to increase vring voltage when overclocking over a 40x CPU core multi.
 

crashtech

Lifer
Jan 4, 2013
VirtualLarry said: "My comment about the vring voltage possibly needing to be increased was in response to a prior comment in this thread about having to increase vring voltage when overclocking over a 40x CPU core multi."

Yeah, that was me. The funny thing is that I don't remember needing to do that before, so it might be a firmware thing between BIOS versions, I don't know.
 


Tidekilla115

Member
Feb 28, 2016
I have mine at 4.4 GHz at 1.280 V with a Cooler Master 212 EVO in push/pull config. I did lots of voltage adjusting, 12 hours of Prime95 and 6 runs of Cinebench, so I think I have it pretty stable; the max temp was 71C and the average under load was 66C. I think I could go higher, but I have no need to. This lets me play all my games well, and it made a huge difference with the bottleneck that my R9 380X put on the rest of the system.