C6 = C7 at the core level. Package C6 is the lowest power state with a PCIe GPU; package C7 requires the IGP or switchable graphics. The CPU determines PC6 vs. PC7 automatically. Package C-states are also resolution dependent when on the IGP: very high resolutions and multi-monitor setups may be limited to C3 or C2.
C6/C7 may not be more power efficient because of the extra overhead to enter and leave those C states.
what is the difference between package level C-state and core level C-state?
With IGP connected to a small monitor (1280x1024) via DVI, it seems my G3258 only goes to core-level C6.
Unless C7 doesn't kick in until I put the computer to sleep?
Core C-states are per-core, and different cores can be in different C-states at the same time. The package C-state is determined by all cores together: if three cores are in C3 and one core is in C0, then the package C-state must also be C0.
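The rule above (the package can only go as deep as its shallowest core) can be sketched as a toy function. The function name and the numeric encoding are illustrative only, not the CPU's actual resolution logic:

```python
def package_c_state(core_c_states):
    """Return the deepest package C-state allowed by a set of core C-states.

    Lower numbers mean shallower (more awake) states; C0 is fully active.
    The package is limited by the shallowest (lowest-numbered) core.
    """
    return min(core_c_states)

# Three cores in C3 and one busy core in C0: the package is stuck at C0.
print(package_c_state([3, 3, 3, 0]))  # -> 0

# All cores in C6: the package may enter C6 (if the platform allows it).
print(package_c_state([6, 6, 6, 6]))  # -> 6
```

This also explains the screenshots discussed below: high core C6 residency with 0% package residency just means at least one core (or some other package agent) keeps blocking the deeper package states.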
Thanks guys. The concept makes sense, but I can't wrap my head around what both of our screenshots are showing (I included a picture of ThrottleStop below). If my core C-state is at C6 for 98%+ of the time, how come my package C-state is always C0 (it shows 0% for C2 through C7)? Does that mean it's keeping one core always at C0 and only dropping the other core to C6?

Just for comparison, here is a picture of an overclocked 4700MQ mobile CPU. The individual cores are spending 99% of their time in C7 while the entire CPU package is spending just under 80% of the time in the C6 package C-state.
Do you recommend leaving them all enabled, or would it be better to just enable C6 and leave C1E and C3 disabled? I disabled C1E and EIST because, when overclocking, core voltage is fixed on my Gigabyte motherboard, so only the frequency drops and not the voltage (according to CPU-Z and RealTemp, at least). And I did not see any benefit in just dropping the frequency, as I figure there's some latency in dynamically changing it.

There is a latency and power cost associated with each C-state transition. C1E puts the core into its lowest frequency and voltage state. C3 flushes the L1 and L2 caches, which must then be reloaded directly or indirectly (through misses) from L3 and main memory; that is slower and may require more power overall than simply entering C1E instead. C6 removes core voltage, but the core state is first written to dedicated SRAM. If all cores are requesting C7, then package C6 can potentially flush L3 and power it down.
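The "latency and power cost" trade-off can be illustrated with a back-of-the-envelope break-even calculation: a deeper C-state only saves energy if the core stays idle long enough to amortize the transition overhead. All numbers below are made-up placeholders, not measured values for any real CPU:

```python
def break_even_idle_time_us(transition_energy_uj, p_shallow_mw, p_deep_mw):
    """Minimum idle time (in µs) for a deeper C-state to save energy.

    Each microsecond of residency in the deep state saves the power
    difference between the two states; the transition itself costs a
    fixed energy overhead that must be paid back first.
    """
    power_saving_mw = p_shallow_mw - p_deep_mw
    # 1 mW = 1 nJ/µs, so convert the µJ overhead to nJ before dividing.
    return transition_energy_uj * 1000.0 / power_saving_mw

# Made-up numbers: 2 µJ to enter/exit, 300 mW in C1E vs 50 mW in C6.
t = break_even_idle_time_us(2.0, 300.0, 50.0)
print(f"{t:.0f} µs")  # -> 8 µs; shorter idle periods would waste energy
```

This is why very frequent short wake-ups (e.g. a high timer-interrupt rate) can make the deeper states a net loss, while long idle stretches make them a clear win.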
Do you recommend leaving them all enabled
Yes, I would just keep them all enabled (all C-states and EIST) even if you're overclocking. It's good for your CPU, it lowers your power bill, and it lowers idle temps. I seriously doubt there's a measurable difference between leaving them on or off, so why not reap the benefits.
What I did on my IB was to use only EIST.

Apparently, it increases latency and affects SSD performance. So you'd want to make sure you're getting actual power-saving benefits before enabling each one.
Apparently, it increases latency and affects SSD performance. So you'd want to make sure you're getting actual power-saving benefits before enabling each one.
Core C-states and package C-states are different. You can have all cores in C6 while having no power savings for the rest of the package. For instance, at package C7 the L3 cache is flushed, which will generally cause a performance hit on wake-up.

So if my core C-state is at C6 for 98%+ of the time, how come my package C-state is always C0 (it shows 0% for C2 through C7)?
Not so much in changing the frequency as in changing the voltage: before the frequency can be set higher, the CPU has to wait for the VRM to ramp up the voltage. While this has improved greatly over the generations, using a fixed voltage should alleviate it.

And I did not see any benefit in just dropping the frequency as I figure there's some latency in dynamically changing the frequency.
It's still there, and it's why Intel introduced Dynamic Storage Acceleration (DSA), which dynamically adjusts C-state levels depending on disk load.

I'm pretty sure this isn't an issue with Haswell.
Apparently, it increases latency and affects SSD performance.
Core state and package states are different. You can have all cores at C6 while having no power savings for the rest of the package. For instance if at a C7 package level then L3 cache is flushed. This will generally cause a performance hit when waking up.
Any idea whether it's core C6/C7 states or package C6/C7 states that provide most of the power savings?

Manufacturers can choose which C-states their products will use. Your individual cores are spending over 98% of the time in C6, but for whatever reason a manufacturer might decide to disable some or all of the deeper package C-states. Software on your computer can also interfere with the package C-states. Google Chrome used to have a feature/bug that blocked some of the deeper package C-states; I think that is why Internet Explorer tended to win the power-consumption tests. I have not seen any recent testing of the latest versions of Chrome and IE.
Any idea whether it's core C6/C7 states or package C6/C7 states that provide most of the power savings?
I'm idling at 30 W power draw at the wall right now (G3258), with just an old SSD connected. I thought it would be lower, so I guess that's due to the package C-state staying at C0.
Intel said:Dynamic Storage Accelerator unleashes the performance of your SSDs. It maximizes storage I/O performance by dynamically adjusting system power management policies to deliver up to a 15 percent¹⁴ performance boost compared to default power management.
Intel said:¹⁴Dynamic Storage Accelerator performance is dependent upon several factors including workload, storage configuration, operating system, and CPU C-state transition efficiency. Intel analysis has found that Dynamic Storage Accelerator performance mode, 2 SSD RAID 0, provides up to a 15 percent performance gain as compared to default power management. Test configuration: 3 GHz processor; 2 x 2 GB @ 1333 MHz; RST 12.0.0.1075; OS HDD: Western Digital Black WD2002FAEX 2 TB; Intel® SSD 320 Series; OS tested: RAID 0, two disk; Windows* 7 SP1 build 7601; benchmark software: PCMark* Vantage 1.0.2 patch 1901
IMO not a great difference and most would probably not notice except perhaps in benchmarks.
I think some of the latency issues that were previously measured were because of the sluggish Windows Balanced profile.
IMO not a great difference and most would probably not notice except perhaps in benchmarks.
Pretty much this. Unless you're only running synthetic benchmarks you're likely not going to be able to even measure the difference with all the power saving stuff enabled.
How so?