Discussion: 12700K vs 5900X/3900X/5950X DC benchmarks info, and some build info.


Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,606
14,587
136
How Does Temperature Affect The Performance of Computer Components? (chron.com)



Both CPUs should have the same or very similar sized heatsink with the same amount of airflow (fan with similar CFM) to ensure a fair comparison. I know that Zen 3 may still come out ahead but at least, it should be an apples to apples comparison by keeping as many of the factors as constant as possible.
Well, the 5950X has a BeQuiet cooler, and the 12700F has one that looks and installs like a BeQuiet.
 

StefanR5R

Elite Member
Dec 10, 2016
5,554
7,916
136
Somewhat related to providing good thermal conditions to the test subjects:

For everyday scientific computing, Ryzens should arguably be operated in Eco mode. :-)
Just a small step down in throughput, but improved power efficiency and thermals. (Or so I heard; I don't have Ryzens myself.)

*Maybe* Alder Lake has BIOS options which similarly increase its efficiency in computationally heavy loads. (Desktop Alder Lake, that is. I presume the mobile siblings are already configured for efficiency out of the box, although certainly not with heavy loads in mind.)
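(If somebody wants to quantify the effect of such settings, package power can be logged from Linux while a DC load is running. A minimal sketch, assuming the host exposes RAPL through the powercap sysfs interface; the path below is the usual one for package 0 but varies between platforms, and counter wrap-around is ignored here.)

```python
# Rough package-power check during a load, via the Linux powercap/RAPL sysfs interface.
# Assumption: this file exists on the host (typical on Intel, and on recent kernels for Zen).
import time

ENERGY_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package 0, in microjoules

def read_uj():
    with open(ENERGY_FILE) as f:
        return int(f.read())

def package_watts(interval_s=10.0):
    """Average package power over the interval, in watts."""
    e0, t0 = read_uj(), time.time()
    time.sleep(interval_s)
    e1, t1 = read_uj(), time.time()
    return (e1 - e0) / 1e6 / (t1 - t0)

if __name__ == "__main__":
    print(f"~{package_watts():.1f} W package power")
```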
 
Jul 27, 2020
16,567
10,565
106
For everyday scientific computing, Ryzens should arguably be operated in Eco mode. :-)
Just a small step down in throughput, but improved power efficiency and thermals. (Or so I heard; I don't have Ryzens myself.)
I think AMD has changed its culture from the inside out to favor power efficiency over performance at all costs, to the point where I think that if an engineer suggests something that could increase performance at the expense of a little extra power, he/she is made to stand in the corner with a dunce hat :D

Intel, on the other hand, hired Jim Keller and didn't learn anything from him, and the poor guy had to leave, sighing and shaking his head in utter dismay.
 

StefanR5R

Elite Member
Dec 10, 2016
5,554
7,916
136
OK, a fully loaded 5950X, then a fully loaded 12700F, both at stock.
Ryzen 9 5950X = host ID 1070652
according to post #7, running 31 tasks simultaneously
sample: 200 validated GFN 17Mega results,
from tasks which were sent on March 10 around 06:45 UTC and returned on March 10...12
task durations:
9,037 s on average
10-percentile: 8,642 s, 90-percentile: 9,471 s, CV: 3.6 %
task credits:
481.815 on average
10-percentile: 481.820, 90-percentile: 481.820, CV: 0.0 %

--> 4.61 kPPD per thread, 143 kPPD per host

distribution of the task durations: 5950X.png

Core i7-12700F = host ID 1129353
according to post #7, running 19 tasks simultaneously
sample: 200 validated GFN 17Mega results
from tasks which were sent on March 9 around 01:00 UTC and returned on March 10...12
task durations:
9,689 s on average
10-percentile: 9,322 s, 90-percentile: 10,032 s, CV: 3.3 %
task credits:
481.800 on average
10-percentile: 481.800, 90-percentile: 481.800, CV: 0.0 %

--> 4.30 kPPD per thread, 82 kPPD per host

distribution of the task durations: 12700F.png

There is no bi-modal distribution, and the coefficient of variation (CV) is low too, even lower than on the 5950X. Hence, either each E core performs the same as one thread on a P core in this sample, or the OS's scheduler shifted each task between E and P cores all the time. Or, as a third possibility, an E core performs at least somewhat similarly to a thread of a P core, and the OS's scheduler shifted each task between E and P cores several times, such that all tasks spent similar fractions of their time on each of the two types of cores.

Well, since it took ~2h40m for a task to complete, it is fair to assume that each task spent some time on both types of cores. Hence, those 4.30 kPPD per thread are a mixture of the performance of P core threads and of E cores.

I still don't know whether or not either of the computers was cache-starved, i.e. memory-bottlenecked. Going by the per-host kPPD, and assuming that per-host memory performance was quite similar between the two hosts, memory performance was not a dominating factor in these two samples.
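For reference, the kPPD figures above follow directly from average credit and average task duration. A minimal sketch of the arithmetic, using just the numbers quoted in this post:

```python
# Back-of-the-envelope PPD arithmetic for the two samples above.
# Credit per task and average duration are taken from this post;
# thread counts are the simultaneous task counts from post #7.
SECONDS_PER_DAY = 86_400

def kppd_per_thread(credit_per_task, seconds_per_task):
    return credit_per_task / seconds_per_task * SECONDS_PER_DAY / 1000

for name, credit, seconds, tasks in [
    ("Ryzen 9 5950X",  481.815, 9_037, 31),
    ("Core i7-12700F", 481.800, 9_689, 19),
]:
    per_thread = kppd_per_thread(credit, seconds)
    print(f"{name}: {per_thread:.2f} kPPD/thread, {per_thread * tasks:.0f} kPPD/host")
```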
 
Last edited:
  • Like
Reactions: igor_kavinski
Jul 27, 2020
16,567
10,565
106
Hence, either each E core performs the same as one thread on a P core in this sample, or the OS's scheduler shifted each task between E and P cores all the time. Or, as a third possibility, an E core performs at least somewhat similarly to a thread of a P core, and the OS's scheduler shifted each task between E and P cores several times, such that all tasks spent similar fractions of their time on each of the two types of cores.
That's why I hate schedulers juggling tasks. There's the context-switching overhead from the juggling, and the new core where the task lands may be a lower-performance core. Dumb, dumb, dumb! :mad:

It's like the task runs for a set period of time on the P-core, said core gets a little toasty, the task moves to an E-core, the P-core cools down, the scheduler piles a task onto it again, and so on.
 

StefanR5R

Elite Member
Dec 10, 2016
5,554
7,916
136
I don't know which policy the process scheduler in current Linux kernels pursues. But at least in the past, its policy was to avoid moving processes from one physical core to another, because that would not be a good use of the processor caches.

(May be less of an issue with shared level 3 cache on most CPUs nowadays.)
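(For completeness: if one wants to take migration out of the scheduler's hands entirely, CPU affinity can be pinned from userspace. A minimal sketch; the PID and CPU numbers below are made up.)

```python
# Pin a running process to a fixed set of logical CPUs so the scheduler
# cannot migrate it between cores. PID and CPU numbers are examples only.
import os

pid = 12345    # hypothetical PID of a BOINC worker process
cpus = {2, 3}  # e.g. both hardware threads of one physical core

print("before:", os.sched_getaffinity(pid))
os.sched_setaffinity(pid, cpus)   # Linux-only; needs permission over the target PID
print("after: ", os.sched_getaffinity(pid))
```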
 
Jul 27, 2020
16,567
10,565
106
OP needs to compare results with both CPUs on at least Linux Kernel 5.16. Older schedulers may have considerable performance issues due to process juggling.
 

StefanR5R

Elite Member
Dec 10, 2016
5,554
7,916
136
No. The kernel updates are immaterial to throughput of all-core/all-threads loads. They are relevant to partial loads.
 

StefanR5R

Elite Member
Dec 10, 2016
5,554
7,916
136
The Linux 5.16 related article which you linked in #13 refers to the Phoronix article "Linux Now Faster Than Windows 11 For Intel Core i9 12900K "Alder Lake" With Latest Kernel". This article, like most of Phoronix' test reports, does not discuss which tests are lightly (if not single-) threaded vs. which are highly threaded and scalable. One would have to cross-reference with tests on server CPUs. However, renderer benchmarks scale well to all threads. They are on page 4 of the article.

Here are LWN's reports on the Linux kernel 5.16 merge window: part 1, part 2
The only relevant change is this:
Jonathan Corbet said:
Core kernel
[...]
  • The CPU scheduler has gained an understanding of "clusters", a hardware arrangement where multiple cores share the same L2 cache. The cluster-aware scheduler will take pains to distribute tasks across all clusters in the system to balance the load on caches across the machine.
The change above affects Alder Lake insofar as 4 E cores share one L2 cache, while each P core (hence each pair of threads on a P core) has got its own L2 cache.
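For anyone curious how these clusters look on their own machine, the L2 sharing can be read from sysfs. A minimal sketch, assuming the usual Linux cache topology files are present:

```python
# List which logical CPUs share each L2 cache. On Alder Lake this shows one
# group per P core (its two threads) and one group per E-core cluster (four cores).
import glob

l2_groups = set()
for cache_dir in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cache/index*"):
    with open(cache_dir + "/level") as f:
        if f.read().strip() != "2":      # only level-2 caches
            continue
    with open(cache_dir + "/shared_cpu_list") as f:
        l2_groups.add(f.read().strip())

for group in sorted(l2_groups):
    print("L2 shared by CPUs:", group)
```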

But if you load all logical CPUs in the first place, this change in scheduling policy doesn't matter anymore.

BTW, preemption frequency has always been configurable at compile time of the kernel. I.e., for most Linux users, the distributor chooses this and tends to keep this choice over many releases of a distribution. In several distributions, users can switch between kernel images which are either optimized for responsiveness or for throughput.

PS,
for those who are interested, here is the full list of LWN's reports on Linux' CPU scheduler:
https://lwn.net/Kernel/Index/#Scheduler
And on particular kernel releases:
https://lwn.net/Kernel/Index/#Releases

Edit,
the change in kernel 5.16 is not limited to L2 cache clusters; it works with L3 cache clusters too, and potentially other topological properties. (Sources: kernelnewbies, Phoronix, Kconfig help text)
"Note, this patch isn't a universal win, as spreading isn't necessarily a win, particularly for those workloads which can benefit from packing." (Source: commit message)
 
Last edited:
  • Like
Reactions: igor_kavinski

StefanR5R

Elite Member
Dec 10, 2016
5,554
7,916
136
I read the article and patches. In particular, these kernel changes implement a little kernel driver (actually, an extension to the existing Intel thermal driver) which reads "performance" and "efficiency" ratings of each logical CPU from hardware (not absolute ratings, but relative values to convey an ordering between the logical CPUs of the machine) and exports these data to userspace. A userspace daemon could then be written which works with the exported data, together with some sort of insight into the workload on the computer, and modifies CPU affinities of processes. However, there are no changes to the kernel's own process scheduler in the scope of this patchset.
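(To illustrate the kind of data meant here: ACPI CPPC already exposes a relative per-CPU performance rating through sysfs on many machines, used e.g. for Turbo Boost Max 3.0 preferred-core selection. This is not the interface added by the patchset discussed above, only the same idea; a minimal sketch, assuming the firmware exposes CPPC at all.)

```python
# Print the relative "highest_perf" rating of each logical CPU from ACPI CPPC.
# Prints nothing if the platform does not expose CPPC through sysfs.
import glob
import re

paths = glob.glob("/sys/devices/system/cpu/cpu[0-9]*/acpi_cppc/highest_perf")
for path in sorted(paths, key=lambda p: int(re.search(r"cpu(\d+)", p).group(1))):
    cpu = re.search(r"cpu\d+", path).group(0)
    with open(path) as f:
        print(cpu, f.read().strip())
```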
 
Last edited:
  • Like
Reactions: igor_kavinski

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,606
14,587
136
OK, the new motherboard from the RMA came in and is installed. The system did not know about the onboard network card (Linux), and I neglected to ask how to fix that. I tried the driver manager, and it wanted the USB stick. Well, I put it in and it was seen by the OS, but not by the driver manager. So I thought I would try a re-install. Well, then I messed up and wiped out Windows.
Well, I did get the CAS latency down to 17 from 19, so that's one good thing.

So, I am going to start from scratch, and the machine will be down for a few days. CRAP !!!
 
Jul 27, 2020
16,567
10,565
106
Questions:

Why did you RMA the mobo?

How did you get the CAS latency down to 17? Did the new mobo allow that?
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,606
14,587
136
The problem with the motherboard was NO working network port. One light said it was connected to a port, and the other light said data was transferring, but nothing in Windows or Linux. Then I put in an aftermarket card, and BAM, both worked fine. On the new motherboard, both work fine on the internal port. Win 11 still installing, very slowwwwww.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,606
14,587
136
And while I have to reinstall, I am doing Mint 20, and will update the kernel to the most recent.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,606
14,587
136
OK, what is it about this new Z690 chipset? Mint 20.1 does not recognize the network device, or at least does not install drivers, so I put the aftermarket card back in. How do I fix that?

And as for GB5 under Linux, I could not figure out how to install it; I just had a tar file, which I unzipped, and then what?
 
Jul 27, 2020
16,567
10,565
106
OK, what is it about this new Z690 chipset? Mint 20.1 does not recognize the network device, or at least does not install drivers, so I put the aftermarket card back in. How do I fix that?

And as for GB5 under Linux, I could not figure out how to install it; I just had a tar file, which I unzipped, and then what?
Sorry. I'm almost clueless in Linux :blush:
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,606
14,587
136
Edit: forget all this and look below
 
Last edited:

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,606
14,587
136
OK, rebooted, then had to revert to the nouveau drivers. And now I get this. Now installing BOINC....

Geekbench 5:
Single-Core Score: 1923
Multi-Core Score: 13138

Running the 5.13 Linux kernel.

Edit: @StefanR5R, if you want to compare times, anything started after 17:25 PST will have the latest Linux kernel and latest Mint version.

 
Last edited:

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,606
14,587
136
Update: my first visual impression, not confirmed by any stats, is that with the new version of Linux plus the updated kernel, it might be beating the 5950X on an average per-core basis, which it was NOT doing in the last eval. Stay tuned.