Question: Intel 12th to 13th generation performance comparison


GunsMadeAmericaFree

Golden Member
Jan 23, 2007
[Attached image: Intel13thGenRefresh.jpg]


I thought this was an interesting read - benchmark comparisons between Intel 12th generation & 13th generation:

Article with details

That's an average performance increase of 47% from one generation to the next. I wonder if AMD will have a similar increase?
 

TheELF

Diamond Member
Dec 22, 2012
but, populating them does force a sharing of resources that slows down the first thread.
No it doesn't. If the first thread is important and speed-sensitive, it will be running at a higher priority and won't be slowed down by a second thread running on the same core; the second thread will only get the leftovers.
 

Exist50

Platinum Member
Aug 18, 2016
No it doesn't. If the first thread is important and speed-sensitive, it will be running at a higher priority and won't be slowed down by a second thread running on the same core; the second thread will only get the leftovers.
Neither Intel's nor AMD's SMT implementation features QoS. And as the diagram I linked clearly showed, there are resources that are statically partitioned, meaning they are divided equally between the two threads.
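
A quick sanity check of that point (a minimal sketch, Linux-only, and it assumes logical CPUs 0 and 1 are SMT siblings on your machine; verify against /sys/devices/system/cpu/cpu0/topology/thread_siblings_list): even a nice-19 background process pinned to the sibling slows the "important" thread, because nice levels only decide which thread the OS schedules onto a logical CPU, not how the core's shared resources get split.

import os
import time
from multiprocessing import Process

def spin(cpu, seconds, niceness):
    # "background" work: pinned to one logical CPU at low priority
    os.sched_setaffinity(0, {cpu})
    os.nice(niceness)
    end = time.time() + seconds
    while time.time() < end:
        sum(i * i for i in range(10_000))

def timed_run(cpu, iterations):
    # "important" work: pinned to the sibling logical CPU
    os.sched_setaffinity(0, {cpu})
    start = time.perf_counter()
    for _ in range(iterations):
        sum(i * i for i in range(10_000))
    return time.perf_counter() - start

if __name__ == "__main__":
    alone = timed_run(0, 2_000)                  # baseline: core to itself
    bg = Process(target=spin, args=(1, 60, 19))  # nice-19 spinner on the sibling
    bg.start()
    time.sleep(1)                                # let the spinner settle
    shared = timed_run(0, 2_000)
    bg.terminate()
    bg.join()
    print(f"alone:  {alone:.2f}s")
    print(f"shared: {shared:.2f}s ({shared / alone - 1:+.0%})")

On an SMT core the shared run should come out measurably slower regardless of the nice level, which is the point: priority is a scheduler concept, while the statically partitioned structures are split the moment the sibling thread is active.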
 

Starjack

Member
Apr 10, 2016
1) The majority of games don't need the 16 threads that the 12600K can process. So, having the 4 E cores just doesn't have a chance to help most games. I'm not at all surprised the results were within 1% of each other in that video.

2) The 12600K is one of the ugly-stepchild processors with just 4 E cores. I would avoid it. While @Kocicak has data above showing a ~25% performance boost with the 4 E cores, for not much more money you could get 8 P cores (the 12700F is only $25 more). 8/6 = 1.33, so you should expect nearly a 33% boost in that situation just from having more P cores (plus you'd get E cores too, so the final performance would be maybe ~50% more). Finally, there is a problem with the way Intel chose to schedule background software: it puts all of that work onto the E cores. So if you do want to run something in the background, you are making your few E cores do all the work. There just aren't enough E cores to make that a good use of the processor. The 13600K and 13600KF solve that latter problem, with enough E cores to get the background jobs done in a reasonable amount of time.

Since you use video reviews, here is one showing the 13600KF dominating the 12600K for just $5 more:

Even with my current laptop with the Core i3-1215U, there's a little more utilisation on the 2 P-cores than the 4 E-cores, whether I'm playing a game or not. I'm eager for Intel to release the 13th Gen i3s for laptops because I'm curious to see how much of a performance improvement they would bring, depending on the newer architecture and how many P/E cores they put in those chips. I'm also hoping to see AMD's offerings, especially if I want to upgrade in the future. If AMD does its own hybrid design, are we looking at a Zen+/optimized-Bulldozer hybrid for their future chips? Some or most of us may not be fans of the Bulldozer architecture, but I can't remember if AMD ever created an energy-efficient core architecture that rivals Gracemont.
 

moinmoin

Diamond Member
Jun 1, 2017
If AMD does its own hybrid design, are we looking at a Zen+/optimized-Bulldozer hybrid for their future chips? Some or most of us may not be fans of the Bulldozer architecture, but I can't remember if AMD ever created an energy-efficient core architecture that rivals Gracemont.
No, we are just looking at variants of Zen cores then.

If the rumors about PHX2/Phoenix 2 are true, that chip would use a hybrid design combining stock Zen 4 cores with denser Zen 4c cores, so the focus would be area efficiency rather than energy efficiency, which is how Intel currently uses its E cores as well.
 

scannall

Golden Member
Jan 1, 2012
I think a lot of us don't remember when HT was first introduced. At the time you had 1 core. So adding a second thread was a BIG deal in usability and feel. With so many cores now, for the most part it really doesn't matter much anymore outside of edge cases. Most people wouldn't notice if it was taken away, and it would reduce the attack surface for malware.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
I think a lot of us don't remember when HT was first introduced. At the time you had 1 core. So adding a second thread was a BIG deal in usability and feel. With so many cores now, for the most part it really doesn't matter much anymore outside of edge cases. Most people wouldn't notice if it was taken away, and it would reduce the attack surface for malware.
Not speaking for my specific case, but there are quite a few people who do encoding and such, and they would notice, as would the server community.
 

Carfax83

Diamond Member
Nov 1, 2010
I think a lot of us don't remember when HT was first introduced. At the time you had 1 core. So adding a second thread was a BIG deal in usability and feel. With so many cores now, for the most part it really doesn't matter much anymore outside of edge cases. Most people wouldn't notice if it was taken away, and it would reduce the attack surface for malware

I'm old enough to remember when HT was first introduced on the PIV, and yeah, the performance boost back then was much larger for obvious reasons. Efficiency cores have really impacted the performance characteristics of SMT, as I alluded to on the previous page. When I tested HT on and off in encoding workloads on my 6900K, I got something like a 20%+ difference if I remember correctly. With my 3930K and 5930K it was even higher. With my 13900KF, I can't get into double-digit percentages, and it's all because of the efficiency cores that are now absorbing most of the additional TLP.
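
For anyone wanting to reproduce that kind of HT-on/off comparison without a BIOS toggle, here is a rough sketch (Linux-only; the even/odd sibling layout is an assumption for illustration, so check lscpu -e for your actual topology). It times a parallel workload restricted to one thread per physical core, then again using every logical CPU:

import os
import time
from multiprocessing import Pool

def work(_):
    # stand-in for one slice of an encode/render job: pure ALU grinding
    return sum(i * i for i in range(2_000_000))

def run(cpus):
    os.sched_setaffinity(0, cpus)  # worker processes inherit this mask
    with Pool(len(cpus)) as pool:
        start = time.perf_counter()
        pool.map(work, range(4 * len(cpus)))
        return time.perf_counter() - start

if __name__ == "__main__":
    all_cpus = sorted(os.sched_getaffinity(0))
    one_per_core = set(all_cpus[::2])  # assumed sibling layout, see lscpu -e
    t_one = run(one_per_core)
    t_all = run(set(all_cpus))
    print(f"one thread per core: {t_one:.2f}s")
    print(f"all logical CPUs:    {t_all:.2f}s (HT gain {t_one / t_all - 1:+.0%})")

On a P-core-only chip the second run should land well ahead of the first; on a hybrid part the gap shrinks, since the E-cores soak up much of the extra TLP before SMT gets a chance to.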

That said, it doesn't mean the technology itself no longer has any merit. Desktop applications typically lack enough TLP to exploit this many threads, but as @Markfw alluded to, other platforms will gobble up as many threads as you can throw at them. And because both Intel and AMD design their microarchitectures to perform across different platform sectors, including desktop, mobile, enterprise, and HPC, I doubt HT/SMT will be going away.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
I'm old enough to remember when HT was first introduced on the PIV, and yeah, the performance boost back then was much larger for obvious reasons. Efficiency cores have really impacted the performance characteristics of SMT, as I alluded to on the previous page. When I tested HT on and off in encoding workloads on my 6900K, I got something like a 20%+ difference if I remember correctly. With my 3930K and 5930K it was even higher. With my 13900KF, I can't get into double-digit percentages, and it's all because of the efficiency cores that are now absorbing most of the additional TLP.

That said, it doesn't mean the technology itself no longer has any merit. Desktop applications typically lack enough TLP to exploit this many threads, but as @Markfw alluded to, other platforms will gobble up as many threads as you can throw at them. And because both Intel and AMD design their microarchitectures to perform across different platform sectors, including desktop, mobile, enterprise, and HPC, I doubt HT/SMT will be going away.
Exactly. SOME desktop people will easily see an advantage, as I alluded to, but HEDT, server, workstation, and other users will see it too. The bottom line is that (most likely) more than 50% of all CPUs across all markets will see the advantage; even 70% of the desktop world would still be a minority of the whole. (a guess)
 

Exist50

Platinum Member
Aug 18, 2016
Desktop applications typically lack enough TLP to exploit this many threads, but as @Markfw alluded to, other platforms will gobble up as many threads as you can throw at them.
Well, the best platform for that kind of embarrassingly parallel workload would be no P cores at all, just a sea of E-cores. So the current configurations for both server and client clearly don't have those workloads as a major priority.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
Well, the best platform for that kind of embarrassingly parallel workload would be no P cores at all, just a sea of E-cores. So the current configurations for both server and client clearly don't have those workloads as a major priority.
Excuse me... I know that in the server world AVX-512 has value, and other tasks can also make better use of wide, performant cores than of E-cores.

Not to mention that in my world, E-cores are useless.
 

Exist50

Platinum Member
Aug 18, 2016
I know that in the server world AVX-512 has value, and other tasks can also make better use of wide, performant cores than of E-cores.
Depends entirely on the task. We were discussing rendering, which doesn't seem to particularly care about AVX-512, but loves the extra throughput from the E-cores.
Not to mention that in my world, E-cores are useless.
We've all seen how many different claims you've made about your particular "workload". And if a task is embarrassingly parallel, it won't care for wide cores.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
Depends entirely on the task. We were discussing rendering, which doesn't seem to particularly care about AVX-512, but loves the extra throughput from the E-cores.

We've all seen how many different claims you've made about your particular "workload". And if a task is embarrassingly parallel, it won't care for wide cores.
I would refute your claims, but it's useless. Right or wrong, you will keep replying until it appears you are right, because the other person gives up and stops replying.

Have fun living in your fantasy world.

Oh, and just a word about my world: many of us rent space in the cloud, since the providers' server-grade video cards and CPUs are superior to what's available in the consumer world.
 

Exist50

Platinum Member
Aug 18, 2016
I would refute your claims, but it's useless. Right or wrong, you will keep replying until it appears you are right, because the other person gives up and stops replying.
There are posts in this very thread to support my claim. Whether you're willing to acknowledge these basic facts is not my concern.
Oh, and just a word about my world: many of us rent space in the cloud, since the providers' server-grade video cards and CPUs are superior to what's available in the consumer world.
"Your world" is awfully accommodating to whatever agenda you're pushing at a given time. Maybe one day we'll actually see data to back up any of those claims.

And it's amusing to reference the cloud when smaller cores are gaining traction there too.