
Massive security hole in CPUs incoming? Official Meltdown/Spectre Discussion Thread

Page 57

psolord

Golden Member
Sep 16, 2009
1,318
371
136
Now let's move on to the quicker Core i5-8600K with the 970, which is essentially a preview, since I will do more testing later with the 1070 installed in order to better highlight any differences in performance.


Again the SSD tests come first, with the system also at stock. Please note that all post-KB4056892 benchmarks on the 8600K were done with the 1.40 ASRock Z370 Extreme4 BIOS, which includes microcode fixes for the CPU regarding the Spectre and Meltdown vulnerabilities. The system is therefore more thoroughly patched than the i7-860.




The Ashampoo CPU check report verifies that the system is OK.







I have two SSDs: a Samsung 850 EVO 500GB and a SanDisk Extreme Pro 240GB. Needless to say, the screenshots with the lower performance are from the patched system; I captured the Windows version on those.















Unfortunately, there is a substantial and directly measurable performance drop on both SSDs, reaching a 1/3 loss at the smaller file sizes. At the bigger file sizes things are much better, of course. I made a mistake and used different ATTO versions for the Samsung and SanDisk drives, but the performance drop was recorded correctly for both anyway.


As for pure CPU tests, I didn't do much: just CPU-Z and Cinebench.









Cinebench didn't show a significant difference, but CPU-Z showed a drop in the multi-core result. I then realized that I had used version 1.81 for the pre-patch test and version 1.82 for the post-patch test, so I'm not sure whether that affected things. Still, I trust Cinebench more, since it's a much heavier test.


OK then, gaming benchmark time: i5-8600K @ 5GHz, GTX 970 @ 1.5GHz.


The pattern is the same as above.


Assassin's Creed Origins 1920X1080 Ultra









Gears of War 4 1920X1080 Ultra









World of Tanks Encore 1920X1080 Ultra






Grand Theft Auto V 1920X1080 Very High






I left Ashes of the Singularity for the end because I only have post-patch measurements, but there's a reason I'm including those too.
















Again, as you can see, there is a measurable drop in Gears of War 4. It's probable that the difference will be larger with the 1070. That's not the only point, however.



Let's compare the numbers above. Take GTA V on the i7-860 + 1070, for example: it has the same benchmark result of 75fps average as the 8600K with the much slower 970, but the 8600K's 0.1% and 1% lows are considerably better, and you can feel it while playing. This is a direct result of how CPU-limited this game is. For reference, the 1070 with the 8600K gave me a 115fps average, but that's a discussion for another time.
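For anyone curious how those 0.1%/1% low figures are derived from a benchmark run, here is a minimal sketch: the "1% low" is the average frame rate over the worst 1% of frames. The frame-time data below is hypothetical, not the actual capture.

```python
def percentile_low(frame_times_ms, pct):
    """Average fps over the worst pct% of frames (the '1% low' style metric)."""
    worst = sorted(frame_times_ms, reverse=True)   # slowest frames first
    n = max(1, int(len(worst) * pct / 100))        # frame count the worst pct% covers
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms                         # frame time back to fps

# Hypothetical capture: ~75fps frames with a handful of 40 ms stutters.
frames = [13.3] * 990 + [40.0] * 10
print(round(1000 * len(frames) / sum(frames), 1))  # average fps -> 73.7
print(round(percentile_low(frames, 1), 1))         # 1% low -> 25.0
```

Two systems can post the same average while the one with the faster CPU shows much better lows, which is exactly the GTA V situation above.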



Next, take a look at the post-patch Gears of War 4 CPU scores for both systems: 364fps for the 8600K, 199fps for the 860.



And of course we cannot ignore the king of CPU limits, Ashes of the Singularity. With the Vulkan renderer, which is the fastest for both systems, the post-patch average CPU framerate was 152fps for the 8600K and 74fps for the i7-860.



Why am I saying all this, and why am I comparing first- and eighth-generation CPUs? Because, as you can see even from these few tests, the i5-8600K continues to perform like an 8th-gen CPU. It did not suddenly turn into a Lynnfield; likewise, the Lynnfield stayed a Lynnfield and did not become a Yorkfield.



I generally observe a severe doom-and-gloom attitude, and the notion that our systems are only fit for the trash seems to have taken hold in some users' minds. That is not what I am seeing, however, speaking always from a home-user perspective.



I am not trying to diminish the importance of the issue. It is very serious, and it is sad that it broke out the way it did. However, I do see the affected parties taking it seriously; I mean, ASRock brought out the BIOS in what, less than a week?



Now, regarding the professional markets, I can understand that things will be much worse, especially with the very real I/O performance degradation. Even some home users with fast SSDs will be rightfully annoyed. In these situations I believe some form of compensation is due, or perhaps hefty discounts on future products. Heck, I know I would be furious if I had seen severe degradation in gaming graphics, which is my main focus.


Of course, testing will continue. I already have a good database of pre-patch 1070 + 8600K gaming benchmarks, which I will compare against some select post-patch runs. If I find anything odd, I will post again.



For reference, here are my pre-patch benchmarking videos, from which the above pre-patch results came. I did not record the post-patch runs.


Take care.


Assassin's Creed Origins 1920X1080 ultra GTX 1070 @2Ghz CORE i7-860 @4GHz


Tom Clancy's Rainbow Six Siege 1920X1080 Ultra GTX 1070 @2Ghz CORE i7-860 @4GHz


Forza Motorsport 7 1920X1080 Ultra 4xAA GTX 1070 @2Ghz CORE i7-860 @4GHz


Ashes of the Singularity 1920X1080 High DX11+DX12+Vulkan GTX 1070 @2Ghz CORE i7-860 @4GHz


Gears of War 4 1920X1080 Ultra GTX 1070 @2Ghz CORE i7-860 @4GHz


Gears of War Ultimate 1920X1080 maxed GTX 1070 @2Ghz CORE i7-860 @4GHz


Prey 1920X1080 very high GTX 1070 @2Ghz CORE i7-860 @4GHz


Total War Warhammer 2 1920X1080 Ultra GTX 1070 @2Ghz CORE i7-860 @4GHz


Unigine Valley 1920X1080 Extreme HD GTX 1070 @2Ghz CORE i7-860 @4GHz


Shadow of War 1920X1080 Ultra+V.High GTX 1070 @2Ghz CORE i7-860 @4GHz


World of Tanks Encore 1920X1080 Ultra GTX 1070 @2Ghz CORE i7-860 @4GHz


The Evil Within 2 1920X1080 Ultra GTX 1070 @2Ghz CORE i7-860 @4GHz


Road Redemption 1920X1080 fantastic GTX 1070 @2Ghz CORE i7-860 @4GHz


Dirt 4 1920X1080 4xAA Ultra GTX 1070 @2Ghz CORE i7-860 @4GHz


F1 2017 1920X1080 ultra + high GTX 1070 @2Ghz CORE i7-860 @4GHz


Dead Rising 4 1920X1080 V.High GTX 1070 @2Ghz CORE i7-860 @4GHz


ELEX 1920X1080 maxed GTX 1070 @2Ghz CORE i7-860 @4GHz


Project Cars 2 1920X1080 ultra GTX 1070 @2Ghz CORE i7-860 @4GHz


Grand Theft Auto V 1920X1080 V.High GTX 1070 @2Ghz CORE i7-860 @4GHz


i5-8600k + 970


World of Tanks Encore 1920x1080 Ultra GTX 970 @1.5Ghz Core i5-8600k @5GHz


Grand Theft Auto V 1920x1080 V.High outdoors GTX 970 @1.5Ghz Core i5-8600k @5GHz


Gears of War 4 1920x1080 Ultra GTX 970 @1.5Ghz Core i5-8600k @5GHz


Assassin's Creed Origins 1920x1080 Ultra GTX 970 @1.5Ghz Core i5-8600k @5GHz
 

epsilon84

Senior member
Aug 29, 2010
996
704
136
Thanks! Really appreciate the effort you put into this.
 

IEC

Elite Member
Super Moderator
Jun 10, 2004
13,954
3,637
136
CDM benchmarks with the new 0x84 microcode at stock settings, 8700K, as promised.

P1.30 UEFI (no MC patch):
Sequential Read (Q= 32,T= 1) : 1967.230 MB/s
Sequential Write (Q= 32,T= 1) : 1326.054 MB/s
Random Read 4KiB (Q= 8,T= 8) : 1169.721 MB/s [ 285576.4 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 1034.159 MB/s [ 252480.2 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 514.705 MB/s [ 125660.4 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 687.330 MB/s [ 167805.2 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 43.424 MB/s [ 10601.6 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 156.527 MB/s [ 38214.6 IOPS]
P1.60 UEFI (0x84 MC patch):
Sequential Read (Q= 32,T= 1) : 1947.143 MB/s
Sequential Write (Q= 32,T= 1) : 1361.696 MB/s
Random Read 4KiB (Q= 8,T= 8) : 1047.426 MB/s [ 255719.2 IOPS] -10.46% IOPS
Random Write 4KiB (Q= 8,T= 8) : 1087.049 MB/s [ 265392.8 IOPS] +5.11% IOPS
Random Read 4KiB (Q= 32,T= 1) : 311.168 MB/s [ 75968.8 IOPS] -39.54% IOPS
Random Write 4KiB (Q= 32,T= 1) : 386.990 MB/s [ 94480.0 IOPS] -43.70% IOPS
Random Read 4KiB (Q= 1,T= 1) : 40.586 MB/s [ 9908.7 IOPS] -6.54% IOPS
Random Write 4KiB (Q= 1,T= 1) : 120.764 MB/s [ 29483.4 IOPS] -22.85% IOPS
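The percentage deltas above are simply the relative change in IOPS between the two UEFI runs; a quick sketch to reproduce them:

```python
def iops_delta(before, after):
    """Percentage change in IOPS from the unpatched to the patched run."""
    return round((after - before) / before * 100, 2)

# Random reads from the P1.30 vs P1.60 figures above.
print(iops_delta(285576.4, 255719.2))  # QD8/T8 read  -> -10.46
print(iops_delta(125660.4, 75968.8))   # QD32/T1 read -> -39.54
```

The pattern is clear: the deeper QD32 queues take a far larger hit than the QD1 cases.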
 

krumme

Diamond Member
Oct 9, 2009
5,899
1,525
136
CDM benchmarks with the new 0x84 microcode at stock settings, 8700K, as promised.

P1.30 UEFI (no MC patch):


P1.60 UEFI (0x84 MC patch):
Thanks. It's a serious toll on the all-important 4K IOPS, imo. :(
Makes me wary of whether going 8700K is the right way vs. the coming 2700X. Man, I am a bit torn.
 

krumme

Diamond Member
Oct 9, 2009
5,899
1,525
136
Q depth of 32 taking the hit, which is not bad for home users, I think.
Yeah, that's right, but a 10% hit is also bad imo, as it's one of the few things you can actually feel. It's not like there is no alternative today.

And btw, 40% for professional use is not just bad but a total meltdown. Ugly stuff. It's literally unusable if you have those queue-depth loads.

Hmm. Reminds me I should take an updated tour of our server cost/benefits. Squeeze.
 

LTC8K6

Lifer
Mar 10, 2004
28,523
1,570
126
Yeah, that's right, but a 10% hit is also bad imo, as it's one of the few things you can actually feel. It's not like there is no alternative today.

And btw, 40% for professional use is not just bad but a total meltdown. Ugly stuff. It's literally unusable if you have those queue-depth loads.

Hmm. Reminds me I should take an updated tour of our server cost/benefits. Squeeze.
Is it possible that the server patches are the opposite? Set up so that the higher Q depths take less of a hit?

I suppose it's also possible that patches may be improved so the hit is not as bad.
 

krumme

Diamond Member
Oct 9, 2009
5,899
1,525
136
Is it possible that the server patches are the opposite? Set up so that the higher Q depths take less of a hit?

I suppose it's also possible that patches may be improved so the hit is not as bad.
I was thinking the same.
I would assume that as they learn more about it, they can optimize the mitigations.
Don't know. It might be cheaper just to upgrade CPU and motherboard, for all involved; hardware cost is only a fraction.
When I look at our server cost/benefit, what I value is safety foremost, then support competence and communication skills towards the devs, then scalability, then performance/cost, where e.g. IOPS plays a good part. Looking at the overall cost, hardware is just a minor part, and a lot of time is saved simply by moving the stuff to a new motherboard/CPU in a year's time.

The bad thing is the unpredictability about performance we now face: you have to spec for a higher level. It's a mess, and exactly where Intel was strong vs. the Epyc solution: predictability.
Man, I would rather not spend my time on such a thing.
 

formulav8

Diamond Member
Sep 18, 2000
6,998
521
126
How Spectre And Meltdown Mitigation Hits Xeon Performance - As tested by Intel

Here is the gist from the article. I think I got it right, but I'm very tired.

WordPress using HHVM lost about 9% or so on Skylake/Broadwell/Haswell.

FIO using 64 KByte block sizes, with dual NVMe drives hitting a single core, lost around 27% on Haswell, 30% on Broadwell, and nothing on Skylake.

Using 4 KByte block sizes, Skylake lost 30%, Haswell 60%, and Broadwell 59%.

Intel then reran FIO with Retpoline mitigation: at 64 KByte there was no change on Skylake (still no loss), a 2% loss on Broadwell, and 1% on Haswell. The 4 KByte test showed an 18% loss on Skylake, 22% on Broadwell, and 20% on Haswell.

Here’s the thing: While the Linux community seems to be rallying around Retpoline as one of the mitigation methods for such heavy I/O workloads, and while technically the Retpoline changes are very simple, the validation and testing process for these kinds of changes can add a lot of time that enterprises will not be thrilled about.
SQL Server without logging was about a 4% performance loss.

This is not necessarily representative of the real world because end users turn on various amounts of logging, depending on how they want to keep track of performance of the database or do tuning on it. But in general, the more logging you do with a database, the more the performance hit you will see with the Spectre and Meltdown mitigations. This is just because of the writing of that database logging information to storage (disk or flash, it doesn’t matter in terms of CPU hit) forces a user-kernel boundary crossing in the memory space. We did not see any performance figures showing this logging hit, but presume it looks something like FIO for that portion of the overall database workload.
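The quoted logging point comes down to syscall frequency: every log write is a user-kernel boundary crossing, and the Meltdown mitigations (KPTI) make exactly that transition more expensive. An illustrative micro-benchmark of per-syscall cost (POSIX assumed; absolute numbers vary by machine and mitigation state, so only before/after comparisons on the same box are meaningful):

```python
import os
import time

def avg_write_cost_us(n=100_000, size=4096):
    """Average cost of one small write() syscall, in microseconds."""
    buf = b"\0" * size
    fd = os.open(os.devnull, os.O_WRONLY)  # /dev/null: syscall overhead, no real I/O
    start = time.perf_counter()
    for _ in range(n):
        os.write(fd, buf)                  # one user-kernel transition per call
    elapsed = time.perf_counter() - start
    os.close(fd)
    return elapsed / n * 1e6

print(f"{avg_write_cost_us():.2f} us per write()")
```

Running this with mitigations on vs. off (e.g. via kernel boot parameters) would show the per-crossing overhead directly, which is why write-heavy database logging amplifies the hit.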
 

dark zero

Platinum Member
Jun 2, 2015
2,543
100
106
This looks as expected from non-gaming benchmarks. Might not bother many here who are only concerned in gaming, but provides a pretty bleak picture of the intel CPU bugs/fixes on some other workloads.
Indeed, for professional use the impact is very noticeable.
 

moinmoin

Platinum Member
Jun 1, 2017
2,234
2,660
106
The Meltdown fix for Windows 7 was worse than Meltdown itself.
Did you think Meltdown was bad? Unprivileged applications being able to read kernel memory at speeds possibly as high as megabytes per second was not a good thing.

Meet the Windows 7 Meltdown patch from January. It stopped Meltdown but opened up a vulnerability way worse ... It allowed any process to read the complete memory contents at gigabytes per second, oh - it was possible to write to arbitrary memory as well.

No fancy exploits were needed. Windows 7 already did the hard work of mapping in the required memory into every running process. Exploitation was just a matter of read and write to already mapped in-process virtual memory. No fancy APIs or syscalls required - just standard read and write!
Only Windows 7 x64 systems patched with the 2018-01 or 2018-02 patches are vulnerable. If your system isn't patched since December 2017 or if it's patched with the 2018-03 patches or later it will be secure.
 

nickmania

Member
Aug 11, 2016
41
11
51
For those with old computers: I am using a C2Q Q6600, and I have turned off the Windows 10 patch because the performance loss was very noticeable.
 

PottedMeat

Lifer
Apr 17, 2002
12,366
470
126
Yay for me, I guess: Dell updated the OptiPlex 980 (1st-gen Core i3/i5/i7) series last week.

Spectre & Meltdown Vulnerability
and Performance Status

System is Meltdown protected: YES
System is Spectre protected: YES
Performance: SLOWER
CPUID: 106E5
Not feeling anything different, really.
 
