Massive security hole in CPUs incoming? Official Meltdown/Spectre Discussion Thread


IndyColtsFan

Lifer
Sep 22, 2007
33,656
687
126
Yes, I want to see Intel (and AMD) start re-making all the old chips again...and shipping them out to all the customers who have "faulty" chips.

I wonder who'd go bankrupt first?

I do think they should offer something to those of us who bought CPUs after they were informed. They knew damn well that Coffee Lake was affected and at a minimum, should’ve delayed the launch and warned people they could be at risk before people spent their money.

Of course we all know what will happen - lawyers will pocket millions and the rest of us will get paltry checks or coupons for paltry amounts which expire quickly.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
I do think they should offer something to those of us who bought CPUs after they were informed. They knew damn well that Coffee Lake was affected and at a minimum, should’ve delayed the launch and warned people they could be at risk before people spent their money.

Of course we all know what will happen - lawyers will pocket millions and the rest of us will get paltry checks or coupons for paltry amounts which expire quickly.
You'll take your microcode update and like it mister!

:D
 

psolord

Golden Member
Sep 16, 2009
1,913
1,192
136
Yes, I want to see Intel (and AMD) start re-making all the old chips again...and shipping them out to all the customers who have "faulty" chips.

I wonder who'd go bankrupt first?

Yes and while we are at it, they should make them all 14nm++.

I'd love to see my i7-860 and 2500k at 14nm. xD

j/k

Seriously though, what's taking so long with the update? ASRock posted new BIOSes with the first microcode fix, then removed them without a peep, and now nothing. WTF?
 

dahorns

Senior member
Sep 13, 2013
550
83
91
I do think they should offer something to those of us who bought CPUs after they were informed. They knew damn well that Coffee Lake was affected and at a minimum, should’ve delayed the launch and warned people they could be at risk before people spent their money.

Of course we all know what will happen - lawyers will pocket millions and the rest of us will get paltry checks or coupons for paltry amounts which expire quickly.

Is 20 bucks paltry for a ~$300 processor? It's roughly a 6% return for what costs the typical consumer some 6% in performance. That seems reasonable to me. I think that would be a decent outcome for the class action.
 

Exterous

Super Moderator
Jun 20, 2006
20,368
3,444
126
Intel now opts to let their customers test the (Skylake only) microcode for them.
We also continue to release beta microcode updates so that customers and partners have the opportunity to conduct extensive testing before we move them into production.
https://newsroom.intel.com/news/security-issue-update-progress-continues-firmware-updates/

Isn't that completely backward? Shouldn't Intel conduct extensive testing before customers move them into production?

It's totally backwards, but I'm guessing everyone is scrambling to figure out fixes, so the usual QA has gone out the window. There have been botched patches from Intel, Microsoft, VMware, etc.
 

moinmoin

Diamond Member
Jun 1, 2017
4,944
7,656
136
Some Linux guy has been analyzing the performance regressions from the Meltdown and Spectre patches, looks like interesting stuff!

http://www.brendangregg.com/blog/2018-02-09/kpti-kaiser-meltdown-performance.html
"The KPTI patches to mitigate Meltdown can incur massive overhead, anything from 1% to over 800%. Where you are on that spectrum depends on your syscall and page fault rates, due to the extra CPU cycle overheads, and your memory working set size, due to TLB flushing on syscalls and context switches."
Heh.

The article is a very good read. It seems the Meltdown performance losses depend on working set size (the RAM actually accessed) and can be mostly mitigated using PCID (tough luck for older Intel chips) and especially huge pages.
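If you want to check whether a Linux box has the hardware features that soften the blow, here's a rough C sketch (just an illustration, not something from the article) that looks for the pcid/invpcid CPU flags in /proc/cpuinfo and prints the transparent huge page setting:

```c
/* Rough check for PCID/INVPCID support and the transparent huge page setting
 * on Linux. Illustrative only; paths are the standard Linux ones and error
 * handling is deliberately thin. */
#include <stdio.h>
#include <string.h>

static int has_flag(const char *flags, const char *name)
{
    char needle[64];
    /* CPU flags are space-separated tokens, so pad with spaces to match whole words */
    snprintf(needle, sizeof needle, " %s ", name);
    return strstr(flags, needle) != NULL;
}

int main(void)
{
    char line[8192];
    FILE *f = fopen("/proc/cpuinfo", "r");
    if (!f) { perror("/proc/cpuinfo"); return 1; }

    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "flags", 5) == 0) {
            char *nl = strchr(line, '\n');
            if (nl) *nl = ' ';                      /* let the last flag match too */
            printf("pcid:    %s\n", has_flag(line, "pcid")    ? "yes" : "no");
            printf("invpcid: %s\n", has_flag(line, "invpcid") ? "yes" : "no");
            break;
        }
    }
    fclose(f);

    f = fopen("/sys/kernel/mm/transparent_hugepage/enabled", "r");
    if (f) {
        if (fgets(line, sizeof line, f))
            printf("transparent hugepages: %s", line);
        fclose(f);
    }
    return 0;
}
```

If pcid shows up, the kernel can avoid full TLB flushes on every kernel/user transition under KPTI, which is a large part of the overhead the article measures.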
 

naukkis

Senior member
Jun 5, 2002
705
576
136
Intel is trying to fix Spectre with microcode updates, but what about Meltdown? KPTI isolates kernel memory, but isn't usermode memory still accessible the simple Meltdown way? Basically, just read any address you want and use a side-channel attack to leak that address's data.

So are Intel CPUs basically still unable to stop sandboxed usermode programs from leaking the whole parent program's memory? Web browsers got a "fix" in the form of less accurate timers to make side-channel attacks harder, but that kind of fix isn't bulletproof by any standard.
 

moinmoin

Diamond Member
Jun 1, 2017
4,944
7,656
136
For Spectre there are two kinds of mitigation:
- Microcode and OS support for Intel's new IBRS, STIBP and IBPB controls (the ones Linus Torvalds blasted for making security opt-in instead of on by default), which restrict parts of the CPU's branch speculation; these target variant 2 (branch target injection).
- Compiler-level changes that rebuild affected code paths so the vulnerable patterns can't be exploited speculatively (a sketch of the pattern is below). Google's Retpoline is the most widely used approach against variant 2, while for variant 1 Microsoft does its own thing with MSVC's /Qspectre, and... it doesn't do what it promises to do.
Developers and users cannot rely on Microsoft's current compiler mitigation to protect against the conditional branch variant of Spectre. Speculation barriers are only an effective defense if they are applied to all vulnerable code patterns in a process, so compiler-level mitigations need to instrument all potentially-vulnerable code patterns. Microsoft's blog post states that "there is no guarantee that all possible instances of variant 1 will be instrumented under /Qspectre". In practice, the current implementation misses many (and probably most) vulnerable code patterns, leading to unrealistically optimistic performance results as compared to robust countermeasures while creating a potentially false sense of security.
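For anyone wondering what a "vulnerable code pattern" looks like in practice, here is a rough C sketch of the classic variant 1 gadget and one common fix (my illustration, not code from Google or Microsoft; names and sizes are made up):

```c
/* Spectre variant 1 ("bounds check bypass") in miniature. */
#include <stddef.h>
#include <stdint.h>

#define ARRAY1_SIZE 16              /* power of two, so the mask below is trivial */

uint8_t array1[ARRAY1_SIZE];
uint8_t array2[256 * 512];
size_t  array1_size = ARRAY1_SIZE;

/* Vulnerable: the CPU can speculate past the bounds check, read an
 * out-of-bounds array1[x], and leave a cache footprint in array2 that the
 * attacker later measures with a timing side channel. */
uint8_t victim(size_t x)
{
    if (x < array1_size)
        return array2[array1[x] * 512];
    return 0;
}

/* One common mitigation: clamp the index with arithmetic instead of relying
 * on the branch, so even a mispredicted path cannot form an out-of-bounds
 * address. The Linux kernel's array_index_nospec() and the LFENCE barriers
 * MSVC inserts under /Qspectre are more general takes on the same idea. */
uint8_t victim_clamped(size_t x)
{
    if (x < array1_size) {
        x &= ARRAY1_SIZE - 1;       /* safe here because the size is a power of two */
        return array2[array1[x] * 512];
    }
    return 0;
}
```

The point of the quote above is that a compiler has to find and instrument every such pattern in a program, and /Qspectre currently catches only a subset of them.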

Intel is trying to fix Spectre with microcode updates, but what about Meltdown? KPTI isolates kernel memory, but isn't usermode memory still accessible the simple Meltdown way? Basically, just read any address you want and use a side-channel attack to leak that address's data.
That's exactly what Spectre is about. Meltdown is a specific subset of Spectre, one that only affects Intel (as far as x86 chips go).
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
According to the InSpectre tool, Meltdown is fixed on all of my Intel boxes, and performance is "good".

Spectre is not fixed on any of them.
 

cytg111

Lifer
Mar 17, 2008
23,176
12,837
136
Can we do a sum? What's the word right now? Spectre is still alive even with patches, and for very specific loads the overhead is as high as 800% (what does that work out to in practice?).
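Doing that sum on the worst case quoted earlier: an overhead of 800% means the patched run takes nine times as long, so throughput drops to roughly a ninth of the original:

```latex
t_{\text{patched}} = (1 + 8.00)\,t_{\text{original}} = 9\,t_{\text{original}}
\quad\Longrightarrow\quad
\frac{\text{throughput}_{\text{patched}}}{\text{throughput}_{\text{original}}} = \tfrac{1}{9} \approx 11\%
```

Per the quoted article, where you land in the 1%-800% range depends on your syscall and page fault rates and working set size, so the high end is really the pathological case.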
 

moinmoin

Diamond Member
Jun 1, 2017
4,944
7,656
136
At some point one has to wonder if Intel ever managed to have any actually working low level privilege checks in their chips...

Intel introduced SGX (Software Guard Extensions) with Skylake, which is supposed to provide shielded execution environments (called enclaves) to applications so they are isolated even from the OS and hypervisor. That's the claim. Unfortunately, it turns out it's broken by Spectre.
 

EXCellR8

Diamond Member
Sep 1, 2010
3,982
839
136
Spectre seems to be mitigated on my Dell workstation at the office after a BIOS flash from the official support site. I was going to try doing the same on my Skylake computer tonight, but I might not have time. I didn't see any updates for any of my Z87 + Haswell computers; not that I'm particularly concerned, but it seems the mobo vendors haven't released anything or aren't going to...
 

The Stilt

Golden Member
Dec 5, 2015
1,709
3,057
106
What kind of slowdowns have people seen in GCC compiling tasks from these fixes?

I'm seeing over 2x performance difference in a specific timed compilation task on GCC 7.2 Mingw toolchain, on both Coffee Lake & Skylake-X systems.
Using fully updated Win10 (16299.248) and with the final microcode revisions (0x84 for CFL and 0x2000043 for SKL-X).
 

psolord

Golden Member
Sep 16, 2009
1,913
1,192
136
ASRock published the P1.60 BIOS for the Z370 Extreme4, with the new microcode updates, ten days ago and I missed it. Other Z370 BIOSes have probably been updated as well. Anyone interested should check.
 

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,329
4,913
136
What kind of slowdowns have people seen in GCC compiling tasks from these fixes?

I'm seeing over 2x performance difference in a specific timed compilation task on GCC 7.2 Mingw toolchain, on both Coffee Lake & Skylake-X systems.
Using fully updated Win10 (16299.248) and with the final microcode revisions (0x84 for CFL and 0x2000043 for SKL-X).

I'm using fully updated Win10, 0x84 microcode for CFL (ASRock UEFI P1.60) and have the following performance data using CDM on a NVMe Samsung 960 EVO:

Test : 1024 MiB [C: 78.2% (180.6/231.0 GiB)] (x5) [Interval=5 sec]

P1.40 (pulled update), Windows patched, with previous pulled MC update:
Sequential Read (Q= 32,T= 1) : 1967.230 MB/s
Sequential Write (Q= 32,T= 1) : 1326.054 MB/s
Random Read 4KiB (Q= 8,T= 8) : 1169.721 MB/s [ 285576.4 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 1034.159 MB/s [ 252480.2 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 514.705 MB/s [ 125660.4 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 687.330 MB/s [ 167805.2 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 43.424 MB/s [ 10601.6 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 156.527 MB/s [ 38214.6 IOPS]


P1.60 (current), Windows fully updated, 0x84 microcode:
Sequential Read (Q= 32,T= 1) : 1958.870 MB/s
Sequential Write (Q= 32,T= 1) : 1525.327 MB/s
Random Read 4KiB (Q= 8,T= 8) : 1169.036 MB/s [ 285409.2 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 1154.408 MB/s [ 281837.9 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 370.994 MB/s [ 90574.7 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 442.046 MB/s [ 107921.4 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 43.628 MB/s [ 10651.4 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 141.809 MB/s [ 34621.3 IOPS]

IOPS took a further hit versus the original "fixed" microcode that was later pulled... so I imagine the impact on GCC compiles will not be fun.

Edit: This is not entirely apples to apples, as the test using P1.40 was run at stock settings, while P1.60 was run with OC to 4.7 all core, 4.7 cache, no AVX offset.
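For a rough sense of scale from the two runs above (keeping the stock-vs-OC caveat in mind, so treat this as indicative only), the QD32 T1 random results work out to:

```latex
\frac{90574.7}{125660.4} \approx 0.72 \;\;(\text{about } 28\%\ \text{fewer random read IOPS}),
\qquad
\frac{107921.4}{167805.2} \approx 0.64 \;\;(\text{about } 36\%\ \text{fewer random write IOPS})
```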
 
  • Like
Reactions: moinmoin

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
Edit: This is not entirely apples to apples, as the test using P1.40 was run at stock settings, while P1.60 was run with OC to 4.7 all core, 4.7 cache, no AVX offset.

Are you going to run the latest at stock settings also? I'd imagine those who are curious about the impact would want to know and would be thankful. Did you run the same tests, and do you have results from before P1.40 as well?
 

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,329
4,913
136
Are you going to run the latest at stock settings also? I'd imagine those who are curious about the impact would want to know and would be thankful. Did you run the same tests, and do you have results from before P1.40 as well?

I do have the CDM result files from the stock benchmarks using P1.30 (no MC update) and P1.40 (pulled MC update). I can't spare the reboot tonight to return to stock settings, but I'll plan on doing some more testing tomorrow to see if performance changed at all.
 
  • Like
Reactions: Kenmitch

psolord

Golden Member
Sep 16, 2009
1,913
1,192
136
Is this relevant for consumer loads like office and games? If so how much?

I posted some gaming benchmarks in this thread a while back, but I can't find them. Thankfully I keep the BBCode of things that are somewhat important, so here it is. All in all, gaming is mostly unaffected. I haven't tested with the 1.60 BIOS though. As for office apps, I haven't noticed anything; the 8600K is too fast for those anyway.



Hello everybody.

So I did some benchmarks, before and after the KB4056892 patch, primarily on my Core i7-860, but also some quick tests on the i5-8600K. My 2500K will have to wait for a while, because I am running some other projects at the same time and I need to finish up with the i7-860.

This effort is completely hobbyist and must not be compared with professional reviews. It's just that I believe no reviewer will actually take old systems into consideration, so that's where I stepped in. There are some shortcomings in this test anyway, some of which are deliberate.

The test is not perfect because I have used mixed drivers. In some tests I used the same driver, in others drivers a couple of months apart, and in Crysis I used a year-old driver for the pre-patch run (it made no difference anyway). From my personal experience, newer drivers rarely bring any performance improvement; after the first couple of drivers, nothing of significance changes. Nvidia's game-ready drivers are usually ready right at the game's launch and very few improvements come after that. It's primarily game patches that affect a game's performance, which, granted, may need some reconsideration on the driver side. Still, the games I used did not have that problem (except one specific improvement in Dirt 4).

In this test there are three kinds of measurements. First I did the classic SSD benchmark, before and after the patch. Then I have gaming/graphics benchmarks, which consist of either the game's built-in benchmark or my custom gameplay benchmarks. Finally there is World of Tanks Encore, which is a special category: it's an automated benchmark, but I used Fraps to gather framerate data while it was running, because it only produces a ranking number and I wanted fps data.

In my custom gameplay benchmarks, I reran some of my previous benchmarks from my database, with the same settings, same location, etc. Keep in mind that these runs are not 30-60 second runs but several minutes long. I collected data with Fraps or OCAT depending on the game.
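Since the 1% and 0.1% "lows" come up in the results later, this is roughly how they fall out of a raw frame-time log: sort the frame times, average the worst 1% (or 0.1%) of them, and convert back to fps. The sketch below is generic, with made-up numbers, and is not the exact method of Fraps, OCAT or any particular tool (definitions differ slightly between them):

```c
/* Turn a frame-time log (milliseconds per frame) into average fps and
 * "1% / 0.1% low" figures. Generic sketch; real logs come from the
 * Fraps/OCAT CSV exports rather than a hard-coded array. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_desc(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x < y) - (x > y);              /* slowest (largest) frame times first */
}

static double low_fps(const double *sorted_ms, size_t n, double fraction)
{
    size_t count = (size_t)(n * fraction);
    if (count == 0) count = 1;             /* always average at least one frame */
    double sum = 0.0;
    for (size_t i = 0; i < count; i++)
        sum += sorted_ms[i];
    return 1000.0 / (sum / count);         /* mean of the worst frames, as fps */
}

int main(void)
{
    /* made-up frame times in ms, standing in for a real capture */
    double ms[] = { 16.7, 16.9, 17.1, 16.5, 33.4, 16.8, 16.6, 50.2, 16.7, 16.9 };
    size_t n = sizeof ms / sizeof ms[0];

    double total = 0.0;
    for (size_t i = 0; i < n; i++)
        total += ms[i];
    printf("average fps: %.1f\n", 1000.0 * n / total);

    qsort(ms, n, sizeof ms[0], cmp_desc);
    printf("1%% low fps:   %.1f\n", low_fps(ms, n, 0.01));
    printf("0.1%% low fps: %.1f\n", low_fps(ms, n, 0.001));
    return 0;
}
```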



Please note that ALL post-patch benchmarks were run with the 390.65 driver for both the 1070+860 and 970+8600K configurations. This driver is not only the latest available, but also brings security updates regarding the recent vulnerabilities.

So let's begin with the Core i7-860 SSD benchmarks. Please note that the SSD benchmarks on both systems were done at stock clocks. The 860's SSD is an old but decent Corsair Force GT 120GB.

[SSD benchmark screenshots, before and after the patch]
As you can see, there are subtle differences, nothing worth any serious worry IMO. However, don't forget that we are talking about an older SSD, which is also connected via SATA2, since this is an old motherboard as well.

Now let's proceed to the automated graphics benchmarks, which were all done with the Core i7-860 @ 4GHz and the GTX 1070 @ 2GHz.

Assassins Creed Origins 1920x1080 Ultra
[benchmark screenshot]

Ashes of the Singularity 1920x1080 High
[benchmark screenshots]

Crysis classic benchmark 1920x1080 Very High
[benchmark screenshot]

F1 2017 1920x1080 Ultra
[benchmark screenshot]

Unigine Heaven 1920x1080 Extreme
[benchmark screenshot]

Forza Motorsport 7 1920x1080 Ultra
[benchmark screenshot]
 
  • Like
Reactions: krumme

psolord

Golden Member
Sep 16, 2009
1,913
1,192
136
Gears of War 4 1920x1080 Ultra
[benchmark screenshot]

Rainbow Six Siege 1920x1080 Ultra
[benchmark screenshot]

Shadow of War 1920x1080 Ultra
[benchmark screenshot]

Unigine Valley Extreme HD
[benchmark screenshot]

Gears of War Ultimate Edition 1920x1080 maxed
[benchmark screenshot]

Total War Warhammer 2 1920x1080 Ultra
[benchmark screenshots]

And the special category of World of Tanks Encore I told you about, since it essentially belongs in the automated category.

World of Tanks Encore 1920x1080 Ultra
[benchmark screenshot]
Now as you can see, the differences are not big. They are quite insignificant, I would dare say; they seem to be well within the margin of error. I actually decided to benchmark the i7-860 with the 1070 first, since it's the weakest of my processors and any impact on CPU performance would be directly highlighted. Many of these runs are CPU limited already.

The two games that showed a measurable and repeatable performance drop were both UWP games, Gears of War 4 and Forza Motorsport 7. Even so, the drop was not such as to make you jump off your chair! Actually in Gears 4, in spite of the general performance drop, we did get an improvement in the 5% lows. Note that I used the same driver for both the pre- and post-patch runs.
 

psolord

Golden Member
Sep 16, 2009
1,913
1,192
136
My custom gameplay benchmarks follow suit. You can see the game titles and the settings in the screenshots; all at 1920x1080.

[custom gameplay benchmark screenshots]
Here we can see that there are no big differences in the average framerate, which is the primary and most important result. There are some fluctuations, mostly in the 0.1% lows, but not in all games. Prey seems to have had a harder time than the rest of them. I did notice two momentary hiccups during the run and was certain they would show up in the 0.1% lows.

The better 0.1% and 1% lows you see in Dirt 4 have, I have to admit, been affected by the newer driver. This is the exception to the rule, however, and not the other way around. There is a specific part at the beginning of the run which makes the framerate dip. It dips on the newer driver as well, just a little less. The overall average framerate was not affected, as you can see, and the game felt exactly the same to me.
 
  • Like
Reactions: krumme