Massive security hole in CPUs incoming? Official Meltdown/Spectre Discussion Thread


ClockHound

Golden Member
Nov 27, 2007
1,108
214
106
Hmmm... Seems Intel is very concerned about security.

Security of their share price, that is. They spent $15B on share buybacks over the last 2 quarters, with another $20B approved by the board for the next 12-18 months. Pulling a wing out of the Boeing/McDonnell Douglas playbook.
 

beginner99

Diamond Member
Jun 2, 2009
5,208
1,580
136
The same researchers warned Intel about the vulnerability in April — as it did with the other flaws they discovered that were patched a month later. Intel took until this month to investigate, the researchers said.


While Intel's behaviour is questionable at best, I'm still in the camp that many of these issues are overblown, especially for home users, and that the fixes simply waste a ton of energy (CPU cycles) for no real security gain (limiting browser timing accuracy was enough to make remote Meltdown and Spectre attacks impractical). Of course it's a different story for cloud providers, but even there, if your data were sensitive enough that such an attack would be worthwhile, you would only put it on a dedicated instance anyway, which also makes the issue a bit overblown.

About this new attack:

The new variant of the Zombieload attack allows hackers with physical access to a device […]

See. This isn't even an issue for cloud providers. Once someone has physical access all bets are off anyway and there are far easier attacks once you are at that point.
It's like adding a 3rd lock to your door when the windows are open. It's annoying and useless, just as annoying as my laptop running 20% slower because some other user running a VM on my laptop (right???) could theoretically steal data from me.

I really think these overblown reactions are a problem caused by social media. Because once it's out there without context, Intel and co. are required to react and fix it, even if it isn't needed and simpler solutions, like limiting browser timing accuracy, do the trick.

The real solution would be for the big chip makers, instead of using this as a PR chance to diss each other, to stand together and actually explain why the issues are overblown and provide real solutions. Given the current "green" hype, uselessly wasting CPU cycles would be a huge argument.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I figure Zombieload isn't. But the original Spectre and Meltdown variants should be, right?

Icelake isn't affected by the latest vulnerabilities.

Zombieload v2: https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00270.html
JCC: https://www.intel.com/content/dam/s...mitigations-jump-conditional-code-erratum.pdf

Most of the repeatable vulnerabilities seem to be related to Skylake. Icelake not having some of them may mean there are more fundamental issues at the uarch level in the SKL cores. The reason I say this is that they fixed the earlier Zombieload for Cascade Lake and Coffee Lake, but not for v2.
 

jpiniero

Lifer
Oct 1, 2010
14,509
5,159
136
Most of the repeatable vulnerabilities seem to be related to Skylake. Icelake not having some of them may mean there are more fundamental issues at the uarch level in the SKL cores. The reason I say this is that they fixed the earlier Zombieload for Cascade Lake and Coffee Lake, but not for v2.

Figure that's just because they fixed v2 in Sunny Cove but for some reason did not in Cascade Lake. Based upon what the researcher said, it almost sounded like Intel internally told the Sunny Cove team about it in time but not the Cascade Lake team. Which would explain why Intel at the last minute didn't want to make v2 public because it would hurt Cascade Lake sales.
 

Panino Manino

Senior member
Jan 28, 2017
813
1,010
136
Maybe the issues really are "overblown", but if a vulnerability is discovered and publicized, Intel has an obligation to release a fix, right?
If these fixes come with lower performance and disabled features, one can argue that Intel intentionally tricked customers into buying a product with worse performance and fewer features than promised. And if Intel put the NDAs in place first and foremost to avoid hurting sales, that is kind of wrong.
 

Hitman928

Diamond Member
Apr 15, 2012
5,177
7,628
136
See. This isn't even an issue for cloud providers. Once someone has physical access all bets are off anyway and there are far easier attacks once you are at that point.

The need for physical access is a mistake by that article's author. The actual whitepaper makes no mention of needing physical access; in fact, just the opposite:

ZombieLoad is furthermore not limited to native code execution, but also works across virtualization boundaries. Hence, virtual machines can attack not only the hypervisor but also different virtual machines running on a sibling logical core.

I think they were confused because the variant 2 section mentions needing a physical page address. That is not the same thing as needing physical access to the computer, since they say the page can be accessed via a virtual address:

With the second variant of inducing zombie loads, we eliminate the requirement of a kernel mapping. We only require a physical page p which is user accessible via a virtual address v. Any page allocated in user space fulfills this requirement.

This gives it a lower access requirement than ZombieLoad variant 1 (which needs a kernel mapping), and it definitely applies to cloud providers and really anyone who provides remote access to their computer. The researchers even consider user applications such as a web browser a viable attack scenario.

Many secrets are likely to be found in user-space applications such as browsers.

 

joesiv

Member
Mar 21, 2019
75
24
41
Some performance testing on Cascade Lake for the ZombieLoad mitigations:

Seems to be around an 8% performance decrease with the "default" mitigation; the biggest deficit (again) is context switching. Just disabling TSX looks best if performance is the priority. Though I don't really understand what TSX is good for lol...
 

Hitman928

Diamond Member
Apr 15, 2012
5,177
7,628
136
Some performance testing on Cascade Lake for the ZombieLoad mitigations:

Seems to be around an 8% performance decrease with the "default" mitigation; the biggest deficit (again) is context switching. Just disabling TSX looks best if performance is the priority. Though I don't really understand what TSX is good for lol...

TSX can provide significant performance improvements in heavy memory use scenarios. Under the right conditions, you could lose upwards of 50% performance by turning off TSX. This doesn't apply to consumer workloads, but it does to things like database transactions and some really memory-heavy compute work.

Edit:
Looking over Phoronix's benchmarks it looks like just disabling TSX is the better option. Intel's implementation of TSX was problematic from the start (they had to disable the initial implementation through microcode) and it seems like while it is now usable, with the new vulnerability it is best just to disable it rather than implement the mitigation which negates any good it may have done in some cases. Perhaps a more realistic heavy database workload might show TSX with mitigation being faster than disabling it, but the posted benchmarks don't offer much hope there.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Figure that's just because they fixed v2 in Sunny Cove but for some reason did not in Cascade Lake. Based upon what the researcher said, it almost sounded like Intel internally told the Sunny Cove team about it in time but not the Cascade Lake team. Which would explain why Intel at the last minute didn't want to make v2 public because it would hurt Cascade Lake sales.

Plus Icelake came later.

I still believe something in the deeper foundation is wrong with Skylake, though. Looking at their list, you can see that different architectures, such as their Atoms, are less affected or affected in different areas.

The earlier rumors saying we'd see more serious errata from Intel due to them skimping on validation seem to be coming true now.

I don't know why they are so slow to react, responding like Titanic-class ships to an iceberg. It's not just validation; their serious difficulty in moving the PCH on-die with their mainstream Core processors is troubling too. Remember the bug they had with the PCH in 2011 with the 2nd Gen Core processors? It's like they are afraid it'll happen again, but on-die means they'd have to recall CPUs, which would be far more costly. Not being able to move it on-die is part of the reason their battery life is significantly lower than that of ARM devices, despite the big advances they have made since Haswell.

But why? If it's due to the bureaucratic layers they have accumulated over the years, then they need to find ways to simplify somehow. It also seems that as companies get bigger, they become divided internally and engage in politics of their own.

I've read that part of their success is that teams can take over if they prove they can be more successful. But in the long run, such internal competition may mean some work to suppress others. Just like how they admitted they crippled x86 to make IA64 look better than it is.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Seems to be around an 8% performance decrease with the "default" mitigation; the biggest deficit (again) is context switching. Just disabling TSX looks best if performance is the priority. Though I don't really understand what TSX is good for lol...

Zombieload seems to refer to TSX being killed and coming back alive but never really being alive in the first place, just like a zombie.

Yea, disable TSX seems to be the way to go.
 

Gideon

Golden Member
Nov 27, 2007
1,608
3,570
136
Off topic I know, but I've never seen anything to this effect before, do you have a link to something?
There was an interview with one of Intel's ex-employees. He mentioned that they deliberately refused to make 64-bit x86 extensions (which were later added by AMD) and tried to pitch IA64 instead.

He saw that there was a huge gap between the two offerings and warned that AMD would take advantage of that. And AMD sure did, "they drove a truck through that opening" (his words, paraphrased from memory).
 

soresu

Platinum Member
Dec 19, 2014
2,617
1,812
136
There was an interview with one of Intel's ex-employees. He mentioned that they deliberately refused to make 64-bit x86 extensions (which were later added by AMD) and tried to pitch IA64 instead.

He saw that there was a huge gap between the two offerings and warned that AMD would take advantage of that. And AMD sure did, "they drove a truck through that opening" (his words, paraphrased from memory).
Here was me thinking that NetBurst was their sole own goal of that era.
 

ondma

Platinum Member
Mar 18, 2018
2,718
1,278
136
There was an interview with one of Intel's ex-employees. He mentioned that they deliberately refused to make 64-bit x86 extensions (which were later added by AMD) and tried to pitch IA64 instead.

He saw that there was a huge gap between the two offerings and warned that AMD would take advantage of that. And AMD sure did, "they drove a truck through that opening" (his words, paraphrased from memory).
You mean "ex worker" as in "disgruntled ex employee"? Without more information about the source, his reason for leaving, his qualifications, and verification from other known reliable sources, that seems pretty thin.
 

Gideon

Golden Member
Nov 27, 2007
1,608
3,570
136
You mean "ex worker" as in "disgruntled ex employee"? Without more information about the source, his reason for leaving, his qualifications, and verification from other known reliable sources, that seems pretty thin.
It doesn't really take much effort to see that Intel's strategy for Itanium was dumbfoundingly unrealistic, not for a moment but for years upon years. Itanium was also the only 64-bit arch they invested in.

Itanium sales expectation vs reality:
[Image: Itanium_Sales_Forecasts_edit.png]


As for the interview, it was with a higher-up who worked there for many years before and after the events, done some 10 years after the Itanium "heyday". Only one or two of the questions (of quite a long interview) touched on the subject, and the overall tone surely wasn't dissing; they also talked about the comeback.
 

cortexa99

Senior member
Jul 2, 2018
318
505
136
Intel's Mitigation For CVE-2019-14615 Graphics Vulnerability Obliterates Gen7 iGPU Performance


Yesterday we noted that the Linux kernel picked up a patch mitigating an Intel Gen9 graphics vulnerability. It didn't sound too bad at first but then seeing Ivy Bridge Gen7 and Haswell Gen7.5 graphics are also affected raised eyebrows especially with that requiring a much larger mitigation. Now in testing the performance impact, the current mitigation patches completely wreck the performance of Ivybridge/Haswell graphics performance.

This is a never-ending story. But it still surprises me that graphics performance is also being crippled by this new vulnerability, and it's much more painful than any of the other vulnerabilities of the past 2 years, since it results in nearly a 50% performance drop for integrated graphics.
 

Gideon

Golden Member
Nov 27, 2007
1,608
3,570
136
Oh boy, another (Intel-only) vulnerability, CacheOut, this time connected to the TSX instructions.
We present CacheOut, a new speculative execution attack that is capable of leaking data from Intel CPUs across many security boundaries. We show that despite Intel's attempts to address previous generations of speculative execution attacks, CPUs are still vulnerable, allowing attackers to exploit these vulnerabilities to leak sensitive data.

Moreover, unlike previous MDS issues, we show in our work how an attacker can exploit the CPU's caching mechanisms to select what data to leak, as opposed to waiting for the data to be available. Finally, we empirically demonstrate that CacheOut can violate nearly every hardware-based security domain, leaking data from the OS kernel, co-resident virtual machines, and even SGX enclaves.
 

Hitman928

Diamond Member
Apr 15, 2012
5,177
7,628
136
Intel is still not done with Zombieload mitigations, apparently.


For the third time in less than a year, Intel has disclosed a new set of vulnerabilities related to the speculative functionality of its processors. On Monday, the company said it will issue a software update "in the coming weeks" that will fix two more microarchitectural data sampling (MDS) or Zombieload flaws. This latest update comes after the company released two separate patches in May and November of last year.

Compared to the MDS flaws Intel addressed in those two previous patches, these latest ones have a couple of limitations. To start, one of the vulnerabilities, L1DES, doesn't work on Intel's more recent chips. Moreover, a hacker can't execute the attack using a web browser. Intel also says it's "not aware" of anyone taking advantage of the flaws outside of the lab.

However, like when the company issued its second MDS patch in November, security researchers are criticizing Intel for its piecemeal approach. "We spent months trying to convince Intel that leaks from L1D evictions were possible and needed to be addressed," the international team of computer scientists that discovered the flaw wrote on their website. In an addendum to their original paper, there's a sense of exasperation with the company. "We reiterate that RIDL-class vulnerabilities are non-trivial to fix or mitigate, and current 'spot' mitigation strategies for resolving these issues are questionable," the researchers write.
 

UsandThem

Elite Member
May 4, 2000
16,068
7,380
146
https://www.tomshardware.com/news/intel-new-microcodes-cpu-security-flaws
Intel on Thursday released a microcode update for the latest speculative execution flaws, such as the MDS attacks, that have affected its CPUs. The update is now available for both consumer and server versions of Windows 10 build 1903, but users must install it manually.

I figured I'd just leave this here, since the latest microcode update addresses yet another version of the speculative execution flaws.
 

joesiv

Member
Mar 21, 2019
75
24
41
Manually? Interesting. Wonder why it isn't mandatory.
Wonder if it has a performance penalty. From many people's perspective (including those in my company), the speculative execution attacks are medium-risk and as such don't warrant overreaction. So perhaps Microsoft is leaving it up to admins to decide whether it's worth the effort/performance cost to apply it?