Intel CPUs Hit by NetCAT Security Vulnerability, AMD Not Impacted

Page 4 - AnandTech community discussion

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
While I agree that companies cheat intentionally, I'm not so sure that was the case here. Maybe it was simply an engineering trade-off, or they didn't even think about it. I say that because back when this general uArch was designed (Core uarch, I think Yonah), such security considerations weren't a big deal, and the cloud didn't exist yet. The case of VW intentionally breaking the law is something entirely different.
It may have been an engineering trade-off. If I were a betting man, however, I'd argue it was a conscious corporate decision of trading security for marketing.

Intel has known since 1995 (at least) that the TLB, speculative execution, etc. on their P6 uarch were a major security problem. They continued to use the same known-compromised design for 20+ years. They've definitely known that their SMT implementation was insecure for some time as well, since at least 2005, when the first CVE for their hyperthreading implementation was released.

Apple and other vendors started trying to mitigate these at the kernel level almost a decade ago, and Linux/Unix followed suit. KASLR and KPTI were among the fixes; I can't recall the details of everything OS vendors have done.

Long story short, Intel has been sharing bits between cores since P6, has known about the security implications for at least 24 years, if not longer, and has continued to excuse their behavior because "no one would ever run malicious code on the same computer that is running other important stuff" (paraphrased). It's not even just shared cache: the in-flight data can be snooped because of their design decisions.
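
The snooping described above is, at bottom, a timing side channel. As an illustration only (this is a toy model, not real exploit code: the "cache" here is just a Python set, and the latency numbers are invented), here is the inference an attacker makes when recovering a victim's access pattern purely from access times:

```python
# Toy model of a flush+reload-style timing side channel.
# The "cache" is a plain set and the latencies are invented numbers;
# this illustrates only the inference step, not real hardware behavior.

HIT_NS, MISS_NS = 70, 200  # assumed access latencies (made up)

def probe(cache, line):
    """Return the simulated access time for one cache line."""
    return HIT_NS if line in cache else MISS_NS

def victim(cache, secret):
    """Victim touches one cache line per secret bit that is set."""
    for bit, used in enumerate(secret):
        if used:
            cache.add(bit)

def attacker(cache, nbits):
    """Attacker recovers the secret purely from probe timings."""
    return [1 if probe(cache, b) == HIT_NS else 0 for b in range(nbits)]

cache = set()          # "flush": start with an empty cache
secret = [1, 0, 1, 1]  # the victim's secret access pattern
victim(cache, secret)
print(attacker(cache, len(secret)))  # prints [1, 0, 1, 1]
```

Real attacks differ enormously in detail (cache sets, eviction, noise), but the inference is the same: a slow access means "not touched," a fast access means "touched."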

Intel wanted multiple cores. Then they wanted hyperthreading. They decided that rather than completely rewrite the playbook, they'd shoe-horn old design decisions into a new era of computing. Then, marketing again, they had to go after speed too. And they traded security for speed. And I just can't see an engineering team trading security for more speed on a uarch they know is going into servers. I think it's far more likely they had a direction given to them by executives. But maybe I'm naive about that. Maybe their engineers really are that ignorant.
 

Jimzz

Diamond Member
Oct 23, 2012
4,399
190
106
It may have been an engineering trade-off. If I were a betting man, however, I'd argue it was a conscious corporate decision of trading security for marketing.

[...]

I think it's far more likely they had a direction given to them by executives. But maybe I'm naive about that. Maybe their engineers really are that ignorant.


I wonder if the P3/P4 era played a part in that design, as they were getting beaten by AMD. Getting past the P4's performance issues and catching up to AMD was the bigger push, not security or long-term concerns.
With that said, you would think that with 10nm being delayed so much, these security issues would have been fixed on one of the many "+++" respins. So Intel has no excuse: they had AMD beaten to a pulp in servers during the Bulldozer years, so there was plenty of time to do something.
 

moinmoin

Diamond Member
Jun 1, 2017
4,952
7,661
136
back when this general uArch was designed (Core uarch, I think Yonah), such security considerations weren't a big deal
Still doesn't seem to be a big deal at Intel.
Intel® bought a sponsored "article" on security titled "We've secured our CPU silicon" on The Register
Choice quote on security amid the sea of hand-waving:
Intel® says these side-channel leaks can only be maliciously exploited in the wild by highly sophisticated cyber-criminals. The archetypal hacker in the bedroom won't have the skills to abuse these flaws in the real world to steal information.
All's fine at Intel®!
 

Ajay

Lifer
Jan 8, 2001
15,451
7,861
136
Long story short, Intel has been sharing bits between cores since P6 and have known about the security implications for at least 24 years, if not longer, and continued to excuse their behavior because "no one would ever run malicious code on the same computer that is running other important stuff" (paraphrased). It's not even sharing cache - the in-flight data can be snooped because of their design decisions.

There's also the issue that prior to the explosion of internet use, server exposure to untrusted networks was far more limited and restricted, and often not even possible. The only security risks were from "inside" attacks. So there was some reason to question the merits of security over performance and manageability. With 20/20 hindsight, these were very poor decisions. The fact that Intel did not aggressively limit their attack surfaces in light of the vast exposure to untrusted networks in the new era of computing was a catastrophic error.
 

coercitiv

Diamond Member
Jan 24, 2014
6,200
11,899
136
The archetypal hacker in the bedroom won't have the skills to abuse these flaws in the real world to steal information.
So Intel's security, one of the six pillars of innovation meant to propel them into the future, is built around a threat model of low skill & low resource hackers. Reminds me of a recent press release of another giant: "the sophisticated attack was narrowly focused, not a broad-based exploit". It's easy to talk the big talk when measuring yourself against script kiddies in the bedroom, but just like Apple, when Lord Voldemort comes to town they won't even have the guts to call the threat by name.

I just hope the crisis ahead ends up putting engineers back in charge of the product line, so we don't have to watch another Boeing in the making.
 

kawi6rr

Senior member
Oct 17, 2013
567
156
116
If anyone here thinks Intel didn't know about this, you're only fooling yourself. We all know how unscrupulous Intel can be, and we see their shenanigans all the time. They're a huge company whose primary goal is to make money, not to be your friend, so if you think they're looking out for your best interest, then again, you've been fooled.
 
  • Like
Reactions: spursindonesia

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
So Intel's security, one of the six pillars of innovation meant to propel them into the future, is built around a threat model of low skill & low resource hackers.

To a certain degree they are right. These vulnerabilities are a bit blown out of proportion. Reducing JS timer resolution in browsers was really all that was needed to prevent attacks on consumer devices from web sites. All the performance-limiting patches are realistically overkill even for most enthusiasts. I don't host VMs for different interested parties.
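
On the timer-resolution point, a quick sketch of why coarsening the clock blunts these attacks (the latency figures below are made up purely for illustration):

```python
# Why coarsening a browser timer blunts timing attacks: with the
# (made-up) latencies below, a fine-grained clock separates a cache
# hit from a miss, but quantizing readings to a coarse tick makes
# the two measurements identical.

HIT_NS, MISS_NS = 70, 200   # assumed cache hit/miss latencies (made up)
COARSE_TICK_NS = 100_000    # e.g. a timer rounded to 100 microseconds

def measure(duration_ns, tick_ns=1):
    """Return the duration as seen by a timer with the given tick."""
    return (duration_ns // tick_ns) * tick_ns

# Fine timer: the attacker can tell the two cases apart.
fine = (measure(HIT_NS), measure(MISS_NS))
# Coarse timer: both measurements collapse to the same value.
coarse = (measure(HIT_NS, COARSE_TICK_NS), measure(MISS_NS, COARSE_TICK_NS))

print(fine)    # prints (70, 200) -> distinguishable
print(coarse)  # prints (0, 0)   -> indistinguishable
```

Real browser mitigations also add jitter and restrict shared-memory clocks, but the core idea is the same: if the clock can't resolve the latency gap, the timing channel dries up.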

Even in the cloud it's overblown. If you needed top-notch security, you were already running your stuff on your own servers, or on dedicated cloud servers with no other users on them, before these issues were known. You think a bank runs their stuff on shared servers in the cloud? And since, as a hacker, you have no clue about, and no control over, who else is on your server, you will just be fishing for random data and really need to get lucky to find something you can profit from. It's not a targeted attack, it's not a "mass attack" like crypto-lockers, and it's complex on top of that. It's really only something "state hackers" would abuse.

I'm not defending Intel. I'm arguing from an engineering side, analyzing real risks and costs, and for a personal computer the cost is much higher than the risk ever was.
 

coercitiv

Diamond Member
Jan 24, 2014
6,200
11,899
136
You think a bank runs their stuff on shared servers in the cloud?
You think an airplane maker would ever make a fly-by-wire system that only uses *one* angle-of-attack sensor to decide whether it should push the airplane towards the ground while also overriding *all* pilot controls? Your argument is the embodiment of common sense, but unfortunately reality violently disagrees. The more insecure systems are, the less human management needs to screw up until the system is compromised.

Personally I don't see a reason to attack Intel for what they've done up to now. Hindsight isn't the best lens to look down on someone through. However, when I see these PR moves that explicitly downplay the importance of discoveries, that's when the brown pudding hits the fan. I hope I'm never proven right, as the time required to secure hardware around the world for small/medium businesses and administrations will seem like an eternity.
 

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
You think an airplane maker would ever make a fly-by-wire system that only uses *one* angle-of-attack sensor to decide whether it should push the airplane towards the ground while also overriding *all* pilot controls? Your argument is the embodiment of common sense, but unfortunately reality violently disagrees. The more insecure systems are, the less human management needs to screw up until the system is compromised.

Fair enough. But here the bean-counters only saw the immediate benefit of making the system "simple": lower cost. The downside doesn't have any direct cost to Boeing...

In the case of a bank, the downside is pretty direct: money stolen. But then, yeah, maybe moinmoin is right and it's better I don't know. After all, I know of one case where the person got the full account's worth of money back from the bank (several tens of thousands). The only requirement was signing a paper agreeing not to go public about the theft... which tells me everything I need to know. They know their system is insecure.
 

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
I can bet you Intel did not know
You think Intel didn't know about the security exploits? They've known since at least 1995. And they kept the same design with the same exploits not just for years, but for decades.

You assert things without evidence, things that can be dismissed with easily discoverable evidence.
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
AMD's R&D team had less than 1/10 of the funding Intel had.

I don't think the R&D budget is the be-all and end-all when it comes to security. Sure, it can play a part, but often one good idea leads to another.

Fun fact... AMD's Secure Memory Encryption technology was an afterthought of securing the consoles against pirates.

 
  • Like
Reactions: Schmide