
Article: Intel CEO Bob Swan to step down in February, VMware CEO Pat Gelsinger to replace him


moinmoin

Platinum Member
Jun 1, 2017
An electrical engineer as CEO, now that's good news!
More of a software and cloud guy for the last decade. Which needn't be a bad thing.

He knows traditional Intel well enough, having had plenty of involvement both when Intel fared well and when Intel first fell behind AMD and recovered from that. His detour through EMC and VMware lasted long enough that he also has plenty of outside perspective on corporate organization and culture, as well as on the cloud datacenter market, where Intel appears to be losing the most right now.

It will be interesting to watch how that turns out for Intel; in any case it should be better for tech from Intel than anything since 2009.

Reactions: NeoLuxembourg

gdansk

Senior member
Feb 8, 2011
In this video he starts with a bit about his thoughts on Intel. Some interesting bits: acquisitions may be a waste of the balance sheet (9:40); forcing 'synergy' can cause misses (6:42), with switching manufacturing processes listed as an example of such a misstep; and he wanted a focus on GPGPU (14:09) and regrets that Intel gave up on those high-throughput products, since doing so allowed the rise of (datacenter-focused) Nvidia.

From that I expect Xe to be a major focus while he's CEO. Given how open he seemed to TSMC, I also expect more products built by external foundries.

Reactions: moinmoin

Mopetar

Diamond Member
Jan 31, 2011
In a certain sense, Intel using other fabs doesn't hurt them all that much. Sure, they hand TSMC margins that would otherwise stay within the company, but every TSMC wafer that Intel buys is one that AMD and Nvidia can't use. If you can't beat your competition head to head, you can always starve them of resources.

Reactions: beginner99

jpiniero

Diamond Member
Oct 1, 2010
Mopetar said:
In a certain sense, Intel using other fabs doesn't hurt them all that much. Sure, they hand TSMC margins that would otherwise stay within the company, but every TSMC wafer that Intel buys is one that AMD and Nvidia can't use. If you can't beat your competition head to head, you can always starve them of resources.
Those margins are what fund the subsequent nodes. They don't have to spin off the fabs, but they might not be able to afford 7nm or anything past it.

moinmoin

Platinum Member
Jun 1, 2017
By the way, the Computer History Museum interview is also available as transcripts if you prefer reading it:
Part 1: Part 2:
Mopetar said:
In a certain sense, Intel using other fabs doesn't hurt them all that much. Sure, they hand TSMC margins that would otherwise stay within the company, but every TSMC wafer that Intel buys is one that AMD and Nvidia can't use. If you can't beat your competition head to head, you can always starve them of resources.
Doing so boosts the competing foundry (like TSMC in your example) though: the higher the capacity utilization, the higher the prices it can fetch and the more it can invest into node advancements.

zir_blazer

Senior member
Jun 6, 2013
Whoa, what do we have here...

Gelsinger: Yeah, I remember my first acquisition that I was personally responsible for at Intel was Chips and Technologies, right when we acquired that. Being part of the chip-set company and I personally oversaw that acquisition, bringing it in, making it part of the Intel chip-set business at the time and it was a 2D graphics capability and in particular was what we were looking for.
C&T seemed to be a rather major innovator in the late 80s (it is credited as the maker of the first chipset), then suddenly became irrelevant in the early 90s before being absorbed by Intel.
As a side note, one of the guys who left C&T went on to create S3 Graphics, which developed a bus known as Advanced Chip Interconnect, later adopted by Intel as... PCI.


S3 and PCIe
Dado Banatao left the company in 1989 to launch another start-up, S3 Graphics, which focused on graphics processors. They again used the gate-array technology to get to market fast. They looked around for the biggest arrays and found what they needed at Seiko-Epson. Banatao's key invention at S3 was a new interconnect, a local bus, to move data between chips faster. They called it Advanced Chip Interconnect, which later became Intel’s PCI and eventually the industry-standard PCIe.
I have absolutely no idea what happened there. I don't recall S3 ever being credited for PCI, nor do I know how Intel became involved with the ACI design and then took it over as if it were its own.

Reactions: moinmoin

moinmoin

Platinum Member
Jun 1, 2017
Yeah, that long CHM interview is full of intriguing details. @gdansk already mentioned another part, Gelsinger pushing for massive multicore and GPGPU before he left for EMC.
Gelsinger: But the other thing that we had turned our attention to and I was turning my attention to, and I say this was the last of my list of things I wanted to get done when I was the head of the enterprise business for Intel, was graphics and the massive multicore. We really saw that the space that Nvidia with GPGPUs, CUDA-- that whole space, and we sort of said “This matters.” Their silicon footprint was always sort of this view: Who has more transistors on here? Okay, memory, they’re sort of commodity. Networking. We gotta go with networking and we started to build up the Intel networking business. But that graphics footprint: People were starting to use that for nongraphics purposes with CUDA and GPGPU. We didn’t think of it through machine learning and AI as we would today but those throughput workloads were getting bigger.

So when we started the-- it became known as the Larrabee Project. It was sort of the last big project that I was getting underway at Intel, and I knew that if workloads emerged that weren’t on the Intel architecture, Intel lost. That project was underway. We had two purposes in it and one was high performance computing and one was graphics. It was sort of the two workloads that we were working on and, again, if we looked at it today we would have said over five years ahead of our time, in terms of getting a machine learning AI workload in place. It really wasn’t seen quite yet, but it was a class of that whole I’ll say throughput-oriented versus latency-oriented workloads that were really driving it, and that was sort of the last big thing I had underway.

And when EMC offered me the job of going there as their president, well, I struggled because I had made a list of ten things I wanted to get done when I took the enterprise job. Took over the microprocessor development engine for Intel. And the last one left was this graphics-throughput workload one, and I knew that wasn’t done. So I was sort of like, I never don’t finish the job. I was one of those. I really struggled over leaving at the time because that one was undone, but nine out of ten were done. I was being wooed to consider coming over to EMC and I knew that was really important. It wasn’t done but it was a couple of, three more years to get it done. So I decided to leave at that point. That one was undone and Intel killed it shortly after my departure, and that was hard, disappointing to see and in retrospect, Nvidia would not be the company it is today had that been pursued because the workloads would have stayed on the Intel architecture.
In a way Intel's stagnation started when Gelsinger left and that project was canceled instead of being pursued by others within Intel. So Nvidia and later AMD picked it up, and Intel is now playing catch-up.

JoeRambo

Senior member
Jun 13, 2013
That interview is a gold mine of detail. Some things like the early RISC vs. CISC debates are in there. Intel committed a horrible mistake when politics pushed Pat Gelsinger out; in retrospect his GPGPU idea was first wrong, and then right once AI took off. Hopefully he can still right the ship before market forces swamp it.

iamgenius

Senior member
Jun 6, 2008
It is hopefully a change for the better. I'm no good at politics and business, but VMware served me very well over the last few years. Their ex-CEO can't be just another random guy.

Mopetar

Diamond Member
Jan 31, 2011
moinmoin said:
Doing so boosts the competing foundry (like TSMC in your example) though: the higher the capacity utilization, the higher the prices it can fetch and the more it can invest into node advancements.
Intel competes with AMD and Nvidia far more than it does with TSMC, so any boost Intel gives TSMC is less of a loss for Intel than competing CPUs and GPUs are.

If you wanted to hurt TSMC, the best way of doing so is to take out the customers that can turn its wafers into high-margin products. An easy way to do that, if you can't beat them with your own designs on your own fab, is to take as many wafers away from your competitors as possible.

If your own fab improves later, you don't need to use TSMC. If it still has issues and TSMC has invested in a new and better process, Intel can still use that to their benefit.

moinmoin

Platinum Member
Jun 1, 2017
Mopetar said:
Intel competes with AMD and Nvidia far more than it does with TSMC, so any boost Intel gives TSMC is less of a loss for Intel than competing CPUs and GPUs are.
Nvidia maybe, and definitely if its takeover of ARM succeeds. AMD, on the other hand, while a direct competitor, is also a fallback that keeps the x86 market from imploding should Intel's own products not be competitive. Whereas AMD is a threat to Intel's x86 market share, ARM is the much bigger threat to the very existence of Intel's core market.

Mopetar said:
If you wanted to hurt TSMC, the best way of doing so is to take out the customers that can turn its wafers into high-margin products. An easy way to do that, if you can't beat them with your own designs on your own fab, is to take as many wafers away from your competitors as possible.
Intel taking out e.g. Apple? Good luck with that. Remember that by going after TSMC's capacity Intel will always clash with TSMC's big customers as well. This won't go over well if Intel also presents itself as a customer.

Mopetar said:
If your own fab improves later, you don't need to use TSMC. If it still has issues and TSMC has invested in a new and better process, Intel can still use that to their benefit.
While TSMC will go a long way toward seeming neutral in public, I'm sure it will take internal measures to ensure Intel can't switch back and forth as it likes. The "easiest" way is staying ahead in process node development, which is both TSMC's biggest service and its best way to keep customers in its fold.

Intel's foundry needs to become competitive again. Neither of these two approaches will help Intel achieve that.


Hitman928

Diamond Member
Apr 15, 2012
He gets it. ;)

Yeah, this was a good hire for Intel. The question will be can he actually make the changes necessary to realize that vision and if so, how long will it take? There was a very serious exodus of experienced CPU designers from Intel over the last few years, so it won't be a short-term turnaround. They will need to clean house (internal-politics-wise) and probably offer some really good compensation packages to lure high-level people back to Intel at this point.

Hitman928

Diamond Member
Apr 15, 2012
That is kind of like saying to win the Super Bowl you have to score more points than the other team.
I think you'd be surprised how this seemingly obvious statement isn't so obvious to many people in management positions at a place like Intel.

moinmoin

Platinum Member
Jun 1, 2017
Hitman928 said:
The question will be can he actually make the changes necessary to realize that vision
Since he rejected Intel's advances in the past, my guess is he bided his time until the board gave him the freedom to do as he wants. So two sub-questions remain: Can Intel as a company make the changes necessary? And will Gelsinger make the correct decisions?

Hitman928 said:
and if so, how long will it take?
For any kind of turnaround to become visible to the public I'd expect at least 5 years.

Taking AMD as an example: from the point Mark Papermaster joined in 2011 to the launch of the first Zen chip in 2017 took nearly 6 years, and the turnaround becoming visible to everybody took a further 2-3 years, until Zen 2 or Zen 3 depending on where you want to draw the line.

---

As an aside, the monetary compensation described as a lure in the article is actually rather low, with the big number mentioned depending entirely on performance and on the length of Gelsinger's stay:
Intel said Thursday it will pay Gelsinger $1.25 million in base salary, a $1.75 million hiring bonus and an annual bonus valued at $3.4 million, depending on performance. If Gelsinger buys up to $10 million in Intel stock, the company said it will give him a matching number of restricted shares.

In addition, Gelsinger stands to receive $100 million in restricted stock – contingent on Intel and its share price meeting various performance metrics. So he may ultimately receive considerably less than that – or, potentially, even more. The restricted stock vests over a period of five years, contingent on Gelsinger remaining with Intel.
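To put rough numbers on that (my own back-of-the-envelope estimate, assuming the annual bonus pays out in full): the recurring part is $1.25M base + $3.4M bonus = $4.65M per year, plus the one-off $1.75M hiring bonus and up to $10M in matched restricted shares. The headline $100M in restricted stock works out to at most $20M per year over the five-year vesting period, and only if the performance metrics are actually met.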
 

Ajay

Diamond Member
Jan 8, 2001
jpiniero said:
Those margins are what fund the subsequent nodes. They don't have to spin off the fabs, but they might not be able to afford 7nm or anything past it.
They have the money. They are blowing billions on stock buybacks to prop up the share price. They can stop that and put it to work on a whole array of internal problems instead if they choose.

moinmoin

Platinum Member
Jun 1, 2017
Ajay said:
They have the money. They are blowing billions on stock buybacks to prop up the share price. They can stop that and put it to work on a whole array of internal problems instead if they choose.
They got the money on the back of a stream of record-breaking financial quarters, thanks to the pure profit 14nm brings in. That well is currently in the process of drying up, and both 10nm and 7nm are nowhere near replacing it as the cash cow. Yeah, blowing those billions (as well as the proceeds from selling the SSD business) on stock buybacks will not look so wise in the years to come.
