Question: Intel Corp CEO Pat Gelsinger on the AI revolution

IGBT

Lifer
Jul 16, 2001
17,920
121
106
Intel Corp: Intel's CEO, Pat Gelsinger, is on a mission to make Intel a leader in the AI space, and he ain't playing around... With the first consumer chip featuring a built-in neural processor for machine learning tasks, Meteor Lake, set to ship later this year, we're about to witness an AI revolution... So, get ready for an AI-powered shakeup that's gonna reshape the tech landscape and redefine Intel's role in the market.
 

TheELF

Diamond Member
Dec 22, 2012
3,960
713
126
Add a source link. Also, having AI in a consumer chip will do diddly-squat toward a revolution.
Any relevant AI is being done in the cloud on massive amounts of data; the AI on a consumer chip will help a bit at making pictures and videos look better.
Quick Sync has been around for a while and it's great, but nobody saw any revolution happen because of it.
 

IGBT

Lifer
Jul 16, 2001
17,920
121
106
Add a source link. Also, having AI in a consumer chip will do diddly-squat toward a revolution.
Any relevant AI is being done in the cloud on massive amounts of data; the AI on a consumer chip will help a bit at making pictures and videos look better.
Quick Sync has been around for a while and it's great, but nobody saw any revolution happen because of it.
It may be more of a marketing concept to get consumers to replace their existing computers...
 

scannall

Golden Member
Jan 1, 2012
1,943
1,629
136
Intel Corp: Intel's CEO, Pat Gelsinger, is on a mission to make Intel a leader in the AI space, and he ain't playing around... With the first consumer chip featuring a built-in neural processor for machine learning tasks, Meteor Lake, set to ship later this year, we're about to witness an AI revolution... So, get ready for an AI-powered shakeup that's gonna reshape the tech landscape and redefine Intel's role in the market.
I'd just point out that Apple has had an NPU for several years now, so Intel will be second at best.
 

moinmoin

Diamond Member
Jun 1, 2017
4,712
7,231
136
How could you not mention that AMD has Ryzen AI in Phoenix Point? Not to mention Qualcomm has had the Hexagon AIE for a few generations of the 8cx platform... so Intel is actually the fourth player to offer an AIE, as usual late to the party :rolleyes:
Even MediaTek has AI in its products, and Samsung of course as well. NPUs, or whatever they are called, have been in phones for years already. The better question is: who aside from Intel doesn't have one yet?
 

dullard

Elite Member
May 21, 2001
24,711
3,012
126
Any relevant AI is being done in the cloud on massive amounts of data; the AI on a consumer chip will help a bit at making pictures and videos look better.
AI has two elements: training and inference. You are completely ignoring half of the picture.

Training requires massive amounts of power and data. That training will mostly be done on servers on the internet. Imagine things like teaching a computer what an image of a fish looks like--that will be done in the cloud.

But inference, where you use the model that training created, often will not be done on servers. Things like asking PowerPoint to browse all of your computer's photos to make a collage of your father fishing for his memorial service--that is most likely done on the individual device. Focusing the laptop camera only on your face (even more specifically, on your eyes) during a conference call--done on your laptop, not in the cloud. Getting Photoshop to properly select the subject (all of it) and none of the background--that will be done on your computer. Real-time voice translation from any language to any language works best on your device, especially if you are out of internet range. Having Microsoft Word write a cover letter summarizing your work is best done on your work desktop, so your sensitive work data isn't sent to the cloud. Having Excel scan all your data to write your quarterly business report is best done privately to avoid insider-trading issues--on your computer, not in the cloud. Etc.
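
In code terms, the asymmetry looks roughly like this (a toy PyTorch sketch of my own, nothing to do with Intel's actual stack): training loops over piles of data with gradients and optimizer state; inference is a single forward pass with all of that switched off.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

# --- Training: many passes over lots of data, gradients, optimizer state.
# This is the expensive half that stays in the cloud.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):                 # real training: millions of steps
    x = torch.randn(32, 64)          # stand-in for a batch of training data
    y = torch.randint(0, 10, (32,))  # stand-in labels
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                  # gradient computation: the heavy part
    optimizer.step()

# --- Inference: one forward pass, no gradients, no optimizer.
# This is the half that is cheap enough for a laptop NPU.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 64)).argmax(dim=1)
```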
 

cytg111

Lifer
Mar 17, 2008
22,342
12,079
136
AI has two elements: training and inference. You are completely ignoring half of the picture.

Training requires massive amounts of power and data. That training will mostly be done on servers on the internet. Imagine things like teaching a computer what an image of a fish looks like--that will be done in the cloud.

But inference, where you use the model that training created, often will not be done on servers. Things like asking PowerPoint to browse all of your computer's photos to make a collage of your father fishing for his memorial service--that is most likely done on the individual device. Focusing the laptop camera only on your face (even more specifically, on your eyes) during a conference call--done on your laptop, not in the cloud. Getting Photoshop to properly select the subject (all of it) and none of the background--that will be done on your computer. Real-time voice translation from any language to any language works best on your device, especially if you are out of internet range. Having Microsoft Word write a cover letter summarizing your work is best done on your work desktop, so your sensitive work data isn't sent to the cloud. Having Excel scan all your data to write your quarterly business report is best done privately to avoid insider-trading issues--on your computer, not in the cloud. Etc.
I have a ChatGPT ~3.5 clone running on my 3080 Ti, and even utilizing all the hardware and all the watts, it is *dog slow*. It will tell you how to cook meth and whatnot, BUT in its own time.

Point being, I don't see what a piece of side silicon on a CPU is gonna achieve in terms of revolutions.
 

dullard

Elite Member
May 21, 2001
24,711
3,012
126
I have a ChatGPT ~3.5 clone running on my 3080 Ti, and even utilizing all the hardware and all the watts, it is *dog slow*. It will tell you how to cook meth and whatnot, BUT in its own time.

Point being, I don't see what a piece of side silicon on a CPU is gonna achieve in terms of revolutions.
GPUs are fine for AI training. They suck for AI inference. That is why you want a dedicated piece of silicon: to be 100x to 1000x more power efficient and faster. On the topic of this thread, I'm sure Intel's version will be okay but not mind-blowing. But it will just be the start.

You might not think it is going to be a revolution, but a lot of tech companies do think so and are pouring billions of dollars into it. I wouldn't go so far myself as to say it is a revolution, but it will be a game changer for many specific tasks. Here is a simple write-up. https://www.forbes.com/sites/quicke...y-business-owner-should-know/?sh=4e3f940d2ab9
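
From the software side, targeting that dedicated silicon could look roughly like this (an OpenVINO-style sketch; the "NPU" device name and the model file are my guesses, since Intel hasn't published the Meteor Lake software stack yet):

```python
# Hypothetical sketch: route the same model to an NPU when one is present.
# The "NPU" device name and "face_detect.xml" model file are assumptions.
from openvino.runtime import Core

core = Core()
model = core.read_model("face_detect.xml")  # hypothetical pre-trained model

# Fall back to the CPU when no NPU plugin is available.
device = "NPU" if "NPU" in core.available_devices else "CPU"
compiled = core.compile_model(model, device)

# Same model, same inputs -- only the execution target (and the power
# draw) changes:
# result = compiled([input_tensor])
```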
 

cytg111

Lifer
Mar 17, 2008
22,342
12,079
136
GPUs are fine for AI training. They suck for AI inference. That is why you want a dedicated piece of silicon: to be 100x to 1000x more power efficient and faster. On the topic of this thread, I'm sure Intel's version will be okay but not mind-blowing. But it will just be the start.

You might not think it is going to be a revolution, but a lot of tech companies do think so and are pouring billions of dollars into it. I wouldn't go so far myself as to say it is a revolution, but it will be a game changer for many specific tasks. Here is a simple write-up. https://www.forbes.com/sites/quicke...y-business-owner-should-know/?sh=4e3f940d2ab9
Inference or not, if you're gonna rock a GPT-4-like LLM on silicon, you're gonna need space for 100 trillion pre-trained parameters plus the hardware to operate it. You're saying all this is gonna fit on a side piece of silicon on an i7?

Edit: Where does it say that Copilot is gonna run off local silicon? It sounds like an OpenAI GPT derivative; I assume it's gonna be cloud-based as well.
 
Last edited:

dullard

Elite Member
May 21, 2001
24,711
3,012
126
Inference or not, if you're gonna rock a GPT-4-like LLM on silicon, you're gonna need space for 100 trillion pre-trained parameters plus the hardware to operate it. You're saying all this is gonna fit on a side piece of silicon on an i7?

Edit: Where does it say that Copilot is gonna run off local silicon? It sounds like an OpenAI GPT derivative; I assume it's gonna be cloud-based as well.
You are blurring two different issues here.

1) Speed of the initial chips. I assume at the start they will be nowhere near server speeds in the cloud. They will not be running GPT-4, and certainly not at the speeds you wish. But that will come with future generations, and even the initial ones will be much more power efficient than a GPU. I assume the first iteration will be based around smaller tasks like proper autofocus for online meetings, background blur that doesn't glitch every 30 seconds, etc.

2) Capability of AI and whether or not it is a "revolution". Whether or not it is a revolution is up to the specific person's needs and objectives. For some, these side chips will be a revolution. For me, they'll be a great time-saver here and there. Nice to have, but not a revolution. At least not yet.
 
  • Like
Reactions: Saylick

cytg111

Lifer
Mar 17, 2008
22,342
12,079
136
You are blurring two different issues here.

1) Speed of the initial chips. I assume at the start they will be nowhere near server speeds in the cloud. They will not be running GPT-4, and certainly not at the speeds you wish. But that will come with future generations, and even the initial ones will be much more power efficient than a GPU. I assume the first iteration will be based around smaller tasks like proper autofocus for online meetings, background blur that doesn't glitch every 30 seconds, etc.

2) Capability of AI and whether or not it is a "revolution". Whether or not it is a revolution is up to the specific person's needs and objectives. For some, these side chips will be a revolution. For me, they'll be a great time-saver here and there. Nice to have, but not a revolution. At least not yet.

No no. No no no no no. Goalposts. AI is definitely on the verge of revolutionizing... many things. What I am arguing is that it's just not the AI you get to cram into a phone. Unless it's utilizing decentralized services, cloud, whatever.
Sure, someday at sub-0.1nm and a fantazillion more transistors... well, anything can happen. The neural nets you can cram into a phone's silicon we've had for... 20 years; it's only with the BIG EFFEN networks and hardware of today that AI is taking off.
Size matters.
 

dullard

Elite Member
May 21, 2001
24,711
3,012
126
No no. No no no no no. Goalposts. AI is definitely on the verge of revolutionizing... many things. What I am arguing is that it's just not the AI you get to cram into a phone. Unless it's utilizing decentralized services, cloud, whatever.
Sure, someday at sub-0.1nm and a fantazillion more transistors... well, anything can happen. The neural nets you can cram into a phone's silicon we've had for... 20 years; it's only with the BIG EFFEN networks and hardware of today that AI is taking off.
Size matters.
The thing is, inference takes far, far, far fewer resources than training. Two completely different beasts. Will the first side chips be as powerful as servers? Certainly not. But they'll be powerful enough to do some great effects here and there. It'll all be gravy from there on out.

If your only definition of success is that you get 100 trillion parameters in a phone, then you are setting yourself up for the wrong events. Usable inference doesn't take even a small fraction of that. It just takes some good matrix multiplication--doing it with just a few bits (even 8-bit math can be overkill) to get the answer quickly is good enough.
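
A toy NumPy illustration of that low-bit matrix multiplication point (my own example, not any vendor's implementation): quantize the weights and activations to int8, multiply in integers, and rescale--the answer barely moves, at roughly a quarter of the memory traffic.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 256)).astype(np.float32)
activations = rng.standard_normal((1, 256)).astype(np.float32)

# Symmetric per-tensor quantization to int8.
w_scale = np.abs(weights).max() / 127.0
a_scale = np.abs(activations).max() / 127.0
w_q = np.round(weights / w_scale).astype(np.int8)
a_q = np.round(activations / a_scale).astype(np.int8)

# Integer matmul (accumulate in int32), then rescale back to float.
out_q = a_q.astype(np.int32) @ w_q.astype(np.int32)
out_approx = out_q * (w_scale * a_scale)

out_exact = activations @ weights
print(np.abs(out_approx - out_exact).max())  # small error, big savings
```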

Heck, speaking of goalposts, your 100 trillion parameters is 59x more than GPT-4 uses, and 571x more than GPT-3.5. And that is AFTER you moved the goalpost to GPT-3.5 or GPT-4 instead of just general AI tasks.
 
  • Like
Reactions: moinmoin

cytg111

Lifer
Mar 17, 2008
22,342
12,079
136
The thing is, inference takes far, far, far fewer resources than training. Two completely different beasts. Will the first side chips be as powerful as servers? Certainly not. But they'll be powerful enough to do some great effects here and there. It'll all be gravy from there on out.

If your only definition of success is that you get 100 trillion parameters in a phone, then you are setting yourself up for the wrong events. Usable inference doesn't take even a small fraction of that. It just takes some good matrix multiplication--doing it with just a few bits (even 8-bit math can be overkill) to get the answer quickly is good enough.

Heck, speaking of goalposts, your 100 trillion parameters is 59x more than GPT-4 uses, and 571x more than GPT-3.5. And that is AFTER you moved the goalpost to GPT-3.5 or GPT-4 instead of just general AI tasks.

1. That has nothing to do with goalposts; that was me hitting up a bogus article. The same point remains at 1.7T, though.

2. We have never talked about the compute necessary to train such a network. You keep pointing out inference, yet it's the only thing we've been talking about. You need the same number of neurons and parameters to load the trained network, right? So while you may not need the brute force of A100s, you do need the infrastructure to load the trained network and push data through it, and that means 1.7T weights.

3. I assumed we were talking about the "cutting edge of the AI revolution" as we sit in it, since the article in the first post talks about "dominating AI" and "taking position away from Nvidia"--paraphrasing here, the link is gone.
 

dullard

Elite Member
May 21, 2001
24,711
3,012
126
1. That has nothing to do with goalposts; that was me hitting up a bogus article. The same point remains at 1.7T, though.
But there was never a bogus article. The OP had no article. He mentioned the neural chip in Meteor Lake. If you want a basis to start from (since we still don't know the details), let's start with this AnandTech article: https://www.anandtech.com/show/1887...-lake-vpu-block-lays-out-vision-for-client-ai
2. You need the same number of neurons and parameters to load the trained network, right?
No, you absolutely don't. You can have simplified parameter sets for different tasks. If you are running software to identify a plant from a photo, you do not need that same software to include parameters that can translate Finnish into Klingon. You do not need one parameter set to rule them all like GPT-4 is attempting to do.
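
For a concrete sense of scale (a stand-in example: torchvision's stock MobileNet classifier, playing the role of a hypothetical plant identifier), a useful task-specific model can weigh in at a few million parameters, not trillions:

```python
import torch
from torchvision import models

# Small off-the-shelf classifier, standing in for a task-specific model.
model = models.mobilenet_v3_small(weights="IMAGENET1K_V1")
model.eval()

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")  # a few million, vs ~1.7T for GPT-4

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))  # stand-in for a photo
```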
3. I assumed we were talking about the "cutting edge of the AI revolution" as we sit in it, since the article in the first post talks about "dominating AI" and "taking position away from Nvidia"--paraphrasing here, the link is gone.
Ah, you made up an article out of thin air and are arguing against your imaginary article. I see now. Heck, the second post here asks for a link since the first post didn't have one. The first post to mention Nvidia is your own post.
 
Last edited:

cytg111

Lifer
Mar 17, 2008
22,342
12,079
136
But there was never a bogus article. The OP had no article. He mentioned the neural chip in Meteor Lake. If you want a basis to start from (since we still don't know the details), let's start with this AnandTech article: https://www.anandtech.com/show/1887...-lake-vpu-block-lays-out-vision-for-client-ai

No, you absolutely don't. You can have simplified parameter sets for different tasks. If you are running software to identify a plant from a photo, you do not need that same software to include parameters that can translate Finnish into Klingon. You do not need one parameter set to rule them all like GPT-4 is attempting to do.

Ah, you made up an article out of thin air and are arguing against your imaginary article. I see now. Heck, the second post here asks for a link since the first post didn't have one. The first post to mention Nvidia is your own post.
I don't think you understand what a neural network is or how it works. Anyway, I am not gaining anything from this convo, so I am out, happy life.
 

cytg111

Lifer
Mar 17, 2008
22,342
12,079
136
I don't think you have in mind what a typical phone's silicon looked like 20 years ago...
That could have been what I meant… OR I was talking about the neural networks. ANNs are old technology. We've had them for quite a while; we just didn't have the hardware to drive them like we do now.
 
Jul 27, 2020
13,143
7,810
106
Someone may one day come up with a sensory stimulation device that gives processing instructions to our cerebral cortex through tactile feedback, and we'll finally use every single neuron in our brain to encode the reality we want our eyes to see. Then we'll get tired, fall asleep, wake up in the reality we hate, and get back to working so we can create our desired reality. Rinse. Repeat.
 

A///

Diamond Member
Feb 24, 2017
4,352
3,151
136
Someone may one day come up with a sensory stimulation device that gives processing instructions to our cerebral cortex through tactile feedback, and we'll finally use every single neuron in our brain to encode the reality we want our eyes to see. Then we'll get tired, fall asleep, wake up in the reality we hate, and get back to working so we can create our desired reality. Rinse. Repeat.
Shoving a wet finger into a loose electrical socket would achieve the same result.
 

dullard

Elite Member
May 21, 2001
24,711
3,012
126
I don't think you understand what a neural network is or how it works. Anyway, I am not gaining anything from this convo, so I am out, happy life.
Great, now that you are no longer spreading false information and setting false targets for what AI must do, the rest of us can have a real conversation.

We don't currently know the exact specs of Meteor Lake's VPU. I'd suggest looking at Loihi 2 for approximate neural network capabilities on Intel 4. The goal is to START moving AI from the cloud to the client.
[Image attachment: Loihi 2 specifications]

Will it have all the capability of GPT-4? Certainly not on Meteor Lake. But it doesn't have to, either. It will also be way, way more powerful than the "phone's silicon we've had for... 20 years".

The initial uses will still be good nice-to-haves. Especially on a laptop chip, you can expect the initial effects to be much more along the lines of integration with Teams and Office. Think of simple AI tasks that don't require trillions of parameters. Tasks like low-power background blurring in Teams meetings without the visual artifacts we currently have. Or Office Help functions that actually help, rather than linking you to a webpage that is possibly slightly related to the task you actually need help with.
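
As a rough sketch of how little model the blur task actually needs (MediaPipe's off-the-shelf selfie segmentation standing in for whatever a vendor would actually ship on an NPU; the frame file is made up):

```python
import cv2
import mediapipe as mp
import numpy as np

# Small off-the-shelf person-segmentation model, a stand-in for an NPU workload.
seg = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=0)

frame = cv2.imread("webcam_frame.jpg")      # hypothetical captured frame
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
mask = seg.process(rgb).segmentation_mask   # 0..1, person vs. background

# Blur everything, then composite the sharp person back over it.
blurred = cv2.GaussianBlur(frame, (55, 55), 0)
mask3 = np.dstack([mask] * 3) > 0.5
composite = np.where(mask3, frame, blurred)
cv2.imwrite("blurred_frame.jpg", composite)
```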
 
Last edited:

cytg111

Lifer
Mar 17, 2008
22,342
12,079
136
Intel Corp: Intel's CEO, Pat Gelsinger, is on a mission to make Intel a leader in the AI space, and he ain't playing around... With the first consumer chip featuring a built-in neural processor for machine learning tasks, Meteor Lake, set to ship later this year, we're about to witness an AI revolution... So, get ready for an AI-powered shakeup that's gonna reshape the tech landscape and redefine Intel's role in the market.

Great, now that you are no longer spreading false information and setting false targets for what AI must do, the rest of us can have a real conversation.

We don't currently know the exact specs of Meteor Lake's VPU. I'd suggest looking at Loihi 2 for approximate neural network capabilities on Intel 4. The goal is to START moving AI from the cloud to the client.
[Image attachment: Loihi 2 specifications]

Will it have all the capability of GPT-4? Certainly not on Meteor Lake. But it doesn't have to, either. It will also be way, way more powerful than the "phone's silicon we've had for... 20 years".

The initial uses will still be good nice-to-haves. Especially on a laptop chip, you can expect the initial effects to be much more along the lines of integration with Teams and Office. Think of simple AI tasks that don't require trillions of parameters. Tasks like low-power background blurring in Teams meetings without the visual artifacts we currently have. Or Office Help functions that actually help, rather than linking you to a webpage that is possibly slightly related to the task you actually need help with.

1. I suspect the OP got his details from here:
[Image: screenshot of the article the OP appears to have quoted]

The full article here


That's what I found before; somehow I forgot the detective work and assumed it was linked. It was not. It is now.

2. The OP states
With the first consumer chip featuring a built-in neural processor for machine learning tasks, Meteor Lake, set to ship later this year, we're about to witness an AI revolution
And here you are arguing that this revolution is driven by better Microsoft Teams background blurring.
Are you feeling OK, man?

3. The friggin' same article says something along these lines as well:

"On the one hand, of course Intel’s CEO would say this. It’s Nvidia, not Intel, which makes the kind of chips that power the AI cloud. Nvidia’s the one that rocketed to a $1 trillion market cap because it sold the right kind of shovels for the AI gold rush. Intel needs to find its own way in."


Summa summarum: Boom, you're roasted.
 

dullard

Elite Member
May 21, 2001
24,711
3,012
126
And here you are arguing that this revolution is driven by better Microsoft Teams background blurring.
Are you feeling ok man?

Summa summarum: Boom, you're roasted.
Let me try again: what do these two quotes from me state?
I wouldn't go that far myself to say it is a revolution
Nice to have, but not a revolution. At least not yet.
Not a revolution. If me stating it is NOT a revolution means I am roasted, then what does that make you (other than the obvious: illiterate toward my posts and hallucinating links)? I already caught you lying once, and now, after stating you won't come back, you come back. So you lied twice on the first page of the thread.

This is the first step: better AI than most of what people have outside of servers. Worse than servers. Still useful. Bigger is better with AI. But you don't have to have trillions of parameters to do useful AI.
 
Last edited:
  • Love
Reactions: cytg111