Question Intel Corp CEO Pat Gelsinger on AI revolution


A///

Diamond Member
Feb 24, 2017
4,352
3,151
136
who the heck says roasted anymore? It isn't the 90s, you geezer. I haven't said that word since then. It's uncool nowadays. Hasn't been damn cool in over 20 years.
 

cytg111

Lifer
Mar 17, 2008
22,342
12,079
136
Let me try again: what do these two quotes from me state?


Not a revolution. If my stating that it is NOT a revolution means I am roasted, then what does that make you (other than the obvious of being illiterate towards my posts and hallucinating links)? I already caught you lying once, and now, after stating you won't come back, you come back. So you lied twice in the first page of the thread.

This is the first step: better AI than most of what people have outside of servers. Worse than servers. Still useful. Bigger is better with AI. But you don't have to have trillions of parameters to do useful AI.
It's not a revolution but it's a gamechanger, and you link up


to underscore that it *is* a revolution, though that has nothing to do with the on-die silicon. Copilot, if I understand it correctly, is an LLM like ChatGPT (which makes sense, given OpenAI and MS).

So you started talking about LLMs and referring to them as a revolution, and you are surprised I start talking about ChatGPT.

Dude. :):):) ... You funny.

Also. Point out where I lied?
 

dullard

Elite Member
May 21, 2001
24,711
3,012
126
Also. Point out where I lied?
Let's start with this lie:
the first post talks about "Dominating AI" and "taking position away from nVidia" - paraphrasing here, the link is gone.
And for reference I'll copy the first post, which says nothing about dominating AI, says nothing about nVidia, and has no link (never had one):
Intel Corp: Intel's CEO, Pat Gelsinger, is on a mission to make Intel a leader in the AI space, and he ain't playing around...With the first consumer chip featuring a built-in neural processor for machine learning tasks, Meteor Lake, set to ship later this year, we're about to witness an AI revolution...So, get ready for an AI-powered shakeup that's gonna reshape the tech landscape and redefine Intel's role in the market.

Sure, you later came up with an article. However, your own article says nothing about Dominating AI. It mentions moving AI to the edge for use cases that don't work well on the cloud. Here are use cases mentioned in your own link for CLIENT chips (the same use cases that you appear to think can only be done on the cloud):
All of the new effects: real-time language translation in your zoom calls, real-time transcription, automation inferencing, relevance portraying, generated content and gaming environments, real-time creator environments through Adobe and others that are doing those as part of the client, new productivity tools — being able to do local legal brief generations on a clients, one after the other, right? Across every aspect of consumer, developer and enterprise efficiency use cases, we see that there’s going to be a raft of AI enablement and those will be client-centered. Those will also be at the edge.
 
Last edited:
  • Like
Reactions: Mopetar and IGBT

cytg111

Lifer
Mar 17, 2008
22,342
12,079
136
Let's start with this lie:

And for reference I'll copy the first post, which says nothing about dominating AI, says nothing about nVidia, and has no link (never had one):


Sure, you later came up with an article. However, your own article says nothing about Dominating AI. It mentions moving AI to the edge for use cases that don't work well on the cloud. Here are use cases mentioned in your own link for CLIENT chips (the same use cases that you appear to think can only be done on the cloud):
No. That is not a lie. I am a man, I am fallible, and I openly admitted to that. I subsequently sourced the OP for him (same as I did originally, when I got shit mixed up and thought it was part of his post; it was not).

Now find a real lie. Accusing someone of lying is heavy business, man. Can you back it up? :)
 

cytg111

Lifer
Mar 17, 2008
22,342
12,079
136
On a side note, I do have some data points that don't line up in my head, though. I have worked with neural nets before they became famous, earliest 15 years ago, before there were popular APIs at the ready and you coded your own shit. Basic stuff: four layers (2 hidden, 1 input, 1 output), backpropagation, sigmoid activation, feedforward (I have also used the APIs/libs that came later). The thing that bugs me a little is that I began doing the math on the 1.7T parameters @ 32 bits...
And that is a lot of memory. It is a s-ton of memory.
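For reference, a quick back-of-the-envelope sketch of that memory math (taking the rumored 1.7T parameter count at face value; it is not an official OpenAI figure):

```python
# Rough memory estimate for a dense 1.7T-parameter model stored in FP32.
# The 1.7T figure is a rumor, not an official OpenAI number.

params = 1.7e12           # parameter count
bytes_per_param = 4       # FP32 = 32 bits = 4 bytes

weight_bytes = params * bytes_per_param
print(f"FP32 weights alone: {weight_bytes / 1e12:.1f} TB")   # ~6.8 TB

# An 80 GB A100 holds only a slice of that, before even counting
# activations or the KV cache used during generation.
a100_hbm = 80e9
print(f"80 GB A100s needed just for the weights: {weight_bytes / a100_hbm:.0f}")   # ~85
```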

Everything I know about neural nets says that you need to load the final trained network into a topology similar to the one it was trained on. So yeah, it's faster because it's just one forward pass and you have a result, not millions of iterative training passes.

It's just, does the ChatGPT-4 that we all use really execute a 1.7T topology on a buttload of A100s for each prompt we fire at it? The alternative is that someone invented neural algebra, and that's impossible.

edit: This is a sidetrack, a different thread perhaps.
 
igor_kavinski

Jul 27, 2020
13,143
7,810
106
It's just, does the ChatGPT-4 that we all use really execute a 1.7T topology on a buttload of A100s for each prompt we fire at it?
ChatGPT works so fast because it uses a technique called the Transformer. The Transformer is a neural network architecture specifically designed for natural language processing tasks. It can learn long-range dependencies between words, allowing it to generate fluent and coherent text.
I'll be happy if you admit that you didn't know about the Transformer NN arch coz I didn't either :p
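For anyone wanting to go one level deeper, the heart of the Transformer is scaled dot-product attention. A minimal NumPy sketch of the generic mechanism (toy dimensions, nothing specific to ChatGPT's actual weights):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: every token scores every other token,
    which is how long-range dependencies are captured and why the whole
    sequence can be processed in parallel instead of one step at a time."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (tokens, tokens) similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted mix of value vectors

# Toy example: 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```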
 
  • Like
Reactions: cytg111

cytg111

Lifer
Mar 17, 2008
22,342
12,079
136

I'll be happy if you admit that you didn't know about the Transformer NN arch coz I didn't either :p
Cool, it doesn't really quantify what a Transformer is, its size, etc., in relation to ChatGPT. And fast in relation to what alternative? I admit, I find myself severely behind the curve here, having a back-to-school moment, to be frank. Not a bad time or bad topic, perhaps.
(Hey, I love eating crow if it means I gain knowledge).
 
  • Like
Reactions: igor_kavinski

dullard

Elite Member
May 21, 2001
24,711
3,012
126
On a side note, I do have some data points that don't line up in my head, though. I have worked with neural nets before they became famous, earliest 15 years ago, before there were popular APIs at the ready and you coded your own shit. Basic stuff: four layers (2 hidden, 1 input, 1 output), backpropagation, sigmoid activation, feedforward (I have also used the APIs/libs that came later). The thing that bugs me a little is that I began doing the math on the 1.7T parameters @ 32 bits...
And that is a lot of memory. It is a s-ton of memory.

Everything I know about neural nets says that you need to load the final trained network into a topology similar to the one it was trained on. So yeah, it's faster because it's just one forward pass and you have a result, not millions of iterative training passes.

It's just, does the ChatGPT-4 that we all use really execute a 1.7T topology on a buttload of A100s for each prompt we fire at it? The alternative is that someone invented neural algebra, and that's impossible.
1) I don't know specifically about GPT-4's internal workings, but it would do better without using 32-bit parameters. 8-bit parameters work better since they are faster, use less memory, and use less power (thereby letting you use more parameters). https://spectrum.ieee.org/floating-point-numbers-posits-processor

2) GPT-4 isn't one large model. It is a group of smaller models. It doesn't always run all models. That is, it can be broken up into smaller subsections as needed. https://pub.towardsai.net/gpt-4-8-models-in-one-the-secret-is-out-e3d16fd1eee0

3) The neural math is fascinating. A model does not need to run in its entirety; it only needs to focus on the specific parts that are needed. And as igor said, it can run in parallel, serving multiple requests at the same time. (A rough sketch of this kind of routing is below.)
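The "8 models in one" claim from that towardsai link describes a mixture-of-experts style setup, and it is a rumor rather than anything OpenAI has confirmed. As a rough illustration of the routing idea behind points 2 and 3 (all sizes and names here are made up for the sketch):

```python
import numpy as np

def moe_forward(x, experts, gate_w, top_k=2):
    """Mixture-of-experts style routing: a small gating network scores every
    expert, but only the top_k best-scoring experts actually run for this
    input, so most of the parameters sit idle on any single request."""
    logits = x @ gate_w                                   # one score per expert
    chosen = np.argsort(logits)[-top_k:]                  # indices of the k best experts
    probs = np.exp(logits[chosen])
    probs /= probs.sum()                                  # softmax over the chosen experts
    return sum(p * experts[i](x) for p, i in zip(probs, chosen))

# Toy setup: 8 "experts", each just a random linear layer (nothing like GPT-4's real experts)
rng = np.random.default_rng(1)
d = 16
experts = [(lambda W: (lambda v: v @ W))(rng.standard_normal((d, d))) for _ in range(8)]
gate_w = rng.standard_normal((d, 8))
x = rng.standard_normal(d)
print(moe_forward(x, experts, gate_w).shape)              # (16,)
```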
 
  • Like
Reactions: cytg111

cytg111

Lifer
Mar 17, 2008
22,342
12,079
136
1) I don't know specifically about GPT-4's internal workings, but it would do better without using 32-bit parameters. 8-bit parameters work better since they are faster, use less memory, and use less power (thereby letting you use more parameters). https://spectrum.ieee.org/floating-point-numbers-posits-processor

2) GPT-4 isn't one large model. It is a group of smaller models. It doesn't always run all models. That is, it can be broken up into smaller subsections as needed. https://pub.towardsai.net/gpt-4-8-models-in-one-the-secret-is-out-e3d16fd1eee0

3) The neural math is fascinating. It doesn't all run, it only needs to focus on specific parts as needed.
See, I sort of inferred that it must be like this, because the alternative would be something like P vs NP...
Dude that link, thanks for that.

edit: So god damn... ChatGPT-4 is like 8 daisy-chained 3.5s. Oh man. That scales.

(And if 8 bits gets the job done... is that in the A100, though?)
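On the 8-bit question: the A100's Tensor Cores do support INT8 math, and a common approach is to quantize trained FP32/FP16 weights down to int8 after training. A minimal sketch of symmetric per-tensor quantization (illustrative only; real deployments typically use per-channel scales and calibration data):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization: store int8 weights plus one FP32
    scale factor, cutting weight memory roughly 4x versus FP32."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, np.float32(scale)

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(2).standard_normal((1024, 1024)).astype(np.float32)
q, scale = quantize_int8(w)
print(f"{w.nbytes / q.nbytes:.0f}x smaller")                          # 4x
print(f"max rounding error: {np.abs(w - dequantize(q, scale)).max():.4f}")
```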
 
  • Haha
Reactions: igor_kavinski

dullard

Elite Member
May 21, 2001
24,711
3,012
126
I did? OK, I'll take it even if you may be mistaken :D
I'll admit my mistake now. Your link said it, not you. But you can still take the credit--there is plenty of credit to go around and plenty of learning to do when it comes to AI.

"The Transformer is also able to parallelize its computations, which means that it can process multiple requests at the same time. This allows ChatGPT to respond to user queries very quickly."
 
  • Like
Reactions: igor_kavinski

A///

Diamond Member
Feb 24, 2017
4,352
3,151
136
I prefer the kind of AI that speeds tasks up, not the kind of Microsoft Office stuff igor is obsessed with; he's asked me about it in prior threads. Some people are rightly pissy about Intel including IP to speed up certain tasks on processors, where it may waste valuable die space according to some, but AMD plans to do the same. They all do. What then? What will you whine about then?
 

cytg111

Lifer
Mar 17, 2008
22,342
12,079
136
They probably have to split it up into 8s because of hardware limitations... Which means that it is current hardware that is the bottleneck for achieving Skynet (sorry), not neural network design or training algos.

The software is ahead; the hardware needs to catch up.

Semi-scary thought.
 

A///

Diamond Member
Feb 24, 2017
4,352
3,151
136
On the other hand, @cytg111, if AI can one day diagnose or test human functions without excessive and intrusive testing, that is a win for humanity. Men and women would rejoice because they would no longer need to be prodded, poked or stretched to examine something.
 

Abwx

Lifer
Apr 2, 2011
10,581
3,053
136
Do not get your hopes too high; silicon AI on the CPU is only for trivial tasks. An example is a sophisticated DSP for unusual sound processing: an Intel memo gave as an example the possibility of making your barking dog shut up while you're on a video conference...
 
  • Haha
Reactions: Markfw

A///

Diamond Member
Feb 24, 2017
4,352
3,151
136
Already on their silicon. Many solutions like it exist, both hardware and dedicated offloaded software based. AMD's got it on their 6000 CPU series and 6000-and-above graphics cards. I imagine if they hadn't included AVX-512 on their CPUs it would have been available on the CPU, or they plan on adding the same "useless" features on the basis of their acquisition of Xilinx.
 

Thunder 57

Platinum Member
Aug 19, 2007
2,501
3,403
136
Do not get your hopes too high; silicon AI on the CPU is only for trivial tasks. An example is a sophisticated DSP for unusual sound processing: an Intel memo gave as an example the possibility of making your barking dog shut up while you're on a video conference...

Couldn't you, you know, put the dog in another room? Use a dog whistle? I was hoping AI could do better. I am semi-joking. I know it is early. And we must beware of accidentally creating Skynet.
 

senttoschool

Golden Member
Jan 30, 2010
1,687
370
136
Point being, I don't see what a piece of side silicon on a CPU is gonna achieve in terms of revolutions.

That's why I made this prediction.

People here don't agree though.

Today, a SoC is mainly a GPU with a CPU attached to it. The GPU usually takes up significantly more space. Look at a die shot of an Apple Silicon chip.

In the future, the NPUs will use most of the space in a SoC. The traditional CPU is less and less relevant.
 
Last edited:

cytg111

Lifer
Mar 17, 2008
22,342
12,079
136

That's why I made this prediction.

People here don't agree though.

Today, a SoC is mainly a GPU with a CPU attached to it. The GPU usually takes up significantly more space. Look at a die shot of an Apple Silicon chip.

In the future, the NPUs will use most of the space in a SoC. The traditional CPU is less and less relevant.

I get it, but I don't see it. 2040 maybe? 2050? But still, the cloud-based stuff will have a huuuuge advantage over anything running locally. If you could get anything close to GPT-3 running on local hardware, maybe you could use it for NPCs in gaming or something that requires low latency?

I do like the idea of something running locally, because the alternative is to have a cloud AI agent know about all my private stuff, so if the privacy-oriented stuff could be offloaded from the cloud, that would be great (thinking of something like Copilot, etc.).
 

FangBLade

Member
Apr 13, 2022
199
395
96
Considering the state Pat has brought the company to, to the extent that even little AMD is outperforming them in all areas, maybe it would be better for Intel to be taken over by artificial intelligence, lol.
 

A///

Diamond Member
Feb 24, 2017
4,352
3,151
136
I was at one of those weeklong conferences Intel had at major events trying to promote this pile of junk. I'd like to say most of us in the crowd knew it was dead in the water. I only went through my employer then, because Intel sales people would take your company out for lunch, dinner and drinks to get you to sign a contract with them. They were hard up for something good then, while AMD was slashing away at them, even though Intel were giving them a good dressing down through backroom deals with major ODMs.

I remember one meal coming out to a few thousand dollars when all was said and done, for a group of 13 people. That included booze but not the tip. Gained a few kilos on that trip. Outside of the free room and board, meals and alcohol we got out of Intel, back then it was one of their most pivotal points for survival. They'd sunk so much money into Itanium that they really needed that win. Short of buying up prostitutes for their clients to go with the hooch, their situation sucked, even with all those backroom deals I mentioned with the major ODMs or "smaller" companies that relied on fast processors to accomplish their work. Had Intel not messed with AMD's success then and let things go, the state of processors would be different IMO. We'd still have gotten Core/2 Duo, but I think AMD tripping over themselves later on may not have occurred, because they'd have had the finances to sustain their R&D. The ATI purchase was them overpaying, and if Jensen had not had the ego of the sun, along with Hector Ruiz being a massive dummy with an ego the size of the sun, then AMD's original intent of purchasing Nvidia may have gone through and AMD would be a force to be reckoned with today.

The natural progression would have been Ruiz kicked out by the board, Jensen instated, and Su later coming on in some capacity. Although, if we take what we know now and Su went to Intel, despite Intel being known for not hiring outsiders, it would be an interesting ensemble. Intel not hiring outsiders for high-up positions may be a narrow-sighted issue of theirs.
 
Last edited:
  • Like
Reactions: Thibsie