Is broasted better?
Who the heck says roasted anymore?
You've been playing with your banana bunches again, haven't you?
Is broasted better?
It's not a revolution but it's a gamechanger and you link up
Let me try again, what do these two quotes from me state?
Not a revolution. If me stating it is NOT a revolution means I am roasted, then what does that make you (other than the obvious of being illiterate towards my posts and hallucinating links)? I already caught you lying once, and now after stating you won't come back, you come back. So you lied twice in the first page of the thread.
This is the first step: better AI than most of what people have outside of servers. Worse than servers. Still useful. Bigger is better with AI. But you don't have to have trillions of parameters to do useful AI.
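To make "you don't have to have trillions of parameters to do useful AI" concrete, here is a minimal sketch, assuming Python with the Hugging Face transformers library installed; the distilbert model named below (~66M parameters) is just a stand-in for any compact model that runs fine on a laptop.

Code:
# Minimal sketch: useful AI from a ~66M-parameter model running locally, no server farm.
# Assumes `pip install transformers torch` has already been done.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # small, widely used model
)

# Output is a list like [{'label': 'POSITIVE' or 'NEGATIVE', 'score': ...}]
print(classifier("Meteor Lake shipping with a built-in NPU sounds genuinely useful."))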
Let's start with this lie:
Also. Point out where I lied?
And for reference I'll copy the first post, which says nothing about dominating AI, says nothing about nVidia, and has no link (never had one):
The first post talks about "Dominating AI" and "taking position away from nVidia" - paraphrasing here, the link is gone.
Intel Corp: Intel's CEO, Pat Gelsinger, is on a mission to make Intel a leader in the AI space, and he ain't playing around... With the first consumer chip featuring a built-in neural processor for machine learning tasks, Meteor Lake, set to ship later this year, we're about to witness an AI revolution... So, get ready for an AI-powered shakeup that's gonna reshape the tech landscape and redefine Intel's role in the market.
All of the new effects: real-time language translation in your zoom calls, real-time transcription, automation inferencing, relevance portraying, generated content and gaming environments, real-time creator environments through Adobe and others that are doing those as part of the client, new productivity tools — being able to do local legal brief generations on a client, one after the other, right? Across every aspect of consumer, developer and enterprise efficiency use cases, we see that there’s going to be a raft of AI enablement and those will be client-centered. Those will also be at the edge.
No. That is not a lie. I am man, I am fallible, and I openly admitted to that. I subsequently sourced the OP for him (same as I did originally and got shit mixed up, thought it was part of his post, it was not).
Let's start with this lie:
And for reference I'll copy the first post, which says nothing about dominating AI, says nothing about nVidia, and has no link (never had one):
Sure you later came up with an article. However your own article says nothing about Dominating AI. It mentions moving AI to the edge for use cases that don't work well on the cloud. Here are use cases mentioned in your own link for CLIENT chips (the same use cases that you appear to think can only be done on the cloud):
It's just, does the ChatGPT-4 that we all use really execute its 1.7T topology on a buttload of A100s for each prompt we fire at it?
I'll be happy if you admit that you didn't know about the Transformer NN arch coz I didn't either.
ChatGPT works so fast because it uses a technique called Transformer. Transformer is a neural network architecture specifically designed for natural language processing tasks. It can learn long-range dependencies between words, allowing it to generate fluent and coherent text.
Cool, it doesn't really quantify what a transformer is, its size, etc., in relation to ChatGPT. And fast in relation to what alternative? I admit, I find myself severely behind the curve here, having a back-to-school moment to be frank. Not a bad time or bad topic perhaps.
How does chat GPT work so fast?
Answer (1 of 7): GPT, or Generative Pre-training Transformer, is a type of language model that uses machine learning techniques to generate text that is similar to human-written text. It works by predicting the next word in a sequence of words, based on the context of the previous words. One rea...
www.quora.com
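For anyone else having a back-to-school moment, here is a toy sketch of the core mechanism, scaled dot-product self-attention with a causal mask, in plain numpy. This is not ChatGPT's actual code; the weights are random stand-ins. It just shows the textbook idea: every token weighs every earlier token (the "long-range dependencies" above), and generation is "predict the next token, append it, repeat".

Code:
# Toy causal self-attention in numpy. Illustrative only; weights are random, not a real model.
import numpy as np

rng = np.random.default_rng(0)
T, d = 5, 16                      # 5 tokens, 16-dim embeddings (tiny on purpose)
x = rng.standard_normal((T, d))   # token embeddings for the prompt

Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

scores = Q @ K.T / np.sqrt(d)     # how strongly each token attends to every other token
mask = np.triu(np.ones((T, T), dtype=bool), k=1)
scores[mask] = -np.inf            # causal mask: a token may only look at earlier tokens

weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the allowed positions

context = weights @ V             # each row mixes information from all earlier tokens
print(context.shape)              # (5, 16): one context vector per token
# In a real model the last row feeds a classifier over the vocabulary to predict
# the next token; the model appends that token and repeats to generate text.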
I'll be happy if you admit that you didn't know about the Transformer NN arch coz I didn't either.
1) I don't know specifically about GPT-4's internal workings, but it would do better without using 32-bit parameters. 8-bits work better since they are faster, use less memory, and less power (thereby letting you use more parameters). https://spectrum.ieee.org/floating-point-numbers-posits-processor
On a side note, I do have some data points that don't line up in my head though. I have worked with neural nets before they became famous, earliest 15 years ago, before there were popular APIs at the ready and you coded your own shit. Basic stuff, like four layers: 2 hidden, 1 input, 1 output, backpropagation and sigmoid activation, feedforward (have also used APIs/libs that came later). The thing that bugs me a little is that I began doing the math on the 1.7T parameters @ 32 bits...
And that is a lot of memory. It is a s-ton of memory.
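Putting rough numbers on that (back-of-the-envelope only, and assuming the rumored 1.7T figure is even right):

Code:
# Back-of-the-envelope memory for a 1.7T-parameter model: weights only,
# no activations or KV cache. The 1.7T figure is a rumor from the thread, not confirmed.
params = 1.7e12

for bits in (32, 16, 8, 4):
    terabytes = params * bits / 8 / 1e12
    print(f"{bits:>2}-bit weights: {terabytes:5.2f} TB")

# 32-bit ~6.8 TB, 16-bit ~3.4 TB, 8-bit ~1.7 TB, 4-bit ~0.85 TB.
# An 80 GB A100 holds 0.08 TB, so even at 8 bits you'd need roughly 22 of them
# just to hold the weights, hence the "buttload of A100s" question.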
Everything I know about neural nets says that you need to load the final trained network into a topology similar to the one it was trained on. So yea, it's faster cause it's just one forward pass and you have a result, not millions of iterative training passes.
It's just, does the ChatGPT-4 that we all use really execute its 1.7T topology on a buttload of A100s for each prompt we fire at it? The alternative is that someone invented neural algebra, and that's impossible.
See, I sort of inferred that it must be like this, cause the alternative would be something like N vs NP...
1) I don't know specifically about GPT-4's internal workings, but it would do better without using 32-bit parameters. 8-bits work better since they are faster, use less memory, and less power (thereby letting you use more parameters). https://spectrum.ieee.org/floating-point-numbers-posits-processor
2) GPT-4 isn't one large model. It is a group of smaller models. It doesn't always run all models. That is, it can be broken up into smaller subsections as needed. https://pub.towardsai.net/gpt-4-8-models-in-one-the-secret-is-out-e3d16fd1eee0
3) The neural math is fascinating. It doesn't all run; it only needs to focus on specific parts as needed (see the toy sketch below).
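Here is a toy sketch of points 1-3 together, hedged heavily: GPT-4's real internals are not public, the "group of smaller models" claim comes from the linked article, and every number below is invented. It only shows int8-style weight quantization plus a gate that routes a token to 2 of 8 small experts, so most of the parameters sit idle for any given prompt.

Code:
# Toy sketch of points 1-3: int8-style quantization + top-k mixture-of-experts routing.
# Purely illustrative; architecture and numbers are made up, not GPT-4's actual design.
import numpy as np

rng = np.random.default_rng(1)
d, n_experts, top_k = 32, 8, 2          # tiny dimensions for the example

# Point 2: the "model" is really several smaller expert networks.
experts_fp32 = [rng.standard_normal((d, d)).astype(np.float32) for _ in range(n_experts)]

# Point 1: store each expert's weights as int8 plus one scale factor (4x smaller than fp32).
def quantize(w):
    scale = np.abs(w).max() / 127.0
    return (w / scale).round().astype(np.int8), scale

experts_int8 = [quantize(w) for w in experts_fp32]

gate = rng.standard_normal((d, n_experts))   # router that scores experts for each token

def forward(x):
    # Point 3: only the top-k most relevant experts run; the rest are never touched.
    chosen = np.argsort(x @ gate)[-top_k:]
    out = np.zeros(d)
    for i in chosen:
        q, scale = experts_int8[i]
        out += x @ (q.astype(np.float32) * scale)   # dequantize on the fly
    return out / top_k, chosen

token = rng.standard_normal(d)
y, used = forward(token)
print("experts actually run for this token:", sorted(used.tolist()))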
I did? OK, I'll take it even if you may be mistaken.
And as igor said, it can run in parallel with multiple runs at the same time.
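For what it's worth, the "multiple runs at the same time" part is just batching: the same loaded weights multiply a whole stack of prompts in one pass. A rough numpy sketch with toy sizes, nothing like real serving code:

Code:
# Toy sketch of batched inference: one set of weights, many prompts per forward pass.
import numpy as np

rng = np.random.default_rng(2)
d_in, d_out, batch = 64, 64, 16          # 16 "prompts" handled together

W = rng.standard_normal((d_in, d_out))   # shared model weights, loaded once
prompts = rng.standard_normal((batch, d_in))

outputs = prompts @ W                    # one matrix multiply serves all 16 prompts
print(outputs.shape)                     # (16, 64): one result row per prompt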
I'll admit my mistake now. Your link said it, not you. But you can still take the credit--there is plenty of credit to go around and plenty of learning to do when it comes to AI.
I did? OK, I'll take it even if you may be mistaken.
Do not get your hopes too high; silicon AI on the CPU is only for trivial tasks. An example is a sophisticated DSP for unusual sound processing: an Intel memo gave as an example the possibility of making your barking dog shut up while you're performing a video conference... (A crude sketch of that kind of audio filtering follows below.)
Point being, I don't see what a piece of side silicon on a CPU is gonna achieve in terms of revolutions.
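For reference, the barking-dog trick is background-noise suppression. Whatever Intel actually ships would use a trained model on the NPU; the classic-DSP baseline below (a crude energy gate that only drops quiet frames) is just to show the kind of audio filtering involved, and why a loud bark needs something smarter than this to remove.

Code:
# Crude energy-gate baseline for background noise, not Intel's actual denoiser.
# It zeroes quiet frames (hiss) and leaves loud ones (speech, or a loud bark) alone,
# which is exactly why the real feature needs a trained model rather than plain DSP.
import numpy as np

def noise_gate(audio, sample_rate=16000, frame_ms=20, threshold=0.02):
    frame_len = int(sample_rate * frame_ms / 1000)
    out = audio.copy()
    for start in range(0, len(audio) - frame_len + 1, frame_len):
        frame = audio[start:start + frame_len]
        if np.sqrt(np.mean(frame ** 2)) < threshold:   # RMS below threshold -> treat as noise
            out[start:start + frame_len] = 0.0
    return out

rng = np.random.default_rng(3)
clip = 0.005 * rng.standard_normal(16000)          # 1 second of quiet hiss
clip[8000:8400] += 0.5 * rng.standard_normal(400)  # a brief loud burst (the "bark")
cleaned = noise_gate(clip)                         # hiss is gated out; the burst survives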
Question - By 2030, we will be buying massive NPUs with a CPU and GPU attached to it.
Today, NPUs such as Apple's neural engine take less space than the CPU or GPU in a SoC. By 2030, I predict that we won't be buying "CPUs". We will all be buying NPUs with a CPU and a GPU attached to it. NPUs will become the new CPUs. More applications will start to make massive use of AI...
forums.anandtech.com
That's why I made this prediction.
People here don't agree though.
Today, a SoC is mainly a GPU with a CPU attached to it. The GPU usually takes up significantly more space. Look at the die shot of an Apple Silicon chip.
In the future, the NPUs will use most of the space in a SoC. The traditional CPU is less and less relevant.
I was at one of those weeklong conferences Intel had at major events trying to promote this pile of junk. I'd like to say most of us in the crowd knew it was dead in the water. I only went through my employer then because Intel sales people would take your company out for lunch, dinner and drinks to get you to sign a contract with them. They were hard up for something good then while AMD was slashing away at them, even though Intel was giving them a good dressing down through backroom deals with major ODMs.
Itanium