Info Crazy analog CPU using flash cells to do Matrix Multiplication for AI processing

igor_kavinski

Diamond Member
Jul 27, 2020
M1076 Analog Matrix Processor – Mythic (mythic-ai.com)

Array of 76 AMP tiles, each with a Mythic Analog Compute Engine (Mythic ACE™)
Capacity for up to 80M weights - able to run single or multiple complex DNNs entirely on-chip
On-chip DNN model execution and weight parameter storage with no external DRAM
Deterministic execution of AI models for predictable performance and power

Execution of models at higher resolution and lower latency for better results
Support for INT4, INT8, and INT16 operations
4-lane PCIe 2.1 interface with up to 2GB/s of bandwidth for inference processing
Available I/Os – 10 GPIOs, QSPI, I2C, and UARTs
19mm x 15.5mm BGA package
Typical power consumption of 3-4W running complex models
Is there any other AI chip that can boast these features?
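For anyone wondering how a flash cell can "do" a multiply: the weight is stored as the cell's conductance, the input arrives as a voltage, Ohm's law produces the product as a current, and currents meeting on a shared bit line sum for free. A rough numerical sketch of the idea (all values made up, not Mythic's actual design):

```python
import numpy as np

# Hypothetical analog matrix-vector multiply: weights live as flash-cell
# conductances G (siemens), inputs are applied as voltages V (volts).
# Each cell passes current I = G * V (Ohm's law); currents that meet on
# a shared bit line add up automatically (Kirchhoff's current law).
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-5, size=(4, 8))  # 4 outputs x 8 inputs
V = rng.uniform(0.0, 1.0, size=8)

I_out = G @ V  # bit-line currents = the matrix-vector product

# Same result written out as per-cell Ohm's-law products summed per line:
I_manual = np.array([sum(G[i, j] * V[j] for j in range(8)) for i in range(4)])
assert np.allclose(I_out, I_manual)
```

The whole multiply-accumulate happens in one physical step, which is where the speed and power numbers come from.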
 

amd6502

Senior member
Apr 21, 2017
We're Building Computers Wrong - YouTube

OMG! Mind blown. Consumes just 3-4W.

Highly recommended. Has captions.

Disclaimer: Takes a bit of time to get to the good stuff. Some patience required.
Yup, little doubt this has got to be the future.

You can always do rounding to a digital answer if 100% consistency is needed, although you end up dropping a lot of accuracy.
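To make the rounding idea concrete, here's a toy sketch (all numbers invented): two noisy "analog" readouts of the same value disagree slightly, but snapping both to a coarse digital grid makes them agree exactly, at the cost of resolution.

```python
import numpy as np

# Two noisy analog readouts of the same true value...
rng = np.random.default_rng(1)
true_value = 0.74
run_a = true_value + rng.normal(0, 0.002)
run_b = true_value + rng.normal(0, 0.002)
assert run_a != run_b                # analog runs never repeat exactly

# ...snapped to a 4-bit digital grid for 100% run-to-run consistency.
step = 1 / 16
digital_a = round(run_a / step) * step
digital_b = round(run_b / step) * step
assert digital_a == digital_b        # consistent now
assert abs(digital_a - true_value) <= step / 2   # but coarser than before
```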

Multiplications are lightning quick and low power. (What about division and inverse?)
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
Yup, little doubt this has got to be the future.

You can always do rounding to a digital answer if 100% consistency is needed, although you end up dropping a lot of accuracy.
Make no mistake that analog circuits have been a dead end and still are for the most part ...

How can analog circuits be the future if they have no programming languages, or even the capability to represent fundamental Boolean logic primitives like NOT, (N)AND, (N)OR, and X(N)OR that we take for granted all of the time with digital circuits?

Analog circuit design has far more gatekeeping involved, with subjects like differential equations and electromagnetic physics, while just learning basic Boolean logic can let you create digital circuits that are Turing complete ...
 

amd6502

Senior member
Apr 21, 2017
Have you watched the video Igor linked, or at least the part about multiplication and addition? Neural networks use massive linear algebra (i.e. multiplication and addition), as do GPU graphics. So both GPU and AI silicon would be ripe to take advantage of analog arithmetic operations.

I think the future would continue to be digital, but with analog subcomponents that do some crazy fast and low power math. Hybrid computing, framed in the same digital structure as we're used to, but with internal analog accelerators for fast math. You could even make these transparent so a programmer might never even know the details of analog components being underneath.
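A sketch of what that transparency could look like in software (the names and the noise model are mine, purely illustrative): the layer code calls an ordinary matmul and never knows whether an exact digital unit or a noisy analog one serviced it.

```python
import numpy as np

def matmul_digital(w, x):
    return w @ x  # exact digital path

def matmul_analog(w, x, rng=np.random.default_rng(2)):
    exact = w @ x  # pretend-analog path: right answer plus a little noise
    return exact + rng.normal(0, 1e-3 * np.abs(exact).max(), exact.shape)

def dense_layer(w, x, backend=matmul_digital):
    return np.maximum(backend(w, x), 0.0)  # ReLU; backend is invisible here

w = np.arange(6, dtype=float).reshape(2, 3)
x = np.ones(3)
y_digital = dense_layer(w, x)                          # [3., 12.]
y_analog = dense_layer(w, x, backend=matmul_analog)
assert np.allclose(y_digital, y_analog, atol=0.1)      # close, not identical
```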
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
Have you watched the video Igor linked, or at least the part about multiplication and addition? Neural networks use massive linear algebra (i.e. multiplication and addition), as do GPU graphics. So both GPU and AI silicon would be ripe to take advantage of analog arithmetic operations.

I think the future would continue to be digital, but with analog subcomponents that do some crazy fast and low power math. Hybrid computing, framed in the same digital structure as we're used to, but with internal analog accelerators for fast math. You could even make these transparent so a programmer might never even know the details of analog components being underneath.
Only if you think the future of graphics or other domains of computing won't rely on the IEEE 754 standard or deterministic logic, which absolutely isn't going to be true anytime soon ...

Much of graphics, and other fields of computing that use linear algebra, also needs exact behaviour for calculations or complementary deterministic logic as well. Being transparent could not be farther from the truth. If we take DLSS as an example of matrix hardware acceleration in real-time graphics applications, developers still had to do tons of work integrating the functionality, despite Nvidia touting it as some sort of black box, with many more hours spent optimizing their neural networks on their end. That is a lot of work for what amounts to a glorified TAA filter, so I can only imagine how disastrous it would be to have the rest of the graphics logic, or other random codebases, depend on a black box that is neither consistent nor within a developer's control ...
 

Frenetic Pony

Member
May 1, 2012
Hmmm, imagine using optics to connect several of these chips together: silicon photonics for chiplets, then some sort of photonic interconnect skipping the current standards. No conversion to digital until the last possible second... The current tech isn't training-ready, but maybe with some work. Sounds expensive; it might also be worth it.
 

igor_kavinski

Diamond Member
Jul 27, 2020
I think the future would continue to be digital, but with analog subcomponents that do some crazy fast and low power math.
I think some amazing things could happen if engineers from Mythic AI and Nvidia got together to research possibilities of combining these technologies to complement each other. But maybe AMD should buy Mythic AI. I don't trust Jensen.
 

dullard

Elite Member
May 21, 2001
Only if you think the future of graphics or other domains of computing won't rely on the IEEE 754 standard or deterministic logic, which absolutely isn't going to be true anytime soon ...

Much of graphics, and other fields of computing that use linear algebra, also needs exact behaviour for calculations or complementary deterministic logic as well. Being transparent could not be farther from the truth. If we take DLSS as an example of matrix hardware acceleration in real-time graphics applications, developers still had to do tons of work integrating the functionality, despite Nvidia touting it as some sort of black box, with many more hours spent optimizing their neural networks on their end. That is a lot of work for what amounts to a glorified TAA filter, so I can only imagine how disastrous it would be to have the rest of the graphics logic, or other random codebases, depend on a black box that is neither consistent nor within a developer's control ...
Have you heard of AI? AI absolutely does not need, or even want, exact behavior. We don't have an exact definition of a photo of a dog; one pixel of difference does not suddenly make it a cat or a panda. No, AI used to determine that a photo is a dog needs the exact opposite of exactness. If AI sees enough photos of dogs, then it can learn what a dog approximately is. So when it sees something new that it has never seen before, it can say that it is very likely a dog, or very likely not a dog.

Or in more practical terms, do you want the AI driven vehicles to brake when you cross the street in front of them? Or should they only brake when they see an exact copy of a predefined image?

AI processors are going from 32 bits, to 16 bits, to 8 bits because they can do far more math in the same time and power budget at 8 bits, and they don't need the higher precision or accuracy. https://www.intel.com/content/www/us/en/developer/articles/technical/lower-numerical-precision-deep-learning-inference-and-training.html
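A quick sketch of that point (toy numbers, not the linked article's code): quantize weights and activations to 8 bits, do the multiply-accumulates in integers, rescale once at the end, and the answer lands very close to the float result.

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(0, 1, 256)   # float weights
x = rng.normal(0, 1, 256)   # float activations

def quantize_int8(v):
    """Symmetric 8-bit quantization: map [-max, max] onto [-127, 127]."""
    scale = np.abs(v).max() / 127.0
    return np.round(v / scale).astype(np.int32), scale

wq, ws = quantize_int8(w)
xq, xs = quantize_int8(x)

exact = float(w @ x)                 # full-precision dot product
approx = float(wq @ xq) * ws * xs    # integer MACs, one rescale at the end

# Error is tiny relative to the size of the operands.
err = abs(approx - exact) / (np.linalg.norm(w) * np.linalg.norm(x))
assert err < 0.01
```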
 

amd6502

Senior member
Apr 21, 2017
I think it could be. Maybe it's more a matter of how quickly the analog components can be developed into affordable chips with good characteristics. (Yes, that sounds like it will easily take a few years.) For applications like rendering video game graphics, perfectly consistent precision isn't really very important. So it's not that the demand and applications aren't there already.

There'll probably be some parameter programmers can optionally use to specify how much accuracy to trade for precision via rounding. And some numerical calculations use a shooting method to refine a guess to be much closer to the real solution with each iteration. So for things that need high accuracy as well as high enough precision, maybe one can do a few rounds of calculation using the analog math rather than one.
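That refinement loop can be sketched like so (the "analog" solver is faked with injected noise; everything here is illustrative): do the heavy solve fuzzily, compute the residual exactly in digital, and correct. Each round shrinks the error.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8
A = np.eye(n) * 4 + rng.normal(0, 0.1, (n, n))  # well-conditioned system
b = rng.normal(0, 1, n)
A_inv = np.linalg.inv(A)

def fuzzy_solve(r):
    """Stand-in for a fast analog solve: right answer plus ~1% noise."""
    y = A_inv @ r
    return y + rng.normal(0, 0.01 * np.abs(y).max(), n)

x = fuzzy_solve(b)
for _ in range(10):
    r = b - A @ x           # residual, computed exactly in digital
    x = x + fuzzy_solve(r)  # fuzzy correction; error shrinks each round

assert np.allclose(A @ x, b, atol=1e-6)
```

Since the noise scales with the size of the correction, and corrections shrink as the answer improves, a handful of cheap fuzzy rounds can recover nearly exact accuracy.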
 

Frenetic Pony

Member
May 1, 2012
I think it could be. Maybe it's more a matter of how quickly the analog components can be developed into affordable chips with good characteristics. (Yes, that sounds like it will easily take a few years.) For applications like rendering video game graphics, perfectly consistent precision isn't really very important. So it's not that the demand and applications aren't there already.

There'll probably be some parameter programmers can optionally use to specify how much accuracy to trade for precision via rounding. And some numerical calculations use a shooting method to refine a guess to be much closer to the real solution with each iteration. So for things that need high accuracy as well as high enough precision, maybe one can do a few rounds of calculation using the analog math rather than one.
High precision for graphics is, uhh, required. You need exact replication for most things. AI is literally the only application that might be fault-tolerant enough for this sort of analog.
 

jamescox

Senior member
Nov 11, 2009
Make no mistake that analog circuits have been a dead end and still are for the most part ...

How can analog circuits be the future if they have no programming languages, or even the capability to represent fundamental Boolean logic primitives like NOT, (N)AND, (N)OR, and X(N)OR that we take for granted all of the time with digital circuits?

Analog circuit design has far more gatekeeping involved, with subjects like differential equations and electromagnetic physics, while just learning basic Boolean logic can let you create digital circuits that are Turing complete ...
This isn’t really an analog processor. It is more like having a hardware random number generating circuit in a digital processor. There is still a digital processor in this device. You would program the weights into the array of flash cells and then it can be used for inference, not training. The digital processor would handle everything except the multiplications needed for the inference operation.

This could still allow for very low power operation compared to using a pure digital processor. I don’t know how frequently it can be reprogrammed. It might mostly be used for a static network, like a cell phone listening for specific phrases without activating the rest of the system or a self driving car doing object classification. A person walking is different than a person on a bicycle or a person walking a bicycle. We are getting to the point where a self driving vehicle may take considerable amounts of power just for the processing, so this might be very useful to reduce power consumption.
 

Leeea

Golden Member
Apr 3, 2020
Analog is not THE future, although it does have A future in niche areas.


Analog has issues with precision. Analog computations are affected by component aging, quality, and ambient temperature, and all of these issues result in loss of precision. The OP's video mentions this challenge and attempts to overcome it by converting back to digital over and over.

Digital systems are more resistant to noise, both on the power and on the signal lines. Digital systems are capable of detecting internal failure and typically throw a system fault rather than output garbage data. Analog systems have to be externally tested to verify that output is within expected parameters for a given input. Input parameters not specifically tested can still produce garbage data. Retesting and adjustment have to be done frequently to avoid signal drift (loss of precision).

Analog is hardwired (including the cool Mythic chips in the OP's post) and cannot be altered to a new function. The OP's video compares it to a GPU, but a fairer comparison would be an ASIC. The Mythic chips specifically are subject to cell leakage and wear, a bit like an SSD where every cell is used and there is no overprovisioning.

Analog systems achieve efficiency gains by applying special circuitry to solve one problem. The same can be done with digital systems for massive gains also. ( https://en.wikipedia.org/wiki/Application-specific_integrated_circuit ).

Analog is subject to the same limitations as an ASIC, but has many downsides ASICs do not have. The applications for analog systems where they show appreciable gains over ASIC systems are niche.
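The drift problem is easy to see numerically (toy model, arbitrary drift rate): let the stored conductances wander by ~1% per period, as aging and temperature would, and watch the analog dot-product output walk away from the freshly programmed answer.

```python
import numpy as np

rng = np.random.default_rng(5)
G = rng.uniform(0.5, 1.0, 64)   # freshly programmed conductances
V = rng.uniform(0.0, 1.0, 64)   # input voltages
fresh = G @ V                   # answer right after programming

drifted = G.copy()
errors = []
for period in range(5):
    drifted = drifted * (1 + rng.normal(0, 0.01, 64))  # ~1% random drift
    errors.append(abs(drifted @ V - fresh) / fresh)

# Output slowly wanders off; periodic recalibration is needed to reset it.
assert errors[-1] > 0
assert max(errors) < 0.1
```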
 

jamescox

Senior member
Nov 11, 2009
Analog is not THE future, although it does have A future in niche areas.


Analog has issues with precision. Analog computations are affected by component aging, quality, and ambient temperature, and all of these issues result in loss of precision. The OP's video mentions this challenge and attempts to overcome it by converting back to digital over and over.

Digital systems are more resistant to noise, both on the power and on the signal lines. Digital systems are capable of detecting internal failure and typically throw a system fault rather than output garbage data. Analog systems have to be externally tested to verify that output is within expected parameters for a given input. Input parameters not specifically tested can still produce garbage data. Retesting and adjustment have to be done frequently to avoid signal drift (loss of precision).

Analog is hardwired (including the cool Mythic chips in the OP's post) and cannot be altered to a new function. The OP's video compares it to a GPU, but a fairer comparison would be an ASIC. The Mythic chips specifically are subject to cell leakage and wear, a bit like an SSD where every cell is used and there is no overprovisioning.

Analog systems achieve efficiency gains by applying special circuitry to solve one problem. The same can be done with digital systems for massive gains also. ( https://en.wikipedia.org/wiki/Application-specific_integrated_circuit ).

Analog is subject to the same limitations as an ASIC, but has many downsides ASICs do not have. The applications for analog systems where they show appreciable gains over ASIC systems are niche.
They have gotten very good at programming precise voltage levels into flash cells, and the cells don't really degrade very much over time. You can leave a flash device for years and still read it, although there is some ECC being used.

Also, this isn’t really hardwired. You can presumably re-write the cells many times, possibly thousands. If you wanted to actually hard code a set of coefficients, they could make an ASIC of similar design just with actual resistors corresponding to the necessary coefficients. It would not be programmable in any way though.
 

nickmania

Member
Aug 11, 2016
I am really surprised by these analog computers; at my age, I had never heard of them before. I am really impressed. Do you know of any comparison between these analog computers and our brains? I have spent a lot of time reading about the human brain and I found a lot of similarities with these analog computers, especially in the modulation of the signal: for example, the nervous system activating deep memories in our cells depending on the intensity of the "signal", such as a shock that activates our chemical pathways and the signals from the amygdala.

Thanks for bringing this video here; you have given me a really amazing clue about the similarities between the human brain and computers.
 
