The comparison to bacteria fails because bacteria are living things; AI has no life.
Our discussion will now fall apart, because we can ascribe any and all traits and abilities we want to an AI. It can be an idiot savant or an all-powerful, all-knowing, hulking mass of willpower. The fundamental issue is that AI is imaginary, so imagination becomes the limiting factor in its abilities.
So... OK, bacteria fit our arbitrary definition of "life." That doesn't have much bearing on something's ability to be durable. An AI could exist in a single isolated system, or it could exist in a distributed manner, spread across cloud-based computers.
Self-aware AIs are currently imaginary, yes. Intelligent AIs do exist in labs, though, and they are capable of learning new things. How intelligent? Not very. We're limited by the number of transistors we can dedicate to the task.
At one point, a 500 TFLOP computer was also considered impossible.
Memory, as it is for us, is just the storage of a dynamic model of the environment, and even of existing memories. Intelligence is the ability to adapt that model on the fly. Self-awareness is the inclusion of one's own existence in that model of the environment. Consciousness is the continuous updating of this entire system of models.
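To make those four definitions concrete, here's a toy sketch in Python. It's purely illustrative: every class and field name is invented for this post, not taken from any real AI framework, and it's a cartoon of the idea rather than a claim about how real systems are built.

```python
# Toy sketch of the model-of-models idea above. All names are invented
# for illustration; this is not a real AI architecture.

class WorldModel:
    """Memory: a stored, dynamic model of the environment."""
    def __init__(self):
        self.state = {}    # current beliefs about the environment
        self.history = []  # stored snapshots: memories, including of memories

    def update(self, observation):
        """Intelligence: adapt the model on the fly as new data arrives."""
        self.history.append(dict(self.state))
        self.state.update(observation)

class SelfAwareAgent:
    """Self-awareness: the agent's own existence appears in its world model."""
    def __init__(self, name):
        self.model = WorldModel()
        # The agent shows up inside its own model of the environment.
        self.model.state["self"] = {"name": name, "exists": True}

    def step(self, observation):
        # Consciousness, on this toy definition: continuously updating the
        # whole system of models, including the model of the self.
        self.model.update(observation)
        self.model.state["self"]["updates_seen"] = len(self.model.history)

agent = SelfAwareAgent("lab_ai")
for obs in [{"light": "on"}, {"light": "off"}, {"door": "open"}]:
    agent.step(obs)

print(agent.model.state["self"])  # the agent's model includes itself
```

The point of the sketch is just the layering: memory is the stored model, intelligence is the update step, self-awareness is the "self" entry inside the model, and consciousness is the loop that keeps the whole thing current.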
Implementing that on computers is tough, currently; we've only been working on the problem in earnest for a few decades. Computers were also originally designed to yield consistent, repeatable results. Even so, emergent properties show up all the time: bugs appear constantly, and complex systems behave in unexpected ways. We might find ourselves at some point with an AI supercomputer in a lab that has, unintentionally, become aware of its own existence.