But the question is whether it rises to a level of complexity, in terms of the number of variables and the calculations needed to update their state, that would require a notable amount of computing power. My knee-jerk answer was also "No," but maybe I need to learn more about transistors.
Not a notable amount of computing power, no. In fact, we have some pretty good computer simulations for this sort of stuff already. The amount of computational power needed depends on how accurate you want the representation to be. There are simple modeling equations that cover most problems, but once you start talking about things like the physical layout of the transistors and their proximity to each other, then things get really dicey pretty quickly.
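For a sense of what those "simple modeling equations" look like, here's a sketch of the classic square-law (Shichman-Hodges) MOSFET model, the kind of first-order equation SPICE-like tools start from. The `k` and `vth` values below are made-up illustrative numbers, not parameters for any real device:

```python
# Minimal square-law NMOS model: three operating regions, one equation each.
# k (transconductance parameter, A/V^2) and vth (threshold voltage, V) are
# hypothetical illustrative values.

def drain_current(vgs, vds, k=2e-4, vth=0.7):
    """Return drain current (A) for an idealized NMOS transistor."""
    vov = vgs - vth              # overdrive voltage
    if vov <= 0:
        return 0.0               # cutoff: transistor is off
    if vds < vov:
        # triode (linear) region
        return k * (vov * vds - vds ** 2 / 2)
    # saturation region
    return k / 2 * vov ** 2

# Sweep the gate voltage at a fixed drain voltage of 1.8 V
for vgs in (0.5, 1.0, 1.5, 2.0):
    print(f"Vgs={vgs:.1f} V -> Id={drain_current(vgs, 1.8):.2e} A")
```

Evaluating this is a handful of multiplications per transistor per time step, which is why a single device is trivial for any modern computer. It's the second-order effects (and the coupling between devices) that the simple model leaves out.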
That currently isn't viably simulatable. The likes of Intel and AMD instead use rule-of-thumb guidelines for transistor placement rather than relying on fully proven-out simulations to figure out just how close you can smoosh those things together.
Of course, that is mostly for the complex logic circuits (which take up a very small portion of the die space). I've got little doubt that caches have received a much larger share of the computational effort to optimize their size on the die, especially since cache structure is pretty simple all in all.
But I digress. The answer is: you could accurately simulate a single transistor with any modern computer (and probably most graphing calculators). The complexity increases exponentially (maybe even factorially) as you start to introduce more elements.
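To give a feel for why adding elements blows up the cost: a SPICE-style simulator builds an N-by-N conductance matrix (one row per circuit node) and solves it over and over at each time step, and a dense solve scales roughly as O(N^3) even before you account for nonlinear devices needing iteration. A toy nodal-analysis sketch on a hypothetical three-resistor network (made-up values, not a real circuit):

```python
# Toy nodal analysis: solve G @ v = i for node voltages via Gaussian
# elimination with partial pivoting. Real simulators do this (much more
# cleverly) thousands of times per transient run.

def solve(G, i):
    """Solve the dense linear system G v = i for a small circuit."""
    n = len(G)
    A = [row[:] + [i[r]] for r, row in enumerate(G)]   # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    v = [0.0] * n
    for r in range(n - 1, -1, -1):
        v[r] = (A[r][n] - sum(A[r][c] * v[c] for c in range(r + 1, n))) / A[r][r]
    return v

# Hypothetical network: 1 A source into node 0, resistors 1k (node0-node1),
# 2k (node1-gnd), 3k (node0-gnd). Conductance matrix entries are in siemens.
g1, g2, g3 = 1 / 1e3, 1 / 2e3, 1 / 3e3
G = [[g1 + g3, -g1],
     [-g1,     g1 + g2]]
v = solve(G, [1.0, 0.0])
print(v)  # node voltages in volts
```

Two nodes is instant; a few billion coupled nonlinear transistors is a completely different story, which is the point of the scaling argument above.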
How many transistors you'd need to simulate depends on what you're after. Probably enough to put together a processor that can start doing mathematics.
Or you could take the dumb route and say "one," since transistors tend to behave like transistors, oddly enough.
tl;dr The complexity of the simulation increases as the simulation becomes more accurate.