Would a sentient AI be able to trick or convince a conventional CPU (one that just works on bits) and let it escape into the wild, like in the AI Box experiment (Yudkowsky)? Or would a conventional silicon-based CPU be too dumb to trick?
How easy is it to rationalize (i.e., can you fool yourself)?
Ethical and moral choices are hard; they require high-functioning awareness of object existence, placement, and consequence. An AI would likely be given control over a process that would be impossible for a conventional CPU to monitor or assess (that's kind of the point of using AI in the first place). And a specialist AI is likely to be beyond human oversight as well. Again, that's mostly the point. With no disrespect meant toward our species' accomplishments, we're pretty awful at this. If it makes you feel any better, my suspicion is that when it comes to the coming war with Skynet, H(um)an(s) will probably shoot first.
We seem to prefer it that way.