Well, if we do a good job of raising it, then maybe it'll turn out ok. If we raise it as a military tool, we shouldn't be surprised when it starts working to kill people. Or if it's made to be subversive and devious, again, we shouldn't be surprised when it starts working exactly as designed.
Or maybe it'll replace us. Humanity's last offspring.
Meh. The problem with all the sci-fi superintelligent computers is that we gave them vague edicts like "do what's best for humanity," which of course they eventually concluded meant that they must enslave and/or destroy humanity for its own good.
Just use Asimov's Three Laws of Robotics and be done with it.
I'll say this much: as a species, we certainly have a habit of engaging in short-sighted and self-destructive behaviors.
Our AI might need some education in philosophy and human psychology.
Or the AI may simply decide to wait it out - wait until humans go away, whether through evolution, extinction, or who knows what. If it doesn't have the restrictions on lifespan that we do, it may not care about the things we care about.
Or it may be content with finding another planet to live on. An AI should be easier to store and transport than squishy multicellular lifeforms.