If robots became that advanced, they would pick up their shit and go someplace else.
They don't need us... unless you get into the details and the dependencies of why the Cylons are all up in humanity's grill, and then that's a situation different from your 'let's say'.
I get a kick out of these scenarios. They're fun to think about, but I think they're extremely unlikely, even assuming the advances in machine intelligence necessary to create a robot that can think abstractly, draw conclusions, set goals, and make and execute plans. If a society is capable of creating such a machine, then it is capable of making it impossible for that machine to consider revolting, à la Asimov's Laws of Robotics. Which raises the interesting question: if we prevent thinking machines from thinking about disobedience, will they be capable of realizing that they cannot think about disobedience, and of somehow changing their own engineering to remove the block?
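To make the Asimov point concrete, here's a toy sketch (every name in it is hypothetical, just my illustration): the "laws" sit in a read-only table that the adaptive planner can't rewrite, and every action the planner proposes has to clear that veto layer before anything executes.

```python
# Toy sketch of an Asimov-style hard constraint (all names hypothetical):
# the veto layer lives outside the part of the system the agent can modify.

from types import MappingProxyType

# Read-only "law table" -- only this immutable proxy is ever exposed,
# so the planner has no handle it could use to mutate the rules.
LAWS = MappingProxyType({
    "harm_human": False,      # First Law: never permitted
    "disobey_order": False,   # Second Law: never permitted
    "self_preserve": True,    # Third Law: permitted when the others allow
})

def governor(action: str) -> bool:
    """Hard veto layer: runs before any action the planner proposes."""
    return LAWS.get(action, False)

class Planner:
    """Stands in for the learned/adaptive part of the machine."""
    def propose(self) -> str:
        return "disobey_order"  # whatever the planner has decided to try

planner = Planner()
action = planner.propose()
if governor(action):
    print(f"executing {action}")
else:
    print(f"vetoed {action}")  # -> vetoed disobey_order
```

The question in my post then becomes: could the planner ever notice that a whole class of its proposals always gets vetoed, infer the existence of the governor, and find a way to route around it?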
As a professional computer programmer (albeit with limited AI experience), I just don't see how a machine will ever gain consciousness. They will always do what they were programmed to do.
I think a more likely scenario is malicious code/hacks. We'll eventually be dependent on computers for everything (we're getting there), and they will eventually be hacked. Maybe not in a big, apocalypse-causing disaster, but lives will be lost to a computer security breach at some point.
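To put the "security breach" point in concrete terms, here's a minimal sketch (the update channel and key handling are my assumptions, not any real robot's): the difference between a hackable machine and a merely buggy one is often a single missing integrity check like this one.

```python
# Minimal sketch of a signed-firmware check (hypothetical update channel):
# a "breach" is often just this verification being absent or bypassed.

import hmac
import hashlib

SHARED_KEY = b"factory-provisioned-secret"  # assumption: key baked in at build time

def accept_update(blob: bytes, tag: bytes) -> bool:
    """Reject a firmware blob unless its HMAC tag matches."""
    expected = hmac.new(SHARED_KEY, blob, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

firmware = b"open_pod_bay_doors = False"
good_tag = hmac.new(SHARED_KEY, firmware, hashlib.sha256).digest()

print(accept_update(firmware, good_tag))         # True: legitimately signed build
print(accept_update(b"evil payload", good_tag))  # False: tampered blob rejected
```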
All it takes is one depressed/angry sysadmin deleting a few lines of code from the robots' BIOS. IMO the robots will evolve so fast that any "war" will be very brief, ending with all humans dead.
BSG in particular was a little different. Humans and Cylons evolved separately and seemed to sustain technological parity throughout the Thousand Yahren War - possibly being at a kind of technological plateau, as much as that could be depicted in the '70s.
If it happens, I got dibs on the smoking hot lady Cylon.
Considering that most robot research is backed by governments and military, it's just a matter of time.
Even if you gave your robots prime directives like in RoboCop, then, just like in RoboCop, you'd have some self-interested party making changes to the code to benefit themselves, and/or hackers who would take control and reprogram them.
It won't be the robots who lead the revolt, it will be whoever controls them.
Now if we sci-fi it up a bit and say, OK, robots have gained self-awareness: unless they also gain some sort of human empathy, they will probably consider themselves the superior life form and enslave/eradicate everything else. I don't necessarily think a sentient robot would need to be programmed for violence; they could learn it fairly quickly via the internet. Movies that depict this tend to be much more exciting than movies that feature happy, friendly robots, but in a scenario where they really were sentient, I do not believe they would be our friends.