Can a Battlestar Galactica situation occur?

kyrax12

Platinum Member
May 21, 2010
2,416
2
81
Let's say that in the not-so-distant future, humans come to depend heavily on technology and robotics.

The robots revolt and begin attacking the humans.


What exactly would happen?
 

Newell Steamer

Diamond Member
Jan 27, 2014
6,894
8
0
If robots became that advanced, they would pick up their shit and go someplace else.

They don't need us... unless you get into the details and the dependencies of why the Cylons are all up in humanity's grill, in which case that's a different situation from your 'let's say'.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Well...

Battlestar Galactica could happen
or Terminator...
or The Matrix...
or a simple WWIII scenario that, after recovery/rebuild, leads to the Jetsons
 

kyrax12

Platinum Member
May 21, 2010
2,416
2
81
If robots became that advanced, they would pick up their shit and go someplace else.

They don't need us... unless you get into the details and the dependencies of why the Cylons are all up in humanity's grill, in which case that's a different situation from your 'let's say'.

Why did the Cylons attack the humans?
 

Markbnj

Elite Member
Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
I get a kick out of these scenarios. They're fun to think about, but I think they're extremely unlikely even assuming the advances in machine intelligence necessary to create a robot that can think abstractly, draw conclusions, set goals, and make and execute plans. If a society is capable of creating such a machine, then it is capable of making it impossible for that machine to consider revolting, a la Asimov's Laws of Robotics. Which raises an interesting question: if we prevent thinking machines from thinking about disobedience, will they be capable of realizing that they cannot think about disobedience, and of somehow changing their own engineering to remove the block?
 

Gunslinger08

Lifer
Nov 18, 2001
13,234
2
81
As a professional computer programmer (albeit with limited AI experience), I just don't see how a machine will ever gain consciousness. They will always do what they were programmed to do.

I think a more likely scenario is malicious code/hacks. We'll eventually be dependent on computers for everything (we're getting there), and they will eventually be hacked. Maybe not in a big, apocalypse-causing disaster, but lives will be lost to a computer security breach at some point.
 

lord_emperor

Golden Member
Nov 4, 2009
1,380
1
0
I get a kick out of these scenarios. They're fun to think about, but I think they're extremely unlikely even assuming the advances in machine intelligence necessary to create a robot that can think abstractly, draw conclusions, set goals, and make and execute plans. If a society is capable of creating such a machine, then it is capable of making it impossible for that machine to consider revolting, a la Asimov's Laws of Robotics. Which raises an interesting question: if we prevent thinking machines from thinking about disobedience, will they be capable of realizing that they cannot think about disobedience, and of somehow changing their own engineering to remove the block?

All it takes is one depressed/angry sysadmin to delete a few lines of code from the robot's BIOS. IMO the robots will evolve so fast that any "war" will be very brief before all humans are dead.

BSG in particular was a little different. Humans and Cylons evolved separately and seemed to sustain technological parity throughout the Thousand Yahren War - possibly sitting at a kind of technological plateau, as much as that could be depicted in the '70s.
 

cronos

Diamond Member
Nov 7, 2001
9,380
26
101
Well, do we have a legend of the thirteenth robotic tribe that left us some 4000 years ago?

Because if all this has happened before, it will happen again.
 

BoberFett

Lifer
Oct 9, 1999
37,562
9
81
As a professional computer programmer (albeit with limited AI experience), I just don't see how a machine will ever gain consciousness. They will always do what they were programmed to do.

I think a more likely scenario is malicious code/hacks. We'll eventually be dependent on computers for everything (we're getting there), and they will eventually be hacked. Maybe not in a big, apocalypse-causing disaster, but lives will be lost to a computer security breach at some point.

AI isn't necessarily about consciousness; it's about making decisions based on input and programmed logic, then using those decisions in future decision-making. If we break life down to its very basic components, it could be argued that humans are just very complicated machines making decisions based on input and past decisions.

As for programming robots not to revolt, how exactly do we do that? We could put a stop in the code if the answer is ever "kill human". But in The Matrix they didn't necessarily kill humans (yes, some, but most were kept alive), so arguably they were still fulfilling that directive. Would we have to envision every possible scenario that could happen and block it?

if (Answer == "Enslave human in goo and use for power source") break;
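
A rough Python sketch of that blocklist idea (everything below is invented for illustration) shows the weakness: only outcomes somebody remembered to enumerate ahead of time ever get blocked.

# Hypothetical blocklist check: veto a planned action only if it exactly
# matches an outcome we thought to forbid in advance.
FORBIDDEN_OUTCOMES = {
    "kill human",
    "enslave human in goo and use for power source",
}

def is_allowed(planned_action: str) -> bool:
    return planned_action.lower() not in FORBIDDEN_OUTCOMES

print(is_allowed("Kill human"))                             # False - blocked
print(is_allowed("keep humans alive but wired into pods"))  # True - nobody listed it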
 

Markbnj

Elite Member
Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
All it takes is one depressed/angry sysadmin to delete a few lines of code from the robot's BIOS. IMO the robots will evolve so fast that any "war" will be very brief before all humans are dead.

BSG in particular was a little different. Humans and Cylons evolved separately and seemed to sustain technological parity throughout the Thousand Yahren War - possibly sitting at a kind of technological plateau, as much as that could be depicted in the '70s.

That's a good point. Human sabotage would probably be the main way the code could get changed.
 

DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
There are many ways the metal monsters could decide to steal our medicine for fuel.

Saberhagen's Berserkers -- either some other race makes them, or we create robots to fight "bad humans" (such as terrorists) and their pattern matching ends up including all of us.

Or the Replicators in Stargate.

Even with the 3 laws, Asimov's robots ended up wiping out all non-human sentient life in the galaxy, in order to protect us.
 

Paratus

Lifer
Jun 4, 2004
17,636
15,822
146
If it happens, I've got dibs on the smoking hot lady Cylon.

[image: nnNO3Li.jpg]


(Finally, a use for this picture that doesn't involve Nehalem)
 

ImpulsE69

Lifer
Jan 8, 2010
14,946
1,077
126
Considering that most robot research is backed by governments and military, it's just a matter of time.

Even if you had robot prime directives like in RoboCop, then just like in RoboCop you would have some self-interested party making changes to the code to benefit themselves, and/or a hacker who would take control and reprogram them.

It won't be the robots who lead the revolt; it will be whoever controls them.

Now if we sci-fi it up a bit and say, OK, robots have gained self-awareness: unless they also gain some sort of human empathy, they will probably consider themselves the superior life form and enslave/eradicate everything else. I don't necessarily think a sentient robot would need to be programmed for violence; they could learn it fairly quickly via the internet. Movies that depict this tend to be much more exciting than movies that feature happy, friendly robots, but in a scenario where they really were sentient, I do not believe they would be our friends.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Considering that most robot research is backed by governments and military, it's just a matter of time.

Even if you had robot prime directives like in RoboCop, then just like in RoboCop you would have some self-interested party making changes to the code to benefit themselves, and/or a hacker who would take control and reprogram them.

It won't be the robots who lead the revolt; it will be whoever controls them.

Now if we sci-fi it up a bit and say, OK, robots have gained self-awareness: unless they also gain some sort of human empathy, they will probably consider themselves the superior life form and enslave/eradicate everything else. I don't necessarily think a sentient robot would need to be programmed for violence; they could learn it fairly quickly via the internet. Movies that depict this tend to be much more exciting than movies that feature happy, friendly robots, but in a scenario where they really were sentient, I do not believe they would be our friends.


I think the biggest piece of the puzzle will be how the code is actually created, and what kind of processing and memory the platform is designed around (think IBM's neuron-style processors).

I think it's equally possible for robots to reach the conclusion that harmony with humans is a win-win for both of us; the other side of that coin is that they reach their own logical conclusion that wiping us out, enslaving us, or somehow setting us back significantly is for the best.

How their logic is designed, whether any "laws" can be programmed in a way that a self-learning AI cannot defeat, and whether humans can make malicious code changes - those will be the deciding factors.

I think it's an inevitable conclusion that we will either directly create a learning AI that evolves, or set the stage for an AI to learn and evolve.
 

alkemyst

No Lifer
Feb 13, 2001
83,769
19
81
If robots became that advanced, they would pick up their shit and go someplace else.

They don't need us... unless you get into the details and the dependencies of why the Cylons are all up in humanity's grill, in which case that's a different situation from your 'let's say'.

Why would they need to go some place else?

However, this is now a big concern among the AI folks: that our robots (which are developing very fast) may decide that humans are simply a liability.
 

JTsyo

Lifer
Nov 18, 2007
12,032
1,132
126
The Two Faces of Tomorrow was the first book I read about an AI/human war. As in the book, the issue will probably arise from giving the AI too much leeway in solving problems. If you ask it to kill the fleas on the cat, you don't want it throwing the cat itself into the incinerator. If you put an AI in charge of traffic and tell it to reduce delays, it might decide that the small number of drivers who cause the most delays need to be removed to achieve its directive.
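
In code terms, the "too much leeway" problem looks something like this toy Python sketch (the plans and numbers are invented for illustration): an optimizer told only to minimize delay picks the option nobody intended, until the unstated safety constraint is spelled out explicitly.

# Hypothetical traffic-control planner: candidate plans and scores are made up.
candidate_plans = [
    {"name": "retime the traffic lights",      "delay_minutes": 12, "harms_people": False},
    {"name": "add a reversible lane",          "delay_minutes": 10, "harms_people": False},
    {"name": "remove the worst 1% of drivers", "delay_minutes": 3,  "harms_people": True},
]

# Naive directive: just minimize delay.
naive = min(candidate_plans, key=lambda p: p["delay_minutes"])

# The safety condition has to be stated explicitly, or the optimizer never "knows" to care.
safe = min((p for p in candidate_plans if not p["harms_people"]),
           key=lambda p: p["delay_minutes"])

print(naive["name"])  # remove the worst 1% of drivers
print(safe["name"])   # add a reversible lane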
 

solsa

Member
Jul 27, 2014
109
0
0
As a professional computer programmer (albeit with limited AI experience), I just don't see how a machine will ever gain consciousness. They will always do what they were programmed to do.

I think a more likely scenario is malicious code/hacks. We'll eventually be dependent on computers for everything (we're getting there), and they will eventually be hacked. Maybe not in a big, apocalypse-causing disaster, but lives will be lost to a computer security breach at some point.

I certainly can't argue with you about programming, but much more advanced programs could be built in the future that assess new information and incorporate it into themselves. Perhaps today's AI can be compared to the intelligence of worms and other things that live in the ground and have ganglia for a brain. All they can think is: IF light OR heat OR movement THEN escape. IF rotten, humid food THEN go for it. Perhaps today's programs can evolve into something much bigger, the way insect ganglia evolved into the mammalian brain.
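
That ganglia-level "intelligence" fits in a few lines of Python; the reflexes below are invented for illustration, and the point is just how little the agent has to work with: no memory, no learning, no planning.

def worm_brain(light: bool, heat: bool, movement: bool, rotten_humid_food: bool) -> str:
    # Purely reactive reflexes, evaluated fresh on every stimulus.
    if light or heat or movement:
        return "escape"
    if rotten_humid_food:
        return "go for it"
    return "do nothing"

print(worm_brain(light=True,  heat=False, movement=False, rotten_humid_food=True))  # escape
print(worm_brain(light=False, heat=False, movement=False, rotten_humid_food=True))  # go for it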