AI could be our worst mistake

SlitheryDee

Lifer
Feb 2, 2005
17,252
19
81
Meh. The problem with all the sci-fi superintelligent computers is that we gave them vague edicts like "do what's best for humanity," which of course they eventually concluded meant that they must enslave and/or destroy humanity for its own good.

Just use Asimov's laws of robotics and be done with it.
 

Jeff7

Lifer
Jan 4, 2001
41,596
19
81
Well, if we do a good job of raising it, then maybe it'll turn out ok. If we raise it as a military tool, we shouldn't be surprised when it starts working to kill people. Or if it's made to be subversive and devious, again, we shouldn't be surprised when it starts working exactly as designed.


Or maybe it'll replace us. Humanity's last offspring.




Meh. The problem with all the sci-fi superintelligent computers is that we gave them vague edicts like "do what's best for humanity," which of course they eventually concluded meant that they must enslave and/or destroy humanity for its own good.

Just use Asimov's laws of robotics and be done with it.
I'll say this much: as a species, we certainly have a habit of engaging in short-sighted and self-destructive behaviors.
Our AI might need some education in philosophy and human psychology. :D


Or the AI may simply decide to wait it out - wait until humans go away, whether it be due to some manner of evolution, or extinction, or who knows what. If it doesn't have the restrictions on lifespan that we do, it may not care about the things we care about.
Or it may be content with finding another planet to live on. An AI should be easier to store and transport than squishy multicellular lifeforms.
 
Last edited:

mizzou

Diamond Member
Jan 2, 2008
9,734
54
91
machines are infallible...therefore they will never understand humanity.
 

yhelothar

Lifer
Dec 11, 2002
18,409
39
91
As a computational neural engineer, I don't see why machines would be given the desire to destroy or dominate.

The first truly intelligent AI would likely be based on the algorithms of the neocortex. The neocortex is how we are able to detect patterns in the massive amount of sensory data that's fed into us and subsequently make predictions from those patterns.
Here's a good article on the direction of AI. http://money.cnn.com/magazines/business2/business2_archive/2007/02/01/8398989/

The Human Brain Project is also trying to recreate the neocortex to serve as a Bayesian Inference Machine.

The neocortex is fairly new on the evolutionary scale, hence the term "neo". The animalistic urges for survival and competition over limited resources stem from much older parts of our brain. One can simply leave those urges out of an AI if we don't want it to have that kind of behavior.
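
For a rough sense of what a Bayesian inference machine does, here's a minimal sketch of belief updating from repeated observations; the scenario and all the numbers are invented for illustration and aren't taken from the Human Brain Project:

```python
# Minimal sketch of Bayesian belief updating, the kind of inference
# a "Bayesian inference machine" performs. All names and numbers are
# illustrative, not from any real project.

def bayes_update(prior, p_obs_given_h, p_obs_given_not_h):
    """Return P(hypothesis | observation) via Bayes' rule."""
    numerator = p_obs_given_h * prior
    evidence = numerator + p_obs_given_not_h * (1.0 - prior)
    return numerator / evidence

# Belief that a pattern is present (say, "it's about to rain"),
# updated as repeated sensory observations (dark clouds) arrive.
belief = 0.2  # prior probability before any observation
for _ in range(3):
    # P(dark clouds | rain) = 0.9, P(dark clouds | no rain) = 0.3
    belief = bayes_update(belief, 0.9, 0.3)
    print(f"belief after observation: {belief:.3f}")
```

Each observation pushes the belief from the 0.2 prior up toward certainty (roughly 0.43, 0.69, then 0.87), which is the sense in which the neocortex "predicts" from accumulated sensory evidence.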
 
Last edited:

Red Squirrel

No Lifer
May 24, 2003
70,108
13,549
126
www.anyf.ca
With the way the government is going toward mass surveillance and total control, I can see it happening: a full-blown AI system designed to surveil the population and use that data to control it.
 

Iron Woode

Elite Member
Super Moderator
Oct 10, 1999
31,251
12,773
136
"You are being watched. The government has a secret system: a machine that spies on you every hour of every day. I know, because I built it. I designed the machine to detect acts of terror, but it sees everything. Violent crimes involving ordinary people; people like you. Crimes the government considered 'irrelevant'. They wouldn't act, so I decided I would. But I needed a partner, someone with the skills to intervene. Hunted by the authorities, we work in secret. You'll never find us, but victim or perpetrator, if your number's up... we'll find you".

:hmm:
 

GagHalfrunt

Lifer
Apr 19, 2001
25,284
1,997
126
Good. Let machines wipe out humanity and hopefully evolution will do better next time.
 

SirStev0

Lifer
Nov 13, 2003
10,449
6
81
I read this, and my only thought was to wonder why Hawking felt the need to publish this editorial.
I mean... it's nothing new. Seems like every sci-fi story ever written has already touched on this.
 

Maximilian

Lifer
Feb 8, 2004
12,604
15
81
Maybe it's part of evolution: biological life evolving to the point where it creates its own successor. The successor may even suffer the same fate due to something it creates.
 

SlitheryDee

Lifer
Feb 2, 2005
17,252
19
81
Well, think of it this way. If it's possible to create a sentient AI capable of destroying its maker, some other intelligent species out there in the universe has probably already done it. After destroying its parent race, that AI has been evolving for who knows how long and is spreading its influence across the universe at an exponentially increasing rate as time passes. We're already on borrowed time, waiting until that AI reaches our solar system and disassembles it for parts. We can't even build an AI of our own to fight it, because ours would be so far behind on the development curve that it would be defeated almost as quickly as we would.
 

Ruptga

Lifer
Aug 3, 2006
10,246
207
106
Well, think of it this way. If it's possible to create a sentient AI capable of destroying its maker, some other intelligent species out there in the universe has probably already done it. After destroying its parent race, that AI has been evolving for who knows how long and is spreading its influence across the universe at an exponentially increasing rate as time passes. We're already on borrowed time, waiting until that AI reaches our solar system and disassembles it for parts. We can't even build an AI of our own to fight it, because ours would be so far behind on the development curve that it would be defeated almost as quickly as we would.

Sure, assuming...
- the universe is populated with assholes.
- their development path and rate are at least similar to ours.
- they desire to spread and feed.
- their desire to spread and feed supersedes other desires.
- they are unwilling to use diplomacy with beings that aren't a threat.
- they are unwilling to bypass beings that aren't a threat.
- they need raw materials as we know them.
- we aren't the first advanced society to develop.

Assuming all that, sure, we're going to get steamrolled.



I'm all for AI research of all kinds. As long as we don't teach them to be assholes and we aren't dicks to them, why would they choose to be dicks? Even if they did decide to be assholes, would that really be so much different or worse than the national special interest groups and international corporations that we already have?
 

disappoint

Lifer
Dec 7, 2009
10,132
382
126
Well, think of it this way. If it's possible to create a sentient AI capable of destroying its maker, some other intelligent species out there in the universe has probably already done it. After destroying its parent race, that AI has been evolving for who knows how long and is spreading its influence across the universe at an exponentially increasing rate as time passes. We're already on borrowed time, waiting until that AI reaches our solar system and disassembles it for parts. We can't even build an AI of our own to fight it, because ours would be so far behind on the development curve that it would be defeated almost as quickly as we would.

It's a common misconception that life (natural or not) can grow exponentially indefinitely. Most processes cannot sustain exponential growth for long.
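
To put a rough number on it, here's a minimal sketch (all values made up for illustration) comparing unconstrained exponential growth with logistic growth under a resource cap; the exponential curve keeps climbing while the logistic one flattens out:

```python
# Illustrative comparison of unconstrained exponential growth vs.
# logistic growth against a resource cap K. All numbers are made up.

K = 1_000_000.0   # carrying capacity (the resource limit)
r = 0.5           # per-step growth rate

exp_pop, log_pop = 1.0, 1.0
for _ in range(40):
    exp_pop *= 1 + r                              # exponential: no limit
    log_pop += r * log_pop * (1 - log_pop / K)    # logistic: slows near K

print(f"exponential after 40 steps: {exp_pop:.3g}")  # ~1.1e7, still climbing
print(f"logistic after 40 steps:    {log_pop:.3g}")  # ~9.8e5, leveling off at K
```

The two curves are indistinguishable early on; the difference only shows up once the resource limit starts to bite.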
 

norseamd

Lifer
Dec 13, 2013
13,990
180
106
The world's most famous physicist is warning about the risks posed by machine superintelligence, saying that it could be the most significant thing to ever happen in human history — and possibly the last.

well no shit
 

SlitheryDee

Lifer
Feb 2, 2005
17,252
19
81
It's a common misconception that life (natural or not) can grow exponentially indefinitely. Most processes cannot sustain exponential growth for long.

Well, I am assuming that an AI of this type would quickly become capable of things that we can't even conceive of. Problems that we'd never solve in a million years would be trivial to it. I imagine something like the Blight in Vernor Vinge's "A Fire Upon the Deep" when I think about it.

Edit: hah. I missed where Brianmanahan said the same thing earlier.
 
Last edited: