AI could be our worst mistake


BoberFett

Lifer
Oct 9, 1999
37,562
9
81
Also, this problem has already been solved.

Just give the AI a six-foot extension cord so it can't chase us.

officedrawing.jpg
 

norseamd

Lifer
Dec 13, 2013
13,990
180
106
As a computational neural engineer, I don't see why machines would be given the desire to destroy or dominate. The first truly intelligent AI would likely be based on the algorithms of the neocortex. The neocortex is how we are able to detect patterns in the massive amount of sensory data fed into us and subsequently make predictions from those patterns. Here's a good article on the direction of AI: http://money.cnn.com/magazines/busin...02/01/8398989/ The Human Brain Project is also trying to recreate the neocortex to serve as a Bayesian inference machine. The neocortex is fairly new on the evolutionary scale, hence the term "neo". The animalistic urges for survival and competition over limited resources stem from much older parts of our brain. One can simply choose not to model that behavior into an AI.
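For what it's worth, here is a minimal sketch of what a "Bayesian inference machine" means in practice: a belief about a hidden cause gets revised as noisy sensory evidence arrives. The predator scenario and all the numbers below are invented purely for illustration.

Code:
# Toy Bayesian update: revise P(hypothesis) as sensory evidence arrives.
# All numbers here are made up for illustration.

def bayes_update(prior, p_cue_if_true, p_cue_if_false):
    """Return P(hypothesis | cue) via Bayes' rule."""
    p_cue = prior * p_cue_if_true + (1 - prior) * p_cue_if_false
    return prior * p_cue_if_true / p_cue

belief = 0.01                       # prior: predators are rare
for cue in (0.7, 0.8, 0.9):         # increasingly strong sensory cues
    belief = bayes_update(belief, cue, 1 - cue)
    print(f"P(predator) = {belief:.3f}")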

All mammals have a neocortex.

Birds also have a nidopallium, which works more or less the same way.
 

yhelothar

Lifer
Dec 11, 2002
18,409
39
91
All mammals have a neocortex.

Birds also have a nidopallium, which works more or less the same way.

Yes, all mammals do have a neocortex, and mammals are relatively new on the evolutionary scale. The animalistic aggression for survival and competition that Hawking worries about in AI is much older than mammals and birds.
 

alkemyst

No Lifer
Feb 13, 2001
83,769
19
81
The problem most don't see here (yet our world's think tanks do) is that there is a HUGE push to militarize AI and, like the drone, simply replace the soldier on the ground.

The scary part is that this is one of the key things that can actually turn a military/police force against its own people. Machines don't have emotions.

You give them an edict and they follow it to the letter; give them AI and they can expound on that a bit.

The trend is that within 10-20 years we can replace surgeons, lawyers, and engineers with AI-driven robots. Already there are devices that can process all of man's written knowledge and make decisions based on it in real time.
 

monkeydelmagico

Diamond Member
Nov 16, 2011
3,961
145
106
I read this, and my only thought was to wonder why Hawking felt the need to publish this editorial.
I mean... it is nothing new. Seems like every sci-fi story ever has already touched on this.


This is why one of the brightest minds has chosen this point in time to add his "voice" to the debate:

The problem most don't see here (yet our world's think tanks do) is that there is a HUGE push to militarize AI and, like the drone, simply replace the soldier on the ground.

The slippery slope is automation. There are significant debates about allowing drones to make deadly decisions without human interaction. A lot of people believe our military drone fleet is akin to remote-controlled toys with humans at the controls. The cutting-edge reality is armed instruments of destruction that are able to recon, loiter, ID targets, process the sitrep internally, and execute based upon their programming. Today. Right now. Battle tested. Coming to your local police force within a decade at most.
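To make the human-in-the-loop distinction concrete, here is a toy sketch of the same detect/identify/engage loop with and without a human gate. Every name in it is hypothetical; it is not modeled on any real drone system.

Code:
# Toy sketch of the autonomy debate: the same target loop with and
# without a human gate. All names here are hypothetical placeholders.

def decide(contact: str, target_list: set, human_in_loop: bool) -> str:
    """Return the action taken for one identified contact."""
    if contact not in target_list:
        return "ignore"
    if human_in_loop:
        return "await operator confirmation"   # the "remote-controlled toy" model
    return "engage"                            # the fully autonomous model

targets = {"target-7", "target-9"}
for contact in ("civilian-1", "target-7"):
    print(contact, "->", decide(contact, targets, human_in_loop=False))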
 

alkemyst

No Lifer
Feb 13, 2001
83,769
19
81
The slippery slope is automation. There are significant debates about allowing drones to make deadly decisions without human interaction. A lot of people believe our military drone fleet is akin to remote-controlled toys with humans at the controls. The cutting-edge reality is armed instruments of destruction that are able to recon, loiter, ID targets, process the sitrep internally, and execute based upon their programming. Today. Right now. Battle tested. Coming to your local police force within a decade at most.

Exactly. Right now our killing drones are human-controlled. The next step is to automate them. They will have facial recognition, DNA markers, etc. uploaded or even available online: scope out a target, compare it to a list, and see if it's "safe or kill". Even police cruisers can process 1 million tags in a minute or less just driving around.
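That throughput is believable just from the data structures involved: matching a scanned tag against a hotlist is a single hash-set lookup. A toy benchmark, with fabricated plate numbers, makes the point.

Code:
import time

# Toy benchmark: checking a million plate reads against a hotlist is
# plain hash-set membership. All plate numbers are fabricated.

hotlist = {f"ABC{n:04d}" for n in range(10_000)}    # 10k flagged plates
reads = [f"XYZ{n:06d}" for n in range(1_000_000)]   # 1M scanned tags

start = time.perf_counter()
hits = sum(1 for plate in reads if plate in hotlist)
elapsed = time.perf_counter() - start
print(f"Checked {len(reads):,} tags in {elapsed:.2f} s, {hits} hits")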

Police are already buying up assault rifles and hardened vehicles in major numbers. FN is offering LEOs still using M16s free trades for its bullpup models... and even cash back.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
I remember watching that movie, then going to Google afterwards for a bit of a "wtf did I just watch" kind of search. That whole backstory seemed more interesting than what happened in the movie.

Reading the entire original six-novel series is really an experience without comparison.

The original movie and the two mini-series SciFi produced (quite good!) do not do nearly enough justice to the story.
The backstory is simply phenomenal, and in the novels you get to experience it rather than reading snippets that help the movie make more sense.

I really should re-read it. I don't know if I'll ever get around to reading the prequel and other novels. I read the two sequels that Frank's son helped write - the ones that closed out the original timeline started in the six-novel series - but I never picked up the others.
 

Jeff7

Lifer
Jan 4, 2001
41,596
20
81
The problem most don't see here (yet our world's think tanks do) is that there is a HUGE push to militarize AI and, like the drone, simply replace the soldier on the ground.

The scary part is that this is one of the key things that can actually turn a military/police force against its own people. Machines don't have emotions.

You give them an edict and they follow it to the letter; give them AI and they can expound on that a bit.

The trend is that within 10-20 years we can replace surgeons, lawyers, and engineers with AI-driven robots. Already there are devices that can process all of man's written knowledge and make decisions based on it in real time.
That's the biggest risk.
If we design machines specifically to kill people, no one should be surprised if that's exactly what they do.



Raise a kid to be a serial killer, but only of the people you specifically want him to kill.
Then he starts killing other people for what he believes to be good reasons.
How many people are going to be surprised when that happens?



If the AI is designed and raised to be benevolent and kind, and is not instilled with the inherently aggressive behaviors harbored by many lifeforms on Earth, this might not be an issue at all. It may question our own occasional bouts of insanity. ("These groups of leaders disagree on something, so they make the citizenry go kill each other? This is a systemic mental disorder, right?")
 

BoberFett

Lifer
Oct 9, 1999
37,562
9
81
If the AI is designed and raised to be benevolent and kind, and is not instilled with the inherently aggressive behaviors harbored by many lifeforms on Earth, this might not be an issue at all.

What could possibly go wrong?

ed-209.jpeg
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
AI could take over as the God humanity has always pretended existed.

I'm beginning to think that, no matter how malevolent or benevolent, such an AI "being" is the only sure-fire way we'll see humanity actually survive long-term.

I think we've demonstrated we cannot be trusted with our own future as a civilization, because we do not address ourselves as a collective civilization. We routinely fail to plan long-term for anything that is entirely, 100% aimed at the public's best interest - there is always an element of individual power and control, and collective self-serving actions for and by those in power, or close to power. That is the human way, and the very nature of man that leads me to predict that, without drastic intervention, we'll surely fail to actually save ourselves from whatever challenges our entire species.

Even if said AI is to imprison and/or enslave us for thousands of years, and we eventually manage to overthrow it, that will be all that much better for us in the long run.
:)
 

BurnItDwn

Lifer
Oct 10, 1999
26,353
1,862
126
There have been 10,000 books written on the subject of AI becoming conscious and self-aware and destroying humanity.

I do not know if we will ever truly write an AI that advanced.

If we do manage to do it, it's going to take an amazing computer to handle that AI.

Think of ourselves as people: our entire bodies and organs essentially exist to keep the brain alive.

While the capacity to create a superbeing with super AI may someday be in our grasp, it's still a faraway dream for computer scientists.
 

John Connor

Lifer
Nov 30, 2012
22,757
619
121
I predict that the F-22 Raptor will be the first aircraft to go autonomous. The jet is capable of so much more, but the flight computer limits what the pilot can do; otherwise the maneuvers would kill him. They could easily make military jets unmanned.
 

alkemyst

No Lifer
Feb 13, 2001
83,769
19
81
There have been 10,000 books written on the subject of AI becoming conscious and self-aware and destroying humanity.

I do not know if we will ever truly write an AI that advanced.

If we do manage to do it, it's going to take an amazing computer to handle that AI.

Think of ourselves as people: our entire bodies and organs essentially exist to keep the brain alive.

While the capacity to create a superbeing with super AI may someday be in our grasp, it's still a faraway dream for computer scientists.

Hey bro, computers don't need organs.
 

alkemyst

No Lifer
Feb 13, 2001
83,769
19
81
Maybe.....

Military aviation is usually at least a decade ahead of what the public is aware of.

That's the biggest risk.
If we design machines specifically to kill people, no one should be surprised if that's exactly what they do.



Raise a kid to be a serial killer, but only of the people you specifically want him to kill.
Then he starts killing other people for what he believes to be good reasons.
How many people are going to be surprised when that happens?



If the AI is designed and raised to be benevolent and kind, and is not instilled with the inherently aggressive behaviors harbored by many lifeforms on Earth, this might not be an issue at all. It may question our own occasional bouts of insanity. ("These groups of leaders disagree on something, so they make the citizenry go kill each other? This is a systemic mental disorder, right?")

Exactly, and that is probably why the uber-rich are building their homes to withstand missiles and the like.
 
Mar 10, 2005
14,647
2
0
When AI gets out of line, there are two solutions to go for:

logic bomb, or my favorite, seduction ;).

Seriously though, man will never make a computer that some 10-year-old asshole somewhere won't run amok with.
 

BurnItDwn

Lifer
Oct 10, 1999
26,353
1,862
126
Hey bro, computers don't need organs.

They do not need organic organs, true. But they need a power source, they need the ability to regulate temperature (cooling), and they need interfaces (some sensor or method of getting input and output)... Everything needs to talk to the CPU, the brain, through some sort of bus, sort of like the nerve network in a body.

No, they do not need organs, but there is much similarity between a human body's relationship to its brain and the relationship a computer AI would have to whatever hardware it's trapped inside of.
 

alkemyst

No Lifer
Feb 13, 2001
83,769
19
81
They do not need organic organs, true. But they need a power source, they need the ability to regulate temperature (cooling), and they need interfaces (some sensor or method of getting input and output)... Everything needs to talk to the CPU, the brain, through some sort of bus, sort of like the nerve network in a body.

No, they do not need organs, but there is much similarity between a human body's relationship to its brain and the relationship a computer AI would have to whatever hardware it's trapped inside of.

Is it really going to be hard for AI-driven robots to find power in most of the world?
 

BurnItDwn

Lifer
Oct 10, 1999
26,353
1,862
126
Quantum computing is just around the corner.

Quantum computing was "just around the corner" in the 1990s too. We've had working quantum computers since 2001.


But that doesn't mean anything.

Even if we had the hardware (which certainly is something that eventually will be doable), the software needed to do it may be many decades or centuries away.

In other words, anything related to AI that we could do in the next 100 or so years may contribute to our long-term threat, but it won't directly be that threat.