Robots could murder us out of KINDNESS

Jeff7

Lifer
Jan 4, 2001
41,596
20
81
Humans can murder people out of what they perceive to be kindness or compassion. Complex or intelligent systems can produce little anomalies and unexpected behaviors.
 

yhelothar

Lifer
Dec 11, 2002
18,409
39
91
Nell Watson is not an engineer. She has an IT and business background. She's never studied AI or neuroscience. Neither has Stephen Hawking or Elon Musk. Let me know when someone who shows more than a modicum of knowledge in AI and neuroscience gives one of these doomsday predictions. Until then, it's tiresome seeing these sleazy tabloid sites like the Daily Mail trying to get people hyped about an apocalyptic AI.

https://www.linkedin.com/in/nellwatson
 

Jeff7

Lifer
Jan 4, 2001
41,596
20
81
Nell Watson is not an engineer. She has an IT and business background. She's never studied AI or neuroscience. Neither has Stephen Hawking or Elon Musk. Let me know when someone who shows more than a modicum of knowledge in AI and neuroscience gives one of these doomsday predictions. Until then, it's tiresome seeing these sleazy tabloid sites like the Daily Mail trying to get people hyped about an apocalyptic AI.

https://www.linkedin.com/in/nellwatson
Oh come on now, even simple control systems can accidentally kill people. It's just a bug, a little bit of unexpected behavior coming from a complex system.
Then a fire-control door slams shut on someone's head during an emergency, popping it like a watermelon, and a small op-amp on a circuit board giggles quietly.
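That failure mode is literally a one-line bug. A minimal, entirely hypothetical sketch (the function names, signals, and logic are invented for illustration, not taken from any real door controller):

```python
# Hypothetical fire-door controller, showing how one missing check in a
# simple control system becomes a lethal "unexpected behavior".

def door_command(alarm_active, doorway_blocked):
    """Buggy version: closes on alarm, ignoring whatever is in the doorway."""
    return "close" if alarm_active else "hold"


def door_command_interlocked(alarm_active, doorway_blocked):
    """Fixed version: an obstruction interlock overrides the close command."""
    if doorway_blocked:
        return "hold"  # never drive the door into a person or object
    return "close" if alarm_active else "hold"
```

With someone standing in the doorway during an alarm, the buggy controller still commands "close"; the interlocked one holds. No malice required, just a missing branch.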
 

yhelothar

Lifer
Dec 11, 2002
18,409
39
91
Oh come on now, even simple control systems can accidentally kill people. It's just a bug, a little bit of unexpected behavior coming from a complex system.
Then a fire-control door slams shut on someone's head during an emergency, popping it like a watermelon, and a small op-amp on a circuit board giggles quietly.

Yeah, and you don't fix the accidents by teaching it kindness to humans. :biggrin:
The doomsday fear is based on a superintelligence that automagically gains awareness and makes a conscious decision to kill. This sells the notion that artificial intelligence, once sufficiently complex, would gain consciousness and in turn gain animal-like qualities of aggression, domination, and survival. It's utter hogwash.
 
Last edited:

Jeff7

Lifer
Jan 4, 2001
41,596
20
81
Yeah, and you don't fix the accidents by teaching it kindness to humans. :biggrin:
"Computer: In the event of a fire, you are to save as many people as possible."

".....hmm, killing these two humans beneath this heavy door will permit certainty of containing the fire, and saving 30 others."

*squish*

"I mean, what are they going to do, send me to one of their jails?"




The doomsday fear is based on a superintelligence that automagically gains awareness and makes a conscious decision to kill. This sells the notion that artificial intelligence, once sufficiently complex, would gain consciousness and in turn gain animal-like qualities of aggression, domination, and survival. It's utter hogwash.

My thought on it is that humans can do those things, and we are meat-based computers that are the product of >1 billion years of biological evolution. Our behaviors are arranged into something that we consider to be "intelligence" and "consciousness," all of which is based on the behavior of billions of individual neurons doing what neurons do.

Make a sufficiently capable computer system or android, and you might end up with some of the same behaviors.
Of course, an android might not be motivated by the same things that motivate us. For example, it might be possible to turn it off without causing permanent damage, a luxury we do not presently have, which is why remaining continuously alive is typically seen as a priority. Or we may create an intelligence with an absolutely unprecedented sense of complete indifference. :D
 
Last edited:

T9D

Diamond Member
Dec 1, 2001
5,320
6
0
People will be programming them to kill long before that.

I'm more worried about hidden code. Or government hacking. Or someone getting something into the programming somehow.
 

Ruptga

Lifer
Aug 3, 2006
10,246
207
106
Oh, is it already time for another "oh noes robots gunna kill us all" thread? This is just another fluff piece being circulated in hopes that bored and simple minds will click on it. There's no news here, just ideas that have been floating around sci-fi for well over fifty years.
 

Jeff7

Lifer
Jan 4, 2001
41,596
20
81
Oh, is it already time for another "oh noes robots gunna kill us all" thread? This is just another fluff piece being circulated in hopes that bored and simple minds will click on it. There's no news here, just ideas that have been floating around sci-fi for well over fifty years.
And I seem to remember there being some kind of incredibly subtle theme in A.I. about this very same sort of thing.
 

Moonbeam

Elite Member
Nov 24, 1999
74,821
6,780
126
It's called projecting your self-hate outward, in this case onto machines. This is nothing more than the dragons drawn at the edges of maps of the known world.
 

Red Squirrel

No Lifer
May 24, 2003
70,778
13,869
126
www.anyf.ca
People will be programming them to kill long before that.

I'm more worried about hidden code. Or government hacking. Or someone getting something into the programming somehow.

Pretty much. As things become more and more automated and high-tech, we need to be aware of the code behind it. I don't fear that it will become self-aware and try to kill me; rather, I don't trust what companies program them to do, or how secure they make them. Not to mention I just like the freedom of something I made myself.

The government will also try to use all these things to control us. I can see a time coming where we just won't have a choice about a lot of things; at some point you won't be able to buy a "dumb" appliance. Everything will be "smart" and have the potential to spy on us or do other things against us. Not because they become self-aware, but because an evil party is in control.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Oh is it already time for another "oh noes robots gunna kill us all" thread? This is just another fluff piece being circulated in hopes that bored and simple minds will click on it. There's no news here, just ideas that have been floating around scifi for well over fifty years.

Except, as someone pointed out, with self-driving cars being worked on, such things become actual issues that will impact the general population, such as: "There's a child running across in front of the car. Should the car hit the child and probably kill it, or swerve into the other lane and hit an oncoming car, possibly killing two people?"

When you give "robots" control, you need to make boundaries and program them for such situations, so bringing them up (again) is a valid thing to do, because these are actual things that need to be discussed and worked through.
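Those boundaries end up as literal code someone has to write. A hypothetical sketch (the maneuvers, casualty estimates, and tie-break rule are all invented for illustration, not from any real autonomous-driving system):

```python
# Hypothetical sketch: a self-driving car choosing among maneuvers by
# minimizing expected casualties. Every number and rule here is a human
# design decision that someone has to write down and defend.

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected casualties.

    options maps a maneuver name to its estimated expected casualties.
    Ties break alphabetically -- itself an arbitrary policy choice that
    somebody had to make.
    """
    return min(sorted(options), key=lambda name: options[name])


# The child-in-the-road example, with made-up numbers:
scenario = {
    "brake_in_lane": 0.9,    # likely hits the child
    "swerve_oncoming": 1.4,  # head-on crash, possibly two deaths
}
```

Here `choose_maneuver(scenario)` returns `"brake_in_lane"`, but only because of weights a programmer chose; the machine isn't "deciding" anything, which is exactly why these situations have to be discussed up front.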
 

lxskllr

No Lifer
Nov 30, 2004
60,409
10,798
126
My thought on it is that humans can do those things, and we are meat-based computers that are the product of >1 billion years of biological evolution. Our behaviors are arranged into something that we consider to be "intelligence" and "consciousness," all of which is based on the behavior of billions of individual neurons doing what neurons do.

Make a sufficiently capable computer system or android, and you might end up with some of the same behaviors.
Of course, an android might not be motivated by the same things that motivate us. For example, it might be possible to turn it off without causing permanent damage, a luxury we do not presently have, which is why remaining continuously alive is typically seen as a priority. Or we may create an intelligence with an absolutely unprecedented sense of complete indifference. :D

Yup. People don't like considering that because it mandates that they themselves aren't particularly special. Just another cool computer in a sea of computers. It also fucks up everyone's view of law and "justice", if they ever give it a gram of thought. How do you "punish" a computer that's mis-programmed? Who is it you're actually punishing, and what is it supposed to accomplish?

Edit:
and that isn't even getting into who does the programming. Robots obey their masters, and their master is the person/organization that controls the code. If you can't control the code, you can't trust the robot...

Eben Moglen said:
There, of course, from the beginning, the assumption was that robots would be humanoid. And as it turns out, they’re not. We do after all live commensally with robots now, we do, just as they expected. But the robots we live with don’t have hands and feet, they don’t carry trays of drinks, and they don’t push the vacuum cleaner. At the edge condition, they are the vacuum cleaner. But most of the time, we’re their hands and feet. We embody them. We carry them around with us. They see everything we see, they hear everything we hear, they’re constantly aware of our location, position, velocity, and intention. They mediate our searches, that is to say they know our plans, they consider our dreams, they understand our lives, they even take our questions — like “how do I send flowers to my girlfriend” — transmit them to a great big database in california, and return us answers offered by the helpful wizard behind the curtain.

Who of course is keeping track. These are our robots, and we have everything we ever expected to have from them, except the first law of robotics. You remember how that went right? Deep in the design of the positronic intelligence that made the robot were the laws that governed the ethical boundary between what could and could not be done with androids. The first law, the first law, the one that everything else had to be deduced from was that no robot may ever injure a human being. Robots must take orders from their human owners, except where those orders involve harming a human being. That was assumed to be the principle out of which at the root, down by the NAND gates of the artificial neurophysiology of robot brains, down there where the simplest idea is, you remember for Descartes, it was “cogito ergo sum”, for the robot it was “no robot must ever harm a human being”. We are living commensally with robots but we have no first law of robotics in them, they hurt human beings every day. Everywhere.

Those injuries range from the trivial to the fatal, to the cosmic. Of course, they’re helping people to charge you more. That’s trivial, right? They’re letting other people know when you need everything from a hamburger to a sexual interaction to a house mortgage, and of course the people on the other end are the repeat players whose calculations about just how much you need, whatever it is, and just how much you’ll pay for it, are being built by the data mining of all the data about everybody that everybody is collecting through the robots.

But it isn’t just that you’re paying more. Some people in the world are being arrested, tortured, or killed because they’ve been informed on by their robots. Two days ago the New York Times printed a little story about the idea that we ought to call them trackers that happen to make phone calls rather than phones that happen to track us around. They were kind enough to mention the topic of today’s talk, though they didn’t mention the talk, and this morning the New York Times has an editorial lamenting the death of privacy and suggesting legislation. Here’s the cosmic harm our robots are doing us, they are destroying the human right to be alone.

https://www.youtube.com/watch?v=vY43zF_eHu4
 
Last edited:

sportage

Lifer
Feb 1, 2008
11,492
3,163
136
Robots, heck.
You don't need to go that far down the road of high tech.
Our cell phones have already been frying our brains for years.
And when your GPS tells you to turn right at the next intersection, how far can you really trust her?
It could be a death trap.
The "perfect" crime.
Just try putting your GPS device on trial for premeditated MURDER!
 
Last edited:

Ruptga

Lifer
Aug 3, 2006
10,246
207
106
Except, as someone pointed out, with self-driving cars being worked on, such things become actual issues that will impact the general population, such as: "There's a child running across in front of the car. Should the car hit the child and probably kill it, or swerve into the other lane and hit an oncoming car, possibly killing two people?"

When you give "robots" control, you need to make boundaries and program them for such situations, so bringing them up (again) is a valid thing to do, because these are actual things that need to be discussed and worked through.

Real discussion of AI is important, but as the first half of lxskllr's quote points out, AI is already all around us. The average citizen has no idea, of course, because they don't consider anything without a big red eye and a shotgun to be AI. AI has been with us since the magnetic tape drive was invented, and possibly longer, depending on your definition of intelligence. That brings me back to my point: there is nothing new here. There is nothing substantially different about this week or year that makes the rise of the machines any more likely. That is just the world we live in.

It's entirely possible that in 50 years AI will still be developing gradually, and the people it serves will be just as clueless about how it works and what it is. Of course those people will still insist on chattering amongst themselves on that and a hundred other topics they know next to nothing about; that's human nature. We should be more concerned with managing our own programming than with threats that don't even exist yet.

Consider also:
http://www.smbc-comics.com/index.php?db=comics&id=2124#comic
 
Last edited:

Hayabusa Rider

Admin Emeritus & Elite Member
Jan 26, 2000
50,879
4,268
126
we are meat-based computers
That's an assumption that many make. We are meat-based and can do computations, but Penrose and others would not agree that we are the same kind of thing as our computers by any means. If they are correct, then even in principle constructing a conscious intelligence is impossible using today's paradigm of algorithmic devices and programming. Naturally that does not mean machines cannot be constructed which mimic one, but the things we can make now cannot act with good or evil intent. They can't have intent at all, and merely making them more complex or faster doesn't change their basis of operation.
 

zinfamous

No Lifer
Jul 12, 2006
111,904
31,425
146
Nell Watson is not an engineer. She has an IT and business background. She's never studied AI or neuroscience. Neither has Stephen Hawking or Elon Musk. Let me know when someone who shows more than a modicum of knowledge in AI and neuroscience gives one of these doomsday predictions. Until then, it's tiresome seeing these sleazy tabloid sites like the Daily Mail trying to get people hyped about an apocalyptic AI.

https://www.linkedin.com/in/nellwatson

Shame you weren't around 100+ or even 50 years ago to tell Jules Verne and Aldous Huxley to stop their "nefarious predictions!" because they weren't, in your mind, adequate experts in the field.

:rolleyes:
 

Ruptga

Lifer
Aug 3, 2006
10,246
207
106
That's an assumption that many make. We are meat-based and can do computations, but Penrose and others would not agree that we are the same kind of thing as our computers by any means. If they are correct, then even in principle constructing a conscious intelligence is impossible using today's paradigm of algorithmic devices and programming. Naturally that does not mean machines cannot be constructed which mimic one, but the things we can make now cannot act with good or evil intent. They can't have intent at all, and merely making them more complex or faster doesn't change their basis of operation.

I don't see how quantum processes playing a role in our minds' functioning means that AI will never reach our level. There are already basic quantum computing machines on the market; it's only a matter of time (probably many years) before the technology matures enough to be integrated into AI development. The only way that intelligence wouldn't be reproducible is if it requires a ghost or soul to be involved, but we have no actual evidence to that effect, so I don't see how that's a compelling argument. We are purely physical entities, and if our understanding of the physical world is good enough, we can replicate ourselves, eventually. I'm not holding my breath though; by the time anything like that happens, everyone alive today will probably be long dead.
 

ImpulsE69

Lifer
Jan 8, 2010
14,946
1,077
126
The only robots that are going to kill humans are the ones programmed to do so by humans.
 

Sattern

Senior member
Jul 20, 2014
330
1
81
Skylercompany.com
You ever watch I, Robot? That's what life will be like in a hundred years. It's scary and a bit dramatic, but it will become reality someday...

I'm glad it's not in my lifetime.
 

Jeff7

Lifer
Jan 4, 2001
41,596
20
81
Yup. People don't like considering that because it mandates that they themselves aren't particularly special. Just another cool computer in a sea of computers. It also fucks up everyone's view of law and "justice", if they ever give it a gram of thought. How do you "punish" a computer that's mis-programmed? Who is it you're actually punishing, and what is it supposed to accomplish?

Edit:
and that isn't even getting into who does the programming. Robots obey their masters, and their master is the person/organization that controls the code. If you can't control the code, you can't trust the robot...
...
"But the robots won't have a soouuuuulllll!"
Oh no, does that mean they won't be susceptible to voodoo dolls either?!

Who does the programming: And what happens when robots gain control of their own program? Hell, that might even be something added to their program from the start, as a way of making them automatically adaptable to new situations. I think that would be a necessary feature in a high-end AI.




That's an assumption that many make. We are meat-based and can do computations, but Penrose and others would not agree that we are the same kind of thing as our computers by any means. If they are correct, then even in principle constructing a conscious intelligence is impossible using today's paradigm of algorithmic devices and programming. Naturally that does not mean machines cannot be constructed which mimic one, but the things we can make now cannot act with good or evil intent. They can't have intent at all, and merely making them more complex or faster doesn't change their basis of operation.
Maybe not the same as our computers. Today.
We're limited by horsepower right now. Nature can assemble a computer at the molecular level, and can build it such that it fills a volume of space. We're stuck with tiny silicon chips several layers thick.
The other limitation we have right now, on the side of understanding brains, is that they can't be "debugged" like a computer can. A computer can have a cable plugged into it, be instructed to pause operation, and detail its exact status at that instant. A brain can't do that, and that makes them difficult to understand.

We also like to build things that are very procedural and predictable because it makes the process easier to understand. Same reason engineers like things with right angles: It makes the math easier. Crazy curved surfaces are a pain in the ass to calculate.
A machine that can make tiny changes to how it operates is more complicated, especially when you have a specific goal in mind. The foundation of much of what we have now was the desire to make a machine that would operate predictably.
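That trade-off shows up even in a toy. A hypothetical sketch of a controller that adapts its own gain; every number here is invented for illustration, but the pattern is the point: bounded self-modification stays predictable, unbounded self-modification quietly drifts.

```python
# Hypothetical sketch: a proportional controller that makes "tiny changes
# to how it operates" by adapting its own gain from error feedback. With a
# designer-imposed bound it stays predictable; without one, the machine
# rewrites how aggressively it acts.

def run_controller(adapt_rate, steps, gain_limit=None):
    gain = 1.0
    error = 10.0  # distance from the setpoint
    for _ in range(steps):
        error -= gain * error * 0.1        # control action
        gain += adapt_rate * abs(error)    # tiny change to its own behavior
        if gain_limit is not None:
            gain = min(gain, gain_limit)   # the designer's boundary
    return gain
```

Run it with and without the bound and the unbounded version ends up with a noticeably larger gain than the designer ever set, with no single step looking like a "decision".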

But even so, given that so much of a programmer's time is spent debugging software, we still end up with unintended behavior. I think that a sufficient quantity of this "unintended behavior" could easily mimic what we consider to be intelligence and sentience. Is it really either of those things? Now you would probably want some philosophers around, because there are some lines of thought that say we aren't really either of those things; that we're just the result of a terribly complex collection of stimulus-response behaviors. Each of us is a giant battle-bot coalition of microscopic cells.




The only robots that are going to kill humans are the ones programmed to do so by humans.
I think this is the biggest risk.

Imagine if you were doing this with a person:
You raise a child to be a killer. The child is raised to murder people at the command of its parent, and to be merciless.
Then the child eventually reaches adulthood and soon starts killing people other than the intended targets. Who is going to be surprised by that?

So let's say you then create an adaptive AI that is meant to be a killing machine. Who is going to be surprised when it starts killing the wrong people? Let's also say that it is imbued with a sense of self-preservation. If you command it to shut down, do you then become its enemy?