Yup. People don't like considering that because it mandates that they themselves aren't particularly special. Just another cool computer in a sea of computers. It also fucks up everyone's view of law and "justice", if they ever give it a gram of thought. How do you "punish" a computer that's mis-programmed? Who is it you're actually punishing, and what is it supposed to accomplish?
Edit:
and that isn't even getting into who does the programming. Robots obey their masters, and their master is the person/organization that controls the code. If you can't control the code, you can't trust the robot...
...
"But the robots won't have a soouuuuulllll!"
Oh no, does that mean they won't be susceptible to voodoo dolls either?!
Who does the programming: And what happens when robots gain control of their own program? Hell, that might even be something added to their program from the start, as a way of making them automatically adaptable to new situations. I think that would be a necessary feature in a high-end AI.
That's an assumption that many make. We are meat-based and can do computations, but Penrose and others would not agree that we are the same thing as our computers by any means. If they're correct, then even in principle, constructing a conscious intelligence is impossible using today's paradigm of algorithmic devices and programming. Naturally, it does not mean that machines cannot be constructed which mimic consciousness, but the things we can make now cannot act with good or evil intent. They can't have intent at all, and merely making them more complex or faster doesn't change their basis of operation.
Maybe not the same as our computers. Today.
We're limited by horsepower right now. Nature can assemble a computer at the molecular level, and can build it such that it fills a volume of space. We're stuck with tiny silicon chips several layers thick.
The other limitation we have right now, on the side of understanding brains, is that they can't be "debugged" like a computer can. A computer can have a cable plugged into it, be instructed to pause operation, and detail its exact status at that instant. A brain can't do that, and that makes them difficult to understand.
We also like to build things that are very procedural and predictable because it makes the process easier to understand. Same reason engineers like things with right angles: It makes the math easier. Crazy curved surfaces are a pain in the ass to calculate.
A machine that can make tiny changes to how it operates is more complicated, especially when you have a specific goal in mind. The foundation of much of what we have now was to make a machine that would operate predictably.
But even so, given that so much of a programmer's time is spent debugging software, we still end up with unintended behavior. I think that a sufficient quantity of this "unintended behavior" could easily mimic what we consider to be intelligence and sentience. Is it really either of those things? Now you would probably want some philosophers around, because there are some lines of thought that say that we aren't really either of those things, that we're just the result of a terribly complex collection of stimulus-response behaviors. Each of us is a giant battle-bot coalition of microscopic cells.
The only robots that are going to kill humans are the ones programmed to do so by humans.
I think this is the biggest risk.
Imagine if you were doing this with a person:
You raise a child to be a killer. The child is raised to murder people at the command of its parent, and to be merciless.
Then the person eventually reaches adulthood, and soon starts killing people other than just the intended targets. Who is going to be surprised by that?
So let's say you then create an adaptive AI that is meant to be a killing machine. Who is going to be surprised when it starts killing the wrong people? Let's also say that it is imbued with a sense of self-preservation. If you command it to shut down, do you then become its enemy?