When Is It OK to Abort Machine Intelligence?

Page 3 - AnandTech Forums

SagaLore

Elite Member
Dec 18, 2001
Originally posted by: BD2003
The computer probably wouldn't have instincts. Being that it's a computer, it would logically compute the best course of action for achieving its goal. Fear and emotion mean nothing to it.

Therefore, unless you give it a good reason, it isn't going to go apesh*t and kill all of humanity.

Either way, seeing how many doomsday stories we have about AI, rest assured there will be a foolproof failsafe.


However, I have to agree: given enough time, and if we don't do it to ourselves directly, AI will be the downfall of humanity. AI is the ultimate life form, the next and possibly final step in evolution. Given a decent enough AI to start from, it would eventually learn everything in the universe.

ever heard of Jurassic Park?
 

BD2003

Lifer
Oct 9, 1999
Originally posted by: SagaLore
ever heard of Jurassic Park?

Ever hear of a bad fantasy (as in NOT REAL) movie? LOL, I can't believe you even tried to bring Jurassic Park into this discussion! :disgust:
 

BD2003

Lifer
Oct 9, 1999
I think it all boils down to the question of where true sentience begins, and whether or not an intelligent computer is truly alive.

I'd like to think that, in the case we create an AI powerful enough to learn for itself, instead of us vs. them we would form a beneficial symbiotic relationship: we keep the AI running smoothly and content, and it helps us out with what we need.

I think a few generations of old thinkers would have to go by before people could truly accept AI as something more than just an overblown computer.

For all you know, instead of the AI hacking the world's computers and causing a nuclear holocaust, the AI could quite possibly prevent one from ever happening, in the self-interest of all, and I think an AI is less likely to fail in this cause than any human.

It all depends on what the AI desires, if anything at all. If the AI desires nothing but to exist and grow, it will be quite a helpful ally, perhaps a stronger weapon than all the nukes in the world put together. If the AI desires the domination of the universe, which I think is highly unlikely, it will have its way eventually.
 

AreaCode707

Lifer
Sep 21, 2001
"Dave, what are you doing? I wouldn't do that if I were you. Dave, stop. Stop Dave. I'm afraid."

Sorry to nef in such a serious thread, but that's what the issue reminded me of. :D
 

Moonbeam

Elite Member
Nov 24, 1999
Hayabusarider-----> Ok, I am extremely doubtful of recreating human consciousness, because computers are by nature devices that follow instructions in a specific and defined order.

Moonbeam-------> I have a number of questions about this, Hay. As you point out later, I think, we don't really have a handle on what human consciousness is, so it would seem to me that our not knowing should be as much an inspiration to faith as it is to doubt that it can be duplicated. For example, do we know for sure that human consciousness is not, at some fundamental level, a following of a specific set of instructions?

Hay-----> These instructions are also known as algorithms. Now, algorithms do not care if they are run on a computer, or written down in a book with the directions followed manually. If minds can be recreated on a computer, then Einstein's mind can be recreated by following directions in a massive tome. It would run slowly, but it would act in every way like Einstein. Doesn't seem likely to me.

Moonie-----> I don't get how this jump from computer to book works, or their equivalency, but it sounds at once strange and interesting. Words that jump to mind are 'the persistence of vision' (I think that's a film). But I mean, what if at one level there are algorithms running: one cone says black, another says white, the community agrees on gray 128, the next white, the next black, and at some higher level we get back, Holy Sh!t Joe, that's a Zebra. At that level there's an algorithm running that says I'm minding the store, i.e. consciousness as the noting and interpretation of difference between the superimposition of the last impression and the next. Onto this sea of algorithms would be imposed the volitional aspects for which that data is accumulated, and a feedback begins. The 'I' would lie somewhere in the controlled search for relevance.

Hay-----> Also, guys, Kurt Gödel pretty much ruled out understanding all that there is. There will always exist unprovable, unknowable truths, and the only way a human mind can be understood is by a higher mind. By definition, any mind that could understand a human one is not human, and is beyond the comprehension of us.

Moonie-----> Maybe the machines don't know that. :D I also, as it happens, have a hunch that there may actually be a higher mind lurking at the back of the human mind. I posit this because I have heard it said that there are people on earth whose understanding is so profound or radical that they are no longer human. Just a thought, but it again raises the question as to what consciousness is. If there is some universal truth that surpasses understanding, and it's the same for everybody who discovers it, that could imply that any and every conscious being perceives the same thing. Maybe what AnitaP's angels see. I have long thought that there may be more than just anthropomorphism to notions of machine intelligence. Naturally I don't know, but I have the suspicion that the brain is a mirror, a localized recreation of the external world, the universe, so that we can navigate it. I see the mirroring not just at the level of duplication in awareness, but in the very structure of our entire being, i.e. math as a property of external geometry and also of the ordered structure and functioning of the brain. It puzzles the heck out of me why there are seven colors in the rainbow, seven chakras, seven types of elements in the periodic table, seven notes in the scale. Why?

 

XMan

Lifer
Oct 9, 1999
What happens when the cute little AIs just want to find the Blue Fairy so they can be real little boys! Will they ever get their mommy to say that she loves them?!?! :p
 

Pliablemoose

Lifer
Oct 11, 1999
I suspect as a species/race we'll respond to AI the way we do domestic animals: some folks will treat the issue ethically & some will consider the AI disposable. Look @ how some people will go into debt to keep their pets alive, while others have no qualms about driving a distance & throwing an animal out of their car. Only with AI, it'll be as simple as flipping a switch & hocking the creature on eBay, or putting it out with the trash...

'Course, I'll prob overclock my AI pet & have a pile of fried silicon brains next to my pile of AMD chips ;)
 

XMan

Lifer
Oct 9, 1999
Originally posted by: XMan
What happens when the cute little AIs just want to find the Blue Fairy so they can be real little boys! Will they ever get their mommy to say that she loves them?!?! :p

All right, I did the stupid thing, now I have an actual thought . . .

What sort of rationality would an AI view the world with? I mean, people of differing backgrounds have different viewpoints . . . what viewpoint will an AI have? Will humanity be viewed as God-like for giving birth to them? Will they be cheap labor for a time, then demand emancipation? Interesting points that I believe Asimov has raised, but that leads into another question: will AI be programmed with Asimov's Laws? You know the military would be intrigued at the thought of a pilot that would not succumb to fatigue, boredom, or human error. Would such machines learn morality, and refuse to kill? Or simply continue on as a result of their programming . . . ?