Let's discuss artificial intelligence

hellokeith

Golden Member
Nov 12, 2004
I've been wanting to make an AI thread for a while. Slow day today, and it's on my mind, so here we go..

When I think about AI, a number of different aspects / goals come to mind:

* menial & repetitive tasks
* fault tolerance, emergency situations
* uninhabitable environments
* entertainment
* companionship
* research / "can it be done?"

and lower level things like:

* recognition
* pathfinding
* strategy
* learning
* judgement
* personality

Ok, some random thoughts and statements:

Are there situations that need AI assistance but ultimately require human responsibility? How do we decide which situations should have autonomous AI vs semi-control / "man-in-the-loop"?

If an autonomous AI injures or kills a human, who is responsible?

What is the purpose of attempting to "humanize" an artificial intelligence? If it is beneficial, is the benefit for the AI or for humans interacting with the AI?

Does modeling an AI in such a way that it could develop human traits such as tastes, interests, likes & dislikes, personality, etc. create a "child / parent" or "creation / god" relationship with the human developer? Should AI ever even be allowed to develop these capacities? Can an AI be treated badly by its human developer?

Does judgement require human traits? For example, an autonomous vehicle comes to an intersection, but due to 3rd-party human error, it has to make a choice between hitting a vehicle in the intersection or a bicyclist on the sidewalk. If the vehicle in the intersection was a loaded elementary school bus, this situation would be difficult for a human to decide, let alone an AI.

If an AI has learning capability and it solves a problem, such as a long-unsolved math problem, who gets the credit? What impact would this have on research? And if the AI develops a model or formula or algorithm for something, would you inherently trust it or distrust it because of the source?

more to come..
 

The Bakery

Member
Mar 24, 2008
Wow you bring up a lot there.

In terms of credit, I would venture to say that an AI that accomplishes something based on its basic programming is not entitled to its accomplishment. However, if the accomplishment is a direct result of the AI developing new methods of inspection or execution, it gets credit.

Otherwise, it should get no more credit than C++ does for Windows.

Same above for injury and responsibility.

As to the human decision issue: I do not believe that decisions of efficiency or mortality are necessarily human. An AI would be programmed to minimize human loss, and would likely make better "game-time" decisions in automotive crashes. The important thing here is not to act humanly, but to reduce the incidence of human death. I think that's separate from "human" intuition.

Originally posted by: hellokeith
What is the purpose of attempting to "humanize" an artificial intelligence? If it is beneficial, is the benefit for the AI or for humans interacting with the AI?

Does modeling an AI in such a way that it could develop human traits such as tastes, interests, likes & dislikes, personality, etc. create a "child / parent" or "creation / god" relationship with the human developer? Should AI ever even be allowed to develop these capacities? Can an AI be treated badly by its human developer?

The cynic in me says the underlying psychology in that decision is to play god, plain and simple. Also, my heart tells me it's really just an attempt to learn more about ourselves. I find that interesting, though: we have such a feeble understanding of how the traits that make us human actually work that any AI we create, without a substantial increase in our understanding of human neurology and psychology, would be decidedly not human beyond novelty.

I don't think we can just imbue a computer with emotion, reason, and feelings and then watch it grow to see what happens. I think that consciousness is vastly complicated and misunderstood. This post was probably disordered, but I'm not going to edit it ;)

I'd love to read more about it though.
 

NanoStuff

Banned
Mar 23, 2006
Originally posted by: hellokeith
If an autonomous AI injures or kills a human, who is responsible?
The AI, of course, no less than a parent would be responsible for the actions of the child. If there was no intention to do harm, I don't see why there should be punishment; it's a live-and-learn situation.

Originally posted by: hellokeith
What is the purpose of attempting to "humanize" an artificial intelligence?

I think the difficulty will be to dehumanize AI rather than humanize it. AI will more likely than not be humanized by default, assuming your definition of AI includes any cognitive being within a computer rather than a strictly artificial architecture, which isn't precisely the correct definition but is the common one.

Originally posted by: hellokeith
Can an AI be treated badly by its human developer?

Absolutely. I think this could be a great humanitarian disaster before people accept that a fully featured humanoid AI is no less perceptive than a human made out of meat. Much as we had cultural hierarchies between humans in the past, we may have the same between humans and humanoids in the future. Hopefully we have learned something in all those centuries and this can be avoided, but I don't have much confidence in humanity in this regard.

The best thing to do may be to continue improving narrow AI systems rather than creating something general. The easiest way to create a general computer intelligence would be to copy human brain architecture, so this is likely the path we will take; however, such an intelligence clearly cannot be used as an expendable service to biological humans. I think we would do well to give such beings human rights and encourage them to pursue scientific ventures. Something that would pose less of a moral dilemma would be to transfer the contents of our own brains onto a suitable computational platform, in which case it could simply be us who become the super-intelligence we were looking to create.

Originally posted by: hellokeith
Does judgement require human traits? For example, an autonomous vehicle comes to an intersection, but due to 3rd-party human error, it has to make a choice between hitting a vehicle in the intersection or a bicyclist on the sidewalk. If the vehicle in the intersection was a loaded elementary school bus, this situation would be difficult for a human to decide, let alone an AI.

Which human traits? Some, of course: the ones that bring about judgement. How the AI would respond in such a situation would largely depend on whether it's a scripted AI or a simulated AI. A scripted AI would have a very consistent and reliable response, presuming a wireless network between local vehicles to provide some assistance with data. A simulated AI would be much more flexible under varying circumstances, but less reliable at making what we would consider an ideal decision. More likely to fail, but less likely to fail horribly, would be one way to sum it up.
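To make the scripted case concrete, here is a minimal sketch of what such a decision rule might look like. Everything in it is hypothetical: the maneuver names and harm estimates are invented for illustration, and a real system would derive them from sensors and the vehicle network rather than hard-coding them.

```python
# Hypothetical sketch of a "scripted" collision decision. The maneuvers and
# expected-harm numbers below are invented for illustration only.

def scripted_choice(options):
    """Pick the maneuver with the lowest estimated human harm.

    Deterministic: identical inputs always yield the identical choice,
    which is what makes a scripted AI consistent and auditable.
    """
    return min(options, key=lambda o: o["expected_harm"])

options = [
    {"maneuver": "brake_straight", "expected_harm": 0.9},
    {"maneuver": "swerve_left", "expected_harm": 0.6},
    {"maneuver": "swerve_right", "expected_harm": 0.7},
]
print(scripted_choice(options)["maneuver"])  # always "swerve_left"
```

A simulated AI would replace that fixed rule with learned behaviour, which is where both the extra flexibility and the reduced predictability come from.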

 

spikespiegal

Golden Member
Oct 10, 2005
I'm sorry, but everything here is entirely theoretical because I'm not aware of any real application of AI.

99% of what is considered AI software reverse engineers into nothing more than complex mathematical algorithms anyway.

How can something that's based on binary digital processing 'think' in a way that can't be predicted by just applying a more complex algorithm to it? Please show me an example of a machine that actually thinks and we can discuss the rest.
 

NanoStuff

Banned
Mar 23, 2006
Originally posted by: spikespiegal
I'm sorry, but everything here is entirely theoretical because I'm not aware of any real application of AI.

You may be surprised to find that in all probability it's AI that is managing your retirement fund. If that's not good enough for you, your life probably depends on the artificial intelligence embedded in nuclear reactor control systems, and as a matter of fact, the power grid as a whole.

Originally posted by: spikespiegal
99% of what is considered AI software reverse engineers into nothing more than complex mathematical algorithms anyway.

Obviously you're not a computer scientist. All software reverse engineers to mathematical algorithms, not 99%. Actually, that's being modest: all physical systems 'reverse engineer' to mathematical algorithms.

Originally posted by: spikespiegal
How can something that's based on binary digital processing 'think' in a way that can't be predicted by just applying a more complex algorithm to it?

The brain does it, so obviously it can be done. There's no threshold of 'predictability'; the more complex the system, the less certain the outcome. The brain can certainly be predicted with algorithms. It has, in fact, been predicted with algorithms.

Originally posted by: spikespiegal
Please show me an example of a machine that actually thinks and we can discuss the rest.

Again, you're your own example. A biological machine, but a machine nonetheless.
 

hellokeith

Golden Member
Nov 12, 2004
One reason some AI systems are predictable is their limited input data and limited recognition capability.

In a difficult situation, like the intersection example I provided in the OP, a human might not make the same decision two times in a row. That is because we have an enormous stream of sensory data coming in every tenth of a second leading up to the decision to hit the bus full of little kids, maybe hurting a bunch of them, or to mow over the bicyclist, certainly killing him. Right now, AI systems just don't have the scale of input that humans have. NanoStuff said it very correctly: "no threshold of 'predictability', the more complex the system, the less certain the outcome." Also, humans have preconceived notions, emotions, and prejudices, and can be distracted. All this plays out to the point of decision. Give an AI a hundredth of our data input capability and object recognition, and indeed predictability goes out the window.
 

Braznor

Diamond Member
Oct 9, 2005
Here is a very important question: can an artificially intelligent machine conceive a true random number without the need for a seed, in a spontaneous kind of way?
 

wwswimming

Banned
Jan 21, 2006
i can propose an AI programming assignment. some of the keywords would be "ginormous", "insane", "awesome", and "sick". input sensors would be a wave buoy and a sound meter to measure crowd applause, and functions would include a voice synthesizer capable of multiple voices.

mission #1 - to narrate the surf contest at Maverick's.

oh yeah, the machine has to be able to flash a virtual 'shaka', AND maintain a shaka-awesome ratio close to a normal pro-surfer-turned-webcaster.
 

NanoStuff

Banned
Mar 23, 2006
Originally posted by: Braznor
Can an artificially intelligent machine conceive a true random number without the need for a seed, in a spontaneous kind of way?

No more than humans can. The only way this can be achieved is if it has access to a quantum generator; however, I don't see how this could be integrated into the architecture in a way that would make it a natural conception. Rather, it would be much like humans, where an instrument is observed and a response returned.
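For anyone who hasn't run into the seed point before, a quick sketch of the distinction (standard library only; the printed values themselves don't matter, only whether the two streams match):

```python
# A pseudo-random generator is fully determined by its seed: same seed,
# same stream. "True" randomness has to come from entropy outside the
# algorithm, which the operating system gathers from hardware events.
import os
import random

a = random.Random(42)
b = random.Random(42)
print([a.randint(0, 9) for _ in range(5)])
print([b.randint(0, 9) for _ in range(5)])  # identical to the line above

# Drawing on an external entropy source instead, much like observing
# an instrument and returning a response:
print(os.urandom(4).hex())
```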
 

hellokeith

Golden Member
Nov 12, 2004
Originally posted by: Braznor
Here is a very important question: can an artificially intelligent machine conceive a true random number without the need for a seed, in a spontaneous kind of way?

How random are the numbers that humans generate? I would make a guess that over a large set, human-generated numbers aren't very random.
 

Braznor

Diamond Member
Oct 9, 2005
Originally posted by: NanoStuff
Originally posted by: Braznor
Can an artificially intelligent machine conceive a true random number without the need for a seed, in a spontaneous kind of way?

No more than humans can. The only way this can be achieved is if it has access to a quantum generator, however I don't see how this can be integrated into the architecture where it would be a natural conception. Rather it would be much like humans, where an instrument is observed and a response returned.

Our brains do have access to a computational process decoupled from reality, something which is capable of a top-down architecture. Until we can really make machines access this computational process as well, AI will be restricted to mathematical algorithms and structured responses.

It's this computational process that provides us the frame of mental reference we call consciousness, IMHO.
 

Braznor

Diamond Member
Oct 9, 2005
Originally posted by: hellokeith
Originally posted by: Braznor
Here is a very important question: can an artificially intelligent machine conceive a true random number without the need for a seed, in a spontaneous kind of way?

How random are the numbers that humans generate? I would make a guess that over a large set, human-generated numbers aren't very random.

I would venture a guess and say extremely random. Even a child can choose a random number out of a given set instinctively, whereas a machine must be programmed to make a choice, and even then only with the input of a seed.

 

NanoStuff

Banned
Mar 23, 2006
Originally posted by: Braznor
Our brains do have access to a computational process decoupled from reality

That's absurd, I assure you.

Originally posted by: Braznor
Even a child can choose a random number out of a given set

No, a child can't. Nobody can. Randomness and pseudo-randomness are different things entirely, and a binary computer can do the latter just as well. Much better, in fact.
 

LostUte

Member
Oct 13, 2005
Originally posted by: Braznor
Originally posted by: hellokeith
Originally posted by: Braznor
Here is a very important question: can an artificially intelligent machine conceive a true random number without the need for a seed, in a spontaneous kind of way?

How random are the numbers that humans generate? I would make a guess that over a large set, human-generated numbers aren't very random.

I would venture a guess and say extremely random. Even a child can choose a random number out of a given set instinctively, whereas a machine must be programmed to make a choice, and even then only with the input of a seed.

Humans are terrible at it. For example, if you ask a human to write down a sequence of heads and tails (as if they were flipping a coin), it is usually quite easy to distinguish the human's output from the actual results of a sequence of coin flips (assuming the sequence is long).

Humans are too good at seeing patterns to be truly random. In fact, we often see them where they don't exist. As a result, humans tend to have too few repeating sequences because they are trying to make their output appear random. Seeing a group of 5 or 6 heads or tails in a row doesn't seem random to humans, so they don't put it in their own sequence.
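For the curious, here is a minimal sketch of the run-length check described above. The thresholds are rough rules of thumb rather than a formal statistical test:

```python
# In n fair coin flips, the longest run of identical outcomes is typically
# around log2(n). Human-faked sequences alternate too eagerly and usually
# top out at runs of 3 or 4.
import random

def longest_run(seq):
    """Length of the longest block of identical consecutive symbols."""
    best = cur = 1
    for prev, nxt in zip(seq, seq[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

real = "".join(random.choice("HT") for _ in range(200))
faked = "HTHHTHTTHTHHTTHTHTHH" * 10  # a stereotypically "balanced" guess

print(longest_run(real))   # usually 6-8 for 200 genuine flips
print(longest_run(faked))  # 3, far too short to be chance
```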
 

Ruptga

Lifer
Aug 3, 2006
If I make an AI and name it Bill, then someone torrents Bill for their own use, where did he go? I still have him, but the exact same "personality" and such are over in India at the exact same time.

If I jack into a machine, and someone pulls a shotgun on it, is that assault? What if I'm extending myself to an army of farm equipment and my jealous neighbor pulls his shotgun on a combine, is that assault or just plain ol' property damage?

If I've extended myself into a computer network, where am I? My body is sitting at home in a chair with a cable in its neck, but I'm not conscious in it, and I'm equally conscious in a number of machines across the network. It's kind of like asking where the internet is.

Transhumanism philosophy sure is just bizarre, but fun to think about.
 

NanoStuff

Banned
Mar 23, 2006
Originally posted by: ADDAvenger
Transhumanism philosophy sure is just bizarre

It's more of a deep technological favoritism than a philosophy. I suppose it can seem like one to people who do not understand. Perhaps this is true of everything categorized as philosophy; it's only philosophy to the observer.

The situation you described is rationally explainable. Unless your combine thinks and feels on a human level, it's property damage.

Originally posted by: ADDAvenger
If I've extended myself into a computer network, where am I? My body is sitting at home in a chair with a cable in its neck, but I'm not conscious in it, and I'm equally conscious in a number of machines across the network. It's kind of like asking where the internet is.

You are where you are, in a computer chair, with data being transmitted in and out of your brain from the network, not unlike what is being done now. Why would you be equally conscious in a number of other machines? If you want to copy yourself to other machines, sure, but not otherwise.
 

Ruptga

Lifer
Aug 3, 2006
Originally posted by: NanoStuff
Originally posted by: ADDAvenger
Transhumanism philosophy sure is just bizarre

It's more of a deep technological favoritism than a philosophy. I suppose it can seem like one to people who do not understand. Perhaps this is true of everything categorized as philosophy; it's only philosophy to the observer.

The situation you described is rationally explainable. Unless your combine thinks and feels on a human level, it's property damage.

Originally posted by: ADDAvenger
If I've extended myself into a computer network, where am I? My body is sitting at home in a chair with a cable in its neck, but I'm not conscious in it, and I'm equally conscious in a number of machines across the network. It's kind of like asking where the internet is.

You are where you are, in a computer chair, with data being transmitted in and out of your brain from the network, not unlike what is being done now. Why would you be equally conscious in a number of other machines? If you want to copy yourself to other machines, sure, but not otherwise.

Call it what you like, but there is a branch of philosophy that deals with transhumanism, and I was actually proposing a couple different, but related, thought experiments ;)

In the example you quoted, why should we say I'm in a computer chair? My human body would be, but I'm talking about where my mind would be. Suppose I were jacked into a network where, for example, a number of combines were both housing my consciousness and being controlled by me, thus thinking and feeling on a human level, except that "I" am in several of these things at the same time and no one of them could contain "me." In this scenario, I would be a human mind functioning entirely inside computers, basically an AI, except not artificial. Suppose that together all these combines on this megafarm have more computing power than my brain, so it is more effective for me to possess them than to remain in my human brain and simply mind-control them. If all this were the case, where would the line between farm machinery and me be drawn? For all intents and purposes, I would be the farm machinery while I'm at work, and I would still be a person.
We could take this further and suppose that instead of farm machinery we're talking about racks, and instead of a farm we're talking about a datacenter. If someone, for whatever reason, efficiency or plain ol' immortality, uploaded themself into such a datacenter and just let their human body die, how would we even begin to think about them in legal terms? Place of residence shouldn't be too hard, that's their datacenter; but what about other stuff, say marriage: could two computer-people marry? Their minds were originally human, so there's no doubt they're people, but they're also sentient machines now. How will we treat our computer-people, AI, and machinery when the lines between them all blur? I'm not so sure all this will happen within my lifetime, but it will probably happen eventually, and in the meantime it does make you think about how we decide what makes something whatever it is.
 

NanoStuff

Banned
Mar 23, 2006
Originally posted by: ADDAvenger
Suppose I were jacked into a network where, for example, a number of combines were both housing my consciousness and being controlled by me, thus thinking and feeling on a human level, except that "I" am in several of these things at the same time and no one of them could contain "me." In this scenario, I would be a human mind functioning entirely inside computers, basically an AI, except not artificial. Suppose that together all these combines on this megafarm have more computing power than my brain, so it is more effective for me to possess them than to remain in my human brain and simply mind-control them. If all this were the case, where would the line between farm machinery and me be drawn? For all intents and purposes, I would be the farm machinery while I'm at work, and I would still be a person.

Alright, I see, but the line would still be obvious. If you have an instance of your mind inside a combine, the computer platform running your mind would then be the instance of you, and the surrounding metal components would be the combine. There's still a physical compartment processing your thoughts, which will surely be protected by law from mistreatment. You wouldn't be 'farm machinery' any more than a regular combine operator is farm machinery.

Originally posted by: ADDAvenger
If someone, for whatever reason, efficiency or plain ol' immortality, uploaded themself into such a datacenter and just let their human body die, how would we even begin to think about them in legal terms?

I'm certain this is inevitable in human-technological evolution, so it won't be much longer before we find out.

Originally posted by: ADDAvenger
could two computer-people marry?

Can't find a reason why not, apart from spending near-eternity with the same person.

Originally posted by: ADDAvenger
they're also sentient machines now

Nothing has changed here.

Originally posted by: ADDAvenger
How will we treat our computer-people, AI, and machinery when the lines between them all blur?

Treat anything you give people-capabilities like people.
 

hellokeith

Golden Member
Nov 12, 2004
Interesting that you bring up that subject area: not too long ago, "remote hunting" was outlawed here in Texas. Yes, that's right: people would pay to shoot an animal via a web interface.
 

Ruptga

Lifer
Aug 3, 2006
Originally posted by: NanoStuff
Originally posted by: ADDAvenger
Suppose I were jacked into a network where, for example, a number of combines were both housing my consciousness and being controlled by me, thus thinking and feeling on a human level, except that "I" am in several of these things at the same time and no one of them could contain "me." In this scenario, I would be a human mind functioning entirely inside computers, basically an AI, except not artificial. Suppose that together all these combines on this megafarm have more computing power than my brain, so it is more effective for me to possess them than to remain in my human brain and simply mind-control them. If all this were the case, where would the line between farm machinery and me be drawn? For all intents and purposes, I would be the farm machinery while I'm at work, and I would still be a person.

Alright, I see, but the line would still be obvious. If you have an instance of your mind inside a combine, the computer platform running your mind would then be the instance of you, and the surrounding metal components would be the combine. There's still a physical compartment processing your thoughts, which will surely be protected by law from mistreatment. You wouldn't be 'farm machinery' any more than a regular combine operator is farm machinery.

Originally posted by: ADDAvenger
If someone, for whatever reason, efficiency or plain ol' immortality, uploaded themself into such a datacenter and just let their human body die, how would we even begin to think about them in legal terms?

I'm certain this is inevitable in human-technological evolution, so it won't be much longer before we find out.

Originally posted by: ADDAvenger
could two computer-people marry?

Can't find a reason why not, apart from spending near-eternity with the same person.

Originally posted by: ADDAvenger
they're also sentient machines now

Nothing has changed here.

Originally posted by: ADDAvenger
How will we treat our computer-people, AI, and machinery when the lines between them all blur?

Treat anything you give people-capabilities like people.

Oh, you're no fun. I could've come up with all those one-liner answers; I thought this was a speculation thread? Pick up an interesting idea and run with it already.
 

NanoStuff

Banned
Mar 23, 2006
Originally posted by: ADDAvenger
I could've come up with all those one-liner answers

You came up with the questions; that suggests you didn't have the answers, one-liner or not :)
 

Gannon

Senior member
Jul 29, 2004
The idea of machine consciousness is deeply flawed to begin with; it's based on a naive understanding of nature. For the first 2-3 years of human life you are not conscious, i.e. you are not aware at all; it is exactly like death.

Somewhere along the developmental path of the mind, humans become self-aware in a way other animals are not. No other animal has developed culture to the extent of human beings; I wouldn't be surprised to find out that a huge portion of the animal kingdom is not aware of its existence and is only machine-reactive.

Can you anesthetize an AI consciousness? I doubt it. There are some hard wetware issues we don't understand about self-awareness.

Being able to problem-solve and respond to an environment does not mean you are self-aware at the level of a child past 3 years old.

Would an AI understand self-destruction, for instance, if one didn't program it to? Why should it value itself?

Humans have built-in systems based on animalistic principles: territory, feelings, personal space, etc. Why would a machine have any of that unless human beings gave it all the baggage we have (which I know is going to happen ... sigh)?

I think A.I. will simply augment human intelligence rather than replace it; we don't know what kinds of problems A.I. won't be good at, for instance.

All of our technology is merely prosthetics for human minds, and human hardware. What is a monitor without human eyes for instance? It's optimized for human eyesight.
 

SphinxnihpS

Diamond Member
Feb 17, 2005
All human-created AI is, and will always be, human. Humans are indistinguishable from their technology. To give you an extreme example, the fictional Cylons from Battlestar Galactica are human.

AI is the next step of human evolution. This may sound absurd on its face, but it is quite logically true. We are used to evolution taking immense amounts of time, but we have effectively sped it up with the creation of AI. Example: an animal evolves a trait that helps it survive; longer claws, bigger eyes, faster legs, a more fluid-dynamic design, a BIGGER BRAIN. A human has a big brain, which helps it manipulate the world around it. Humans have since used their brains to speed up evolution by using tools, forming language and culture, and organizing. We are now at the point where our tools themselves are beginning to think. We are also at the point where our tools are themselves augmenting or surpassing our ability to design better, faster tools. It is only a matter of time before the tools we have made are capable of fully modeling the form and function of the human brain itself. Then they will make it bigger, better, faster (probably smaller, actually). Augmentation of the human brain will just be an intermediary step. Eventually it will be replaced by a mechanical brain, as will the body.

The questions most people have are self-centered, non-thinking drivel. What will the law be? How will we treat AI people and computer intelligences? This is the garbage of the William Gibson realm. We will be the machines. We will be enhanced and then move completely to machines. Why? Because they will be better.

The only serious moral question will be what to do with the inefficient, resource-sucking meat people too ignorant or stupid to desire this. My guess is we will exterminate them.

Oh, and in case you think this is like 500 years or more in the future, try about 100, and you will be alive (and doing science).
 

SphinxnihpS

Diamond Member
Feb 17, 2005
I ABSOLUTELY CAN'T RESIST THIS!

Originally posted by: Gannon
The idea of machine consciousness is deeply flawed to begin with; it's based on a naive understanding of nature. For the first 2-3 years of human life you are not conscious, i.e. you are not aware at all; it is exactly like death.

A. You are a machine.

B. I have concrete memories, verified by my parents/grandparents/old family friends, of being less than 2 years old.

C. All kidding aside, humans are self-aware far earlier than you suspect (try fetuses), and not being self-aware is not the same as being dead. In fact, at the ages you suggest, the human brain is the busiest organ in the body, what with all the rewiring going on.

The other day I kicked a dog. It bit me. I think it was self-aware.

Originally posted by: Gannon
Somewhere along the developmental path of the mind, humans become self-aware in a way other animals are not. No other animal has developed culture to the extent of human beings; I wouldn't be surprised to find out that a huge portion of the animal kingdom is not aware of its existence and is only machine-reactive.

This is the direct result of the lack of pure processing horsepower. Your last statement in this paragraph once again points out your misconception of machines. Again, you are one.

Originally posted by: Gannon
Can you anesthetize an AI consciousness? I doubt it. There are some hard wetware issues we don't understand about self-awareness.

An AI capable of being anesthetized would require a pain function, no? For now, let's just say you could easily make a switch to turn off the self-diagnostic function of any AI, which is all that our pain response is.

Originally posted by: Gannon
Being able to problem-solve and respond to an environment does not mean you are self-aware at the level of a child past 3 years old.

There is currently no AI even remotely close to the processing power of an ant. The full functionality of a 3-year-old human mind is decades away.

Originally posted by: Gannon
Would an AI understand self-destruction, for instance, if one didn't program it to? Why should it value itself?

How do you understand it? You were programmed. YES YOU WERE!

Originally posted by: Gannon
Humans have built-in systems based on animalistic principles: territory, feelings, personal space, etc. Why would a machine have any of that unless human beings gave it all the baggage we have (which I know is going to happen ... sigh)?

This is a simple function of the Darwinian principle of natural selection. We have these instincts (programmed into us by evolution and experience) because they are a survival tool. Call it baggage and give it a negative connotation all you wish, but the simple fact is this: we are here because we are the best killers nature has yet devised. Rather than question your instincts, try questioning your altruism (the bleeding-heart, as-fictional-as-Jesus ideals/morals you feel set you apart from "animals"). Perhaps your programming contains errors?

Originally posted by: Gannon
I think A.I. will simply augment human intelligence rather than replace it; we don't know what kinds of problems A.I. won't be good at, for instance.

Actually, we do already know what the best calculator will be for each kind of problem. The fact that some of the calculators do not exist yet is extraneous to knowing how to use them when they become available; i.e. I already know everything I need to about how to get myself to the moon. It is even possible, just not practical. 100 years ago, it was also known, just not yet possible.

Originally posted by: Gannon
All of our technology is merely prosthetics for human minds, and human hardware. What is a monitor without human eyes for instance? It's optimized for human eyesight.

Yet all still undeniably human and just part of getting to the next paradigm of evolution.

 

Muse

Lifer
Jul 11, 2001
Originally posted by: SphinxnihpS
All human-created AI is, and will always be, human. Humans are indistinguishable from their technology. To give you an extreme example, the fictional Cylons from Battlestar Galactica are human.

AI is the next step of human evolution. This may sound absurd on its face, but it is quite logically true. We are used to evolution taking immense amounts of time, but we have effectively sped it up with the creation of AI. Example: an animal evolves a trait that helps it survive; longer claws, bigger eyes, faster legs, a more fluid-dynamic design, a BIGGER BRAIN. A human has a big brain, which helps it manipulate the world around it. Humans have since used their brains to speed up evolution by using tools, forming language and culture, and organizing. We are now at the point where our tools themselves are beginning to think. We are also at the point where our tools are themselves augmenting or surpassing our ability to design better, faster tools. It is only a matter of time before the tools we have made are capable of fully modeling the form and function of the human brain itself. Then they will make it bigger, better, faster (probably smaller, actually). Augmentation of the human brain will just be an intermediary step. Eventually it will be replaced by a mechanical brain, as will the body.

The questions most people have are self-centered, non-thinking drivel. What will the law be? How will we treat AI people and computer intelligences? This is the garbage of the William Gibson realm. We will be the machines. We will be enhanced and then move completely to machines. Why? Because they will be better.

The only serious moral question will be what to do with the inefficient, resource-sucking meat people too ignorant or stupid to desire this. My guess is we will exterminate them.

Oh, and in case you think this is like 500 years or more in the future, try about 100, and you will be alive (and doing science).

I agree with you up to a point, and that point is your belief that machines will replace us. They are extensions of ourselves; in that you are correct, and as Marshall McLuhan reflected, the computer is an extension of the brain, just as my roller skates (and cars) are extensions of my feet. However, you are not going to remove or replace my feet if I can help it, and you had better put up your guard if you try to replace my brain!

Edit: I knew a guy (I only met him) who was a fellow student of one of my very best friends at Caltech. He seemed like quite a nice guy, but reserved. It was many years ago (before the advent of the personal computer), and he had the same opinion: that machines would replace us in terms of brain power and make us obsolete. Not long afterward he committed suicide. Somehow it didn't surprise me.