- Nov 12, 2004
- 1,664
- 0
- 0
I've been wanting to make an AI thread for a while. Slow day today, and it's on my mind, so here we go..
When I think about AI, a number of different aspects / goals come to mind:
* menial & repetitive tasks
* fault tolerance, emergency situations
* uninhabitable environments
* entertainment
* companionship
* research / "can it be done?"
and lower level things like:
* recognition
* pathfinding
* strategy
* learning
* judgement
* personality
OK, some random thoughts and questions:
Are there situations that need AI assistance but ultimately require human responsibility? How do we decide which situations should get fully autonomous AI versus semi-autonomous / "man-in-the-loop" control?
If an autonomous AI injures or kills a human, who is responsible?
What is the purpose of attempting to "humanize" an artificial intelligence? If it is beneficial, is the benefit for the AI or for humans interacting with the AI?
Does modeling an AI in such a way that it could develop human traits such as tastes, interests, likes & dislikes, personality, etc., create a "child / parent" or "creation / god" relationship with the human developer? Should AI ever even be allowed to develop these capacities? Can an AI be treated badly by its human developer?
Does judgement require human traits? For example, an autonomous vehicle comes to an intersection, but due to third-party human error, it has to choose between hitting a vehicle in the intersection or a bicyclist on the sidewalk. If the vehicle in the intersection were a loaded elementary school bus, this situation would be difficult for a human to decide, let alone an AI.
If an AI has learning capability and it solves a problem, such as a long-unsolved math problem, who gets the credit? What impact would this have on research? If the AI develops a model, formula, or algorithm for something, would you inherently trust or distrust it because of the source?
more to come..