maybe i've been drinking and/or smoking too much pot, but the thought occurred to me that true 'artificial intelligence' seems neither possible nor useful.
we have machines that can take input data (from sensors, other computers, user input, whatever), process it according to preprogrammed parameters, and perform the output functions they deem appropriate. they are of course capable of 'learning' (e.g. adaptive strategies in computer-controlled car engines), but only by way of preprogrammed rules that essentially tell the computer HOW to learn. we could essentially make a 'perfect' humanoid robot if we could overcome the engineering problems (extremely sophisticated input and output devices, power issues, processing power, an insane amount of programming, etc.).
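to make that concrete, here's a toy python sketch of the kind of 'learning' i mean (the sensor, the numbers, and the update rule are all made up for illustration, not real ECU code): the machine adapts a parameter from feedback, but the rule for adapting is itself hard-coded, so it only ever 'learns' in exactly the way it was told to.

```python
import random

# toy sketch of a fake engine computer adapting its fuel trim.
# everything here is invented for illustration; the point is that
# the *rule for adapting* is preprogrammed, not discovered.

TARGET_AFR = 14.7   # stoichiometric air/fuel ratio the ECU aims for
GAIN = 0.02         # fixed, preprogrammed 'learning rate'

fuel_trim = 1.0     # multiplier on base fuel delivery, adapted over time

def read_o2_sensor():
    """stand-in for a real sensor read; returns a measured air/fuel ratio."""
    return random.uniform(13.5, 16.0)

for _ in range(1000):
    measured_afr = read_o2_sensor()
    error = measured_afr - TARGET_AFR   # positive = running lean, negative = rich
    # the 'adaptive strategy': nudge the trim toward the target.
    # this rule never changes; the ECU can't invent a new way to learn.
    fuel_trim += GAIN * error
    fuel_trim = max(0.8, min(1.2, fuel_trim))  # clamp to safe limits

print(f"adapted fuel trim: {fuel_trim:.3f}")
```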
but to make it truly 'human,' it would need to be flawed. human reasoning is often incorrect and irrational, swayed by emotion. all that jazz.
so what exactly constitutes AI? it seems like a grey area rather than a 'we finally did it!' type thing. it's kind of like curing a disease like cancer: continuously better treatments will be devised, but we're unlikely to ever actually 'cure' it.
forgive me if this seems highly stupid; my mental consternation on this subject is hard to express. it's just that i don't see what we typically call 'AI' as even being close to the real thing. take a movie like, say, I, Robot (and no, i've never read the asimov stuff); the protagonist robot is supposed to be 'intelligent,' but constantly talks about how he was programmed, and uses calculations and logic to make decisions. the only thing that seems to make him 'intelligent' is emotion, which seems like a bad thing for a computer to have.
there's the 'self-awareness' thing, but i don't get that either. wouldn't you have to tell the computer that it's a computer before it would realize as much? wouldn't it still 'learn' through preprogrammed logic and inputs? and skynet didn't have emotion, so why was it such a dick to everyone? :X knowing what i do about computers (a working knowledge, but it's not like i have a CE or EE degree), the whole 'AI' concept just seems weird when you think hard about it. and no, wikipedia hasn't helped my confusion; it just reinforces what i said about AI development being an ongoing process with no clear end.
anyway: discuss. or give me links to read (good links, don't just google the same shit i already have).
also hi AT i miss posting here.
