Not sure if this belongs here, but it does have to do with programming, so I slapped it here. Anyway, first things first: when I say close to human, I probably mean a far-off, dumbed-down human. Lately I've been wondering whether it's even possible to create something like this. There are so many problems that would have to be worked out, and putting them all together would be a mess. HOWEVER, all those forever-alone faces would go away and pay top dollar for an AI intelligent enough to be considered an actual friend. This requires quite a bit. Speech recognition and synthesized voices are two things that have already been solved, so those aren't even an issue. Though is so-called "speech recognition" really at the level I want? Translating speech into words that are then interpreted by the AI isn't good enough; the AI must interpret the sound on a deeper level, not just "open word". I don't want an "information bank" AI (though a friend of mine suggested that's all humans are to begin with, and I sort of agree). I want an AI you can actually have a conversation with, or even a reasonable argument. At any rate, since language and slang are always changing, and it's hard to teach something the meaning of every single thing, the AI must be able to learn the meaning of new words or phrases. From a programming standpoint, is it at all possible to make something that can learn the meaning of something new just from being told or shown a picture?
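To make the "learn a new word just by being told" part a bit more concrete, here is a minimal sketch in Python. Every name in it (Vocabulary, teach, recall) is made up by me for illustration; it doesn't solve the hard part (actually understanding the explanation or the picture), it only shows how newly taught meanings could be stored and looked up later.

```python
# Hypothetical structure for storing meanings the AI has been taught.
# Understanding the explanation or image itself is NOT solved here.

class Vocabulary:
    def __init__(self):
        # word -> list of things the AI has been told or shown about it
        self.meanings = {}

    def knows(self, word):
        return word.lower() in self.meanings

    def teach(self, word, explanation=None, image_features=None):
        """Attach a spoken/typed explanation and/or image data to a new word."""
        entry = self.meanings.setdefault(word.lower(), [])
        if explanation:
            entry.append(("text", explanation))
        if image_features is not None:
            entry.append(("image", image_features))

    def recall(self, word):
        return self.meanings.get(word.lower(), [])


vocab = Vocabulary()
if not vocab.knows("yeet"):
    vocab.teach("yeet", explanation="slang for throwing something with force")
print(vocab.recall("yeet"))
```

So yes, storing new meanings is trivial; the open question is whether the AI can do anything intelligent with what it stored.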
All of this information brings us to another big obstacle: space! How much space would something like this roughly take up? Say it had a separate short-term memory where things are temporarily stored while the AI decides whether the information is important enough to keep and move to a long-term memory drive. "Memories" in long-term memory would be lower quality than in short-term to conserve space, and older memories that haven't been used in a while would eventually be deleted. But what should these memories hold to begin with? I figured it only needs to record audio, but it should also be able to view images or video. Recording images could be helpful for learning purposes, but would in the end take up too much space on the short-term drive if the images were detailed enough to actually learn from. Have you ever left a webcam or something recording for hours? Because I have... If the AI quickly decided whether recorded footage should be passed on to long-term memory or deleted, there wouldn't be much of a problem, but is deleting part of a video while it is still recording even possible? In my experience it isn't, so the AI would have to watch for when something important happens and, when it ends, stop recording, crop the video file, and save it. Possibly also, if nothing noteworthy has happened in a certain amount of time, it would automatically stop recording and delete the file it has determined to be uninteresting.
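As a thought experiment, here is a minimal sketch of how that short-term/long-term split might look in code. Everything in it (the class name, the importance threshold, the "downsample" step) is my own hypothetical structure, not an existing system; the importance scoring is a stub because that's the genuinely hard, unsolved part.

```python
from collections import deque
import time

# Hypothetical two-tier memory: new events land in a bounded short-term
# buffer, important ones get compressed and promoted, the rest are dropped.

class MemorySystem:
    def __init__(self, short_term_size=100, importance_threshold=0.5):
        self.short_term = deque(maxlen=short_term_size)  # oldest items fall off automatically
        self.long_term = []
        self.importance_threshold = importance_threshold

    def observe(self, event, importance):
        """Store a new event (audio clip, frame, sentence...) in short-term memory."""
        self.short_term.append({"event": event, "importance": importance,
                                "time": time.time()})

    def consolidate(self):
        """Promote important items (downsampled to save space) and discard the rest."""
        while self.short_term:
            item = self.short_term.popleft()
            if item["importance"] >= self.importance_threshold:
                item["event"] = self._downsample(item["event"])
                self.long_term.append(item)
            # unimportant items are simply never written to long-term storage

    def _downsample(self, event):
        # Placeholder for "lower quality to conserve space", e.g. keep a
        # transcript instead of the audio, or a thumbnail instead of video.
        return event


mem = MemorySystem()
mem.observe("user said hello", importance=0.2)
mem.observe("user taught me a new word", importance=0.9)
mem.consolidate()
print(len(mem.long_term))  # only the important event was kept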
Now that that problem is roughly solved (meaning not really), here's another: how should it be visually represented? No one can befriend just a voice they downloaded off the internet, one that has no way of displaying emotion other than its tone. This isn't a big problem at the moment, so I'm going to ignore it for now. Should get the essentials down first.
Right, back to essentials then. But what I said above is a bit important: it needs to display meaningful emotions and possibly feel said emotions as well. They clearly can't be nearly as deep as a human's wide range of emotions, but I think the basics will do. Things such as likes, dislikes, anger, frustration, sadness, hatred, and maybe a few levels of each emotion ranging from, say, 1 (not very emotional) to 10 (very emotional). I cannot think of any way to actually make the AI feel emotions, though emotions could be assigned to images or events. This leads to the problem of how it determines what it likes and what it doesn't. I have no clue what determines that I like blue but hate yellow (strictly an example; human emotions are too deep to simply hate one color, though I do dislike any color by itself: colors need other colors as friends too). For humans it's probably based on past experiences (like everything else with humans -_-). So if you have fond memories of watching the sunset as a kid, you may find orange more relaxing and a better color. I have absolutely no proof to back this up; it's just speculation. After googling this and finding http://answers.yahoo.com/question/index?qid=20090920095032AA0TS85, it's still unclear to me what determines it, though from one of the comments it sounds like your DNA gives you a preset of preferences (color included) that experiences then override. Which simplifies things a bit: the AI would come with a preset of likes and dislikes for various things. Obviously not food or anything like that, since it's still a far simpler AI than a human; just simple things it can actually partake in. The AI should be able to partake in some things, most likely some form of video game or 3D environment, otherwise I'd just feel bad for it never being able to do anything except communicate, though I guess an AI this simple can't experience much even in a virtual environment. I suppose if the AI detects it is in danger, its position data is changing rapidly (i.e. falling), or it is doing something new, it would feel a certain emotion like excitement or fear.
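To show what I mean by "levels from 1 to 10" plus "a preset that experience overrides", here is a rough sketch. All the names (EmotionState, Preferences, the specific emotions and triggers) are hypothetical and only illustrate how intensities could be stored, bumped by events, and how learned preferences could override the factory defaults.

```python
# Hypothetical emotion/preference model: intensities on a 1-10 scale,
# crude event triggers, and a shipped preset that experience overrides.

class EmotionState:
    EMOTIONS = ["anger", "frustration", "sadness", "excitement", "fear"]

    def __init__(self):
        self.levels = {e: 1 for e in self.EMOTIONS}  # 1 = barely felt, 10 = maxed out

    def bump(self, emotion, amount):
        self.levels[emotion] = max(1, min(10, self.levels[emotion] + amount))

    def react(self, event):
        # Purely illustrative event -> emotion mapping.
        if event == "falling":          # position data changing rapidly
            self.bump("fear", 5)
        elif event == "new_experience":
            self.bump("excitement", 3)


class Preferences:
    def __init__(self, preset):
        self.preset = dict(preset)   # the "DNA": shipped defaults
        self.learned = {}            # experience overrides the defaults

    def likes(self, thing):
        score = self.learned.get(thing, self.preset.get(thing, 0))
        return score > 0

    def experience(self, thing, delta):
        self.learned[thing] = self.learned.get(thing, self.preset.get(thing, 0)) + delta


prefs = Preferences(preset={"blue": 2, "yellow": -1})
prefs.experience("yellow", +4)   # a few good experiences with yellow
print(prefs.likes("yellow"))     # True: experience overrode the preset dislike

mood = EmotionState()
mood.react("falling")
print(mood.levels["fear"])       # 6
```

Whether numbers being nudged up and down counts as "feeling" anything is exactly the part I can't answer, but at least the bookkeeping side is easy.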
There are many other problems, but I feel this is good for now; discuss whatever below. Feel free to rip everything I said to shreds, saying none of this shit is remotely possible to program yet or whatever, but please back up whatever you say with reason. I am not an expert on anything really, which is why I'm here, and I apologize for not really proofreading; I don't have that kind of time (yet I typed all of this :\). Anyway, if you read it all, good on you. Oh, and how much would it cost to make such an AI, and how much do you think someone (the average target consumer) would pay for it? Well, I guess you can't answer that if it's actually impossible in technology's current state, but note it would probably be something like a box that you buy and hook up to a monitor, rather than software. I would like it to be software that anyone could buy and download if they have the required hardware, but my guess is that they couldn't, since I assume it would take up quite a bit of space. That said, answer the question with how much people would pay for the software alone, and how much they would pay for the hardware and software together. This world runs on money *sigh*
