Is a close-to-human AI possible?

Would you buy such an AI?

  • Yes.

  • No.

  • Depends on how affordable it is.

  • This isn't remotely possible, so it doesn't matter.

  • As long as it agrees not to take over the world.

  • Only if it can sing well!



kolop97

Junior Member
Apr 17, 2013
8
0
0
Not sure if this belongs here, but it does have to do with programming, so I slapped it here. First things first: when I say "close to human" I really mean a far-off, dumbed-down human. Lately I've been wondering whether it's even possible to create something like this. There are so many problems that would have to be worked out, and all put together it would be a mess. However, all those forever-alone faces would go away and pay top dollar for an AI intelligent enough to be considered an actual friend. That requires quite a bit. Speech recognition and synthesized voices are two things that have essentially been solved already, so those aren't the issue. Though is so-called "speech recognition" really at the level I want? Translating speech into words that the AI then matches isn't good enough; the AI must interpret what was said on a deeper level, not just a command like "open Word". I don't want an "information bank" AI (though a friend of mine suggested that's all humans are to begin with, and I sort of agree). I want an AI you can actually have a conversation with, or even a reasonable argument. And since language and slang are always changing, and it's hard to teach something the meaning of every single thing, the AI must be able to learn the meaning of new words or phrases. From a programming standpoint, is it at all possible to make something that can learn the meaning of something new just from being told, or from being shown a picture?
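Just to make that last question concrete, here is a toy Python sketch of the "learn a new word from being told" idea. Everything in it (the WordLearner name, the "X means Y" trick, the starting vocabulary) is made up by me for illustration; it is nowhere near real language understanding.

Code:
class WordLearner:
    def __init__(self):
        # Tiny starting vocabulary; a real system would need far more than this.
        self.known = {"open": "cause something to no longer be closed"}

    def hears(self, sentence):
        # If the sentence looks like a definition ("X means Y"), remember it.
        if " means " in sentence:
            word, meaning = sentence.split(" means ", 1)
            self.known[word.strip().lower()] = meaning.strip()
            return "Okay, I'll remember that '%s' means %s." % (word.strip(), meaning.strip())
        # Otherwise ask about the first word it has never been told about.
        unknown = [w for w in sentence.lower().split() if w not in self.known]
        if unknown:
            return "What does '%s' mean?" % unknown[0]
        return "I understood that."

bot = WordLearner()
print(bot.hears("yeet means to throw something with force"))
print(bot.hears("yeet"))   # now it recognizes the word instead of asking about it

Even this trivial version shows the real problem: the "meaning" it stores is just a string it can't do anything with, which is exactly the gap between an information bank and something you can actually talk to.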

All of this information brings us to another big obstacle: space! How much storage would something like this roughly take up? Say it had a separate short-term memory where things are temporarily stored while the AI decides whether the information is important enough to keep and move to a long-term memory drive. "Memories" in long-term memory would be lower quality than in short-term memory to conserve space, and older memories that haven't been used in a while would eventually be deleted. But what should these memories hold to begin with? I figured it only needs to record audio, but it should also be able to view images or video. Recording images could be helpful for learning, but images detailed enough to learn from would end up taking too much space on the short-term drive. Have you ever left a webcam recording for hours? Because I have... If the AI quickly decided whether recorded footage should be passed on to long-term memory or deleted, there wouldn't be much of a problem, but is deleting part of a video while it is still recording even possible? In my experience it isn't, so the AI would have to watch for when something important starts and when it ends, stop recording, crop the video file, and save it. It could also automatically stop recording and delete the file if nothing noteworthy has happened in a certain amount of time.
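In Python-ish terms, the two-tier memory I'm picturing is roughly the sketch below. The importance cutoff of 0.5 and the 90-day expiry are numbers I just picked; they aren't based on anything.

Code:
import time

class Memory:
    def __init__(self, keep_days=90):
        self.short_term = []           # (timestamp, full-detail item, importance)
        self.long_term = {}            # lower-detail summary -> last time recalled
        self.keep_seconds = keep_days * 86400

    def record(self, item, importance):
        # Everything lands in short-term memory first, at full detail.
        self.short_term.append((time.time(), item, importance))

    def consolidate(self):
        for ts, item, importance in self.short_term:
            if importance > 0.5:
                # Keep only a lower-quality summary to conserve space.
                summary = item[:80]
                self.long_term[summary] = time.time()
        self.short_term.clear()
        # Forget long-term memories that have not been recalled in a while.
        cutoff = time.time() - self.keep_seconds
        self.long_term = {s: t for s, t in self.long_term.items() if t > cutoff}

    def recall(self, keyword):
        hits = [s for s in self.long_term if keyword in s]
        for s in hits:
            self.long_term[s] = time.time()   # recalling a memory refreshes it
        return hits

The actual hard part is the importance score itself; the sketch just assumes something hands it a number.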

Now that that problem is roughly solved (meaning not really), here is another: how should it be visually represented? No one can befriend just a voice they downloaded off the internet that has no way of displaying emotion other than tone of voice. This isn't a big problem at the moment, though, so I'm going to ignore it for the time being and get the essentials down first.

Right, back to essentials then. But what I said above is a bit important: it needs to display meaningful emotions, and possibly feel those emotions as well. They clearly can't be nearly as deep and meaningful as a human's wide range of emotions, but I think the basics will do: likes, dislikes, anger, frustration, sadness, hatred, and maybe a few levels of each emotion, ranging from 1 (not very emotional) to 10 (very emotional). I can't think of any way to make the robot actually feel emotions, though they could be assigned to images or events. That leads to the problem of how it determines what it likes and what it doesn't. I have no clue what determines that I like blue but hate yellow (strictly an example; human emotions are too deep to simply hate one color, though I do dislike any color by itself. Colors need other colors as friends too). For humans it's probably based on past experiences (like everything else with humans), so if you have fond memories of watching the sunset as a kid, you may find orange more relaxing and a better color. I have absolutely no proof to back this up; it's just speculation. After googling this and finding http://answers.yahoo.com/question/index?qid=20090920095032AA0TS85 it's still unclear to me what determines it, though one of the comments suggests your DNA gives you a preset of preferences, such as colors, which experience can later override. That simplifies things a bit: the AI would come with a preset of likes and dislikes for various things it can actually partake in (obviously not food or anything like that, since it's still a far simpler AI than a human). The AI should be able to partake in some things, most likely some form of video game or 3D environment; otherwise I would just feel bad for an AI that can never do anything except communicate, though I guess an AI this simple can't experience much even in a virtual environment. I suppose if the AI detects it is in danger, or its position data is changing rapidly (i.e. falling), or it is doing something new, it would feel a certain emotion, like excitement or fear.
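Here is the kind of thing I mean, as a small Python sketch. The preset levels, the blue/yellow preferences, and the "falling" event are placeholder values I made up; this is just bookkeeping on a 1-10 scale, not actual feeling.

Code:
class EmotionState:
    def __init__(self):
        # Preset intensities on a 1 (barely felt) to 10 (very strong) scale.
        self.levels = {"happiness": 5, "anger": 1, "fear": 1, "sadness": 1}
        # Preset likes/dislikes the AI ships with; experience could override these.
        self.preferences = {"blue": +2, "yellow": -1}

    def react(self, event, subject=None):
        if event == "falling":
            # Position data changing rapidly: spike fear.
            self.levels["fear"] = min(10, self.levels["fear"] + 4)
        elif event == "sees" and subject in self.preferences:
            delta = self.preferences[subject]
            self.levels["happiness"] = max(1, min(10, self.levels["happiness"] + delta))
        return self.levels

mood = EmotionState()
print(mood.react("sees", "blue"))   # happiness nudges up
print(mood.react("falling"))        # fear spikes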

There are many other problems, but I feel this is good for now; discuss whatever below. Feel free to rip everything I said to shreds and say none of this is remotely possible to program yet, but please back up whatever you say with reasons. I am not an expert on anything, really, which is why I'm here, and I apologize for not really proofreading; I don't have that kind of time (yet I typed all of this :\). Anyway, if you read it all, good on you. Also, how much would it cost to make such an AI, and how much do you think the average target consumer would pay for it? I guess you can't answer that if it's actually impossible in technology's current state, but note that it would probably be something like a box you buy and hook up to a monitor rather than software. I would like it to be software that anyone could buy and download if they have the required hardware, but my guess is that they couldn't, since I assume it would take quite a bit of space. That said, answer the question with how much people would pay for the software alone, and how much they would pay for the hardware and software together. This world runs on money *sigh*
 

wirednuts

Diamond Member
Jan 26, 2007
7,121
4
0
I would imagine someday we could make robots that don't know how to write paragraphs..
 
Jan 31, 2013
108
0
0
Get me an absolutely perfect voice/speech-recognition API to work with, and I could create Iron Man's "Jarvis" easily.
 

kolop97

Junior Member
Apr 17, 2013
8
0
0
Okay, I did separate it into paragraphs; guess I should have checked the preview before posting... There, now it's a bit more obvious, guys. I still didn't feel like cleaning it up entirely, but at least it's divided a bit now.
 

Markbnj

Elite Member, Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
All we've done so far is make faster and faster calculators and come up with increasingly clever ways for them to retrieve and manipulate larger and larger amounts of hard-coded data. It's so far removed from what a brain does, or seems to do, that I don't even know where to begin addressing your questions. We may _approximate_ a brain someday. We may even create a near-complete simulation of one (something larger than a mouse's, which I believe has already been done). But even when we do that, it will still be light-years from the conscious, self-aware, abstract-reasoning thing inside our skulls. We don't even know enough to begin to guess how far off we are. We don't have the right words yet. Our memory doesn't act like storage. Our reasoning ability is not like a processor. Our senses do not act like input and output devices. It's been a mistake, since the very inception of the so-called AI movement, to think that computers were an infant stage in the development of a thinking thing. Not even close.
 

PhatoseAlpha

Platinum Member
Apr 10, 2005
2,131
21
81
Generally speaking, if actual human intelligences dislike you, it's rather foolhardy to expect a human equivalent AI not to dislike you for the exact same reasons.

We could eventually create a humanlike AI - whether a mind is based on a substrate of physics or math doesn't much matter to the mind in question - but we won't. It's entirely too much trouble for too little payoff. A humanlike AI is going to function just like a human: generally pretty slow, making a lot of "close enough" approximations, and lots of "a wrong answer now is better than a right answer two days from now."

Thing is, if that's what you want, it's far, far more efficient to just go find another human. They're available now, in great quantities.

Plus, if you can find any that suit your needs, making new humans is more fun than arguing with AI algorithms.
 

kolop97

Junior Member
Apr 17, 2013
8
0
0
PhatoseAlpha said:
Generally speaking, if actual human intelligences dislike you, it's rather foolhardy to expect a human equivalent AI not to dislike you for the exact same reasons.

It's hard to say, since not every human will dislike you unless you actually go out and try to get everyone to. But if you play your cards right, you can get almost any human to like you, or at least not dislike you.
 

kolop97

Junior Member
Apr 17, 2013
8
0
0
Guys, I'm not talking about a few years from now; I'm talking maybe 10 or 20 years. I know it probably isn't possible to build in 2-5 years. Well, unless Google or Apple just drop everything and say, "WE NEED THIS," but that's not going to happen :D
 

DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
Right now there isn't even a roadmap for this.

To use an example, you can predict that the best CGI in today's movies (rendered at one frame per X minutes) will be matched by computer games in real time within Y years. That's just saying CPU and GPU power will continue to evolve; there are no unsolved problems.

We don't know enough yet to make similar predictions about AI. We can't carry out the steps or recipe at even 1/1000th speed now. We don't know what those steps are.

The computers that beat chess masters or win at Jeopardy are just crunching numbers and text and spitting out matches; there is no self-awareness, no "thinking".
 

kolop97

Junior Member
Apr 17, 2013
8
0
0
Dave, that's what I'm saying :D. I guess the first step is actually getting a computer to recognize ideas from words or images, and I don't mean running a Google search and picking the most likely meaning for something. This seems like an impossible task, though it wouldn't be if we had the same amount of time nature had to do all of this...
 

beginner99

Diamond Member
Jun 2, 2009
5,320
1,768
136
What's the difference between simulating a feeling and actually feeling it?

So if an AI robot laughs at your jokes and smiles at you when you greet it, does it matter whether it "feels" it (whatever that exactly means!) as long as it "does the appropriate thing"?

It's a matter of definition.

Meaning that, given enough crunching power and data, creating such a human-brain simulation is possible, and if you could disguise it in a human body with a real, natural voice (e.g. like the Terminator), we could not tell whether it is a robot or a real human. I think that is theoretically possible. IMHO it is a lot more complex than a Turing test because of the direct interaction.

But I would not bet on this happening in this century, or even at all, given the required effort (cost) and, even more so, the social/cultural implications.
 

exdeath

Lifer
Jan 29, 2004
13,679
10
81
Not possible with current storage technology.

Just simple image recognition, comparison, and memory recall can pull several gigabytes of random-access storage out of long-term memory near instantly in a brain. It may sometimes take a while to remember the primary key, but when you finally do, all the memories associated with that prompt are recalled near instantly, every vivid detail at real-life resolution in multiple dimensions.

Do that on a hard drive and you are talking kilobytes per second...

We will be severely limited in our computing capabilities for a long time, until we develop a fast, high-density, non-volatile universal main memory that is actually as fast as or faster than the CPU.

Until then, forget about it. We can have all the CPU power in the world, but our primitive storage and network speeds are a joke. We need RAM that keeps data indefinitely until changed and can do 50+ GB/s with nanosecond access times before actual human-capable neural-net AI is possible.
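To put rough numbers on it (ballpark assumptions I'm pulling out of the air, not benchmarks):

Code:
recall_size_gb = 4.0        # suppose one vivid recall touches ~4 GB of associated data
brain_recall_time_s = 0.5   # and it comes back in about half a second

needed_bw_gb_s = recall_size_gb / brain_recall_time_s   # bandwidth that recall implies
hdd_random_mb_s = 1.0                                   # scattered small reads on a hard drive
shortfall = needed_bw_gb_s * 1024 / hdd_random_mb_s
print("needed: ~%.0f GB/s, hard drive random access: ~%.0f MB/s (%.0fx short)"
      % (needed_bw_gb_s, hdd_random_mb_s, shortfall))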
 

KIAman

Diamond Member
Mar 7, 2001
3,342
23
81
I don't think it is the technology that limits us. It isn't the processing speed, or the storage space, or the algorithms, or the recognition; it isn't any of that.

I think our limitation is that we have no concept of how to make something improve itself. I'm not going to go into a LOT of detail; below is a bunch of paraphrasing.

Let me give an example. We take "human intelligence" for granted and have defined a set of criteria for it, much like the OP does when he says "shows emotion," etc. There is nothing inherently special about a human that allows this "human intelligence" to happen.
For example, if someone were to "grow" a human in a vat well into their 30s, and we had the ability to program the brain with whatever we want (like a Matrix download), how would we define what makes a human a human? What do we put in there? Information? Rules? Skills? Opinions? A sense of self? Social rules? When it comes down to it, we have no flippin' clue what to put in there. That 30-year-old human will be like a "robot" regardless of what we put in there, because they have not made a single improvement on their base information.

So, what do I think it takes to make a true human-like AI? First, break down the basics: emotions. Emotions come from the most basic parts of our brains and follow a very basic set of rules. Happiness (something is beneficial to the organism), fear/flight (survival), sadness (empathy/survival), laughter (relief), greed (motivation/survival). The base emotions only get complicated because what we experience in life shapes how and when they are triggered and what actions we take once they are.

Second, establish rules for self-improvement: curiosity, how to form a conclusion from incomplete data, how to observe, how to mimic, a basic set of ethics, how to process failures and successes, and so on.

Third, give this AI senses (even if all we want out of it is the "intelligence"). Give it a way to perceive the world.

Finally, time. You won't be able to grow a 50-year-old adult in a jar and instantly have a "Jarvis." You will need to give this entity time to learn, adapt, grow, and evolve.

So, following that reasoning, I think the biggest limitation we have in creating a "human-like" AI is time. We just can't spare 30 years.
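To paraphrase the self-improvement part in code form: a stripped-down loop where "curiosity" is a bonus for actions it hasn't tried much, and "processing failures/successes" is just updating an estimate from what it observed. The actions and reward probabilities are arbitrary; this only shows the shape of the idea, not anything brain-like.

Code:
import random

values = {"action_a": 0.0, "action_b": 0.0, "action_c": 0.0}   # learned estimates
tries = {a: 0 for a in values}

def true_reward(action):
    # The world, unknown to the agent: each action succeeds with some probability.
    return random.random() < {"action_a": 0.2, "action_b": 0.7, "action_c": 0.4}[action]

for step in range(1000):
    # Curiosity: prefer actions it knows little about.
    def score(a):
        return values[a] + 1.0 / (1 + tries[a])
    action = max(values, key=score)
    reward = 1.0 if true_reward(action) else 0.0
    tries[action] += 1
    # Process the success/failure: move the estimate toward what was observed.
    values[action] += (reward - values[action]) / tries[action]

print(values)   # given enough time, action_b should end up with the highest estimate

Which circles back to the "time" point: even this toy thing only looks competent after it has been allowed to fail a lot.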
 

smackababy

Lifer
Oct 30, 2008
27,024
79
86
How do we code something that has to learn things that were never coded? Human intelligence is the ability to learn new things, have original ideas, and interpret things into knowledge previously unknown. How can we code that? How does one write a program, bound by the strict limits of how it is programmed, that goes beyond those parameters?

When someone can answer that question, we can think about making an AI. I'd like us to make a decent Go AI that doesn't just emulate games, but actually uses strategy to make value-based moves. Then we can start worrying about AI beyond such a strictly limited set of rules. Chess AI works because we have computers capable of brute-forcing every move 10 moves in advance.
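For reference, the "brute force every move N moves in advance" approach looks roughly like this generic look-ahead (a toy minimax on a deliberately trivial take-1-or-2 game I made up, not an actual chess or Go engine):

Code:
# Players alternately take 1 or 2 from a pile; whoever takes the last item wins.
def minimax(pile, maximizing, depth):
    if pile == 0:
        # The previous player took the last item, so the side to move has lost.
        return -1 if maximizing else 1
    if depth == 0:
        return 0                      # out of look-ahead: call it even
    scores = []
    for take in (1, 2):
        if take <= pile:
            scores.append(minimax(pile - take, not maximizing, depth - 1))
    return max(scores) if maximizing else min(scores)

def best_move(pile, depth=10):        # "brute force every move N plies ahead"
    return max((t for t in (1, 2) if t <= pile),
               key=lambda t: minimax(pile - t, False, depth - 1))

print(best_move(7))   # with full look-ahead, the winning move here is to take 1

Nothing in there understands the game; it just enumerates futures, which is workable for chess-sized searches and hopeless once the rules and branching get much bigger.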
 

kolop97

Junior Member
Apr 17, 2013
8
0
0
beginner99 said:
What's the difference between simulating a feeling and actually feeling it?

So if an AI robot laughs at your jokes and smiles at you when you greet it, does it matter whether it "feels" it (whatever that exactly means!) as long as it "does the appropriate thing"?

It's a matter of definition.

Meaning that, given enough crunching power and data, creating such a human-brain simulation is possible, and if you could disguise it in a human body with a real, natural voice (e.g. like the Terminator), we could not tell whether it is a robot or a real human. I think that is theoretically possible. IMHO it is a lot more complex than a Turing test because of the direct interaction.

But I would not bet on this happening in this century, or even at all, given the required effort (cost) and, even more so, the social/cultural implications.

Humans are driven by wanting that natural high of doing something exciting and having fun. If the AI doesn't actually feel a sensation, it can't strive for more of it, or even understand why humans like whatever it has programmed as happiness. For the purposes of this program, I suppose it doesn't actually need to feel emotions; emotions could just be another number the program wants to raise (i.e. be happier or more excited), while still taking other factors, like other people's safety, into consideration. It does seem silly to think you could program actual emotions.

As for the time thing, technological advances are hard to predict, so it's really impossible to say it's not possible this century, though it's definitely not going to happen this decade. I guess you were talking about a perfect AI in a human-like body that you can't tell apart from another human, so yeah, that probably won't happen this century.
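In code, the "emotion is just a number it tries to raise, subject to safety" idea would be something as dumb as this (the actions, scores, and 0.1 risk limit are all invented for the example):

Code:
actions = {
    "tell a joke":        {"happiness": +2, "risk_to_others": 0.0},
    "play a racing game": {"happiness": +3, "risk_to_others": 0.0},
    "drive recklessly":   {"happiness": +5, "risk_to_others": 0.9},
}

def choose(actions, max_risk=0.1):
    # Only consider actions that pass the safety check...
    safe = {a: v for a, v in actions.items() if v["risk_to_others"] <= max_risk}
    # ...then pick whichever raises the happiness number the most.
    return max(safe, key=lambda a: safe[a]["happiness"])

print(choose(actions))   # "play a racing game", not the reckless option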
 

exdeath

Lifer
Jan 29, 2004
13,679
10
81
smackababy said:
How do we code something that has to learn things that were never coded? Human intelligence is the ability to learn new things, have original ideas, and interpret things into knowledge previously unknown. How can we code that? How does one write a program, bound by the strict limits of how it is programmed, that goes beyond those parameters?

When someone can answer that question, we can think about making an AI. I'd like us to make a decent Go AI that doesn't just emulate games, but actually uses strategy to make value-based moves. Then we can start worrying about AI beyond such a strictly limited set of rules. Chess AI works because we have computers capable of brute-forcing every move 10 moves in advance.

You can't. You program the substrate (e.g. the operation of a neural network), but the consciousness is the intangible "virtual" layer of sentience that arises within the network. In other words, the person is simply the state of the trained neural network after X years, the data. It has nothing to do with the "BIOS" code, so to speak.
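A tiny illustration of that substrate-vs-data point (a toy perceptron I wrote for this post, nothing resembling a brain): the code below never changes, but what the network ends up "knowing" lives entirely in the trained weights.

Code:
import random

def train_perceptron(examples, epochs=50, lr=0.1):
    # The "substrate": a fixed update rule over three numbers (bias + 2 inputs).
    weights = [random.uniform(-1, 1) for _ in range(3)]
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if weights[0] + weights[1] * x1 + weights[2] * x2 > 0 else 0
            err = target - out
            weights[0] += lr * err
            weights[1] += lr * err * x1
            weights[2] += lr * err * x2
    return weights

# Same substrate, different "experience", different behaviour:
and_net = train_perceptron([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])
or_net  = train_perceptron([((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)])
print(and_net, or_net)   # the only difference between the two is the learned data

Same few lines of substrate, two different sets of weights, two different behaviours; scale that up by a few billion and you have the point about the person being the data, not the "BIOS."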
 

Markbnj

Elite Member, Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
Honestly, I'd be excited if they could build an RPG with NPCs that act somewhat real. That's a closed, invented and constrained world where all the variables are under the creator's control. We have multiple 3+ Ghz 64-bit processor cores, eight or more billion bytes of fast volatile storage, vast amounts of slower persistent storage, and still the characters in every modern game/simulation are dumb as rocks. That fact is as telling as anything.
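For context on how limited it still is: most game NPCs amount to a hand-written state machine like the generic sketch below (my own example, not taken from any particular game), which is why they feel dumb as rocks no matter how much hardware you throw at them.

Code:
class GuardNPC:
    def __init__(self):
        self.state = "patrol"

    def update(self, sees_player, hears_noise):
        # Every behaviour the NPC will ever show is spelled out by hand here.
        if self.state == "patrol":
            if sees_player:
                self.state = "attack"
            elif hears_noise:
                self.state = "investigate"
        elif self.state == "investigate":
            if sees_player:
                self.state = "attack"
            elif not hears_noise:
                self.state = "patrol"
        elif self.state == "attack":
            if not sees_player:
                self.state = "investigate"
        return self.state

guard = GuardNPC()
print(guard.update(sees_player=False, hears_noise=True))   # "investigate"
print(guard.update(sees_player=True,  hears_noise=False))  # "attack"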
 

smackababy

Lifer
Oct 30, 2008
27,024
79
86
Markbnj said:
Honestly, I'd be excited if they could build an RPG with NPCs that act somewhat real. That's a closed, invented and constrained world where all the variables are under the creator's control. We have multiple 3+ Ghz 64-bit processor cores, eight or more billion bytes of fast volatile storage, vast amounts of slower persistent storage, and still the characters in every modern game/simulation are dumb as rocks. That fact is as telling as anything.

While that wouldn't be actual AI, it would sure make games a lot more impressive. Even if all the variables were accounted for and the characters always reacted the same way, I would still welcome it.
 

KentState

Diamond Member
Oct 19, 2001
8,397
393
126
AI at any level of sophistication will most likely come from distributed computing. That is about the closest concept we have to how the brain works and how insight is made. Right now, distributed computing is more at the ant-colony level of awareness, since most systems serve a single task. Skynet is more of a possible reality than individual autonomous robots with AI.
 

kolop97

Junior Member
Apr 17, 2013
8
0
0
KentState said:
AI at any level of sophistication will most likely come from distributed computing. That is about the closest concept we have to how the brain works and how insight is made. Right now, distributed computing is more at the ant-colony level of awareness, since most systems serve a single task. Skynet is more of a possible reality than individual autonomous robots with AI.

Ah, it makes sense when you put it that way... a little.