
so what the hell is AI, anyway?

brblx

Diamond Member
Mar 23, 2009
5,499
2
0
maybe i've been drinking and/or smoking too much pot, but the thought occurred to me that true 'artificial intelligence' doesn't seem possible or useful.

we have machines that can take input data (from sensors, other computers, user input, whatever), process it according to preprogrammed parameters and perform the output functions it deems appropriate. they are of course capable of 'learning' (e.g. adaptive strategies in computer-controlled car engines), but only by way of preprogrammed data that essentially tells the computer HOW to learn. we could essentially make a 'perfect' humanoid robot if we could overcome the engineering problems (extremely sophisticated input and output devices, power issues, processing power, an insane amount of programming, etc).
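to illustrate what i mean by 'preprogrammed learning', here's a rough sketch (made-up numbers, nothing like a real ECU): the update rule is fixed by the programmer, and only a stored number ever changes.

[code]
# "learning" via a preprogrammed rule: an adaptive fuel trim that nudges a
# stored correction factor toward a target air/fuel ratio. The HOW of the
# learning (the update rule and its gain) is fixed in advance by the programmer.

TARGET_AFR = 14.7   # made-up target air/fuel ratio
GAIN = 0.01         # made-up learning gain

fuel_trim = 1.0     # the stored correction factor that gets adapted over time

def update_trim(measured_afr):
    """Adjust the stored trim from one sensor reading."""
    global fuel_trim
    error = measured_afr - TARGET_AFR   # positive error = running lean
    fuel_trim += GAIN * error           # enrich when lean, lean out when rich
    return fuel_trim

# a few made-up lean readings; the trim creeps upward
for reading in (15.2, 15.0, 14.9, 14.8):
    print(round(update_trim(reading), 4))
[/code]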

but to make it truly 'human,' it would need to be flawed. logic is often incorrect and irrational. swayed by emotion. all that jazz.

so exactly what constitutes AI? seems like a grey area rather than a 'we finally did it!' type thing. it's kind of like curing a disease like cancer- continuously better treatment will be devised, but we're unlikely to ever actually 'cure' it.

forgive me if this seems highly stupid, my mental consternation on this subject is something that's hard to express. it's just that i don't see what we typically call 'AI' as even being close to such. take a movie like, say, i robot (and no, i've never read the asimov stuff); the protagonist robot is supposed to be 'intelligent,' but constantly talks about how he was programmed, and uses calculations and logic to make decisions. the only thing that seems to make him 'intelligent' is emotion, which seems like a bad thing for a computer to have.

there's the 'self-awareness' thing, but i don't get that either. wouldn't you have to tell the computer that it's a computer before it would realize such? wouldn't it still 'learn' through preprogrammed logic and inputs? and skynet didn't have emotion. so why was it such a dick to everyone? :X knowing what i do about computers (a workable knowledge but it's not like i have a CE or EE degree), the whole 'AI' concept just seems weird when you think hard about it. and no, wikipedia has not helped my confusion, it just seems to reinforce what i said about AI development being an ongoing process with no clear end.

anyway: discuss. or give me links to read (good links, don't just google the same shit i already have).

also hi AT i miss posting here.
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
I think you're confusing two different things. AI is about thinking based on what you already know and drawing conclusions from that information, adding to it as you learn. I think what you are talking about is being self-aware, or consciousness. That is a whole other area of thinking. Something can be very intelligent but not be conscious of itself, which is what a machine would have to be to be human-like.

A prime example is the IBM supercomputer Watson. It could understand questions and even things that were implied in text, like humor or sarcasm, but it didn't understand what the words actually meant or the meaning behind them; it was all numbers and probability.

There are some scientists doing AI studies who are trying different approaches. One is the idea that if you tell a computer enough facts and information, it will somehow evolve its own consciousness, but they have had no successes so far. I think consciousness is a bigger goal than AI: to make a machine self-aware and actually care about its own existence and purpose for a reason other than that you told it that it should.
 

shortylickens

No Lifer
Jul 15, 2003
80,287
17,081
136
Every time this issue comes up it's always obvious that people don't know what "artificial" means.
Artificial intelligence does not mean REAL intelligence. That's why they call it artificial intelligence and not real intelligence. Comparing it to the real thing is pointless. The whole purpose of AI is to get results that kinda sorta resemble the results of human thinking, but I'm sure we all know that the process is not even close to a live brain.

Process and results are two different things, as are real and artificial.
And consciousness is something else entirely.
 

esun

Platinum Member
Nov 12, 2001
2,214
0
0
Something that can pass the Turing test is an AI.

Defining "emotion" or "self-awareness" in terms of what is happening in terms of electrical circuits or brain waves is difficult. However, we can make judgements based on external interactions. If a computer can satisfy all external appearances (aside from physical structure) of being an intelligent being, then we can call it artificial intelligence.
 

wuliheron

Diamond Member
Feb 8, 2011
3,536
0
0
Playing off ethnocentric attitudes and the tendency to anthropomorphise is how movies make money dramatizing these issues, and it often has little resemblance to reality.

The simple fact is that the more intelligent the animal, the more varied and complex its emotional life, and there is no reason to assume the same will not also apply to self-aware computers. In all likelihood they will laugh and cry with the rest of us and be every bit as unique and individual. If they become more intelligent than people, their emotional responses might become so complex as to baffle us, just as an adult's emotional responses can seem unfathomable to a child.

All of this is merely a straightforward extrapolation of what we know about intelligence. No wild speculations, dubious philosophical assumptions, or highly technical knowledge required.
 
May 11, 2008
22,557
1,471
126
There is intelligence and there is self awareness.

Being conscious or aware of yourself requires a lot of input about your physical shape, and translating it into a "mental" representation. Your body is wired with nerves that relay constant information about temperature, pressure or pain. We are not aware of it until we consciously think about it, but our brain is processing information about our entire body non-stop. Thus our brain is aware of our body. This system is not perfect, however; ask people with phantom pain in removed limbs. For example, some people who have lost an arm feel pain in their hand even though it is no longer attached to the body. This probably has something to do with the way the different ganglia in the body process information. Ganglia are relay stations, but they also do preprocessing of neuronal input, and IMHO perhaps some form of compression of that information is done. These ganglia may be dependent on information from other ganglia. If, for example, a ganglion in your shoulder expects some form of input from the ganglia in your arm, it may interpret things wrong because the nerves were cut when the arm was amputated.

Why this long story ?

Look at the current generation of robots. None of them has the sophisticated senses that lifeforms such as a human, or typically any mammal, have. The movie Terminator was about a virus called Skynet that became aware. Well, if one could create software that is constantly aware of what is happening all around it, that would be a start. But then there needs to be a drive. A motivation. And with lifeforms, that motivation is emotions. Emotions are motivators to use data in a certain way. Those reading this who use, for example, medicines that modulate serotonin know what i mean. Less drive. Instincts are another form of motivator, but they are hardwired and typically not flexible.

But IMHO emotions are also used in the brain as a form of compression, and as a tag system for prefetching memories. Thus what you get is a system where, when you remember something, the accompanying emotional state is also set, when everything works normally. But IMHO it also works the other way: when you are always in a certain emotional state, the brain only recalls memories of situations where you had that emotional state. Thus the emotions bias the memories.
Evolutionarily, this system should work very well: as prey, the emotional state is loaded when the animal enters a certain area, making it aware of the possibility that a predator may appear and that action must be taken based on input data. From a certain point of view, this is prediction. Not probability calculation, because that is what you do after you have input. Prediction is based on past events, and you use it for future events that may or may not happen. Only after you have more input from your sensors do you raise the probability of something going wrong or going right.
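As a toy sketch of that tag idea (all the events and tags below are made up, this is only to make the mechanism concrete): memories carry an emotion tag, and recall is biased toward the current emotional state.

[code]
# Toy sketch of emotion-tagged memory: each memory carries an emotion tag,
# and recall prefers memories whose tag matches the current emotional state.

memories = [
    {"event": "open meadow, nothing happened", "emotion": "calm"},
    {"event": "rustling bushes near the river", "emotion": "fear"},
    {"event": "found ripe berries",             "emotion": "joy"},
]

def recall(current_emotion):
    """Return the memories biased toward the current emotional state."""
    matching = [m["event"] for m in memories if m["emotion"] == current_emotion]
    return matching or [m["event"] for m in memories]   # fall back to everything

print(recall("fear"))   # entering the risky area loads the fear-tagged memory first
print(recall("calm"))
[/code]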

With AI it is all about logic. With being aware, it is a combination of logic and prediction. If it is alive, it must be able to daydream.
 
May 11, 2008
22,557
1,471
126
Addendum:

What makes humans so special is that we have a layer of separation between the input data from our sensors and the input data that we create from what i call virtual sensors. Thus we create in our brain a reality that does not exist. We use this sophisticated form of prediction to predict events that have not happened yet. You may think that is nonsense, but, for example, simple building regulations are a result of that system. Calculus, algebra and math are extensions of that system, to be able to more accurately predict events.

People who have limited awareness of themselves may suffer from this: having the strong feeling that your mind exists separately from your body, and not feeling comfortable or happy. Interesting, because for some reason that mental mapping is not used properly.

http://en.wikipedia.org/wiki/Depersonalization
 

PsiStar

Golden Member
Dec 21, 2005
1,184
0
76
It has been several years since i studied AI & Expert Systems. I took a few classes at Wharton back in the day ... fun and interesting stuff. Their focus of course was all business-relevant, which is not a bad thing.

Problems such as: you run a business in a particular industry, therefore you have competitor(s). In an Expert System you enter information about the decisions you would make given various circumstances. (Thus the term fuzzy logic was coined.) And hopefully you have been well advised, so your information is "expert", i.e. the better or even the best decisions ... from a business point of view.

The competitor does not have an expert system, so it is expected that they will eventually make less-than-optimal decisions. As they do so, you add this as intelligence to your own expert system, and the expectation is that you will be more successful.
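A stripped-down sketch of that rule-capture idea (the rules and numbers here are invented, not from any real system): decisions are stored as condition/action pairs and replayed against a description of the current situation.

[code]
# Minimal expert-system sketch: "expert" decisions captured as condition/action
# rules, evaluated against a description of the current business situation.

rules = [
    (lambda s: s["competitor_price_cut"] > 0.10, "match the price cut on core products"),
    (lambda s: s["inventory_weeks"] > 12,        "run a clearance promotion"),
    (lambda s: s["demand_growth"] > 0.05,        "expand capacity next quarter"),
]

def advise(situation):
    """Fire every rule whose condition holds and collect the recommendations."""
    return [action for condition, action in rules if condition(situation)]

situation = {"competitor_price_cut": 0.15, "inventory_weeks": 8, "demand_growth": 0.07}
print(advise(situation))
# As the competitor's less-than-optimal moves are observed, new rules are appended.
[/code]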

Of course much of this information is not trivial. Imagine gleaning patents, historical stock exchange information, who is hired & their own decision-making history ... I am just trying to emphasize the effort necessary to teach the blank mind of a computer that does not even have the advantage of DNA or some innate programming to get self-interest booted, so that it can do this itself. Then of course, how do you keep it focused?
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
A prime example is the IBM supercomputer Watson. It could understand questions and even things that were implied in text, like humor or sarcasm, but it didn't understand what the words actually meant or the meaning behind them; it was all numbers and probability.

And what makes you think you understand what those words really mean? It's all just flashes of electrical charges running between neurons.
 
May 11, 2008
22,557
1,471
126
And what makes you think you understand what those words really mean? It's all just flashes of electrical charges running between neurons.

It is not the flashes of electricity that create the mind. It is the synchronized flashes of large groups of neurons that create the mind. The mind is in reality not much more than an endless oscillation over billions of neurons. Because you can chemically alter the path of that oscillation, you can actually produce different intermediate results. These are the thoughts. It has been known for some while that groups of neurons each oscillate in their own way, with the groups competing with each other. When the right input comes along, one or the other group will win for a short while, modulating the oscillation. If you ever feel in some situation that you are fighting against your own brain, then you were just aware for a short while of how your brain really works.

If you prefer to think in rapid images, then words do not have enough "oomph" to express yourself. That can be very frustrating. The groups of neurons that are responsible for translating thought into word formation are just not capable of rapidly translating all the details of images into words. "An image explains more than a thousand words"...
 

ArisVer

Golden Member
Mar 6, 2011
1,345
32
91
AI is the intelligence of humans transferred to mechanical devices. Since these devices operate on true/false logical operations, their operations (thoughts) are limited to what humans have programmed for these devices.

Example, in movement: if hit, then turn 100 degrees and go.
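Spelled out as code, that rule might look something like this (purely illustrative):

[code]
# The "if hit, then turn 100 degrees and go" rule, written out explicitly.

heading = 0   # degrees

def step(bumper_hit):
    """One preprogrammed decision: turn on collision, otherwise keep going."""
    global heading
    if bumper_hit:
        heading = (heading + 100) % 360
    return heading

print(step(True))    # 100: it hit something and turned
print(step(False))   # still 100: nothing hit, it just keeps going
[/code]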
 

Net

Golden Member
Aug 30, 2003
1,592
3
81
If you haven't already, you would enjoy reading about programming neural networks and machine learning.
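For a taste of what that reading covers, here is a minimal perceptron sketch (toy data, made-up numbers): the program is never told the rule, it adjusts its weights from labelled examples.

[code]
# Tiny perceptron learning the logical AND function from labelled examples.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

def predict(x):
    return 1 if x[0] * weights[0] + x[1] * weights[1] + bias > 0 else 0

for _ in range(20):                       # a few passes over the data
    for x, target in examples:
        error = target - predict(x)       # -1, 0 or +1
        weights[0] += rate * error * x[0]
        weights[1] += rate * error * x[1]
        bias += rate * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1] once it has converged
[/code]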
 

Fallen Kell

Diamond Member
Oct 9, 1999
6,211
537
126
we have machines that can take input data (from sensors, other computers, user input, whatever), process it according to preprogrammed parameters and perform the output functions it deems appropriate. they are of course capable of 'learning' (e.g. adaptive strategies in computer-controlled car engines), but only by way of preprogrammed data that essentially tells the computer HOW to learn.

To this I call your attention to FPGAs, genetic algorithms, and the LISP programming language. There was an article a few years back that I am trying to find/remember, but essentially, an experiment was set up to design some type of signal rectifier (or amplifier, or something to that effect). We know the ideal solution would transform the signal in a certain measurable way and not add distortion, noise, or other interference. LISP code was written to control the configuration of an FPGA. They gave the LISP program all the tools to make any change possible to the FPGA, set up a "reward" function for producing designs which came closest to the ideal transformation they were trying to make, gave it a standard design for that operation on the FPGA as a starting point, and let it change the design and learn how to manipulate the circuit to do what we wanted it to do. After a few generations, it had designed a circuit whose workings human circuit design engineers could no longer figure out, but it worked better than the best known design for that type of circuit. There was a portion of the circuit which was not even connected to the rest of the system, yet if you removed it, the circuit no longer functioned properly! It had learned how to design this circuit more efficiently.

I found another article while searching for the one I was speaking about. In this case, they had a computer design a circuit that would differentiate between two different audio tones (1 kHz and 10 kHz) using just 100 logic gates (a normal human design would typically take a few thousand). I won't go into the details, but let us just say that no one knows how the design of the circuit works, only that it uses a fraction of the resources that a human design would. It even responded to voice commands to start and stop analysis, all in that same 100 logic gates!

http://www.damninteresting.com/on-the-origin-of-circuits
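The loop described in the article is a genetic algorithm; a stripped-down sketch of the same idea (evolving a toy bit string toward a target instead of an FPGA bitstream, purely illustrative) looks like this:

[code]
# Stripped-down genetic algorithm: score candidates with a reward function,
# keep the fittest, mutate them, repeat -- here aimed at a toy target bit
# string rather than an FPGA configuration.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def reward(candidate):
    """How close the candidate is to the ideal behaviour (here: the target bits)."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(200):
    population.sort(key=reward, reverse=True)
    best = population[:10]                                    # selection
    population = best + [mutate(random.choice(best)) for _ in range(20)]
    if reward(population[0]) == len(TARGET):                  # perfect score reached
        break

print(generation, population[0])
[/code]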
 

MrPickins

Diamond Member
May 24, 2003
9,125
792
126
To this I call your attention to FPGAs, genetic algorithms, and the LISP programming language. There was an article a few years back that I am trying to find/remember, but essentially, an experiment was set up to design some type of signal rectifier (or amplifier, or something to that effect). We know the ideal solution would transform the signal in a certain measurable way and not add distortion, noise, or other interference. LISP code was written to control the configuration of an FPGA. They gave the LISP program all the tools to make any change possible to the FPGA, set up a "reward" function for producing designs which came closest to the ideal transformation they were trying to make, gave it a standard design for that operation on the FPGA as a starting point, and let it change the design and learn how to manipulate the circuit to do what we wanted it to do. After a few generations, it had designed a circuit whose workings human circuit design engineers could no longer figure out, but it worked better than the best known design for that type of circuit. There was a portion of the circuit which was not even connected to the rest of the system, yet if you removed it, the circuit no longer functioned properly! It had learned how to design this circuit more efficiently.

I found another article while searching for the one I was speaking about. In this case, they had a computer design a circuit that would differentiate between two different audio tones (1 kHz and 10 kHz) using just 100 logic gates (a normal human design would typically take a few thousand). I won't go into the details, but let us just say that no one knows how the design of the circuit works, only that it uses a fraction of the resources that a human design would. It even responded to voice commands to start and stop analysis, all in that same 100 logic gates!

http://www.damninteresting.com/on-the-origin-of-circuits

It seems that disabling the clock caused the evolved circuits to act as analog devices dependent on the physical characteristics of the individual chip.

Damn interesting, indeed. :thumbsup:
 
May 11, 2008
22,557
1,471
126
It seems that disabling the clock caused the evolved circuits to act as analog devices dependent on the physical characteristics of the individual chip.

Damn interesting, indeed. :thumbsup:


That is interesting indeed.

IMHO, because this is very similar to how life functions: constant transformation of energy by EM radiation acting on atoms and the parts that make up atoms. Because it is happening on such a large scale, it appears to be chaotic. For example, inside a cell or inside living tissue it is not uncommon for surrounding EM radiation to be used. Take heat: heat can be transferred through conduction, thermal radiation and convection, and this is often used to move around and to energize reactions. The environment is just as important as the individual elements and what those elements are made of.

To come back to the chip: with such a chip, the analog behavior (electron and hole noise, and EM radiation noise) causes seemingly random behavior. But it is not as random as it seems.
By raising the temperature, while having knowledge of the elements used, you can actually control the behavior of the chip even when it is working in "analog" mode. Those of us who work with electronic components know the resulting effects of tolerances and temperature ranges. When looking at it from a different point of view (how to control chaos and use chaos to transfer information) it is quite fascinating. And because it comes so close to the natural way the human mind works, it is easier to understand the effects.
I may sound like some mad scientist, but anybody who has ever designed an electronic circuit and its copper trace layout, and calculated/tested the resulting device across a wide temperature range (and in an EM-noisy environment), understands that this is normal behavior and the reason why components have specifications.
Physics can never be ignored and must always be used. This is the wheel of fate. (The title of a song i am listening to now: DJ Phrenetic - The Wheel Of Faith ... ^_^ )
 
May 11, 2008
22,557
1,471
126
I just read the complete article.
It is truly fascinating.
The funny thing is that the moral of the story is that with no clock signal to synchronize and only a limited number of logic gates, the evolutionary software came up with an analog solution to discriminate between two electrical signals of different frequencies.

Ordinarily, using analog circuits to discriminate between two tones is not that difficult: just use two tuned resonant circuits at the two desired frequencies. When the Q of the LC circuit (or of an RC circuit with amplifiers producing phase reversal, i.e. oscillators) is properly chosen in a proper circuit, it will work. But this requires pretty large components or layouts. What is fascinating is that the evolutionary program came up with a solution using the physical layout of the gates on the chip, the interference between these gates, and the individual physical properties of each gate. And that for low frequencies of 1 kHz and 10 kHz. It is intriguing.
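For scale, the resonance arithmetic (with a made-up capacitor value) shows why the conventional two-filter approach needs physically large parts at these frequencies:

[code]
# f = 1 / (2*pi*sqrt(L*C)), solved for L with an assumed C of 100 nF.
from math import pi

C = 100e-9   # farads, an arbitrary but plausible choice

def inductor_for(freq_hz):
    return 1.0 / ((2 * pi * freq_hz) ** 2 * C)

for f in (1_000, 10_000):
    print(f"{f} Hz -> L = {inductor_for(f) * 1000:.1f} mH")
# ~253 mH for 1 kHz and ~2.5 mH for 10 kHz: the 1 kHz inductor is a physically
# big part, which is why squeezing this into 100 on-chip gates is so surprising.
[/code]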

To explain the spoken commands: the words "go" and "stop" have specific characteristics in the audio frequency spectrum. "Stop" is of higher pitch than "go". Using only that information, a sharp frequency filter can be used to select between go and stop, ignoring all the other information.

What is more amazing is that the program code only worked for that chip, because it was optimized using the behavior of that chip. The playing field (aka the environment) determined the final evolutionary solution to the problem. The program code did not work on another FPGA. This also gives me the idea that the evolutionary program used a computational method to predict possibly useful intermediate results, and a trial-and-error method for testing promising intermediate results.

This music fits :

The art of (EM)noise - Moments in love.

http://www.youtube.com/watch?v=Dl1FnycngW8
 

Aikouka

Lifer
Nov 27, 2001
30,383
912
126
Human A.I. is an interesting subject, because it involves what boils down to essentially "thinking about thinking." When you write a piece of software, you have to take this core problem and attempt to generate a pieced solution, but consider it from the vantage point of a computer. So, the real issue isn't just in the realm of computing, but I'd say that it also exists in the problem's relations to psychology.

I believe what makes a human A.I. difficult relates to devolving human thought into such primitive functions and how to actually handle said functions. When I sit around pondering certain primitive functions of thought, it usually comes down to "how the hell would I do that?" The biggest problem to me is data recollection, but I think that's a bit easier if you look at human recollection in a rudimentary way.

I recall mentioning a similar example in Off-Topic, so I'll reuse it here, but in a slightly simpler manner. Young children are a great example to look at, because it's a lot easier to monitor their learning. They're the ones that are currently learning the "basics" (all those things you consider "obvious"). For example, if a kid always saw Red Delicious apples (they're red) and then you showed him a Granny Smith apple (it's green), would he be able to realize that it's an apple? I've never seen a study on something like this, but I'd be willing to say no.

I believe we associate rudimentary descriptors (color, shape, etc) to items, which is why the child would be confused by the Granny Smith... to him, apples are red not green. In that same manner, I'd state that we also associate emotions with objects (including people). Although, now I'm a bit curious how events would be properly associated to include "who" and "what". I could easily associate a person with an event, but what about parts of it... would they simply be sub-events? If Fred spilled a drink on my favorite shirt, I could create an associated event (underneath the party) that links to Fred containing some sort of emotion relating to being unhappy (as well as maybe a link to said shirt).

Then comes the biggest problem... wonderful, you have a huge spiderweb of information, but how do you find something? :p
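To make that concrete, a toy version of the spiderweb (all the items and descriptors here are invented) might store descriptors and emotional links and just walk from whatever cue you start with:

[code]
# Toy "spiderweb" of associations: items carry descriptors and emotional links,
# and retrieval is a walk from whatever cue you happen to start with.

items = {
    "red delicious":  {"kind": "apple", "color": "red",   "emotion": None},
    "granny smith":   {"kind": "apple", "color": "green", "emotion": None},
    "favorite shirt": {"kind": "shirt", "color": "blue",  "emotion": "unhappy",
                       "linked_to": ["Fred", "the party"]},
}

def find(**cues):
    """Return every item whose stored descriptors match all the given cues."""
    return [name for name, desc in items.items()
            if all(desc.get(k) == v for k, v in cues.items())]

print(find(kind="apple"))        # both apples, whatever their color
print(find(color="red"))         # only what the child has actually seen
print(find(emotion="unhappy"))   # the shirt, and through it Fred and the party
[/code]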

What made Watson a bit more disappointing to me is that it was more of a really smart "database parser." It was fed pre-programmed information with a capability to understand questions at a much more complex level. I don't recall if it could "learn" any sort of nuances though.
 

iCyborg

Golden Member
Aug 8, 2008
1,353
62
91
I found another article while searching for the one I was speaking about. In this case, they had a computer design a circuit that would differentiate between two different audio tones (1 kHz and 10 kHz) using just 100 logic gates (a normal human design would typically take a few thousand). I won't go into the details, but let us just say that no one knows how the design of the circuit works, only that it uses a fraction of the resources that a human design would. It even responded to voice commands to start and stop analysis, all in that same 100 logic gates!

http://www.damninteresting.com/on-the-origin-of-circuits
Awesome article, thanks for the link. Amazing how in a resource-strapped environment (only 100 gates) the process uses everything and anything to get the job done, including electrical quirks of the particular FPGA and the non-digital nature of the currents in the transistors.
 
May 11, 2008
22,557
1,471
126
Human A.I. is an interesting subject, because it involves what boils down to essentially "thinking about thinking." When you write a piece of software, you have to take this core problem and attempt to generate a pieced solution, but consider it from the vantage point of a computer. So, the real issue isn't just in the realm of computing, but I'd say that it also exists in the problem's relations to psychology.

It sure is an interesting subject.
It is a problem that can only be solved with a combination of hardware and software. Computers have data paths. The brain also uses data paths. But with a computer these are binary coded data lines: 3 data lines can carry 8 different combinations of data. I always wonder if neurons use PWM signals to transfer information over a very large data path.
Current computers use caching strategies to get data over a relatively narrow data path. But it is always a limited set of data, because of the physical limitations of the data path and the cache size. Imagine a CPU with a core-to-memory connection of 2^20 data lines (if you think that is a lot, it is, but IIRC current state-of-the-art designs already use internal data paths of 1024 bits). That would be extreme, and current core designs would never be able to handle such an amount of data; there are just not enough cores to process all that information, quite apart from the physical constraints.

To do it with a "small" cpu :
In software i hope to one day write an "os" where the kernel(actually a process always running the highest priority) spawns processes based upon data. Each new form of input data will spawn a process where process can bid up against each other for priority. The process with the highest priority will get most execution time. Thus determining the behavior of the system. It will have motivators similar as we have emotions and instincts to give processes another boost in priority. Up to this point i will probably be able to create it and solve issues.

This is the hard part, and i will not be able to do it because i am not a math prodigy; i can calculate a bit but that is where it ends: the data will be compressed depending on the data itself. Luckily, compression systems such as JPEG actually resemble this feature. And the compression tag will be based on the motivator (emotion).
Thus the tag will determine which data has to be decompressed. Again, the emotion determines the data as well.
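A rough sketch of the bidding idea (all the names and weights below are made up, this is only the skeleton): every piece of input spawns a process, a motivator boosts its bid, and the highest bid gets the next time slice.

[code]
# Rough sketch of the bidding "os": every new piece of input data spawns a
# process, motivators (the emotion/instinct analogue) boost its bid, and the
# process with the highest bid gets the next slice of execution time.

MOTIVATORS = {"pain": 5.0, "hunger": 2.0, "curiosity": 1.0}   # made-up weights

processes = []   # list of dicts: {"name": ..., "bid": ...}

def spawn(name, urgency, motivator):
    processes.append({"name": name, "bid": urgency * MOTIVATORS.get(motivator, 1.0)})

def schedule():
    """Pick the highest bidder for the next time slice."""
    return max(processes, key=lambda p: p["bid"])["name"]

spawn("investigate noise", urgency=2.0, motivator="curiosity")
spawn("find food",         urgency=1.5, motivator="hunger")
spawn("pull hand back",    urgency=1.0, motivator="pain")

print(schedule())   # "pull hand back" wins: 1.0 * 5.0 beats 3.0 and 2.0
[/code]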


I believe what makes a human A.I. difficult relates to devolving human thought into such primitive functions and how to actually handle said functions. When I sit around pondering certain primitive functions of thought, it usually comes down to "how the hell would I do that?" The biggest problem to me is data recollection, but I think that's a bit easier if you look at human recollection in a rudimentary way.

See above. The trick is not to think how you would do that. You are the result of interaction between neurons inside an environment (your body), a constantly modulated oscillation. Neither I nor anybody else is different on this point. The trick is to ask how the brain would do it.

I recall mentioning a similar example in Off-Topic, so I'll reuse it here, but in a slightly simpler manner. Young children are a great example to look at, because it's a lot easier to monitor their learning. They're the ones that are currently learning the "basics" (all those things you consider "obvious"). For example, if a kid always saw Red Delicious apples (they're red) and then you showed him a Granny Smith apple (it's green), would he be able to realize that it's an apple? I've never seen a study on something like this, but I'd be willing to say no.
This i believe is also reality. If someone has never seen a mirror and his or her reflection, how would that person respond to a reflection in a mirror? I would think that person would at first assume there is another person. After learning, the wrong assumption disappears. And here learning means not drawing the wrong conclusion.

I believe we associate rudimentary descriptors (color, shape, etc) to items, which is why the child would be confused by the Granny Smith... to him, apples are red not green. In that same manner, I'd state that we also associate emotions with objects (including people). Although, now I'm a bit curious how events would be properly associated to include "who" and "what". I could easily associate a person with an event, but what about parts of it... would they simply be sub-events? If Fred spilled a drink on my favorite shirt, I could create an associated event (underneath the party) that links to Fred containing some sort of emotion relating to being unhappy (as well as maybe a link to said shirt).

Then comes the biggest problem... wonderful, you have a huge spiderweb of information, but how do you find something? :p

What made Watson a bit more disappointing to me is that it was more of a really smart "database parser." It was fed pre-programmed information with a capability to understand questions at a much more complex level. I don't recall if it could "learn" any sort of nuances though.

I too think the brain has a fixed set of descriptors to start with and later just increases the number of descriptors for a memory to increase detail. When you think, it is no different from when you look around. Only a small part is sharp and in focus (meaning detailed). All the rest is blurred and is of no interest until you change your focus to another point of interest. We are visually based life forms.


If you teach yourself to think while keeping everything in focus (keeping track of everything), you may experience the scary feeling of "disappearing"; i know i do. But it sure is a kick. It is similar to conscious dreaming (lucid dreaming). But i have to admit it is a quick way to a "burn out". If you can no longer concentrate on, keep focus on, or enjoy even simple details, life is no fun. :(
 

Aikouka

Lifer
Nov 27, 2001
30,383
912
126
It sure is an interesting subject.
It is a problem that can only be solved with a combination of hardware and software. Computers have data paths. The brain also uses data paths. But with a computer these are binary coded data lines: 3 data lines can carry 8 different combinations of data. I always wonder if neurons use PWM signals to transfer information over a very large data path.

I'm not so certain that there's really as much of a hardware issue as people like to put forth. Human artificial intelligence requires that we emulate the way a human brain functions on whatever piece of target hardware is chosen. But there are really two key words in what I just said: emulate and functions. I'll get into more of that later.

In software i hope to one day write an "os" where the kernel (actually a process always running at the highest priority) spawns processes based upon data.

Is it supposed to be a representation of the senses?

See above. The trick is not to think how you would do that. You are the result of interaction between neurons inside an environment (your body), a constantly modulated oscillation. Neither I nor anybody else is different on this point. The trick is to ask how the brain would do it.

I think we may be talking about the same thing. When I refer to "how would I do that?", I mean, "how would I mimic that functionality in a computer?" That's really what we're aiming to do... take the functionality that a human brain uses and mimic/emulate it. It will most likely boil down to attaining and correctly processing sensory data.


This i believe is also reality. If someone has never seen a mirror and his or her reflection, how would that person respond to a reflection in a mirror? I would think that person would at first assume there is another person. After learning, the wrong assumption disappears. And here learning means not drawing the wrong conclusion.

You can have similar fun by putting a cat in front of a mirror and watching them :).

I too think the brain has a fixed set of descriptors to start with and later just increases the number of descriptors for a memory to increase detail. When you think, it is no different from when you look around. Only a small part is sharp and in focus (meaning detailed). All the rest is blurred and

I'm not certain that I'd use the word "fixed" there... it's too finite. I do think we tend to work with a smaller subset of descriptors at first, but that's because we use associative reasoning. If we cannot deduce the sensory information with a limited set of descriptors, we may need to ascertain more to better understand what we're dealing with.
 
May 11, 2008
22,557
1,471
126
I'm not so certain that there's really as much of a hardware issue as people like to put forth. Human artificial intelligence requires that we emulate the way a human brain functions on whatever piece of target hardware is chosen. But there are really two key words in what I just said: emulate and functions. I'll get into more of that later.

The issue is time, and time here means responsiveness.
A lot of information from the senses only has value for a specific moment.
After that moment has passed, the information loses value and becomes noise.

Is it supposed to be a representation of the senses?

That is something (i hope) i will be doing with separate hardware and software. The kernel i was writing about is only aware of the outside world through data in memory, via a unified interface (which i have not designed yet). The whole AI hardware/software combination has no direct interaction with sensors or actuators. It is the only way to get fast responsiveness, and it is automatically highly modular, similar to the brain.

I think we may be talking about the same thing. When I refer to "how would I do that?", I mean, "how would I mimic that functionality in a computer?" That's really what we're aiming to do... take the functionality that a human brain uses and mimic/emulate it. It will most likely boil down to attaining and correctly processing sensory data.

It is indeed.

You can have similar fun by putting a cat in front of a mirror and watching them :).
Yes indeed, or this little spider: Salticidae.

http://en.wikipedia.org/wiki/Jumping_spider
Jumping spider and mirror : :)
http://www.youtube.com/watch?v=iND8ucDiDSQ


I'm not certain that I'd use the word "fixed" there... it's too finite. I do think we tend to work with a smaller subset of descriptors at first, but that's because we use associative reasoning. If we cannot deduce the sensory information with a limited set of descriptors, we may need to ascertain more to better understand what we're dealing with.

I have no doubt at all that the brain has the ability to modify existing descriptors into more complex ones. Maybe it copies an existing descriptor and adapts it to new incoming data that is experienced a lot. I do not know.
 

WHAMPOM

Diamond Member
Feb 28, 2006
7,628
183
106
maybe i've been drinking and/or smoking too much pot, but the thought occurred to me that true 'artificial intelligence' doesn't seem possible or useful.

we have machines that can take input data (from sensors, other computers, user input, whatever), process it according to preprogrammed parameters and perform the output functions it deems appropriate. they are of course capable of 'learning' (e.g. adaptive strategies in computer-controlled car engines), but only by way of preprogrammed data that essentially tells the computer HOW to learn. we could essentially make a 'perfect' humanoid robot if we could overcome the engineering problems (extremely sophisticated input and output devices, power issues, processing power, an insane amount of programming, etc).

but to make it truly 'human,' it would need to be flawed. logic is often incorrect and irrational. swayed by emotion. all that jazz.

so exactly what constitutes AI? seems like a grey area rather than a 'we finally did it!' type thing. it's kind of like curing a disease like cancer- continuously better treatment will be devised, but we're unlikely to ever actually 'cure' it.

forgive me if this seems highly stupid, my mental consternation on this subject is something that's hard to express. it's just that i don't see what we typically call 'AI' as even being close to such. take a movie like, say, i robot (and no, i've never read the asimov stuff); the protagonist robot is supposed to be 'intelligent,' but constantly talks about how he was programmed, and uses calculations and logic to make decisions. the only thing that seems to make him 'intelligent' is emotion, which seems like a bad thing for a computer to have.

there's the 'self-awareness' thing, but i don't get that either. wouldn't you have to tell the computer that it's a computer before it would realize such? wouldn't it still 'learn' through preprogrammed logic and inputs? and skynet didn't have emotion. so why was it such a dick to everyone? :X knowing what i do about computers (a workable knowledge but it's not like i have a CE or EE degree), the whole 'AI' concept just seems weird when you think hard about it. and no, wikipedia has not helped my confusion, it just seems to reinforce what i said about AI development being an ongoing process with no clear end.

anyway: discuss. or give me links to read (good links, don't just google the same shit i already have).

also hi AT i miss posting here.

Adding two plus two and getting five.
 
May 11, 2008
22,557
1,471
126
There is progress :

Researchers have made a sensor module to emulate the pressure and temperature sensors in the skin.

http://www.dailytech.com/Robots+Become+More+Lifelike+with+Sensory+Skin/article22047.htm


Imagine that a patch of artificial "skin" could be bought where the sensors are laid out in a grid or array. Each sensor can be read out by addressing it in an XY format and then reading analog values from an ADC. For example, 4 by 4 sensors.

For example: a hair-like antenna inside a gel mass that has little carbon particles in it, with an x/y/z arrangement (just conductive contacts) to measure the position of the "hair" sensor in 6 directions by way of resistance changes. If the "hair" could move like a little joystick, then relative motion could be detected from the 16 hairs, and even speed, when something brushes up against it. Combine it with a heat sensor in the middle; that could be fun.
It would have to be read at high speed and averaged, to reduce measurement error and to not lose the very important "time" factor.
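As a sketch, reading such a 4 by 4 patch could look like this (the read_adc function below is a made-up stand-in for the real hardware access): scan every cell by XY address, sample quickly, average to knock down the error, and keep a timestamp so the time factor is not lost.

[code]
# Sketch of scanning a hypothetical 4x4 sensor patch: address each cell in XY,
# sample it several times quickly, average to reduce measurement error, and
# timestamp the frame so the "time" factor is preserved.
import random, time

def read_adc(x, y):
    """Stand-in for the real ADC read at grid address (x, y)."""
    return 512 + random.randint(-8, 8)   # fake pressure value with noise

def scan_patch(size=4, samples=8):
    stamp = time.monotonic()
    frame = [[sum(read_adc(x, y) for _ in range(samples)) / samples
              for x in range(size)]
             for y in range(size)]
    return stamp, frame

stamp, frame = scan_patch()
for row in frame:
    print([round(v, 1) for v in row])
[/code]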
 

pw38

Senior member
Apr 21, 2010
294
0
0
Why is Star Trek: First Contact the first thing I thought of when I read your post? lol