Idea About Intelligence Based on Collections

mbass

Junior Member
May 2, 2007
2
0
0
So I ran across this comic someone had on their door. The comic has some doctor (or something) standing next to another guy (a biologist) who is telling him that medicine is just applied biology. Then another guy is standing down a little ways and spouts out that biology is just applied chemistry. Of course a little further along is a guy mentioning that chemistry is just applied physics. Then a good stretch further is a guy all by himself saying how physics is just applied math.

So yeah, this was in the Mathematics department of a university, as you might have guessed. It occurs to me though that you can continue that line of thinking, in that math is just applied logic - the same basic logic that integrated circuits perform bit by bit, which allows us to program them to model math, physics, chemistry, etc. for us. Consider then that logic can be described as applied sets - the various ways that collections can be compared to each other.
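
To make that last step concrete, here is a tiny, purely illustrative Python sketch (the "situations" and propositions are made up) showing how the basic logical connectives line up with ordinary set comparisons:

# Minimal sketch: propositional connectives expressed as set operations.
# Each "proposition" is modelled as the set of situations where it is true.
universe = set(range(8))              # all possible situations (toy example)
raining = {0, 1, 2, 3}                # situations where it is raining
cloudy = {0, 1, 2, 3, 4, 5}           # situations where it is cloudy

and_set = raining & cloudy            # conjunction  ~ intersection
or_set = raining | cloudy             # disjunction  ~ union
not_set = universe - raining          # negation     ~ complement
implies = raining <= cloudy           # implication  ~ subset test

print(and_set, or_set, not_set, implies)   # {0,1,2,3} {0,...,5} {4,...,7} True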

This intrigues me because there has been such an effort to create an artificial intelligence based on the computer systems we have built on logic operations. But when I try to think how a child's first thoughts might form, I envision the various physical senses bombarding the brain with collections of images, sounds, and other feelings. How does the brain know how to start putting meaning to things? It makes sense that it would start by comparing the collections that it receives and trying to find trends (not so different from basic intersection and union comparisons). As some collections prove to appear more and more often (indicating increased value over other collections or concepts), the brain would take those collections as they exist in memory and make them a little more solid. That way, the less interconnected concepts are allowed to fade away.

Here's a firmer hypothesis that might explain what I mean more clearly (while I'm sure it is wildly inaccurate):

= - = - =
To begin, the brain's memory is essentially blank, and data begins to flood in. The brain caches what it can, but is quickly filled to capacity with this raw, uncompressed data. Then comes the role of sleep, or unconsciousness due to the brain meeting its capacity. While unconscious, the brain examines more closely those collections it has cached, making comparisons between what the senses provided, when it was provided, in what combinations, etc. As comparisons are made and certain collections are elevated in value, those collections are "copied" to a more permanent memory. The brain's temporary cache is later purged, and a new day (a new period of consciousness) begins.

On day 2, as collections of data flood in via the senses, they are compared on the fly to the previously elevated collections, since those are known to be more likely to have relevance. When duplication is encountered, the temporary cache replaces the presence of the entire collection with simply a pointer to the location of the identical, elevated collection. New relevant data is marked in the temporary cache as requiring elevation (this elevation must wait until the next period of unconsciousness). In this way, temporary memory can hold more data when it is relying heavily on well-established concepts. The presence of many new concepts, however, would not be able to draw as heavily on previous concepts and would therefore result in temporary memory being filled to capacity more quickly. Either way, meeting capacity brings on the onset of sleep.
= - = - =
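
For what it's worth, here is a minimal Python sketch of that two-phase idea; the capacity, threshold and names are all invented just to make the hypothesis concrete. Incoming collections are cached until capacity, a "sleep" pass promotes the ones seen most often to long-term storage, and afterwards duplicates are stored as cheap pointers to the promoted copies:

from collections import Counter

CACHE_CAPACITY = 10          # arbitrary toy limit
PROMOTION_THRESHOLD = 2      # seen at least this often -> "elevated"

long_term = {}               # promoted collections, keyed by a stable id
cache = []                   # today's raw collections (or pointers)

def sense(collection):
    """Cache one incoming collection; reuse a pointer if already promoted."""
    key = frozenset(collection)
    if key in long_term:
        cache.append(("ptr", key))        # pointer to the elevated copy
    else:
        cache.append(("raw", key))        # raw, uncompressed data
    return len(cache) >= CACHE_CAPACITY   # True -> time to sleep

def sleep():
    """Consolidate: promote frequent raw collections, then purge the cache."""
    counts = Counter(k for tag, k in cache if tag == "raw")
    for key, n in counts.items():
        if n >= PROMOTION_THRESHOLD:
            long_term[key] = n
    cache.clear()

# day 1: everything is new, the cache fills quickly, sleep() promotes the repeats
# day 2: familiar input is stored as cheap pointers, so the cache lasts longer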

Like I said, all of this is likely way off from reality. However, I think it does offer some clues as to why intelligence is so hard to model with logic. Namely, because it is a lower concept than logic, not a higher concept. I am eager to hear what others think on the subject.
 

Gibsons

Lifer
Aug 14, 2001
12,530
35
91
this?
purity.png
 

wuliheron

Diamond Member
Feb 8, 2011
3,536
0
0
The human brain is unlike any computer in existence. Not only can we program the brain, but the actual circuitry changes in response to the program which, in turn, affects how the programs run. Thus it falls under the category of Systems Science.

Likewise, mathematicians and logicians ultimately defer to philosophers. There is no single mathematics or logic that describes everything we observe but, rather, a large collection of them that is still growing to this day, thanks in no small part to the work of philosophers of logic and mathematics. The very idea that everything is "mathematical" is an ancient philosophical one, debated to this day, and it is certainly not based on any mathematical proof or empirical evidence.
 

Lemon law

Lifer
Nov 6, 2005
20,984
3
0
Perhaps some of the flaw in the argument is the fact that the human brain is born pre-programmed. Even a very young infant is born with a fear of heights, has an innate ability to remember to breathe, and the list is semi-endless. In many lower animals, the amount of pre-programming tends to be higher the lower the level of evolution.

But still, our OP poses some very interesting questions. Are there experimental methods practically available to tease some of these answers out? And what are the enablers of this pre-programming: genes, nerve cells, chemicals in the brain, or some combination of those and other mechanisms? At least for lower animals there is a wealth of such experimental data.
 

alkemyst

No Lifer
Feb 13, 2001
83,769
19
81
While math is involved in everything, the other sciences are definitely 'different'.

Math is a pretty broad subject though.

If one wanted to simulate something, it indeed comes down to a lot of math/logic.
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
There was a very good NOVA episode on this last week talking about the Watson Jeopardy challenge and the problems they ran into developing the software. It also goes a lot into AI and the problem with just filling a PC with data and trying to form an AI. Jeopardy is a lot more complicated for a computer than I first thought. You can watch it online:
http://www.pbs.org/wgbh/nova/tech/smartest-machine-on-earth.html
 

SMOGZINN

Lifer
Jun 17, 2005
14,359
4,640
136
Like I said, all of this is likely way off from reality. However, I think it does offer some clues as to why intelligence is so hard to model with logic. Namely, because it is a lower concept than logic, not a higher concept. I am eager to hear what others think on the subject.

What it sounds like you are talking about is the difference between top-down and bottom-up approaches to AI. As others have posted, the human brain does not start as a blank slate; it almost certainly has some kind of natural system to categorize information from the get-go, and it develops better systems as it collects information. So it uses both a bottom-up and a top-down approach, meaning the real answer is probably some type of middle-to-the-outside approach. Realistically, IMHO, bottom-up approaches are closer to this than top-down ones, because the systems used will have a degree of order built into them already.

EDIT: Another interesting thing to consider is that human intelligence might not be the only way to reach AI. Even if our intelligence is not logic based, that does not mean that intelligence cannot be reached with a logical approach.
 
Last edited:

Weenoman

Member
Dec 5, 2010
60
0
0

The mind is not completely blank when we start out. Humans might not have the instincts of less mentally-equipped animals, but we do have instincts. Babies can swim, and are afraid of heights, without any conditioning, for example.
 

C1

Platinum Member
Feb 21, 2008
2,425
133
106
The human brain is unlike any computer in existence. Not only can we program the brain, but the actual circuitry changes in response to the program which, in turn, affects how the programs run. Thus it falls under the category of Systems Science.
Yes, philosophy.

If the human brain is so great, then who or what was smart enough to build it? The brain obviously didn't build itself since, at least as of this writing, it admits that it can't understand how "itself" even works (e.g., how does memory work? what is the source of consciousness?).

If you found a wristwatch lying on the moon, would you therefore conclude that it was just there by chance? Or would one be forced to conclude that such a thing was left there instead by an intellect greater than the workings of the watch?

So tell me then, who or what built the human brain?

And while you're at it, consult that agent about your proposed intelligent design.
=====================
Where is Bill Gaatjes when you need him?
=====================
"Women who seek to be equal with men lack ambition." - Marilyn Monroe
 
Last edited:
May 11, 2008
23,331
1,575
126
I always have one question when thinking about the brain.

How do we get from a "simple" system that responds to external stimuli in an environment by the use of its senses, to a system that actually predicts situations in that environment that have not yet happened, are going to happen, or never will happen?

In order to propose an answer to that question, we can look at insects, for example, but also at small single-celled organisms. Here actions are performed as a response to external stimuli.

We can also suggest that copying genes is easier than waiting for random chance to provide the proper gene through foreign viruses, bacteria, fungi and other environmental stimuli such as ionizing radiation and toxic chemicals.
So we assume that copying genes, and therefore copying an existing function, is easier than getting a gene that would be handy but depends on chance.

We know that at the most basic level there is instinct, acting only on external stimuli.
IIRC the most primitive version of the reptile brain seems to do this: the brain stem or the basal ganglia.

Then we get the limbic system. In combination with emotions this system can learn from the environment.

Then we get the neocortex.

Now let's take this weird and incomplete idea:
Suppose that somewhere, someplace long ago, the genes with the building plan and the migration plan for neurons got copied. Under normal circumstances these neurons would connect to senses such as the eyes, ears and touch. Let's imagine that a creature came to life a long time ago, somewhere on this planet, with a double set of neurons: one set of neurons that is actually wired to the senses or sensors, and another set of the same neurons that are not connected to sensors but are connected to the neurons processing the information from the senses, and to themselves. What we would get is one set of neurons processing sensor stimuli and another set of neurons processing the result of the neurons processing the information from the sensors, while also responding to neurotransmitter changes driven by emotions and instinctive behavior.

This special set of neurons is actually "convinced" it is using sensor information, but it is not. It is using processed sensor information combined with memories. The creature can now learn about danger and avoid dangerous situations where sensor information is produced that matches stored sensor information, also known as memories. Or it can remember where food is.

Now we go one step further: another set of copied genes.
This last group of neurons does not use any direct sensor information at all, only indirect information. It uses only processed sensor information, known as memories: memories stored as electrical activity in the limbic system and hardwired memories in the cortex. This system is only performing simulations based on past events, non-stop, all the time. It can even use phantom memories. These phantom memories are just temporary; they do not exist for real. Phantom memories just aid in the simulation and are discarded afterwards. That is, if there is no psychological disorder that keeps these phantom memories active and creates delusions.
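
A rough Python sketch of that layering, with all names and data invented (nothing anatomical is implied): one layer reads the sensors, a second layer reads only the first layer and keeps memories, and a third layer runs simulations on memories plus temporary phantom items that are discarded afterwards:

# Rough sketch of the layering described above; all class names and data are invented.
class SensorLayer:
    def process(self, stimuli):
        # stand-in for feature extraction from raw sensor input
        return sorted(stimuli)

class MemoryLayer:
    def __init__(self):
        self.memories = []
    def process(self, features):
        self.memories.append(features)     # wired to the sensor layer, not to the sensors
        return self.memories

class SimulationLayer:
    def simulate(self, memories, phantoms=()):
        # runs only on stored memories plus temporary "phantom" items
        working_set = list(memories) + list(phantoms)
        prediction = len(working_set)      # placeholder for an actual prediction
        return prediction                  # the phantoms are discarded on return

sensors, memory, simulator = SensorLayer(), MemoryLayer(), SimulationLayer()
recalled = memory.process(sensors.process({"heat", "light"}))
print(simulator.simulate(recalled, phantoms=[["imagined", "predator"]]))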

I think that if we want to create AI, we need to create a system where highly compressed data can be used to do simulations. For instance, if a predator is hunting you or sneaking up on you, it is not important that that predator has a scar on the back of its head when it is about to attack you.

The answer to a self-learning AI is to find a way to compress data based on external stimuli. The way the data is compressed depends on the environment and on internal factors. Interesting here are autistic people, and especially savants. The level of detail some autistic people can process is amazing, almost like a photograph. But what happens to an autistic person when presented with an external stimulus such as, for example, a predator like a lion?
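
As a toy illustration of that kind of lossy, stimulus-driven compression (the features, salience scores and cutoff are made up), keep only what matters for the reaction and drop the rest, so the scar on the predator's head never reaches the simulation:

# Hypothetical salience scores for features of one observation (higher = more urgent)
observation = {
    "shape: large feline": 0.9,
    "movement: stalking": 0.95,
    "distance: close": 0.9,
    "scar on head": 0.05,
    "fur colour detail": 0.1,
}

SALIENCE_CUTOFF = 0.5   # arbitrary threshold for what survives compression

compressed = {feature: score for feature, score in observation.items()
              if score >= SALIENCE_CUTOFF}
print(compressed)       # the scar and fur detail are discarded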



On a side note:
A friend of mine and his wife have a lovely little 1-year-old guy. The father told me something interesting. Whenever the little guy has a learning moment, his face completely relaxes, his lower jaw muscles relax and the little toddler opens his mouth. He seems to freeze for a very short moment while having a "EUREKA!" moment.

At first the little guy, like most babies/toddlers, just copies behavior. If you lose your keys, for example, there is a pretty big chance the little toddler has copied your behavior of throwing stuff in the garbage bin. If he starts to mess with the stove, it is because he has basically learned that whenever you touch that stove, food comes. When you touch the remote, music and flashing pictures come. Thus the remote is interesting when the toddler is bored.




I apologize for the typing errors. My spell checker seems to have taken a vacation. I have to figure out why it is not active even though I have turned it on.
 
Last edited:

fail

Member
Jun 7, 2010
37
0
0

LOL. OK. Simple question:


"Namely, because it is a lower concept than logic, not a higher concept."


What are you trying to say? What do you mean by "lower" and "higher"? Are you making up new definitions? Please, no more vague crap in your reply.
 
Last edited:

wuliheron

Diamond Member
Feb 8, 2011
3,536
0
0
Yes, philosophy.

So tell me then, who or what built the human brain?

And while you're at it, consult that agent about your proposed intelligent design.


We could assume that someone made our brain, but that just raises the question of who made the person that made our brain, ad nauseam. Or we can merely defer to the observation that things change, becoming first more complex and then less so as time goes by.
 
May 11, 2008
23,331
1,575
126
Addendum to my other post :


The phantom memories I talked about can be seen as variables in an equation.

For example:
Let's say we imagine an apple moving forward. Then we have in our mind a compressed representation of an apple, to which we apply a compressed representation of a force. It is not really happening. It is a phantom memory of an apple that we apply a phantom memory to: the forward-pushing force vector. Here we see an apple, but in reality we no longer view the apple when we think of the force vectors, because we only think of the shape of the apple when we add it as another variable (aka phantom memory) as well. Thus, the imagined apple-and-force combination becomes more complex and more realistic. But we start with a simple baseline idea, and later on, through automatic pattern recognition, we add more variables.
Why? Because our brain automatically starts searching for matching patterns, and when applicable, these matching patterns (which are just real memories or phantom memories) are added. When something here goes wrong, meaning we can no longer discriminate between real memories and phantom memories, we get wrong answers or, worse, psychological disorders. Another point is that because of the constant adding of memories, the brain changes constantly, and as such the behavior changes constantly if this is not compensated for. In normal life we compensate daily by thinking about our own behavior and having a social life, also known as self-reflection.
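
Reading the apple example as code, under my own framing: the stored representation is a constant recalled from memory, while the force and duration are temporary phantom variables that exist only for the duration of the simulation:

long_term = {"apple": {"shape": "round", "colour": "red", "mass_kg": 0.2}}

def simulate_push(object_name, force_newton, duration_s):
    """Imagine pushing a remembered object; the inputs are phantom variables."""
    obj = dict(long_term[object_name])          # recalled representation (a copy)
    accel = force_newton / obj["mass_kg"]       # phantom force applied to the memory
    velocity = accel * duration_s               # simple kinematics as the "simulation"
    return velocity                             # the phantoms vanish when the call returns

print(simulate_push("apple", force_newton=1.0, duration_s=0.5))   # 2.5 m/s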


I posted this idea in another thread:
How to mix emotions with memories.

It seems new neurons are formed in the hippocampus, which is part of the limbic system. This system also plays an important part when it comes to emotions. Now let's say that, since old neurons die off and new neurons take their place, a neuron, when taking its position and being used as part of a long-term memory, is programmed in the limbic system to become more active during a certain specific emotion. Then, when that specific emotion is experienced, this neuron will respond more strongly than when other emotions are active. Its receptors would be tuned to a certain combination of neurotransmitters specific to that emotion. It is always assumed that neurotransmitters modulate emotion, but for some reason it also makes sense that it works the other way around: emotions modulate the release of combinations and types of neurotransmitters. If a memory has a more or less emotional charge, I would think that the neural connections recreating the memory that are polarized for a specific emotion would become more active, and as such the memory would be remembered more strongly when that specific emotional state is happening.

Mood is very important when recalling memories or even storing memories.
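
One hypothetical way to make that concrete in code (the mood tags and numbers are invented): tag each stored memory with the emotional state it was written under, and let recall strength grow with how closely the current mood matches that tag:

import math

# memories tagged with the (made-up) emotional mix present when they were stored
memories = [
    {"what": "dog bit me", "mood": {"fear": 0.9, "joy": 0.0}},
    {"what": "birthday party", "mood": {"fear": 0.0, "joy": 0.9}},
    {"what": "found coin on path", "mood": {"fear": 0.1, "joy": 0.4}},
]

def recall_strength(memory, current_mood):
    """Higher when the stored mood tag resembles the current mood."""
    distance = math.sqrt(sum((memory["mood"].get(k, 0.0) - v) ** 2
                             for k, v in current_mood.items()))
    return 1.0 / (1.0 + distance)

current = {"fear": 0.8, "joy": 0.1}
ranked = sorted(memories, key=lambda m: recall_strength(m, current), reverse=True)
print([m["what"] for m in ranked])   # fearful memories surface first in a fearful mood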


I have noticed that, generally speaking, people learn faster when they feel happy a large part of the time. When people feel negative all the time, they do not learn as easily or as quickly as when they have a positive feeling, mood and self-image. But I think that the continuous strain of stress hormones (Google Robert Sapolsky and neuro-degeneration for a wonderful explanation) is responsible for that decline in learning behavior. A short period of stress might actually increase learning capabilities independent of the active emotion.
The reason I think this is that in a situation with a lot of fear, people also learn. When a predator attacks you, you do not feel happy; you are afraid.

If we can assume the following:

The way to AI is to compress data based on external stimuli and internal factors.
The internal factors are emotion-driven compression of the data.
But when you do a search, you search for the compression factors, i.e. the emotion-driven compression, first. Then you search for the data. This allows for highly parallel processing and for large amounts of data being stored in the ALU itself. This speeds up processing and awareness greatly. This allows for self-learning systems.
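
A bare-bones sketch of that two-stage lookup, with invented bucket names: index stored items by their emotion-driven compression tag first, so a query only has to scan the matching bucket:

# stage 1 index: compression factor (here just an emotion label) -> stored items
store = {
    "fear": ["snake near the path", "loud bang at night"],
    "joy": ["warm meal", "sunny afternoon"],
}

def search(emotion_tag, keyword):
    bucket = store.get(emotion_tag, [])                    # stage 1: pick the bucket
    return [item for item in bucket if keyword in item]    # stage 2: scan only that bucket

print(search("fear", "snake"))   # -> ['snake near the path']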

There is no need to copy the neuron and all related biochemicals such as neurotransmitters; only the function the neuron and neurotransmitters perform is needed. And I think it can be captured by quasi-digital logic.

Logic where the ALU itself is part of the memory. Thus no separate bus to memory.

To run this system on traditional digital logic:
When you have an OS, the processes are executed according to process priority. Imagine that we have an OS that does not execute hardware interrupts. It solely executes processes based on priority, and we change the priority depending on the emotion and the occurring instinct (such as feeding time). The priority modifiers here are the emotions. The processes are the memories. With such a system, you can change the behavior based on external input (the processes) and internal input (priority).

However, the learning part is a separate issue. One solution could be that this system also needs to go to sleep in order to update its memories. When it is awake it can spawn as many processes as it has free RAM and execution units. It would scale indefinitely.
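
On traditional hardware that could be prototyped roughly like this; the emotion names, priority bonuses and scheduling rule are placeholders, not a claim about how the brain actually does it. Memories are processes, the current emotion is a priority modifier, and the scheduler simply runs whatever is most urgent:

import heapq

EMOTION_BONUS = {"fear": 5, "hunger": 3, "curiosity": 1, "neutral": 0}

def schedule(processes, emotion):
    """processes: list of (base_priority, name, relevant_to_emotion). Highest effective priority runs first."""
    bonus = EMOTION_BONUS.get(emotion, 0)
    heap = [(-(base + bonus if relevant else base), name)
            for base, name, relevant in processes]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order

tasks = [(2, "recall: where food was", True),    # relevant to the current emotion
         (4, "watch for movement", True),
         (3, "explore new corner", False)]
print(schedule(tasks, "hunger"))   # emotion bumps the relevant processes up the queue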
 

Cr0nJ0b

Golden Member
Apr 13, 2004
1,141
29
91
meettomy.site
The mind is not completely blank when we start out. Humans might not have the instincts of less mentally-equipped animals, but we do have instincts. Babies can swim, and are afraid of heights, without any conditioning, for example.

Tell that to my 1-year-old who literally walked off the second-floor staircase. I barely caught her after like 2 tumbles (carpet... no harm done). She just walked straight over the stairs, like the floor continued out, you know, like in the cartoons. She was scared, but unhurt. Once she checked out OK, I thought it was kind of funny. No fear, not even an idea of what had happened. Gates remained locked until she could figure out that crawling down the stairs backward was a much more sensible solution to the problem.
 

Weenoman

Member
Dec 5, 2010
60
0
0
Tell that to my 1-year-old who literally walked off the second-floor staircase. I barely caught her after like 2 tumbles (carpet... no harm done). She just walked straight over the stairs, like the floor continued out, you know, like in the cartoons. She was scared, but unhurt. Once she checked out OK, I thought it was kind of funny. No fear, not even an idea of what had happened. Gates remained locked until she could figure out that crawling down the stairs backward was a much more sensible solution to the problem.

Actually at certain ages children forget how to swim, as well.
 
May 11, 2008
23,331
1,575
126
Actually at certain ages children forget how to swim, as well.

It could be an urban legend and not true, but it seems that babies also have a period where they do not know fear. There are some YouTube videos of babies playing with dangerous animals. It seems that at this early stage the need for learning is more important than the need to recognize and respond to danger. It would make sense logically, and it can make sense evolutionarily too. A human baby is as helpless as newborns come. It cannot flee the environment when in danger. A giraffe or an antelope, for example, can run and walk almost from birth and so has some chance of survival. This, however, could significantly reduce the ability to learn, because a more hardwired brain from birth would be needed.


As a side note:
I still wonder if the early evolutionary ancestors of the human race were more predators than prey. Humans do not possess extraordinary defenses or strength, only the brain to create what the human body lacks. No claws or large sharp teeth? Create them. No furry skin to protect from the cold?
Use the scalped hide from an animal that has a furry skin. The brain allows us to solve issues without the use of brute strength or speed or other brute-force tactics. And we collect materials too; collecting is handy because it allows us to overcome uncertain futures, such as by collecting food or supplies.
Gathering in groups is handy too: it allows you to sleep while another stays on watch against other predators.


I do not think that the early humans were scavengers.
Too many possible pathogens from eating rotten dead animals would have caused much more differentiation in the human genome... But maybe I am wrong; I do not have all the data.
 

Weenoman

Member
Dec 5, 2010
60
0
0
It could be an urban legend and not true, but it seems that babies also have a period where they do not know fear. There are some YouTube videos of babies playing with dangerous animals. It seems that at this early stage the need for learning is more important than the need to recognize and respond to danger. It would make sense logically, and it can make sense evolutionarily too. A human baby is as helpless as newborns come. It cannot flee the environment when in danger. A giraffe or an antelope, for example, can run and walk almost from birth and so has some chance of survival. This, however, could significantly reduce the ability to learn, because a more hardwired brain from birth would be needed.

Hence my point, we have instincts. Not the kind of instincts that other animals have, but instincts none the less.
 
May 11, 2008
23,331
1,575
126
Hence my point, we have instincts. Not the kind of instincts that other animals have, but instincts none the less.

Indeed. A totally blank brain would be useless. There has to be some "boot" code to kick-start the brain. After that, it modifies itself as it learns. We even have some mixture of reptilian and primate instincts that we use, or are bothered by, during social interaction. Not handy at all, but we have them.
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
Yes, philosophy.

If the human brain is so great, then who or what was smart enough to build it? The brain obviously didn't build itself since, at least as of this writing, it admits that it can't understand how "itself" even works (e.g., how does memory work? what is the source of consciousness?).

Actually it did build itself. It grows spontaneously given the right environment.

If you found a wristwatch lying on the moon, would you therefore conclude that it was just there by chance? Or would one be forced to conclude that such a thing was left there instead by an intellect greater than the workings of the watch?

That's not a similar comparison. Biological systems are known to self-assemble. Simple mechanical devices are not.

So tell me then, who or what built the human brain?
Well, parents of course.

And while you're at it, consult that agent about your proposed intelligent design.

Ok, let's go even further than that. Prove to me that you exist.
 

Matt1970

Lifer
Mar 19, 2007
12,320
3
0
Kinda makes ya think a little. The more we think we have tapped into the brain, the more it seems we have barely even scratched the surface.
 

fail

Member
Jun 7, 2010
37
0
0
Yes, philosophy.

If the human brain is so great, then who or what was smart enough to build it? The brain obviously didn't build itself since, at least as of this writing, it admits that it can't understand how "itself" even works (e.g., how does memory work? what is the source of consciousness?).

If you found a wristwatch lying on the moon, would you therefore conclude that it was just there by chance? Or would one be forced to conclude that such a thing was left there instead by an intellect greater than the workings of the watch?

So tell me then, who or what built the human brain?

And while you're at it, consult that agent about your proposed intelligent design.
=====================
Where is Bill Gaatjes when you need him?
=====================
"Women who seek to be equal with men lack ambition." - Marilyn Monroe


WTF is up with this non sequitur crap? Is this in response to a comment wuliheron made in another thread?
 
Last edited:

spikespiegal

Golden Member
Oct 10, 2005
1,219
9
76
If you found a wristwatch lying on the moon, would you therefore conclude that it was just there by chance?

There's a golf ball and a rover with a dead battery on the moon, but I wouldn't conclude Tiger Woods played 18 there. You probably would though.

"Uh oh - we're going to talk about God now, aren't we? 'Cause if we are, I'm going to need another drink." - Dr. Quinn Burchenal / Red Planet

Getting back to this whole A.I. thing for a minute... I find it totally discouraging, and a sign of an utter lack of comprehension, that we define a well-written database algorithm such as Watson as 'intelligent'. If these software programs were truly A.I. in the sense that human beings have intelligence, then Watson would be randomly thinking about the boobs of its fellow competitor if it were male, or the types of shoes they were wearing if it were female. Watson can't do that, and neither can a chess-playing computer, because it's not in the software. It's not A.I.

Software will never be able to produce a truly non-code-origin A.I. All it can produce is something that mimics what we think is A.I., because we're perceptually too lazy to care.

I also find it utterly absurd that we're still trying to encode the universe as a granular representation of binary logic, simply because we haven't got technologically beyond switch-based computing. It's either on, or it's off, and if we do that really, really fast it kinda looks like it's smart. My understanding of physics is that no two points in the universe, and no two particles, have the exact same physical characteristics at any given time, but our system of math and computer engineering relies on this being the case.
 
Last edited:

Weenoman

Member
Dec 5, 2010
60
0
0
Software will never be able to produce a truly non-code-origin A.I. All it can produce is something that mimics what we think is A.I., because we're perceptually too lazy to care.

Since when is something code-origin automatically not A.I.?

I know this is highly technical and we all just like to stack our posts with S.A.T. vocab and mix in completely irrelevant philosophical ramblings, but could we please think through statements before we make them?
 
May 11, 2008
23,331
1,575
126
There's a golf ball and a rover with a dead battery on the moon, but I wouldn't conclude Tiger Woods played 18 there. You probably would though.

"Uh oh - we're going to talk about God now, aren't we? 'Cause if we are, I'm going to need another drink." - Dr. Quinn Burchenal / Red Planet

Getting back to this whole A.I. thing for a minute... I find it totally discouraging, and a sign of an utter lack of comprehension, that we define a well-written database algorithm such as Watson as 'intelligent'. If these software programs were truly A.I. in the sense that human beings have intelligence, then Watson would be randomly thinking about the boobs of its fellow competitor if it were male, or the types of shoes they were wearing if it were female. Watson can't do that, and neither can a chess-playing computer, because it's not in the software. It's not A.I.

Software will never be able to produce a truly non-code-origin A.I. All it can produce is something that mimics what we think is A.I., because we're perceptually too lazy to care.

I also find it utterly absurd that we're still trying to encode the universe as a granular representation of binary logic, simply because we haven't got technologically beyond switch-based computing. It's either on, or it's off, and if we do that really, really fast it kinda looks like it's smart. My understanding of physics is that no two points in the universe, and no two particles, have the exact same physical characteristics at any given time, but our system of math and computer engineering relies on this being the case.

That is why I think that part of the trick lies in the averaging and integrating that neurons do. I would not even be surprised if there is also a form of differentiator, to react fast when needed despite the physical limits, although that would be more susceptible to errors.

IMHO the averaging algorithm neurons use to decide whether or not to fire looks to me like one PWM signal, or several PWM signals, that get added and averaged. A threshold level needs to be reached before the neuron will fire along the axon. This threshold can be neurotransmitter based. That gives us three variables: the width of the input signal, the repetition time of the input signal, and the threshold of the input. When looking at synapses, this should work when thinking of the effects of inhibiting the re-uptake of neurotransmitters.

It seems several types of neurons exist. Some have limited capabilities but a fast response and relay time. Others are capable of what seems like simple 1-bit arithmetic but are much slower.

The neurons are memory and ALU at the same time. And when averaging is used, the threshold is high enough that the neuron will not respond to every signal on its dendrites, so a noisy environment is not a problem. I am fairly sure there is even a chemical version of hysteresis present inside neurons, creating what we would call in digital logic a Schmitt-trigger input.
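
A crude digital stand-in for that averaging-plus-hysteresis behaviour (the leak rate and thresholds are arbitrary): the unit leaks toward zero, integrates incoming pulses, and uses two thresholds, Schmitt-trigger style, so noise around a single level cannot make it chatter:

class LeakyNeuron:
    """Toy leaky integrator with Schmitt-trigger style firing thresholds."""
    def __init__(self, leak=0.9, fire_at=1.0, reset_at=0.3):
        self.leak = leak          # fraction of the potential kept each step
        self.fire_at = fire_at    # upper threshold: start firing
        self.reset_at = reset_at  # lower threshold: stop firing (hysteresis gap)
        self.potential = 0.0
        self.firing = False

    def step(self, pulse):
        self.potential = self.potential * self.leak + pulse   # average/integrate the input
        if not self.firing and self.potential >= self.fire_at:
            self.firing = True
        elif self.firing and self.potential <= self.reset_at:
            self.firing = False
        return self.firing

n = LeakyNeuron()
inputs = [0.3, 0.4, 0.6, 0.0, 0.0]
print([n.step(p) for p in inputs])   # fires once the accumulated input clears the threshold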

When we have logic circuits where each gate is an op-amp that can be set up as an integrator, a differentiator, or a comparator with adjustable hysteresis, adjustable threshold and adjustable gain, it will be quite easy.
The output is a monostable multivibrator whose firing depends on the input.
Because then you get:

An analog system that is relatively slow but also immune to noise,
where the speed of the signal across these artificial neurons is directly proportional to the thresholds of the inputs. Thus when the thresholds are smaller, the inputs become more susceptible to noise. This would be analogous to geniuses who are also on the verge of insanity: the freedom to think without boundaries is also the freedom to step into insanity.

An analog system where ALU and memory are the same,
and where part of the coding of the memory can be stored not only in the connections between artificial neurons but also in the threshold value and in the function the artificial neuron performs. Just as with real neurons.

Real neurons, just like all cells, die when their ATP reserves are depleted, because the internal biochemical state machine comes to a halt. This state machine performs clean-up, re-uptake, re-initialization, and the production of the proteins needed to keep going. ^_^

It is not the absolute value that is important; it is the relative distance between two values that is important. A floating average with a floating hysteresis.