So I ran across this comic someone had on their door. The comic has a doctor (or something) standing next to another guy (a biologist) who is telling him that medicine is just applied biology. A little ways down, another guy spouts out that biology is just applied chemistry. Of course, a little further along is a guy mentioning that chemistry is just applied physics. Then, a good stretch further, a guy standing all by himself says that physics is just applied math.
So yeah, this was in the Mathematics department of a university, as you might have guessed. It occurs to me, though, that you can continue that line of thinking: math is just applied logic. The same basic logic that integrated circuits perform bit by bit, which allows us to program them to model math, physics, chemistry, and so on for us. Consider then that logic can be described as applied sets - the various ways that collections can be compared to each other.
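To make that last step concrete, here's a tiny Python sketch of what I mean by "logic as applied sets." Treat each proposition as the set of situations where it holds, and the logical connectives fall right out of plain set operations. (The "worlds" and propositions here are invented just for the example.)

```python
# Each proposition is modeled as the set of worlds where it is true.
worlds  = {"w1", "w2", "w3", "w4"}   # all possible situations
raining = {"w1", "w2"}               # worlds where "it is raining" holds
cold    = {"w2", "w3"}               # worlds where "it is cold" holds

and_ = raining & cold                # AND -> intersection: {"w2"}
or_  = raining | cold                # OR  -> union: {"w1", "w2", "w3"}
not_ = worlds - raining              # NOT -> complement: {"w3", "w4"}

print(and_, or_, not_)
```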
This intrigues me because there has been such an effort to create an artificial intelligence based on the computer systems we have built on logic operations. But when I try to think how a child's first thoughts might form, I envision the various physical senses bombarding the brain with collections of images, sounds, and other sensations. How does the brain know how to start putting meaning to things? It makes sense that it would start by comparing the collections it receives and trying to find trends (not so different from basic intersection and union comparisons). As some collections prove to appear more and more often (indicating increased value over other collections or concepts), the brain would take those collections as they exist in memory and make them a little more solid. That way, the less interconnected concepts are allowed to fade away.
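Here's a toy Python sketch of that trend-finding idea: intersect incoming collections pairwise, and "solidify" the shared cores that keep recurring. The feature names and the threshold are made up for the example; the point is just that recurrence alone can pick out stable concepts.

```python
from collections import Counter
from itertools import combinations

# Hypothetical sense "snapshots": each moment delivers a collection of features.
snapshots = [
    frozenset({"red", "round", "sweet"}),
    frozenset({"red", "round"}),
    frozenset({"green", "round", "sweet"}),
    frozenset({"red", "round", "sweet"}),
]

# Intersect every pair of snapshots and count how often each shared core appears.
trends = Counter()
for a, b in combinations(snapshots, 2):
    core = a & b                     # what the two moments have in common
    if core:
        trends[core] += 1

# Cores that recur often get "solidified"; the rest are allowed to fade.
THRESHOLD = 2                        # arbitrary cutoff for this sketch
solid = {core for core, seen in trends.items() if seen >= THRESHOLD}
print(solid)                         # {"red","round"} and {"round","sweet"} survive
```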
Here's a firmer hypothesis that might explain what I mean more clearly (though I'm sure it is wildly inaccurate):
= - = - =
To begin, the brain's memory is essentially blank, and data begins to flood in. The brain caches what it can, but is quickly filled to capacity with this raw, uncompressed data. Then comes the role of sleep: unconsciousness brought on by the brain meeting its capacity. While unconscious, the brain examines more closely the collections it has cached, making comparisons between what the senses provided, when it was provided, in what combinations, and so on. As comparisons are made and certain collections are elevated in value, those collections are "copied" to a more permanent memory. The brain's temporary cache is then purged, and a new day (a new period of consciousness) begins.
On day 2, as collections of data flood in via the senses, they are compared on the fly to the previously elevated collections, since those are known to be more likely to have relevance. When duplication is encountered, the temporary cache replaces the entire collection with a simple pointer to the location of the identical, elevated collection. New relevant data is marked in the temporary cache as requiring elevation (this elevation must wait until the next period of unconsciousness). In this way, temporary memory can hold more data when it is relying heavily on well-established concepts. The presence of many new concepts, however, would not draw as heavily on previous concepts and would therefore fill temporary memory to capacity more quickly. Either way, meeting capacity brings on the onset of sleep. (I've sketched this whole cycle in code just after the block below.)
= - = - =
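Taken literally, the hypothesis reads like a two-level memory hierarchy, so here's a rough Python sketch of it. To be clear, everything here - the capacity, the pointer representation, the "seen twice means valuable" elevation rule - is my own invention for illustration, not a claim about how brains actually work.

```python
from collections import Counter
from itertools import count

CACHE_CAPACITY = 8                 # arbitrary; the real capacity is unknown
_ids = count(1)

long_term = {}   # "permanent" memory: id -> elevated collection
cache = []       # temporary memory: raw frozensets, or ids pointing long-term

def perceive(features):
    """Waking path: dedupe incoming data against elevated memory, then cache.
    Returns False once the cache hits capacity, i.e. sleep sets in."""
    incoming = frozenset(features)
    for mem_id, known in long_term.items():
        if incoming == known:
            cache.append(mem_id)   # familiar concept: store only a pointer
            break
    else:
        cache.append(incoming)     # new concept: store raw, awaiting elevation
    return len(cache) < CACHE_CAPACITY

def sleep():
    """Unconscious path: elevate recurring raw collections, then purge."""
    raw = [c for c in cache if isinstance(c, frozenset)]
    for collection, seen in Counter(raw).items():
        if seen >= 2:              # arbitrary "proved valuable" rule
            long_term[next(_ids)] = collection
    cache.clear()

# Day 1: everything is new, so the cache fills with raw data.
for scene in [{"red", "round"}, {"red", "round"}, {"loud", "bright"}]:
    perceive(scene)
sleep()                            # {"red", "round"} recurred, so it's elevated

# Day 2: the familiar scene now costs only a pointer in the cache.
perceive({"red", "round"})
print(long_term, cache)
```

Notice that on day 2 the familiar scene stores only a pointer, so the cache fills more slowly when experience is mostly well-established concepts - exactly the behavior the hypothesis predicts.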
Like I said, all of this is likely way off from reality. However, I think it does offer some clues as to why intelligence is so hard to model with logic: namely, because it is a lower-level concept than logic, not a higher one. I am eager to hear what others think on the subject.
