imported_IIB
bump
Originally posted by: IIB
Originally posted by: Matthias99
Originally posted by: IIB
But you're still underestimating the power of the neuron. We can observe it as being that "simple", but neurons are far more advanced than any sort of transistor: they clearly store memories somehow, and they reroute themselves to build those memories. That makes them more powerful than any simple algorithmic software neural-net node I've ever heard of, and that power is built into each neuron.
Basically: please come back when you have some clue what you're talking about. While individual neurons are certainly interesting and complex systems, they're not really all that powerful (or unique) in terms of computational ability. What makes them so capable is that they are built into highly organized (and in fact self-organizing) networks consisting of billions of cells and potentially trillions of interconnections. But actually exploiting the computational capabilities of such networks is EXTREMELY difficult.
While neurons may not be the most exceptional at mathematical computation, they are far more advanced in cognitive capabilities than all conventional computing technologies. Why wouldn't they use the natural method to build a cognitive system? How does it make more sense to pursue conventional approaches that require extreme hardware and software configurations, when we could use less of each and let the neuron processors do the rest?
The convergence of these technologies puts advanced conventional computing methods directly at the neuron tips of these brains, which can and will take neuron embodiments to incomprehensible levels of computation and cognition. Conventional hardware will continue to increase in ingenuity and capability, and it's still a long shot from brain-technology cognition, but it offers a serious edge in adding to neuron computing power. If we could achieve superintelligence by cheating our way there with neuron power, we could then use that neuron power to finish the rest, could we not?
"Apparently, neurons, themselves the tools of learning, smartly synthesize proteins where they are needed. Two recent publications demonstrate that neurons are capable of localized translation in dendrites and in axons."
http://www.jcb.org/cgi/content/full/158/5/831
Are you going to argue with Christof Koch?
"From the perspective of Christof Koch's Biophysics of Computation the situation is quite different. A neuron can no longer be viewed as a single switch; it is more or less analogous to an integrated circuit chip."
http://www.klab.caltech.edu/~koch/bioph...book/biophysics-book-review-scott.html
Neurons aren't powerful or unique? Neurons aren't feasible? Conventional technologies will give us cognitive computing first, you say?
1. If we had a silicon chip with as many "connections", would it become intelligent?
2. It's suggested that glial cells even help electrically "compute", and it isn't known how significant their function is. How do glial cells fit into the mathematical model that tries to replicate "neuron power"?
3. Does anyone think that we will ever have self-repairing silicon chips?
4. Neurons grow in different shapes and sizes, with varying numbers and lengths of dendrites and axons. Why wouldn't they all be exactly the same if they aren't special?
How do rearranging, growing/stretching and expanding cells, with synaptic plasticity, fit into the mathematical model that tries to replicate "neuron power"?
5. It's suggested that neuron dendrites and axons can fire in reverse. How does that fit into the model?
6. A neuron can have up to roughly 100,000 dendritic synapses, with multiply-connected axons. How does that fit into the model? How big can this model get before we decide to just use neurons instead?
7. Conventional computers use base-2 binary, while neurons are analog signals of roughly 25 kHz. How much conventional CPU and RAM overhead will it consume to crunch "base 25K" instead of base 2 (not even counting all of the other dynamics of neurons)?
8. Memories and synaptic weights involve biochemistry. How does that fit into the "simple" model?
http://cbcl.mit.edu/cbcl/news/files/kreiman-hogan-5-05.htm
9. Right now we're (publicly) using rat-brain neuron networks to study neural processing and cognition. This appears to be a step toward surpassing human cognition.
10. We still don't know exactly what goes on inside the neuron, yet you describe them as being "simple"?
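For reference, the "simple" node that software neural nets actually model is just a weighted sum pushed through a nonlinearity; everything the questions above raise (growth, rewiring, biochemistry, glia) lives outside it. A minimal sketch, with all weights and inputs purely illustrative:

```python
import math

def neuron_node(inputs, weights, bias):
    """A textbook artificial 'neuron': weighted sum plus sigmoid squashing.

    This is the whole model -- no growth, no rewiring, no biochemistry,
    which is exactly what the questions above are pointing at.
    """
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid output in (0, 1)

# Example: two inputs with fixed, hand-picked weights
out = neuron_node([1.0, 0.5], [0.8, -0.4], bias=0.1)
```

Whatever a biological neuron is doing, it is clearly more than this one-line arithmetic step.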
Considering 1-10 (I'm sure there are some things I missed), it seems obvious that trying to build a mathematical model of the brain's neuron "computing" processes, and trying to program that into hardware, goes against Occam's Razor.
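The overhead asked about in point 7 can at least be roughed out. Assuming, purely for illustration, ~1e11 neurons, a conservative average synapse count, and one multiply-accumulate per synapse per sample at 25 kHz:

```python
# Back-of-envelope cost of naively simulating analog neurons digitally.
# All figures below are illustrative assumptions, not measurements.
NEURONS = 100_000_000_000      # ~1e11 neurons in a human brain
SYNAPSES_PER_NEURON = 10_000   # conservative average (the text cites up to 100,000)
SAMPLE_RATE_HZ = 25_000        # treating the analog signal as ~25 kHz

ops_per_second = NEURONS * SYNAPSES_PER_NEURON * SAMPLE_RATE_HZ
print(f"{ops_per_second:.1e} multiply-accumulates per second")
```

Under these assumptions the naive simulation needs on the order of 2.5e19 operations per second, which gives a feel for why "just model it in hardware" is not cheap.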
If we can perfect it, then we can grow them at larger scales and get both "hardware" and "software" coupled inside each "processor". They're already devoting significant computers to their efforts; this would be a matter of converting those over to "data acquisition hardware" and building cube-shaped "brains" that have massive bio-silicon interface chips (better than massive MEAs, must I explain?) on all sides; they could even do more extravagant geometric shapes using silicon.
How is it not more feasible to choose neurons over conventional technologies for cognitive computing?
They could go rather far with this using some current technologies alone. 1. They could start with "Doogie" strains of rats or mice, which have increased learning and memory abilities. 2. Then they could humanize them using stem cell treatments. This doesn't actually humanize all of the cells, but rather sprouts human cells into the mix. 3. An important key is whether or not spindle or mirror neurons can be harvested like this. 4. If so, they could then use developing "assembly line" technology to clone those cells at large scales, giving them superior processor media. 5. Advanced nootropic drugs and bioengineering can further enhance cognitive capabilities.
http://www.princeton.edu/pr/news/99/q3/0902-smart.htm
http://en.wikipedia.org/wiki/Spindle_neuron
http://www.washingtonpost.com/wp-dyn/co...rticle/2005/12/12/AR2005121201388.html
http://news.bbc.co.uk/1/hi/sci/tech/1308732.stm
It won't ruin their goal even if #3 above isn't possible, for at least two reasons. First, they'll have advanced rat-human chimaera neuron media that includes more powerful rat cells in the mix. Second, they can simply use "suspended animation" technology to harvest live human brains, which would give them significant amounts of spindle and mirror neurons, to name two. Do you have "Organ Donor" checked on your license? Do ethical laws apply to people who do?
http://www.websters-online-dictionary.org/definition/CHIMAERA
http://smh.com.au/news/health-and-fitne...success/2006/01/20/1137553739997.html#
I spent two years doing research in a cognitive science lab at a major university that was close to the cutting edge in some of these areas. Believe me when I say that while you can do some cool stuff at small scales, dealing with larger populations of neurons gets exponentially more difficult, both computationally and in terms of physical interfacing.
How long ago was that?
Do you think that DeMarse's setup couldn't be better optimized for a larger array? Do you think his hardware/software setup won't be dwarfed in technological comparison? Isn't it already?
The key is for us to learn how to utilize them and communicate with them. We have various eye implants and methods, cochlear implants, hippocampus implants, human neural interfaces, animats and F-22 'brains' - all significant interfacing. It's not like we don't know how to get things done in there, and you shouldn't consider each of those separate.
We know how to do some really pretty basic, high-level things 'in there'; building low-level neurological systems at anything but the most trivial sizes is still not really feasible.
See above.
It's not two corporations trying to compete and withholding secrets; it's DARPA, with full intentions to do any and all possible science, and academic data at their fingertips (plus virtually unlimited resources; do we REALLY know how much money their top projects are getting?).
While DARPA does some cool stuff, ultimately much of what they do is subcontracting out research to other people. They are not some shadowy super-secret organization with unlimited resources. They compete for government funding along with everything else, and science research in general tends to not be the highest priority.
How much of the high end stuff did you work on personally?
Are you saying that DARPA's high-end projects don't go to laboratories more private and secure than the universities?
Which are hooked into the TeraGrid, which foreign powers probably data-mine themselves. Do you really think that each and every biocomputation project they do is unrelated, that they don't data-mine the TeraGrid, and that they don't combine findings? Do you seriously believe them when they say that they don't have an actual lab or headquarters? When I see the top military technology firm in the world say that, I tend to think they won't disclose the locations of important facilities, but hey, they tell US everything, right?
http://www.darpa.mil/body/pdf/BridgingTheGap_Feb_05.pdf
http://www.nsf.gov/news/news_summ.jsp?cntn_id=104248
And that's what isn't secret?
Oh yeah, would you argue that learning couldn't lead to intelligence? Could you be intelligent if you couldn't learn?
What you are defining as 'intelligence' is a superset of what you are defining as 'learning'. 'Intelligent' systems must be able to adapt, but adaptive systems are not necessarily 'intelligent' (at least in terms of things like self-awareness).
"Self-awareness" is a critical component of cognitive computing. Learning is a critical component of both cognition and intelligence. Neurons are the only proof that cognition exists, and they offer learning. You do the math.
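The learning-versus-intelligence distinction in this exchange is easy to make concrete: the toy system below "learns" in the narrowest sense (it adapts its estimate to whatever data it sees), yet nobody would call it intelligent. A sketch, with all names illustrative:

```python
class RunningMean:
    """Adapts to its input stream, yet is in no sense intelligent."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def observe(self, x):
        # Incremental mean update: the entire 'learning' rule.
        self.count += 1
        self.mean += (x - self.mean) / self.count
        return self.mean

m = RunningMean()
for value in [2.0, 4.0, 6.0]:
    m.observe(value)
# m.mean has adapted to 4.0 -- adaptive, but that is all it does
```

This is the superset point: every intelligent system adapts, but a system that merely adapts is not thereby intelligent.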
Originally posted by: Matthias99
First, you suck at teh quoting. I'm fixing this one, but please try not to mangle your posts so badly that they're unreadable.
Originally posted by: IIB
While neurons may not be the most exceptional at mathematical computation, they are far more advanced in cognitive capabilities than all conventional computing technologies. Why wouldn't they use the natural method to build a cognitive system? How does it make more sense to pursue conventional approaches that require extreme hardware and software configurations, when we could use less of each and let the neuron processors do the rest?
Again: please come back when you have a clue. Saying they are 'more advanced in cognitive capabilities' is meaningless drivel.
IMHO, it could theoretically get there, if the system(s) comprising it were designed "better", meaning more closely resembling the way that the human mind stores and processes information.
Originally posted by: IIB
In this case it would: neurons and brains learn. 500 quantum or other high-end processors wouldn't become intelligent; the internet still isn't even intelligent or self-aware. Neurons and brains learn on their own, and it's more than just algorithmic. Conventional processors just crunch numbers per request.
Originally posted by: ed21x
We honestly don't know how memory works yet. Right now, we believe that protein-based receptors are strengthened through conditioning, and that various parts of the brain (e.g. the amygdala and basal ganglia) are responsible for different aspects of human emotion. Little by little, we're learning the mechanics of how neurons work, but we are still nowhere close to figuring out how people think, feel, and remember. I appreciate you linking to a bunch of scientific articles, but they really don't have any direct application to the implications you are making in your original post. Heck, most of quantum physics is still theoretical, and nobody really believes in string theory.
Originally posted by: IIB
"Animats are artificial animals. The term includes physical robots and virtual simulations. Animat research, a subset of Artificial Life studies, has become rather popular since Rodney Brooks' seminal paper "Intelligence without reason". The word was coined by S.W. Wilson in 1991."
http://en.wikipedia.org/wiki/Animat
*And here's where that paragraph came from:
I then applied the living neurons to handle a more interesting real-world problem in real time. In this project an animat was created that combines a living neural network with a virtual body, in an effort to create a system where the living neural network could be studied. The animat was successful at tracking and maintaining distance from a reference object, which can be considered both an approach and an avoidance task. Part of the robustness of the animat in the current project is that it reacts more strongly when necessary to correct for error. Thus, if it has an error on some trial, the error will not be fatal, because in the next trial it will make up for it. The animat in this project is unique because I am replacing algorithmic components of control with neural computation.
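For context, the "algorithmic components of control" that the quoted animat work replaces with neural computation are usually simple feedback loops. A hypothetical distance-keeping sketch (not DeMarse's actual code; the gain and step count are made up):

```python
def distance_keeper(current, target, gain=0.5, steps=20):
    """Proportional controller: correct a fraction of the error each step.

    Larger errors produce stronger corrections, so a bad step on one
    trial is made up on the next -- the same robustness the quoted
    animat shows, here achieved purely algorithmically.
    """
    for _ in range(steps):
        error = target - current
        current += gain * error  # react in proportion to the error
    return current

# Starting 10 units away from a target distance of 2, the loop converges:
final = distance_keeper(current=10.0, target=2.0)
```

The interesting claim in the quote is that a dish of living neurons can stand in for this loop.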
That's very interesting. One hypothesis that I had was that if neurons use quantum sub-spaces to store information, they might also store what are essentially akin to "stored procedures" in databases.
Originally posted by: IIB
"Hawkins focuses mainly on the cortex, the most evolutionarily recent part of the brain. The cortex, in his view, uses memory rather than computation to solve problems. Consider the problem of catching a ball. A robotic arm might be programmed for this task, but achieving it is extremely difficult and involves reams of calculations. The brain, by contrast, draws upon stored memories of how to catch a ball, modifying those memories to suit the particular conditions each time a ball is thrown."
"The cortex also uses memories to make predictions. It is engaged in constant, mostly unconscious prediction about everything we observe. When something happens that varies from prediction (if you detect an unusual motion, say, or an odd texture), it is passed up to a higher level in the cortex's hierarchy of neurons. The new memories are then parlayed into further predictions. Prediction, in Hawkins' telling, is the sine qua non of intelligence. To understand something is to be able to make predictions about it."
http://www.reason.com/0504/cr.ks.are.shtml
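Hawkins' memory-then-prediction idea can be caricatured in a few lines: remember what followed what, predict from memory, and escalate only on surprises. Everything below (class name, symbols) is illustrative, not from Hawkins:

```python
from collections import defaultdict

class SequenceMemory:
    """Toy of the memory-prediction idea: store which observation
    followed which, and report a 'surprise' (the signal that would be
    passed up the hierarchy) when memory fails to predict the input."""

    def __init__(self):
        self.followers = defaultdict(set)  # symbol -> symbols seen after it
        self.previous = None

    def observe(self, symbol):
        surprise = (self.previous is not None
                    and symbol not in self.followers[self.previous])
        if self.previous is not None:
            self.followers[self.previous].add(symbol)  # lay down the memory
        self.previous = symbol
        return surprise

mem = SequenceMemory()
pattern = ["ball", "thrown", "catch"] * 3
surprises = [mem.observe(s) for s in pattern + ["drop"]]
# The repeated pattern stops being surprising; the novel "drop" does not
```

No ball-trajectory computation happens anywhere here, which is the point of the quoted passage: stored transitions, not calculation, drive the prediction.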
Speaking of which, has everyone seen the anime series "Bubblegum Crisis Tokyo 2040"? The concept of the "boomers", sub-human animat-like creatures/creations built from a fusion of human neurons and nano-scale electronics to serve as helpers for humanity... and the ensuing ethical debates surrounding their existence. Is DARPA planning on building the same things? Hmm.
Originally posted by: IIB
Lab Mice Grow Human Brain Cells After Injections
http://www.foxnews.com/story/0,2933,178498,00.html