Originally posted by: Matthias99
Originally posted by: IIB
But you're still underestimating the power of the neuron. We can observe it as being that "simple", but neurons are far more advanced than any sort of transistor, and they clearly store memories somehow and reroute themselves to build those memories. Therefore they are more powerful than any simple algorithmic software neural net node that I've ever heard of, and that power is built into each neuron.
Basically: please come back when you have some clue what you're talking about. While individual neurons are certainly interesting and complex systems, they're not really all that powerful (or unique) in terms of computational ability. What makes them so capable is that they are built into highly organized (and in fact self-organizing) networks consisting of billions of cells and potentially trillions of interconnections. But actually exploiting the computational capabilities of such networks is EXTREMELY difficult.
While neurons may not be the most exceptional at mathematical computation, they are far more advanced in cognitive capabilities than all conventional computing technologies. Why wouldn't they use the natural method to build a cognitive system? How does it make more sense to pursue conventional approaches that require extreme hardware and software configurations, when we can use less of each and allow the neuron processors to do the rest?
The convergence of these technologies puts advanced conventional computing methods directly at the neuron-tips of these brains, which can and will take neuron embodiments to incomprehensible levels of computation and cognition. Conventional hardware will continue to increase in ingenuity and capability, and while it's still a long shot from brain-level cognition, it offers a serious edge in adding to neuron computing power. If we could achieve super intelligence by cheating our way with neuron power, we could then use that neuron power to finish the rest, could we not?
"Hawkins focuses mainly on the cortex, the most evolutionarily recent part of the brain. The cortex, in his view, uses memory rather than computation to solve problems. Consider the problem of catching a ball. A robotic arm might be programmed for this task, but achieving it is extremely difficult and involves reams of calculations. The brain, by contrast, draws upon stored memories of how to catch a ball, modifying those memories to suit the particular conditions each time a ball is thrown."
"The cortex also uses memories to make predictions. It is engaged in constant, mostly unconscious prediction about everything we observe. When something happens that varies from prediction (if you detect an unusual motion, say, or an odd texture) it is passed up to a higher level in the cortex's hierarchy of neurons. The new memories are then parlayed into further predictions. Prediction, in Hawkins' telling, is the sine qua non of intelligence. To understand something is to be able to make predictions about it."
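The memory-prediction idea described above can be illustrated with a toy sketch (my own illustration, not Hawkins' actual model): store what followed each observed context, predict the most common successor, and flag any mismatch as a "surprise" to be passed up the hierarchy.

```python
from collections import defaultdict, Counter

class MemoryPredictor:
    """Toy memory-prediction loop: remembers what followed each context
    and treats a mismatch between prediction and input as a surprise."""
    def __init__(self):
        self.memory = defaultdict(Counter)  # context -> counts of next symbols

    def predict(self, context):
        counts = self.memory[context]
        return counts.most_common(1)[0][0] if counts else None

    def observe(self, context, actual):
        predicted = self.predict(context)
        surprise = predicted is not None and predicted != actual
        self.memory[context][actual] += 1  # update memory regardless
        return surprise

m = MemoryPredictor()
sequence = "abcabcabcabxabc"
surprises = []
for prev, cur in zip(sequence, sequence[1:]):
    if m.observe(prev, cur):
        surprises.append((prev, cur))
print(surprises)  # the 'x' after 'b' violates the learned prediction
```

After a few repetitions of "abc" the model expects 'c' after 'b', so only the anomalous 'x' is flagged, loosely mirroring the "unusual motion or odd texture" case in the quote.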
http://www.reason.com/0504/cr.ks.are.shtml
"Apparently, neurons, themselves the tools of learning, smartly synthesize proteins where they are needed. Two recent publications demonstrate that neurons are capable of localized translation in dendrites and in axons."
http://www.jcb.org/cgi/content/full/158/5/831
Are you going to argue with Christof Koch?
"From the perspective of Christof Koch's Biophysics of Computation the situation is quite different. A neuron can no longer be viewed as a single switch; it is more or less analogous to an integrated circuit chip."
http://www.klab.caltech.edu/~koch/bioph...book/biophysics-book-review-scott.html
Neurons aren't powerful or unique? Neurons aren't feasible? Conventional technologies will give us cognitive computing first, you say?
1. If we had a silicon chip that had as many "connections", would it become intelligent? No. In neurons the "software" and the "ROM memory" are built in; even the "RAM" seems to be. There are certain areas or "parts" that play important roles in consciousness that wouldn't exist in a puddle or blob of neurons, but the fact remains that the power is in those neurons. Neurons process and they store memories. How do memory capabilities fit into the mathematical model of trying to replicate "neuron power"?
2. It's suggested that glial cells even help electrically "compute", and it isn't known how significant their function is. How do glial cells fit into the mathematical model of trying to replicate "neuron power"?
3. Does anyone think that we will ever have self-repairing silicon chips? Neuron networks self-repair and self-form, which would take serious overhead off the software and hardware. DARPA does have 3D chips in its thrust, but silicon chips are still flat for a reason.
4. Neurons grow in different shapes and sizes, with varying amounts and lengths of dendrites and axons. Why wouldn't they all be exactly the same if they aren't special? How do rearranging, growing/stretching and expanding cells, with synaptic plasticity, fit into the mathematical model of trying to replicate "neuron power"?
5. It's suggested that neuron dendrites and axons reverse fire. How does that fit into the model?
6. A neuron can have up to roughly 100,000 dendritic synapses, with multi-connected axons. How does that fit into the model? How big can this model get before we decide to just use neurons instead?
7. Conventional computers use base-2 binary, while neurons are analog, sampled at about 25 kHz in these experiments. How much conventional CPU and RAM overhead will it consume to crunch "base 25K" over base 2 (not even counting all of the other dynamics of neurons)?
8. Memories and synaptic weights involve biochemistry. How does that fit into the "simple" model?
http://cbcl.mit.edu/cbcl/news/files/kreiman-hogan-5-05.htm
9. Right now we're (publicly) using rat brain neuron networks to study neural processing and cognition. This appears to be a step toward surpassing human cognition. The NSF has awarded DeMarse $500,000 to take his F22 brain findings and research and attempt to build a mathematical model of neuron networks. While those "loose" findings would be important for progress, rat brain neuron nets are still nothing like human ones: not in neuron types/capacity, or brain complexity.
10. We still don't know exactly what goes on inside the neuron, yet you describe them as being "simple"?
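For contrast, here is what a standard artificial neural net node, the "simple algorithmic" model that points 1-10 above argue against, actually computes: a weighted sum passed through a squashing function, with all of the dendritic geometry, glial input, and biochemistry abstracted into a handful of numbers. (A minimal sketch; the weights and inputs are made-up illustrative values.)

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A standard ANN node: weighted sum of inputs plus a bias,
    passed through a sigmoid. Everything listed in points 1-10
    (glia, dendritic geometry, plasticity, biochemistry) is
    collapsed into the scalar weights."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# Illustrative values only
out = artificial_neuron([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], bias=0.1)
print(out)
```

Whether one reads this as evidence that the model is hopelessly impoverished, or that the biological detail is implementation rather than computation, is exactly the disagreement in this thread.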
Considering 1-10 (I'm sure there are some things I missed), it seems obvious that trying to build a mathematical model of the brain's neuron "computing" processes, and trying to program that into hardware, would go against Occam's Razor. All I see is an "s curve" to reaching the capability of proper neuron firing and "programming" to ultimately reach super intelligence. Our brains with their "simple neurons" already spank any computer out there in intelligence and cognition (at least from an unclassified standpoint), and a great deal of the brain is used for body motor and life support systems, things these computers won't need, while they would have all the hardware extras that our brains don't.
If we can perfect it, then we can grow them at larger scales and get both "hardware" and "software" coupled inside of each "processor". They're already devoting significant computers to their efforts; this would be a matter of converting those over to "data acquisition hardware" and building cube-shaped "brains" that have massive bio-silicon interface chips (better than massive MEAs, must I explain?) on all sides; they could even do more extravagant geometric shapes using silicon. How isn't it more feasible to choose neurons over conventional technologies for cognitive computing?
They could go rather far with this using some current technologies alone. 1. They could start with "Doogie" strains of rats or mice, which have increased learning and memory abilities. 2. Then, they could humanize them using stem cell treatments. This doesn't actually humanize all of the cells, but rather sprouts human cells into the mix. 3. An important key is whether or not spindle or mirror neurons can be harvested like this. 4. If so, they could then use developing "assembly line" technology to clone those cells at large scales, giving them superior processor media. 5. Advanced nootropic drugs and bioengineering can further enhance cognitive capabilities.
http://www.princeton.edu/pr/news/99/q3/0902-smart.htm
http://en.wikipedia.org/wiki/Spindle_neuron
http://www.washingtonpost.com/wp-dyn/co...rticle/2005/12/12/AR2005121201388.html
http://news.bbc.co.uk/1/hi/sci/tech/1308732.stm
It won't ruin their goal even if #3 above isn't possible, for at least two reasons. First, they'll have advanced rat-human chimæra neuron media that includes more powerful rat cells in the mix. Second, they can simply use "suspended animation" technology to harvest live human brains, which would give them significant amounts of spindle and mirror neurons, to name two. Do you have "Organ Donor" checked on your license? Do ethical laws apply to people that do?
http://www.websters-online-dictionary.org/definition/CHIMAERA
http://smh.com.au/news/health-and-fitne...success/2006/01/20/1137553739997.html#
[quote]I spent two years doing research in a cognitive science lab at a major university that was close to the cutting edge in some of these areas. Believe me when I say that while you can do some cool stuff at small scales, dealing with larger populations of neurons gets exponentially more difficult, both computationally and in terms of physical interfacing.[/quote]
How long ago was that?
Do you think that DeMarse's setup couldn't be better optimized for a larger array? Do you think that his hardware/software setup won't be dwarfed in technological comparison? Isn't it already?
For DeMarse's F22 brains, "Measurements of neural activity were conducted using Multichannel System's data acquisition hardware and custom software on an Apple XServe with 3.5 Terabytes of Xraid disk storage. Raw electrical activity was recorded for each of the 60 channels on the MEA, sampled and digitized at 25KHz per channel. This data was then streamed via TCP/IP to an Apple G5 client computer over a local gigabit network. The client then performed further data processing, detecting action potentials (APs) (deviations in voltage above or below 5.0 x standard deviation of estimated noise per channel) and mapping telemetry from the flight simulator to schedule stimulations, while sending control commands to the aircraft and logging the data. An F-22 Raptor was simulated with the commercially available XPlane aircraft simulation software. The aircraft simulator was run on a separate computer (Dell PC) communicating with the client via UDP (transmitting flight telemetry: heading, speed, altitude, pitch and roll angle) every 200 ms. The simulator also received commands to adjust the angle of the aircraft's aileron and elevator control surfaces, modifying the plane's in-flight roll and pitch angles, respectively." (DeMarse, "Adaptive Flight Control With Living Neuronal Networks on Microelectrode Arrays")
http://www.apple.com/xserve/
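The spike-detection step DeMarse describes (flagging per-channel voltage deviations beyond 5.0 x the estimated noise standard deviation) can be sketched roughly as follows; the channel data here is synthetic noise with one injected spike standing in for real MEA recordings.

```python
import random
import statistics

def detect_action_potentials(samples, threshold_factor=5.0):
    """Flag sample indices whose voltage deviates above or below
    threshold_factor * (estimated noise std dev), loosely following
    the per-channel criterion in DeMarse's 60-channel MEA setup."""
    mean = statistics.fmean(samples)
    noise_std = statistics.pstdev(samples)  # crude noise estimate
    threshold = threshold_factor * noise_std
    return [i for i, v in enumerate(samples) if abs(v - mean) > threshold]

# Synthetic channel: a gaussian noise floor with one large spike injected
random.seed(0)
channel = [random.gauss(0.0, 1.0) for _ in range(1000)]
channel[500] = 50.0  # simulated action potential
spikes = detect_action_potentials(channel)
print(spikes)  # only the injected spike crosses the 5-sigma threshold
```

A real pipeline would estimate the noise floor more carefully (e.g. from spike-free baseline segments) and run this at 25 kHz across all 60 channels, but the thresholding logic is the same idea.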
The key is for us to learn how to utilize them and communicate with them. We have various eye implants and methods, cochlear implants, hippocampus implants, human neural interfaces, animats and F22 'brains' - all significant interfacing. It's not like we don't know how to get things done in there, and you shouldn't consider each of those separate.
[quote]We know how to do some really pretty basic and high-level things 'in there'; building low-level neurological systems at any but the most trivial sizes is still not really feasible.[/quote]
See above.
It's not 2 corporations trying to compete and withholding secrets; it's DARPA, with full intentions to do any and all possible science, and academic data at their fingertips (plus virtually unlimited resources - do we REALLY know how much money their top projects are getting?).
[quote]While DARPA does some cool stuff, ultimately much of what they do is subcontracting out research to other people. They are not some shadowy super-secret organization with unlimited resources. They compete for government funding along with everything else, and science research in general tends to not be the highest priority.[/quote]
How much of the high-end stuff did you work on personally? Are you saying that DARPA's high-end projects don't go to more private and secure laboratories than the universities? Which are hooked into the TeraGrid, which foreign powers probably data-mine themselves. Do you consider that each and every biocomputation project they do is unrelated, that they don't data-mine the TeraGrid, and that they don't combine findings? Do you seriously believe them when they say that they don't have an actual lab or headquarters? When I see the top military technology firm in the world say that, I tend to think that they won't disclose the locations of important facilities - but hey, they tell US everything, right?
http://www.darpa.mil/body/pdf/BridgingTheGap_Feb_05.pdf
http://www.nsf.gov/news/news_summ.jsp?cntn_id=104248
Why would it be necessary to complete the Human Cognome Project to find our own way to super intelligence? It's a matter of learning how to communicate with desired modules and completing modules. DARPA and NASA have block diagrams of how they intend to do their cognitive computer and Intelligent Archives; it's a matter of building those modules.
[quote]Which is a RIDICULOUSLY complicated and overly ambitious goal. They have a 'block diagram'? Great. That's a long, long ways from the sort of systems you are talking about.[/quote]
So has been all conventional AI work.
A block diagram is a major step. Without a goal, how far can one get with anything? "Cognitive AI" and "Intelligent Archives" need to start somewhere: research into specifying the necessary "blocks" for the desired system(s), completion of the block overview, required specifications of each block, R&D, and then construction and utilization. In this case it's a goal for super-intelligence; do you agree that if they achieved it, they would be able to rapidly expand it using it itself?
NASA Intelligent Archives:
http://daac.gsfc.nasa.gov/intelligent_archive/IA_report_8-27-02_baseline.pdf
Biologically-Inspired Cognitive Architectures
http://www.darpa.mil/ipto/programs/bica/vision.htm
"4.2 Thrust B - Neurobiologically Inspired Architectures
In Thrust B, DARPA seeks a dramatic improvement in our understanding of the brain's functions and processes. Initially, we seek a major leap in the learning performance of traditional AI systems by augmenting and informing their designs with neuroscience principles. Such machines might demonstrate functions such as imagination, social intelligence and/or the anticipation of behavior of other intelligent agents. 'Learning' (in the sense of interest in this BAA) involves the intense interaction of three processes: attending, remembering, and reasoning. Because of the highly integrated nature of the brain, learning cannot be viewed separately from other brain activities. In the follow-on phase, we expect to implement a new class of hybrid AI systems, using a mixture of psychology-based and neuroscience-based architectures. Our ultimate goal is to approach brain-like performance in learning, use of experience, sensorimotor integration and other complex processes. At the same time we expect to develop a global theory of cognition and one or more neurobiologically-inspired, integrated cognitive architectures."
http://www.darpa.mil/BAA/pdfs/baa05-18pip.pdf
And that's what isn't secret?
Oh yeah, would you argue that learning couldn't lead to intelligence? Could you be intelligent if you couldn't learn?
[quote]What you are defining as 'intelligence' is a superset of what you are defining as 'learning'. 'Intelligent' systems must be able to adapt, but adaptive systems are not necessarily 'intelligent' (at least in terms of things like self-awareness).[/quote]
"Self-awareness" is a critical component of cognitive computing. Learning is a critical component of both cognition and intelligence. Neurons are the only proof that cognition exists, and they offer learning. You do the math.