Originally posted by: Matthias99
First, you suck at the quoting. I'm fixing this one, but please try not to mangle your posts so badly that they're unreadable.
Originally posted by: IIB
Originally posted by: Matthias99
Originally posted by: IIB
But you're still underestimating the power of the neuron. We can observe it as being that "simple", but neurons are far more advanced than any sort of transistor, and they clearly store memories somehow and reroute themselves to build those memories. They are therefore more powerful than any simple algorithmic software neural-net node that I've ever heard of, and that power is built into each neuron.
Basically: please come back when you have some clue what you're talking about. While individual neurons are certainly interesting and complex systems, they're not really all that powerful (or unique) in terms of computational ability. What makes them so capable is that they are built into highly organized (and in fact self-organizing) networks consisting of billions of cells and potentially trillions of interconnections. But actually exploiting the computational capabilities of such networks is EXTREMELY difficult.
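For contrast, here is a minimal sketch of the kind of "simple algorithmic neural net node" being argued over: a single unit that sums weighted inputs and fires past a threshold. The function name, weights, and threshold are purely illustrative, not any lab's actual model.

```python
# A perceptron-style node: the "simple" abstraction that individual
# neurons are being compared against in this thread.

def node(inputs, weights, threshold=1.0):
    """Weighted sum of inputs; 'fires' (returns 1) at or above threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Three inputs, one with an inhibitory (negative) weight:
print(node([1, 1, 0], [0.6, 0.7, -0.5]))  # fires: 1.3 >= 1.0
print(node([1, 0, 1], [0.6, 0.7, -0.5]))  # silent: 0.1 < 1.0
```

The point being made above is that the interesting computation comes from wiring huge numbers of such units together, not from any one unit.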
While neurons may not be the most exceptional at mathematical computation, they are far more advanced in cognitive capabilities than all conventional computing technologies. Why wouldn't they use the natural method to build a cognitive system? How does it make more sense to pursue conventional approaches that require extreme hardware and software configurations, when we can use less of each and allow the neuron processors to do the rest?
Again: please come back when you have a clue. Saying they are 'more advanced in cognitive capabilities' is meaningless drivel.
No it's not. I hate to break it to you, but cognitive 'computing' is more than just number crunching.
The convergence of these technologies puts the advanced conventional computing methods directly at the neuron-tips of these brains, which can and will take neuron embodiments to incomprehensible levels of computation and cognition. Conventional hardware will continue to increase in ingenuity and capability, and it's still a long shot from brain-technology cognition, but it offers a serious edge in adding to neuron computing power. If we could achieve superintelligence by cheating our way with neuron power, we could then use that neuron power to finish the rest, could we not?
Again, you're just blathering on randomly.[/quote]
You're still dismissing neurons as if they're logic gates.
"Apparently, neurons, themselves the tools of learning, smartly synthesize proteins where they are needed. Two recent publications demonstrate that neurons are capable of localized translation in dendrites and in axons."
http://www.jcb.org/cgi/content/full/158/5/831
Yes, they're adaptable.[/quote]
Are you going to argue with Christof Koch?
"From the perspective of Christof Koch's Biophysics of Computation the situation is quite different. A neuron can no longer be viewed as a single switch; it is more or less analogous to an integrated circuit chip."
http://www.klab.caltech.edu/~koch/bioph...book/biophysics-book-review-scott.html
Haven't read the book. Certainly the theories being discussed there are not universally accepted.[/quote]
And neither is your THEORY of "they're not really all that powerful (or unique) in terms of computational ability". What books have you written? I'm citing experts here, you're just dismissing things with your opinion.
Neurons aren't powerful or unique? Neurons aren't feasible? Conventional technologies will give us cognitive computing first, you say?
I didn't say any of those things.[/quote]
"they're not really all that powerful (or unique) in terms of computational ability" I'm pretty sure you said "isn't feasible" in here somewhere. OK, so if my theory isn't right, and conventional methods aren't the key, then what is? Note that in a couple of places I referred to quantum computing as conventional because it's still number crunching. Those photons don't exactly store actual memories.
1. If we had a silicon chip that had as many 'connections', would it become intelligent?
No, just like an individual neuron is not 'intelligent' either, no matter how many interconnections it has.[/quote]
Explain this finding by Christof Koch:
http://cbcl.mit.edu/cbcl/news/files/kreiman-hogan-5-05.htm
2. It's suggested that glial cells even help electrically 'compute', and it isn't known how significant their function is. How do glial cells fit into the mathematical model of trying to replicate 'neuron power'?
You'd have to understand them more completely first to model them.[/quote]
Of course, along with all of these other things I'm mentioning and the ones that I haven't yet.
3. Does anyone think that we will ever have self-repairing silicon chips?
Where the hell did this one come from?[/quote]
This is an important dynamic when determining the feasibility of neuron processors: they self-repair and can potentially reach lifespans of over 100 years. How many 100-year-old complex silicon chips or hard drives are you aware of?
4. Neurons grow in different shapes & sizes, with varying amounts & lengths of dendrites and axons. Why wouldn't they all be exactly the same if they aren't special?
Uh... what?[/quote]
This is an important dynamic in judging whether we're really going to build a mathematical model of neurons to achieve cognitive computing.
How do rearranging, growing/stretching and expanding cells, with synaptic plasticity, fit into the mathematical model of trying to replicate 'neuron power'?
That would be why they also model the interconnections between neurons.[/quote]
Even if they completely modeled rat cortical neurons, that's a long shot from modeling the far more powerful human neuron types. And considering all of the dynamics of neurons (that we STILL don't fully understand), that's still a long shot. Modeling the way they connect is only the tip of the iceberg; it's a long way off from FULLY understanding what's really going on inside them and how to easily (with low resource overhead) incorporate their memory abilities into the math model. How is it more feasible to wait possibly several more decades to finally gain a complete understanding of neurons to get a proper math model, when we can possibly skip that and build brains ourselves?
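For reference, a rough illustration of what "modeling neurons plus their interconnections" means computationally: a toy leaky integrate-and-fire network. This is a standard textbook abstraction, with made-up weights and constants, not any of the rat-cortex models being debated here.

```python
# Two-neuron leaky integrate-and-fire sketch: each neuron leaks toward
# rest, integrates weighted spikes from the previous step, and fires
# (then resets) when it crosses a threshold.

def step(potentials, weights, spikes, leak=0.9, threshold=1.0):
    """Advance every neuron one time step; return new potentials and spikes."""
    new_potentials = []
    new_spikes = []
    for i, v in enumerate(potentials):
        # Leak, then add weighted input from last step's spikes.
        v = v * leak + sum(weights[j][i] * s for j, s in enumerate(spikes))
        if v >= threshold:  # fire and reset
            new_spikes.append(1)
            v = 0.0
        else:
            new_spikes.append(0)
        new_potentials.append(v)
    return new_potentials, new_spikes

# Neuron 0 excites neuron 1; neuron 0 starts above threshold.
weights = [[0.0, 1.2],
           [0.0, 0.0]]
potentials, spikes = [1.5, 0.0], [0, 0]
for _ in range(3):
    potentials, spikes = step(potentials, weights, spikes)
    print(spikes)  # prints [1, 0], then [0, 1], then [0, 0]
```

Even this toy shows why scale hurts: every extra neuron multiplies the interconnection work, which is the overhead argument being made in this thread.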
5. It's suggested that neuron dendrites and axons reverse-fire. How does that fit into the model?
It's taken explicitly into account in some of them.[/quote]
That's fine, but it's seriously another example of the type of hardware overhead this software would demand.
6. Neurons can have up to roughly 100,000 dendritic synapses, with multi-connected axons. How does that fit into the model? How big can this model get until we decide to just use neurons instead?
How many computers can you put in a network?[/quote]
Your response is out of context. The point is we could either A: potentially hook several massive brain arrays up to single quantum rigs, or B: hook up massive arrays of quantum rigs, including massive storage arrays and gross amounts of RAM.
It's a matter of perfecting the brains, which could potentially live for over 100 years once grown and taught. Fewer conventional hardware components would have to be used and replaced every so often. Instead of replacing and upgrading a countless number of CPUs, RAM and permanent storage, they would only have to keep the brains stable and replace far fewer conventional components.
7. Conventional computers use base-2 binary, and neurons are analog at about 25 kHz. How much conventional CPU and RAM overhead will it consume to crunch base 25K over base 2 (not even counting all of the other dynamics of neurons)?
This question basically implies that you don't really know much if anything about signal processing or computational theory.[/quote]
It could have been worded better. The point is that neurons are analog, and conventional electronics are binary. It seems like you're talking about running mathematical models (THAT COMPLETELY REPLICATE NEURONS IN ALL OF THEIR COMPLEXITY) on conventional computers. If you were smart you could have brought in the emerging quantum computers' capabilities, but I would still have my argument that complete math models that fully replicate neuron power are a LONG way off.
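For reference, the signal-processing point at issue can be sketched as follows: a band-limited analog signal is fully captured by sampling above twice its highest frequency (the Nyquist rate), and each sample is then just an ordinary binary number. There is no "base 25K" arithmetic involved. The frequencies and bit depth below are illustrative.

```python
# Sample a 25 kHz "analog" sine well above its Nyquist rate, then
# quantize each sample to 8 bits the way an ADC would.

import math

f_signal = 25_000            # Hz, highest frequency present
f_sample = 4 * f_signal      # comfortably above the 50 kHz Nyquist rate
n_samples = 8

samples = [math.sin(2 * math.pi * f_signal * n / f_sample)
           for n in range(n_samples)]

# Map [-1, 1] onto 8-bit unsigned integers (0..255):
quantized = [round((s + 1) / 2 * 255) for s in samples]
print(quantized)
```

Each quantized sample is stored and crunched in plain base-2, so the digital "overhead" of representing an analog waveform is set by sample rate and bit depth, not by the waveform's frequency acting as a number base.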
8. Memories and synaptic weights involve biochemistry. How does that fit into the 'simple' model?
http://cbcl.mit.edu/cbcl/news/files/kreiman-hogan-5-05.htm
Again, you're assuming that the neural medium has something 'special' about it that is impossible to capture or model in a useful way. This kind of thinking is really not shared by most of the scientific community.[/quote]
Again, you're assuming that neurons are just simple logic gates. I never said that biologically inspired neural nets can't be powerful, but I think your assumption that neurons are essentially logic gates is completely unfounded.
9. Right now we're (publicly) using rat-brain neuron networks to study neural processing and cognition. This appears to be a step in surpassing human cognition.
...that wasn't really a question. And yes, it is a "step", a VERY early one.[/quote]
lol. Yeah, I forgot to finish that one. It should have said: "Does this really appear to be a critical step in surpassing human cognition?" But you said it: "a VERY early one". Exactly my point. You pretend as if conventional AI is the answer, blasting my theory as being too far off, but SO IS your conventional thinking. Which could come first, or more importantly get us there first, is the key. Since all you've personally seen is simple neuron nets, you assume that there's no way neuron brains are possible.
10. We still don't know exactly what goes on inside the neuron, yet you describe them as being 'simple'?
And you're considering them a magical black box that can never be understood. "simple" was perhaps an understatement of the problem, but the complexity here is limited in scope.[/quote]
But they're still not understood, and everyone assumes that we'll completely model them before we can just put them to use?
Considering 1-10 (I'm sure there are some things I missed), it seems obvious that trying to build a mathematical model of the brain's neuron 'computing' processes, and trying to program that into hardware, would go against Occam's razor.
That's not really what Occam's Razor is.

And no, this is not 'obvious' to a lot of people who actually do this research, and they generally know more than you.[/quote]
Well, I thought Occam's razor is:
"Given two equally predictive theories, choose the simpler", and "the simplest answer is usually the correct answer."
The point is that you haven't actually demonstrated that they're not; you've just been dismissing the potential of live neurons. We have live neuron power at our disposal, and we don't even fully know how powerful they are, but we can tell they're more than simple math gates. We're trying to study them to achieve cognitive computers, yet we're only studying simple rat cortex neurons, and you think that can get us there first?
If we can perfect it then we can grow them at larger scales, and get both 'hardware' and 'software' coupled inside of each 'processor'. They're already devoting significant computers to their efforts; this would be a matter of converting those over to 'data acquisition hardware' and building cube-shaped 'brains' that have massive bio-silicon interface chips (better than massive MEAs, must I explain?) on all sides; they could even do more extravagant geometric shapes using silicon.
Again, you're theorizing that any of this will exist anytime soon, and that it will all work out nicely the way you think it should.[/quote]
How can you say for sure? Have you been analyzing all of these finds and all of the scientific data on the web and in the TeraGrid? I'm sure they have been. Those contracts you talk about aren't just for the hell of it; they're for the obvious:
http://www.darpa.mil/ipto/programs/bica/index.htm
How isn't it more feasible to choose neurons over conventional technologies, for cognitive computing?
It might be, it might not be. Research really isn't far enough along yet to say. Using living cells has some inherent disadvantages -- they're fragile, they're harder to build exactly the way you want them, and trying to interface between analog and digital systems of any substantial size is a hard problem in itself.
Exactly: "It might be, it might not be." So don't go dismissing this just yet. That answer also applies to how far along they could be, right now. We can't know for sure, so therefore it'd be foolish and ignorant to proclaim that there's no way it's going to be used. It could be next year, it could be by 2010. Honestly, it could already be old news.
They could go rather far with this, using some current technologies alone. 1. They could start with "Doogie" strains of rats or mice, which have increased learning and memory abilities. 2. Then, they could humanize them using stem cell treatments. This doesn't actually humanize all of the cells, but rather sprouts human cells in the mix. 3. An important key is whether or not spindle or mirror neurons can be harvested like this. 4. If so, they could then use developing 'assembly line' technology to clone those cells at large scales, giving them superior processor media. 5. Advanced nootropic drugs and bioengineering can further enhance cognitive capabilities.
http://www.princeton.edu/pr/news/99/q3/0902-smart.htm
http://en.wikipedia.org/wiki/Spindle_neuron
http://www.washingtonpost.com/wp-dyn/co...rticle/2005/12/12/AR2005121201388.html
http://news.bbc.co.uk/1/hi/sci/tech/1308732.stm
...Uh, yeah. You get right on that.[/quote]
This isn't my goal; I'm just trying to determine what they can, will and are doing.
It won't ruin their goal even if #3 above isn't possible, for at least two reasons. First, they'll have advanced rat-human chimæra neuron media, which includes more powerful rat cells in the mix. Second, they can simply use 'suspended animation' technology to harvest live human brains, which would give them significant amounts of spindle and mirror neurons, to name two. Do you have "Organ Donor" checked on your license? Do ethical laws apply to people that do?
http://www.websters-online-dictionary.org/definition/CHIMAERA
http://smh.com.au/news/health-and-fitne...success/2006/01/20/1137553739997.html#
Earth to poster, Earth to poster, come in please.
I'm not the mad scientist here; it's really you people. I didn't think this stuff up. It was people like those involved in this list of research that I'm citing, and others like you who are involved in these sciences. And don't try telling me that it will all always be used for good things like treating Parkinson's or Alzheimer's.
I spent two years doing research in a cognitive science lab at a major university that was close to the cutting edge in some of these areas. Believe me when I say that while you can do some cool stuff at small scales, dealing with larger populations of neurons gets exponentially more difficult, both computationally and in terms of physical interfacing.
How long ago was that?
2001-2003. I've followed some of the research from that lab and others since then.[/quote]
Again, you're basing your arguments on the simple low-level studies that they're doing. It isn't just 'your' lab; it's virtually all US labs doing their share of DARPA's research. They sit at the top fitting all of the pieces together, and for what? BICA.
Do you think that DeMarse's setup couldn't be better optimized for a larger array? Do you think that his hardware/software setup won't be dwarfed in technological comparison? Isn't it already?
Are they going to keep working on this? Yes. Is it at the point where we can throw out our PCs and build everything using neurons? No. Will we someday get to that point? Maybe, but I think not for general-purpose computing.
I never said that "we" would be throwing out our old computers. The government will not allow us to have cognitive computers, probably not until at least a decade after they've done it themselves and have far superior ones. By then they plan on having us all neurally jacked into the system; we won't need them anyway.
The key is for us to learn how to utilize them and communicate with them. We have various eye implants and methods, cochlear implants, hippocampus implants, human neural interfaces, animats and F-22 'brains': all significant interfacing. It's not like we don't know how to get things done in there, and you shouldn't consider each of those separate.
We know how to do some really pretty basic and high-level things 'in there'; building low-level neurological systems at any but the most trivial sizes is still not really feasible.
See above.
What am I not "seeing"? Everything you're talking about is still an idea or on the drawing board somewhere.[/quote]
And all of your arguments are based on assuming that neurons are essentially simple, and that there's no way the government is doing this, because you've seen lab cultures of up to about 25K neurons (it's beyond "dozens", as you misleadingly described it earlier). It sure is on the drawing board, and has been for some time.
It's not two corporations trying to compete and withholding secrets; it's DARPA, with full intentions to do any and all possible science, and with academic data at their fingertips (plus virtually unlimited resources; do we REALLY know how much money their top projects are getting?).
While DARPA does some cool stuff, ultimately much of what they do is subcontracting out research to other people. They are not some shadowy super-secret organization with unlimited resources. They compete for government funding along with everything else, and science research in general tends to not be the highest priority.
How much of the high end stuff did you work on personally?
Define 'high-end stuff'. The lab I worked in was doing research in intracellular recording and modelling, partially under a DARPA contract. Since I left they've been doing some human trials (they run experiments on patients already undergoing neurosurgery, with their permission).
Are you saying that DARPA's high-end projects don't go to more private and secure laboratories than the universities?
Which are hooked into the TeraGrid, which foreign powers probably data-mine themselves. Do you consider each and every biocomputation project they do unrelated? Do you think they don't data-mine the TeraGrid, or combine findings? Do you seriously believe them when they say that they don't have an actual lab or headquarters? When I see the top military technology firm in the world say that, I tend to think that they won't disclose the locations of important facilities. But hey, they tell US everything, right?
http://www.darpa.mil/body/pdf/BridgingTheGap_Feb_05.pdf
http://www.nsf.gov/news/news_summ.jsp?cntn_id=104248
You're basing all of this on the thinking that DARPA is some shadowy super-organization that has unlimited resources. They are not. They are, for the most part, an organization that guides, funds, and organizes research.[/quote]
Not entirely. I forgot to apply the question mark after the second question. But yeah, it's important to note who we're talking about here: the top (known) military technology agency in the US, which has the top technologies in the world.
And that's what isn't secret?
Again with the conspiracy theories.[/quote]
Again with assuming that conspiracies don't exist and believing that the top military agency doesn't have secrets. :lol:
Oh yeah, would you argue that learning couldn't lead to intelligence? Could you be intelligent if you couldn't learn?
What you are defining as 'intelligence' is a superset of what you are defining as 'learning'. 'Intelligent' systems must be able to adapt, but adaptive systems are not necessarily 'intelligent' (at least in terms of things like self-awareness).
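A toy example of the distinction being drawn (purely illustrative): the filter below "adapts" to its input stream, yet nobody would call it intelligent, let alone self-aware.

```python
# An exponential moving average: an adaptive system with no cognition.
# It continually adjusts its estimate toward whatever it observes.

def make_adaptive_filter(alpha=0.3):
    """Return an update function that tracks its input stream."""
    state = {"estimate": 0.0}
    def update(x):
        # Move the estimate a fraction alpha toward the new observation.
        state["estimate"] += alpha * (x - state["estimate"])
        return state["estimate"]
    return update

track = make_adaptive_filter()
for reading in [10, 10, 10, 50, 50, 50]:
    est = track(reading)
print(round(est, 2))  # -> 35.1 (still catching up to the new level)
```

Adaptation alone is cheap; the argument above is that intelligence requires it but is not reducible to it.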
"Self-awareness" is a critical component of cognitive computing. Learning is a critical component of both cognition and intelligence. Neurons are the only proof that cognition exists, and they offer learning. You do the math.
I tried. Yours doesn't add up.[/quote]
How doesn't it? There's enough material on the table. Explain how it's impossible and absolutely infeasible; don't just nitpick each point in this ever-growing debate. Explain how you know for a fact that neurons are essentially simple, how it's not possible to do what I'm saying, and how DARPA has absolutely no secrets. Feel free to cite some actual science instead of just dismissing all of this in favor of the mathematical model that the conventionalists still think is the answer.