
Originally posted by: Matthias99

"You're postulating evolutionary software development using biological processors rather than silicone based?"

*No, I'm saying they will bypass evolutionary software development by utilizing biological learning components.

"Um... it's a little more complicated than that."

*I never said it wasn't complicated.



"Or are you making the deeply stupid assumption that processing power = intelegence."

*In this case it would; neurons and brains learn. 500 quantum or other high-end processors wouldn't become intelligent; the internet still isn't even intelligent or self-aware. Neurons and brains learn on their own, and it's more than just algorithmic. Conventional processors just crunch numbers per request.

"'Neurons' are just specialized cells that behave in certain more or less deterministic ways. Saying a neuron 'learns on its own' is kind of like saying that a transistor 'computes stuff'. Individual neurons can be viewed, in some sense, as just 'number-crunching' elements. Computationally, there is little difference from a computer-modelled neuron element and a biological one."

Neurally Controlled Simulated Robot: Applying Cultured Neurons to Handle an Approach / Avoidance Task in Real Time and a Framework for Studying Learning in Vitro
http://web.mit.edu/shkolnik/www/projects/neurobot.pdf :
"One of the main benefits of living neural networks as opposed to digital computing is a built in ability to learn based on experience. When learning occurs, synaptic weights adjust (often referred to as synaptic plasticity) and thenceforth the system's behavior changes even if given the same input conditions as before the learning had occurred. Finite State Automata theory breaks down when trying to emulate neural learning, because the neurons can 'rewire' themselves automatically, thus changing the possible states the system may enter on a given input. Any change in synaptic weights may therefore be considered a doubling of the states in the automaton. If one tries to emulate learning to infinite precision (since the synaptic weights are analog), one may realize that the living neural network may actually have an infinite number of states. Though neural networks are chaotic systems, infinite precision is probably not required to model an analog synapse. Even so, the number of finite states that a network of neurons may enter may be unreasonably many to consider with standard automata theory. Given that learning may increase the processing power of living neuronal networks..."
...
"The animat in this project is unique because I am replacing algorithmic components of control with neural computation. In the animat, sensory information is encoded into stimulus information, which induces a given reaction in the neural network. The behavior of the animat is determined solely on this neural response. There is no algorithmic component converting sensory information into animat movement. Thus, the animat demonstrates some of the computational power of cultured cortical neurons."
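The excerpt's central claim, that after learning the same input yields different behavior, reduces to a change of synaptic weights. A toy sketch of that idea (my own simplification; real plasticity is vastly more complex):

```python
def response(stimulus, weight):
    # Toy one-synapse 'network': output depends on the current weight.
    return stimulus * weight

weight = 0.5
before = response(2.0, weight)   # behavior before learning

weight += 0.3                    # crude plasticity: synapse strengthened
after = response(2.0, weight)    # identical stimulus, different behavior
```

The same stimulus (2.0) produces a different output once the weight has changed, which is why a fixed transition table stops being an adequate description.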

"Brains are a highly organized collection of neurons, just like CPUs are a highly organized collection of transistors. The way brains are organized results in them taking input from external sources, combining it with various feedback loops, and using that to modify the system itself -- what you might call 'learning'. However, it's a long way from a 'learning' system to an 'intelligent' system. And we're a LONG way from building any neuron-based systems that are anywhere NEAR the complexity of the human brain. The sorts of systems used in research right now are in the range of dozens of neurons; even a relatively 'simple' mammalian brain has millions."

I said: "what if they grew them in enourmous sizes with massive sized arrays of electrodes and in massive arrays in grid topologies with other sophisticated components?"


 
Originally posted by: Bootstrap
You missed the whole point of my post. I was talking about artificial neural networks, and the point still stands. You're assuming that putting real neurons in a computer will suddenly make the computer intelligent. It won't. Neurons by themselves are only capable of representing rather simple mathematical functions. You're just replacing electrical hardware with biological hardware. Humans and animals use complicated networks of real neurons for learning and memory, but we still really don't understand how this works. If we did, we could just as well simulate learning in electrical hardware.

I'm not assuming anything; you're assuming that they wouldn't:

Removing some 'A' from AI: Embodied Cultured Networks
http://www.neuro.gatech.edu/groups/potter/papers/DagstuhlAIBakkumpreprint.pdf
"We wish to continue this trend by studying the network processing of ensembles of living neurons that lead to higher-level cognition and intelligent behavior."
"A better understanding of the processes leading to biological cognition can, in turn, facilitate progress in understanding neural pathologies, designing neural prosthetics, and creating fundamentally different types of artificial intelligence."
"By using biology directly, we hope to remove some of the 'A' from AI."
"No one would argue that environmental interaction, or embodiment, is unimportant in the wiring of the brain; no one is born with the innate ability to ride a bicycle or solve algebraic equations. Practice is needed. An individual's unique environmental interactions lead to a continuous 'experience-dependent' wiring of the brain [1]. This makes evolutionary sense as it is helpful to learn new abilities throughout life: if there are some advantageous features of an organism that can be attained through learning, then the ability to learn such features can be established through evolution (the Baldwin effect) [2]. Thus, the ability to learn is innate (learning usually being defined as the acquisition of novel behavior through experience [3])."
"We use biological neural networks not as substitutes to artificial neural networks, but to tease out the intricacies of biological processing to inform future development of artificial processing. In particular, we analyzed how the properties of neurons lead to real-time control and adaptation to novel environments."
"New findings about the dynamics of living neural networks might be used to design more biological, less artificial AI."
"Environmental deprivation leads to abnormal brain structure and function, and environmental exposure shapes neural development. Similarly, patterned stimulation supplied to cultured neurons may lead to more robust network structure and functioning than with trivial or no stimulation. The most dramatic examples of the importance of embodiment come from studies during development, when the brain is most malleable."

Distributed processing in cultured neuronal networks
Steve M. Potter 2001
http://www.neuro.gatech.edu/groups/potter/papers/PotterDistProcPreprint.pdf
"An embodied culture capable of behaving may then exhibit changes in behavior as a result of experience, that is, learning."

Learning in Networks of Cortical Neurons
http://brc.technion.ac.il/learning.pdf
Goded Shahaf and Shimon Marom November 15, 2001
"Learning a new behavioral task is an exploration process that involves the formation and modulation of sets of associations between stimuli and responses."
"The experiments described above show that sufficient conditions for the realization of learning by a selection process, without the involvement of a neural rewarding entity, are embodied in large random networks of neurons maintained ex vivo. These networks form a large space of connectivity configurations that are stable over many hours. The connectivity can be modulated by external focal stimulation in an activity-dependent manner. Most importantly, the networks explore the space of possible responses and stabilize at configurations that remove the stimuli."
"Specifically, we show that, during regular low-frequency stimulation, the network explores a large space of possible connections and can be instructed to select and stabilize one or a subset of them by withdrawing the stimulus at the point that the connection is observed."
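The selection protocol Shahaf and Marom describe, stimulate repeatedly and withdraw the stimulus the moment the desired response appears, can be caricatured in a few lines (the random draw below merely stands in for a real culture's exploration of its response space; all names are mine):

```python
import random

random.seed(0)  # deterministic for the example

def train_by_selection(desired, max_trials=1000):
    """Stimulate until the network happens upon the desired response,
    then 'withdraw the stimulus' (stop), stabilizing that configuration."""
    response = None
    for trial in range(max_trials):
        response = random.randint(0, 9)  # network explores its responses
        if response == desired:
            return trial, response       # stimulus withdrawn here
    return max_trials, response

trials_needed, final_response = train_by_selection(desired=7)
```

The experimenter never tells the network *how* to produce the response; selection alone, by stopping stimulation at the right moment, does the shaping.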

HYBROTS: HYBRIDS OF LIVING NEURONS AND ROBOTS FOR STUDYING NEURAL COMPUTATION
http://web.mit.edu/shkolnik/www/publications/BICS_2004.pdf
"By combining small networks of real brain cells, computer simulations, and robotics into new hybrid neural microsystems (which we call Hybrots), we hope to determine which neural properties are essential for the kinds of collective dynamics that might be used in artificially intelligent systems."
"If we and others are successful with this new approach, we will learn the cell- and network-level substrates of memory, thought, and behavioral control, and may then be able to develop more brain-like artificial intelligences."

You're treating these groups of real neurons as a "magic" black box, the same way artificial neural networks were originally treated. While interesting, it's not going to be really beneficial until we actually understand what's going on inside, and can characterize what the limitations of these networks are. Scientists originally thought artificial neural networks would be the solution to all learning problems, until they realized that they're just a special type of representation for a nonlinear function.

That's because artificial networks are just math-crunching nodes, from what I've found anyway. Neurons actually show signs of intelligence:

http://cbcl.mit.edu/cbcl/news/files/kreiman-hogan-5-05.htm


You said, "You left out "arrays of", "in grid topologies" ". I have no idea what this statement is trying to say. All neural networks, artificial and real, have a network topology, if you want to call it that. The overall layout of the network doesn't change the expressive power of a neural nets in general.

Neurons rearrange themselves as they LEARN, through synaptic plasticity. And by topology I'm not talking about topology within the neural network, but the topology of different 'brains' and other computational equipment, such as entire conventional computers.
 
Originally posted by: SlitheryDee
Craziness. We're nowhere near that kind of technology. We know a few things about neural networks, but saying we're nearly ready to grow gigantic brain-networks and hook them up to quantum computers in order to create some sort of super-intelligent entity(s) is... well, it's out there.

That's like thinking you can build a car just because you figured out how to pop the hood 😕

Edit: I'm no expert in any of the related fields so I could be wrong...

There's no telling how far along DARPA is. Just because individual studies from separate universities only have so much success doesn't mean that DARPA isn't light-years ahead. Remember, we're talking about Manhattan Project-level programs. But all of those findings, all new science findings actually, are right at DARPA's fingertips:
http://www.nsf.gov/news/news_summ.jsp?cntn_id=104248
http://www.nsf.gov/crcns
This could probably turn into a huge list if anyone insists.


 
Originally posted by: Soviet
General hardware, huh? Should be in Highly Technical. Last time I checked, the population of AnandTech wasn't having problems sorting out their rat-brain CPUs 😛

Move it there, I don't care. Sorry.
 
Originally posted by: ed21x
we honestly don't know how memory works so far. Right now, we believe that protein-based receptors are strengthened through conditioning, and various parts of the brain (e.g., the amygdala and basal ganglia) are responsible for different aspects of human emotion. Little by little, we're learning the mechanics of how neurons work, but we are still nowhere close to figuring out how people think, feel, and remember. I appreciate you linking to a bunch of scientific articles, but they really don't have any direct application to the implications that you are making in your original post. Heck, most of quantum physics is still theoretical, and nobody really believes in string theory.

We don't necessarily have to fully understand how they learn as long as we manage to interface with them properly; the precise electro-chemical processes aren't important. If we give them enough data, they should be able to learn it themselves.

Ya, those were just some random links to show him that this isn't fiction.
 
Neurally Controlled Simulated Robot: Applying Cultured Neurons to Handle an Approach / Avoidance Task in Real Time and a Framework for Studying Learning in Vitro
http://web.mit.edu/shkolnik/www/projects/neurobot.pdf :
"One of the main benefits of living neural networks as opposed to digital computing is a built in ability to learn based on experience. When learning occurs, synaptic weights adjust (often referred to as synaptic plasticity) and thenceforth the system's behavior changes even if given the same input conditions as before the learning had occurred. Finite State Automata theory breaks down when trying to emulate neural learning, because the neurons can 'rewire' themselves automatically, thus changing the possible states the system may enter on a given input. Any change in synaptic weights may therefore be considered a doubling of the states in the automaton. If one tries to emulate learning to infinite precision (since the synaptic weights are analog), one may realize that the living neural network may actually have an infinite number of states. Though neural networks are chaotic systems, infinite precision is probably not required to model an analog synapse. Even so, the number of finite states that a network of neurons may enter may be unreasonably many to consider with standard automata theory. Given that learning may increase the processing power of living neuronal networks..."
...

This is called neural plasticity. A simple way to describe it is "Neurons that fire together, wire together." When neurons fire in a specific pattern, the proteins involved in the synaptic cleft change to optimize the signal. This is nothing new. The concept of simulating an infinite analog state machine like the brain with a simpler finite state machine is nothing new. All you have to do is account for fewer emotions. Once again, this is talking about artificial intelligence to simulate the brain, but it does not imply the biological implication of fusing a human brain with a computer.
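"Neurons that fire together, wire together" has a standard one-line formulation, Hebb's rule (Δw = η·x·y). A sketch of that rule alone, not of the protein-level changes described above:

```python
def hebbian_update(w, pre, post, lr=0.1):
    """Hebb's rule: the weight grows in proportion to the product of
    pre- and post-synaptic activity; co-firing strengthens the synapse."""
    return w + lr * pre * post

w_cofire = hebbian_update(0.2, pre=1.0, post=1.0)  # both fire: strengthened
w_idle = hebbian_update(0.2, pre=1.0, post=0.0)    # no co-firing: unchanged
```

When only one side is active the product is zero and the weight is untouched; only coincident activity "wires together."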



"The animat in this project is unique because I am replacing algorithmic components of control with neural computation. In the animat, sensory information is encoded into stimulus information, which induces a given reaction in the neural network. The behavior of the animat is determined solely on this neural response. There is no algorithmic component converting sensory information into animat movement. Thus, the animat demonstrates some of the computational power of cultured cortical neurons."

So the author describes how the animal converts sensory stimulus into info stored in the cortex via the PFC, neocortex, and hippocampus. There are no computers involved in this. In fact, why are you even citing this paragraph? All he says is that watching animals in action implies computational power in cultured neurons, not that cultured neurons can actually be used computationally, or in any way remotely close to replacing the computational algorithms used today.
 
Originally posted by: ed21x
Alright, I actually went through that OP article (vaguely skimmed it). Since I'm working here at the VLSB Labs in the Berkeley BioMedical Engineering department, I honestly know everything that is written in it (it is not a very specific article, more like a general Master's thesis).

For those who are too lazy to read it:
All this writeup is, is a collection of articles summarizing the applications of biotechnology in the fields of biomimetics, nanotechnology, artificial intelligence, genetics, drug delivery systems, microvalves/pumps, and prosthetics.

Nothing pointing to the doomsday prophecy of the original poster. The whole concept of convergence simply states that applications in one field can be applied to another, and thus help to advance biotechnology as a whole.


Are you talking about this article:
http://www.wtec.org/ConvergingTechnologies/
? It's a bit more than an article, being a whopping 482 pages.

It goes into great detail about converging on all levels (departments, agencies, laboratories, universities) to converge for convergence.

And it describes the creation of this emerging NBIC technology, which stands for "Nano-Bio-Info-Cogno" "convergence". It's basically fusing (converging) nanotechnology (nanobots), biotechnology (biological engineering), information technology (computers and communications), and cognitive science (brain, consciousness) into one (NBIC) at the atomic (nano) scale. It's embedding nanobot parts into synthetically grown biological cells that have electronic DNA (more advanced than humans'). --- But that's not important at this time. First we discuss the biocomputation.

 
Originally posted by: Markbnj
I don't think that the basic premise is outlandish. It's just still in the realm of the highly theoretical. I have no doubt it will happen one day, assuming we get that far. There are a host of problems that aren't addressed by the _extremely_ limited experiments performed so far.

On the other hand, if you want people to take a post like this seriously, avoid sentences like this one:

If you know your stuff then you'll also know about the fact that the government is converging on all levels (departments, agencies, laboratories, universities) to converge for convergence.

^Read above to see what that means^
 
Originally posted by: IIB

I'm not assuming anything; you're assuming that they wouldn't:

*Sigh* I'm not assuming anything, and I've already explained why. It's clear that you're not comprehending what you're reading. You're simply spewing a bunch of quotes which you're misinterpreting as supporting your viewpoint, when in reality they don't say anything that other people haven't already been trying to tell you.

It's clear to me that future discussion on this topic is pointless, so I'm out. Best of luck in building better brains.

 
Originally posted by: ed21x
Neurally Controlled Simulated Robot: Applying Cultured Neurons to Handle an Approach / Avoidance Task in Real Time and a Framework for Studying Learning in Vitro
http://web.mit.edu/shkolnik/www/projects/neurobot.pdf :
"One of the main benefits of living neural networks as opposed to digital computing is a built in ability to learn based on experience. When learning occurs, synaptic weights adjust (often referred to as synaptic plasticity) and thenceforth the system's behavior changes even if given the same input conditions as before the learning had occurred. Finite State Automata theory breaks down when trying to emulate neural learning, because the neurons can 'rewire' themselves automatically, thus changing the possible states the system may enter on a given input. Any change in synaptic weights may therefore be considered a doubling of the states in the automaton. If one tries to emulate learning to infinite precision (since the synaptic weights are analog), one may realize that the living neural network may actually have an infinite number of states. Though neural networks are chaotic systems, infinite precision is probably not required to model an analog synapse. Even so, the number of finite states that a network of neurons may enter may be unreasonably many to consider with standard automata theory. Given that learning may increase the processing power of living neuronal networks..."
...

This is called neural plasticity. A simple way to describe it is "Neurons that fire together, wire together." When neurons fire in a specific pattern, the proteins involved in the synaptic cleft change to optimize the signal. This is nothing new. The concept of simulating an infinite analog state machine like the brain with a simpler finite state machine is nothing new. All you have to do is account for fewer emotions. Once again, this is talking about artificial intelligence to simulate the brain, but it does not imply the biological implication of fusing a human brain with a computer.

So you dismiss the learning and REWIRING abilities of neurons?

You're missing this: the focus here is not on human mind-machine interfaces, it's about developing and growing machine-and-neuron brain arrays for computation. It's not about AI to simulate the brain.

"The animat in this project is unique because I am replacing algorithmic components of control with neural computation. In the animat, sensory information is encoded into stimulus information, which induces a given reaction in the neural network. The behavior of the animat is determined solely on this neural response. There is no algorithmic component converting sensory information into animat movement. Thus, the animat demonstrates some of the computational power of cultured cortical neurons."

So the author describes how the animal converts sensory stimulus into info stored in the cortex via the PFC, neocortex, and hippocampus. There are no computers involved in this. In fact, why are you even citing this paragraph? All he says is that watching animals in action implies computational power in cultured neurons, not that cultured neurons can actually be used computationally, or in any way remotely close to replacing the computational algorithms used today.[/quote]

To demonstrate that neurons aren't simple algorithms: they're chaotic.
 
Originally posted by: IIB
Originally posted by: ed21x
Neurally Controlled Simulated Robot: Applying Cultured Neurons to Handle an Approach / Avoidance Task in Real Time and a Framework for Studying Learning in Vitro
http://web.mit.edu/shkolnik/www/projects/neurobot.pdf :
"One of the main benefits of living neural networks as opposed to digital computing is a built in ability to learn based on experience. When learning occurs, synaptic weights adjust (often referred to as synaptic plasticity) and thenceforth the system's behavior changes even if given the same input conditions as before the learning had occurred. Finite State Automata theory breaks down when trying to emulate neural learning, because the neurons can 'rewire' themselves automatically, thus changing the possible states the system may enter on a given input. Any change in synaptic weights may therefore be considered a doubling of the states in the automaton. If one tries to emulate learning to infinite precision (since the synaptic weights are analog), one may realize that the living neural network may actually have an infinite number of states. Though neural networks are chaotic systems, infinite precision is probably not required to model an analog synapse. Even so, the number of finite states that a network of neurons may enter may be unreasonably many to consider with standard automata theory. Given that learning may increase the processing power of living neuronal networks..."
...

This is called neural plasticity. A simple way to describe it is "Neurons that fire together, wire together." When neurons fire in a specific pattern, the proteins involved in the synaptic cleft change to optimize the signal. This is nothing new. The concept of simulating an infinite analog state machine like the brain with a simpler finite state machine is nothing new. All you have to do is account for fewer emotions. Once again, this is talking about artificial intelligence to simulate the brain, but it does not imply the biological implication of fusing a human brain with a computer.

So you dismiss the learning and REWIRING abilities of neurons?

You're missing this: the focus here is not on human mind-machine interfaces, it's about developing and growing machine-and-neuron brain arrays for computation. It's not about AI to simulate the brain.

"The animat in this project is unique because I am replacing algorithmic components of control with neural computation. In the animat, sensory information is encoded into stimulus information, which induces a given reaction in the neural network. The behavior of the animat is determined solely on this neural response. There is no algorithmic component converting sensory information into animat movement. Thus, the animat demonstrates some of the computational power of cultured cortical neurons."

So the author describes how the animal converts sensory stimulus into info stored in the cortex via the PFC, neocortex, and hippocampus. There are no computers involved in this. In fact, why are you even citing this paragraph? All he says is that watching animals in action implies computational power in cultured neurons, not that cultured neurons can actually be used computationally, or in any way remotely close to replacing the computational algorithms used today.

To demonstrate that neurons aren't simple algorithms: they're chaotic.
[/quote]

Of course I'm not rejecting the rewiring abilities of neurons. Heck, the professor for one of my past classes (Dr. Mariam Diamond) pioneered the concept of Hebbian synapses. The paragraph that you cited is not about developing and growing machine and neuron brain arrays for computation. All I did was summarize it to show that it has nothing to do with what you're mentioning. If you reread that paragraph you cited, it literally talks about using finite systems to simulate an infinite state machine, and you will never find a mention of state machines except in the case of artificial intelligence.
 
But would you argue that he's not "replacing algorithmic components of control with neural computation"?

Would you argue that this isn't fundamentally different from the conventional conception of AI?

You surely seem like the right person to talk to here; what do you think about all of this overall? You haven't necessarily refuted my theory here unless I missed something.
 
^Read above to see what that means^

I wasn't confused over what you were trying to say 🙂.

and you will never find a mention of state machines except in the case of artificial intelligence.

Not sure if you mean what I think you meant, but state machines are not a concept that is solely, or even primarily, associated with artificial intelligence. State machines are almost by definition too simplistic to achieve anything that gives meaning to the term "intelligence."
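To make the contrast concrete: a finite state machine's transition table is fixed when the machine is built, which is exactly the property the "rewiring" argument says living networks lack. A minimal example (the states and inputs are my own invention, for illustration only):

```python
# A fixed transition table: (state, input) -> next state.
# Nothing the machine does at run time can modify this table,
# whereas a plastic neural network effectively rewrites its own.
TRANSITIONS = {
    ("idle", "stimulus"): "firing",
    ("firing", "stimulus"): "refractory",
    ("refractory", "rest"): "idle",
}

def run(state, inputs):
    for symbol in inputs:
        state = TRANSITIONS.get((state, symbol), state)  # stay put if undefined
    return state

final_state = run("idle", ["stimulus", "stimulus", "rest"])
```

However many states you add, the machine's behavior on a given input never changes between runs; that is what makes it "too simplistic" in the sense above.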
 
*"The animat in this project is unique because I am replacing algorithmic components of control with neural computation. In the animat, sensory information is encoded into stimulus information, which induces a given reaction in the neural network. The behavior of the animat is determined solely on this neural response. There is no algorithmic component converting sensory information into animat movement. Thus, the animat demonstrates some of the computational power of cultured cortical neurons."



"So the author describes how the animal converts sensory stimulus into info stored in the cortex via the PFC, neocortex, and hippocampus. There is no computers involved in this. "

*That is exactly my point.

"All he says is that watching animals in action implies computational power in cultured neurons, not that cultured neurons can actually be used computationally, or in any way remotely close to replacing the computational algorithms used today."

*Now that I re-examine this post I see what I forgot to address: you don't even understand this technology. You just described him talking about the animat as if it were an ANIMAL. Here is an animat:

"Animats are artificial animals. The term includes physical robots and virtual simulations. Animat research, a subset of Artificial Life studies, has become rather popular since Rodney Brooks' seminal paper "Intelligence without reason". The word was coined by S.W. Wilson in 1991."
http://en.wikipedia.org/wiki/Animat

*And here's where that paragraph came from:

"In this project, I showed that probing with varying delay between the probes produces a predictable non-linear response. I showed how to emulate digital logic using such a response, thus proving that cultured neurons can theoretically execute a computer program with polynomial slow-down. I then applied the living neurons to handle a more interesting real-world problem in real-time. In this project an animat was created that combines a living neural network with a virtual body in an effort to create a system where the living neural network could be studied. The animat was successful at tracking and maintaining distance from a reference object, which can be considered both an approach and avoidance task. Part of the robustness of the animat in the current project is that it reacts more strongly when necessary to correct for error. Thus, if it has an error on sometrial, the error will not be fatal because in the next trial it will make up for it. The animat in this project is unique because I am replacing algorithmic components of control with neural computation. In the animat, sensory information is encoded into stimulus information, which induces a given reaction in the neural network. The behavior of the animat is determined solely on this neural response There is no algorithmic component converting sensory information into animat movement. Thus, the animat demonstrates some of the computational power of cultured cortical neurons. Furthermore, the animat provides a scheme for testing the effects of plasticity in cultured neurons, and these effects are visible through quantifiable measurements of robotic behavior. The animat opens the door for additional experiments, both to determine interesting robotic behavior, such as tracking and following a movingreference object, and to determine the effects of varying types of plasticity that may be induced. One may ask the question, ?how much intelligence does the animat display?? 
An in-depth philosophical discussion of this topic is beyond the scope of this thesis, but one should consider that the animat is handling a relatively difficult approach / avoidance task in real-time. A truly intelligent machine should of course be able to handle a variety oftasks, including tasks that it has never faced before. To accomplish such a goal, more complex sensory information needs to be encoded (rather than simply the direction and distance of a single object within the environment). Thus, an important future direction in the development of this animat may involve the utilization of additional channels for stimulation. I hope that the animat built in this project is at the beginning of its development cycle. In the near future, the animat can be converted into a real robot fairly simply. Small changes, such as improvements to the mapping schemes to make them more dynamic can improve the performance of the animat. For example, with each step the animat takes, the individual channel histograms used for lock/key decoding may be slightly modified. This will take into account any slight ?drift? in precisely timed spikes over time."
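The closed loop the thesis describes, a sensor reading encoded as stimulation and the network's response decoded directly into movement, looks roughly like this. The `culture_response` stub is a hypothetical stand-in for the living network (which in the real animat is a recorded multi-electrode response, not a formula), and all names and numbers here are mine:

```python
def culture_response(stimulus):
    # Hypothetical stand-in for the cultured network's reaction.
    return -0.5 * stimulus

def step(distance, target=10.0):
    """One control cycle: encode the error as a stimulus, let the
    'network' respond, and map the response directly onto movement."""
    stimulus = distance - target           # sensory encoding
    movement = culture_response(stimulus)  # neural response -> motion
    return distance + movement

# Each cycle the animat closes half the remaining gap to the target
# distance, mirroring the approach/avoidance behavior described.
d = 16.0
for _ in range(5):
    d = step(d)
```

Of course any software sketch is itself algorithmic; this only illustrates the shape of the loop in which the cultured network takes the place of the controller.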





Now, would anyone care to analyze this thread and debunk my theory/concept here?
 
Using animal tissue to mirror the actions of silicon is rather different from intelligence. Until you understand that, you're nothing more than someone who thinks Google substitutes for knowledge.
 
Originally posted by: Matthias99
Originally posted by: IIB
"You're postulating evolutionary software development using biological processors rather than silicon-based?"

*No I'm saying they will bypass evolutionary software development by utilizing biological learning components.

Um... it's a little more complicated than that.

"Or are you making the deeply stupid assumption that processing power = intelligence."

In this case it would; neurons and brains learn. 500 quantum or other high-end processors wouldn't become intelligent; the internet still isn't even intelligent or self-aware. Neurons and brains learn on their own, and it's more than just algorithmic. Conventional processors just crunch numbers per request.

'Neurons' are just specialized cells that behave in certain more or less deterministic ways. Saying a neuron 'learns on its own' is kind of like saying that a transistor 'computes stuff'. Individual neurons can be viewed, in some sense, as just 'number-crunching' elements. Computationally, there is little difference from a computer-modelled neuron element and a biological one.

Brains are a highly organized collection of neurons, just like CPUs are a highly organized collection of transistors. The way brains are organized results in them taking input from external sources, combining it with various feedback loops, and using that to modify the system itself -- what you might call 'learning'. However, it's a long way from a 'learning' system to an 'intelligent' system. And we're a LONG way from building any neuron-based systems that are anywhere NEAR the complexity of the human brain. The sorts of systems used in research right now are in the range of dozens of neurons; even a relatively 'simple' mammalian brain has billions.

fixed
 
Originally posted by: IIB
"Or are you making the deeply stupid assumption that processing power = intelligence."

*In this case it would; neurons and brains learn. 500 quantum or other high-end processors wouldn't become intelligent; the internet still isn't even intelligent or self-aware. Neurons and brains learn on their own, and it's more than just algorithmic. Conventional processors just crunch numbers per request.

The problem here is you're equating 'learning' (which would be more accurately phrased as something like 'adaptation' in this context) with 'intelligence' or 'self-awareness'. You can build computer systems that 'learn' (either by emulating neural networks or through other methods); you can also have biological systems that 'learn'.

What makes them 'intelligent' is something else entirely, and is not (IMO and in much but not all of the scientific community's opinion) dependent on having the computational medium be biological.

"'Neurons' are just specialized cells that behave in certain more or less deterministic ways. Saying a neuron 'learns on its own' is kind of like saying that a transistor 'computes stuff'. Individual neurons can be viewed, in some sense, as just 'number-crunching' elements. Computationally, there is little difference from a computer-modelled neuron element and a biological one."

Neurally Controlled Simulated Robot: Applying Cultured Neurons to Handle an Approach / Avoidance Task in Real Time and a Framework for Studying Learning in Vitro
http://web.mit.edu/shkolnik/www/projects/neurobot.pdf :
"One of the main benefits of living neural networks as opposed to digital computing is a built in ability to learn based on experience. When learning occurs, synaptic weights adjust (often referred to as synaptic plasticity) and thenceforth the system's behavior changes even if given the same input conditions as before the learning had occurred. Finite State Automata theory breaks down when trying to emulate neural learning, because the neurons can 'rewire' themselves automatically, thus changing the possible states the system may enter on a given input.

Right -- but nobody in their right mind would try to use an FSM as a learning machine, since as the name implies it has a strictly limited number of possible states and so cannot adapt.

Any change in synaptic weights may therefore be considered a doubling of the states in the automaton. If one tries to emulate learning to infinite precision (since the synaptic weights are analog), one may realize that the living neural network may actually have an infinite number of states. Though neural networks are chaotic systems, infinite precision is probably not required to model an analog synapse. Even so, the number of finite states that a network of neurons may enter may be unreasonably many to consider with standard automata theory. Given that learning may increase the processing power of living neuronal networks..."

(emphasis added)

All he's saying here is that large neural networks are potentially a more powerful computational model than finite state machines. You'll have to excuse me if I answer that with "duh".
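The property both posters are circling -- that a synaptic weight change alters the response to an identical input, which a fixed transition table cannot express -- can be shown in a minimal sketch. This is an illustrative one-synapse model, not anything from the cited thesis:

```python
# Minimal illustrative sketch: a single "synapse" whose response to the
# SAME stimulus changes after a Hebbian-style weight update. A finite
# state machine with a fixed transition table cannot capture this
# without enumerating every possible weight value as a separate state.

def fire(weight: float, stimulus: float, threshold: float = 0.5) -> int:
    """1 if the weighted input crosses the firing threshold, else 0."""
    return 1 if weight * stimulus > threshold else 0

weight = 0.4
stimulus = 1.0

before = fire(weight, stimulus)      # 0.4 * 1.0 = 0.4, below threshold -> 0

# Hebbian-style update (assumed rule): activity strengthens the synapse.
learning_rate = 0.2
weight += learning_rate * stimulus   # weight becomes 0.6

after = fire(weight, stimulus)       # 0.6 * 1.0 = 0.6, above threshold -> 1

print(before, after)                 # same input, different output
```

Since the weight is a continuous quantity, each distinct value is in effect a distinct "state," which is the sense in which the quoted passage says the state count explodes.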

"The animat in this project is unique because I am replacing algorithmic components of control with neural computation. In the animat, sensory information is encoded into stimulus information, which induces a given reaction in the neural network. The behavior of the animat is determined solely on this neural response. There is no algorithmic component converting sensory information into animat movement. Thus, the animat demonstrates some of the computational power of cultured cortical neurons."

Neurons do computation. So do computers. So do a lot of processes. He's not proving anything new here.

"Brains are a highly organized collection of neurons, just like CPUs are a highly organized collection of transistors. The way brains are organized results in them taking input from external sources, combining it with various feedback loops, and using that to modify the system itself -- what you might call 'learning'. However, it's a long way from a 'learning' system to an 'intelligent' system. And we're a LONG way from building any neuron-based systems that are anywhere NEAR the complexity of the human brain. The sorts of systems used in research right now are in the range of dozens of neurons; even a relatively 'simple' mammalian brain has millions."

I said: "what if they grew them in enourmous sizes with massive sized arrays of electrodes and in massive arrays in grid topologies with other sophisticated components?"

Which cannot be done with current or near-future technology. This stuff is at a pretty early research phase. Trust me; I worked in a lab at college that did intracellular neural recording (among other things), and had some of those DARPA contracts you're talking about.
 
Originally posted by: Bobthelost
Using animal tissue to mirror the actions of silicon is rather different to intelligence. Until you understand that, you're nothing more than someone who thinks Google substitutes for knowledge.

There's a BIG difference between "animal tissue" and animal brain neurons, and now they're even humanizing the rat neurons:
Lab Mice Grow Human Brain Cells After Injections
http://www.foxnews.com/story/0,2933,178498,00.html
 
Originally posted by: Inspector Jihad
Originally posted by: Matthias99
Originally posted by: IIB
"You're postulating evolutionary software development using biological processors rather than silicon-based?"

*No I'm saying they will bypass evolutionary software development by utilizing biological learning components.

"Um... it's a little more complicated than that."

Is that a fact? It's not like I'm saying this is easy, but that doesn't mean that it's not possible and that DARPA isn't already doing it.

"Or are you making the deeply stupid assumption that processing power = intelligence."

In this case it would; neurons and brains learn. 500 quantum or other high-end processors wouldn't become intelligent; the internet still isn't even intelligent or self-aware. Neurons and brains learn on their own, and it's more than just algorithmic. Conventional processors just crunch numbers per request.

'Neurons' are just specialized cells that behave in certain more or less deterministic ways. Saying a neuron 'learns on its own' is kind of like saying that a transistor 'computes stuff'. Individual neurons can be viewed, in some sense, as just 'number-crunching' elements. Computationally, there is little difference from a computer-modelled neuron element and a biological one.

Brains are a highly organized collection of neurons, just like CPUs are a highly organized collection of transistors. The way brains are organized results in them taking input from external sources, combining it with various feedback loops, and using that to modify the system itself -- what you might call 'learning'. However, it's a long way from a 'learning' system to an 'intelligent' system. And we're a LONG way from building any neuron-based systems that are anywhere NEAR the complexity of the human brain. The sorts of systems used in research right now are in the range of dozens of neurons; even a relatively 'simple' mammalian brain has billions.

fixed

But you're still underestimating the power of the neuron. We can observe it as being that "simple", but neurons are far more advanced than any sort of transistor, and they clearly store memories somehow and reroute themselves to build those memories. Therefore they are more powerful than any simple algorithmic software neural net node that I've ever heard of, and that power is built into each neuron. The key is for us to learn how to utilize them and communicate with them. We have various eye implants and methods, cochlear implants, hippocampus implants, human neural interfaces, animats and F22 'brains' - all significant interfacing. It's not like we don't know how to get things done in there, and you shouldn't consider each of those separate.

It's not 2 corporations trying to compete and withholding secrets; it's DARPA, with full intentions to do any and all possible science, and academic data at their fingertips (plus virtually unlimited resources -- do we REALLY know how much money their top projects are getting?). Why would it be necessary to complete the Human Cognome Project to find our own way to super intelligence? It's a matter of learning how to communicate with desired modules and completing modules. DARPA and NASA have block diagrams of how they intend to do their Cognitive Computer and Intelligent Archives; it's a matter of building those modules.

I'll have some sources and such later, but I have to leave for work.

Oh yeah, would you argue that learning couldn't lead to intelligence? Could you be intelligent if you couldn't learn?
 
Originally posted by: IIB
But you're still underestimating the power of the neuron. We can observe it as being that "simple", but neurons are far more advanced than any sort of transistor, and they clearly store memories somehow and reroute themselves to build those memories. Therefore they are more powerful than any simple algorithmic software neural net node that I've ever heard of, and that power is built into each neuron.

Basically: please come back when you have some clue what you're talking about. While individual neurons are certainly interesting and complex systems, they're not really all that powerful (or unique) in terms of computational ability. What makes them so capable is that they are built into highly organized (and in fact self-organizing) networks consisting of billions of cells and potentially trillions of interconnections. But actually exploiting the computational capabilities of such networks is EXTREMELY difficult.

I spent two years doing research in a cognitive science lab at a major university that was close to the cutting edge in some of these areas. Believe me when I say that while you can do some cool stuff at small scales, dealing with larger populations of neurons gets exponentially more difficult, both computationally and in terms of physical interfacing.

The key is for us to learn how to utilize them and communicate with them. We have various eye implants and methods, cochlear implants, hippocampus implants, human neural interfaces, animats and F22 'brains' - all significant interfacing. It's not like we don't know how to get things done in there, and you shouldn't consider each of those separate.

We know how to do some really pretty basic and high-level things 'in there'; building low-level neurological systems at any but the most trivial sizes is still not really feasible.

It's not 2 corporations trying to compete and withholding secrets; it's DARPA, with full intentions to do any and all possible science, and academic data at their fingertips (plus virtually unlimited resources -- do we REALLY know how much money their top projects are getting?).

While DARPA does some cool stuff, ultimately much of what they do is subcontracting out research to other people. They are not some shadowy super-secret organization with unlimited resources. They compete for government funding along with everything else, and science research in general tends to not be the highest priority.

Why would it be necessary to complete the Human Cognome Project to find our own way to super intelligence? It's a matter of learning how to communicate with desired modules and completing modules. DARPA and NASA have block diagrams of how they intend to do their Cognitive Computer and Intelligent Archives; it's a matter of building those modules.

Which is a RIDICULOUSLY complicated and overly ambitious goal. They have a 'block diagram'? Great. That's a long, long ways from the sort of systems you are talking about.

Oh yeah, would you argue that learning couldn't lead to intelligence? Could you be intelligent if you couldn't learn?

What you are defining as 'intelligence' is a superset of what you are defining as 'learning'. 'Intelligent' systems must be able to adapt, but adaptive systems are not necessarily 'intelligent' (at least in terms of things like self-awareness).
 
Originally posted by: IIB
Originally posted by: Bobthelost
Using animal tissue to mirror the actions of silicon is rather different to intelligence. Until you understand that, you're nothing more than someone who thinks Google substitutes for knowledge.

There's a BIG difference between "animal tissue" and animal brain neurons, and now they're even humanizing the rat neurons:
Lab Mice Grow Human Brain Cells After Injections
http://www.foxnews.com/story/0,2933,178498,00.html


The ability to make the "6 billion dollar rat" and what you're discussing are decades apart.
 
Originally posted by: IIB
Originally posted by: ed21x
Alright, I actually went through that OP article (vaguely skimmed it). Since I'm working here at the VLSB labs in the Berkeley Biomedical Engineering department, I honestly know everything that is written in it (it is not a very specific article, more like a general Master's thesis).

For those who are too lazy to read it:
All this writeup is, is a collection of articles summarizing the applications of biotechnology in the fields of biomimetics, nanotechnology, artificial intelligence, genetics, drug delivery systems, microvalves/pumps and prosthetics.

Nothing pointing to the doomsday prophecy of the original poster. The whole concept of convergence simply states that applications in one field can be applied to another, and thus help to advance biotechnology as a whole.


Are you talking about this article:
http://www.wtec.org/ConvergingTechnologies/
? It's a bit more than an article, being a whopping 482 pages.

It goes into great detail about converging on all levels (departments, agencies, laboratories, universities) to converge for convergence.

And it describes the creation of this emerging NBIC technology, which stands for "Nano-Bio-Info-Cogno" "convergence". It's basically fusing (converging) nanotechnology (nanobots), biotechnology (biological engineering), information technology (computers and communications) and Cognitive science (brain, consciousness) into one (NBIC) at the atomic (nano) scale. It's embedding nanobot parts into synthetically grown biological cells, that have electronic DNA (more advanced than humans). --- But that's not important at this time. First we discuss the biocomputation.

To converge for convergence is redundant and doesn't make any sense.

Cars drive for driving.

Crows fly for flying...

You catch my drift? I suspect you need more practice working your macro-rat-brain-array.
 

Originally posted by: Matthias99
Originally posted by: IIB
But you're still underestimating the power of the neuron. We can observe it as being that "simple", but neurons are far more advanced than any sort of transistor, and they clearly store memories somehow and reroute themselves to build those memories. Therefore they are more powerful than any simple algorithmic software neural net node that I've ever heard of, and that power is built into each neuron.

Basically: please come back when you have some clue what you're talking about. While individual neurons are certainly interesting and complex systems, they're not really all that powerful (or unique) in terms of computational ability. What makes them so capable is that they are built into highly organized (and in fact self-organizing) networks consisting of billions of cells and potentially trillions of interconnections. But actually exploiting the computational capabilities of such networks is EXTREMELY difficult.

While neurons may not be the most exceptional at mathematical computation, they are far more advanced in cognitive capabilities than all conventional computing technologies. Why wouldn't they use the natural method to build a cognitive system? How does it make more sense to pursue conventional approaches that require extreme hardware and software configurations, when we can use less of each and allow the neuron processors to do the rest?

The convergence of these technologies puts the advanced conventional computing methods directly at the neuron-tips of these brains, which can and will take neuron embodiments to incomprehensible levels of computation and cognition. Conventional hardware will continue to increase in ingenuity and capability, and it's still a long shot from brain-technology cognition, but it offers a serious edge in adding to neuron computing power. If we could achieve super intelligence by cheating our way with neuron power, we could then use that neuron power to finish the rest, could we not?

"Hawkins focuses mainly on the cortex, the most evolutionarily recent part of the brain. The cortex, in his view, uses memory rather than computation to solve problems. Consider the problem of catching a ball. A robotic arm might be programmed for this task, but achieving it is extremely difficult and involves reams of calculations. The brain, by contrast, draws upon stored memories of how to catch a ball, modifying those memories to suit the particular conditions each time a ball is thrown."
"The cortex also uses memories to make predictions. It is engaged in constant, mostly unconscious prediction about everything we observe. When something happens that varies from prediction -- if you detect an unusual motion, say, or an odd texture -- it is passed up to a higher level in the cortex's hierarchy of neurons. The new memories are then parlayed into further predictions. Prediction, in Hawkins' telling, is the sine qua non of intelligence. To understand something is to be able to make predictions about it."
http://www.reason.com/0504/cr.ks.are.shtml

"Apparently, neurons, themselves the tools of learning, smartly synthesize proteins where they are needed. Two recent publications demonstrate that neurons are capable of localized translation in dendrites and in axons."
http://www.jcb.org/cgi/content/full/158/5/831

Are you going to argue with Christof Koch?
"From the perspective of Christof Koch's Biophysics of Computation the situation is quite different. A neuron can no longer be viewed as a single switch; it is more or less analogous to an integrated circuit chip."
http://www.klab.caltech.edu/~koch/bioph...book/biophysics-book-review-scott.html

Neurons aren't powerful or unique? Neurons aren't feasible? Conventional technologies will give us cognitive computing first, you say?

1. If we had a silicon chip that had as many "connections", would it become intelligent? No. In neurons, the "software" and the "ROM memory" are built in; even the RAM seems to be. There are certain areas or "parts" that play important roles in consciousness that wouldn't exist in a puddle or blob of neurons, but the fact remains that the power is in those neurons. Neurons process and they store memories. How do memory capabilities fit into the mathematical model of trying to replicate "neuron power"?

2. It's suggested that glial cells even help electrically "compute", and it isn't known how significant their function is. How do glial cells fit into the mathematical model of trying to replicate "neuron power"?

3. Does anyone think that we will ever have self-repairing silicon chips? Neuron networks self-repair and self-form, which would take serious overhead off the software and hardware. DARPA does have 3D chips in its thrust, but silicon chips are still flat for a reason.

4. Neurons grow in different shapes & sizes, with varying amounts & lengths of dendrites and axons. Why wouldn't they all be exactly the same if they aren't special? How do rearranging, growing/stretching and expanding cells, with synaptic plasticity, fit into the mathematical model of trying to replicate "neuron power"?

5. It's suggested that neuron dendrites and axons reverse-fire. How does that fit into the model?

6. A neuron can have up to roughly 100,000 dendritic synapses, with multi-connected axons. How does that fit into the model? How big can this model get until we decide to just use neurons instead?

7. Conventional computers use base-2 binary, while neurons are analog (sampled at about 25 kHz in these studies). How much conventional CPU and RAM overhead will it consume to emulate those analog dynamics in binary (not even counting all of the other dynamics of neurons)?

8. Memories and synaptic weights involve biochemistry. How does that fit into the "simple" model?
http://cbcl.mit.edu/cbcl/news/files/kreiman-hogan-5-05.htm

9. Right now we're (publicly) using rat-brain neuron networks to study neural processing and cognition. This appears to be a step in surpassing human cognition. The NSF has awarded DeMarse $500,000 to take his F22 brain findings, and research, to attempt to build a mathematical model of neuron networks. While those "loose" findings would be important for progress, rat-brain neuron nets are still nothing like human ones. Not in neuron types/capacity, or brain complexity.

10. We still don't know exactly what goes on inside the neuron, yet you describe them as being "simple"?
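The arithmetic behind point 7 can be sketched roughly. The 60-channel, 25 kHz figures come from the DeMarse setup discussed in this thread; the 16-bit sample size is an assumption for illustration:

```python
# Back-of-the-envelope sketch: raw data rate of digitizing analog
# neural activity, before any spike detection or modeling.
channels = 60              # MEA channels in the DeMarse setup
sample_rate_hz = 25_000    # 25 kHz per channel
bytes_per_sample = 2       # assumption: 16-bit ADC samples

bytes_per_second = channels * sample_rate_hz * bytes_per_sample
gb_per_hour = bytes_per_second * 3600 / 1e9

print(bytes_per_second)    # 3,000,000 bytes/s, i.e. ~3 MB/s
print(gb_per_hour)         # ~10.8 GB per hour of raw recording
```

Even this modest 60-electrode dish generates megabytes per second; scaling the channel count toward brain-like sizes multiplies the digital overhead accordingly, which is the point the question is driving at.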

Considering 1-10 (I'm sure there are some things I missed), it seems obvious that trying to build a mathematical model of the brain's neuron "computing" processes, and trying to program that into hardware, would go against Occam's Razor. All I see is an "s-curve" to reaching the capability of proper neuron firing and "programming" to ultimately reach super intelligence. Our brains with their "simple neurons" already spank any computer out there in intelligence and cognition (at least from an unclassified standpoint), and a great deal of the brain is used for body motor and life-support systems -- things these computers won't need -- and they would have all the hardware extras that our brains don't.

If we can perfect it then we can grow them in larger scales, and get both "hardware" and "software" coupled inside of each "processor". They're already devoting significant computers to their efforts; this would be a matter of converting those over to "data acquisition hardware" and building cube-shaped "brains" that have massive bio-silicon interface chips (better than massive MEAs, must I explain?) on all sides; they could even do more extravagant geometric shapes using silicon. How isn't it more feasible to choose neurons over conventional technologies for cognitive computing?

They could go rather far with this, using some current technologies alone. 1. They could start with "Doogie" strains of rats or mice, which have increased learning and memory abilities. 2. Then, they could humanize them using stem cell treatments. This doesn't actually humanize all of the cells but rather sprouts human cells in the mix. 3. An important key is whether or not spindle or mirror neurons can be harvested like this. 4. If so, they could then use developing "assembly line" technology to clone those cells in large scales, giving them superior processor media. 5. Advanced nootropic drugs and bioengineering can further enhance cognitive capabilities.
http://www.princeton.edu/pr/news/99/q3/0902-smart.htm
http://en.wikipedia.org/wiki/Spindle_neuron
http://www.washingtonpost.com/wp-dyn/co...rticle/2005/12/12/AR2005121201388.html
http://news.bbc.co.uk/1/hi/sci/tech/1308732.stm

It won't ruin their goal even if #3 above isn't possible, for at least two reasons. First, they'll have advanced rat-human chimæra neuron media that includes more powerful rat cells in the mix. Second, they can simply use "suspended animation" technology to harvest live human brains, which would give them significant amounts of spindle and mirror neurons, to name two. Do you have "Organ Donor" checked on your license? Do ethical laws apply to people that do?
http://www.websters-online-dictionary.org/definition/CHIMAERA
http://smh.com.au/news/health-and-fitne...success/2006/01/20/1137553739997.html#


I spent two years doing research in a cognitive science lab at a major university that was close to the cutting edge in some of these areas. Believe me when I say that while you can do some cool stuff at small scales, dealing with larger populations of neurons gets exponentially more difficult, both computationally and in terms of physical interfacing.

How long ago was that?

Do you think that DeMarse's setup couldn't be better optimized for a larger array? Do you think that his hardware/software setup won't be dwarfed in technological comparison? Isn't it already?

For DeMarse's F22 brains: "Measurements of neural activity were conducted using Multichannel System's data acquisition hardware and custom software on an Apple XServe with 3.5 Terabytes of Xraid disk storage. Raw electrical activity was recorded for each of the 60 channels on the MEA, sampled and digitized at 25KHz per channel. This data was then streamed via TCP/IP to an Apple G5 client computer over a local gigabit network. The client then performed further data processing, detecting action potentials (APs) (deviations in voltage above or below 5.0 x standard deviation of estimated noise per channel) and mapping telemetry from the flight simulator to schedule stimulations, while sending control commands to the aircraft and logging the data. An F-22 Raptor was simulated with the commercially available XPlane aircraft simulation software. The aircraft simulator was run on a separate computer (Dell PC) communicating with a client via UDP (transmitting flight telemetry: heading, speed, altitude, pitch and roll angle) every 200 ms. The simulator also received commands to adjust the angle of the aircraft's aileron and elevator control surfaces, modifying the plane's in-flight roll and pitch angles, respectively." (DeMarse, "Adaptive Flight Control With Living Neuronal Networks on Microelectrode Arrays")
http://www.apple.com/xserve/
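The spike-detection rule quoted above (flag deviations beyond 5.0 x the estimated noise standard deviation per channel) can be sketched in a few lines. The median-based noise estimate below is an assumption -- a common robust choice -- not necessarily what DeMarse's pipeline used:

```python
import random
import statistics

def detect_spikes(trace, k=5.0):
    """Return indices where |sample| exceeds k times the estimated
    noise standard deviation (robust median-based estimate)."""
    # Assumed noise estimator: median absolute value / 0.6745, a
    # standard robust stand-in for the noise sigma.
    sigma = statistics.median(abs(x) for x in trace) / 0.6745
    return [i for i, x in enumerate(trace) if abs(x) > k * sigma]

# Synthetic demo: 1 second of Gaussian "noise" at 25 kHz with one
# injected large deflection standing in for an action potential.
random.seed(0)
trace = [random.gauss(0.0, 1.0) for _ in range(25_000)]
trace[12_000] = 8.0

print(detect_spikes(trace))
```

In the real system this ran continuously on 60 channels in parallel; the sketch only shows the thresholding logic on one channel of synthetic data.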

The key is for us to learn how to utilize them and communicate with them. We have various eye implants and methods, cochlear implants, hippocampus implants, human neural interfaces, animats and F22 'brains' - all significant interfacing. It's not like we don't know how to get things done in there, and you shouldn't consider each of those separate.

We know how to do some really pretty basic and high-level things 'in there'; building low-level neurological systems at any but the most trivial sizes is still not really feasible.

See above.

It's not 2 corporations trying to compete and withholding secrets; it's DARPA, with full intentions to do any and all possible science, and academic data at their fingertips (plus virtually unlimited resources -- do we REALLY know how much money their top projects are getting?).

While DARPA does some cool stuff, ultimately much of what they do is subcontracting out research to other people. They are not some shadowy super-secret organization with unlimited resources. They compete for government funding along with everything else, and science research in general tends to not be the highest priority.

How much of the high-end stuff did you work on personally? Are you saying that DARPA's high-end projects don't go to more private and secure laboratories than the universities? Which are hooked into the TeraGrid, that foreign powers probably data-mine themselves. Do you consider that each and every biocomputation project they do is unrelated, that they don't data-mine the TeraGrid, and that they don't combine findings? Do you seriously believe them when they say that they don't have an actual lab or headquarters? When I see the top military technology firm in the world say that, I tend to think that they won't disclose the locations of important facilities, but hey, they tell US everything, right?
http://www.darpa.mil/body/pdf/BridgingTheGap_Feb_05.pdf
http://www.nsf.gov/news/news_summ.jsp?cntn_id=104248

Why would it be necessary to complete the Human Cognome Project to find our own way to super intelligence? It's a matter of learning how to communicate with desired modules and completing modules. DARPA and NASA have block diagrams of how they intend to do their Cognitive Computer and Intelligent Archives; it's a matter of building those modules.

Which is a RIDICULOUSLY complicated and overly ambitious goal. They have a 'block diagram'? Great. That's a long, long ways from the sort of systems you are talking about.

So has been all conventional AI work.

A block diagram is a major step. Without a goal, how far can one get with anything? "Cognitive AI" and "Intelligent Archives" need to start somewhere: research into specifying the necessary "blocks" for the desired system(s), completion of the block overview, required specifications of each block, R&D, and then construction and utilization. In this case it's a goal for super-intelligence; do you agree that if they achieved it they would be able to rapidly expand it using it itself?
NASA Intelligent Archives:
http://daac.gsfc.nasa.gov/intelligent_archive/IA_report_8-27-02_baseline.pdf



Biologically-Inspired Cognitive Architectures
http://www.darpa.mil/ipto/programs/bica/vision.htm
"4.2 Thrust B - Neurobiologically Inspired Architectures
In Thrust B, DARPA seeks a dramatic improvement in our understanding of the brain's functions and processes. Initially, we seek a major leap in the learning performance of traditional AI systems by augmenting and informing their designs with neuroscience principles. Such machines might demonstrate functions such as imagination, social intelligence and/or the anticipation of behavior of other intelligent agents. 'Learning' (in the sense of interest in this BAA) involves the intense interaction of three processes: attending, remembering, and reasoning. Because of the highly integrated nature of the brain, learning cannot be viewed separately from other brain activities. In the follow-on phase, we expect to implement a new class of hybrid AI systems -- using a mixture of psychology-based and neuroscience-based architectures. Our ultimate goal is to approach brain-like performance in learning, use of experience, sensorimotor integration and other complex processes. At the same time we expect to develop a global theory of cognition and one or more neurobiologically-inspired, integrated cognitive architectures."
http://www.darpa.mil/BAA/pdfs/baa05-18pip.pdf

And that's just what isn't secret...


Oh, and would you argue that learning couldn't lead to intelligence? Could you be intelligent if you couldn't learn?

"What you are defining as 'intelligence' is a superset of what you are defining as 'learning'. 'Intelligent' systems must be able to adapt, but adaptive systems are not necessarily 'intelligent' (at least in terms of things like self-awareness)."
"Self-awareness" is a critical component of cognitive computing. Learning is a critical component of both cognition and intelligence. Neurons are the only existing proof of cognition, and they offer learning. You do the math.
 
Originally posted by: Acanthus
Originally posted by: IIB
Originally posted by: ed21x
Alright, I actually went through that OP article (vaguely skimmed it). Since I'm working at the VLSB Labs in the Berkeley BioMedical Engineering department, I honestly know everything that is written in it (it is not a very specific article, more like a general Master's thesis).

For those who are too lazy to read it:
This writeup is just a collection of articles summarizing the applications of biotechnology in the fields of biomimetics, nanotechnology, artificial intelligence, genetics, drug delivery systems, microvalves/pumps, and prosthetics.

Nothing pointing to the doomsday prophecy of the original poster. The whole concept of convergence simply states that applications in one field can be applied to another, and thus help to advance biotechnology as a whole.


Are you talking about this article:
http://www.wtec.org/ConvergingTechnologies/
? It's a bit more than an article, being a whopping 482 pages.

It goes into great detail about converging at all levels (departments, agencies, laboratories, universities) to converge for convergence.

And it describes the creation of this emerging NBIC technology, which stands for "Nano-Bio-Info-Cogno" "convergence". It's basically fusing (converging) nanotechnology (nanobots), biotechnology (biological engineering), information technology (computers and communications) and cognitive science (brain, consciousness) into one (NBIC) at the atomic (nano) scale. It's embedding nanobot parts into synthetically grown biological cells that have engineered DNA (more advanced than human DNA). But that's not important at this time. First we discuss the biocomputation.

"To converge for convergence" is redundant and doesn't make any sense.

Cars drive for driving.

Crows fly for flying...

You catch my drift? I suspect you need more practice working your macro-rat-brain-array.

Yes it does. They're converging all levels of government, universities, and national laboratories (and corporations) in order to converge the N-B-I sciences into one, to converge that into cognition, or shall I say to converge us into all of that...
 