Whoa, dude... it's like... Skynet!

bshole

Diamond Member
Mar 12, 2013
8,315
1,215
126
https://thenextweb.com/artificial-i...r-algorithm-may-emerge-sooner-than-you-think/

Pretty scary stuff, the concept that feedback loops will enable a "true" machine consciousness that could rule over other machine consciousnesses.

As a software engineer, I just don't see it. I did a few neural network and fuzzy logic algorithms when I was getting my master's degree. In the end it is all just logic states on silicon. All a machine ever does is execute the next line of machine code (with interrupts to prioritize which machine code to run). This hasn't changed in my lifetime. The software engineer tries to anticipate every input the machine will ever get and then programs the appropriate response to that input. At a fundamental level it is very simple and straightforward, and it does not indicate a pathway toward sentience.
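
To make that concrete, here is roughly what a trained feed-forward net does at run time. The weights below are made-up placeholders (in practice they would come from training), but the point stands: it is just fixed multiplies, adds, and thresholds mapping an input to an output, with the same input always producing the same output.

import numpy as np

def relu(x):
    # simple threshold nonlinearity
    return np.maximum(0.0, x)

# Made-up placeholder weights; real ones would come from training.
W1 = np.array([[0.5, -0.2],
               [0.1,  0.8]])
b1 = np.array([0.0, 0.1])
W2 = np.array([[ 1.0],
               [-1.5]])
b2 = np.array([0.2])

def forward(x):
    h = relu(x @ W1 + b1)   # layer 1: weighted sum, then threshold
    return h @ W2 + b2      # layer 2: weighted sum, the "response" to the input

print(forward(np.array([1.0, 2.0])))   # deterministic: same input, same output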

In my opinion, sentient artificial life can only arise when man understands the human brain much better. If it is understood at a level that can be modeled, I suspect the model could not reside on a silicon hardware platform (which has only two states) but rather on a different substrate that can have at least as many states as a neuron and whose states are affected by the states of nearby elements. It is frankly baffling how one could program such a beast; conceptually it seems impossible.
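
Just to make the concept concrete, here is the kind of thing I mean by "elements whose states are affected by nearby elements." It is only a toy, not a brain model, and every detail is made up: a small ring of elements, each holding one of several discrete states, where each update depends on the two nearest neighbours.

import numpy as np

rng = np.random.default_rng(0)
NUM_STATES = 8                                 # arbitrary stand-in for "many states per element"
cells = rng.integers(0, NUM_STATES, size=16)   # a small ring of elements

def step(cells):
    left = np.roll(cells, 1)
    right = np.roll(cells, -1)
    # Arbitrary coupling rule: each element moves toward the average of itself
    # and its neighbours, wrapped back into the allowed state range.
    return (cells + left + right) // 3 % NUM_STATES

for _ in range(5):
    cells = step(cells)
    print(cells)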
 

Elenax

Junior Member
Apr 20, 2018
5
0
1
Wow, perfect recipe for chaos... there is a TED talk by Zeynep Tufekci on big data and the future of AI that explains this whole mess better.
 

cytg111

Lifer
Mar 17, 2008
24,467
13,923
136
bshole said:
As a software engineer, I just don't see it. I did a few neural network and fuzzy logic algorithms when I was getting my master's degree. In the end it is all just logic states on silicon. All a machine ever does is execute the next line of machine code (with interrupts to prioritize which machine code to run). This hasn't changed in my lifetime. The software engineer tries to anticipate every input the machine will ever get and then programs the appropriate response to that input. At a fundamental level it is very simple and straightforward, and it does not indicate a pathway toward sentience.

In my opinion, sentient artificial life can only arise when man understands the human brain much better. If it is understood at a level that can be modeled, I suspect the model could not reside on a silicon hardware platform (which has only two states) but rather on a different substrate that can have at least as many states as a neuron and whose states are affected by the states of nearby elements. It is frankly baffling how one could program such a beast; conceptually it seems impossible.

I have dabbled in ANNs too, and to my mind it is all about the resolution. What you are describing is the scenario where a logic circuit will always produce 1+1 = 2. So can the neural net, except for the cases where 1+1 does not equal 2.
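
A quick, made-up illustration of what I mean: train a tiny model by gradient descent on noisy examples of addition and it ends up with weights near 1.0, but not exactly 1.0, so its "1 + 1" comes out close to 2 rather than exactly 2.

import numpy as np

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(200, 2))                 # pairs of numbers
y = X.sum(axis=1) + rng.normal(0, 0.1, size=200)      # noisy examples of their sum

w = rng.normal(size=2)                                # learned weights, random start
for _ in range(500):                                  # plain gradient descent
    pred = X @ w
    grad = X.T @ (pred - y) / len(X)
    w -= 0.01 * grad

print("learned weights:", w)                          # close to [1, 1], not exactly
print("1 + 1 =", np.array([1.0, 1.0]) @ w)            # close to, but not exactly, 2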
 

bshole

Diamond Member
Mar 12, 2013
8,315
1,215
126
cytg111 said:
I have dabbled in ANNs too, and to my mind it is all about the resolution. What you are describing is the scenario where a logic circuit will always produce 1+1 = 2. So can the neural net, except for the cases where 1+1 does not equal 2.

Here is a simple question: how is the number 3 represented in the human brain? In silicon it is easy: 14 transistors are off and the last two are on. I suspect that in the human brain the number 3 is represented by thousands of neurons, each in a particular state... think about writing software for THAT.
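
The silicon side really is that easy to show, and here is a deliberately crude contrast (purely illustrative, not a claim about how real neurons do it) in which the "3" only exists as a pattern spread across a thousand noisy units:

import numpy as np

n = 3
print(format(n, "016b"))                 # 0000000000000011: fourteen bits off, two on

# Purely illustrative "population code": each unit prefers some value, units
# near 3 fire hardest, and the 3 is only recoverable from the whole pattern.
rng = np.random.default_rng(3)
preferred = rng.uniform(0, 10, 1000)                   # each unit's preferred value
activity = np.exp(-(preferred - n) ** 2) + rng.normal(0, 0.05, 1000)

estimate = np.sum(activity * preferred) / np.sum(activity)
print(round(estimate, 2))                              # roughly 3, but no single unit "is" the 3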


"Maybe we need to find a way of representing and interconnecting facts, knowledge, and methods of reasoning — the whole structure of what intelligence looks like might be necessary to understand in order to achieve sentient reasoning in AI, but we don't even know what that looks like in the human brain.

"Whether or not we can achieve that without mimicking the human brain, I don't know. But there's something fundamental that we're not getting, in terms of how AI should be structured."

"Programs that think like humans are so far beyond where we are right now. I don't think the field even knows the right direction to in go to achieve that goal, let alone what the big roadblocks are.

"What's missing from a lot of these systems at the moment is a notion of a will — a desire to do something in the world. It's just doing what it's told, which is to map inputs to outputs.

"There's not a lot of room for creativity there. The kinds of problems we are asking these systems to do are not really on a path towards a sentient system."



http://www.businessinsider.com/bigg...t-how-to-build-thinking-conscious-machines-14
 

cytg111

Lifer
Mar 17, 2008
24,467
13,923
136
bshole said:
Here is a simple question: how is the number 3 represented in the human brain? In silicon it is easy: 14 transistors are off and the last two are on. I suspect that in the human brain the number 3 is represented by thousands of neurons, each in a particular state... think about writing software for THAT.

http://www.businessinsider.com/bigg...t-how-to-build-thinking-conscious-machines-14

I gather that you don't write software for that... your job is to provide the neural architecture and infrastructure; when that job is done, it's all about the training. I wonder if it has to have the human experience in order to reach that next level... sort of arrogant, isn't it? :)
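
Something like the split below is what I have in mind. (A rough sketch; PyTorch is used only because it is familiar, and the task and all the numbers are made up.) The human writes a few lines of architecture and a training loop; everything the model actually "knows", here a toy ability to add, comes out of the training data.

import torch
from torch import nn

# 1. The part a human actually writes: the architecture.
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

# 2. The part that is "all about the training": show examples, adjust weights.
X = torch.rand(256, 2)                      # pairs of numbers in [0, 1]
y = X.sum(dim=1, keepdim=True)              # toy task: learn to add them
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for _ in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(model(torch.tensor([[0.3, 0.4]])))    # roughly 0.7, learned rather than hand-coded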