What else would we expect John Connor to say?
I'm disappointed it took until the 8th post...
As a computational neural engineer, I don't see why machines would be given the desire to destroy or dominate. The first truly intelligent AI would likely be based on the algorithms of the neocortex. The neocortex is how we detect patterns in the massive stream of sensory data fed into us and then make predictions from those patterns. Here's a good article on the direction of AI: http://money.cnn.com/magazines/busin...02/01/8398989/

The Human Brain Project is also trying to recreate the neocortex to serve as a Bayesian inference machine. The neocortex is fairly new on the evolutionary scale, hence the term "neo." The animalistic urges for survival and competition over limited resources stem from much older parts of the brain. One can simply choose not to model those urges into an AI if they don't want it to have that kind of behavior.
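For what it's worth, the "Bayesian inference machine" idea is easy to sketch in code. Here's a toy Python example (the two candidate "patterns" and their probabilities are made up purely for illustration, not taken from the Human Brain Project): it updates beliefs from a stream of observations and then predicts the next one, which is roughly the detect-patterns-then-predict loop described above.

```python
# Toy Bayesian inference: maintain beliefs over candidate "patterns"
# and predict the next observation. A loose illustration of the
# "Bayesian inference machine" idea, not how the neocortex (or the
# Human Brain Project) actually implements it.

# Hypothetical generative models: each maps an observation to the
# probability of seeing it (values invented for this sketch).
models = {
    "mostly_A": {"A": 0.8, "B": 0.2},
    "mostly_B": {"A": 0.2, "B": 0.8},
}

# Uniform prior: no initial preference between the patterns.
beliefs = {name: 1.0 / len(models) for name in models}

def update(observation):
    """Bayes' rule: posterior is proportional to likelihood x prior."""
    global beliefs
    posterior = {name: models[name][observation] * p
                 for name, p in beliefs.items()}
    total = sum(posterior.values())
    beliefs = {name: p / total for name, p in posterior.items()}

def predict(symbol):
    """Marginalize over the models to predict the next observation."""
    return sum(beliefs[name] * models[name][symbol] for name in models)

for obs in "AABA":          # a short stream of "sensory data"
    update(obs)

print(beliefs)              # belief has shifted toward "mostly_A"
print(predict("A"))         # predicted probability the next symbol is "A"
```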
All mammals have a neocortex.
Birds also have a nidopallium, which works in more or less the same way.
I read this, and my only thought was to wonder why Hawking felt the need to publish this editorial.
I mean... it's nothing new. It seems like every sci-fi story ever written has already touched on this.
The problem most don't see here (yet the world's think tanks do) is that there is a HUGE push to militarize AI and, like the drone, simply replace the soldier on the ground.
The slippery slope is automation. There are significant debates about allowing drones to make deadly decisions without human interaction. A lot of people believe our military drone fleet is akin to remote-controlled toys with humans at the controls. The cutting-edge reality is armed instruments of destruction that are able to recon, loiter, ID targets, process the sitrep internally, and execute based upon their programming. Today. Right now. Battle tested. Coming to your local police force within a decade at most.
I remember watching that movie, then going to Google afterwards for a bit of a "wtf did I just watch" kind of search. That whole backstory seemed more interesting than what actually happened in the movie.
That's the biggest risk.
The scary part is that this is one of the key things that could actually turn a military or police force against its own people. Machines don't have emotions.
You give them an edict and they follow it to the letter; give them AI and they can expand on that a bit.
The trend is that within 10-20 years we could replace surgeons, lawyers, and engineers with AI-driven robots. There are already devices that can process all of mankind's written knowledge and make decisions based on it in real time.
AI could take over as the God humanity has always pretended existed.
Exactly. Right now, our killing drones are human-controlled.
There have been 10,000 books written on the subject of AI becoming conscious, self-aware, and destroying humanity.
I don't know if we will ever truly write an AI that advanced.
If we do manage to do it, it's going to take an amazing computer to handle that AI.
Think of ourselves as people: our entire bodies and organs essentially exist to keep the brain alive.
While the capacity to create a superbeing with super AI may someday be in our grasp, it's still a faraway dream for computer scientists.
Maybe...
Military aviation is usually at least a decade ahead of what the public is aware of.
That's the biggest risk.
If we design machines specifically to kill people, no one should be surprised if that's exactly what they do.
Raise a kid to be a serial killer, but only of the people you specifically want him to kill.
Then he starts killing other people for what he believes to be good reasons.
How many people are going to be surprised when that happens?
If the AI is designed and raised to be benevolent and kind, and is not instilled with the inherently aggressive behaviors harbored by many lifeforms on Earth, this might not be an issue at all. It may question our own occasional bouts of insanity. ("These groups of leaders disagree on something, so they make the citizenry go kill each other? This is a systemic mental disorder, right?")
Hey bro, computers don't need organs.
They don't need organic organs, true. But they need a power source, they need the ability to regulate temperature (cooling), and they need interfaces (some sensor or method of getting input and output)... Everything needs to talk to the CPU, or the brain, through some sort of bus, sort of like the nerve network in a body.
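To make the analogy concrete, here's a toy Python sketch (all component names invented for illustration, not any real hardware API): peripherals put messages on a shared bus and the CPU reacts to them, the way organs signal the brain over nerves.

```python
# A toy illustration of the bus analogy above: peripheral components
# publish messages on a shared bus, and the "brain" consumes them,
# loosely like organs reporting to the brain over a nerve network.
from collections import deque

class Bus:
    """Shared message queue standing in for the nerve network."""
    def __init__(self):
        self.messages = deque()

    def send(self, source, payload):
        self.messages.append((source, payload))

    def receive(self):
        return self.messages.popleft() if self.messages else None

bus = Bus()
bus.send("temp_sensor", {"celsius": 71})     # sensory input
bus.send("power_monitor", {"watts": 450})

# The CPU (the "brain") polls the bus and reacts to what the body reports.
while (msg := bus.receive()) is not None:
    source, payload = msg
    if source == "temp_sensor" and payload["celsius"] > 70:
        print("brain: spin up the fans (regulate temperature)")
    elif source == "power_monitor":
        print(f"brain: drawing {payload['watts']} W")
```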
No, they don't need organs, but there is a lot of similarity between a human body's relationship to its brain and the relationship an AI would have to whatever hardware it's trapped inside of.
Quantum computing is just around the corner.
When AI gets out of line, there are two solutions to go for: a logic bomb, or my favorite, seduction.
