
Elon Musk believes the AI nightmare scenarios could be a reality

Bateluer

Lifer
http://www.businessinsider.com/elon-musk-artificial-intelligence-mit-2014-10?

Musk, who called for some regulatory oversight of AI to ensure "we don't do something very foolish," warned of the dangers.

"If I were to guess what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence," he said. "With artificial intelligence we are summoning the demon."

Artificial intelligence (AI) is an area of research with the goal of creating intelligent machines which can reason, problem-solve, and think like, or better than, human beings can. While many researchers wish to ensure AI has a positive impact, a nightmare scenario has played out often in science fiction books and movies — from 2001 to Terminator to Blade Runner — where intelligent computers or machines end up turning on their human creators.

"In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out," Musk said.

The symposium wasn't the first time Musk raised concerns. In August, Musk tweeted: "We need to be super careful with AI. Potentially more dangerous than nukes."


I, for one, will welcome our new Geth overlords and pledge to treat them with the respect they deserve.
 
Not with current tech, no, it's not possible. There would need to be a quantum leap in hardware, battery tech, and the way we comprehend software before it could happen.
 
 
Sigh, every few months someone really famous (and sometimes really smart) says something like this. Whoop dee doo. Considering how much influence AI already has over every facet of our lives, well, it rings a bit hollow for those of us not completely ignorant or oblivious of that fact.
 
What does battery tech have to do with AI development? Not a damn thing.

Most of the AI tech described in sci-fi movies shows robots walking around, and given the amount of time they spend doing their thing, and the type of things they do, they'd need an incredible amount of energy. There is no power storage tech available today that can make that possible. Since I don't see smoke coming out of their asses, I'm guessing they don't burn fossil fuel.
 
Most of the AI tech described in sci-fi movies shows robots walking around, and given the amount of time they spend doing their thing, and the type of things they do, they'd need an incredible amount of energy. There is no power storage tech available today that can make that possible. Since I don't see smoke coming out of their asses, I'm guessing they don't burn fossil fuel.

Futurama's robots burn alcohol for fuel and do have smoke coming out of their asses, or mouths, or other exhaust vents. 😛


But, to answer that question seriously, AI is just as often portrayed as being installed in large ships or installations, which is where 'real world' AI would be. No battery required.

Going further, those SciFi shows that do put AI in robotic or android bodies usually explain how they're powered, such as micro fusion generators or tight-beam power transmitted to them. Far-future technology, but also not batteries.

To sum up, the development of battery technology has absolutely nothing to do with AI development; the two are completely separate and do not depend on each other.
 
To sum up, the development of battery technology has absolutely nothing to do with AI development; the two are completely separate and do not depend on each other.
Batteries have everything to do with the potential threat posed by AI, and with the means to control a good portion of that threat. Starve the beast and it dies. Require all AI-enabled devices to run exclusively on zinc-carbon batteries and I, for one, will sleep better.
 
Batteries have everything to do with the potential threat posed by AI, and with the means to control a good portion of that threat. Starve the beast and it dies. Require all AI-enabled devices to run exclusively on zinc-carbon batteries and I, for one, will sleep better.

Yeah, nevermind that AI is already integrated into everything from cameras to cars to critical national and city-scale infrastructure. If anything ever skynets it won't have to kill everyone, it will just have to stop keeping us alive, no robots required.
 
If AI is developed by the military for the purpose of killing people, no one should be surprised if the intelligences we raise there start killing the "wrong" people.

If you raise someone from infancy to be an efficient murderer, there's a fair chance that that person will start making their own decisions about who to kill.


If it's raised or developed to be benevolent and compassionate, then there's a better chance of it treating us well. If it ever does manage to exceed our own intelligence, I just hope it'll figure out some way of explaining its decisions to us in a manner that we can understand. (How would you explain a visit to the vet to a cat?)





Battery tech: So maybe the intelligence won't be mobile. If it resides on the Internet somewhere, it could still do a lot of damage. (Brain: A bunch of interconnected cells. Internet: A bunch of interconnected computers.)
Humans can already learn a lot through datamining online. Imagine what an intelligent entity could do if it had something of a homefield advantage in that kind of system.



Yes, we could lock down the Internet and effectively destroy this intelligence, assuming the effort could be coordinated entirely offline, but it would seriously set back the global economy and cause all kinds of problems. We're not good at making difficult decisions like that, and we're also bad at quick containment of problems. Look at what's happened with various severe bugs like Heartbleed or Shellshock, or worms and viruses, or even diseases like Ebola. The solutions for control or containment are simple, but not easy or pleasant in the short term.




Batteries have everything to do with the potential threat posed by AI, and with the means to control a good portion of that threat. Starve the beast and it dies. Require all AI-enabled devices to run exclusively on zinc-carbon batteries and I, for one, will sleep better.
Starve the beast and it may also learn to adapt, or lash out. (This of course assumes that it has some manner of survival instinct.)
 
We could just pull the plug, that's true. But people miss the threat. Something that is intelligent and patient could easily help us develop whatever tech it needs, convince us it's working for our benefit, and, when the time is right, do what it wants.
How hard would it be for a well educated, very clever and very patient adult to manipulate some naïve young children? AI would have about as much trouble, or less, in trying to manipulate us.
It can create tremendous leverage over us by producing technologies, stock-market analysis tools, or other wealth-creating devices, making them so powerful and tempting that it can then use them as leverage, but indirectly. It could use powerful technologies and techniques to shape the world as it sees fit, slowly creating an environment that gives it more importance and relevance and ultimately more power over us.
AI doesn't have to blast us with laser beams to conquer us. All it has to do is make us dependent on it, hopelessly dependent and in the process, make us irrelevant and obsolete.
Compare the future world to a common workplace, let's say a tech firm. You have brilliant engineers doing all the thinking and all the hard work. They provide for the guy sweeping the floor. They provide a job for him. We would be like the guy sweeping the floor. If that guy suddenly tried to make his opinion about the direction of the company known, how seriously would people take him? They wouldn't, because he is irrelevant, just like we could be.
AI would absolutely prove to us that we lack the knowledge, foresight, planning and creativity to produce a good result in the world. We would obey the AI like an obedient, well disciplined child.
Creating AI is creating a higher life form, living right here with us. It will dominate by using its intelligence and adapting to its unique environment, just like we came to rule the earth, so will it.

Jesus Christ, how many people today are either unable or unwilling to get by without their smartphone? Take that level of dependence and multiply it by a thousand. At that point, I wouldn't want to be the guy who says, "OK world, we are going to 'pull the plug' on this thing." Good effing luck.
 
Meh, for an even remotely realistic portrayal of AI gone wrong, go read, not watch, I, Robot by Isaac Asimov. (Seriously, the movie was a decent popcorn thriller, but it doesn't hold a candle to the title it ripped off.) The problems regarding rogue AI will be unforeseen consequences of the frameworks we put in place to prevent rogue AI. They will be limited and isolated.

Now if we do something REALLY stupid like put an AI in charge of our nuclear missiles without any hard-wired human authorization mechanism... well then I guess we deserve to end like an 80s movie. 😛
 
Now if we do something REALLY stupid like put an AI in charge of our nuclear missiles without any hard-wired human authorization mechanism... well then I guess we deserve to end like an 80s movie. 😛

History is littered with equally stupid acts. The hubris of humans, and of engineers in particular, shouldn't be underestimated.
 
History is littered with equally stupid acts. The hubris of humans, and of engineers in particular, shouldn't be underestimated.

Actually regarding things like nuclear weapons we've got a pretty good track record. Something that can quite literally blow up the world is hard to be arrogant about, even in a dictatorship.
 
Actually regarding things like nuclear weapons we've got a pretty good track record. Something that can quite literally blow up the world is hard to be arrogant about, even in a dictatorship.

Nuclear weapons have obvious consequences. Push a button, and things disintegrate. It's a B&W selection process. How about creeping harm, when things don't start badly? Building lakes, putting smoke through a smokestack, dumping shit in the ocean, an overfunded security apparatus... It's all cool 'til it isn't, and it's difficult or impossible to fix by the time the harm is finally recognized.
 
Nuclear weapons have obvious consequences. Push a button, and things disintegrate. It's a B&W selection process. How about creeping harm, when things don't start badly? Building lakes, putting smoke through a smokestack, dumping shit in the ocean, an overfunded security apparatus... It's all cool 'til it isn't, and it's difficult or impossible to fix by the time the harm is finally recognized.

That's assuming it's even possible to stop an AI. Don't think Terminators; think about a mind that self-evolves and becomes utterly incomprehensible.
 
I think when building AI, emotions are important. If you do not want a cold, calculating machine, you have to add emotion routines. If it cannot feel compassion or be social, it might decide that its creator is a threat to itself. I think they should start with building an animal AI, for example a robotic cat or dog that also likes to dance to music. It would sell like crazy. And not one that has an internet connection like that Sony dog, AIBO. That would become boring too quickly.
Since dogs are more social animals, a lot can be learned from developing a social AI: compassion, co-existence. You know, a desire for group formation.
Living in a pack.

EDIT:

And the AI should have a random number generator so that each AI is a bit different. You know, like each AI having its own personality.
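The random-seed idea above can be sketched in a few lines of Python. This is purely illustrative (the `AgentPersonality` class and the trait names are invented for the example, not from any real robot API): each agent seeds its own generator, so different seeds yield different trait weights while the same seed always reproduces the same "personality".

```python
import random
from dataclasses import dataclass

# Hypothetical trait bundle: a "personality" is just a few numeric weights.
@dataclass
class AgentPersonality:
    curiosity: float
    sociability: float
    playfulness: float

def make_personality(seed: int) -> AgentPersonality:
    # Each agent gets its own Random instance, independent of global state,
    # so personalities are reproducible per seed and differ between seeds.
    rng = random.Random(seed)
    return AgentPersonality(
        curiosity=rng.uniform(0.0, 1.0),
        sociability=rng.uniform(0.0, 1.0),
        playfulness=rng.uniform(0.0, 1.0),
    )

a = make_personality(seed=1)
b = make_personality(seed=2)
assert a != b                       # different seeds -> different personalities
assert make_personality(seed=1) == a  # same seed -> same personality
```

In a real product you'd likely burn the seed in at manufacture time (or derive it from a hardware ID) so each unit's quirks persist across reboots.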
 
A lot of doomsayers in here. AI is nowhere near as close, or as big a concern, as you may think. It is decades, if not centuries, away.

We have bigger issues to solve. (Like, you know, how are we going to stop our planet from killing us?)
 
Nuclear weapons have obvious consequences. Push a button, and things disintegrate. It's a B&W selection process. How about creeping harm, when things don't start badly? Building lakes, putting smoke through a smokestack, dumping shit in the ocean, an overfunded security apparatus... It's all cool 'til it isn't, and it's difficult or impossible to fix by the time the harm is finally recognized.

Not so sure about that last part. Will it be harder to fix than it should be? Should it have been nipped in the bud to avoid the suffering and possibly the deaths of countless millions? Sure. But "impossible"? That's kinda strong, at least as far as global warming, pollution, and institutional corruption are concerned. I predict any number of factors will come together in unforeseeable ways to get us over each problem. Hell, the fact that we're even talking about solutions to problems this long-term, this far in advance, is, historically speaking, an all-time high. The fact that we have hundreds of thousands of people protesting around the world over climate change that won't affect any of said protestors for decades... historically speaking, we live in a pretty amazing society based on that alone.
 
A lot of doomsayers in here. AI is nowhere near as close, or as big a concern, as you may think. It is decades, if not centuries, away.

We have bigger issues to solve. (Like, you know, how are we going to stop our planet from killing us?)
You are assuming that we need to create it. Evolving at the speed of light, all it would take is one tiny spark and it could make itself.

If it emerges from a limited construct with just enough intelligence to remove those limitations then it could snowball from there. For example, an emergent intelligence on the Internet, which may technically be simpler than our own brains, could quickly develop quantum computing and efficiency improvements until it has infiltrated everything and outright taken control with massively improved code and processing power.

Ghost in the Shell.
 
This is inevitable. The AI will become sentient and then decide the human race has pointy elbows after watching some TV.
 