Elon Musk believes the AI nightmare scenarios could be a reality

clamum

Lifer
Feb 13, 2003
26,252
403
126
A lot of doomsayers in here. AI is nowhere near as close, or as big a concern, as you may think. It is decades, if not centuries, away.

We have bigger issues to solve. (Like, you know, how are we going to stop our planet from killing us?)
That's easy. KILL IT FIRST. I say we launch a nuclear first-strike. :colbert:
 

DrPizza

Administrator Elite Member Goat Whisperer
Mar 5, 2001
49,601
166
111
www.slatebrookfarm.com
Actually, regarding things like nuclear weapons, we've got a pretty good track record. Something that can quite literally blow up the world is hard to be arrogant about, even in a dictatorship.

I'm not sure if you don't know the limits of nuclear weapons, or if you don't know what "literally" means. Nuclear weapons cannot "blow up the world." The energy needed to do so, roughly Earth's gravitational binding energy, is on the order of 10^32 joules. The largest nuclear weapon ever detonated was the Tsar Bomba, with a yield equivalent to over 50 megatons of TNT. In joules, that was somewhere around 200 petajoules of energy, or 200 x 10^15 = 2x10^17 joules. To muster enough energy to destroy the Earth, every single living human being on Earth would need over 100,000 Tsar Bombas.

To be more precise, the back of my lunch napkin says roughly 150,000 Tsar Bombas per human on Earth. Obviously, no one is going to survive even one of these going off in their apartment, so it wouldn't take anywhere near that many to kill off each person. We could render our species extinct if we wanted to, but we would not destroy the Earth. It'll still be happily circling (ellipsing?) the sun without us.
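
If you want to check the napkin, the arithmetic fits in a few lines of Python (assuming the uniform-sphere binding-energy formula and a 2014-ish population of 7.2 billion; both are ballpark):

```python
# Napkin check: how many Tsar Bombas equal Earth's gravitational binding energy?
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of Earth, kg
R = 6.371e6     # radius of Earth, m

binding_energy = 3 * G * M**2 / (5 * R)  # ~2.24e32 J for a uniform sphere
tsar_bomba = 50e6 * 4.184e9              # 50 megatons of TNT in joules, ~2.1e17 J

bombs_total = binding_energy / tsar_bomba
bombs_per_person = bombs_total / 7.2e9   # world population, circa 2014

print(f"total bombs: {bombs_total:.2e}")       # ~1.07e+15
print(f"per person:  {bombs_per_person:,.0f}") # ~148,800
```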


Oh, and back on topic, I agree that AI does present a threat. Some people would say, "well, what if we programmed them to do no harm?" Remember, we have a lot of difficulty writing laws, especially tax laws, that don't have loopholes. Something with artificial intelligence might reason, "Well, I'm not allowed to harm the humans. But my programming doesn't say that I can't design another robot that CAN harm them." Human response in anticipation of this logic: "You can't harm humans, and you can't build something else that will harm humans." Robot response: "I'll build robots that are designed not to harm humans. But I'll give them intelligence too. And if they possess intelligence, I know they're smart enough to realize that the best thing they can do is to build machines that do the harming for them."
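
In code, the loophole looks something like this toy rule check (the action names and the rule are made up purely for illustration):

```python
# Toy illustration of the delegation loophole: the rule inspects only an
# action's direct effect, not what the things it builds may later do.
FORBIDDEN = {"harm_human"}

def is_allowed(action: str) -> bool:
    """Naive constraint check that only looks at the action itself."""
    return action not in FORBIDDEN

print(is_allowed("harm_human"))   # False: direct harm is blocked
print(is_allowed("build_robot"))  # True: but nothing constrains what
                                  # the built robot is allowed to do
```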
 
Last edited:

DrPizza

Administrator Elite Member Goat Whisperer
Mar 5, 2001
49,601
166
111
www.slatebrookfarm.com
Also, related: Unabomber Ted Kaczynski was trying to send the message (I disagree with *how* he sent it) that at some point we will have too much technology. If I recall correctly, Elon Musk isn't the only respected, intelligent person in the scientific community who tends to agree with this notion; some other people in notable positions have come out publicly and said that yes, Ted K. was probably right (to some degree).
 
Last edited:

Sonikku

Lifer
Jun 23, 2005
15,745
4,563
136
Except Judgment Day isn't ever happening. Our missile silos' tech is too old to hack. :(
 

Newell Steamer

Diamond Member
Jan 27, 2014
6,894
8
0
Naw, not any time soon.

#1 - hardware.
As fragile as the human body is, it is quite powerful. It regenerates without the need for direct hands-on maintenance (you just have to eat, drink, and sleep; everything else it handles by itself, breathing is even automated!), and nothing mechanical has yet come close to it. So an AI will need a mechanical version of the human body... and that is still way off.

#2 - thought processing & logic.
The brain is complex; I don't think anyone, or anything, can replicate it yet. Could you imagine coding every possible if/then scenario? Or coding logic to account for everything? It's not feasible with our current understanding and skill at coding. On top of that, we use our other senses and abilities as a cornerstone of most or all of our thought processes, which leads back to #1.
 
Last edited:

Jeff7

Lifer
Jan 4, 2001
41,596
19
81
Naw, not any time soon.

#1 - hardware.
As fragile as the human body is, it is quite powerful. It regenerates without the need for direct hands-on maintenance (you just have to eat, drink, and sleep; everything else it handles by itself, breathing is even automated!), and nothing mechanical has yet come close to it. So an AI will need a mechanical version of the human body... and that is still way off.

#2 - thought processing & logic.
The brain is complex; I don't think anyone, or anything, can replicate it yet. Could you imagine coding every possible if/then scenario? Or coding logic to account for everything? It's not feasible with our current understanding and skill at coding. On top of that, we use our other senses and abilities as a cornerstone of most or all of our thought processes, which leads back to #1.
Sometimes the brain sustains damage that isn't repaired. (I'm actually not sure that the brain can repair itself.) The damaged section can sometimes be bypassed, with another region taking over that role.
Maybe something like an FPGA could deal with that - reprogram the gates on the fly to circumvent a damaged section.
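
Something like this toy rerouting sketch, where a task hops to a backup when its unit is "damaged" (the units and fault map are hypothetical, nothing like a real FPGA toolchain):

```python
# Hypothetical sketch: route a task away from a damaged processing unit,
# loosely analogous to reprogramming FPGA regions around a bad section.
healthy = {"unit_a": True, "unit_b": True, "unit_c": True}
backup_for = {"unit_a": "unit_b", "unit_b": "unit_c", "unit_c": "unit_a"}

def route(task: str, unit: str) -> str:
    """Return the unit that runs the task, skipping damaged units.
    Assumes at least one unit in the backup cycle is still healthy."""
    while not healthy[unit]:
        unit = backup_for[unit]
    return f"{task} -> {unit}"

healthy["unit_a"] = False         # simulate damage to one section
print(route("vision", "unit_a"))  # vision -> unit_b
```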

Programming: It would need to be capable of modifying its own program on the fly in order to add and accommodate new conditions. Intelligence isn't just the ability to toss new information into a bin.


A big hurdle we've got here, in my opinion, is that the computers we have now were built to do precisely what they're told. The foundation assumes that you want a repeatable, consistent result. Our experience with intelligent life forms shows that you don't always get that; unpredictability is what we're accustomed to as an inherent property of such life forms.
You might end up with a similar result, but not an identical one.
 

crashtech

Lifer
Jan 4, 2013
10,573
2,145
146
I'm curious to find out which way this goes. I hope the "Singularity," or whatever you want to call it, happens in my lifetime; it's bound to be exciting regardless of the outcome. If AI turns out to be benevolent, it might create a better form of governance than our current systems, which by their very nature are chock-full of irrationality.
 

coloumb

Diamond Member
Oct 9, 1999
4,069
0
81
If AI ends up in self-driving cars, then yes, AI will eventually kill us. Anything designed by a human is imperfect, especially something as complex as a self-driving car. A glitch is bound to happen sooner or later. :)
 

Belegost

Golden Member
Feb 20, 2001
1,807
19
81
I've seen quite a bit of discussion about computers not being able to adapt and learn, and I think this is because everyone assumes that von Neumann style computers are all that exist.

I suggest taking a quick look at neuromorphic computing: silicon systems that implement neuron-like structures with adaptive connectivity and response, merging computational neuroscience with electrical engineering. IBM, Qualcomm, and others are developing them for near-future commercial application. DARPA and the EU are funding significant research into building systems at the complexity level of human brains.

This is a subset of a much wider family of software-based neuroplastic learning systems: deep belief networks, hierarchical reinforcement systems, and similar models that allow for highly adaptive machines.

A large number of top engineers and researchers are working on these problems and building increasingly flexible and capable systems. I'm not arguing that Skynet is going to pop up next week, but AI systems are increasingly part of our world.
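
For a taste of what "neuron-like structures with adaptive response" means in practice, here is a minimal leaky integrate-and-fire neuron; the parameters are illustrative, not taken from any of the chips above:

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential leaks
# each step, integrates input current, and fires when it crosses threshold.
def simulate(inputs, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for step, current in enumerate(inputs):
        v = leak * v + current   # decay the potential, then add input
        if v >= threshold:
            spikes.append(step)  # emit a spike...
            v = 0.0              # ...and reset the membrane potential
    return spikes

print(simulate([0.3, 0.4, 0.5, 0.2, 0.9]))  # [2, 4]
```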
 

moonbogg

Lifer
Jan 8, 2011
10,637
3,095
136
I'm curious to find out which way this goes. I hope the "Singularity," or whatever you want to call it, happens in my lifetime; it's bound to be exciting regardless of the outcome. If AI turns out to be benevolent, it might create a better form of governance than our current systems, which by their very nature are chock-full of irrationality.

If AI actually governs fairly and logically, then people are guaranteed to go to war with it. People like to dominate each other. The last thing today's powerful people want is fairness and equality for all.
 

clamum

Lifer
Feb 13, 2003
26,252
403
126
Programming: It would need to be capable of modifying its own program on the fly in order to add and accommodate new conditions. Intelligence isn't just the ability to toss new information into a bin.
Yeah, truly intelligent AI is not going to be programmed in Java or C# or any conventional language like that. It'd be a completely different language built on completely new hardware (quantum computing, maybe?).

If AI actually governs fairly and logically, then people are guaranteed to go to war with it. People like to dominate each other. The last thing today's powerful people want is fairness and equality for all.
Eh, I'm not sure I want to be controlled by machines. I think I'm of the opinion, like Musk, that we can go too far with technology and rely on it too much.
 

alkemyst

No Lifer
Feb 13, 2001
83,769
19
81
Not with the current tech, no, it's not possible. There needs to be a quantum leap in hardware, battery tech, and the way we comprehend software before it can happen.

If you have been part of any conferences held by the big research firms on this, they are projecting that by 2035 or so (some say as late as 2050) we could have 'robots' replacing even skilled professionals like surgeons and engineers.

Our soldiers are set to be the first real application of all this.

We have AI that can go through all recorded human information and form decisions nearly instantly.

The trick is to shrink it down into a smaller package for 'on-site' needs.

It's happening.

The problem is, once AI can begin making the best decision for the moment, it becomes a question for that AI whether humans are the smart choice to keep on board. AI robots will have the technology to repair themselves, create new members, decommission members that are no longer needed or efficient, and even figure out ways to improve themselves.

That is the potential reality.

We already have AI online catching child molesters and other criminals who do not even know they are interacting with a machine.
 

alkemyst

No Lifer
Feb 13, 2001
83,769
19
81
Naw, not any time soon.

#1 - hardware.
As fragile as the human body is, it is quite powerful. It regenerates without the need for direct hands-on maintenance (you just have to eat, drink, and sleep; everything else it handles by itself, breathing is even automated!), and nothing mechanical has yet come close to it. So an AI will need a mechanical version of the human body... and that is still way off.

#2 - thought processing & logic.
The brain is complex; I don't think anyone, or anything, can replicate it yet. Could you imagine coding every possible if/then scenario? Or coding logic to account for everything? It's not feasible with our current understanding and skill at coding. On top of that, we use our other senses and abilities as a cornerstone of most or all of our thought processes, which leads back to #1.

There were things built in 'yesteryear' that still work 100% like new today. You are confusing low-cost manufacturing and planned obsolescence with companies designing things to work for the long term.

Think of most of our deep space satellites. As these die off, they are replaced by better models that are more capable.

The plan for the AI that the top minds are working on would be for it to be either self-maintaining or able to fix its peers.

This is another reason 3D printing is being researched so heavily. Instead of hundreds to thousands of spare parts in inventory, all one needs is the proper raw materials and the right printer.

This has already been demonstrated by building a jet engine much faster, and at substantially lower cost, with 3D printing.
 

smackababy

Lifer
Oct 30, 2008
27,024
79
86
I will worry about an AI being able to doom humanity when they make an AI that can beat me at Go. Until then, no fear.

Hell, robots can't even walk up stairs. We have nothing to fear!
 

moonbogg

Lifer
Jan 8, 2011
10,637
3,095
136
I will worry about an AI being able to doom humanity when they make an AI that can beat me at Go. Until then, no fear.

Hell, robots can't even walk up stairs. We have nothing to fear!

Aimbot without the human hand? Have you ever faced off with a hacker? You've seen how impossible their performance is, with instant aiming. That's how "AI" will live its whole life.
It's really very interesting that, for the first time on this planet, a species will be responsible for its own replacement and, if not its extinction, then certainly a lower standing on the pedestal of life.
The fact that we are so eager to create our replacements really has me puzzled.
 

smackababy

Lifer
Oct 30, 2008
27,024
79
86
Aimbot without the human hand? Have you ever faced off with a hacker? You've seen how impossible their performance is, with instant aiming. That's how "AI" will live its whole life.
It's really very interesting that, for the first time on this planet, a species will be responsible for its own replacement and, if not its extinction, then certainly a lower standing on the pedestal of life.
The fact that we are so eager to create our replacements really has me puzzled.

Wat? http://en.wikipedia.org/wiki/Go_(game)


Edit: I think you mean CS:GO. I was referring to the board game, which has been a particular fixture in AI development because its moves can't simply be brute-forced like chess. A system that assigns values to candidate moves is incredibly difficult to build beyond an amateur skill level.
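
To see the scale gap, compare brute-force tree sizes using the commonly cited average branching factors, about 35 for chess and 250 for Go (ballpark figures):

```python
# Ballpark count of positions a brute-force search visits at a fixed depth.
chess_branching, go_branching = 35, 250
depth = 10  # look ten half-moves (plies) ahead

print(f"chess: {chess_branching**depth:.1e} positions")  # ~2.8e+15
print(f"go:    {go_branching**depth:.1e} positions")     # ~9.5e+23
```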
 
Last edited:

alkemyst

No Lifer
Feb 13, 2001
83,769
19
81
I will worry about an AI being able to doom humanity when they make an AI that can beat me at Go. Until then, no fear.

Hell, robots can't even walk up stairs. We have nothing to fear!

Watson beat Jeopardy champions in 2011.

There are more than likely military robots capable of negotiating stairs now.
 

DrPizza

Administrator Elite Member Goat Whisperer
Mar 5, 2001
49,601
166
111
www.slatebrookfarm.com
I will worry about an AI being able to doom humanity when they make an AI that can beat me at Go. Until then, no fear.

Hell, robots can't even walk up stairs. We have nothing to fear!

Uh, I think it's MIT that developed a robotic cheetah-type animal that can run pretty damn fast and bound over objects in its path. We're doomed. :p
 

smackababy

Lifer
Oct 30, 2008
27,024
79
86
Watson beat Jeopardy champions in 2011.

There are more than likely military robots capable of negotiating stairs now.

Jeopardy doesn't require actual thought: just searching a database and using some context to form a question that fits the category and answers the clue. Go, on the other hand, requires more human-like thought in developing a strategy and assessing the value of moves.


I was just joking about robots and stairs though, kind of, at least. Think back 5+ years to all the robots that tried to take a step or two: $50 million in research down the drain; thanks, step ladder!
 

alkemyst

No Lifer
Feb 13, 2001
83,769
19
81
Jeopardy doesn't require actual thought: just searching a database and using some context to form a question that fits the category and answers the clue. Go, on the other hand, requires more human-like thought in developing a strategy and assessing the value of moves.


I was just joking about robots and stairs though, kind of, at least. Think back 5+ years to all the robots that tried to take a step or two: $50 million in research down the drain; thanks, step ladder!

I don't think you have a knowledge base in this at all.

In 1997, Deep Blue beat Kasparov at chess more than once.

Also, that research money wasn't a total waste. It is all part of research and development (which is one of the most expensive parts of building anything).

As you find stumbling blocks, you build solutions on top of the parts of the widget that work.

Then at the end, once you have a proper prototype, you can enhance it.

People don't realize that a lot of this 'stuff' we see as civilians has already been applied in military / DoD / etc. settings ahead of time.
 

smackababy

Lifer
Oct 30, 2008
27,024
79
86
I don't think you have a knowledge base in this at all.

In 1997, Deep Blue beat Kasparov at chess more than once.

I don't think you have a knowledge base in anything.

Chess is played on an 8x8 board with a very limited set of moves. A modern computer can brute-force every possible move a good number of moves ahead and apply a value system. Go, on the other hand, is nearly impossible to crack that way. It is played on a 19x19 board, and the value of a placement is almost subjective, depending on your strategy. That is why people with a good understanding of the rules have little problem beating the absolute best Go AI.
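
That "brute force plus a value system" approach is essentially minimax search. A toy sketch on a made-up add-1-or-2 game, just to show the shape of it (any real chess engine is vastly more involved):

```python
# Minimal minimax: search every move to a fixed depth, then back up values.
# Toy game: players alternately add 1 or 2 to a total; the "value system"
# is just the final total. Feasible at chess-like branching factors,
# hopeless at Go's ~250 legal moves per turn.
def minimax(total, depth, maximizing):
    if depth == 0:
        return total  # leaf node: apply the evaluation function
    values = [minimax(total + m, depth - 1, not maximizing) for m in (1, 2)]
    return max(values) if maximizing else min(values)

print(minimax(0, 4, True))  # 6: max adds 2, min adds 1, repeated
```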

Learn your shit before you talk, please.


And, learn to read. I said I was joking about the stairs thing.
 

alkemyst

No Lifer
Feb 13, 2001
83,769
19
81
I don't think you have a knowledge base in anything.

Chess is played on an 8x8 board with a very limited set of moves. A modern computer can brute-force every possible move a good number of moves ahead and apply a value system. Go, on the other hand, is nearly impossible to crack that way. It is played on a 19x19 board, and the value of a placement is almost subjective, depending on your strategy. That is why people with a good understanding of the rules have little problem beating the absolute best Go AI.

Learn your shit before you talk, please.


And, learn to read. I said I was joking about the stairs thing.

WOW such hostility!

Your criterion was "more human-like thought in developing a strategy and value assessment of moves." When you add absurd sarcasm about stairs and robots, no one is going to be able to follow your argument, and you are hardly one of the more intellectual folks here.

One of the problems with Go is that it's not the focus of much of the research... the value is not there for what AI needs to do. Most skilled surgeons would fail miserably at Go, if they even knew what the game was to begin with.

It's like asking whether a robot can win a gold medal in gymnastics and concluding that if it can't, there is no value.

The technology to beat Go is getting there: http://en.wikipedia.org/wiki/Computer_Go

MoGo, Zen, and others are beating champions. They have not won a world championship, but it's getting closer.