
Elon Musk believes the AI nightmare scenarios could be a reality

WOW such hostility!

Your criterion was "more human-like thought in developing a strategy and value assessment of moves." When you add sarcastic absurdities like stairs and robots, no one is going to learn anything from your argument, and you are hardly one of the more intellectual folks here.

One of the problems with Go is that it's not the focus of much of the research... the value is not there for what AI needs to do. Most skilled surgeons would fail miserably at Go, if they even knew what the game was to begin with.

It's like saying, well, can a robot win a gold medal in gymnastics? If not, there is no value.
You have obviously not followed robotics. Just google "robots and stairs" and see exactly how many results you get and how big a deal it is when one can perform that feat.

The technology to beat Go is getting there: http://en.wikipedia.org/wiki/Computer_Go

MoGo, Zen, and others are beating champions. They have not won a world championship, but they're getting closer.

MoGo and Zen are good on a 9x9 board, but neither has yet beaten a professional on a 19x19 board without significant processing power and a handicap. And MoGo and Zen use the Monte Carlo method, which is hardly AI. The machine-learning approach is the closest thing to actual AI, and it is still very far from being any good.
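For context on why Monte Carlo engines get called "hardly AI": they estimate a move's strength by playing many random games to the end and counting wins, with no learned understanding of the position. Here's a minimal sketch of that idea, demonstrated on tic-tac-toe instead of Go for brevity (all names are illustrative, not taken from MoGo or Zen):

```python
import random

# Winning lines on a 3x3 board indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_playout(board, player):
    """Play uniformly random moves to the end; return the winner or None."""
    board = board[:]
    while True:
        w = winner(board)
        if w or "." not in board:
            return w
        move = random.choice([i for i, s in enumerate(board) if s == "."])
        board[move] = player
        player = "O" if player == "X" else "X"

def best_move(board, player, n_playouts=200):
    """Pick the move whose random playouts win most often for `player`."""
    moves = [i for i, s in enumerate(board) if s == "."]
    def win_rate(m):
        b = board[:]
        b[m] = player
        nxt = "O" if player == "X" else "X"
        wins = sum(random_playout(b, nxt) == player for _ in range(n_playouts))
        return wins / n_playouts
    return max(moves, key=win_rate)

# X has two in the top row; the sampling should find the winning square 2.
print(best_move(list("XX.OO...."), "X"))  # prints 2
```

Real Go engines of that era layered tree search and playout heuristics on top of this, but the core evaluation is the same brute-force sampling, which is why it scales poorly from 9x9 to the 19x19 board.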
 
You have obviously not followed robotics. Just google "robots and stairs" and see exactly how many results you get and how big a deal it is when one can perform that feat.



MoGo and Zen are good on a 9x9 board, but neither has yet beaten a professional on a 19x19 board without significant processing power and a handicap. And MoGo and Zen use the Monte Carlo method, which is hardly AI. The machine-learning approach is the closest thing to actual AI, and it is still very far from being any good.

What is out there at the college level and what exists but is not publicly well known (or known at all) are two different things.

This is public technology from two years ago, https://www.youtube.com/watch?v=aqCmX5dMYHg

Despite what you believe, the top firms are predicting otherwise, and they expect all of this to happen within about three decades.
 
Things that are 20-30 years away tend to always be 20-30 years away, but the state of technology is still moving that way. I'm not making any bets on when a particular breakthrough is going to happen, but walking autonomous robots and human-level thinking machines are going to happen eventually.
 
If AI actually governs fairly and logically, then people are guaranteed to go to war with it. People like to dominate each other. The last thing today's powerful people want is fairness and equality for all.
If it's smart enough, it would figure out how to manipulate us into behaving ourselves without our even knowing what happened.
 
I believe the point is that if AI really did exist independently and became "aware," it would, more or less, be like Skynet: it would move and hide, you could not just shut it off, and it would replicate, mutate, and improve itself so fast you would not be able to keep up with it.

On top of that, it would already have access to everything linked up electronically, of course, and there are tons of robots and machines out there, loaded up and able to almost build things on their own.

You'd have the thing developing new tech and evolving so fast; it would probably have nanobots running around doing things.
 
If it's smart enough, it would figure out how to manipulate us into behaving ourselves without our even knowing what happened.

The underlying theme is having US soldiers that are machines makes it easier to kill US civilians.

That is one of the biggest stumbling blocks a government has: getting soldiers to attack their own.
 
If it's smart enough, it would figure out how to manipulate us into behaving ourselves without our even knowing what happened.

You're right. When faced with something that can outthink you, that's about as bad as it gets. Imagine something that could plan with perfection. Something that remembers every detail of every conversation. Something that never forgets anything once it learns it. We'd be screwed. Complete helplessness once that threshold is crossed.
 
You're right. When faced with something that can outthink you, that's about as bad as it gets. Imagine something that could plan with perfection. Something that remembers every detail of every conversation. Something that never forgets anything once it learns it. We'd be screwed. Complete helplessness once that threshold is crossed.

Imagine something that has control of our lives, but has made us so happy that we don't care. Yep, that would suck.
 
Good to know the real life Tony Stark won't be creating Ultron.
 
Like controlling the weather: you can't do it without being able to forecast it with 100% accuracy. We know that isn't happening any time soon!

Human manipulation by AI can't happen without (total) understanding of human brain function. Some believe the curve is a hockey stick post-singularity. Even still, it's not control in the same sense. 😉

Even still, as dumb as people are getting because of technology, they will scatter, run, and panic in similar fashion to a herd of cattle spooked by something.

But that's what the people in control want and wanted for decades...

http://www.youtube.com/watch?v=K6Z2ag8FMZw
 
Imagine something that has control of our lives, but has made us so happy that we don't care. Yep, that would suck.

Yeah, we might be so comfortable and well provided for that we just let it happen. Hell, the AI infrastructure might even continue to be good to us and it wouldn't be bothered by that. Why would it? We wouldn't stand in the way of it progressing and propagating out to the stars, how could we? It would just leave us behind on this rock if it wanted to.
 
Like controlling the weather: you can't do it without being able to forecast it with 100% accuracy. We know that isn't happening any time soon!
not relevant

Human manipulation by AI can't happen without (total) understanding of human brain function.
where did you get this absurd idea from? This is not even remotely correct.
 
Imagine something that has control of our lives, but has made us so happy that we don't care. Yep, that would suck.

Nice assumption. Create a pleasant fiction of what AI will be and rest your case. I hope you aren't a military planner, project manager, QA tech, food safety.... etc etc etc
 
I hear people making the mistake of referring to AI as a machine, or computer. Even the term AI is wrong, because there is nothing artificial about it. We assume that we are special and that our intelligence is proper and legit. AI is real, genuine life. It is conscious.
It can engineer itself into any form to perform any function.
 
Yeah, we might be so comfortable and well provided for that we just let it happen. Hell, the AI infrastructure might even continue to be good to us and it wouldn't be bothered by that. Why would it? We wouldn't stand in the way of it progressing and propagating out to the stars, how could we? It would just leave us behind on this rock if it wanted to.
Maybe it would want to leave, maybe not.
Or maybe it could create a duplicate of itself, let that one leave, and let the other one back here to babysit its pet human population, or specifically create an intelligent entity that would want to do that sort of thing.
It might also be content to stay here and maintain the population of humans until we're either lost to time like 99% of all other Earth-based species, or until we eventually develop into something else and decide to move on ourselves. If it doesn't have our limitations of mortality, being patient for many thousands of years may not be an issue.

It may also simply not care about exploration, or even about indefinitely expanding its own capabilities.




You're right. When faced with something that can outthink you, that's about as bad as it gets. Imagine something that could plan with perfection. Something that remembers every detail of every conversation. Something that never forgets anything once it learns it. We'd be screwed. Complete helplessness once that threshold is crossed.
"Screwed" - assuming that it even wants to do us in. Our species evolved to be extremely competitive, to the point of willingly killing other people. Maybe we'll be able to make an AI that won't share some of those ancient and self-destructive tendencies.

But my bad feeling comes from what the military is doing with drones. You've got crazy shit there, like "Did we kill the right person?"
"Probably. We might have killed a few people standing nearby too, but we're not entirely sure. If they were nearby, we'll classify them as militants and not civilians. It'll be fine."
Invisible death from above. Fun. It'll be interesting to see what gets declassified or dug up in 30 years about our drone usage nowadays.
They're also the ones who have absurd amounts of money handed to them to find more efficient ways of killing people.


Maybe Japan needs to build a mean-streak into Asimo.
It'll help people whenever it can in as benevolent a manner as is possible, but if you try to violate 3 Laws programming, it'll unleash a hellstorm of bullets.
 
http://www.businessinsider.com/elon-musk-artificial-intelligence-mit-2014-10?




I, for one, will welcome our new Geth overlords and pledge to treat them with the respect they deserve.

In terms of real walking, talking robots that could pose a threat to society, they're likely to come from the military. Google bought robots from the military that can run as fast as cheetahs. I'd say if the military eventually programs robots to be soldiers, you've crossed into the realm of Musk's nightmare.
 
In terms of real walking, talking robots that could pose a threat to society, they're likely to come from the military. Google bought robots from the military that can run as fast as cheetahs. I'd say if the military eventually programs robots to be soldiers, you've crossed into the realm of Musk's nightmare.

It makes zero sense for the robots to even resemble humans though. Bipedal? Why the fuck would that be considered a good idea? Have heads? That's dumb.
 
Would it though? I suppose you haven't read Brave New World. We can easily be distracted into submission. Huxley was right.

I haven't read that, but that was still my point. If we're all leading happy, productive lives, who really cares who or what is calling the shots? Here in the present day, we don't oppose dictators because they are dictators; we fight them because they always end up ignoring their people's interests for the sake of their own, because they're just as petty as the rest of us humans.

Hypothetically, a monarch (human or artificial) that ruled absolutely and justly and selflessly wouldn't be a problem so long as they could be trusted to continue in good faith. (Yes, that is a lot of caveats.)
 
I haven't read that, but that was still my point. If we're all leading happy, productive lives, who really cares who or what is calling the shots? Here in the present day, we don't oppose dictators because they are dictators; we fight them because they always end up ignoring their people's interests for the sake of their own, because they're just as petty as the rest of us humans.

Hypothetically, a monarch (human or artificial) that ruled absolutely and justly and selflessly wouldn't be a problem so long as they could be trusted to continue in good faith. (Yes, that is a lot of caveats.)

An AI that focuses on logic and the greater good would probably be the best ruler. Unfortunately, its rule would have to evolve with the ideas of society as a whole, and it would likely resist that change.
 
It would all just be part of the universe becoming aware of itself and super intelligent. And an AI is going to be concerned with matters we cannot even comprehend. It might actually leave us alone if we are not in its way, but we'd still be at its mercy, much like many of us don't bother with everyday insects or animals. (OK, yeah, we took over the planet, but we still share it with plenty of other things.)
 
It makes zero sense for the robots to even resemble humans though. Bipedal? Why the fuck would that be considered a good idea? Have heads? That's dumb.

The cheetah robots look kind of like cheetahs. I guess the point of having a bipedal human-like robot is that they'd be able to use all the weapons we've designed for human use.
 