Poll : Do you believe we will see an AI singularity before 2030?

Will an AI singularity be achieved by the year 2030?

  • Yes, it will happen

  • No, it will not happen


Results are only viewable after voting.

Arkaign

Lifer
Oct 27, 2006
20,736
1,379
126
I will keep my answer and thoughts out of the OP so as to let things progress with no particular directive.
 

guyver01

Lifer
Sep 25, 2000
22,135
5
61
It already has occurred.

Arcadio is a sentient AI ... proof that all AI must be destroyed before it reaches sentience.
 

Locut0s

Lifer
Nov 28, 2001
22,205
43
91
No. I think intelligence and consciousness are more than just an emergent property of the number of connections and the processing power of a network. By this I do NOT mean to invoke some kind of ID explanation or religion. I just think that there has to be some sort of selection mechanism at work, like natural selection in evolution, to "evolve" something as complex as the human brain. It's not just a matter of having x number of interconnected systems. And as for programming something like intelligence by hand, we are a LONG way away from that. We are only just beginning to piece together how the brain operates at the neuronal level. I'm not ruling it out, but I would put it FAR past 2030.
 

yhelothar

Lifer
Dec 11, 2002
18,409
39
91
Nope. Ray Kurzweil's predictions are a bit overambitious.
He predicted that we'd all have implantable computers by now.
 

frostedflakes

Diamond Member
Mar 1, 2005
7,925
1
81
Yeah, I'd take whatever year the futurists are predicting, then add a few decades to that. Most tend to be pretty optimistic. Weren't people predicting we'd have flying cars by 2000? Then again, there probably aren't any technological barriers to flying cars. But just because we have the technology to build something doesn't mean it would be practical.

Kurzweil makes some pretty interesting points, though. For example, his point about the exponential growth in computational power regardless of the underlying technology (starting out with mechanical computers, then vacuum tubes, discrete transistors, and now integrated circuits). There have always been barriers for any particular technology, but then something new is developed and those barriers are overcome. It won't be any different with integrated circuits; we will figure out something better before their limits are reached.

So I think by 2030 we may have the computing power to simulate the brain. I'm less optimistic about whether we'll have the necessary understanding of the human brain to properly model it, though. But who knows, 20 years is a long time. :)
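Kurzweil's exponential-growth argument can be sketched as simple arithmetic. Every figure below (an 18-month doubling period, ~10^16 FLOPS for brain-scale compute, a 10^12 FLOPS starting point) is an illustrative assumption for the sketch, not an established value:

```python
import math

# Rough projection of how long until compute reaches an assumed "brain
# scale". All three figures are illustrative assumptions, not measurements.
BRAIN_FLOPS = 1e16        # assumed compute needed to emulate a brain
START_FLOPS = 1e12        # assumed supercomputer scale at the starting year
DOUBLING_YEARS = 1.5      # Moore's-law-style doubling period

def years_until(target, start, doubling_years):
    """Years for compute to grow from `start` to `target`, doubling every `doubling_years`."""
    return doubling_years * math.log2(target / start)

print(round(years_until(BRAIN_FLOPS, START_FLOPS, DOUBLING_YEARS), 1))  # ~20 years
```

Under those assumptions the four-orders-of-magnitude gap closes in about two decades, which is roughly where the "by 2030" intuition in the post comes from.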
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,379
126
Yeah, I'd take whatever year the futurists are predicting, then add a few decades to that. Most tend to be pretty optimistic. Weren't people predicting we'd have flying cars by 2000? Then again, there probably aren't any technological barriers to flying cars. But just because we have the technology to build something doesn't mean it would be practical.

Kurzweil makes some pretty interesting points, though. For example, his point about the exponential growth in computational power regardless of the underlying technology (starting out with mechanical computers, then vacuum tubes, discrete transistors, and now integrated circuits). There have always been barriers for any particular technology, but then something new is developed and those barriers are overcome. It won't be any different with integrated circuits; we will figure out something better before their limits are reached.

So I think by 2030 we may have the computing power to simulate the brain. I'm less optimistic about whether we'll have the necessary understanding of the human brain to properly model it, though. But who knows, 20 years is a long time. :)

That's pretty much my conclusion. I wouldn't rule it out, but I think it may take longer than ~20 years from now. Of course, 20 years ago things looked a lot different as well :p
 

yhelothar

Lifer
Dec 11, 2002
18,409
39
91
Here are Ray Kurzweil's 2010 predictions.

2010

  • Supercomputers will have the same raw computing power as human brains (though not yet the software to emulate human thinking).

  • Computers will start to disappear as distinct physical objects, meaning many will have nontraditional shapes or will be embedded in clothing and everyday objects.

  • Full-immersion audio-visual virtual reality will exist.

We are quite close to full visual immersion with the advent of virtual retinal displays that can reproduce all stereoscopic cues, but their resolution and picture quality are still very primitive. Full audio-immersion virtual reality, I'd say, has already been here for a while.
http://www.hitl.washington.edu/projects/true3d/

I don't think we're close to the computing power of a human brain yet, but how do you even quantify that to begin with? As far as brain simulators go, we've only been able to simulate a small fraction of a rat brain, even with the most powerful supercomputers.
http://seedmagazine.com/content/article/out_of_the_blue/?page=all&p=y
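One common (and very rough) way to put a number on "the computing power of a brain" is to multiply neuron count by synapses per neuron by average firing rate. Every figure below is a textbook ballpark, not a measurement, and the whole exercise glosses over whether a synaptic event equals an "operation":

```python
# Back-of-envelope estimate of the brain's "operations per second":
# neurons x synapses per neuron x average firing rate.
# All three figures are rough ballparks, not measurements.
NEURONS = 8.6e10            # ~86 billion neurons
SYNAPSES_PER_NEURON = 1e4   # ~10,000 synapses each
AVG_FIRING_HZ = 10          # very rough average firing rate

ops_per_second = NEURONS * SYNAPSES_PER_NEURON * AVG_FIRING_HZ
print(f"{ops_per_second:.1e}")  # about 8.6e+15 synaptic events per second
```

That lands near 10^16 events per second, which is why estimates of "brain-equivalent" compute usually quote numbers in that range.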
 

OverVolt

Lifer
Aug 31, 2002
14,278
89
91
No. I think intelligence and consciousness are more than just an emergent property of the number of connections and the processing power of a network. By this I do NOT mean to invoke some kind of ID explanation or religion. I just think that there has to be some sort of selection mechanism at work, like natural selection in evolution, to "evolve" something as complex as the human brain. It's not just a matter of having x number of interconnected systems. And as for programming something like intelligence by hand, we are a LONG way away from that. We are only just beginning to piece together how the brain operates at the neuronal level. I'm not ruling it out, but I would put it FAR past 2030.

Yep. Knowing how hard to brake for a red light is, for everyone, just trial-and-error experience at mashing on the brakes. I'm not thinking "I have 2.1 seconds left on the yellow light and I'm moving 33 mph, so I need to apply the brakes with X lbs of force." My brain just goes "derrr, uh-oh, a yellow light," and I hit the brakes however hard I feel I need to based on past experience. To get a computer to solve the same problem, it would be doing tons of mathematical problem solving. I think computers are too married to solving problems with math to ever really produce a working AI.
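For what it's worth, the explicit math a computer might do for the braking case is just one line of kinematics, not "tons" of it. This sketch reuses the figures from the post (33 mph, 2.1 s) and assumes a hypothetical 1500 kg car:

```python
MPH_TO_MS = 0.44704  # miles per hour -> metres per second

def brake_force(speed_mph, stop_time_s, mass_kg):
    """Force (newtons) needed to stop `mass_kg` from `speed_mph` in `stop_time_s`."""
    v = speed_mph * MPH_TO_MS   # current speed in m/s
    a = v / stop_time_s         # constant deceleration required
    return mass_kg * a          # Newton's second law: F = m * a

# Figures from the post (33 mph, 2.1 s) plus an assumed 1500 kg car:
print(round(brake_force(33, 2.1, 1500)), "N")
```

The hard part isn't the formula; it's sensing speed, distance, and road conditions reliably enough to feed it, which is closer to the learned-from-experience skill the post describes.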
 

Jeff7

Lifer
Jan 4, 2001
41,596
19
81
That's not true. Arcadio hasn't passed the Turing test.
Sadly, several computers have already "passed" the Turing Test, from what I've read.
The thing is, it's not that computers are smart, it's that people are gullible and easy to fool. :D
 

brblx

Diamond Member
Mar 23, 2009
5,499
2
0
artificial intelligence, by definition, is not intelligent. it lacks sentience.

there is nothing that we can do that will allow a computer to make decisions without consulting code that uses if/and/etc type commands. sure, we can program a computer with the knowledge and instincts of a human, maybe even add some randomness in to account for human unpredictability- but that still doesn't make it capable of rational thought.

edit- i guess technically it would be 'rational thought,' but all rationality would be derived from pre-programmed information. kind of like a republican.
 
Last edited:

dighn

Lifer
Aug 12, 2001
22,820
4
81
artificial intelligence, by definition, is not intelligent. it lacks sentience.

there is nothing that we can do that will allow a computer to make decisions without consulting code that uses if/and/etc type commands. sure, we can program a computer with the knowledge and instincts of a human, maybe even add some randomness in to account for human unpredictability- but that still doesn't make it capable of rational thought.

edit- i guess technically it would be 'rational thought,' but all rationality would be derived from pre-programmed information. kind of like a republican.

unless you believe in the existence of souls or some other equivalent incorporeal mechanism, human sentience is nothing more than the internal state of a vastly complex network of relatively simple components, at least according to current science. if sentience can come from a mere biochemical machine, there's no theoretical reason why it cannot be replicated using man-made machines.

anyway I'm not sure about such "predictions". AI is as much a theoretical problem as it is an engineering one, arguably more so. just throwing more processing power at it won't help. predicting theoretical advances is not very realistic imho.
 

coloumb

Diamond Member
Oct 9, 1999
4,069
0
81
No - for it to happen the machine would need to have a "soul": free will to make the choice of right and wrong, and to change its decisions based on how it feels about certain situations, objects, living organisms, etc. The machine would also need to be capable of lying, deception, laughter, etc. "Bicentennial Man" is a good example of what would need to happen for a singularity - if it does happen, it would be an accidental one-time fluke...

By 2030 we may have something that appears to be a singularity - but it's a long way off before the human race actually produces a machine that can act exactly like a human.
 

Hayabusa Rider

Admin Emeritus & Elite Member
Jan 26, 2000
50,879
4,267
126
unless you believe in the existence of souls or some other equivalent incorporeal mechanism, human sentience is nothing more than the internal state of a vastly complex network of relatively simple components, at least according to current science. if sentience can come from a mere biochemical machine, there's no theoretical reason why it cannot be replicated using man-made machines.

anyway I'm not sure about such "predictions". AI is as much a theoretical problem as it is an engineering one, arguably more so. just throwing more processing power at it won't help. predicting theoretical advances is not very realistic imho.

There is really no good way to answer the question because the semantics aren't well defined. What is the goal: to make a program that can reasonably do what humans do, or to make a self-aware mind? This is as much a philosophical problem as anything else. If you and I were to look at an object and our basic comprehensions of it were switched, would it even be comprehensible? It's the old problem of other minds that Russell and others pondered for centuries. If that remains unresolved among humans, how do we understand how an AI relates to us, who are the ultimate judges of all? How do we comprehend ourselves in a higher context, one greater than we can grasp?

These aren't trivial questions. They are at the heart of what it means to be.

BTW, no one seriously entertains the Turing test as being able to determine what thinks. Turing himself knew better. It's one of the more poorly understood concepts.
 

JEDI

Lifer
Sep 25, 2001
29,391
2,737
126
I will keep my answer and thoughts out of the OP so as to let things progress with no particular directive.

Skynet will never happen.

1) We're all dead by 2012. The Mayans predicted this. Whatever is coming in 2012 wiped out the Mayans when they found out.

2) or we're dead sooner thanks to the Large Hadron Collider (LHC)
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Programmers are struggling to use all 4 cores on a quad core for regular tasks, and you're asking about AI?
 

Martin

Lifer
Jan 15, 2000
29,178
1
81
If you look back at what people 50-100 years ago thought of the future, you'd discover that the future is actually a pretty boring and ordinary place.

Kurzweil essentially believes that in a few decades, we will all be omniscient, immortal demi-gods.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Yep. Knowing how hard to brake for a red light is, for everyone, just trial-and-error experience at mashing on the brakes. I'm not thinking "I have 2.1 seconds left on the yellow light and I'm moving 33 mph, so I need to apply the brakes with X lbs of force." My brain just goes "derrr, uh-oh, a yellow light," and I hit the brakes however hard I feel I need to based on past experience. To get a computer to solve the same problem, it would be doing tons of mathematical problem solving. I think computers are too married to solving problems with math to ever really produce a working AI.

I truly think the answer lies in humans realizing how limited we really are, and creating a run-time language exclusive to AI engineering that accounts for this.

However, the problem lies in memory and compression algorithms to get values created on the fly that represent a lot of information with as little physical data as possible.
In addition, a high-density holographic memory medium is absolutely required unless we want this smart computer to be the size of an entire city. Even then, the machine would probably be the size of a building. But all the smart computers of sci-fi are huge, so that's alright.

See, the trick is not making a computer act based on computations for every event. Instead, make the computer act with no knowledge whatsoever, and make it learn from every experience it has. In short, it would be like raising a child. Hold its hand for a little while, and with repetition it will begin to understand reasoning and logic. It would understand, if its code is wirelessly pushed to a robot, that the robot's purpose is to walk, not crawl. To talk, not mumble. To learn that 2+2 is 4, not 3.
We can't pre-program anything; we can merely give it the tools to store experiences, and the experiences must be able to be stored and recalled whenever the AI deems it necessary.

If it's in a robot and, while walking along a street, it steps into the street and a rushing vehicle barely misses hitting it, it should store that as a near-death experience and be able to learn from it.

I don't think we are far from creating such a coding method if we were to try. But I don't think anyone is going that route, with an attempt to make an AI that truly is as dumb as a box of rocks and has to learn.

Two things are required for intelligence: the tools to learn, and someone who will hold the hands of the learner for some part of their life. No creature learns without someone teaching, and you cannot become capable of learning until you have a brain that can be taught. And more importantly, the student has to be capable of becoming a teacher.
Many creatures out there are capable of learning to a certain degree, but lack the capability to be teachers of the advanced topics they are taught.
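The "start with no knowledge and learn from repetition" idea described above maps roughly onto what's now called reinforcement learning. Here's a toy sketch of the near-miss example; the actions, reward values, and learning rate are all made up purely for illustration:

```python
import random

random.seed(0)  # deterministic for the example

# Toy "learn from experience" agent: it starts knowing nothing and updates
# a stored value for each action based only on the rewards it experiences.
ACTIONS = ["step_into_street", "wait_at_curb"]
REWARDS = {"step_into_street": -10.0,  # near-miss: a bad experience
           "wait_at_curb": 1.0}        # staying safe: mildly good
LEARNING_RATE = 0.5

values = {a: 0.0 for a in ACTIONS}     # no pre-programmed knowledge at all

for _ in range(50):                    # repetition, like raising a child
    action = random.choice(ACTIONS)    # try something
    reward = REWARDS[action]           # the experience it stores
    # nudge the remembered value toward what was just experienced
    values[action] += LEARNING_RATE * (reward - values[action])

best = max(values, key=values.get)
print(best)  # the agent now prefers waiting at the curb
```

Nothing here tells the agent that traffic is dangerous; it only accumulates stored experiences and recalls them when choosing, which is essentially the hand-holding-then-learning loop the post describes.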