Google and Nasa back new school "Singularity University"

eternalone

Golden Member
Sep 10, 2008
1,500
2
81
From http://www.drudgereport.com/


Google and Nasa back new school for futurists

By David Gelles in San Francisco

Published: February 3 2009 05:02 | Last updated: February 3 2009 05:02

Google and Nasa are throwing their weight behind a new school for futurists in Silicon Valley to prepare scientists for an era when machines become cleverer than people.

The new institution, known as "Singularity University", is to be headed by Ray Kurzweil, whose predictions about the exponential pace of technological change have made him a controversial figure in technology circles.

Google and Nasa's backing demonstrates the growing mainstream acceptance of Mr Kurzweil's views, which include a claim that before the middle of this century artificial intelligence will outstrip human beings, ushering in a new era of civilisation.

To be housed at Nasa's Ames Research Center, a stone's throw from the Googleplex, the Singularity University will offer courses on biotechnology, nanotechnology and artificial intelligence.

The so-called "singularity" is a theorised period of rapid technological progress in the near future. Mr Kurzweil, an American inventor, popularised the term in his 2005 book "The Singularity is Near".

Proponents say that during the singularity, machines will be able to improve themselves using artificial intelligence and that smarter-than-human computers will solve problems including energy scarcity, climate change and hunger.

Yet many critics call the singularity dangerous. Some worry that a malicious artificial intelligence might annihilate the human race.

Mr Kurzweil said the university was launching now because many technologies were approaching a moment of radical advancement. "We're getting to the steep part of the curve," said Mr Kurzweil. "It's not just electronics and computers. It's any technology where we can measure the information content, like genetics."

The school is backed by Larry Page, Google co-founder, and Peter Diamandis, chief executive of X-Prize, an organisation which provides grants to support technological change.

"We are anchoring the university in what is in the lab today, with an understanding of what's in the realm of possibility in the future," said Mr Diamandis, who will be vice-chancellor. "The day before something is truly a breakthrough, it's a crazy idea."

Despite its title, the school will not be an accredited university. Instead, it will be modelled on the International Space University in Strasbourg, France, the interdisciplinary, multi-cultural school that Mr Diamandis helped establish in 1987.

http://www.ft.com/cms/s/0/8b16...ac.html?nclick_check=1




Do you think it's credible that computers could achieve such intelligence? And if they did, wouldn't the logical thing for a far superior intelligence be to get rid of humans? I mean, the computers would start looking at us as backwards and a threat to their existence. Maybe they'd do it slowly and just assimilate us into machines until there are no real humans left; that would be the smart thing for them to do, I think. But I definitely think humans would not be in the long-term equation of super intelligent computers.
 

CLite

Golden Member
Dec 6, 2005
1,726
7
76
As an engineer I find thinking about the future to be an interesting topic. Eventually computers will be able to do what people think of as "left-brain" activity, and therefore artists and others will become more and more important as engineering/science/math become dominated by very advanced computers.

I actually read a terrible book about this. I read about 25% of it, got the gist and thought it was interesting, but the book meandered and was very poorly written.
 

m1ldslide1

Platinum Member
Feb 20, 2006
2,321
0
0
Will humans be competing with the singularity for resources? If so, then anything is possible. I'm not sure I agree, however, that a sentient intelligence has to have the same traits as 'civilized' humanity; in other words, just because we annihilate species we're in competition with doesn't mean the AI would have such compulsions.

At this point it's silly to denounce specific lines of research as dangerous when we have no idea what the outcome will be. The argument could easily be extended to many other branches of science: genetic engineering, fusion technology, quantum physics (LHC, anybody?). That's not to say that ethics don't come into the equation, because they obviously do, but I think it's WAY too early to prognosticate how this is going to shake out. It's decades away anyway, and stifling this line of research could be catastrophic.
 

jackschmittusa

Diamond Member
Apr 16, 2003
5,972
1
0
I don't see why intelligent machines would see us as a threat. I would think they would catalog us as curious, slow, and inefficient (easily distracted by such things as sex, religion, Twinkies, etc.).
 

dmw16

Diamond Member
Nov 12, 2000
7,608
0
0
Sure it is possible, in fact likely. I (also as an engineer) find pondering such things interesting.

I think that the steady march of technological advancement is indisputable. It is in our nature to push advances forward, and we will continue to do so.

Let's say we do reach this singularity (which I think we will); I see no reason to assume the machines' roles will change. Malicious behavior is a human trait. We tend to anthropomorphize non-living things, but from a logical standpoint it is silly. Why would the machines one day decide to turn on us?

I find the possibility of machines working to solve the problems we face amazing. If in 100 years we no longer fight over food, energy, medicine, etc., it could usher in a new phase of humanity. We could turn our attention to exploring the universe and living in peace with one another. Maybe I'm an optimist.

But to get back to the technical side of things: just over 100 years ago, we had only just flown the first airplane. In less than that 100-year span, we've sent robots to Mars. And I think the development of technology continues to accelerate.
 

shiner

Lifer
Jul 18, 2000
17,112
1
0
Originally posted by: dmw16
Sure it is possible, in fact likely. I (also as an engineer) find pondering such things interesting.

I think that the steady march of technological advancement is indisputable. It is in our nature to push advances forward, and we will continue to do so.

Let's say we do reach this singularity (which I think we will); I see no reason to assume the machines' roles will change. Malicious behavior is a human trait. We tend to anthropomorphize non-living things, but from a logical standpoint it is silly. Why would the machines one day decide to turn on us?

I find the possibility of machines working to solve the problems we face amazing. If in 100 years we no longer fight over food, energy, medicine, etc., it could usher in a new phase of humanity. We could turn our attention to exploring the universe and living in peace with one another. Maybe I'm an optimist.

But to get back to the technical side of things: just over 100 years ago, we had only just flown the first airplane. In less than that 100-year span, we've sent robots to Mars. And I think the development of technology continues to accelerate.

and we invented The Clapper, The Snuggie and the Slap Chop.

 

Train

Lifer
Jun 22, 2000
13,583
80
91
www.bing.com
The singularity does not necessarily mean the dawn of AI; even among those who believe in the singularity, this point is disputed.

I think it's more likely the singularity will happen BEFORE the first true AI.

And it's even more likely that the singularity will then be the CAUSE of the birth of the first AI.

But then again, assuming the exponential explosion of tech that is more or less synonymous with the singularity, AI would happen soon enough afterwards that it might as well be the same pivotal point in time. I imagine most future historians (err, I mean computer archives?) will remember them as a single event.
 

GeezerMan

Platinum Member
Jan 28, 2005
2,146
26
91
Originally posted by: dmw16
Sure it is possible, in fact likely. I (also as an engineer) find pondering such things interesting.

I think that the steady march of technological advancement is indisputable. It is in our nature to push advances forward, and we will continue to do so.

Let's say we do reach this singularity (which I think we will); I see no reason to assume the machines' roles will change. Malicious behavior is a human trait. We tend to anthropomorphize non-living things, but from a logical standpoint it is silly. Why would the machines one day decide to turn on us?

I find the possibility of machines working to solve the problems we face amazing. If in 100 years we no longer fight over food, energy, medicine, etc., it could usher in a new phase of humanity. We could turn our attention to exploring the universe and living in peace with one another. Maybe I'm an optimist.

But to get back to the technical side of things: just over 100 years ago, we had only just flown the first airplane. In less than that 100-year span, we've sent robots to Mars. And I think the development of technology continues to accelerate.

True. Emotions are a human trait. What if the machines decide in a logical manner that humans are more of a problem for the earth than a positive influence?

 

CycloWizard

Lifer
Sep 10, 2001
12,348
1
81
This guy's use of the term "singularity" in this manner indicates to me that he can't have that much of an idea about science and technology. I therefore arbitrarily dismiss everything else he said. :p
 

JohnnyGage

Senior member
Feb 18, 2008
699
0
71
Originally posted by: CycloWizard
This guy's use of the term "singularity" in this manner indicates to me that he can't have that much of an idea about science and technology. I therefore arbitrarily dismiss everything else he said. :p

Exactly, he can't even use the word cleverer in a sentence correctly.
 

dmw16

Diamond Member
Nov 12, 2000
7,608
0
0
Originally posted by: GeezerMan
Originally posted by: dmw16
Sure it is possible, in fact likely. I (also as an engineer) find pondering such things interesting.

I think that the steady march of technological advancement is indisputable. It is in our nature to push advances forward, and we will continue to do so.

Let's say we do reach this singularity (which I think we will); I see no reason to assume the machines' roles will change. Malicious behavior is a human trait. We tend to anthropomorphize non-living things, but from a logical standpoint it is silly. Why would the machines one day decide to turn on us?

I find the possibility of machines working to solve the problems we face amazing. If in 100 years we no longer fight over food, energy, medicine, etc., it could usher in a new phase of humanity. We could turn our attention to exploring the universe and living in peace with one another. Maybe I'm an optimist.

But to get back to the technical side of things: just over 100 years ago, we had only just flown the first airplane. In less than that 100-year span, we've sent robots to Mars. And I think the development of technology continues to accelerate.

True. Emotions are a human trait. What if the machines decide in a logical manner that humans are more of a problem for the earth than a positive influence?

That's applying human traits again. Why do we assume they care about the earth? Just because they are super smart doesn't mean they no longer take "orders" from humans.
 

bbdub333

Senior member
Aug 21, 2007
684
0
0
Originally posted by: jackschmittusa
I don't see why intelligent machines would see us as a threat. I would think they would catalog us as curious, slow, and inefficient (easily distracted by such things as sex, religion, Twinkies, etc.).

Unless they become capable of emotion, develop religion, and view humans as their creators. Wouldn't that be an interesting movie plot...
 

Dissipate

Diamond Member
Jan 17, 2004
6,815
0
0
I recently donated to the Singularity Institute.

I have also considered quitting my job and bumming it until the singularity arrives. Think about it: why bust your ass working, paying taxes, and slaving away for retirement when you can just wait for the machines to do it all for you, and then upload yourself into a computer?
 

BoberFett

Lifer
Oct 9, 1999
37,562
9
81
Originally posted by: jackschmittusa
I don't see why intelligent machines would see us as a threat. I would think they would catalog us as curious, slow, and inefficient (easily distracted by such things as sex, religion, Twinkies, etc.).

Twinkie? Where!?!