- Nov 28, 2001
- 22,205
- 43
- 91
Let me be clear: I'm not talking about general-purpose, human-consciousness-level AI. It's hard to know if we will ever get there. However, the pace of recent AI development, from things like AlphaGo and many, many other applications of neural networks, is... I have to admit, a little unsettling to me. No, I'm not some doomsday conspiracy theorist. Skynet isn't here soon, is it ;-)?
The kinds of things we are seeing now in AI, though, are definitely extremely exciting. I think we are now seeing what can arguably be called "thinking" on a limited level. Remember, these systems don't operate by brute-force search. Take a look at the recent breakthroughs with AlphaGo Zero.
https://en.m.wikipedia.org/wiki/AlphaGo_Zero
It managed to beat the previous AlphaGo 100 games to 0, and that is the version that beat Lee Sedol. More impressive still, it reached that level in just 3 days of training vs. months for its predecessor. AND it did so on much less hardware. It's also coming up with new Go moves that human players have never seen, moves the master players are now looking to copy.
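To give a flavour of the self-play idea, here's a toy sketch in Python. To be clear, this is my illustration, not DeepMind's algorithm: AlphaGo Zero pairs Monte Carlo tree search with a deep network, while this is a bare tabular stand-in on a trivial game (a pile of 5 stones, players alternate taking 1 or 2, whoever takes the last stone wins). The point it shows is the same one that impressed me above: no game knowledge is coded in, and the move table gets good purely from the outcomes of its own games.

```python
import random

def legal_moves(pile):
    return [m for m in (1, 2) if m <= pile]

def self_play_and_learn(episodes=20000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    counts, totals = {}, {}  # per (pile, move): visit count and summed results

    def score(pile, move):
        """Average result (+1 win / -1 loss) for the player making this move."""
        n = counts.get((pile, move), 0)
        return totals.get((pile, move), 0.0) / n if n else 0.0

    for _ in range(episodes):
        pile, player, history = 5, 0, []
        while pile > 0:
            moves = legal_moves(pile)
            if rng.random() < epsilon:   # occasionally explore a random move
                move = rng.choice(moves)
            else:                        # otherwise play greedily from the table
                move = max(moves, key=lambda m: score(pile, m))
            history.append((player, pile, move))
            pile -= move
            player ^= 1
        winner = history[-1][0]          # whoever took the last stone wins
        for pl, p, m in history:         # credit every move in the finished game
            counts[(p, m)] = counts.get((p, m), 0) + 1
            totals[(p, m)] = totals.get((p, m), 0.0) + (1.0 if pl == winner else -1.0)
    return score

score = self_play_and_learn()
# After training, taking 2 from the opening pile of 5 (the winning move)
# scores clearly better than taking 1.
print(score(5, 2), score(5, 1))
```

The learner discovers on its own that taking 2 from the opening pile of 5 wins (it leaves 3, from which the opponent can't avoid handing over the last stones), which is the kind of strategy discovery AlphaGo Zero does at vastly greater scale.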
You may argue that this is not "thinking" because it's just a program, and it's so limited in scope: all it can do is play Go at a superhuman level. Well, sure, but that's the start. And it's clear to me that lots of animals engage in limited thinking. Remember that computers are able to leverage their own advances and improve upon themselves much faster than biology can, at times exponentially.
We are already seeing neural networks in our daily lives, in things like steadily improving voice recognition. You don't realise how scarily good it is until you get a Google Home. My father has one and we were playing around with it the other day. Yes, I know the Home itself is just a cheap piece of plastic and the neural networks run in tons of server farms, but that's beside the point.
We are in for some very interesting times. I really do think, though, that we need to start seriously thinking about AI safety. You don't need Skynet or general consciousness for some scary unintended consequences, especially since these systems are black boxes to everyone most of the time.