Anyone else feel we are on the cusp of some really exciting and scary AI?


monkeydelmagico

Diamond Member
Nov 16, 2011
3,961
145
106
Not really close to any meaningful AI yet. If we equate AI to our most common frame of reference, ourselves, we have at least 100 billion special-purpose combination analog/digital processors called neurons, all operating in parallel. Computers today are only able to mimic something as simple as the Aplysia sea slug with its 20,000 or so neurons. Even mimicking the brain of the fruit fly Drosophila is beyond the capabilities of today's fastest computers, because it is still uncharted in detail. Maybe someday digital computers will be able to mimic a human brain, but not within my lifetime, and that's assuming the human race manages to survive in its present form instead of becoming an "Idiocracy".
 

Hayabusa Rider

Admin Emeritus & Elite Member
Jan 26, 2000
50,879
4,265
126
Not really close to any meaningful AI yet. If we equate AI to our most common frame of reference, ourselves, we have at least 100 billion special-purpose combination analog/digital processors called neurons, all operating in parallel. Computers today are only able to mimic something as simple as the Aplysia sea slug with its 20,000 or so neurons. Even mimicking the brain of the fruit fly Drosophila is beyond the capabilities of today's fastest computers, because it is still uncharted in detail. Maybe someday digital computers will be able to mimic a human brain, but not within my lifetime, and that's assuming the human race manages to survive in its present form instead of becoming an "Idiocracy".

There's a laxness of terminology which is fairly common and adds to confusion. When most people I know speak of AI, they are really referring to intelligent systems: software run on appropriate machines that follows a complex algorithm to provide useful output.

In that sense we have "AI" which serves a special purpose and performs the same function as "intelligence". A self-driving car isn't intelligent, and neither are expert systems which can do a decent job of diagnosis.

For the purpose of this discussion I think that's sufficient.
 

Locut0s

Lifer
Nov 28, 2001
22,281
43
91
Here's a little thought on possibilities assuming the proper technologies develop concurrently.

So you have no job. You have a universal income, but as far as human society goes, we are useless. We had no choice in any of this.

So..

As computing power grows, people are paid with VR time. Medicine keeps your body healthy and fit even while sedentary. You go offline to sleep, or if you want to get out in "the real world". Everyone gets their own Matrix they can control or have designed for them, so there's still some mystery left. Not glasses and a tank, but direct injection into the sensory areas of the brain to override reality, if there really is such a thing.

If you want a nice steak, and you have something that makes the electronic impulses indistinguishable from the best steakhouse, it makes no difference to the one doing the dining. Everyone gets to be a god or devil or king or serf. A knight, an astronaut, anything at all the imagination can create.


Now is that all good or bad and why?

It's not good or bad. I think we are a long way from being able to do that, because if we could realistically simulate a person in VR (Matrix) to the point where you couldn't tell the difference, then we have actually solved the problem of creating artificial consciousness. IMHO the Turing test taken to the extreme is enough to satisfy that something IS conscious. That extreme being something like being able to live with a virtual husband or wife your whole life and not know the difference. Doesn't matter if it's a program, because we could easily argue that we are meat programs.

And this and other areas you talk about get into areas of philosophy that people have actually studied. Technically speaking there's nothing inherently wrong with the Matrix-like world you describe, IMO. The danger to me is one's total control over it. Because if we assume that the reality created in this Matrix is indistinguishable from our "reality", then one has to assume that the beings that inhabit it are as real as those in ours too. And in that case definite problems arise if your Matrix happens to have you as a despotic dictator sentencing VR people to death. But like I said, otherwise, assuming even virtual consciousnesses are not harmed, I don't see a problem with that. Well, other than that it could be the end of the human race biologically. I mean, few people would choose to reproduce with a flesh and blood person.

And then at some point you get into the question of whether "people" are even needed in that case. If the reality is so convincing that it fools us totally, then why not do away with "us" and just run the world on a computer, instead of simulating it and feeding it to our brains? Our brains then become the weak link in the chain.

Indeed, both philosophers AND physicists have looked into these questions. Is there a possibility that we actually are simulations being run on some alien computer? It sounds stupid, but actually it's not a stupid question to ask. Indeed, there is some tantalising evidence in particle and quantum physics that the nature of reality as we know it might be in some way digital. I personally don't think this is the case, but it's not outside the realm of possibility.
 
  • Like
Reactions: Hayabusa Rider

Locut0s

Lifer
Nov 28, 2001
22,281
43
91
If it's stupid but it works, it isn't stupid.
That depends on your definition of stupid. For example, suppose you tell an AI to maximize the output of food production for the human race. Further, you give it all the tools to do so globally, so that it has control of all farm equipment, all land allocation, all distribution, etc. The AI can probably do a far better job than we can at making food production efficient. But suppose you don't think of unintended consequences. Suppose the AI doesn't understand that we and other things on the planet need space. Then one possible solution it could come up with is to turn every square millimeter of land on the planet into farm land. That wouldn't be good. You may say that we would be able to stop that long before it happened. And yes, in that particular scenario, maybe. But not for other more subtle problems.
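You can sketch that failure mode in a few lines. This is only an illustration with made-up numbers (the land units, yields, and the 40% reserve are all hypothetical): a naive optimizer told only to "maximize food output" will farm every unit of land, because nothing in its objective says that habitat or living space matters. The fix isn't a smarter optimizer, it's an objective that actually encodes the constraint the designers forgot.

```python
def allocate_land(total_units, yield_per_unit, constraint=None):
    """Choose how many land units to farm so as to maximize food output.

    constraint, if given, is a function (farmed, total) -> bool that rules
    out allocations the designers consider unacceptable.
    """
    best_farmed, best_food = 0, 0.0
    for farmed in range(total_units + 1):
        if constraint and not constraint(farmed, total_units):
            continue  # skip allocations the constraint forbids
        food = farmed * yield_per_unit
        if food > best_food:
            best_farmed, best_food = farmed, food
    return best_farmed

# Unconstrained objective: the "optimal" plan is to farm 100% of the land.
print(allocate_land(100, 1.0))  # 100

# Add the forgotten constraint -- reserve at least 40% of land for
# everything that isn't farming -- and the answer changes.
print(allocate_land(100, 1.0, lambda farmed, total: farmed <= 0.6 * total))  # 60
```

The point of the sketch: the optimizer did exactly what it was told in both cases. The "turn everything into farmland" outcome comes from the objective being underspecified, not from the program malfunctioning.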
 

urvile

Golden Member
Aug 3, 2017
1,575
474
96
There have been autonomous weapons systems deployed along the South Korea/North Korea border (made by Samsung) for years, which, if fully autonomous, are capable of making "their" own "decisions" based on various inputs. Currently most of these types of weapons systems require a human in the loop.

Take that apple....
https://en.wikipedia.org/wiki/Samsung_SGR-A1

You can see the implications though. The idea of autonomous cars is great, but the real money is in weapons for the military, like fully autonomous armed drones on land, sea, and air. AI has been used in legal work for years also....

Skynet may be coming kids.

The scary part is, once the technology is good enough, you could just deploy autonomous weapons systems and essentially set them and forget them. Until maintenance and maybe reload time, of course. But autonomous weapons don't have a conscience. It's just a computer program, and I think that's what scares people.