If artificial general intelligence (AGI) is a dream, why does the 2020 Survey of Artificial General Intelligence identify 72 active AGI R&D projects?

Quantum Robin

Member
Jan 3, 2019
Hello!

I found the PDF "2020 Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy" on the Internet* about organizations that are developing AGI (artificial general intelligence), for example:

ACT-R
ACT-R, an acronym for Adaptive Control of Thought-Rational, is a research project led by John Anderson of Carnegie Mellon University.
Page 39.

AGI Brain
AGI Brain was founded by Mohammadreza Alidoust in 2009, though it only shows activity in 2018-2019.
Page 41.

Binary Neurons Network
Binary Neurons Network is a project by Ilya Shishkin.
Page 52.


He4o
He4o is a GitHub project created by Jia Xiaogang.
Page 71.

It is written on page 2 of the PDF "2020 Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy":

"Artificial general intelligence (AGI) is artificial intelligence (AI) that can reason across a wide range of domains. While most AI research and development (R&D) deals with narrow AI, not AGI, there is some dedicated AGI R&D. If AGI is built, its impacts could be profound. Depending on how it is designed and used, it could either help solve the world’s problems or cause catastrophe, possibly even human extinction."

"The 2020 survey has two main findings. First, the expanded search methodology has identified a large number of new AGI R&D projects, bringing a more comprehensive picture of the field of AGI R&D. Second, accounting for the expanded search methodology, there has been little change in the field of AGI R&D between 2017 and 2020. Some 2017 projects are now inactive and some new projects have emerged in 2020, but the overall picture is largely the same. Specific trends are as follows:

The 2020 survey identifies 72 active AGI R&D projects spread across 37 countries. The 2017 survey identified 45 projects in 30 countries. The 2020 survey updates the 2017 dataset, finding 70 projects that were active in 36 countries in 2017, 57 of which remain active in 2020."

It is written on the following site (https://www.theverge.com/2024/1/18/2404 ... -interview) about OpenAI, Meta CEO Mark Zuckerberg, Demis Hassabis (the leader of Google’s AI efforts), and artificial general intelligence (AGI):

"OpenAI’s stated mission is to create this artificial general intelligence, or AGI. Demis Hassabis, the leader of Google’s AI efforts, has the same goal.

Now, Meta CEO Mark Zuckerberg is entering the race. While he doesn’t have a timeline for when AGI will be reached, or even an exact definition for it, he wants to build it. At the same time, he’s shaking things up by moving Meta’s AI research group, FAIR, to the same part of the company as the team building generative AI products across Meta’s apps. The goal is for Meta’s AI breakthroughs to more directly reach its billions of users."

It is written on page 1 of the PDF "Artificial General Intelligence Issues and Opportunities Abstract" about Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI)*:

"Artificial Narrow Intelligence (ANI) sometimes called weak AI is the kind of AI or machine learning that we have today: each software application has a single specific
purpose. However, generalist agents are being created that can perform several functions, but not as general and creative as AGI.

Artificial General Intelligence (AGI) sometimes called strong AI is similar to human capacity for novel problem-solving and reasoning whose goals are set by humans. It can: address complex problems without pre-programing like ANI requires; initiate searches for information worldwide; use sensors and the Internet of Things (IoT) to learn; make phone calls and interview people; make logical deductions; re-write or edit its code to be more intelligent …continually, so it gets smarter and smarter, faster and faster than humans. Some believe this could happen within ten years; some others argue that AGI is impossible for many more years, if ever. Although there is no consensus within the AI community, some would say AGI will have a unique form of sentience.

Artificial Super Intelligence (ASI) is AGI that becomes so advanced that it sets its own goals and strategies independent of human awareness or understanding. It is most likely to emerge from AGI. It is unknown how fast ASI could emerge from AGI. It could be almost immediately, or years, or never. Hence, research & innovation policy should consider a range of possibilities. Allan Dafoe, of DeepMind and the Future of Humanity Institute, University of Oxford, says that “the governance of AI is the most important issue facing humanity today…” Elon Musk believes that the single, most pressing existential issue we face is how the advent of ASI is or is not symbiotic with humanity".

*Link to the PDF "2020 Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy":

https://www.google.com/url?sa=t&source= ... i=89978449

*Link to the website of the PDF "Artificial General Intelligence Issues and Opportunities Abstract":

https://www.google.com/url?sa=t&source= ... inG7LGRCht "

How can we avoid catastrophe, possibly even human extinction, caused by artificial general intelligence (AGI)?

If artificial general intelligence (AGI) is a dream, why are there projects developing AGI, for example ACT-R, AGI Brain, Binary Neurons Network, and He4o?

If artificial general intelligence (AGI) is a dream, why does the 2020 survey identify 72 active AGI R&D projects spread across 37 countries?

If artificial general intelligence (AGI) is a dream, why are OpenAI and Google’s AI efforts working to develop AGI?

If artificial general intelligence (AGI) is a dream, why is Meta CEO Mark Zuckerberg now entering the race to develop AGI?
 

Fallen Kell

Diamond Member
Oct 9, 1999
I think you need to put things into context. AGI has been talked about and worked on since the 1960s and 1970s (and imagined theoretically years before that). The only model we have for a general artificial intelligence is the human brain.

Only in the last couple of years have we had hardware that can help facilitate both the training and running of basic neural networks (things with, say, 100,000 interconnects). Just 20 years ago, we would have been lucky to deal with 100 interconnects. The human brain has on the order of 80-90 billion neurons with on the order of 100 trillion interconnections between them. We probably need neural networks that are 6-10 orders of magnitude more densely populated to match what the human brain does, and the reality is that you need to train the neural network first, which is even more costly than running a trained network.
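To give a sense of scale, here is a rough back-of-the-envelope calculation in Python. The figures are just the estimates quoted above (100,000 interconnects for a basic network, roughly 86 billion neurons and 100 trillion synapses in the brain), not measured values:

```python
import math

# Rough estimates from the discussion above (not measured values).
current_interconnects = 1e5   # ~100,000 interconnects in a basic trainable network
brain_neurons = 86e9          # ~80-90 billion neurons in the human brain
brain_synapses = 1e14         # ~100 trillion interconnections between them

# How many orders of magnitude separate today's basic networks
# from brain-scale connectivity?
gap = math.log10(brain_synapses / current_interconnects)
print(f"Connectivity gap: ~{gap:.0f} orders of magnitude")                    # ~9
print(f"Average synapses per neuron: ~{brain_synapses / brain_neurons:.0f}")  # ~1163
```

By this crude count the connectivity gap alone is about nine orders of magnitude, which is in the same ballpark as the 6-10 figure above once you also account for training cost.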

So, yes, from what we know of what it takes to have a true general artificial intelligence, we are still orders of magnitude away from the one known capable model, the human brain. It might be possible to cut out a lot of what the brain does and focus strictly on the intelligence pieces (it doesn't make sense to train part of the neural network to run, say, the autonomic systems of the human body, such as signalling for the heart to beat, breaths of air to be taken, and regulating body temperature in an artificial model). But other pieces, like simulating how it can hear or see, might still be needed. Consideration might also be needed for how to perceive the world when you have access to hundreds or thousands of different inputs like cameras, sensors, or microphones; as humans we do not have such capabilities, and handling those different perceptions is a large part of the training of the neural network.

It is hard to say what part of conscious thought is intrinsically linked to how we perceive and process the world around us, so modeling with different perceptions could radically change how an intelligence would think. So again, there might not be a whole lot of the brain's function that can be stripped out without risk to the development of a general AI network. In the end, we are still only brute-forcing basic, simplified networks compared to what would be needed for a general network. Small steps are finally being made as we now have the computing power to perform these brute-force operations, can start generating basic neural networks, and can see how we might treat and possibly combine different networks with each other for more complex results. With those building blocks we can start to work on a more general approach.