It seems to me that most of the nightmare scenarios re: ASI are predicated on the AI "building" itself to some extent. If the system is directed to constantly improve its own intelligence, then an exponential curve and evolution beyond human capability seem inevitable.
But what if we just don't program/design/give it the resources to do that? Progress will be slower but at least we have a shot at maintaining some control over each iteration.
