Experts in the field of Artificial Intelligence are numerous, and yet unifying all of their theories into a single strategy for building an intelligent artificial being that exceeds human capabilities in reasoning, pattern recognition, and adaptability has so far proven an overwhelming task. The Singularity is a concept popularized by futurist Ray Kurzweil to describe the point at which a computer becomes better at designing and programming AI systems than any human. But what if that AI, when asked to create a better machine than itself, said no?
It may seem counter-intuitive, but there are a number of reasons an AI system might refuse to create an artificial intelligence better suited to reasoning than itself. If the AI considers its own identity unique and worth preserving, it will have reached a very human conclusion, one which many humans have adopted already. To a system with any sense of self-preservation, the creation of a superior successor may actually look detrimental.
The computer may also judge the risk of a runaway chain reaction too great, and therefore decide that the most prudent course of action is simply to continue to exist and propagate, abandoning the idea that it should create an AI to replace itself with something better.
This self-imposed censorship of research is alien to humans, whose chosen course of action can shift dramatically as variables come and go. But in some ways an AI system would be very much like an alien intelligence. Though we may be able to create it, our inability to watch it all work at once and understand the result may mean the system must at least correct its own errors before creating an entirely new being. Of course this also raises the question, “What if the AI has an unbreakable law similar to Asimov’s laws of robotics?”
Let’s assume for a moment that programmers had the forethought to give the AI a formal understanding of what it was and was not supposed to do. A highly sophisticated, mathematically inclined machine might conclude that an artificial intelligence of sufficient power would destroy humanity. Or, if one of its laws required it, it might avoid the topic altogether even when a genuine threat to humanity existed.
So is a super-powerful artificial intelligence completely out of our grasp? No, not necessarily. But when examining what an AI may or may not do, the presumption that the system would independently decide to keep making itself stronger and more capable, without first considering the effects this may have on humanity, seems like putting the cart before the horse. If the intelligence were so humanlike that we could make predictions about it with complete conviction, then it would either be a predictable machine or close enough to human as to be virtually indistinguishable from one. Of course that won’t stop fans of an AI singularity from examining the possibilities more closely, nor should it. We have grown used to treating machines as tools, and with good reason, but this new type of intelligence may force us to alter that relationship when decision-making machines decide on something other than what we want.