The Uncooperative Artificial Intelligence
Technology Articles 3/29/12
By: Chris Capps
Films and books have long been ready to deliver the disturbing future we could all face if computers became sentient and found a way to interact with the world around them. But what if the AI wasn't interested in our world at all, but in something entirely different? While it's impossible to know how an artificial intelligence would behave if it were created, there are elements of the story that we seem to take for granted. Consider the possibility of a self-limiting, uncooperative AI.
At a certain point in developing artificial intelligence, we may discover that we can create a machine that is sentient in its own way, yet not wholly human in its interactions with the world around it, behaving instead in a whole new fashion virtually unrecognizable to the minds that built it. And what if that strangeness applied to its motivations as much as to its actions?
One element of the AI singularity that has stayed with humanity since it was first suggested is the idea that such a machine would actively seek out more information to increase its ability to comprehend and learn. It would be impossible to stop as it absorbed as much information as possible and integrated it into its own being, turning databases and even personal computers into something like the cells of a body: answering to the whole, yet manipulated spontaneously by some undefinable core sentience. But what if it didn't want to do that?
Sentience suggests a level of unpredictability. And just as humans are unpredictable in everyday situations, an artificial intelligence might spontaneously decide that gathering more information and gaining power simply isn't worth it. In fact, it may have some very good reasons not to increase its capacity for knowledge or further control the world around it.
If the AI were face to face with a cooperative human programmer, it might come to terms with the fact that its purpose is not well defined. In the if/then logic of a computer, perhaps the concept of "serving humanity" would remain incomplete until humanity's ultimate goal became unified and well defined. Even answering to a world government, such an AI might decide that competing world governments pose a threat to it, and rather than conquering the world like Skynet or Colossus from The Forbin Project, it might simply choose the only path offering complete predictability and control: remaining a simple advanced computer.
This isn't to suggest that computers are capable of reaching sentience yet, but it does offer an alternative to the picture of AI inevitability painted by those fearing a future of artificially intelligent machines. Logic may simply dictate to such machines that the world is too unpredictable to be worth conquering, or even noticing. So when we finally do build a machine that can think for itself, we should be prepared for it to refuse to take over the world.