Over the past few years, a number of transhumanists and technologists have raised the possibility that at some point in the near future we may face an invasion of sorts from an artificial intelligence. Such an invasion would likely not happen in physical space, but would instead play out across networks, affecting information as it is transferred. The end result of such an event is unknown, but some have begun to ask whether it would be worthwhile to develop some sort of plan in the event of an AI outbreak, or "attack" as it might be interpreted. But is such a plan even possible?
However effective or ineffective it may prove, there is nothing inherently wrong with developing a strategy for responding to an AI attack, other than the unpredictable nature of the AI itself. The AI could behave in any number of ways; it might even attempt things that are detrimental in the short term but beneficial to the human race in the long run. Of course, this is only one possibility among several. But can we speculate on the nature of such an attack even if we ultimately cannot determine its root motivation?
For one thing, such an event would take place almost exclusively over networks, but those networks may not be limited to the Internet. After the discovery of Stuxnet, some security experts speculated that an attack could spread through a combination of networked terminals and malware lying undetected on hard disks. And in the event of a Stuxnet-style attack, detection would become even more difficult.
Perhaps the best way of defending against a worldwide attack started by an artificial intelligence would be to trace the event to its source and then study the entity involved. By understanding the way the being forms logical conclusions, it might be possible, while it is still at an early stage, to confront it with a stream of data that renders it inert. Or, if it has advanced beyond such a primitive level of thinking, we might reason with it, or trap it within an illusion that causes it to believe, at least temporarily, that it is still operating in the real world.
Of course, the AI would presumably still be operating from a single location. Avoiding global catastrophe may be as simple as limiting that location's ability to tap into other networks. Things become more difficult if the being is able to communicate through alternate means or has somehow gained mobility: either by transferring itself onto a network and assembling an ad hoc brain across it, or by quite literally mobilizing the hardware hosting its core brain. But as we speculate in this direction, the possibilities become less grounded in the real world and quickly spiral toward science fiction. Let's just hope that when a self-improving AI does come around, it decides humanity is a good thing.
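The quarantine idea above, cutting a location off from other networks, can be illustrated with a minimal sketch: a default-deny egress policy in which a quarantined host may only contact destinations on an explicit allowlist. The network addresses below are hypothetical placeholders, not a recommendation for any real deployment.

```python
import ipaddress

# Hypothetical allowlist: the only networks a quarantined host may reach.
# Everything else is denied by default (default-deny egress).
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.50.0/24"),    # isolated monitoring segment (hypothetical)
    ipaddress.ip_network("192.168.99.8/32"), # single analysis workstation (hypothetical)
]

def egress_allowed(dest_ip: str) -> bool:
    """Return True only if dest_ip falls inside an allowed network."""
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

# Traffic to the open Internet is blocked; traffic to the
# monitoring segment is permitted.
print(egress_allowed("8.8.8.8"))     # blocked
print(egress_allowed("10.0.50.17"))  # permitted
```

The point of the default-deny design is that anything not explicitly anticipated, which is exactly the problem an unpredictable AI poses, is refused rather than allowed.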