Science fiction and horror always seem to voice the concerns of the generation producing them. And while the idea of a robotic apocalypse may once have been nothing more than fantasy, we are already starting to see technology that could make this symbol of technological oppression more literal. Artificial intelligence may still be far from becoming self-aware, but even computer viruses today possess traits that, if translated into military robots, could pose a serious and long-lasting problem for the future, just as landmines did after World War II.
Robots are still controlled by humans when they go out onto the battlefield, but many of their systems are not directed by radio the way an RC helicopter would be. Instead, on-board targeting computers and navigation systems give them a far more substantial and efficient punch than we simple humans could deliver on our own. In a few decades, though, it may not be so simple, and the aforementioned viruses may already have given us a clear indication of what a robot may one day be able to do.
An article in the July 25, 2009 edition of the New York Times entitled “Scientists Worry Machines May Outsmart Man” used the word “cockroach,” quoted from the Association for the Advancement of Artificial Intelligence, to describe the level of intelligence exhibited by certain artificial programs. Now combine that level of intelligence with programming to kill enemy targets within a certain area, a few advanced weapons, the terrain-scaling ability of Boston Dynamics’ BigDog robotic walker, and the energy-harvesting properties of the EATR robot, which forages for organic matter and converts it into energy. Take these factors together and we may one day be looking at a fairly grim prospect. Add the idea of the same robots gradually acquiring materials and building copies of themselves over time, and we could be looking at another major conflict on our planet. It may not be the Skynet nightmare of the Terminator series, but it could still be a sustained problem that future generations would have to bear, particularly if proper safety precautions were not built into the machines themselves, such as an automated shut-off switch after a certain period of time.
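To make that last safeguard concrete, here is a minimal, purely hypothetical sketch of what an automated shut-off might look like in software: an autonomous unit that simply refuses to act once a hard-coded expiry deadline has passed. The class, field, and method names are invented for illustration and do not describe any real military system.

```python
from datetime import datetime, timedelta, timezone

class AutonomousUnit:
    """Hypothetical autonomous platform with a built-in expiry fail-safe."""

    def __init__(self, service_life_days: int = 30):
        # Hard-coded shutdown deadline, fixed at deployment time.
        self.deployed_at = datetime.now(timezone.utc)
        self.expires_at = self.deployed_at + timedelta(days=service_life_days)
        self.active = True

    def _check_failsafe(self) -> None:
        # Every operation first verifies the unit has not outlived its mandate.
        if datetime.now(timezone.utc) >= self.expires_at:
            self.shutdown()

    def shutdown(self) -> None:
        # Permanently disable the unit; in hardware this would be irreversible.
        self.active = False

    def perform_mission_step(self) -> bool:
        self._check_failsafe()
        if not self.active:
            return False  # Unit has expired: refuse to act.
        # ... mission logic would go here ...
        return True


# Usage sketch: a unit deployed with a 30-day service life simply stops
# acting once that window closes, rather than persisting indefinitely
# the way unexploded landmines do.
unit = AutonomousUnit(service_life_days=30)
print(unit.perform_mission_step())  # True while within its service life
```

The point of the sketch is only that such a time limit must be designed in from the start; bolting it on after deployment, as history suggests with landmines, is far harder.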
If the idea seemed ridiculous in the 1980s, it is quickly becoming a sobering possibility for some future date. But it may still be some time off. Robots today, however advanced, still have difficulty operating in a changing and unpredictable environment; rather than adapting to new situations, they must adhere to a specific set of pre-programmed instructions. That limited intelligence is one of the main things keeping these machines from becoming a major contemporary issue. But perhaps in twenty years we will see the start of a whole new generation of autonomous machines built specifically for war. Will we as a society be ready for the ethical and technical questions this raises?