It’s been almost ten years since the US first went to war in Afghanistan. Even as that ten-year mark approaches for what was intended to be a temporary military maneuver, a friendly fire tragedy has the public scrutinizing the use of robotic drones in military campaigns. And though the incident may not be the first time a shot was mistakenly fired at a friendly target, it is the first time a marine was mistakenly targeted by a robotic drone.
It’s an issue we may have to come to terms with as technology improves. Just as humans make mistakes in the field of war, machines will be misdirected and accidents will happen. But the fact that these incidents are accidents does not make the result any less tragic. How will the future treat them? Will we see a battlefield where drones are made safer, so that this level of tragedy never happens again? Or will we instead be forced into a world where, as drones become increasingly automated, mistakes are taken lightly because no human is to blame?
The use of robotics in war began as simply as any other new technology. At first, drones were intended primarily for surveillance. But as they proved successful, troops soon relied on them to stay out of harm’s way when engaging enemy combatants. Unfortunately, drones also granted their operators a level of anonymity, one that made tragedies all the more difficult to trace. Civilian deaths have been reported from strikes with no troops present on the ground and little proof of what actually happened.
And though the incident is tragic, it paints a sobering picture when we consider the robotic devices militaries intend to field in future wars. So far, every shot fired by an automated attack drone must be confirmed by a human operator working live controls from a remote location. But what happens when such a system loses its human controller? Will we one day see a world where robots have advanced to the point that they achieve autonomy and make the decision to kill, or not, entirely on their own?
Over the years there have been several proposed safety measures meant to ensure an automated drone would not attack a human, with varying levels of real-world applicability. One idea put forward was a patch, worn on or woven into a military uniform, that would prevent a weapon from firing at anyone wearing it. Of course, this too carries potential drawbacks. Beyond the risk of such a device being captured by enemy troops, some have voiced concerns about making possession of an object the condition for a drone to hold its fire, since civilians carrying no such marker would not be spared if the machine targeted them. Luckily, it seems we may still have some time before completely automated robots enter the battlefield. But in the meantime, any shots fired on friendly targets would have had to be confirmed by human hands.
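To make those two safeguards concrete, here is a minimal sketch of how such decision logic might look, assuming hypothetical names (Target, may_engage, request_human_confirmation) and a stand-in flag for the friendly-patch signal. It is an illustration of the concept only, not a description of any real weapons system.

```python
# Hypothetical sketch of the two safeguards described above: fire-control
# logic that refuses to engage unless (1) the target carries no friendly
# identification signal and (2) a remote human operator explicitly confirms
# the shot. All names are illustrative, not drawn from any real system.

from dataclasses import dataclass


@dataclass
class Target:
    track_id: str
    broadcasts_friendly_iff: bool  # e.g. a signal from a patch woven into a uniform


def request_human_confirmation(target: Target) -> bool:
    """Stand-in for the live link to a remote human operator.

    In a real system this would block until the operator reviews the
    sensor feed and approves or rejects the engagement.
    """
    answer = input(f"Confirm engagement of track {target.track_id}? [y/N] ")
    return answer.strip().lower() == "y"


def may_engage(target: Target) -> bool:
    # Safeguard 1: never fire on anything broadcasting a friendly marker.
    # (Note the drawback raised above: anyone without the marker, including
    # civilians, gets no protection from this check.)
    if target.broadcasts_friendly_iff:
        return False
    # Safeguard 2: no shot without an explicit human decision.
    return request_human_confirmation(target)


if __name__ == "__main__":
    # A target carrying the friendly patch is rejected before a human is even asked.
    print(may_engage(Target(track_id="T-042", broadcasts_friendly_iff=True)))
```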