Earlier this year, I was invited to speak on the Technology for Peace panel at IPI’s 44th Vienna Seminar. Ameerah Haq (Under-Secretary-General, United Nations Department of Field Support) was also on this panel, and explained how drones are increasingly becoming a feature of DPKO missions. As proof of the importance of this innovation, she recounted a story about the first flight of the MONUSCO drones, operated by UN peacekeeping troops stationed in Goma, North Kivu, in the Democratic Republic of Congo. Goma is on the shores of Lake Kivu, and the most common mode of transport between Goma and Bukavu is unsafe, overcrowded boats across the lake. On their test flight, the UN drones sent back real-time imagery of a boat that was sinking in the middle of the lake. The peacekeepers quickly deployed a few UN boats and saved many passengers from drowning.
That UN peacekeepers were able to undertake a rescue thanks to their new drones is laudable. But the key purpose of deploying UN troops to Goma is to guarantee the safety and protection of civilians in an area where violence from non-state armed groups is all too common. Why did Ms. Haq choose to share a story about a humanitarian action peripheral to the central purpose of DPKO missions? Is it early days and there wasn’t much else to share? Or was this the only story that could be shared because the others would compromise the intelligence gathering that drones are allowing the mission to undertake?
The second thought stayed with me. UN peacekeepers are actively collecting data on civilian (and military?) activities in the Kivus (and elsewhere). Does the local population get a say in what data is collected, and to what purpose? How relevant is this question in conflict settings? Do the same standards apply as elsewhere? Patrick Meier has been doing some great work on the ethics of humanitarian UAVs, but I wonder if we need a concrete discussion on the ethics of drone use for conflict prevention. OCHA recently published a policy brief on the use of UAVs by humanitarian actors where it directly recommends against using UAVs in conflict settings:
“Focus on using UAVs in natural disasters and avoid use in conflict settings: The use of UAVs in conflict settings is still too complex and hard to separate from military uses.”
I understand that OCHA may not want to complicate the still-nascent discussion on humanitarian UAVs by considering conflict settings. However, if drones are starting to be used for non-military purposes in places like the DRC, then we need to begin to discuss this. Here are three problems and two possible solutions to start a conversation on drones, ethics and conflict.
Problem 1: privacy and consent. The discussion around data privacy and UAVs centers on two issues: consent and the imperative to save lives. Consent is critical to any data collection and dissemination in conflict settings, whether via UAVs or otherwise. It is often difficult to meet Do No Harm principles because the unintended consequences of data collection in complex conflict environments are so hard to predict. An important way to mitigate this risk is to obtain the consent of those being surveyed, who are the most likely to understand these unintended consequences. But if the purpose of the MONUSCO UAVs is to allow peacekeepers to monitor a broader area than they can cover overland, then how operationally viable is it to obtain consent for UAV-collected data? Humanitarian actors at times argue that the imperative to save lives trumps the need for consent in certain situations and / or at certain levels of data aggregation. This is an important argument to make in humanitarian crises, but how applicable is it to collecting data on civilian protection? It is much harder to draw the line on what is life-threatening in a conflict context. UAVs cannot detect intent, so how are imagery analysts to determine if a situation is likely to result in loss of life?
Problem 2: fear and confusion. In describing common misconceptions about humanitarian UAVs, Patrick Meier argues that most drones used by the UN / NGOs are perceived by local communities as toys, not as threatening military equipment. In speaking with local peacebuilders in the Somali Region and in Pakistan, I wonder whether the same is true in (at least some) conflict contexts. There is significant trauma among local populations who have witnessed drone strikes that appeared to come from nowhere. There is also much greater suspicion of anything that looks like an instrument to spy, to relay information to places of power far away, and that might (even unintentionally) make them a target for military action. This blogpost by the IRC raises similar concerns about the difficulty that local populations may have in distinguishing drones-for-good in conflict settings. When the MONUSCO drones first started to operate, a consortium of NGOs working in the Kivus warned that they might (at least in the eyes of local beneficiaries) appear to blur the lines between military and humanitarian actors. The OCHA policy brief reinforces these concerns, arguing that painting and signaling humanitarian UAVs to distinguish them from military drones works well in natural disasters, but is unlikely to be sufficient to overcome the fears of local populations in conflict settings.
Problem 3: response and deterrence. Whether collected with UAVs, via SMS-enabled crowdsourcing or at community meetings, a key issue with any system that gathers data in or about a conflict is that it raises expectations for a response. This risk is especially concerning for MONUSCO, who have in the past been criticised for inadequate response to known threats to civilians. Is it ethical for MONUSCO or other UN / NGO actors to deploy UAVs if they do not have the capacity to respond to increased information on threats? One possible counter-argument is to say that the presence of UAVs is in itself a deterrent (just as the presence of UN peacekeepers is meant to be a deterrent). In fact, the head of DPKO has suggested that deterrence is a direct aim of UN drones. Other initiatives using satellite imagery to monitor violence, such as the Satellite Sentinel Project, have similarly argued that surveillance of conflict areas acts as a deterrent. But the notion that a digital Panopticon can deter violent acts is disputable (see for example here), since most conflict actors on the ground are unlikely to be aware that they are being watched and / or are immune to the consequences of surveillance.
Solution 1: education and civic engagement. Educating communities where drones are deployed is one way to address the issues above. OCHA’s policy brief indicates that it is important to increase “the degree of transparency, acceptance and community engagement of the UAV program”. An open conversation with communities can include considerations about the potential risks of drone-enabled data collection and whether communities believe these risks are worth taking. This can pave the way for informed consent about the operation of drones, allowing communities to engage critically, offer grounded advice and hold drone operators to account. Still, a question remains: what happens if a community, after being educated and openly consulted about a UAV program, decides that drones pose too much of a risk or are otherwise not beneficial? In other words, can communities stop UN- or NGO-operated drones from collecting information they have not consented to sharing? Education will be insufficient if there are no mechanisms in place for participatory decision-making on drone use in conflict settings.
Solution 2: from civic engagement to empowerment. Perhaps civic engagement in how outside actors use humanitarian UAVs is not sufficient. In my view, the critical ethical question about drones and conflict is how they shift the balance of power. As with other data-driven, tech-enabled tools, ultimately the only ethical solution (and probably also the most effective at achieving impact) is community-driven implementation of UAV programs. Drones flown by communities as part of their own conflict prevention processes and activities. If you think that’s a crazy undertaking, consider that something similar is already happening for community-led, UAV-enabled disaster risk reduction in Haiti. And there is plenty that local peacebuilders could use drones for in conflict settings: from peace activism using tactics for civil resistance, to citizen journalism that communicates the effects of conflict, to community monitoring and reporting of displacement due to violence.
I’m guessing this second solution is not going to sit easily with most readers. If you think it would never fly because people would be taken for spies and military / government officials would be afraid of them, then doesn’t that reinforce the three ethical problems outlined above? The more I consider how drones could be used for good in conflict settings, the more I think that local peacebuilders need to turn the ethics discourse on its head: as well as defending privacy and holding drone operators to account, start using the same tools and engage from a place of power.