UNDP has been exploring ways to use crowdsourcing in conflict prevention. Debates on this topic tend to focus on how to collect and analyse data from the crowd that is credible and provides early warning (more on that below). Whilst these are important questions, there is also much to explore in the use of the same technologies for early response. For traditional conflict early warning systems, early response comes in the form of an institutional partner that has the response capabilities to act on information collected. For crowdsourced conflict early warning, an institutional responder is one option (crowdsourcing can be proposed as one data collection technique available to early responders). Another option is to reverse the information flow in order to mobilize a response directly from the crowd.
Crowdfeeding is to early response what crowdsourcing is to early warning. Crowdfeeding involves using the same technologies that are used to gather data from the crowd to feed that data back to the crowd. By enabling this kind of communication with the crowd, a crowdfeeding early response system could not only help organizations involved in early response to deliver peace messages and conflict-prevention information, but potentially also allow the crowd to self-organize grassroots responses.
In many ways, using technology to deliver peace messages is not so different from traditional early response. It’s a communication strategy that could be employed by an institutional responder alongside other similar activities (peace festivals, community visits, public information campaigns). Crowdfeeding for grassroots early response is more complicated – it’s also closer to activism than it is to early response. Much like crowdsourcing can’t be controlled, there’s an element of crowdfeeding that can’t be controlled. Could some activist tactics that involve crowdfeeding be adapted to early response (e.g. create a flashmob – but a peace flashmob)? Could a crowdfeeding warning system work, building on experiences of crowdfeeding used to direct activists during demonstrations?
All that said, before getting to crowdfeeding, data has to be collected from the crowd and turned into credible, actionable information.
There are some evident obstacles to collecting crowdsourced data. In countries where access to technologies (mobile and internet) is still patchy, data can be skewed by who has access. (One practical measure: identify geographic areas with no coverage and socio-economic groups with lower access, and control for those.) Even with access, the crowd doesn’t always want to tell you what’s up. UNDP mentions the successful example of monitoring during Kenya’s 2010 constitutional referendum; a similar effort around the 2011 referendum in Sudan was less successful. It’s worth asking the question of what incentives the crowd has to report to an early warning system (and faith in an early response mechanism may be the key). Finally, where there is limited political space, it may be unethical to ask the crowd to provide data that could put them in danger.
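One way to sketch what "controlling for access" might look like in practice is simple inverse-coverage reweighting. The regions, coverage rates, and report counts below are invented for illustration; real coverage estimates would have to come from telecom or survey data.

```python
# Illustrative sketch (hypothetical data): reweight crowdsourced report counts
# by the inverse of each region's mobile-coverage rate, so areas with patchy
# access are not under-counted relative to well-connected ones.

coverage = {"region_a": 0.90, "region_b": 0.45, "region_c": 0.20}  # reachable share of population
reports = {"region_a": 180, "region_b": 40, "region_c": 6}         # raw report counts

def adjusted_counts(reports, coverage):
    """Scale each region's raw count by 1 / coverage rate, approximating
    what reporting might look like if access were equal everywhere."""
    return {region: count / coverage[region] for region, count in reports.items()}

print(adjusted_counts(reports, coverage))
```

Here region_c's 6 raw reports scale to 30 adjusted reports, a flag that low coverage, rather than calm, may explain the thin signal there.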
Once you get data from the crowd, the challenge becomes how to make it usable. If analyzing conflict data is already difficult, analyzing crowdsourced conflict data is even more problematic. Having done some work with this kind of data through the SBTF, here’s one basic conclusion: crowdsourced data is an excellent, fast way of letting responders know where and about what they should be asking further questions. It cannot provide answers on its own; it points you in the direction of further inquiry (much faster than any other system). The UN’s Global Pulse is also doing some interesting research on analysis of social media data for early warning. Although not directly applicable (UN Global Pulse does not focus on conflict), lessons can be drawn from the techniques they explore.
Whatever methods are used, credible analysis hinges on the verification process put in place. One possible technique is to triangulate crowdsourced tagging of satellite imagery with crowdsourced reporting from the ground. Hiring field monitors to verify information is another (more costly) possibility. A network of field monitors can also grow organically – if one reporter from the public consistently reports information that is deemed correct (when compared to information from field monitors, for example), then the reporter becomes a trusted source.
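The organic growth of a monitor network described above can be sketched as a simple promotion rule: a public reporter becomes trusted once enough of their reports have been confirmed against field-monitor information. The thresholds below (minimum sample size, confirmation ratio) are assumptions for illustration, not values from any deployed system.

```python
# Minimal sketch of the organic trust mechanism: promote a public reporter
# to "trusted source" once a sufficient share of their reports has been
# confirmed by field monitors. Thresholds are illustrative assumptions.

MIN_REPORTS = 5      # don't judge reliability on too small a sample
TRUST_RATIO = 0.8    # share of confirmed reports needed for promotion

class Reporter:
    def __init__(self, name):
        self.name = name
        self.confirmed = 0
        self.total = 0

    def record(self, confirmed_by_monitor):
        """Log one report and whether field monitors confirmed it."""
        self.total += 1
        if confirmed_by_monitor:
            self.confirmed += 1

    @property
    def trusted(self):
        return (self.total >= MIN_REPORTS
                and self.confirmed / self.total >= TRUST_RATIO)

r = Reporter("anonymous_sms_user")
for ok in [True, True, True, False, True, True]:
    r.record(ok)
print(r.trusted)  # 5 of 6 reports confirmed (83%) → True
```

A real system would also need trust to decay (a promoted reporter who starts sending bad information should lose the status), but the core idea is this comparison against already-verified sources.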
When considering credibility and how this limits analysis, it’s worth also thinking what kind of questions are asked of the crowd. Crowdsourcing for conflict data might benefit most from focusing on subjective questions. Not “was there a violent attack here today?” – which would need a lot of verification, could lead to malicious use for spreading rumours, requires links to police reporting, etc. Rather “do you feel safe here today?”. That doesn’t need verifying. Someone can feel unsafe regardless of what is objectively going on around them, and this feeling is in itself an important indicator of the likelihood of conflict in the near future.
There is tremendous potential in the application of technologies that connect the crowd to conflict early warning and response. Despite the risks and challenges, these new methods have the potential to give voice to people living in conflict and empower them to act for peace. It will be interesting to see where UNDP takes this.