Why automating analysis of aerial images is taking off for humanitarian response
By Jean-Martin Bauer, Senior Advisor, Data, Digital and Innovation, WFP
With six decades of innovation under its belt, the World Food Programme (WFP) has an eye for spotting emerging technologies that can have a profound impact on emergency response. So it has been quick to home in on the power of machine learning to boost the speed and accuracy of post-disaster assessments and mapping, and to get more targeted assistance to the people who need it most.
In a watershed moment, WFP flew its first drones in post-disaster assessment following Cyclone Idai in Mozambique (2019). But, while the kit itself offers huge benefits for easy, affordable humanitarian operations, that’s only half the story. Perhaps less evident is the second layer of technology that extracts the real value from the goldmine of data buried in the high-quality images the drones collect. For this, WFP developed a Digital Engine for Emergency Photo-analysis (DEEP), which uses machine learning to turn that bird’s eye view into actionable information for response teams.
Bright ideas can become practical solutions, but they need strong operational partners taking them on. In the capable hands (and computers) of Mozambique’s National Institute for Disaster Management (INGD) and the University of Mozambique, it became clear that DEEP can cut post-disaster assessments from weeks to less than an hour, with 85 percent accuracy in pinpointing damaged buildings. And the best part? It can be installed on any laptop with a decent video card, be run offline and be equally effective when applied to satellite images.
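To make the idea concrete, the core of a tool like DEEP is an image-classification pipeline: break imagery into tiles, extract features, and label each tile as damaged or intact. The toy sketch below is illustrative only, not DEEP's actual code; it uses synthetic 8x8 "tiles" and a simple logistic-regression stand-in for the trained model.

```python
import numpy as np

# Illustrative sketch only (not DEEP's actual implementation): classify
# image tiles as "damaged" vs "intact" from two hand-crafted features.
# Real tools use trained deep networks over drone or satellite imagery.

rng = np.random.default_rng(0)

def extract_features(tile):
    """Reduce a tile (H x W array) to a feature vector: mean brightness
    plus edge density, a crude proxy for rubble texture."""
    gx = np.abs(np.diff(tile, axis=1)).mean()
    gy = np.abs(np.diff(tile, axis=0)).mean()
    return np.array([tile.mean(), gx + gy])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lr=0.5, steps=500):
    """Fit logistic regression by plain gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

# Synthetic training tiles: intact roofs are smooth, damaged ones noisy.
intact = [rng.normal(0.5, 0.02, (8, 8)) for _ in range(50)]
damaged = [rng.normal(0.5, 0.3, (8, 8)) for _ in range(50)]
X = np.array([extract_features(t) for t in intact + damaged])
y = np.array([0] * 50 + [1] * 50)

w, b = train(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(int)
accuracy = (preds == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Because the whole model is a handful of array operations, a pipeline of this shape can run offline on an ordinary laptop, which is the property the DEEP deployment in Mozambique exploits.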
And DEEP isn’t WFP’s only experience of working with great partners to turn mountains of data from satellite images into practical support for decision makers. It has been working with Google Research to develop SKAI, which can assess building damage in less than 24 hours by applying machine learning to satellite images. What makes SKAI stand out is that it doesn’t need large labelled datasets for training, yet it still proved 80 percent more accurate than manual assessments in large urban areas when used after the Beirut port explosion (2020) and Cyclone Yasa in Fiji (2021).
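SKAI's light demand for labels comes from semi-supervised learning. One family of such techniques is pseudo-labelling: a model fitted on a handful of labelled examples assigns provisional labels to the unlabelled examples it is confident about, then refits on the enlarged set. The sketch below illustrates that general idea on synthetic data; it is not SKAI's actual algorithm, and all names in it are hypothetical.

```python
import numpy as np

# Illustrative pseudo-labelling sketch (not SKAI's actual method):
# start from very few labels, label confident points, and refit.

rng = np.random.default_rng(1)

def fit_centroids(X, y):
    """Nearest-centroid 'model': one mean vector per class (0 and 1)."""
    return np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(cent, X):
    """Return predicted class and distance to the nearest centroid."""
    d = np.linalg.norm(X[:, None, :] - cent[None, :, :], axis=2)
    return d.argmin(axis=1), d.min(axis=1)

# Two feature clusters standing in for "intact" vs "damaged" embeddings.
X_all = np.vstack([rng.normal([0, 0], 0.5, (200, 2)),
                   rng.normal([3, 3], 0.5, (200, 2))])
y_all = np.array([0] * 200 + [1] * 200)

# Only 4 labelled examples; the remaining 396 are unlabelled.
labelled = np.array([0, 1, 200, 201])
cent = fit_centroids(X_all[labelled], y_all[labelled])

# Pseudo-label the confident unlabelled points, then refit.
pseudo, dist = predict(cent, X_all)
confident = dist < 1.5
cent = fit_centroids(X_all[confident], pseudo[confident])

final, _ = predict(cent, X_all)
accuracy = (final == y_all).mean()
print(f"accuracy with only 4 labels: {accuracy:.2f}")
```

The point of the sketch is the label economy: a few annotated examples plus a large pool of raw imagery can stand in for the huge labelled datasets that conventional supervised training demands.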
However, even beyond local capacity building, tools like DEEP and SKAI have a long trek ahead to become truly scalable in emergency response. To get there, humanitarians will need to tackle three challenges.
First, we need to make imagery ‘open’ for humanitarian use. After all, machine learning relies on huge amounts of data. High-resolution satellite imagery can be expensive, and current datasets don’t cover the bulk of humanitarian contexts. Efforts to make privately held data available to humanitarians are welcome.
Second, any technology used for humanitarian work needs robust standards and common governance. The community needs to come together, perhaps through the Emergency Telecommunications Cluster (ETC), to set these standards.
And third, the community will need to confront the fear that machine learning assessments are another technological “black box.” Humanitarians will need to work to demystify artificial intelligence and machine learning with local specialists. Informed confidence will help these tools support national and local emergency preparations.
That said, machine learning clearly has the potential to let humanitarians punch well above their weight when disaster strikes, so coordination between government, the private sector and humanitarian actors must be stepped up to unlock it. Clear steps towards that goal are outlined in a new policy brief launched as part of this week’s Multi-stakeholder Forum on Science, Technology and Innovation. Every disaster that follows, in which time or lives are lost, will be a reminder to redouble our efforts.