New Assessment Tech Regarding Disaster Damages

Satellite imagery is frequently used for damage assessments following natural disasters, from hurricanes to forest fires, in order to tell responders what infrastructure has been damaged, which roads are still operational, and which airfields are ready to receive aid.

While satellite imagery can be produced rather quickly following a natural disaster, that data still needs to be processed by human analysts to determine what’s damaged and what’s still standing. 

Automating that process using machine learning algorithms could be key to getting that information to responders even faster. 

The US Defense Innovation Unit has launched the xView2 Challenge, seeking computer vision algorithms capable of identifying objects in satellite images that first responders might need, such as buildings. In addition to automatically locating a building in a satellite image, the new algorithms must also be able to assess what kind of damage, if any, the building has sustained. The goal is to detect key objects in overhead imagery in context and to assess damage in a disaster situation.
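To illustrate the kind of two-part task the challenge describes, the sketch below pairs a building-localization output with a per-pixel damage classification computed from a pre/post image pair. This is a minimal, hypothetical example: the architecture, layer sizes, and four damage classes are assumptions for illustration, not the challenge's reference approach.

```python
import torch
import torch.nn as nn

class BuildingDamageNet(nn.Module):
    """Toy two-task model: locate buildings and classify damage from pre/post imagery."""

    def __init__(self, num_damage_classes: int = 4):
        super().__init__()
        # Shared encoder applied to both the pre- and post-disaster image tiles.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Localization head: per-pixel building/background mask from the pre-disaster image.
        self.localize = nn.Conv2d(64, 1, kernel_size=1)
        # Damage head: per-pixel damage class from concatenated pre/post features.
        self.damage = nn.Conv2d(128, num_damage_classes, kernel_size=1)

    def forward(self, pre_img: torch.Tensor, post_img: torch.Tensor):
        pre_feat = self.encoder(pre_img)
        post_feat = self.encoder(post_img)
        building_mask = self.localize(pre_feat)
        damage_logits = self.damage(torch.cat([pre_feat, post_feat], dim=1))
        return building_mask, damage_logits

# Example: a pair of 3-channel 256x256 tiles yields a building mask and a damage map.
pre = torch.randn(1, 3, 256, 256)
post = torch.randn(1, 3, 256, 256)
mask, damage = BuildingDamageNet()(pre, post)
print(mask.shape, damage.shape)  # torch.Size([1, 1, 256, 256]) torch.Size([1, 4, 256, 256])
```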

A publicly available dataset of satellite imagery that includes images from before and after several types of disaster, including wildfires, landslides, earthquakes, and floods, will be used in the challenge. The dataset includes 700,000 building annotations across 5,000 square kilometers spanning 15 countries, according to c4isrnet.com.
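Because the dataset is built around before-and-after images of the same locations, a typical first step is simply pairing each pre-disaster tile with its post-disaster counterpart. The helper below assumes a hypothetical file layout (`<id>_pre.png` / `<id>_post.png` in one folder); the actual dataset's structure may differ, and this only demonstrates the pairing idea.

```python
from pathlib import Path

def iter_pre_post_pairs(root: str):
    """Yield (pre, post) image paths for tiles named <id>_pre.png / <id>_post.png."""
    root_path = Path(root)
    for pre_path in sorted(root_path.glob("*_pre.png")):
        post_path = pre_path.with_name(pre_path.name.replace("_pre", "_post"))
        if post_path.exists():
            yield pre_path, post_path  # one before/after pair for a single location

# Example usage with an assumed local directory of tiles.
for pre_path, post_path in iter_pre_post_pairs("tiles/"):
    print(pre_path.name, "->", post_path.name)
```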

It remains to be seen whether this initiative to develop algorithms that identify buildings and label damage from satellite imagery will lead to practical applications in emergency response.