
The importance of having good baseline data

Information and communication are the lifeline of any disaster response. They allow people on the ground to convey the situation and the urgent need for supplies and relief in specific locations, and they help organizations collaborate to avoid duplicated effort and gaps in assistance.

The crisis response community has long known that information and communications technology (ICT) can speed the coordination of relief efforts, making them more targeted and effective. Recent improvements in ICT, such as the availability of BGAN terminals, WiMAX, and WiFi mesh networks, provide an opportunity to improve information sharing, not only within organizations but also between them.

This blog post illustrates the need for a coordinated collection of baseline data in disaster-prone countries through a cross-organizational, multi-phased approach.

The humanitarian sector has the opportunity to harness these technological advances to improve information sharing during a crisis. Technology is not the solution in itself, but it is a significant tool that enables fast, well-informed decision-making.

The State of Crisis Information Management

Numerous challenges in information management arise when responding to a major disaster or conflict, such as:

  • recording the damage to housing, infrastructure, and services
  • tracking displaced populations
  • distributing the massive influx of humanitarian supplies
  • coordinating the work in and between clusters, as well as the work of dozens of agencies outside the cluster approach

A recent survey of organizations that responded to the devastating earthquake in Haiti found that one of the key issues they faced was an overall lack of baseline information about the country. For many of the UN clusters operating there, it took months to build a comprehensive picture of the situation before the earthquake struck, and only then could they begin to understand its effects.

In Haiti the situation was particularly devastating because almost all government offices and ministries had been destroyed in the earthquake, and most of their data systems were lost. This is a common issue faced by response organizations around the world.

Baseline and post-disaster information is collected and controlled by many autonomous parties, including national authorities, many of whom may be working together for the first time. Due to the lack of a common repository of baseline data, organizations spend a considerable amount of time either recreating the data or searching for it. It is therefore important to improve access to, and interoperability of, data collected before, during, and after an emergency. This is essential to building better response capacity.

Humanitarian response to sudden onset disasters requires:

  • rapid assessment of the spatial distribution of affected people and existing resources
  • good geographical information to plan initial response actions
  • shared knowledge of which organizations are working where (who-what-where or “3W data”) so that response can be coordinated to avoid gaps and overlaps in aid

This applies to any humanitarian response. But in a sudden onset disaster, the timeframes of information supply and demand are severely compressed. Pre-assembled information resources for the affected area may not exist. Even in areas where development projects were present before the crisis, data is often dispersed and unknown to the wider humanitarian community, or cannot be accessed and assimilated quickly enough.
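
To make the 3W idea concrete, below is a minimal sketch of what such a dataset can look like, and how grouping it by place and activity exposes gaps and overlaps. The field names, organizations, and place codes are invented for illustration; real 3W tables vary by emergency.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ThreeWRecord:
    """One row of a who-what-where (3W) table; all field names are illustrative."""
    organization: str    # who: the implementing agency
    activity: str        # what: the sector or type of assistance
    location_pcode: str  # where: place code (P-code) of the admin unit served
    status: str          # e.g. "planned", "ongoing", "completed"

# Invented example records.
records = [
    ThreeWRecord("Org A", "water trucking", "HT0111", "ongoing"),
    ThreeWRecord("Org B", "water trucking", "HT0111", "planned"),
    ThreeWRecord("Org C", "shelter kits", "HT0112", "ongoing"),
]

# Grouping by (where, what) makes overlaps and gaps visible at a glance.
coverage = defaultdict(list)
for r in records:
    coverage[(r.location_pcode, r.activity)].append(r.organization)

for (pcode, activity), orgs in coverage.items():
    note = "possible overlap" if len(orgs) > 1 else ""
    print(pcode, activity, orgs, note)
```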

Recurring data problems include:

  • Discoverability. Data is either not made available to, or is not discoverable by, the organizations that need it.
  • Availability. Data may not be immediately accessible, archived, or stored/backed up in a location outside of the devastated area.
  • Release restrictions. Data sets may be subject to legal restrictions. Even if these restrictions are waived for humanitarian use, there may be problems with immediate authorization and redistribution.
  • Formatting. Data may be unsuitable for direct import into a database or GIS system, and may require substantial processing (see the sketch after this list).
  • Conflicts. Multiple versions of the same dataset may circulate with no indication of which is authoritative, leaving responders to reconcile contradictory values.
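
As one example of the formatting problem, point data often arrives as a spreadsheet export that a GIS cannot ingest directly. The sketch below converts such a CSV into GeoJSON; the column names (name, lat, lon) and file names are assumptions for illustration, not a standard.

```python
import csv
import json

def csv_to_geojson(csv_path: str, out_path: str) -> None:
    """Convert a point dataset from CSV to GeoJSON.

    Assumes hypothetical columns 'name', 'lat', and 'lon'; rows with
    missing or unparseable coordinates are skipped.
    """
    features = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            try:
                lat, lon = float(row["lat"]), float(row["lon"])
            except (KeyError, TypeError, ValueError):
                continue  # a real pipeline would log these rows for review
            features.append({
                "type": "Feature",
                "geometry": {"type": "Point", "coordinates": [lon, lat]},
                "properties": {"name": row.get("name", "")},
            })
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump({"type": "FeatureCollection", "features": features}, f)

# Example usage with invented file names:
# csv_to_geojson("health_facilities.csv", "health_facilities.geojson")
```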

Emergencies create an ever-increasing number of information web portals, which is in itself a good thing. However, it can be problematic when the data is rapidly evolving. The enthusiasm to (re)publish as much information as possible can lead to confusion and inefficiency, as users search through multiple copies of similar-looking data to extract what is new or different.

The above issues are widely recognized by practitioners in humanitarian information management. Still, these problems recur in almost every sudden onset emergency, in both developed and developing countries.

Each emergency brings together a unique collection of local, national, and international humanitarian players. Some are experienced emergency responders, and some are not. Some are government-endorsed, whilst others are simply concerned citizens. While there will be some common elements across every emergency (government, UN agencies, major INGOs), the varying roles played by each make it impossible to apply a single ‘humanitarian blueprint’ to each new emergency. This vast range of experience, resources, and mandates can make sharing response best practices extremely difficult.

Common problems with baseline data can – and must – be resolved for each emergency. For example:

  • During the initial days of an emergency, the main coordinating agencies agree at a national or local level which administrative boundaries and P-code datasets should be used for coordination. It is critical that this decision is communicated to everyone involved in the disaster response (see the join sketch after this list).
  • Humanitarian assessment templates and base map data should be standardized and made compatible.
  • The supply of baseline data should be driven by the information needs of the humanitarian response. Priorities differ from emergency to emergency, and this presents a constant challenge in using limited resources to meet urgent information needs at each stage of the response.
  • The information needed by the affected community is not necessarily the same as the information demanded by large humanitarian agencies.
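
To illustrate the first point: once everyone uses the same P-codes, tables produced by different agencies can be joined reliably, and records carrying unrecognized codes surface immediately for review. A minimal sketch using pandas, with invented P-codes, names, and figures:

```python
import pandas as pd

# Agreed baseline: one row per admin unit, keyed by P-code (invented values).
boundaries = pd.DataFrame({
    "pcode": ["HT0111", "HT0112", "HT0113"],
    "admin_name": ["Commune A", "Commune B", "Commune C"],
})

# A field assessment keyed to the same P-codes; HT0199 is an unknown code.
assessments = pd.DataFrame({
    "pcode": ["HT0111", "HT0112", "HT0199"],
    "households_affected": [12000, 8500, 300],
})

# Left join on the shared key; unmatched codes show up with a missing name.
merged = assessments.merge(boundaries, on="pcode", how="left")
print(merged)
print("Unmatched P-codes:", merged.loc[merged["admin_name"].isna(), "pcode"].tolist())
```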

A well-coordinated humanitarian response will use multiple datasets, created by different personnel in different agencies, describing a highly dynamic and multi-faceted situation. Making these datasets interoperable and manageable imposes an overhead cost, but a data model that is planned strategically, rather than assembled reactively, will minimize that cost.
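
One low-cost way to plan strategically is to agree on a minimal shared record schema before the emergency, so that each agency's exports can be validated against a single contract. A sketch with entirely hypothetical field names:

```python
# Hypothetical shared contract: field name -> expected type.
REQUIRED_FIELDS = {"pcode": str, "date": str, "indicator": str, "value": float}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record conforms."""
    problems = []
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            problems.append(f"wrong type for {field}: expected {ftype.__name__}")
    return problems

print(validate({"pcode": "HT0111", "date": "2010-01-20",
                "indicator": "idp_count", "value": 5000.0}))  # -> []
print(validate({"pcode": "HT0111", "value": "5000"}))  # -> two problems
```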

Moving forward

A multi-agency effort is essential to improve the availability and accessibility of baseline and crisis information. This needs to be a collaborative effort of the entire humanitarian response community, with the support and involvement of the private and academic sectors. The now-disbanded IASC Task Force on Information Management did good work in defining the Common and Fundamental Operational Datasets (COD/FOD) that need to be collected for each country. The difficult part is to ensure that these datasets are actually available for every country, and that those already collected are kept up to date.
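
Keeping those datasets current is largely a monitoring problem: someone has to notice when a dataset has gone stale. A minimal sketch of such a check, using a hard-coded inventory and an assumed three-year freshness threshold purely for illustration (real registries and thresholds would be agreed per dataset):

```python
from datetime import date

# Hypothetical COD inventory for one country: dataset name -> last update.
cod_inventory = {
    "admin_boundaries": date(2009, 6, 1),
    "populated_places": date(2007, 3, 15),
    "population_statistics": date(2003, 1, 1),
}

MAX_AGE_YEARS = 3  # assumed threshold; in practice this varies by dataset
today = date.today()

for name, last_updated in cod_inventory.items():
    age_years = (today - last_updated).days / 365.25
    status = "OK" if age_years <= MAX_AGE_YEARS else "STALE - needs update"
    print(f"{name}: last updated {last_updated} ({age_years:.1f}y old) -> {status}")
```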

We at NetHope are exploring new and innovative ways to address this challenge, and we are looking for organizations interested in working with us. If you want to get involved, feel free to reach out to me for further information.
