Guest post by Ryan Prosser, CTO, FloodMapp

Flooding is becoming more frequent and more severe, but the flood information currently communicated to businesses and the public is too broad, resulting in lost lives, assets, and productivity. The World Bank estimates that at least 30% of flood damage is preventable with real-time, accurate, and understandable flood information: over $20 billion a year in avoidable losses on a global scale. The traditional technology used for flood modeling cannot scale while remaining high-resolution, particularly with real-time data, and these losses are the result.

Who we are

FloodMapp’s CEO Juliette Murphy and I saw this gap in the capability of traditional flood technology. After both working in engineering firms for over a decade and experiencing two major floods first-hand, we set out to create a new rapid flood model that would drastically improve emergency response and save lives that should never have been lost. Today, we lead a highly specialized team of data scientists, hydrologists, software engineers and growth specialists to build FloodMapp, a world-first flood modeling solution purpose-built for flood forecasting and early warning. Aimed at improving safety and preventing damage, the FloodMapp solution provides highly accurate, real-time, property-specific, and dynamic flood inundation and depth insights for businesses exposed to flooding. In an emergency response setting, it is 10,000 times faster and 200% higher resolution than traditional models.

With AWS, our team is able to create high-throughput data pipelines for scientific computation that leverage machine learning to deliver critical, visual, real-time flood information. This speed and precision of information drastically improves situational awareness and informed decision-making and, most importantly, saves lives before, during, and after a flood event. This technology is a game-changer for emergency and asset managers, as well as for resilience leaders who want to keep their communities safe.

How it works

The FloodMapp product is technically complex. From day one, we chose AWS to help us launch a solution that is highly computationally intensive, working with billions of data points (1.3 billion as of May 27th) and 350,000 new measurements every hour. AWS has always been present and supportive in the Queensland startup community and has a proven track record of hosting large-scale data pipelines. Our data pipelines currently rely heavily on AWS Batch and ECS. Without these services, we would not be able to run our rapid flood models across Queensland and the entire continental United States. Like most things, it has been an iterative journey to get where we are. We didn’t start out with the ability, or the need, to run up to 207 concurrent hydraulic models on Batch, each with its own EBS snapshot of terrain data. We have been in development for 1.5 years, starting small with our proofs of concept and pushing each new service until it was no longer fit for purpose. The biggest change for us was moving to containerization. It was a transition, but a worthwhile one.
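To make that fan-out pattern concrete, here is a minimal, hedged sketch of submitting independent hydraulic model runs as concurrent AWS Batch jobs with boto3. This is not FloodMapp’s actual pipeline; the job queue, job definition, catchment IDs, and environment variable are all hypothetical placeholders.

```python
import boto3

batch = boto3.client("batch")

# Hypothetical catchment identifiers; a real run would cover thousands.
catchments = ["brisbane-river", "logan-river", "mary-river"]

for catchment in catchments:
    # Each submission becomes an independent containerized model run.
    response = batch.submit_job(
        jobName=f"hydraulic-model-{catchment}",
        jobQueue="flood-model-queue",        # hypothetical job queue
        jobDefinition="hydraulic-model:1",   # hypothetical job definition
        containerOverrides={
            "environment": [
                # Tell the container which catchment's terrain data
                # (e.g. a volume restored from an EBS snapshot) to load.
                {"name": "CATCHMENT_ID", "value": catchment},
            ]
        },
    )
    print(catchment, "->", response["jobId"])
```

Because every job is independent, Batch is free to pack them onto whatever compute capacity is available, which is what makes running hundreds of models in parallel practical.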

What have we learned?

We’ve been focused on scale since day one. It has been an interesting journey of discovery and design to construct a system capable of the high-performance scientific workloads required to deliver hydraulic model results for over 20,000 catchments across Australia and the United States. We learned the hard way that it is difficult to know in advance where the performance bottleneck will be. We have a great team that is always trying to get the best performance out of every component, which also means we have hit bottlenecks everywhere: processes being CPU-bound, processes running out of memory, instances running out of disk space, and even being constrained by disk write speed. The first big insight comes from one of my favorite quotes: “How do you eat a whale? One bite at a time.” It’s important to figure out how to make a problem embarrassingly parallel so that you can containerize it with Docker and use ECR and Batch (or ECS/EKS) to run the computations in concert. Once you can split a problem into 1,000 smaller problems, you can start to look at EC2 launch templates to add disk size and provisioned write speed, as in the sketch below. But it all starts with breaking the problem down into small bites.
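As an illustration of both halves of that advice, the hedged sketch below uses boto3 to (1) create an EC2 launch template that gives each Batch compute instance a larger gp3 root volume with provisioned throughput, and (2) submit a single array job that fans a containerized model out into 1,000 independent child jobs. Every resource name, size, and number here is an assumption for illustration, not our production configuration.

```python
import boto3

ec2 = boto3.client("ec2")
batch = boto3.client("batch")

# (1) A launch template adding disk size and provisioned write speed.
#     A Batch compute environment can reference this template so every
#     instance it starts inherits the larger, faster volume.
ec2.create_launch_template(
    LaunchTemplateName="flood-model-disk",   # hypothetical name
    LaunchTemplateData={
        "BlockDeviceMappings": [
            {
                "DeviceName": "/dev/xvda",
                "Ebs": {
                    "VolumeSize": 500,    # GiB of scratch for terrain data
                    "VolumeType": "gp3",
                    "Throughput": 500,    # MiB/s provisioned throughput
                    "Iops": 6000,
                },
            }
        ]
    },
)

# (2) One array job = 1,000 small "bites" of the same container image.
#     Each child job reads AWS_BATCH_JOB_ARRAY_INDEX to select its own
#     subproblem, keeping the workload embarrassingly parallel.
batch.submit_job(
    jobName="hydraulic-model-sweep",
    jobQueue="flood-model-queue",        # hypothetical queue
    jobDefinition="hydraulic-model:1",   # hypothetical definition
    arrayProperties={"size": 1000},
)
```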

We work predominantly with innovative government, utility, insurance and mining leaders in the US and Australia, bringing this solution to flood-exposed communities to save lives and assets. If you’d like to know more about how we are helping communities and companies become more resilient to flooding events with our new rapid flood model, please visit our website at www.floodmapp.com.

We also just love being connected with forward-thinking engineers, emergency managers and disaster management enthusiasts, so please follow us on LinkedIn and Twitter (@floodmapp) for company updates and interesting information about the current state of global and local flood data.
