
rimbun.io

Table of contents

  • Introduction
  • Description
  • How to use
  • Input
  • Output
  • Workflows
  • Basemodels
  • Open data
  • Alternatives

Introduction

Rimbun.io is an open source tool developed by Green City Watch in collaboration with The World Bank and the Government of Indonesia. The tool is built to leverage remote sensing and machine learning, known as “geoAI”, to measure urban blue spaces and riparian zones.



Description

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Rimbun.io is an open source water detection algorithm built on the U-Net model architecture. Rimbun.io was designed to run on AWS EC2 instances, but to improve the model, training data needs to be created on a local computer using GIS software.

How to use

The experimental workflow in this repository mainly consists of Jupyter notebooks. The easiest way to use these notebooks is to start an AWS Deep Learning AMI; the notebooks have been tested on an Ubuntu Deep Learning AMI. Requirements files are supplied in this repository.
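Assuming a Deep Learning AMI instance with Python, pip, and Jupyter already available (standard on those images), the setup might look like the following sketch; the repository URL matches this page, but the requirements filename may differ:

```shell
# On a running Ubuntu Deep Learning AMI instance:
git clone https://github.com/krakchris/rimbun.io.git
cd rimbun.io

# install the Python dependencies from the supplied requirements file
pip install -r requirements.txt

# start Jupyter and open the workflow notebooks in a browser
jupyter notebook --no-browser --port 8888
```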

Input

  • 4- or 8-band aerial or satellite imagery, including RGBI (red, green, blue, near-infrared) bands, with a spatial resolution between 20 and 80 cm
  • A trained model or water annotations
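The input contract above can be expressed as a small check; the band counts and resolution bounds come from the bullets, while the function name and interface are hypothetical:

```python
def validate_input(band_count: int, resolution_cm: float) -> bool:
    """Check that imagery matches the expected input: 4 or 8 bands
    (RGB + near-infrared) at 20-80 cm spatial resolution."""
    if band_count not in (4, 8):
        return False
    return 20 <= resolution_cm <= 80

# Example: 4-band imagery at 50 cm qualifies; 3-band at 10 cm does not.
print(validate_input(4, 50))   # True
print(validate_input(3, 10))   # False
```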

Output

  • Water detection shapefiles and metadata

Workflows

Workflow for creating inferences from large satellite/aerial image

  1. Collect imagery

    • purpose: download imagery from GBDX
    • input: city name
    • output: gridded imagery data on an S3 drive
  2. Run inference

    • purpose: detect water bodies on satellite imagery
    • input: satellite data (step 1) and pre-trained model (available on S3)
    • output: vector data of water areas
  3. Measure extra variables

    • purpose: measure changes in green cover and overlap with the parcel map
    • input: vector data of water bodies
    • output: green cover fraction values surrounding water bodies over time, and parcel overlap values per type for each water body
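The three steps above chain each step's output into the next. Every function in this sketch is a hypothetical stand-in for the corresponding notebook, included only to show how the inputs and outputs connect; the bucket paths and city name are placeholders:

```python
def collect_imagery(city: str) -> list:
    # Stand-in for step 1: would download gridded imagery tiles to S3.
    return [f"s3://bucket/{city}/tile_{i}.tif" for i in range(4)]

def run_inference(tiles: list, model: str) -> list:
    # Stand-in for step 2: would run the U-Net model on each tile and
    # vectorize the detected water bodies.
    return [{"tile": t, "water_polygons": []} for t in tiles]

def measure_variables(water: list) -> dict:
    # Stand-in for step 3: would compute green-cover fractions and
    # parcel-map overlap per water body.
    return {"n_water_features": len(water)}

tiles = collect_imagery("jakarta")
water = run_inference(tiles, model="s3://bucket/models/unet_base.h5")
stats = measure_variables(water)
print(stats)  # {'n_water_features': 4}
```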

Workflow for (Re)Training a model on hand annotations from Image

  1. Collect imagery

    • same as above
  2. Create training data

    • use desktop GIS tools to generate training data
  3. Retrain model

    • purpose: retrain the model, starting from the pre-trained model, to adapt the workflow to a new area
    • input: satellite imagery, training data, pre-trained model
    • output: a retrained model tuned to the specific area
  4. Run inference

    • same as above
  5. Measure metadata

    • same as above
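The key idea in step 3 is fine-tuning: instead of training from scratch, the pre-trained weights are updated on the hand-annotated (image, mask) pairs for the new area. This sketch simulates that loop with stubs; the function, the weight dictionary, and the file names are all hypothetical, not the repository's actual training code:

```python
def retrain(pretrained_weights: dict, training_pairs: list, epochs: int = 3) -> dict:
    # Stand-in for step 3: start from a copy of the pre-trained model's
    # weights and update them on each (image, mask) annotation pair.
    weights = dict(pretrained_weights)
    for _ in range(epochs):
        for image, mask in training_pairs:
            weights["steps_seen"] = weights.get("steps_seen", 0) + 1
    return weights

base = {"architecture": "unet", "steps_seen": 0}
annotations = [("tile_0.tif", "mask_0.tif"), ("tile_1.tif", "mask_1.tif")]
model = retrain(base, annotations)
print(model["steps_seen"])  # 6 (2 pairs x 3 epochs)
```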

Basemodels

We have tested the U-Net model architecture and confirmed that it works.

The base model can be downloaded here: Link
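U-Net is an encoder-decoder architecture with skip connections: the encoder halves spatial resolution while doubling channels, and the decoder mirrors the encoder, concatenating each skip connection. A plain-Python sketch of the feature-map sizes for a 256-pixel input; the depth and base channel count are illustrative defaults, not values taken from the repository's model:

```python
def unet_shapes(input_size: int = 256, base_channels: int = 64, depth: int = 4):
    """Return (spatial_size, channels) per encoder level, plus the
    mirrored decoder path where skip connections double the channels."""
    encoder = [(input_size // 2**i, base_channels * 2**i) for i in range(depth)]
    # The decoder upsamples back up; via skip connections it sees the
    # encoder channels concatenated with the upsampled channels.
    decoder = [(size, ch * 2) for size, ch in reversed(encoder[:-1])]
    return encoder, decoder

enc, dec = unet_shapes()
print(enc)  # [(256, 64), (128, 128), (64, 256), (32, 512)]
print(dec)  # [(64, 512), (128, 256), (256, 128)]
```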

Open data

Rimbun.io is tested on WorldView-2/3 imagery. If you do not have access to commercial high-resolution imagery, you can use open datasets provided by Maxar (covering specific sites only), or have a look at the NAIP dataset here, covering the USA (not tested). Otherwise, check out this vast collection of remote sensing data.

Alternatives

There are several open source frameworks for applying machine learning to satellite imagery. Find a selection below:
