Semi-automatic delineation of visible cadastral boundaries from high-resolution remote sensing data

The software tool supports the delineation of visible boundaries by automatically retrieving information from high-resolution optical sensor data captured with UAV, aerial, or satellite platforms. The automatically extracted boundary features are then used to support the subsequent interactive delineation. The tool is designed for areas in which boundaries are demarcated by physical objects and are thus visible. It aims to improve current indirect surveying practice: it simplifies image-based cadastral mapping by making use of image analysis and machine learning.

The tool’s workflow consists of (a) image segmentation, (b) boundary classification, and (c) interactive delineation:

Figure: Boundary delineation workflow proposed to improve indirect surveying.
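The first two steps, image segmentation and boundary classification, can be prototyped with common open-source libraries. The sketch below is only an illustration of the idea and not the tool's actual implementation: it assumes scikit-image for over-segmentation and scikit-learn for a random-forest boundary classifier, with a hypothetical input file and deliberately simple placeholder features and labels.

```python
# Illustrative sketch of steps (a) and (b); library choices, file names, features
# and labels are assumptions, not the its4land implementation.
import numpy as np
from skimage import io, measure, segmentation
from sklearn.ensemble import RandomForestClassifier

# (a) Image segmentation: over-segment the orthoimage so that segment outlines
# form candidate boundary lines.
image = io.imread("uav_orthoimage.tif")  # hypothetical UAV orthoimage (RGB)
segments = segmentation.felzenszwalb(image, scale=400, sigma=0.8, min_size=200)
candidate_outlines = segmentation.find_boundaries(segments, mode="thick")

# (b) Boundary classification: describe each segment with simple features and
# score how likely its outline coincides with a visible cadastral boundary.
def segment_features(image, segments):
    """Toy per-segment features; a real system would use richer descriptors."""
    props = measure.regionprops(segments + 1, intensity_image=image[..., 0])
    return np.array([[p.mean_intensity, p.perimeter, p.area] for p in props])

features = segment_features(image, segments)
labels = np.random.randint(0, 2, len(features))  # placeholder; real labels come from reference boundary data
classifier = RandomForestClassifier(n_estimators=100).fit(features, labels)
boundary_likelihood = classifier.predict_proba(features)[:, 1]
```

In the actual workflow, the classified boundary candidates are exported as vector lines and handed over to step (c), the interactive delineation in QGIS.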

All source code can be found on GitHub. The code is open source and free of charge to use. Manuals are provided in our GitHub wiki.

The interactive delineation is implemented in the BoundaryDelineation QGIS plugin. It can be downloaded via the QGIS plugin repository.
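Besides the plugin's own dialogs, the automatically extracted boundary candidates can also be inspected directly in QGIS. The snippet below is a minimal sketch for the QGIS Python console; the file name is hypothetical, and the BoundaryDelineation plugin loads its inputs through its own interface.

```python
# Minimal example for the QGIS Python console: load extracted boundary
# candidates (hypothetical file name) as a vector layer for interactive editing.
from qgis.core import QgsVectorLayer, QgsProject

candidates = QgsVectorLayer("boundary_candidates.gpkg", "boundary candidates", "ogr")
if candidates.isValid():
    QgsProject.instance().addMapLayer(candidates)
```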

A YouTube video demonstrating the use of the BoundaryDelineation QGIS plugin is available.

Background

The very high resolution of UAV-based images (better than 5 cm ground resolution) opens new possibilities for mapping. A human operator can easily recognize most objects in the imagery. However, for an efficient workflow, objects of interest should ideally be detected and extracted from the imagery automatically: this expedites the surveying and mapping process. Although automatic image interpretation has a long history in research, the very high resolutions available nowadays bring new challenges: working on an object basis rather than a pixel basis while still exploiting the pixel resolution for the retrieval of land tenure object features, modelling and exploiting context at several hierarchy levels, and making efficient use of prior knowledge. Our tool aims to exploit the imagery captured in the its4land UAV flights to enable automatic land tenure feature extraction. Such an approach cannot deliver a complete match, as some tenure boundaries are only social and not visible to sensors; however, even a 50% match would radically reduce the cost and time of tenure mapping workflows.

Feedback

Feel free to share your ideas on the current state of the boundary delineation workflow. You can do this by evaluating the workflow in terms of its strengths, weaknesses, opportunities, and threats (SWOT analysis) via the following form:

https://goo.gl/forms/x7S2v56S6L2c9WDT2