K-fold cross validation in GRASS GIS

A common technique to estimate the accuracy of a predictive model is k-fold cross-validation. In k-fold cross-validation, the original sample is randomly partitioned into k sub-samples of approximately equal size. One sub-sample is retained as the validation data for testing the model, and the remaining sub-samples are combined and used as training data. The process is then repeated k times, with each sub-sample used exactly once as the validation data (Table 1).

Table 1. Illustration of data partitioning in a 4-fold cross-validation, with training data used to train the model, and test data to validate the model.

The k evaluation results can then be averaged (or otherwise combined) to produce a single estimate. The advantage of this method is that all observations are used for both training and validation, and each observation is used for validation exactly once.
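The partitioning scheme of Table 1 is easy to express in plain Python. The following is a minimal, GRASS-independent sketch (the function names are my own) of how records can be split into k folds, with each fold scored once as the validation set:

```python
import random

def kfold_indices(n, k, seed=42):
    """Randomly partition n record indices into k folds of near-equal size."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    # Fold i takes every k-th shuffled index; fold sizes differ by at most one
    return [idx[i::k] for i in range(k)]

def cross_validate(records, k, train_and_score):
    """Run k-fold CV: each fold serves as test set once; return the mean score."""
    folds = kfold_indices(len(records), k)
    scores = []
    for i in range(k):
        test = [records[j] for j in folds[i]]
        train = [records[j] for f in range(k) if f != i for j in folds[f]]
        scores.append(train_and_score(train, test))
    return sum(scores) / k
```

The `train_and_score` argument is a user-supplied function that fits a model on the training records and returns an evaluation score for the test records; the mean over all k folds is the cross-validated estimate.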

Libraries for modelling and machine learning, such as those in R and Python's scikit-learn, often come with built-in cross-validation routines. But it is also fairly easy to build such a routine yourself. This tutorial shows how to build a k-fold cross-validation routine in GRASS GIS, for example to compare the predictive performance of two interpolation techniques: inverse distance weighting (IDW) and bilinear spline interpolation.
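In GRASS GIS such a routine can be assembled from existing modules, e.g., v.kcv to assign sample points to folds and v.surf.idw or v.surf.bspline for the interpolation itself. To make clear what the IDW step computes, here is a minimal NumPy sketch (the function name and the epsilon clamp are my own choices, not GRASS code):

```python
import numpy as np

def idw_predict(xy_train, z_train, xy_test, power=2.0):
    """Inverse distance weighting: each prediction is a distance-weighted
    mean of the training values, with weights 1/d**power."""
    # Pairwise distances between test points (rows) and training points (cols)
    d = np.linalg.norm(xy_test[:, None, :] - xy_train[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)            # avoid division by zero at sample points
    w = 1.0 / d**power
    return (w @ z_train) / w.sum(axis=1)
```

In the cross-validation, the training points of each fold take the role of `xy_train`/`z_train` and the withheld test points the role of `xy_test`, after which predicted and observed values can be compared (e.g., as an RMSE per fold).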

Figure 1. A) Elevation map of North Carolina. B) Elevation estimate based on inverse distance weighting interpolation of the elevation at 150 random sample points. C) Residual map with the differences between A and B. D) Relative differences between A and B, computed as (A-B)/A. Maps C and D are overlaid with the 150 sample locations.

This tutorial is available on https://tutorials.ecodiv.earth.

Plotting GRASS data in Python

GRASS GIS offers some useful but basic plotting options for raster data. However, for plotting data from attribute tables and for more advanced graphs, we need other software tools. In this tutorial I explore some of the possibilities offered by Pandas' plot() method and how plots can be tuned further using the matplotlib/pyplot library.
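As a minimal illustration (the data below are made up, not taken from the sample data set), a Pandas plot can be created and then tuned through the matplotlib Axes object it returns:

```python
import matplotlib
matplotlib.use("Agg")          # non-interactive backend; no display needed
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical attribute data: distance to the nearest school per municipality
df = pd.DataFrame({
    "municipality": ["Apex", "Cary", "Raleigh"],
    "mean_dist_m": [1200.0, 950.0, 800.0],
})

# Pandas' plot() returns a matplotlib Axes, which we can tune further
ax = df.plot(kind="bar", x="municipality", y="mean_dist_m", legend=False)
ax.set_ylabel("Distance to nearest school (m)")
plt.tight_layout()
plt.savefig("school_distance.png")
```

Anything matplotlib can do to an Axes (labels, ticks, annotations, styles) can thus be applied on top of the quick Pandas plot.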

Map of the municipalities in Wake County, North Carolina, showing for each municipality the distribution of distances to the nearest school (data source: North Carolina sample data set).


GRASS and Pandas – from attribute table to pandas dataframe


In this post I show how to import the attribute table of a vector layer in a GRASS GIS database into a Pandas data frame. Pandas (the Python Data Analysis Library) provides high-performance, easy-to-use data structures and data analysis tools for the Python programming language. For people familiar with R, the Pandas data frame is similar to the R data frame. Both resemble the most common way in which spreadsheets are used: data in rectangular form, with columns holding variables and rows holding observations. An important characteristic is that a data frame, like a spreadsheet, can hold different types of data in different columns: numbers, character data, dates and so on. Continue reading “GRASS and Pandas – from attribute table to pandas dataframe”
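The idea can be sketched as follows. In a GRASS session the attribute table can be dumped as delimited text with v.db.select; below, a small hard-coded string stands in for that output so the sketch is self-contained (the map and column names are hypothetical):

```python
import io
import pandas as pd

# In a GRASS session one could capture the attribute table as CSV text, e.g.:
#   import grass.script as gs
#   csv_text = gs.read_command("v.db.select", map="schools", separator=",")
# Here a stand-in string replaces that call so the sketch runs anywhere.
csv_text = "cat,NAMESHORT,CAPACITY\n1,Apex Elem,500\n2,Cary High,1200\n"

df = pd.read_csv(io.StringIO(csv_text))
# Like a spreadsheet, the data frame mixes types across columns
print(df.dtypes)
```

From here, all of Pandas' selection, grouping and plotting tools are available on the attribute data.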

Update r.vif add-on for GRASS GIS

I just updated the r.vif add-on. The add-on lets you run a step-wise variance inflation factor (VIF) procedure. As explained in more detail here, the VIF can be used to detect multicollinearity in a set of explanatory variables. The step-wise selection procedure provides a way to select a set of variables with sufficiently low multicollinearity.
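For readers who want to see what is actually computed: the VIF of a variable is 1/(1 − R²), where R² comes from regressing that variable on all the other explanatory variables. A minimal NumPy sketch (an illustration of the statistic, not the r.vif implementation):

```python
import numpy as np

def vif(X):
    """Variance inflation factor per column of X: VIF_j = 1 / (1 - R^2_j),
    with R^2_j from regressing column j on all other columns."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])   # design matrix with intercept
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)
```

A VIF close to 1 means a variable is nearly independent of the others; large values (common rules of thumb use thresholds of 5 or 10) flag multicollinearity, and the step-wise procedure repeatedly removes the worst offender.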

The update should make the computation of the VIF much faster. For very large raster layers, the VIF can be computed on a random subset of raster cells. There is also a low-memory option, which allows one to run the add-on on much larger data sets but, as explained on the r.vif manual page, runs considerably slower.

Exporting rasters to Mbtiles using GDAL

Web maps are generally made up of many small, square images called tiles, which are placed side by side to create the illusion of a very large seamless image (for a good explanation, see here).

Tile-based maps can consist of many tiles, and loading all of them individually would be inefficient and slow. That is where MBTiles, developed by Mapbox, comes in. The MBTiles specification defines an efficient format for storing millions of tiles in a single SQLite database.
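With GDAL this export can be done from the command line. A minimal sketch (the file names are placeholders; GDAL's MBTiles driver expects the raster in the Web Mercator projection):

```shell
# Reproject to Web Mercator first (the MBTiles tiling scheme assumes it)
gdalwarp -t_srs EPSG:3857 input.tif input_3857.tif

# Convert the raster to an MBTiles file
gdal_translate -of MBTILES input_3857.tif output.mbtiles

# Add overview levels so lower zoom levels render quickly
gdaladdo -r average output.mbtiles 2 4 8 16
```

Without the gdaladdo step the MBTiles file only contains the maximum zoom level, so zoomed-out views would be slow or empty.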

I have written before about TileMill, a great tool for creating great-looking interactive maps. Since then, more tools have become available to create MBTiles. Continue reading “Exporting rasters to Mbtiles using GDAL”

Terrain attribute selection in environmental studies

Exploring species-environment relationships is important for, among other things, habitat mapping, biogeographical classification, conservation, and management. It has become easier thanks to (i) the advance of a wide range of tools, including many open-source tools, and (ii) the availability of more relevant data sources. For example, many tools make it relatively easy to create a wide range of derived terrain variables from digital elevation (DEM) or bathymetric (DBM) models. However, the ease of use of these tools, especially in the hands of non-experts, may lead to the selection of an arbitrary or sub-optimal set of variables. In addition, derived variables will often be highly correlated (Lecours et al. 2017).
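One simple way to reduce such redundancy is to drop variables that correlate strongly with an already selected one. The sketch below (a greedy pairwise-correlation filter; the function and variable names are my own, and it is no substitute for the more formal selection procedures discussed in the literature) illustrates the idea:

```python
import numpy as np

def prune_correlated(X, names, threshold=0.7):
    """Greedy selection: walk through the columns of X in order and keep one
    only if its absolute Pearson correlation with every previously kept
    column stays below the threshold."""
    X = np.asarray(X, dtype=float)
    kept = []
    for j in range(len(names)):
        r = [abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) for k in kept]
        if all(v < threshold for v in r):
            kept.append(j)
    return [names[j] for j in kept]
```

Because the result depends on the order in which variables are visited, a sensible refinement is to rank variables by ecological relevance first, so that the more meaningful of two correlated terrain attributes is the one retained.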

Continue reading “Terrain attribute selection in environmental studies”