Subscribe to R-spatial feed
R Spatial software blogs and ideas
Updated: 5 hours 4 min ago

Spring School on “Statistical analysis of hyperspectral and high-dimensional remote-sensing data using R”: a report

Sat, 03/25/2017 - 01:00

The Spring School on “Statistical analysis of hyperspectral and high-dimensional remote-sensing data using R”, held at the University of Jena, March 13-17, 2017, was organized by the GIScience group led by Prof. Alexander Brenning, together with two researchers from the group, Patrick Schratz and Dr. Jannes Münchow.

The school brought together a diverse group of 28 researchers (e.g. from geoscience, forestry, and environmental studies) at different career stages (graduate students, PhD students, postdocs, professors) from all over the world, from countries as far away as Chile, Peru, Turkey, and Bosnia & Herzegovina. Overall, eight German and 16 non-German participants (20 male, 8 female) took part in this event. During five days the participants were introduced to the theoretical background of hyperspectral remote-sensing data and learned in numerous hands-on sessions how to analyse and illustrate spatial data in R. The Spring School was organized within the LIFE Healthy Forest project and supported by the Michael Stifel Center Jena.

In this short blog-post I will give a quick overview of the many, many things we learned during this intense “spatial stats-and-R-week”.

Participants and organizers of the Spring School on “Statistical analysis of hyperspectral and high-dimensional remote-sensing data using R” in Jena, © H. Petschko

Day 1

On the first day of the spring school, the participants received a theoretical introduction to hyperspectral remote-sensing data, with examples focusing on the application of hyperspectral data in forest research.
Marco Peña from the Alberto Hurtado University in Chile gave a lecture on “Introduction to hyperspectral remote sensing” which brought everyone to the same level.
This very comprehensive introduction was followed by a talk on hyperspectral applications, exemplified by a study on the Bialowieza Forest in eastern Poland, by Aneta Modzelewska from the Forest Research Institute in Raszyn.
The last talk on the first day was by Dr. Henning Buddenbaum (University of Trier) on “Hyperspectral remote sensing for measuring biochemical leaf parameters in forests”.
Dr. Buddenbaum is involved in the Science Advisory Group – Forests and Natural Ecosystems in the EnMAP mission, a German hyperspectral satellite mission aiming at monitoring and characterising the Earth’s environment globally.

Lecture by Prof. A. Brenning on “Statistical and machine learning in remote sensing”, © H. Petschko

Day 2

The second day was filled with hands-on R sessions. In the first session, led by Patrick Schratz, we learned about his “must know” features of R, namely R Markdown, the apply family, and pipes.
This was followed by two sessions focusing on the usage of R as a GIS. Dr. Jannes Münchow presented the package RQGIS, which he developed; it provides an interface between R and QGIS that allows the user to access QGIS algorithms from within R.
Afterwards we were introduced to the R package mapview by its author, Dr. Tim Appelhans. mapview is a GIS-like interactive graphing tool that is directly accessible within RStudio (or the web browser, if you are not using RStudio). It is especially helpful if you want to quickly check visually whether a certain analysis has produced reasonable results.
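As a minimal illustration of this workflow (a sketch, assuming the mapview package is installed; breweries91 is a demo dataset shipped with mapview):

library(mapview)
# open an interactive map of the demo point data in the RStudio viewer / browser
mapview(breweries91)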

Solving R-problems with Dr. Jannes Münchow, © H. Petschko

Day 3

The third day started with a lecture and hands-on session on “Statistical and machine learning in remote sensing” by Prof. Alexander Brenning, with a focus on linear discriminant analysis, support vector machines and random forests. A short overview of these statistical modeling methods and their application in R, including a comprehensive tutorial, can be found here.
In the afternoon, Dr. Thomas Bocklitz presented a very different perspective on the application of spectral data analysis, in histopathology. Afterwards, the participants had a chance to discuss their own research involving spatial modeling techniques or R problems with the group and the experts from the GIScience group in Jena.

Open session during Day 3 of the Spring School to discuss research projects of the participants, © H. Petschko

Day 4

On the fourth day, Patrick Schratz briefly introduced the hsdar package, developed by Dr. Lukas Lehnert from the University of Marburg, which can be used for processing and analysis of hyperspectral data in R.
In his second session, Prof. Brenning focused further on the assessment of model accuracy (non-spatial and spatial validation methods, variable importance) using the sperrorest package, and on dealing with high dimensionality in linear regression.

Discussing sampling designs with Prof. A. Brenning, © H. Petschko

Introduction to parallel processing in R with Patrick Schratz, © H. Petschko

Day 5 (Thuringian Forest excursion)

On the last day, we visited a monitoring site and a site with tornado damage (see images below) from 2016 in the Thuringian Forest together with three experts from the official authority “ThüringenForst”.
In conclusion, the Spring School was a great event with many fruitful hands-on R sessions, during which the participants could learn helpful tricks in R, how to use R as a GIS, and about statistical and machine learning in R. Hopefully more academic “schools” like this one will follow in the future (maybe even with a thematic focus on geomorphology or natural hazards).

Tornado damage in the Thuringian Forest from September 2016 © P. Schratz

Field trip to the Thuringian Forest, © X. Tagle

Tidying feature geometries with sf

Sun, 03/19/2017 - 01:00

view raw Rmd

Introduction

Spatial line and polygon data are often messy; although simple features formally follow a standard, there is no guarantee that data are clean when imported into R. This blog shows how we can identify, (de)select, or repair broken and invalid geometries. We also show how empty geometries arise, and how they can be dealt with. Literature on invalid polygons and correcting them is found in Ramsey (2010), Ledoux, Ohori, and Meijers (2014), Ledoux, Ohori, and Meijers (2012), and Van Oosterom, Quak, and Tijssen (2005); all these come with excellent figures illustrating the problem cases.

We see that from version 0.4-0, sf may be linked to lwgeom,

library(sf)
## Linking to GEOS 3.5.1, GDAL 2.1.3, proj.4 4.9.2, lwgeom 2.3.1 r15264

where lwgeom stands for the light-weight geometry library that powers PostGIS. This library is not present on CRAN, so binary packages installed from CRAN will not come with it. It is only linked to sf when it is detected during a build from source. When lwgeom is present, we have a working version of st_make_valid, which is essentially identical to PostGIS’ ST_MakeValid.

Corrupt or invalid geometries?

There are two types of things that can go wrong when dealing with geometries in sf. First, a geometry can be corrupt, which is for instance the case for a LINESTRING with one point, or a POLYGON with more than zero and less than 4 points:

l0 = st_linestring(matrix(1:2, 1, 2))
p0 = st_polygon(list(rbind(c(0,0), c(1,1), c(0,0))))

These cases could of course be easily caught by the respective constructor functions, but they are not, because we want to see what happens. Also, if we would catch them, it would not prevent us from running into them, because the majority of spatial data enters R through GDAL and sf’s binary interface (reading well-known binary). Also, for many purposes corrupt geometries may not be a problem, e.g. if we only want to plot them. In case we want to use them in geometrical operations, however, we’ll typically see a message like:

IllegalArgumentException: Invalid number of points in LinearRing found 3 - must be 0 or >= 4

which indicates that GEOS does not accept the geometry as a possible geometry. Such an error message, however, does not tell us which geometry caused this. We could of course write a loop over all geometries to find this out, but we can also use st_is_valid, which by default returns NA on corrupt geometries:

l0 = st_linestring(matrix(1:2, 1, 2))
p0 = st_polygon(list(rbind(c(0,0), c(1,1), c(0,0))))
p = st_point(c(0,1)) # not corrupt
st_is_valid(st_sfc(l0, p0, p))
## [1]   NA   NA TRUE

Simple feature validity refers to a number of properties that polygons should have, such as rings not self-intersecting and holes being inside their polygons. A number of different examples of invalid geometries are found in Ledoux, Ohori, and Meijers (2014), and were taken from their prepair GitHub repo:

# A 'bowtie' polygon:
p1 = st_as_sfc("POLYGON((0 0, 0 10, 10 0, 10 10, 0 0))")
# Square with wrong orientation:
p2 = st_as_sfc("POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))")
# Inner ring with one edge sharing part of an edge of the outer ring:
p3 = st_as_sfc("POLYGON((0 0, 10 0, 10 10, 0 10, 0 0),(5 2,5 7,10 7, 10 2, 5 2))")
# Dangling edge:
p4 = st_as_sfc("POLYGON((0 0, 10 0, 15 5, 10 0, 10 10, 0 10, 0 0))")
# Outer ring not closed:
p5 = st_as_sfc("POLYGON((0 0, 10 0, 10 10, 0 10))")
# Two adjacent inner rings:
p6 = st_as_sfc("POLYGON((0 0, 10 0, 10 10, 0 10, 0 0), (1 1, 1 8, 3 8, 3 1, 1 1), (3 1, 3 8, 5 8, 5 1, 3 1))")
# Polygon with an inner ring inside another inner ring:
p7 = st_as_sfc("POLYGON((0 0, 10 0, 10 10, 0 10, 0 0), (2 8, 5 8, 5 2, 2 2, 2 8), (3 3, 4 3, 3 4, 3 3))")
p = c(p1, p2, p3, p4, p5, p6, p7)
(valid = st_is_valid(p))
## [1] FALSE  TRUE FALSE FALSE    NA FALSE FALSE

Interestingly, GEOS considers p5 as corrupt (NA) and p2 as valid.

To query GEOS for the reason of invalidity, we can use the reason = TRUE argument to st_is_valid:

st_is_valid(p, reason = TRUE)
## [1] "Self-intersection[5 5]"  "Valid Geometry"
## [3] "Self-intersection[10 2]" "Self-intersection[10 0]"
## [5] NA                        "Self-intersection[3 1]"
## [7] "Holes are nested[3 3]"

Making invalid polygons valid

As mentioned above, in case sf was linked to lwgeom, which is confirmed by

sf_extSoftVersion()["lwgeom"]
##         lwgeom
## "2.3.1 r15264"

not printing an NA, we can use st_make_valid to make geometries valid:

st_make_valid(p)
## Geometry set for 7 features
## geometry type:  GEOMETRY
## dimension:      XY
## bbox:           xmin: 0 ymin: 0 xmax: 15 ymax: 10
## epsg (SRID):    NA
## proj4string:    NA
## First 5 geometries:
## MULTIPOLYGON(((0 0, 0 10, 5 5, 0 0)), ((5 5, 10...
## POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))
## GEOMETRYCOLLECTION(POLYGON((10 7, 10 2, 10 0, 0...
## GEOMETRYCOLLECTION(POLYGON((10 0, 0 0, 0 10, 10...
## POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))

A well-known “trick”, which may be your only alternative if st_make_valid is not available, is to buffer the geometries with zero distance:

st_buffer(p[!is.na(valid)], 0.0)
## Geometry set for 6 features
## geometry type:  POLYGON
## dimension:      XY
## bbox:           xmin: 0 ymin: 0 xmax: 10 ymax: 10
## epsg (SRID):    NA
## proj4string:    NA
## First 5 geometries:
## POLYGON((0 0, 0 10, 5 5, 0 0))
## POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))
## POLYGON((0 0, 0 10, 10 10, 10 7, 5 7, 5 2, 10 2...
## POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))
## POLYGON((0 0, 10 0, 10 10, 0 10, 0 0), (1 1, 3 ...

but we see that, apart from the fact that this only works for non-corrupt geometries, we end up with different results.

A larger example from the prepair site is this:

x = read_sf("/home/edzer/git/prepair/data/CLC2006_2018418.geojson")
st_is_valid(x)
## [1] FALSE
st_is_valid(st_make_valid(x))
## [1] TRUE
plot(x, col = 'grey', axes = TRUE, graticule = TRUE)

The corresponding paper, Ledoux, Ohori, and Meijers (2012), zooms in on problematic points. The authors argue for using constrained triangulation instead of the (less documented) approach taken by lwgeom; Mike Sumner also explores this here. It builds upon RTriangle, which cannot be integrated in sf as it is distributed under a license with a non-commercial clause. Ledoux uses CGAL; it would be great to have an R interface to CGAL!

Empty geometries

Empty geometries exist, and can be thought of as zero-length vectors, data.frames without rows, or NULL values in lists: in essence, there is a place for information, but there is no information. An empty geometry arises for instance if we ask for the intersection of two non-intersecting geometries:

st_intersection(st_point(0:1), st_point(1:2))
## GEOMETRYCOLLECTION()

In principle, we could have designed sf such that empty geometries were represented by a NULL value, but the standard prescribes that every geometry type has an empty instance:

st_linestring()
## LINESTRING()
st_polygon()
## POLYGON()
st_point()
## POINT(NA NA)

and thus the empty geometry is typed. This guarantees clean round trips from a database into R and back into a database: no information (on type) gets lost in the presence of empty geometries.
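As a small illustration (a sketch, not from the original post): an empty geometry keeps its type when converted to well-known text and read back, so no type information is lost on such a round trip.

wkt = st_as_text(st_linestring()) # "LINESTRING EMPTY"
st_as_sfc(wkt)                    # still an (empty) LINESTRING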

How can we detect, and filter on empty geometries? We can do that with st_dimension:

lin = st_linestring(rbind(c(0,0), c(1,1)))
pol = st_polygon(list(rbind(c(0,0), c(1,1), c(0,1), c(0,0))))
poi = st_point(c(1,1))
p0 = st_point()
pol0 = st_polygon()
st_dimension(st_sfc(lin, pol, poi, p0, pol0))
## [1]  1  2  0 NA NA

and see that empty geometries return NA.
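This also gives a simple recipe for dropping empty geometries from a set (a sketch, using the objects created above):

sfc = st_sfc(lin, pol, poi, p0, pol0)
sfc[!is.na(st_dimension(sfc))] # keeps only the three non-empty geometries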

The standard however prescribes that an empty polygon still has dimension two, and we can override the NA convenience to get standard-compliant dimensions by

st_dimension(st_sfc(lin, pol, poi, p0, pol0), NA_if_empty = FALSE)
## [1] 1 2 0 0 2

Tidying feature geometries

When you analyse your spatial data with sf and you don’t get any warnings or error messages, all may be fine. In case you do, or you are curious, you can check for:

  1. empty geometries, using any(is.na(st_dimension(x))),
  2. corrupt geometries, using any(is.na(st_is_valid(x))),
  3. invalid geometries, using any(na.omit(st_is_valid(x)) == FALSE); in case of corrupt and/or invalid geometries,
  4. query the reason for invalidity by st_is_valid(x, reason = TRUE),
  5. try to make geometries valid using st_make_valid(x) or, if st_make_valid is not supported,
  6. use st_buffer(x, 0.0) on non-corrupt geometries (but beware of the bowtie example above, where st_buffer removes one half),
  7. after a successful st_make_valid, select a particular type subset using st_is, or cast GEOMETRYCOLLECTIONs to MULTIPOLYGON by:

st_make_valid(p) %>% st_cast("MULTIPOLYGON")
## Warning in st_cast.GEOMETRYCOLLECTION(X[[i]], ...): only first part of
## geometrycollection is retained
## Warning in st_cast.GEOMETRYCOLLECTION(X[[i]], ...): only first part of
## geometrycollection is retained
## Warning in st_cast.GEOMETRYCOLLECTION(X[[i]], ...): only first part of
## geometrycollection is retained
## Geometry set for 7 features
## geometry type:  MULTIPOLYGON
## dimension:      XY
## bbox:           xmin: 0 ymin: 0 xmax: 10 ymax: 10
## epsg (SRID):    NA
## proj4string:    NA
## First 5 geometries:
## MULTIPOLYGON(((0 0, 0 10, 5 5, 0 0)), ((5 5, 10...
## MULTIPOLYGON(((0 0, 0 10, 10 10, 10 0, 0 0)))
## MULTIPOLYGON(((10 7, 10 2, 10 0, 0 0, 0 10, 10 ...
## MULTIPOLYGON(((10 0, 0 0, 0 10, 10 10, 10 0)))
## MULTIPOLYGON(((0 0, 10 0, 10 10, 0 10, 0 0)))

For longer explanations about what makes a polygon invalid, do read one of the references below; all are richly illustrated.

References

Ledoux, Hugo, Ken Arroyo Ohori, and Martijn Meijers. 2012. “Automatically Repairing Invalid Polygons with a Constrained Triangulation.” In. Agile. https://3d.bk.tudelft.nl/ken/files/12_agile.pdf.

———. 2014. “A Triangulation-Based Approach to Automatically Repair GIS Polygons.” Computers & Geosciences 66: 121–31. https://pdfs.semanticscholar.org/d9ec/b32a7844b436fcd4757958e5eeca9563fcd2.pdf.

Ramsey, Paul. 2010. “PostGIS: Tips for Power Users.” Presentation at FOSS4G 2010. http://2010.foss4g.org/presentations/3369.pdf.

Van Oosterom, Peter, Wilko Quak, and Theo Tijssen. 2005. “About Invalid, Valid and Clean Polygons.” In Developments in Spatial Data Handling, 1–16. Springer. http://excerpts.numilog.com/books/3540267727.pdf.

Spatial Modeling Using Statistical Learning Techniques

Mon, 03/13/2017 - 01:00
Introduction

Geospatial data scientists often make use of a variety of statistical and machine learning techniques for spatial prediction in applications such as landslide susceptibility modeling (Goetz et al. 2015) or habitat modeling (Knudby, Brenning, and LeDrew 2010). Novel and often more flexible techniques promise improved predictive performances as they are better able to represent nonlinear relationships or higher-order interactions between predictors than less flexible linear models.

Nevertheless, this increased flexibility comes with the risk of possible over-fitting to the training data. Since nearby spatial observations often tend to be more similar than distant ones, traditional random cross-validation is unable to detect this over-fitting whenever spatial observations are close to each other (e.g. Brenning (2005)). Spatial cross-validation addresses this by resampling the data not completely randomly, but using larger spatial regions. In some cases, spatial data is grouped: in remotely-sensed land use mapping, for example, grid cells belonging to the same field share the same management procedures and cultivation history, making them more similar to each other than to pixels from other fields with the same crop type.

The sperrorest package provides a customizable toolkit for cross-validation (and bootstrap) estimation using a variety of spatial resampling schemes. Moreover, this toolkit can even be extended to spatio-temporal data or other complex data structures. This blog post walks you through a simple case study: crop classification in central Chile (Peña and Brenning 2015).

Data and Packages

As a case study we will carry out a supervised classification analysis using remotely-sensed data to predict fruit-tree crop types in central Chile. This data set is a subsample of the data from Peña and Brenning (2015).

library(pacman)
p_load(sperrorest)
data("maipo", package = "sperrorest")

The remote-sensing predictor variables were derived from an image time series consisting of eight Landsat images acquired throughout the (southern hemisphere) growing season. The data set includes the following variables:

Response

  • croptype: response variable (factor) with 4 levels: ground truth information

Predictors

  • b[12-87]: spectral data, e.g. b82 = image date #8, spectral band #2
  • ndvi[01-08]: Normalized Difference Vegetation Index, e.g. #8 = image date #8
  • ndwi[01-08]: Normalized Difference Water Index, e.g. #8 = image date #8

Others

  • field: field identifier (grouping variable - not to be used as predictor)
  • utmx, utmy: x/y location; not to be used as predictors

All but the first four variables of the data set are predictors; their names are used to construct a formula object:

predictors <- colnames(maipo)[5:ncol(maipo)]
# Construct a formula:
fo <- as.formula(paste("croptype ~", paste(predictors, collapse = "+")))

Modeling

Here we will take a look at a few classification methods with varying degrees of computational complexity and flexibility. This should give you an idea of how different models are handled by sperrorest, depending on the characteristics of their fitting and prediction methods. Please refer to James et al. (2013) for background information on the models used here.

Linear Discriminant Analysis (LDA)

LDA is simple and fast, and often performs surprisingly well if the problem at hand is ‘linear enough’. As a start, let’s fit a model with all predictors and using all available data:

p_load(MASS)
fit <- lda(fo, data = maipo)

Predict the croptype with the fitted model and calculate the misclassification error rate (MER) on the training sample:

pred <- predict(fit, newdata = maipo)$class
mean(pred != maipo$croptype)
## [1] 0.0437

But remember that this result is over-optimistic because we are re-using the training sample for model evaluation. We will soon show you how to do better with cross-validation.

We can also take a look at the confusion matrix but again, this result is overly optimistic:

table(pred = pred, obs = maipo$croptype)
##        obs
## pred    crop1 crop2 crop3 crop4
##   crop1  1294     8     4    37
##   crop2    50  1054     4    44
##   crop3     0     0  1935     6
##   crop4    45   110    29  3093

Classification Tree

Classification and regression trees (CART) take a completely different approach: they are based on yes/no questions about the predictor variables and can be referred to as a binary partitioning technique. Fit a model with all predictors and default settings:

p_load(rpart)
fit <- rpart(fo, data = maipo)
## optional: view the classification tree
# par(xpd = TRUE)
# plot(fit)
# text(fit, use.n = TRUE)

Again, predict the croptype with the fitted model and calculate the average MER:

pred <- predict(fit, newdata = maipo, type = "class")
mean(pred != maipo$croptype)
## [1] 0.113

Here the predict call is slightly different. Again, we could calculate a confusion matrix.

table(pred = pred, obs = maipo$croptype)
##        obs
## pred    crop1 crop2 crop3 crop4
##   crop1  1204    66     0    54
##   crop2    47   871    38   123
##   crop3    38     8  1818    53
##   crop4   100   227   116  2950

RandomForest

Bagging, bundling and random forests build upon the CART technique by fitting many trees on bootstrap resamples of the original data set (Breiman 1996; Breiman 2001; Hothorn and Lausen 2005). They differ in that random forests also sample from the predictors, and bundling adds an ancillary classifier for improved classification. We will use the widely used randomForest() here.

p_load(randomForest)
fit <- randomForest(fo, data = maipo, coob = TRUE)
fit
##
## Call:
##  randomForest(formula = fo, data = maipo, coob = TRUE)
##                Type of random forest: classification
##                      Number of trees: 500
## No. of variables tried at each split: 8
##
##         OOB estimate of  error rate: 0.57%
## Confusion matrix:
##       crop1 crop2 crop3 crop4 class.error
## crop1  1382     2     0     5     0.00504
## crop2     1  1163     0     8     0.00768
## crop3     0     0  1959    13     0.00659
## crop4     7     5     3  3165     0.00472

Let’s take a look at the MER achieved on the training sample:

pred <- predict(fit, newdata = maipo, type = "class")
mean(pred != maipo$croptype)
## [1] 0
table(pred = pred, obs = maipo$croptype)
##        obs
## pred    crop1 crop2 crop3 crop4
##   crop1  1389     0     0     0
##   crop2     0  1172     0     0
##   crop3     0     0  1972     0
##   crop4     0     0     0  3180

Isn’t this amazing? Not a single grid cell is misclassified by the random forest classifier on the training sample! Even the OOB (out-of-bag) estimate of the error rate is < 1%.
Too good to be true? We’ll see…

Cross-Validation Estimation of Predictive Performance

Of course we can’t take the MER on the training set too seriously—it is biased. But we’ve heard of cross-validation, in which disjoint subsets are used for model training and testing. Let’s use sperrorest for cross-validation.

Also, at this point we should highlight that the observations in this data set are pixels, and that multiple grid cells belong to the same field. In a predictive situation, and when field boundaries are known (as is the case here), we would want to predict the same class for all grid cells that belong to the same field. Here we will use a majority filter, which ensures that the final predicted class of every field is the croptype most often predicted within that field.
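To make the idea concrete, here is a toy illustration of a majority filter (the vectors are hypothetical and not part of the maipo data):

pred  <- c("a", "a", "b", "b", "b", "c")       # pixel-level predictions
field <- c("f1", "f1", "f1", "f2", "f2", "f2") # field membership of each pixel
# replace each pixel's prediction by the most frequent class within its field
unlist(lapply(split(pred, field), function(x) rep(names(which.max(table(x))), length(x))))
# result: all f1 pixels become "a", all f2 pixels become "b"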

Linear Discriminant Analysis (LDA)

First, we need to create a wrapper predict method for LDA for sperrorest(). This is necessary in order to accommodate the majority filter, and also because class predictions from lda’s predict method are hidden in the $class component of the returned object.

lda.predfun <- function(object, newdata, fac = NULL) {
  p_load(nnet)
  majority <- function(x) {
    levels(x)[which.is.max(table(x))]
  }
  majority.filter <- function(x, fac) {
    for (lev in levels(fac)) {
      x[fac == lev] <- majority(x[fac == lev])
    }
    x
  }
  pred <- predict(object, newdata = newdata)$class
  if (!is.null(fac)) pred <- majority.filter(pred, newdata[, fac])
  return(pred)
}

To ensure that custom predict functions also work with parsperrorest(), we need to define all custom functions in one step. Otherwise, the foreach backend used by par.mode = 2 of parsperrorest() will throw errors because of the way it loads functions from the parent environment.

Finally, we can run sperrorest() with a non-spatial sampling setup (partition.cv()). In this example we use a ‘50 repetitions, 5 folds’ setup. To make your results less dependent on a particular random partitioning, you may want to use 100 repetitions or even more in practice.

res.lda.nsp <- sperrorest(fo, data = maipo, coords = c("utmx", "utmy"),
                          model.fun = lda,
                          pred.fun = lda.predfun,
                          pred.args = list(fac = "field"),
                          smp.fun = partition.cv,
                          smp.args = list(repetition = 1:50, nfold = 5),
                          error.rep = TRUE, error.fold = TRUE,
                          progress = FALSE)
summary(res.lda.nsp$error.rep)
##                    mean    sd   median   IQR
## train.error    3.40e-02 0.001 3.40e-02 0.001
## train.accuracy 9.66e-01 0.001 9.66e-01 0.001
## train.events   4.69e+03 0.000 4.69e+03 0.000
## train.count    3.09e+04 0.000 3.09e+04 0.000
## test.error     4.00e-02 0.002 4.00e-02 0.002
## test.accuracy  9.60e-01 0.002 9.60e-01 0.002
## test.events    1.17e+03 0.000 1.17e+03 0.000
## test.count     7.71e+03 0.000 7.71e+03 0.000

To run a spatial cross-validation at the field level, we can use partition.factor.cv() as the sampling function. Since we are using 5 folds, we get a coarse 80/20 split of our data. 80% will be used for training, 20% for testing our trained model.

To take a look at how our training and test sets are partitioned in each fold, we can plot them. The red points represent the test set in each fold, the black points the training set. Note that because we plot over 7000 points, overplotting occurs, and since the red crosses are plotted after the black ones, it visually appears that far more than ~20% of the points are red.

resamp <- partition.factor.cv(maipo, nfold = 5, repetition = 1:1, fac = "field")
plot(resamp, maipo, coords = c("utmx", "utmy"))

Subsequently, we have to specify the location of the fields (fac = "field") in the prediction arguments (pred.args) and sampling arguments (smp.args) in sperrorest().

res.lda.sp <- sperrorest(fo, data = maipo, coords = c("utmx", "utmy"),
                         model.fun = lda,
                         pred.fun = lda.predfun,
                         pred.args = list(fac = "field"),
                         smp.fun = partition.factor.cv,
                         smp.args = list(fac = "field", repetition = 1:50, nfold = 5),
                         error.rep = TRUE, error.fold = TRUE,
                         benchmark = TRUE, progress = FALSE)
res.lda.sp$benchmark$runtime.performance
summary(res.lda.sp$error.rep)
##                    mean      sd   median     IQR
## train.error    2.95e-02 0.00177 2.97e-02 0.00261
## train.accuracy 9.70e-01 0.00177 9.70e-01 0.00261
## train.events   4.69e+03 0.00000 4.69e+03 0.00000
## train.count    3.09e+04 0.00000 3.09e+04 0.00000
## test.error     6.65e-02 0.00807 6.59e-02 0.01083
## test.accuracy  9.33e-01 0.00807 9.34e-01 0.01083
## test.events    1.17e+03 0.00000 1.17e+03 0.00000
## test.count     7.71e+03 0.00000 7.71e+03 0.00000

RandomForest

In the case of random forest, the customized pred.fun looks as follows. It is only required because of the majority filter; without it, we could simply omit the pred.fun and pred.args arguments below.

rf.predfun <- function(object, newdata, fac = NULL) {
  p_load(nnet)
  majority <- function(x) {
    levels(x)[which.is.max(table(x))]
  }
  majority.filter <- function(x, fac) {
    for (lev in levels(fac)) {
      x[fac == lev] <- majority(x[fac == lev])
    }
    x
  }
  pred <- predict(object, newdata = newdata)
  if (!is.null(fac)) pred <- majority.filter(pred, newdata[, fac])
  return(pred)
}

res.rf.sp <- sperrorest(fo, data = maipo, coords = c("utmx", "utmy"),
                        model.fun = randomForest,
                        pred.fun = rf.predfun,
                        pred.args = list(fac = "field"),
                        smp.fun = partition.factor.cv,
                        smp.args = list(fac = "field", repetition = 1:50, nfold = 5),
                        error.rep = TRUE, error.fold = TRUE,
                        benchmark = TRUE, progress = 2)
## Mon Feb 27 20:56:01 2017 Repetition 1
## Mon Feb 27 20:57:12 2017 Repetition 2
## Mon Feb 27 20:58:20 2017 Repetition 3
## Mon Feb 27 20:59:29 2017 Repetition 4
## Mon Feb 27 21:00:36 2017 Repetition 5
## Mon Feb 27 21:01:46 2017 Repetition 6
## Mon Feb 27 21:02:55 2017 Repetition 7
## Mon Feb 27 21:04:01 2017 Repetition 8
## Mon Feb 27 21:05:07 2017 Repetition 9
## Mon Feb 27 21:06:16 2017 Repetition 10
## Mon Feb 27 21:07:23 2017 Repetition 11
## Mon Feb 27 21:08:30 2017 Repetition 12
## Mon Feb 27 21:09:38 2017 Repetition 13
## Mon Feb 27 21:10:45 2017 Repetition 14
## Mon Feb 27 21:11:53 2017 Repetition 15
## Mon Feb 27 21:13:01 2017 Repetition 16
## Mon Feb 27 21:14:09 2017 Repetition 17
## Mon Feb 27 21:15:16 2017 Repetition 18
## Mon Feb 27 21:16:23 2017 Repetition 19
## Mon Feb 27 21:17:31 2017 Repetition 20
## Mon Feb 27 21:18:39 2017 Repetition 21
## Mon Feb 27 21:19:46 2017 Repetition 22
## Mon Feb 27 21:20:53 2017 Repetition 23
## Mon Feb 27 21:22:03 2017 Repetition 24
## Mon Feb 27 21:23:13 2017 Repetition 25
## Mon Feb 27 21:24:23 2017 Repetition 26
## Mon Feb 27 21:25:32 2017 Repetition 27
## Mon Feb 27 21:26:39 2017 Repetition 28
## Mon Feb 27 21:27:47 2017 Repetition 29
## Mon Feb 27 21:28:55 2017 Repetition 30
## Mon Feb 27 21:30:03 2017 Repetition 31
## Mon Feb 27 21:31:11 2017 Repetition 32
## Mon Feb 27 21:32:18 2017 Repetition 33
## Mon Feb 27 21:33:25 2017 Repetition 34
## Mon Feb 27 21:34:33 2017 Repetition 35
## Mon Feb 27 21:35:40 2017 Repetition 36
## Mon Feb 27 21:36:47 2017 Repetition 37
## Mon Feb 27 21:37:54 2017 Repetition 38
## Mon Feb 27 21:39:02 2017 Repetition 39
## Mon Feb 27 21:40:09 2017 Repetition 40
## Mon Feb 27 21:41:17 2017 Repetition 41
## Mon Feb 27 21:42:24 2017 Repetition 42
## Mon Feb 27 21:43:31 2017 Repetition 43
## Mon Feb 27 21:44:38 2017 Repetition 44
## Mon Feb 27 21:45:46 2017 Repetition 45
## Mon Feb 27 21:46:54 2017 Repetition 46
## Mon Feb 27 21:48:01 2017 Repetition 47
## Mon Feb 27 21:49:07 2017 Repetition 48
## Mon Feb 27 21:50:15 2017 Repetition 49
## Mon Feb 27 21:51:21 2017 Repetition 50
## Mon Feb 27 21:52:27 2017 Done.
res.rf.sp$benchmark$runtime.performance
## Time difference of 56.4 mins
summary(res.rf.sp$error.rep$test.error)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##  0.0630  0.0827  0.0871  0.0868  0.0928  0.1100
summary(res.rf.sp$error.rep$test.accuracy)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##   0.890   0.907   0.913   0.913   0.917   0.937

What a surprise! Random forest classification isn’t that good after all, if we acknowledge that in ‘real life’ we wouldn’t be making predictions in situations where the class membership of other grid cells in the same field is known at the training stage. So spatial dependence does matter.

Parallelized Cross-Validation

To speed things up, we can use parsperrorest() and inspect the runtime difference. Note that we have two parallel modes to choose from!
In this example we will use both par.mode = 1 and par.mode = 2 to show you the runtime differences.

Details of argument par.mode

The advantage of par.mode = 1 compared to par.mode = 2 is its speed. However, par.mode = 1 is somewhat less stable: it will throw errors for some models (e.g. LDA) while par.mode = 2 does not. par.mode = 1 uses either parallel::mclapply() (Unix) or parallel::parApply() (Windows), while par.mode = 2 runs on a foreach backend.

While traditional mclapply() approaches have the downside that no progress report can be printed, the pbapply package implemented in parsperrorest() provides progress output (showing a progress bar) and even works cross-platform. par.mode = 1 does not show a progress bar but prints fold and/or repetition information to the console.

res.rf.sp.par1 <- parsperrorest(fo, data = maipo, coords = c("utmx", "utmy"),
                                model.fun = randomForest,
                                pred.fun = rf.predfun,
                                pred.args = list(fac = "field"),
                                smp.fun = partition.factor.cv,
                                smp.args = list(fac = "field", repetition = 1:50, nfold = 5),
                                par.args = list(par.units = 4, par.mode = 1),
                                error.rep = TRUE, error.fold = TRUE,
                                benchmark = TRUE, progress = FALSE)
res.rf.sp.par1$benchmarks$runtime.performance
## Time difference of 18 mins

res.rf.sp.par2 <- parsperrorest(fo, data = maipo, coords = c("utmx", "utmy"),
                                model.fun = randomForest,
                                pred.fun = rf.predfun,
                                pred.args = list(fac = "field"),
                                smp.fun = partition.factor.cv,
                                smp.args = list(fac = "field", repetition = 1:50, nfold = 5),
                                par.args = list(par.units = 4, par.mode = 2),
                                error.rep = TRUE, error.fold = TRUE,
                                benchmark = TRUE, progress = FALSE)
res.rf.sp.par2$benchmarks$runtime.performance
## Time difference of 19.9 mins

Both the differences between the parallel runs (parsperrorest()) and the sequential one (sperrorest()), and between the two parallel modes themselves, can be seen as variance among the repetitions. If you used more repetitions (e.g. 100), these differences would converge towards zero.

Results of par.mode = 1

summary(res.rf.sp.par1$error.rep$test.error)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##  0.0744  0.0813  0.0852  0.0862  0.0903  0.1090
summary(res.rf.sp.par1$error.rep$test.accuracy)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##   0.891   0.910   0.915   0.914   0.919   0.926

Results of par.mode = 2

summary(res.rf.sp.par2$error.rep$test.error)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##  0.0638  0.0832  0.0874  0.0872  0.0926  0.1130
summary(res.rf.sp.par2$error.rep$test.accuracy)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##   0.887   0.907   0.913   0.913   0.917   0.936

Usage Advice

Given all the different sampling functions and the custom predict function (rf.predfun()), you might be a little confused about which function to use for your use case.
If you want to do a “normal”, i.e. non-spatial, cross-validation, we recommend using partition.cv() as smp.fun in sperrorest() or parsperrorest().
If you want to perform a spatial cross-validation and you do not have a grouping structure like the fields in our example, you should use partition.kmeans() as smp.fun, which creates a spatial k-means clustering within your cross-validation runs (see the sketch below).
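A minimal sketch of such a call (not run here; it reuses the objects and argument names from the LDA examples above, so treat it as an illustration rather than a tested recipe):

res.lda.km <- sperrorest(fo, data = maipo, coords = c("utmx", "utmy"),
                         model.fun = lda,
                         pred.fun = lda.predfun, pred.args = list(fac = "field"),
                         smp.fun = partition.kmeans,   # spatial k-means partitioning of coordinates
                         smp.args = list(repetition = 1:50, nfold = 5),
                         error.rep = TRUE, error.fold = TRUE, progress = FALSE)
summary(res.lda.km$error.rep)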

Most often you can simply use the generic predict() method for your model. Only in cases where you want to do prediction based on some factor level, as in this vignette, do you need to write your own prediction function.

Wrapper functions

Another point to be aware of is that some model functions use a different argument naming/order than what the model.fun argument of sperrorest() expects (formula + data). For example, the formula argument of glmmPQL() is named fixed instead of formula.
Also, sometimes you may encounter errors when providing spatial correlation structures to your model using model.args. If this happens, try providing the correlation structure within a wrapper function as well.

my_glmmPQL <- function(formula, data) {
  # calculate spatial correlation structure
  correlation = corSpatial(value = c(5024, 0.25), form = ~ry + rx,
                           nugget = TRUE, fixed = FALSE, type = "spherical")
  correlation = Initialize(correlation, data = data)
  # actual model.fun
  glmmPQL(fixed = formula, data = data, correlation = correlation)
}

For all further questions, please feel free to open an issue at our Github repo.

References

Breiman, Leo. 1996. “Bagging Predictors.” Machine Learning 24 (2). Springer Nature: 123–40. doi:10.1007/bf00058655.

———. 2001. “Random Forests.” Machine Learning 45 (1). Springer Nature: 5–32. doi:10.1023/a:1010933404324.

Brenning, A. 2005. “Spatial Prediction Models for Landslide Hazards: Review, Comparison and Evaluation.” Natural Hazards and Earth System Science 5 (6). Copernicus GmbH: 853–62. doi:10.5194/nhess-5-853-2005.

Goetz, J.N., A. Brenning, H. Petschko, and P. Leopold. 2015. “Evaluating Machine Learning and Statistical Prediction Techniques for Landslide Susceptibility Modeling.” Computers & Geosciences 81 (August). Elsevier BV: 1–11. doi:10.1016/j.cageo.2015.04.007.

Hothorn, Torsten, and Berthold Lausen. 2005. “Bundling Classifiers by Bagging Trees.” Computational Statistics & Data Analysis 49 (4). Elsevier BV: 1068–78. doi:10.1016/j.csda.2004.06.019.

James, Gareth, Daniela Witten, Trevor Hastie, and Robert Tibshirani. 2013. An Introduction to Statistical Learning. Springer New York. doi:10.1007/978-1-4614-7138-7.

Knudby, Anders, Alexander Brenning, and Ellsworth LeDrew. 2010. “New Approaches to Modelling Fish-Habitat Relationships.” Ecological Modelling 221 (3). Elsevier BV: 503–11. doi:10.1016/j.ecolmodel.2009.11.008.

Peña, M.A., and A. Brenning. 2015. “Assessing Fruit-Tree Crop Classification from Landsat-8 Time Series for the Maipo Valley, Chile.” Remote Sensing of Environment 171 (December). Elsevier BV: 234–44. doi:10.1016/j.rse.2015.10.029.

mapedit - interactively edit spatial data in R

Mon, 01/30/2017 - 01:00

[view raw Rmd]

The R ecosystem offers a powerful set of packages for geospatial analysis. For a comprehensive list see the CRAN Task View: Analysis of Spatial Data. Yet, many geospatial workflows require interactivity for smooth uninterrupted completion. With new tools, such as htmlwidgets, shiny, and crosstalk, we can now inject this useful interactivity without leaving the R environment. In the first phase of the mapedit project, we have focused on experimenting and creating proof of concepts for the following three objectives:

  1. drawing, editing, and deleting features,

  2. selecting and querying of features and map regions,

  3. editing attributes.

Install mapedit

To run the code in the following discussion, please install with devtools::install_github. Please be aware that the current functionality is strictly a proof of concept, and the API will change rapidly and dramatically.

devtools::install_github("bhaskarvk/leaflet.extras")
devtools::install_github("r-spatial/mapedit")

Drawing, Editing, Deleting Features

We would like to set up an easy process for CRUD (create, read, update, and delete) of map features. The function edit_map demonstrates a first step toward this goal.

Proof of Concept 1 | Draw on Blank Map

To see how we might add some features, let’s start with a blank map, and then feel free to draw, edit, and delete with the Leaflet.Draw toolbar on the map. Once finished drawing simply press “Done”.

library(leaflet)
library(mapedit)
what_we_created <- leaflet() %>%
  addTiles() %>%
  edit_map()

edit_map returns a list with drawn, edited, deleted, and finished features as GeoJSON. In this case, if we would like to see our finished creation, we can focus on what_we_created$finished. Since this is GeoJSON, the easiest way to see what we just created is to use the addGeoJSON function from leaflet. This works well with polylines, polygons, rectangles, and points, but circles will be treated as points without some additional code. In future versions of the API it is likely that mapedit will return simple feature geometries rather than GeoJSON by default.

leaflet() %>%
  addTiles() %>%
  addGeoJSON(what_we_created$finished)

Proof of Concept 2 | Edit and Delete Existing Features

As an extension of the first proof of concept, we might like to edit and/or delete existing features. Let’s play Donald Trump for this exercise and use the border between Mexico and the United States for California and Arizona. For the sake of the example, let’s use a simplified polyline as our border. As we have promised we want to build a wall, but if we could just move the border a little in some places, we might be able to ease construction.

library(sf)
# simplified border for purpose of exercise
border <- st_as_sfc(
  "LINESTRING(-109.050197582692 31.3535554844322, -109.050197582692 31.3535554844322, -111.071681957692 31.3723176640684, -111.071681957692 31.3723176640684, -114.807033520192 32.509681296831, -114.807033520192 32.509681296831, -114.741115551442 32.750242384668, -114.741115551442 32.750242384668, -117.158107738942 32.5652527715121, -117.158107738942 32.5652527715121)"
) %>%
  st_set_crs(4326)
# plot quickly for visual inspection
plot(border)

Since we are Trump, we can do what we want, so let’s edit the line to our liking. We will use mapview for our interactive map since it by default gives us an OpenTopoMap layer and the develop branch includes preliminary simple features support. With our new border and fence, we will avoid the difficult mountains and get a little extra beachfront.

# use develop branch of mapview with simple features support
# devtools::install_github("environmentalinformatics-marburg/mapview@develop")
library(mapview)
new_borders <- mapview(border)@map %>%
  edit_map("border")

Now, we can quickly inspect our new borders and then send the coordinates to the wall construction company.

leaflet() %>%
  addTiles() %>%
  fitBounds(-120, 35, -104, 25) %>%
  addGeoJSON(new_borders$drawn)

Disclaimers

If you played enough with the border example, you might notice a couple of glitches and missing functionality. This is a good time for a reminder that this is alpha software and intended as a proof of concept. Please provide feedback, so that we can ensure a quality final product. In this case, the older version of Leaflet.Draw in the RStudio Viewer has some bugs, so clicking an existing point creates a new one rather than allowing editing of that point. Also, the returned list from edit_map has no knowledge of the provided features.

Selecting Regions

The newest version of leaflet provides crosstalk support, but support is currently limited to addCircleMarkers. This functionality is enhanced by sf’s use of list columns and its integration with dplyr verbs. Here is a quick example with the breweries91 data from mapview.

library(crosstalk)
library(mapview)
library(sf)
library(shiny)
library(dplyr)

# convert breweries91 from mapview into simple features
# and add a century column that we will use for selection
brew_sf <- st_as_sf(breweries91) %>%
  mutate(century = floor(founded/100)*100) %>%
  filter(!is.na(century)) %>%
  mutate(id = 1:n())

pts <- SharedData$new(brew_sf, key = ~id, group = "grp1")

ui <- fluidPage(
  fluidRow(
    column(4, filter_slider(id = "filterselect", label = "Century Founded",
                            sharedData = pts, column = ~century, step = 50)),
    column(6, leafletOutput("leaflet1"))
  ),
  h4("Selected points"),
  verbatimTextOutput("selectedpoints")
)

server <- function(input, output, session) {
  # unfortunately create SharedData again for scope
  pts <- SharedData$new(brew_sf, key = ~id, group = "grp1")
  lf <- leaflet(pts) %>%
    addTiles() %>%
    addMarkers()
  not_rendered <- TRUE
  # hack to only draw leaflet once
  output$leaflet1 <- renderLeaflet({
    if (req(not_rendered, cancelOutput = TRUE)) {
      not_rendered <- FALSE
      lf
    }
  })
  output$selectedpoints <- renderPrint({
    df <- pts$data(withSelection = TRUE)
    cat(nrow(df), "observation(s) selected\n\n")
    str(dplyr::glimpse(df))
  })
}

shinyApp(ui, server)

With mapedit, we would like to enhance the geospatial crosstalk integration to extend beyond leaflet::addCircleMarkers. In addition, we would like to provide an interactive interface to the geometric operations of sf, such as st_intersects(), st_difference(), and st_contains().
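As a flavour of the kind of operation we have in mind, here is a plain, non-interactive sketch: which breweries from the breweries91 demo data fall inside a hand-drawn polygon? The polygon coordinates are made up for illustration.

library(sf)
library(mapview) # for the breweries91 demo data
poly <- st_set_crs(st_as_sfc("POLYGON((10 49, 12 49, 12 50.5, 10 50.5, 10 49))"), 4326)
# logical vector: TRUE for breweries whose location intersects the polygon
st_intersects(st_as_sf(breweries91), poly, sparse = FALSE)[, 1]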

Proof of Concept 3

As a select/query proof of concept, assume we want to interactively select some US states for additional analysis. We will build off Bhaskar Karambelkar’s leaflet projection example using Bob Rudis’ albersusa package.

# use @bhaskarvk USA Albers with leaflet code
# https://bhaskarvk.github.io/leaflet/examples/proj4Leaflet.html
# devtools::install_github("hrbrmstr/albersusa")
library(albersusa)
library(sf)
library(leaflet)
library(mapedit)

spdf <- usa_composite() %>% st_as_sf()
pal <- colorNumeric(
  palette = "Blues",
  domain = spdf$pop_2014
)

bounds <- c(-125, 24, -75, 45)

(lf <- leaflet(
  options = leafletOptions(
    worldCopyJump = FALSE,
    crs = leafletCRS(
      crsClass = "L.Proj.CRS",
      code = 'EPSG:2163',
      proj4def = '+proj=laea +lat_0=45 +lon_0=-100 +x_0=0 +y_0=0 +a=6370997 +b=6370997 +units=m +no_defs',
      resolutions = c(65536, 32768, 16384, 8192, 4096, 2048, 1024, 512, 256, 128)
    ))) %>%
  fitBounds(bounds[1], bounds[2], bounds[3], bounds[4]) %>%
  setMaxBounds(bounds[1], bounds[2], bounds[3], bounds[4]) %>%
  addPolygons(
    data = spdf,
    weight = 1,
    color = "#000000",
    # adding group necessary for identification
    group = ~iso_3166_2,
    fillColor = ~pal(pop_2014),
    fillOpacity = 0.7,
    label = ~stringr::str_c(name, ' ', format(pop_2014, big.mark = ",")),
    labelOptions = labelOptions(direction = 'auto')#,
    #highlightOptions = highlightOptions(
    #  color = '#00ff00', bringToFront = TRUE, sendToBack = TRUE)
  )
)

# test out select_map with albers example
select_map(
  lf,
  style_false = list(weight = 1),
  style_true = list(weight = 4)
)

The select_map() function will return a data.frame with an id/group column and a selected column. select_map() will work with nearly all leaflet overlays and offers the ability to customize the styling of selected and unselected features.
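A possible follow-up, sketched under the assumption that the id/group column of the returned data.frame is named group (suggested by the group = ~iso_3166_2 mapping above, but not a documented API) and that the selection column is named selected:

# sel would be the data.frame returned by the select_map() call above
sel <- select_map(lf, style_false = list(weight = 1), style_true = list(weight = 4))
# keep only the selected states for further analysis (column names assumed)
chosen <- spdf[spdf$iso_3166_2 %in% sel$group[sel$selected], ]
plot(chosen["pop_2014"])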

Editing Attributes

A common task in geospatial analysis involves editing or adding feature attributes. While much of this can be accomplished in the R console, an interactive UI on a reference map can often help perform this task. Mapbox’s geojson.io provides a good reference point for some of the features we would like to provide in mapedit.

Proof of Concept 4

As a proof of concept, we made a Shiny app that thinly wraps a slightly modified geojson.io. Currently, we will have to pretend that there is a mechanism to load R feature data onto the map, since this functionality does not yet exist.

library(shiny)
edited_features <- runGitHub(
  "geojson.io", "timelyportfolio", ref = "shiny"
)

Conclusion

mapedit hopes to add useful interactivity to your geospatial workflows by leveraging powerful new functionality in R with the interactivity of HTML, JavaScript, and CSS. mapedit will be better with your feedback, requests, bug reports, use cases, and participation. We will report on progress periodically with blog posts on this site, and we will develop openly on the mapedit Github repo.

sf - plot, graticule, transform, units, cast, is

Thu, 01/12/2017 - 01:00

[view raw Rmd]

This year began with the R Consortium blog on simple features:

#rstats A new post by Edzer Pebesma reviews the status of the R Consortium’s Simple Features project: https://t.co/W8YqH3WQVJ

— Joseph Rickert (@RStudioJoe) January 3, 2017

This blog post describes changes of sf 0.2-8 and upcoming 0.2-9, compared to 0.2-7, in more detail.

Direct linking to Proj.4

Since 0.2-8, sf links directly to the Proj.4 library:

library(sf)
## Linking to GEOS 3.5.0, GDAL 2.1.0, proj.4 4.9.2

Before that, it would use the projection interface of GDAL, which uses Proj.4 but exposes only parts of it. The main reason for switching to Proj.4 is the ability to do stronger error checking. For instance, GDAL would interpret any unrecognized +datum field as WGS84:

# sf 0.2-7:
> st_crs("+proj=longlat +datum=NAD26")
$epsg
[1] NA

$proj4string
[1] "+proj=longlat +ellps=WGS84 +no_defs"

attr(,"class")
[1] "crs"

Now, with sf 0.2-8 we get a proper error in case of an unrecognized +datum field:

t = try(st_crs("+proj=longlat +datum=NAD26"))
attr(t, "condition")
## <simpleError in make_crs(x): invalid crs: +proj=longlat +datum=NAD26, reason: unknown elliptical parameter name>

plotting

The default plot method for sf objects (simple features with attributes, or data.frames with a simple feature geometry list-column) now plots the set of maps, one for each attribute, with automatic color scales:

nc = st_read(system.file("gpkg/nc.gpkg", package = "sf"), quiet = TRUE)
plot(nc)

well, that is all there is, basically. For plotting a single map, select the appropriate attribute

plot(nc["SID79"])

or only the geometry:

plot(st_geometry(nc))

graticules

Package sf gained a function st_graticule to generate graticules, grids formed by lines with constant longitude or latitude. Suppose we want to project nc to the state plane, and plot it with a longitude latitude graticule in NAD27 (the original datum of nc):

nc_sp = st_transform(nc["SID79"], 32119) # NC state plane, m
plot(nc_sp, graticule = st_crs(nc), axes = TRUE)

The underlying function, st_graticule, can be used directly to generate a simple object with graticules, but it is rather meant to be used by plotting functions that benefit from a graticule in the background, such as plot or ggplot. The function provides the end points of graticules and the angle at which they end; an example using Lambert equal area over the USA is found in the help page of st_graticule.
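For instance, a minimal (untested) sketch of calling it directly for the bounding box of nc and adding the result to a plot:

g = st_graticule(nc)               # graticule lines covering the bounding box of nc
plot(st_geometry(nc), axes = TRUE)
plot(st_geometry(g), add = TRUE, col = 'grey')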

The default plotting method for simple features with longitude/latitude coordinates is the equirectangular projection (also called geographic projection, or equidistant cylindrical (eqc) projection), which linearly maps longitude and latitude into \(x\) and \(y\), transforming \(y\) such that in the center of the map 1 km easting equals 1 km northing. This is also the default for sp::plot, sp::spplot and ggplot2::coord_quickmap. The official Proj.4 transformation for this is found here.

We can obtain e.g. a plate carrée projection (with one degree latitude equaling one degree longitude) with

caree = st_crs("+proj=eqc")
plot(st_transform(nc[1], caree), graticule = st_crs(nc), axes = TRUE, lon = -84:-76)

and we see indeed that the lon/lat grid is formed of squares.

The usual R plot for nc obtained by

plot(nc[1], graticule = st_crs(nc), axes = TRUE)

corrects for latitude. The equivalent, officially projected map is obtained by using the eqc projection with the correct latitude:

mean(st_bbox(nc)[c(2,4)])
## [1] 35.23582
eqc = st_crs("+proj=eqc +lat_ts=35.24")
plot(st_transform(nc[1], eqc), graticule = st_crs(nc), axes = TRUE)

so that in the center of these (identical) maps, 1 km east equals 1 km north.

geosphere and units support

sf now uses functions in package geosphere to compute distances or areas on the sphere. This is only possible for points and not for arbitrary feature geometries:

centr = st_centroid(nc)
## Warning in st_centroid.sfc(st_geometry(x)): st_centroid does not give
## correct centroids for longitude/latitude data
st_distance(centr[c(1,10)])[1,2]
## 34093.21 m

As a comparison, we can compute distances in two similar projections, each having a different measurement unit:

centr.sp = st_transform(centr, 32119) # NC state plane, m
(m <- st_distance(centr.sp[c(1,10)])[1,2])
## 34097.54 m
centr.ft = st_transform(centr, 2264) # NC state plane, US feet
(ft <- st_distance(centr.ft[c(1,10)])[1,2])
## 111868.3 US_survey_foot

and we see that the units are reported, by using package units. To verify that the distances are equivalent, we can compute

ft/m
## 1 1

which does automatic unit conversion before computing the ratio. (Here, 1 1 should be read as one, unitless (with unit 1)).
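Using the units package directly, the conversion can also be made explicit (a sketch using the current units API, which may differ from the version available when this post was written):

library(units)
set_units(ft, m) # express the US-survey-feet distance in metres; equals m up to rounding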

For spherical distances, sf uses geosphere::distGeo. It passes on the parameters of the datum, as can be seen from

st_distance(centr[c(1,10)])[1,2] # NAD27
## 34093.21 m
st_distance(st_transform(centr, 4326)[c(1,10)])[1,2] # WGS84
## 34094.28 m

Other measures come with units too, e.g. st_area

st_area(nc[1:5,])
## Units: m^2
## [1] 1137388604  611077263 1423489919  694546292 1520740530

units vectors can be coerced to numeric by

as.numeric(st_area(nc[1:5,]))
## [1] 1137388604  611077263 1423489919  694546292 1520740530

type casting

With help from Mike Sumner and Etienne Racine, we managed to get a working st_cast, which helps convert one geometry type into another.

casting individual geometries (sfg)

Casting individual geometries will close polygons when needed:

st_point(c(0,1)) %>% st_cast("MULTIPOINT")
## MULTIPOINT(0 1)
st_linestring(rbind(c(0,1), c(5,6))) %>% st_cast("MULTILINESTRING")
## MULTILINESTRING((0 1, 5 6))
st_linestring(rbind(c(0,0), c(1,0), c(1,1))) %>% st_cast("POLYGON")
## POLYGON((0 0, 1 0, 1 1, 0 0))

and will warn on loss of information:

st_linestring(rbind(c(0,1), c(5,6))) %>% st_cast("POINT")
## Warning in st_cast.LINESTRING(., "POINT"): point from first coordinate only
## POINT(0 1)
st_multilinestring(list(matrix(1:4,2), matrix(1:6,,2))) %>% st_cast("LINESTRING")
## Warning in st_cast.MULTILINESTRING(., "LINESTRING"): keeping first
## linestring only
## LINESTRING(1 3, 2 4)

casting sets of geometries (sfc)

Casting sfc objects can group or ungroup geometries:

# group:
st_sfc(st_point(0:1), st_point(2:3), st_point(4:5)) %>%
  st_cast("MULTIPOINT", ids = c(1,1,2))
## Geometry set for 2 features
## geometry type:  MULTIPOINT
## dimension:      XY
## bbox:           xmin: 0 ymin: 1 xmax: 4 ymax: 5
## epsg (SRID):    NA
## proj4string:    NA
## MULTIPOINT(0 1, 2 3)
## MULTIPOINT(4 5)
# ungroup:
st_sfc(st_multipoint(matrix(1:4,,2))) %>% st_cast("POINT")
## Geometry set for 2 features
## geometry type:  POINT
## dimension:      XY
## bbox:           xmin: 1 ymin: 3 xmax: 2 ymax: 4
## epsg (SRID):    NA
## proj4string:    NA
## POINT(1 3)
## POINT(2 4)

st_cast with no to argument will convert mixes of GEOM and MULTIGEOM to MULTIGEOM, where GEOM is POINT, LINESTRING or POLYGON, e.g.

st_sfc(
  st_multilinestring(list(matrix(5:8,,2))),
  st_linestring(matrix(1:4,2))
) %>% st_cast()
## Geometry set for 2 features
## geometry type:  MULTILINESTRING
## dimension:      XY
## bbox:           xmin: 1 ymin: 3 xmax: 6 ymax: 8
## epsg (SRID):    NA
## proj4string:    NA
## MULTILINESTRING((5 7, 6 8))
## MULTILINESTRING((1 3, 2 4))

or unpack geometry collections:

x <- st_sfc(
  st_multilinestring(list(matrix(5:8,,2))),
  st_point(c(2,3))
) %>% st_cast("GEOMETRYCOLLECTION")
x
## Geometry set for 2 features
## geometry type:  GEOMETRYCOLLECTION
## dimension:      XY
## bbox:           xmin: 2 ymin: 3 xmax: 6 ymax: 8
## epsg (SRID):    NA
## proj4string:    NA
## GEOMETRYCOLLECTION(MULTILINESTRING((5 7, 6 8)))
## GEOMETRYCOLLECTION(POINT(2 3))
x %>% st_cast()
## Geometry set for 2 features
## geometry type:  GEOMETRY
## dimension:      XY
## bbox:           xmin: 2 ymin: 3 xmax: 6 ymax: 8
## epsg (SRID):    NA
## proj4string:    NA
## MULTILINESTRING((5 7, 6 8))
## POINT(2 3)

casting on sf objects

The casting of sf objects works in principle identically, except that for ungrouping, attributes are repeated (and might give rise to warning messages),

# ungroup:
st_sf(a = 1, geom = st_sfc(st_multipoint(matrix(1:4,,2)))) %>%
  st_cast("POINT")
## Warning in st_cast.sf(., "POINT"): repeating attributes for all sub-
## geometries for which they may not be constant
## Simple feature collection with 2 features and 1 field
## geometry type:  POINT
## dimension:      XY
## bbox:           xmin: 1 ymin: 3 xmax: 2 ymax: 4
## epsg (SRID):    NA
## proj4string:    NA
##   c.1..1.       geom
## 1       1 POINT(1 3)
## 2       1 POINT(2 4)

and for grouping, attributes are aggregated, which requires an aggregation function

# group:
st_sf(a = 1:3, geom = st_sfc(st_point(0:1), st_point(2:3), st_point(4:5))) %>%
  st_cast("MULTIPOINT", ids = c(1,1,2), FUN = mean)
## Simple feature collection with 2 features and 2 fields
## geometry type:  MULTIPOINT
## dimension:      XY
## bbox:           xmin: 0 ymin: 1 xmax: 4 ymax: 5
## epsg (SRID):    NA
## proj4string:    NA
##   ids.group   a                 geom
## 1         1 1.5 MULTIPOINT(0 1, 2 3)
## 2         2   3      MULTIPOINT(4 5)

type selection

In case we have a mix of geometry types, we can select those of a particular geometry type by the new helper function st_is. As an example we create a mix of polygons, lines and points:

g = st_make_grid(n = c(2,2), offset = c(0,0), cellsize = c(2,2))
s = st_sfc(st_polygon(list(rbind(c(1,1), c(2,1), c(2,2), c(1,2), c(1,1)))))
i = st_intersection(st_sf(a = 1:4, geom = g), st_sf(b = 2, geom = s))
## Warning in st_intersection(st_sf(a = 1:4, geom = g), st_sf(b = 2, geom =
## s)): attribute variables are assumed to be spatially constant throughout
## all geometries
i
## Simple feature collection with 4 features and 2 fields
## geometry type:  GEOMETRY
## dimension:      XY
## bbox:           xmin: 1 ymin: 1 xmax: 2 ymax: 2
## epsg (SRID):    NA
## proj4string:    NA
##   a b                       geometry
## 1 1 2 POLYGON((2 2, 2 1, 1 1, 1 2...
## 2 2 2           LINESTRING(2 2, 2 1)
## 3 3 2           LINESTRING(1 2, 2 2)
## 4 4 2                     POINT(2 2)

and can select using dplyr::filter, or directly using st_is:

filter(i, st_is(geometry, c("POINT")))
## Simple feature collection with 1 feature and 2 fields
## geometry type:  GEOMETRY
## dimension:      XY
## bbox:           xmin: 1 ymin: 1 xmax: 2 ymax: 2
## epsg (SRID):    NA
## proj4string:    NA
##   a b   geometry
## 1 4 2 POINT(2 2)
filter(i, st_is(geometry, c("POINT", "LINESTRING")))
## Simple feature collection with 3 features and 2 fields
## geometry type:  GEOMETRY
## dimension:      XY
## bbox:           xmin: 1 ymin: 1 xmax: 2 ymax: 2
## epsg (SRID):    NA
## proj4string:    NA
##   a b             geometry
## 1 2 2 LINESTRING(2 2, 2 1)
## 2 3 2 LINESTRING(1 2, 2 2)
## 3 4 2           POINT(2 2)
st_is(i, c("POINT", "LINESTRING"))
## [1] FALSE  TRUE  TRUE  TRUE

OpenEO: a GDAL for Earth Observation Analytics

Tue, 11/29/2016 - 01:00

Earth observation data, or satellite imagery, is one of the richest sources for finding out how our Earth is changing. The amount of Earth observation data we collect today has become too large to analyze on a single computer. Although most Earth observation data is available for free, the practical difficulties we currently face when trying to analyze it seriously constrain the potential benefits for citizens, industry, scientists, or society. How did we get here?

GIS: the 80’s

To understand the current difficulty of analyzing big Earth observation data, let us look at how geographic information systems (GIS) developed over the past decades. In the early days, they were isolated structures:

where one would get things done in isolation, without any chance of verifying or comparing it with another system: these were expensive systems, hard to set up and maintain, and (with the exception of GRASS) closed databases and closed source software.

File formats: the 90’s

In the 90’s, file formats came up: several systems started supporting various file formats, and dedicated programs that would do certain file format conversions became available. This made many new things possible, such as the integration of S-Plus with Arc-Info, but to fully realize this, each software would have to implement drivers for every file format. Developing new applications or introducing a new file format are both difficult in this model.

GDAL: the 00’s

Then came GDAL! This Geospatial Data Abstraction Layer is a software library that reads and writes raster and vector data. Instead of having to write drivers for each file format, application developers needed to only write a GDAL client driver. When proposing a new file format, instead of having to convince many application developers to support it, only a GDAL driver for the new format was required to realize quick adoption. Instead of many-to-many links, only many-to-one links were needed:

R and python suddenly became strong GIS and spatial modelling tools. ArcGIS users could suddenly deal with the weird data formats from hydrologists, meteorologists, and climate scientists. Developing innovative applications and introducing new file formats became attractive.
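The effect is easy to see from R: one GDAL-backed reader handles any supported format, so no per-format code is needed. A minimal sketch (the file names are hypothetical, and this assumes GDAL was built with the corresponding drivers):

library(rgdal)
v1 <- readOGR("fields.shp", layer = "fields")   # ESRI Shapefile
v2 <- readOGR("fields.gpkg", layer = "fields")  # GeoPackage: same call, different GDAL driver
r  <- readGDAL("elevation.tif")                 # a raster, read through the same library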

For analyzing big Earth observation data, GDAL has its limitations, including:

  • it has weak data semantics: the data model does not include observation time or band wavelength (color), but addresses bands by dataset name and band number (see the illustration after this list),
  • raster datasets cannot tell whether pixels refer to points, cells with constant value, or cells with an aggregated value; most regridding methods silently assume the least likely option (points),
  • the library cannot do much processing, meaning that clients that do the processing and use GDAL for reading and writing need to be close to where the data is, which is far away from the user’s computer.
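The first limitation is easy to illustrate from R (a sketch only; the file name is hypothetical): a GDAL-based reader addresses a band purely by file name and band number, and nothing in the data model tells us which date or wavelength that band represents.

library(raster)                        # raster I/O goes through rgdal/GDAL
b <- raster("S2_scene.tif", band = 4)  # "band 4" -- but which wavelength? observed when?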
Big Earth Observation data: the 10’s

We currently see a plethora of cloud environments that all try to solve the problem of how to effectively analyze Earth Observation data that are too large to download and process locally. It is very much reminiscent of the isolated GIS of the 80’s, in the sense that strongly differing systems have been built with large efforts, and once a certain problem has been solved in one system it is practically impossible to also solve it in another. Several systems work with data processing back-ends that are not open source. The following figure only shows a few systems for illustration:

Open Earth Observation Science

For open science, open data is a necessary but not sufficient condition. In order to fight the reproducibility crisis, analysis procedures need to be fully transparent and reproducible. To get there, it needs to be simple to execute a given analysis on two different data centers and verify that they yield identical results. Today, this sounds rather utopian. On the other hand, we believe that one of the reasons that data science has taken off is that the software making it possible (e.g. python, R, spark) was able to hide irrelevant details and had reached a stage where it was transparent, stable, and robust.

Google Earth Engine is an excellent example of an interface that is simple, in the sense that users can directly address sensors and observation times and work with composites rather than having to comb through the raw data consisting of large collections of files (“scenes”). The computational back-end however is not transparent: it lets the user execute functions as far as they are provided by the API, but not inspect the source code of these functions, or modify them.

How then can we make progress out of a world of currently incompatible, isolated Earth Observation data center efforts? Learning from the past, the solution might be a central interface between users who are allowed to think in high-level objects (“Landsat 7 over Western Europe, 2005-2015”) and a set of computational Earth Observation data centers that agree on supporting this interface. A GDAL for Earth Observation Analytics, so to speak. We’ll call it OpenEO.

Open EO Interface: the 20’s

The following figure shows schematically how this could look: users write scripts in a neutral language that addresses the data and operations, but ignores the computer architecture of the back-end. The back-end carries out the computations and returns results. It needs to be able to identify users (e.g. verify their permission to use it), but also to tell which data it provides and which computational operations it affords.

More in detail, and following the GDAL model closely, the architecture consists of

  • a client- and back-end-neutral set of APIs for both sides, defining which functions users can call, and which functions back-ends have to support,
  • a driver for each back-end that translates the neutral requirements into the platform specific offerings, or information that a certain function is not available (e.g. creating new datasets in a write-only back-end)
  • a driver for each client that binds the interfaces to the particular front-end such as R or python
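To make this concrete, here is a purely hypothetical sketch of what such a neutral client API might look like from R; none of these functions exist, and the names and arguments are invented here for illustration only:

con  <- eo_connect("https://eo-backend.example.org", user = "me")  # back-end identifies the user
coll <- eo_collection(con, "Landsat 7", extent = "Western Europe",
                      time = c("2005-01-01", "2015-12-31"))        # think in high-level objects
ndvi <- eo_apply(coll, function(nir, red) (nir - red) / (nir + red))
res  <- eo_compute(ndvi)   # executed on the back-end; only the (small) result is transferred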

With an architecture like this, it would not only become easy and attractive for users to switch from one back-end to another, but also much easier to choose between back-ends in the first place, because the value propositions of the back-end providers become comparable, on paper as well as in practice.

A rough idea of the architecture is shown in this figure:

One of the drivers will interface collections of scenes in a back-end, or on the local hard drive. GDAL will continue to play an important role in back-ends, if the back-end uses files in a format supported by GDAL, and in the front-end, when the (small) results of big computations are fetched.

The way forward

We, as the authors of this piece, have started to work on these ideas in current activities, projects, and proposals. We are planning to submit a proposal to the EC H2020 call EO-2-2017: EO Big Data Shift. We hope that the call writers (and the reviewers) have the same problem in mind as what we explain above.

We are publishing this article because we believe this work has to be done, and only a concerted effort can realize it. If you agree and would like to participate, please get in touch with us. If successful, this will in the longer run benefit science, industry, and all of us.

Earlier blogs related to this topic


Simple features now on CRAN

Wed, 11/02/2016 - 01:00

Submitting a package to CRAN is always an adventure, submitting a package with lots of external dependencies even more so. A week ago I submitted the simple features for R package to CRAN, and indeed, all hell broke loose! Luckily, the people behind CRAN are extremely patient, friendly and helpful, but they do test your code on a big server farm with computers in 13 different flavors.

Of course we test code on linux and windows after every code push to github, but that feels like talking to a machine, like remotely compiling and testing. CRAN feels different: you first need to manually confirm that you did your utmost best to solve problems, and then there is a person telling you everything still remaining! Of course, this is of incredible help, and a big factor in the R community’s sustainability.

Package sf is somewhat special in that it links to GEOS and GDAL, and in particular GDAL links, depending on how it is installed, to many (77 in my case) other libraries, each with their own versions. After first submission of sf 0.2-0, I ran into the following issues with my code.

sf 0.2-0
  • I had to change all links starting with http://cran.r-project.org/web/packages/pkg into https://cran.r-project.org/packages=pkg. A direct link to a units vignette on CRAN had to be removed.
  • some of the tests gave very different output, because my default testing platforms (laptop, travis) have PostGIS, and CRAN machines don’t; I changed this so that testing without PostGIS (as on CRAN) is now mostly silent (see the sketch after this list)
  • the tests still output differences in GDAL and GEOS versions, but that was considered OK.
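The PostGIS point boils down to guarding the database-dependent tests, roughly like this (a sketch of the general idea, not the actual sf test code):

has_postgis <- function() {
  requireNamespace("RPostgreSQL", quietly = TRUE) &&
    !inherits(try(DBI::dbConnect(RPostgreSQL::PostgreSQL(), dbname = "postgis"),
                  silent = TRUE), "try-error")
}
if (has_postgis()) {
  # run the tests that read from / write to PostGIS; on CRAN this block is skipped silently
}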

That was it! The good message

Thanks, on CRAN now. Best -k

arrived! Party time! Too early. In the evening (CRAN never sleeps) an email arrived, mentioning:

This worked for my incoming checks, but just failed for the regular checks, with Error in loadNamespace(name) : there is no package called 'roxygen2' Calls: :: ... tryCatch -> tryCatchList -> tryCatchOne -> <Anonymous> Execution halted For some reason roxygen2 is not working. Is it installed? ERROR: configuration failed for package ‘sf’

with lots of helpful hints. Indeed, my package generated Rcpp files and manual pages dynamically during install; this requires Rcpp and roxygen2 to be available unconditionally and they aren’t.

So I sat down and worked on 0.2-1, to address this. Before I could do that, an email from Oxford (Brian Ripley) arrived, telling me that sf had caused some excitement in the multi-flavor server farm:

Here, it should be noted (again) that the only two NOTEs were due to the excellent work of Jeroen Ooms who compiled GDAL and many other libraries for rwinlib, and prepared sf for downloading and using them. The rest was my contribution.

In addition, an issue was raised by Dirk Eddelbuettel, telling me that his Rcpp reverse-check farm had notified him that sf required GDAL 2.0 or later, but not by properly checking its version: instead it generated plain compile errors. The horror, the horror.

sf 0.2-1: roxygen, Rcpp, SQLITE on Solaris

sf 0.2-1 tried to address the Rcpp and roxygen2 problems: I took their generation out of the configure and configure.win scripts. I added all automatically derived files to the github repo, to get everything in sync. Worked:

Thanks, on CRAN now. [Tonight I'll know for sure ...] Best -k

… and no emails in the evening.
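In practice this amounts to running the generators once at development time and committing their output, so that neither Rcpp nor roxygen2 is needed at install time (a sketch of the general approach, not the exact sf setup):

Rcpp::compileAttributes()  # writes src/RcppExports.cpp and R/RcppExports.R
devtools::document()       # runs roxygen2, writes man/*.Rd and NAMESPACE
# commit the generated files; configure / configure.win no longer call these tools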

Also, the errors on Solaris platforms were caused by the SQLITE library not being present, hence GeoPackage not being available as a GDAL driver. As a consequence, I had to revert the examples reading a GeoPackage polygons file to ones reading a shapefile. Bummer.

Another issue was related to relying on GDAL 2.1 features without testing for it; this was rather easily solved by conditional compiling.

This gave:

meaning SOME improvement, but where do these UBSAN reports suddenly come from!?

sf 0.2-2: byte swapping, memory alignment

Over-optimistic as I am, I had commented that life is too short to do byte swapping when reading WKB. Welcome to the CRAN server farm: Solaris-Sparc is big-endian. Although the R code reading WKB does read non-native endian, relying on it would have required rewriting all tests, so I added byte swapping to the C++ code, using a helpful SO post.
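For illustration (my own minimal example, not code from sf), this is what the endianness handling looks like on the R side, where readBin can convert explicitly when the WKB stream is big-endian:

# WKB for POINT(1 2), encoded big-endian (XDR):
wkb <- as.raw(c(0x00,                            # byte-order flag: 00 = big-endian
                0x00, 0x00, 0x00, 0x01,          # geometry type 1 = POINT
                0x3f, 0xf0, 0, 0, 0, 0, 0, 0,    # x = 1.0 as big-endian IEEE double
                0x40, 0x00, 0, 0, 0, 0, 0, 0))   # y = 2.0
con <- rawConnection(wkb)
readBin(con, "raw", n = 1)                                # byte-order byte
readBin(con, "integer", n = 1, size = 4, endian = "big")  # geometry type: 1
readBin(con, "double",  n = 2, size = 8, endian = "big")  # coordinates: 1 2
close(con)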

The UBSAN issues all listed something like:

UBSAN: don't assume that pointers can point anywhere to, in valid memory; wkb.cpp:185:39: runtime error: load of misaligned address 0x6150001d0e29 for type 'uint32_t', which requires 4 byte alignment 0x6150001d0e29: note: pointer points here 00 00 00 01 06 00 00 00 01 00 00 00 01 03 00 00 00 01 00 00 00 1b 00 00 00 00 00 00 a0 41 5e 54

Sounds scary? What I had done was for every coordinate use a double pointer, pointing it to the right place in the WKB byte stream, then copy its value, and move it 8 bytes. I love this *d++ expression. But you can’t do this anymore! Although the code worked on my machines, you can’t put a double pointer to any location and assume it’ll work everywhere. The solution was to memcpy the relevant bytes to a double value on the stack, and copy that into the Rcpp::NumericVector.

To be done:

All these changes have brought me here:

where you see that linux and Windows compile (all NOTEs indicate that the library is too large, which is unavoidable with GDAL) and that errors are Mac related:

r-devel-macos-x86_64-clang ** testing if installed package can be loaded Error in dyn.load(file, DLLpath = DLLpath, ...) : unable to load shared object '/Users/ripley/R/packages/tests-devel/sf.Rcheck/sf/libs/sf.so': dlopen(/Users/ripley/R/packages/tests-devel/sf.Rcheck/sf/libs/sf.so, 6): Symbol not found: _H5T_NATIVE_DOUBLE_g Referenced from: /Users/ripley/R/packages/tests-devel/sf.Rcheck/sf/libs/sf.so Expected in: flat namespace in /Users/ripley/R/packages/tests-devel/sf.Rcheck/sf/libs/sf.so

which indicates a problem with GDAL not linking to the HDF5 library (unsolved).

The second:

r-release-osx-x86_64-mavericks checking gdal-config usability... yes configure: GDAL: 1.11.4 checking GDAL version >= 2.0.0... yes

indicates that

  • this platform still runs GDAL 1.x, so needs to be upgraded and that
  • my check for GDAL version present on the system still does not work!
CRAN flavors

The CRAN flavors are a great asset that teaches you about problems of all kinds at an early stage. Without them, users would at some stage have run into problems that are now caught up front. Thanks to the tremendous effort of the CRAN team!!

UPDATE, Dec 21, 2016
  • Thanks to Roger Bivand, Simon Urbanek and Brian Ripley for constructive help, the MacOSX-mavericks binary build is now on CRAN, the issue is described here

Automatic units in axis labels

Thu, 09/29/2016 - 02:00

This blog post concerns the development version of units, installed by

devtools::install_github("edzer/units")

[view raw Rmd]

Have you ever tried to properly add measurement units to R plots? It might go like this:

xlab = parse(text = "temperature ~~ group('[', degree * C, ']')")
ylab = parse(text = "speed ~~ group('[', m * ~~ s^-1, ']')")
par(mar = par("mar") + c(0, .3, 0, 0)) # avoids cutting of superscript
plot(3 + 1:10 + 2 * rnorm(10), xlab = xlab, ylab = ylab)

The main observation is, of course, that it can be done. However,

  • it looks geeky, and not quite intuitive
  • you would typically postpone this work to just before submitting the paper, or during review
  • you need this so infrequently that you tend to forget how it works.

Although well-written help is found in ?plotmath, all three observations cause frustration.

The original paper describing plotmath is by Paul Murrell and Ross Ihaka. R core member Paul Murrell also wrote package grid, part of base R. Few people use it directly, but without it ggplot2 or lattice could not exist.

Automatic unit handling

The new units CRAN package now makes working with units

  • easier
  • automatic, and
  • less error-prone

Here is an example using mtcars. First, we specify the imperial units to those known in the udunits2 database:

library(units)
gallon = make_unit("gallon")
consumption = mtcars$mpg * with(ud_units, mi/gallon)
displacement = mtcars$disp * ud_units[["in"]]^3

For displacement, we cannot use the normal lookup in the database

displacement = mtcars$disp * with(ud_units, in)

because in (inch) is also a reserved word in R.

We convert these values to SI units by

units(displacement) = with(ud_units, cm^3)
units(consumption) = with(ud_units, km/l)
consumption[1:5]
## Units: km/l
## [1] 8.928017 8.928017 9.693276 9.098075 7.950187

Automatic measurement units in axis labels

We can plot these numeric variables of type units by

par(mar = par("mar") + c(0, .1, 0, 0)) # avoids cutting of brackets at lhs plot(displacement, consumption)

The units automatically appear in axis labels! If we want to have negative power instead of division bars, we can set a global option

units_options(negative_power = TRUE) # division becomes ^-1

Expressions such as

1/displacement [1:10] ## Units: cm^-3 ## [1] 0.0003813984 0.0003813984 0.0005650347 0.0002365261 0.0001695104 ## [6] 0.0002712166 0.0001695104 0.0004159764 0.0004334073 0.0003641035

automatically convert units, which also happens in plots (note the converted units symbols):

par(mar = par("mar") + c(0, .3, 0, 0)) plot(1/displacement, 1/consumption)

How to do this with ggplot?

We can of course plot these data by dropping units:

library(ggplot2) ggplot() + geom_point(aes(x = as.numeric(displacement), y = as.numeric(consumption)))

but that doesn’t show us units. Giving the units as variables gives an error:

ggplot() + geom_point(aes(x = displacement, y = consumption)) ## Don't know how to automatically pick scale for object of type units. Defaulting to continuous. ## Don't know how to automatically pick scale for object of type units. Defaulting to continuous. ## Error in Ops.units(x, range[1]): both operands of the expression should be "units" objects

(I could make that error go away by letting units drop the requirement that in a comparison both sides should have compatible units, which of course would be wrong.)

We can then go all the way with

ggplot() + geom_point(aes(x = as.numeric(displacement), y = as.numeric(consumption))) + xlab(make_unit_label("displacement", displacement)) + ylab(make_unit_label("consumption", consumption))

which at least doesn’t cut off the left label, but feels too convoluted and error-prone.

Oh ggplot gurus, who can help us out here? How can we obtain that last plot by

ggplot() + geom_point(aes(x = displacement, y = consumption))

?

Update of Dec 2, 2016

Thanks to ggguru Thomas Lin Pedersen, automatic units in axis labels of ggplots are now provided by CRAN package ggforce:

library(ggforce) ggplot() + geom_point(aes(x = displacement, y = consumption))

and see this vignette for more examples. In addition to printing units in default axis labels, it allows for on-the-fly unit conversion in ggplot expressions:

dm = with(ud_units, dm)
gallon = with(ud_units, gallon)
mi = with(ud_units, mi)
ggplot() + geom_point(aes(x = displacement, y = consumption)) +
  scale_x_unit(unit = dm^3) + scale_y_unit(unit = mi/gallon)

Related posts/articles

The future of R spatial

Mon, 09/26/2016 - 10:00

Last week’s geostat summer school in Albacete was a lot of fun, with about 60 participants and 10 lecturers. Various courses were given on handling, analyzing and modelling spatial and spatiotemporal data, using open source software. Participants came from all kinds of directions, not only the geosciences but also anthropology, epidemiology and, surprisingly, many from biology and ecology. Tom Hengl invited us to discuss the future of spatial and spatiotemporal analysis on day 2:

@edzerpebesma talking about the future of spatial and spatiotemporal analysis at #geostat2016 @uclm_inter pic.twitter.com/yfQL2vb5ii

— Rubén G. Mateo (@RubenGMateo) September 20, 2016

In the background of the screen, you see the first appveyor (= windows) build of sf, the simple features for R package. It means that thanks to Jeroen Ooms and rwinlib, windows users can now build binary packages that link to GDAL 2.1, GEOS and Proj.4:

Windows users with Rtools installed can now build and install sfr. Opens the way for others to directly Rcpp into gdal2. Ta2 @opencpu !

— Edzer Pebesma (@edzerpebesma) September 21, 2016

Thanks to the efficient well-known-binary interface of sf, and thanks to using C++ and Rcpp, compared to sp the sf package now reads large feature sets much (18 x) faster into much (4 x) smaller objects (benchmark shapefile provided by Robin Lovelace):

> system.time(r <- rgdal::readOGR(".", "gis.osm_buildings_v06"))
OGR data source with driver: ESRI Shapefile
Source: ".", layer: "gis.osm_buildings_v06"
with 487576 features
It has 6 fields
   user  system elapsed
 90.312   0.744  91.053
> object.size(r)
1556312104 bytes
> system.time(s <- sf::st_read(".", "gis.osm_buildings_v06"))
Reading layer gis.osm_buildings_v06 from data source . using driver "ESRI Shapefile"
features:       487576
fields:         6
converted into: MULTIPOLYGON
proj4string:    +proj=longlat +datum=WGS84 +no_defs
   user  system elapsed
  5.100   0.092   5.191
> object.size(s)
410306448 bytes

Raster data

Currently, R package raster is gradually being ported to C++ for efficiency reasons. For reading and writing data through GDAL, it uses rgdal, so when going through a big (cached) raster in C++ it has to go through C++ → R → rgdal → R → C++ for every chunk of data. The current set of raster classes

library(raster) Loading required package: sp > showClass("Raster") Virtual Class "Raster" [package "raster"] Slots: Name: title extent rotated rotation ncols nrows crs Class: character Extent logical .Rotation integer integer CRS Name: history z Class: list list Extends: "BasicRaster" Known Subclasses: Class "RasterLayer", directly Class "RasterBrick", directly Class "RasterStack", directly Class ".RasterQuad", directly Class "RasterLayerSparse", by class "RasterLayer", distance 2 Class ".RasterBrickSparse", by class "RasterBrick", distance 2

has grown somewhat ad hoc, and should be replaced by a single class that supports

  • one or more layers (bands, attributes)
  • time as a dimension
  • altitude or depth as a dimension (possibly expressed as pressure level)
The future

So, what does the future of R spatial look like?

  1. vector data use simple features, now in package sf
  2. raster data get a single, flexible class that generalizes all Raster* classes now in raster and integrates with simple features
  3. vector and raster data share a clear and consistent interface, no more conflicting function names
  4. raster computing directly links to GDAL, but supports distributed computing back ends provided e.g. by SciDB, Google Earth Engine or rasdaman
  5. spatiotemporal classes in spacetime and trajectories build on simple features or raster
  6. support for measurement units
  7. support for strong typing that encourages meaningful computation.

Exciting times are ahead of us. We need your help!

Reading well-known-binary into R

Thu, 09/01/2016 - 11:00

This blog post describes ways to read binary simple feature data into R, and compares them.

WKB (well-known binary) is the (ISO) standard binary serialization for simple features. You often see it printed in hexadecimal notation, e.g. in spatially extended databases such as PostGIS:

postgis=# SELECT 'POINT(1 2)'::geometry; geometry -------------------------------------------- 0101000000000000000000F03F0000000000000040 (1 row)

where the alternative form is the human-readable text (Well-known text) form:

postgis=# SELECT ST_AsText('POINT(1 2)'::geometry); st_astext ------------ POINT(1 2) (1 row)

In fact, the WKB is the way databases store features in BLOBs (binary large objects). This means that, unlike well-known text, reading well-known binary involves

  • no loss of precision caused by text <–> binary conversion,
  • no conversion of data needed at all (provided the endianness is native)

As a consequence, it should be possible to do this blazingly fast. Also with R? And large data sets?

Three software scenarios

I compared three software implementations:

  1. sf::st_as_sfc (of package sf) using C++ to read WKB
  2. sf::st_as_sfc (of package sf) using pure R to read WKB (but C++ to compute bounding box)
  3. wkb::readWKB (of package wkb) using pure R to read features into sp-compatible objects

Note that the results below were obtained after profiling, and implementing expensive parts in C++.

Three geometries

I created three different (sets of) simple features to compare read performance: one large and simple line, one data set with many small lines, and one multi-part line containing many sub-lines:

  1. single LINESTRING with many points: a single LINESTRING with one million nodes (points) is read into a single simple feature
  2. many LINESTRINGs with few points: half a million simple features of type LINESTRING are read, each having two nodes (points)
  3. single MULTILINESTRING with many short lines: a single simple feature of type MULTILINESTRING is read, consisting of half a million line segments, each line segment consisting of two points.

A reproducible demo-script is found in the sf package here, and can be run by

devtools::install_github("edzer/sfr") demo(bm_wkb)

Reported run times are in seconds, and were obtained by system.time().
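For readers who want to reproduce the flavour of these timings without the demo script, a minimal sketch follows; it assumes the current sf API (st_linestring, st_sfc, st_as_binary, st_as_sfc), which differs slightly from the pre-release naming used elsewhere in this post:

library(sf)
library(wkb)
# half a million two-point LINESTRINGs, serialized to a list of raw WKB vectors:
lns <- lapply(1:5e5, function(i) st_linestring(matrix(runif(4), 2)))
wkb_list <- st_as_binary(st_sfc(lns))
system.time(s1 <- st_as_sfc(wkb_list))              # C++ reader in sf
system.time(s2 <- wkb::readWKB(unclass(wkb_list)))  # pure-R reader, returns sp objects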

single LINESTRING with many points

                     expression   user system elapsed
               sf::st_as_sfc(.)  0.032  0.000   0.031
 sf::st_as_sfc(., pureR = TRUE)  0.096  0.012   0.110
                wkb::readWKB(.)  8.276  0.000   8.275

We see that for this case both sf implementations are comparable; this is due to the fact that the whole line of 16 Mb is read into R with a single readBin call: C++ can’t do this much faster.

I suspect wkb::readWKB is slower here because instead of reading a complete matrix in one step it makes a million calls to readPoint, and then merges the points read in R. This adds a few million function calls. Since only a single Line is created, not much overhead from sp can take place here.

Function calls, as John Chambers explains in Extending R, have a constant overhead of about 1000 instructions. Having lots of them may become expensive, if each of them does relatively little.

many LINESTRINGs with few points

                     expression    user system elapsed
               sf::st_as_sfc(.)   1.244  0.000   1.243
 sf::st_as_sfc(., pureR = TRUE)  55.004  0.056  55.063
                wkb::readWKB(.) 257.092  0.192 257.291

Here we see a strong performance gain of the C++ implementation: all the object creation is done in C++, without R function calls. wkb::readWKB slowness may be largely due to overhead caused by sp: creating Line and Lines objects, object validation, computing bounding box.

I made the C++ and “pureR” implementations considerably faster by moving the bounding box calculation to C++. The C++ implementation was further optimized by moving the type check to C++: if a mix of types is read from a set of WKB objects, sfc will coerce them to a single type (e.g., a set of LINESTRING and MULTILINESTRING will be coerced to all MULTILINESTRING.)

single MULTILINESTRING with many short lines

                     expression   user system elapsed
               sf::st_as_sfc(.)  0.348  0.000   0.348
 sf::st_as_sfc(., pureR = TRUE) 24.088  0.008  24.100
                wkb::readWKB(.) 87.072  0.004  87.074

Here we see again the cost of function calls: both “pureR” in sf and wkb::readWKB are much slower due to the many function calls; the latter also due to object management and validation in sp.

Discussion

Reading well-known binary spatial data into R can be done pretty elegantly in pure R, but in many scenarios it is much faster to use C++. We observe speed gains of up to a factor of 250.

Book review: Extending R

Wed, 08/17/2016 - 02:00

“Extending R”, by John M. Chambers; Paperback $69.95, May 24, 2016 by Chapman and Hall/CRC; 364 Pages - 7 B/W Illustrations;

R is a free software environment for statistical computing and graphics. It started as a free implementation of the S language, which was back then commercially available as S-Plus, and over the past ten years or so it has become the lingua franca of statistics, the main language people use to communicate statistical computation. R’s popularity stems partly from the fact that it is free and open source, partly from the fact that it is easily extendible: through add-on packages that follow a clearly defined structure, new statistical ideas can be implemented, shared, and used by others. Using R, the computational aspects of research can be communicated in a reproducible way, understood by a large audience.

John Chambers is (co-)author of the four leading – “brown”, “blue”, “white”, “green” – books, written between 1984 and 1998, that describe the S language as it evolved and as it is now. He has designed it, implemented it, and improved it in all its phases. Being part of the R core team, he is the author of the methods package, part of every R installation, providing the S4 approach to object orientation.

This book, Extending R, appeared as a volume in “The R Series”. The book is organized in four parts:

  1. Understanding R,
  2. Programming with R,
  3. Object-oriented programming, and
  4. Interfaces.

The first part starts with explaining three principles underlying R:

  • Everything that exists in R is an object
  • Everything that happens in R is a function call
  • Interfaces to other software are part of R.

These principles form the basis for parts II, III and IV. The first chapter introduces them. Chapter two, “Evolution”, describes the history of the S language, from its earliest days to today: the coming and going of S-Plus, the arrival of R and its current dominance. It also describes the evolution of functional S, and the evolution of object-oriented programming in S. Chapter 3, “R in action”, explains a number of basics of R, such as how function calls work, how objects are implemented, and how the R evaluator works.

Part II, “Programming with R”, discusses functions in depth, explains what objects are and how they are managed, and explains what extension packages do to the R environment. It discusses small, medium and large programming exercises, and what they demand.

Part III, “Object-oriented programming”, largely focuses on the difference between functional object oriented programming (as implemented in S4) and encapsulated object oriented programming as implemented in reference classes (similar to C++ and java), and shows examples for which purpose each paradigm is most useful.

Part IV, “Interfaces”, explains the potential and challenges of interfacing R with other programming languages. It discusses several of such interfaces, and describes a general framework for creating such interfaces. As instances of this framework it provides interfaces to the Python and Julia languages, and discusses the existing Rcpp framework.

For whom was this book written? It is clearly not an introductory text, nor a how-to or hands-on book for learning how to program R or write R packages; for those purposes it refers to the two volumes Advanced R and R packages, both written by Hadley Wickham. For those with a bit of experience in R programming and a general interest in the language, this book may give a number of new insights and a deeper, often evolutionarily motivated understanding.

Not surprisingly, the book also gives clear advice on how software development should take place: object-oriented, with formally defined classes (S4 or reference classes), and it argues why this is a good idea. One of these arguments is the ability to do method dispatch based on more than one argument. This requires all arguments to be evaluated, and so does not work well with non-standard evaluation. Many R packages currently promoted by Hadley Wickham and many others (“tidyverse”) often favor non-standard evaluation, and restrict themselves to S3. I think that both arguments have some merit, and would look forward to a good user study that compares the usability of the two approaches.
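A minimal illustration of the multiple-dispatch argument (my own example, not one from the book): with S4, the method is chosen based on the classes of both arguments, which S3 dispatch cannot do:

setClass("Meters",  slots = c(x = "numeric"))
setClass("Seconds", slots = c(x = "numeric"))
setGeneric("rate", function(a, b) standardGeneric("rate"))
setMethod("rate", signature("Meters", "Seconds"),
          function(a, b) a@x / b@x)   # dispatch looks at the classes of a *and* b
rate(new("Meters", x = 100), new("Seconds", x = 9.58))
## [1] 10.43841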

Measurement units in R now simplify

Tue, 08/16/2016 - 02:00

[view raw Rmd]

I wrote earlier about the units R package in this blog post. Last weekend I was happily surprised by two large pull requests (1, 2), from Thomas Mailund. He discusses his contribution in this blog.

Essentially, the pull requests enable

  • the handling and definition of user-defined units in R, and
  • automatic simplification of units
How it works

Units now have to be created explicitly, e.g. by

library(units)
m = make_unit("m")
s = make_unit("s")
(a = 1:10 * m/s)
## Units: m/s
##  [1]  1  2  3  4  5  6  7  8  9 10

The units of the udunits2 package are no longer loaded automatically; they are in a database (list) called ud_units, which is lazy-loaded, so after

rm("m", "s")

two clean solutions to use them are either

(a = 1:10 * ud_units$m / ud_units$s) ## Units: m/s ## [1] 1 2 3 4 5 6 7 8 9 10

or

(with(ud_units, a <- 1:10 * m / s)) ## Units: m/s ## [1] 1 2 3 4 5 6 7 8 9 10

and one much less clean solution is to first attach the whole database:

attach(ud_units)
## The following object is masked _by_ .GlobalEnv:
##
##     a
## The following object is masked from package:datasets:
##
##     npk
## The following objects are masked from package:base:
##
##     F, T
(a = 1:10 * m / s)
## Units: m/s
##  [1]  1  2  3  4  5  6  7  8  9 10

Simplification

Simplification not only works when identical units appear in both numerator and denominator:

a = 1:10 * m / s a * (10 * s) ## Units: m ## [1] 10 20 30 40 50 60 70 80 90 100

but also when the units in the numerator and denominator are convertible:

a = 1:10 * m / s
a * (10 * min)
## Units: m
##  [1]  600 1200 1800 2400 3000 3600 4200 4800 5400 6000
a / (0.1 * km)
## Units: 1/s
##  [1] 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 0.10

New units

New units can be created on the fly, and are simplified:

apple = make_unit("apple")
euro = make_unit("euro")
(nr = c(5, 10, 15) * apple)
## Units: apple
## [1]  5 10 15
(cost_per_piece = 0.57 * euro / apple)
## 0.57 euro/apple
(cost = nr * cost_per_piece)
## Units: euro
## [1] 2.85 5.70 8.55

Limitations

Two limitations of the current implementation are

  1. automatic conversion of user-implemented units into other user-defined units or to and from units in the ud_units database is not supported,
  2. non-integer powers are no (longer) supported.

Simple features for R, part 2

Mon, 07/18/2016 - 02:00

[view raw Rmd]

What happened so far?
  • in an earlier blog post I introduced the idea of having simple features mapped directly into simple R objects
  • an R Consortium ISC proposal to implement this got granted
  • during UseR! 2016 I presented this proposal (slides), which we followed up with an open discussion on future directions
  • first steps to implement this in the sf package have finished, and are described below

This blog post describes current progress.

Install & test

You can install package sf directly from github:

library(devtools) # maybe install first? install_github("edzer/sfr", ref = "16e205f54976bee75c72ac1b54f117868b6fafbc")

if you want to try out read.sf, which reads through GDAL 2.0, you also need my fork of the R package rgdal2, installed by

install_github("edzer/rgdal2")

this, obviously, requires that GDAL 2.0 or later is installed, along with development files.

After installing, a vignette contains some basic operations, and is shown by

library(sf)
vignette("basic")

How does it work?

Basic design ideas and constraints have been written in this document.

Simple features are one of the following 17 types: Point, LineString, Polygon, MultiPoint, MultiLineString, MultiPolygon, GeometryCollection, CircularString, CompoundCurve, CurvePolygon, MultiCurve, MultiSurface, Curve, Surface, PolyhedralSurface, TIN, and Triangle. Each type can have 2D points (XY), 3D points (XYZ), 2D points with measure (XYM) and 3D points with measure (XYZM). This leads to 17 x 4 = 68 combinations.

The first seven of these are most common, and have been implemented, allowing for XY, XYZ, XYM and XYZM geometries.

Simple feature instances: sfi

A single simple feature is created by calling the constructor function, along with a modifier in case a three-dimensional geometry has measure “M” as its third dimension:

library(sf) POINT(c(2,3)) ## [1] "POINT(2 3)" POINT(c(2,3,4)) ## [1] "POINT Z(2 3 4)" POINT(c(2,3,4), "M") ## [1] "POINT M(2 3 4)" POINT(c(2,3,4,5)) ## [1] "POINT ZM(2 3 4 5)"

what is printed is a well-known text representation of the object; the data itself is however stored as a regular R vector or matrix:

str(POINT(c(2,3,4), "M")) ## Classes 'POINT M', 'sfi' num [1:3] 2 3 4 str(LINESTRING(rbind(c(2,2), c(3,3), c(3,2)))) ## LINESTRING [1:3, 1:2] 2 3 3 2 3 2 ## - attr(*, "class")= chr [1:2] "LINESTRING" "sfi"

By using the two simple rules that

  1. sets of points are kept in a matrix
  2. other sets are kept in a list

we end up with the following structures, with increasing complexity:

Sets of points (matrix):

str(LINESTRING(rbind(c(2,2), c(3,3), c(3,2))))
## LINESTRING [1:3, 1:2] 2 3 3 2 3 2
## - attr(*, "class")= chr [1:2] "LINESTRING" "sfi"
str(MULTIPOINT(rbind(c(2,2), c(3,3), c(3,2))))
## MULTIPOINT [1:3, 1:2] 2 3 3 2 3 2
## - attr(*, "class")= chr [1:2] "MULTIPOINT" "sfi"

Sets of sets of points:

str(MULTILINESTRING(list(rbind(c(2,2), c(3,3), c(3,2)), rbind(c(2,1),c(0,0)))))
## List of 2
##  $ : num [1:3, 1:2] 2 3 3 2 3 2
##  $ : num [1:2, 1:2] 2 0 1 0
##  - attr(*, "class")= chr [1:2] "MULTILINESTRING" "sfi"
outer = matrix(c(0,0,10,0,10,10,0,10,0,0),ncol=2, byrow=TRUE)
hole1 = matrix(c(1,1,1,2,2,2,2,1,1,1),ncol=2, byrow=TRUE)
hole2 = matrix(c(5,5,5,6,6,6,6,5,5,5),ncol=2, byrow=TRUE)
str(POLYGON(list(outer, hole1, hole2)))
## List of 3
##  $ : num [1:5, 1:2] 0 10 10 0 0 0 0 10 10 0
##  $ : num [1:5, 1:2] 1 1 2 2 1 1 2 2 1 1
##  $ : num [1:5, 1:2] 5 5 6 6 5 5 6 6 5 5
##  - attr(*, "class")= chr [1:2] "POLYGON" "sfi"

Sets of sets of sets of points:

pol1 = list(outer, hole1, hole2)
pol2 = list(outer + 12, hole1 + 12)
pol3 = list(outer + 24)
mp = MULTIPOLYGON(list(pol1,pol2,pol3))
str(mp)
## List of 3
##  $ :List of 3
##   ..$ : num [1:5, 1:2] 0 10 10 0 0 0 0 10 10 0
##   ..$ : num [1:5, 1:2] 1 1 2 2 1 1 2 2 1 1
##   ..$ : num [1:5, 1:2] 5 5 6 6 5 5 6 6 5 5
##  $ :List of 2
##   ..$ : num [1:5, 1:2] 12 22 22 12 12 12 12 22 22 12
##   ..$ : num [1:5, 1:2] 13 13 14 14 13 13 14 14 13 13
##  $ :List of 1
##   ..$ : num [1:5, 1:2] 24 34 34 24 24 24 24 34 34 24
##  - attr(*, "class")= chr [1:2] "MULTIPOLYGON" "sfi"

Sets of sets of sets of sets of points:

str(GEOMETRYCOLLECTION(list(MULTIPOLYGON(list(pol1,pol2,pol3)), POINT(c(2,3)))))
## List of 2
##  $ :List of 3
##   ..$ :List of 3
##   .. ..$ : num [1:5, 1:2] 0 10 10 0 0 0 0 10 10 0
##   .. ..$ : num [1:5, 1:2] 1 1 2 2 1 1 2 2 1 1
##   .. ..$ : num [1:5, 1:2] 5 5 6 6 5 5 6 6 5 5
##   ..$ :List of 2
##   .. ..$ : num [1:5, 1:2] 12 22 22 12 12 12 12 22 22 12
##   .. ..$ : num [1:5, 1:2] 13 13 14 14 13 13 14 14 13 13
##   ..$ :List of 1
##   .. ..$ : num [1:5, 1:2] 24 34 34 24 24 24 24 34 34 24
##   ..- attr(*, "class")= chr [1:2] "MULTIPOLYGON" "sfi"
##  $ :Classes 'POINT', 'sfi' num [1:2] 2 3
##  - attr(*, "class")= chr [1:2] "GEOMETRYCOLLECTION" "sfi"

where this is of course a worst case: GEOMETRYCOLLECTION objects with simpler elements have less nesting.

Methods for sfi

The following methods have been implemented for sfi objects:

methods(class = "sfi")
## [1] as.WKT format print
## see '?methods' for accessing help and source code

Alternatives to this implementation
  1. Package rgdal2 reads point sets not into a matrix, but into a list with numeric vectors named x and y. This is closer to the GDAL (OGR) data model, and would allow for easier disambiguation of the third dimension (m or z) in case of three-dimensional points. It is more difficult to select a single point, and requires validation that the vector lengths are identical. I’m inclined to keep using matrix for point sets.
  2. Currently, POINT Z is of class c("POINT Z", "sfi"). An alternative would be to have it derive from POINT, i.e. give it class c("POINT Z", "POINT", "sfi"). This would make it easier to write methods for XYZ, XYM and XYZM geometries. This may be worth trying out.
Simple feature list columns: sfc

Collections of simple features can be added together into a list. If all elements of this list

  • are of identical type (have identical class), or are a mix of X and MULTIX (with X being one of POINT, LINESTRING or POLYGON)
  • have an identical coordinate reference system

then they can be combined in a sfc object. This object

  • converts, if needed, X into MULTIX (this is also what PostGIS does),
  • registers the coordinate reference system in attributes epsg and proj4string,
  • has the bounding box in attribute bbox, and updates it after subsetting
ls1 = LINESTRING(rbind(c(2,2), c(3,3), c(3,2))) ls2 = LINESTRING(rbind(c(5,5), c(4,1), c(1,2))) sfc = sfc(list(ls1, ls2), epsg = 4326) attributes(sfc) ## $class ## [1] "sfc" ## ## $type ## [1] "LINESTRING" ## ## $epsg ## [1] 4326 ## ## $bbox ## xmin xmax ymin ymax ## 1 5 1 5 ## ## $proj4string ## [1] "+init=epsg:4326 +proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0" attributes(sfc[1]) ## $class ## [1] "sfc" ## ## $type ## [1] "LINESTRING" ## ## $epsg ## [1] 4326 ## ## $bbox ## xmin xmax ymin ymax ## 2 3 2 3 ## ## $proj4string ## [1] "+init=epsg:4326 +proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0"

The following methods have been implemented for sfc simple feature list columns:

methods(class = "sfc")
## [1] bbox format [ summary
## see '?methods' for accessing help and source code

data.frames with simple features: sf

Typical spatial data contain attribute values and attribute geometries. When combined in a table, they can be converted into sf objects, e.g. by

roads = data.frame(widths = c(5, 4.5)) roads$geom = sfc roads.sf = sf(roads) roads.sf ## widths geom ## 1 5.0 LINESTRING(2 2, 3 3, 3 2) ## 2 4.5 LINESTRING(5 5, 4 1, 1 2) summary(roads.sf) ## widths geom ## Min. :4.500 LINESTRING :2 ## 1st Qu.:4.625 epsg:4326 :0 ## Median :4.750 +init=epsg...:0 ## Mean :4.750 ## 3rd Qu.:4.875 ## Max. :5.000 attributes(roads.sf) ## $names ## [1] "widths" "geom" ## ## $row.names ## [1] 1 2 ## ## $class ## [1] "sf" "data.frame" ## ## $sf_column ## geom ## 2 ## ## $relation_to_geometry ## widths ## <NA> ## Levels: field lattice entity

here, attribute relation_to_geometry allows documenting how attributes relate to the geometry: are they constant (field), aggregated over the geometry (lattice), or do they identify individual entities (buildings, parcels etc.)?

The following methods have been implemented for sf objects:

methods(class = "sf")
## [1] geometry
## see '?methods' for accessing help and source code

Coercion to and from sp

Points, MultiPoints, Lines, MultiLines, Polygons and MultiPolygons can be converted between sf and sp, both ways. A round trip is demonstrated by:

df = data.frame(a=1)
df$geom = sfc(list(mp))
sf = sf(df)
library(methods)
a = as(sf, "Spatial")
class(a)
## [1] "SpatialPolygonsDataFrame"
## attr(,"package")
## [1] "sp"
b = as.sf(a)
all.equal(sf, b) # round-trip sf-sp-sf
## [1] TRUE
a2 = as(a, "SpatialPolygonsDataFrame")
all.equal(a, a2) # round-trip sp-sf-sp
## [1] TRUE

Reading through GDAL

Function read.sf works, if rgdal2 is installed (see above), and reads simple features through GDAL:

(s = read.sf(system.file("shapes/", package="maptools"), "sids"))[1:5,] ## AREA PERIMETER CNTY_ CNTY_ID NAME FIPS FIPSNO CRESS_ID BIR74 ## 0 0.114 1.442 1825 1825 Ashe 37009 37009 5 1091 ## 1 0.061 1.231 1827 1827 Alleghany 37005 37005 3 487 ## 2 0.143 1.63 1828 1828 Surry 37171 37171 86 3188 ## 3 0.07 2.968 1831 1831 Currituck 37053 37053 27 508 ## 4 0.153 2.206 1832 1832 Northampton 37131 37131 66 1421 ## SID74 NWBIR74 BIR79 SID79 NWBIR79 geom ## 0 1 10 1364 0 19 MULTIPOLYGON(((-81.47275543212 ... ## 1 0 10 542 3 12 MULTIPOLYGON(((-81.23989105224 ... ## 2 5 208 3616 6 260 MULTIPOLYGON(((-80.45634460449 ... ## 3 1 123 830 2 145 MULTIPOLYGON(((-76.00897216796 ... ## 4 9 1066 1606 3 1197 MULTIPOLYGON(((-77.21766662597 ... summary(s) ## AREA PERIMETER CNTY_ CNTY_ID NAME ## 0.118 : 4 1.307 : 2 1825 : 1 1825 : 1 Alamance : 1 ## 0.091 : 3 1.601 : 2 1827 : 1 1827 : 1 Alexander: 1 ## 0.143 : 3 1.68 : 2 1828 : 1 1828 : 1 Alleghany: 1 ## 0.07 : 2 1.791 : 2 1831 : 1 1831 : 1 Anson : 1 ## 0.078 : 2 0.999 : 1 1832 : 1 1832 : 1 Ashe : 1 ## 0.08 : 2 1 : 1 1833 : 1 1833 : 1 Avery : 1 ## (Other):84 (Other):90 (Other):94 (Other):94 (Other) :94 ## FIPS FIPSNO CRESS_ID BIR74 SID74 ## 37001 : 1 37001 : 1 1 : 1 1027 : 1 0 :13 ## 37003 : 1 37003 : 1 10 : 1 1035 : 1 4 :13 ## 37005 : 1 37005 : 1 100 : 1 1091 : 1 1 :11 ## 37007 : 1 37007 : 1 11 : 1 11158 : 1 5 :11 ## 37009 : 1 37009 : 1 12 : 1 1143 : 1 2 : 8 ## 37011 : 1 37011 : 1 13 : 1 1173 : 1 3 : 6 ## (Other):94 (Other):94 (Other):94 (Other):94 (Other):38 ## NWBIR74 BIR79 SID79 NWBIR79 geom ## 736 : 3 10432 : 1 2 :10 1161 : 2 MULTIPOLYGON:100 ## 1 : 2 1059 : 1 0 : 9 5 : 2 epsg:NA : 0 ## 10 : 2 1141 : 1 1 : 9 10 : 1 ## 1243 : 2 11455 : 1 4 : 9 1023 : 1 ## 134 : 2 1157 : 1 5 : 9 1033 : 1 ## 930 : 2 1173 : 1 3 : 6 104 : 1 ## (Other):87 (Other):94 (Other):48 (Other):92

This also shows the abbreviation of long geometries when printed or summarized, provided by the format methods.

The following works for me, with PostGIS installed and data loaded:

(s = read.sf("PG:dbname=postgis", "meuse2"))[1:5,]
##   zinc                 geom
## 1 1022 POINT(181072 333611)
## 2 1141 POINT(181025 333558)
## 3  640 POINT(181165 333537)
## 4  257 POINT(181298 333484)
## 5  269 POINT(181307 333330)
summary(s)
##       zinc                   geom
##  Min.   : 113.0   POINT        :155
##  1st Qu.: 198.0   epsg:NA      :  0
##  Median : 326.0   +proj=ster...:  0
##  Mean   : 469.7
##  3rd Qu.: 674.5
##  Max.   :1839.0

Still to do/to be decided

The following issues need to be decided upon:

  • reproject sf objects through rgdal2? support well-known-text for CRS? or use PROJ.4 directly?
  • when subsetting attributes from an sf objects, make geometry sticky (like sp does), or drop geometry and return data.frame (data.frame behaviour)?

The following things still need to be done:

  • write simple features through GDAL (using rgdal2)
  • using gdal geometry functions in rgdal2
  • extend rgdal2 to also read XYZ, XYM, and XYZM geometries - my feeling is that this will be easier than modifying rgdal
  • reprojection of sf objects
  • link to GEOS, using GEOS functions: GDAL with GEOS enabled (and rgdal2) has some of this, but not for instance rgeos::gRelate
  • develop better and more complete test cases; also check the OGC test suite
  • improve documentation, add tutorial (vignettes, paper)
  • add plot functions (base, grid)
  • explore direct WKB - sf conversion, without GDAL
  • explore how meaningfulness of operations can be verified when for attributes their relation_to_geometry has been specified

Please let me know if you have any comments, suggestions or questions!