This is an **archive** of **R-sig-geo**, not a forum. It is not possible to post through Nabble: you may not start a new thread nor follow up an existing thread. If you wish to post but are not subscribed to the list, subscribe first **(through the list homepage, not via Nabble)** and post from your subscribed email address. Until 2015-06-20, subscribers could post through Nabble, but the policy was changed because too many non-subscribers misunderstood the interface.

### Re: dnearneigh() from spdep: Points with the exact same location are not considered neighbours.

Do not post HTML-mail, only plain text. Your example is not reproducible

because you used HTML-mail.

Please read the help file: the lower bound is exclusive (greater than) and

the upper bound is inclusive (less than or equal to). Since the

distance between identical points is strictly zero, they are not

neighbours because the distance must be > d1 and <= d2. If d1 is < 0, it

is reset to 0, as it is assumed that a negative lower bound is a user

error (and it would break the underlying compiled code).

In any case, no reasonable cross-sectional spatial process has duplicated

point (nugget) observations in situations in which spatial weights would

be used (spatio-temporal panels will have, but then time differs).
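If duplicated point observations do turn up in cross-sectional data, they can be located before building weights; a hedged base-R sketch, with the coordinates invented for illustration:

```r
# Find rows sharing the exact same coordinates (example data invented here)
coords <- cbind(c(13.667029, 13.667029, 13.667028),
                c(42.772396, 42.772396, 42.772396))
key <- paste(coords[, 1], coords[, 2])              # one key per coordinate pair
group_size <- ave(seq_len(nrow(coords)), key, FUN = length)
which(group_size > 1)  # rows 1 and 2 occupy the same location
```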

Hope this clarifies,

Roger

On Wed, 12 Apr 2017, Maël Le Noc via R-sig-Geo wrote:

> Dear List

>

> As I was working on a project, I realized that when I use dnearneigh

> from spdep, two (or more) points that have the exact same coordinates

> are not considered neighbours and thus are not linked (even when the

> lower bound is put to 0 or even to -1). See below for an example.

> (However this does not happen if the parameter longlat is set to false)

>

> Does the function behave the same way for you? Am I missing something?

> Is this an expected behavior? And if so, is there a way to change that?

>

> In the example below, points 1 and 2 are not connected to each other/are

> not neighbours (as you can see, since they both have only one link, to 3),

> even though they have the exact same coordinates (and are thus less than

> 25km apart), while point 3 is connected to both point 1 and 2.

> If I want to assess autocorrelation using, for instance joincount.test,

> this is then an issue...

>

>> library(data.table)
>> library(spdep)
>> pointstable <- data.table(XCoord = c(13.667029, 13.667029, 13.667028),
>>                           YCoord = c(42.772396, 42.772396, 42.772396))

>> print(pointstable)
>       XCoord    YCoord

> 1: 13.667029 42.772396

> 2: 13.667029 42.772396

> 3: 13.667028 42.772396

>> coords <- cbind(pointstable$XCoord, pointstable$YCoord)
>> nbLocal <- dnearneigh(coords, d1 = 0, d2 = 25, longlat = TRUE)
>> nbLocal <- dnearneigh(coords, d1 = -1, d2 = 25, longlat = TRUE) # both lines produce the same output

>> summary(nbLocal)
> Neighbour list object:

> Number of regions: 3

> Number of nonzero links: 4

> Percentage nonzero weights: 44.44444

> Average number of links: 1.333333

> Link number distribution:

>

> 1 2

> 2 1

> 2 least connected regions:

> 1 2 with 1 link

> 1 most connected region:

> 3 with 2 links


> Thanks

> Maël

>

>

> [[alternative HTML version deleted]]

>

> _______________________________________________

> R-sig-Geo mailing list

> [hidden email]

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

--

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

Editor-in-Chief of The R Journal, https://journal.r-project.org/index.html

http://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: variance-covariance matrix for GMerrorsar

Is this the link to the package?

https://r-forge.r-project.org/R/?group_id=182

Chelsea

On Tue, Apr 11, 2017 at 10:59 PM, Qiuhua Ma <[hidden email]> wrote:

> Thanks for your quick reply. You are right. Marginal wtp should take into

> account rho for spatial lag model.

>

> I still would like to use GMerrorsar. Can you please send me the source

> package?

>

> Best,

>

> Chelsea

>

> On Tue, Apr 11, 2017 at 7:54 AM, Roger Bivand <[hidden email]> wrote:

>

>> On Tue, 11 Apr 2017, Qiuhua Ma wrote:

>>

>> Dear list,

>>>

>>> I want to use bootstrapping to derive confidence intervals for marginal

>>> wtp after GMerrorsar command.

>>>

>>> It works for stsls since covariance matrix is directly available.

>>> However,

>>> I cannot find covariance matrix for GMerrorsar.

>>>

>>> For example, the following code works for stsls:

>>>

>>> model1.beta <- coef(model1)

>>>

>>> model1.vcov <- summary(model1)$var

>>>

>>> model1.sim <- rmultnorm(10000, mu = model1.beta, vmat = model1.vcov)

>>>

>>> model1.mwtp <- model1.sim * Pbar

>>>

>>> model1.ci <- apply(model1.mwtp, 2, quantile, c(0.025, 0.975))

>>>

>>

>> The DGP for this model is (I - \rho W)^{-1} (X \beta + e), so I'm in great

>> doubt about whether your proposal is correct (model1.vcov has one more

>> row and column than X has columns, so including \rho); the first element of

>> model1.beta is \rho.

>>

>>

>>>

>>> when I apply the same code for GMerrorsar:

>>>

>>>

>>> model2.beta <- coef(model2)

>>>

>>> model2.vcov <- summary(model2)$var

>>>

>>>

>>> model2.vcov

>>>>

>>>

>>> NULL

>>>

>>>

>>> How can I obtain covariance matrix for GMerrorsar?

>>>

>> Reading the code, you'll see where the matrices occur. Running under

>> debug, you can assign them outside the environment of the function if you

>> like (use <<-). I've added a vcov component to the returned object (source

>> on R-Forge, I can send a source package or a Windows binary package).

>>

>> You should also look at sphet::spreg, which does return a var component.

>> Please note that you should think of the DGP first and foremost, the coef

>> and var may return the values for what you are treating as nuisance parts

>> of the model. Getting the distribution of the willingness to pay also

>> probably involves them and their variability.

>>

>> Have you considered getting the WTP marginal from a Bayesian approach?

>>

>> Hope this helps,

>>

>> Roger

>>

>>

>>

>>> Chelsea

>>>

>>> [[alternative HTML version deleted]]

>>>

>>> _______________________________________________

>>> R-sig-Geo mailing list

>>> [hidden email]

>>> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>>>

>>>

>> --

>> Roger Bivand

>> Department of Economics, Norwegian School of Economics,

>> Helleveien 30, N-5045 Bergen, Norway.

>> voice: +47 55 95 93 55; e-mail: [hidden email]

>> Editor-in-Chief of The R Journal, https://journal.r-project.org/

>> index.html

>> http://orcid.org/0000-0003-2392-6140

>> https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

>>

>

>

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: variance-covariance matrix for GMerrorsar

Thanks for your quick reply. You are right. Marginal wtp should take into

account rho for spatial lag model.

I still would like to use GMerrorsar. Can you please send me the source

package?

Best,

Chelsea

On Tue, Apr 11, 2017 at 7:54 AM, Roger Bivand <[hidden email]> wrote:

> On Tue, 11 Apr 2017, Qiuhua Ma wrote:

>

> Dear list,

>>

>> I want to use bootstrapping to derive confidence intervals for marginal

>> wtp after GMerrorsar command.

>>

>> It works for stsls since covariance matrix is directly available. However,

>> I cannot find covariance matrix for GMerrorsar.

>>

>> For example, the following code works for stsls:

>>

>> model1.beta <- coef(model1)

>>

>> model1.vcov <- summary(model1)$var

>>

>> model1.sim <- rmultnorm(10000, mu = model1.beta, vmat = model1.vcov)

>>

>> model1.mwtp <- model1.sim * Pbar

>>

>> model1.ci <- apply(model1.mwtp, 2, quantile, c(0.025, 0.975))

>>

>

> The DGP for this model is (I - \rho W)^{-1} (X \beta + e), so I'm in great

> doubt about whether your proposal is correct (model1.vcov has one more

> row and column than X has columns, so including \rho); the first element of

> model1.beta is \rho.

>

>

>>

>> when I apply the same code for GMerrorsar:

>>

>>

>> model2.beta <- coef(model2)

>>

>> model2.vcov <- summary(model2)$var

>>

>>

>> model2.vcov

>>>

>>

>> NULL

>>

>>

>> How can I obtain covariance matrix for GMerrorsar?

>>

> Reading the code, you'll see where the matrices occur. Running under

> debug, you can assign them outside the environment of the function if you

> like (use <<-). I've added a vcov component to the returned object (source

> on R-Forge, I can send a source package or a Windows binary package).

>

> You should also look at sphet::spreg, which does return a var component.

> Please note that you should think of the DGP first and foremost, the coef

> and var may return the values for what you are treating as nuisance parts

> of the model. Getting the distribution of the willingness to pay also

> probably involves them and their variability.

>

> Have you considered getting the WTP marginal from a Bayesian approach?

>

> Hope this helps,

>

> Roger

>

>

>

>> Chelsea

>>

>> [[alternative HTML version deleted]]

>>

>> _______________________________________________

>> R-sig-Geo mailing list

>> [hidden email]

>> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>>

>>

> --

> Roger Bivand

> Department of Economics, Norwegian School of Economics,

> Helleveien 30, N-5045 Bergen, Norway.

> voice: +47 55 95 93 55; e-mail: [hidden email]

> Editor-in-Chief of The R Journal, https://journal.r-project.org/index.html

> http://orcid.org/0000-0003-2392-6140

> https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

>

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### dnearneigh() from spdep: Points with the exact same location are not considered neighbours.

Dear List

As I was working on a project, I realized that when I use dnearneigh

from spdep, two (or more) points that have the exact same coordinates

are not considered neighbours and thus are not linked (even when the

lower bound is put to 0 or even to -1). See below for an example.

(However this does not happen if the parameter longlat is set to false)

Does the function behave the same way for you? Am I missing something?

Is this an expected behavior? And if so, is there a way to change that?

In the example below, points 1 and 2 are not connected to each other/are

not neighbours (as you can see, since they both have only one link, to 3),

even though they have the exact same coordinates (and are thus less than

25km apart), while point 3 is connected to both point 1 and 2.

If I want to assess autocorrelation using, for instance joincount.test,

this is then an issue...

> library(data.table)
> library(spdep)
> pointstable <- data.table(XCoord = c(13.667029, 13.667029, 13.667028),
>                           YCoord = c(42.772396, 42.772396, 42.772396))

> print(pointstable)
      XCoord    YCoord

1: 13.667029 42.772396

2: 13.667029 42.772396

3: 13.667028 42.772396

> coords <- cbind(pointstable$XCoord, pointstable$YCoord)
> nbLocal <- dnearneigh(coords, d1 = 0, d2 = 25, longlat = TRUE)
> nbLocal <- dnearneigh(coords, d1 = -1, d2 = 25, longlat = TRUE) # both lines produce the same output

> summary(nbLocal)
Neighbour list object:

Number of regions: 3

Number of nonzero links: 4

Percentage nonzero weights: 44.44444

Average number of links: 1.333333

Link number distribution:

1 2

2 1

2 least connected regions:

1 2 with 1 link

1 most connected region:

3 with 2 links


Thanks

Maël

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### bug in plotting rasterLayer?

Hi,

I found that when I try to plot 3 rasters on top of each other, the third

raster plot causes the plot device coordinates to shift to the right.

I can reproduce this error easily:

library(raster)

f <- system.file("external/test.grd", package="raster")

r <- raster(f)

plot(r)

plot(r, add=TRUE)

plot(r, add=TRUE)

plot(r, add=TRUE)

For me, the 3rd plot with add=TRUE causes a coordinate system shift.

I am running R v3.3.3 with raster package v2.5-8.

Are others finding the same thing?

--

Pascal Title

PhD candidate | Rabosky Lab <http://www.raboskylab.org/>

Dept of Ecology and Evolutionary Biology

University of Michigan

[hidden email]

pascaltitle.weebly.com

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### best way to find contiguous chunks in a sparse space-time matrix

Hi, I have a space-time sparse grid layout where columns are locations and rows are time. In general, a location is missing data for some of its times. I need to subset the layout such that I can represent contiguous blocks of time, wherein for each block of time (rows), all of the selected locations have data. The tradeoff is that I would like to minimize the number of blocks needed to span the total range in time, while also keeping as many locations as possible for any given block. When done, the resulting selection might look like this, where X are non-selected space-time, and the numbers denote selected space-time, 1 . . . k where k is the number of blocks to span the total time range of interest.

1 1 1 X 1 X 1 1 X

1 1 1 X 1 X 1 1 X

1 1 1 X 1 X 1 1 X

X 2 2 X X 2 2 2 2

X 2 2 X X 2 2 2 2

X 2 2 X X 2 2 2 2

X 2 2 X X 2 2 2 2

X 2 2 X X 2 2 2 2

3 3 X 3 3 3 3 X 3

3 3 X 3 3 3 3 X 3

3 3 X 3 3 3 3 X 3

3 3 X 3 3 3 3 X 3

. . .

For starters, I am content to ignore spatial relationships between the locations represented in the columns. Later, it would be wonderful if I could consider spatial proximity such that a location that has missing values is more likely to be left out of selection if there are nearby locations that have data and can "cover" for the problematic location.
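One crude starting point, ignoring spatial structure as suggested, is to fix the block boundaries in time and keep, within each block, only the locations with complete data. A hedged base-R sketch on an invented availability matrix (rows = times, columns = locations):

```r
# TRUE means the location has data at that time (invented example)
avail <- matrix(TRUE, nrow = 6, ncol = 4)
avail[1:3, 2] <- FALSE   # location 2 missing throughout block 1
avail[4:6, 4] <- FALSE   # location 4 missing throughout block 2
block_id <- rep(1:2, each = 3)   # two fixed three-row time blocks
# For each block, keep the columns that are complete over all of its rows
blocks <- lapply(split(seq_len(nrow(avail)), block_id), function(rows) {
  which(colSums(avail[rows, , drop = FALSE]) == length(rows))
})
blocks  # block "1" keeps locations 1, 3, 4; block "2" keeps 1, 2, 3
```

Choosing the block boundaries themselves, to trade off the number of blocks against retained locations, is the harder combinatorial part; a greedy scan that extends a block as long as enough columns stay complete would be a natural next step.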

If anyone has some suggestions on approaches to solve this problem, I would greatly appreciate it.

Thank you,

Scott Waichler

Pacific Northwest National Laboratory

Richland, WA USA

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: variance-covariance matrix for GMerrorsar

On Tue, 11 Apr 2017, Qiuhua Ma wrote:

> Dear list,

>

> I want to use bootstrapping to derive confidence intervals for marginal

> wtp after GMerrorsar command.

>

> It works for stsls since covariance matrix is directly available. However,

> I cannot find covariance matrix for GMerrorsar.

>

> For example, the following code works for stsls:

>

> model1.beta <- coef(model1)

>

> model1.vcov <- summary(model1)$var

>

> model1.sim <- rmultnorm(10000, mu = model1.beta, vmat = model1.vcov)

>

> model1.mwtp <- model1.sim * Pbar

>

> model1.ci <- apply(model1.mwtp, 2, quantile, c(0.025, 0.975))

The DGP for this model is (I - \rho W)^{-1} (X \beta + e), so I'm in great

doubt about whether your proposal is correct (model1.vcov has one more

row and column than X has columns, so including \rho); the first element

of model1.beta is \rho.
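The simulation step itself (drawing coefficient vectors from N(beta, V) and taking quantiles) can be sketched in base R without rmultnorm; beta and V below are invented placeholders, not output from a fitted model:

```r
set.seed(42)
beta <- c(2.0, -0.5)                          # placeholder coefficient vector
V <- matrix(c(0.04, 0.01, 0.01, 0.09), 2, 2)  # placeholder covariance matrix
n <- 10000
# draws ~ N(beta, V): standard normals times the Cholesky factor, shifted by beta
draws <- sweep(matrix(rnorm(n * length(beta)), n) %*% chol(V), 2, beta, "+")
ci <- apply(draws, 2, quantile, c(0.025, 0.975))  # 2.5% / 97.5% per coefficient
```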

>

>

> when I apply the same code for GMerrorsar:

>

>

> model2.beta <- coef(model2)

>

> model2.vcov <- summary(model2)$var

>

>

>> model2.vcov

>

> NULL

>

>

> How can I obtain covariance matrix for GMerrorsar?

Reading the code, you'll see where the matrices occur. Running under

debug, you can assign them outside the environment of the function if you

like (use <<-). I've added a vcov component to the returned object

(source on R-Forge, I can send a source package or a Windows binary

package).

You should also look at sphet::spreg, which does return a var component.

Please note that you should think of the DGP first and foremost, the coef

and var may return the values for what you are treating as nuisance parts

of the model. Getting the distribution of the willingness to pay also

probably involves them and their variability.

Have you considered getting the WTP marginal from a Bayesian approach?

Hope this helps,

Roger

>

> Chelsea

>

> [[alternative HTML version deleted]]

>

> _______________________________________________

> R-sig-Geo mailing list

> [hidden email]

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>

--

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

Editor-in-Chief of The R Journal, https://journal.r-project.org/index.html

http://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


> Dear list,

>

> I want to use bootstrapping to derive confidence intervals for marginal

> wtp after GMerrorsar command.

>

> It works for stsls since covariance matrix is directly available. However,

> I cannot find covariance matrix for GMerrorsar.

>

> For example, the following code works for stsls:

>

> model1.beta <- coef(model1)

>

> model1.vcov <- summary(model1)$var

>

> model1.sim <- rmultnorm(10000, mu = model1.beta, vmat = model1.vcov)

>

> model1.mwtp <- model1.sim * Pbar

>

> model1.ci <- apply(model1.mwtp, 2, quantile, c(0.025, 0.975))

The DGP for this model is (I - \rho W)^{-1} (X \beta + e), so I'm in geat

doubt about whether your proposal is correct (model1.vcov is has one more

row and column than X has columns, so including \rho); the first element

of model1.beta is \rho.

>

>

> when I apply the same code for GMerrorsar:

>

>

> model2.beta <- coef(model2)

>

> model2.vcov <- summary(model2)$var

>

>

>> model2.vcov

>

> NULL

>

>

> How can I obtain covariance matrix for GMerrorsar?

> Reading the code, you'll see where the matrices occur. Running under

debug, you can assign the outside the environment of the function if you

like (use <<- ). I've added a vcov component in the returned object

(source on R-Forge, I can send a source package or a Windows binary

package).

You should also look at sphet::spreg, which does return a var component.

Please note that you should think of the DGP first and foremost; the coef

and var may return the values for what you are treating as nuisance parts

of the model. Getting the distribution of the willingness to pay also

probably involves them and their variability.

Have you considered getting the WTP marginal from a Bayesian approach?

Hope this helps,

Roger

>

> Chelsea

>

> [[alternative HTML version deleted]]

>

> _______________________________________________

> R-sig-Geo mailing list

> [hidden email]

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>

--

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

Editor-in-Chief of The R Journal, https://journal.r-project.org/index.html

http://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo

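Roger's advice above can be sketched in R. This is a minimal illustration only: MASS::mvrnorm stands in for the rmultnorm() used in the original post, and fit.beta, fit.vcov and Pbar are placeholder values rather than output from an actual stsls or GMerrorsar fit (sphet::spreg, which does return a var component, would supply the real inputs).

```r
## Parametric-bootstrap sketch for marginal-WTP confidence intervals,
## given a coefficient vector and its covariance matrix.
library(MASS)

set.seed(42)
fit.beta <- c(intercept = 1.2, price = -0.8)   # placeholder estimates
fit.vcov <- diag(c(0.04, 0.01))                # placeholder covariance matrix
Pbar     <- 10                                 # mean price, as in the post

sims <- mvrnorm(10000, mu = fit.beta, Sigma = fit.vcov)  # simulate coefficients
mwtp <- sims * Pbar                                      # marginal WTP draws
apply(mwtp, 2, quantile, c(0.025, 0.975))                # 95% percentile intervals
```

As Roger notes, whether this is valid depends on the DGP: in a lag model the first element of the coefficient vector is \rho, which should not be multiplied through as if it were a marginal effect.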

### Re: GEOSTAT Summer School for PhD students | 13-19 August 2017 | MEDILS Split

MEMO: This is to remind you that registration for the GEOSTAT Summer

School, 13-19 August 2017 | MEDILS Split, will close on:

- 15th of April 2017

To register for GEOSTAT visit:

https://geostat-course.org/2017

PS: Current number of registrations: 44 (this summer school is limited

to 55 participants).

T. Hengl

http://www.wageningenur.nl/en/Persons/dr.-T-Tom-Hengl.htm

On 06/03/17 19:58, Tomislav Hengl wrote:

>

> The GEOSTAT Split 2017 Summer School will be held this year at the

> Mediterranean Institute for Life Sciences (MEDILS) Split, Croatia in

> the period 13-19 August 2017. MEDILS is an international centre of

> excellence for molecular biology and is located in a nature park in

> close proximity to the historic town of Split. It has in-house

> accommodation and numerous facilities including its own park and sport

> facilities.

>

> The registrations are now open. To register for this Summer School

> please visit:

>

> https://geostat-course.org/2017

>

> This summer school is limited to 55 participants. In the case of a

> higher number of registrations, applicants will be ranked based on (1)

> their contributions to FOSS, (2) academic output, (3) distance to

> the venue and (4) time of registration. The registration fees are

> usually around 550 EUR (an invoice will be sent after registration).

> Registration fees cover costs of using facilities, lunch and coffee

> breaks and costs of travel and accommodation for lecturers.

> Participants from ODA countries (employed by an organization or

> company in ODA-listed country) typically receive a subsidized price

> for the accommodation and registration costs. GEOSTAT makes no profit.

> All lecturers are volunteers. None of the lecturers receives any

> honorarium payment or is contracted by the local organizers.

>

> This year's topics include:

> - Application of R+OSGeo tools in: spatio-temporal monitoring,

> geostatistical mapping, point pattern analysis, epidemiology...

> - Machine Learning methods for spatial and spatio-temporal data,

> - QGIS and RQGIS tutorials,

> - RMarkdown/Sweave tutorials,

> - Visualization of spatio-temporal data,

>

> Lecturers:

> - Barry Rowlingson (Lancaster Medical School, Lancaster University),

> - Virgilio Gomez Rubio (Deparment of Mathematics, Universidad de

> Castilla-La Mancha),

> - Jannes Muenchow (Department of GIScience, University of Jena),

> - Daniel Nüst (Institute for Geoinformatics, University of Münster)

> - Tomislav Hengl (ISRIC - World Soil Information),

>

> Registration deadline is:

>

> - April 15th 2017

>

> hope to see you in Split,

>

> T. Hengl

> http://www.wageningenur.nl/en/Persons/dr.-T-Tom-Hengl.htm

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### variance-covariance matrix for GMerrorsar

Dear list,

I want to use bootstrapping to derive confidence intervals for marginal

WTP after the GMerrorsar command.

It works for stsls since the covariance matrix is directly available. However,

I cannot find the covariance matrix for GMerrorsar.

For example, the following code works for stsls:

model1.beta <- coef(model1)

model1.vcov <- summary(model1)$var

model1.sim <- rmultnorm(10000, mu = model1.beta, vmat = model1.vcov)

model1.mwtp <- model1.sim * Pbar

model1.ci <- apply(model1.mwtp, 2, quantile, c(0.025, 0.975))

when I apply the same code for GMerrorsar:

model2.beta <- coef(model2)

model2.vcov <- summary(model2)$var

> model2.vcov

NULL

How can I obtain covariance matrix for GMerrorsar?

Chelsea

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: Ordinary co-kriging with gstat - computing time/syntax error?

You need to set nmax for each variable, as a parameter of the gstat()

function call. I don't see nmax mentioned anywhere in the documentation

of gstat::predict.

On 10/04/17 18:16, Mercedes Román wrote:

> Dear R-sig-Geo members,

>

> I want to perform co-kriging on the residuals of two regression models with

> gstat. I am following some tutorials (Rossiter, 2012; Malone et al., 2017),

> but I may have a mistake in the script. The computing time seems too long

> (I know it takes a lot of calculation time, but in four days it never

> finished), even when I set nmax=20. Originally, I wanted to perform a test

> on ~ 11000 points, but I am trying now with a subset of 200 points. My

> training dataset has ~ 27000 observations. I would like to produce a

> reproducible example, but perhaps just the syntax of the script may be

> enough to give a clue to my problem. My computer has an Intel® Xeon® CPU

> E5-2609 v2 processor and 32 GB of RAM.

>

> Let’s say my two response variables are X1 and X2, and the dataframes with

> the coordinates and covariates are “training” and “test”.

>

> ### Calculate the predictions on the training dataset (predictors is a

> dataframe with the covariates)

> training$pred.X1 <- predict(modelX1, newdata = predictors)

> training$pred.X2 <- predict(modelX2, newdata = predictors)

> ### Calculate residuals

> training$res.X1 <- training$X1 - training$pred.X1

> training$res.X2 <- training$X2 - training$pred.X2

> ### Transform to spatial

> training.sp <- training

> coordinates(training.sp) <- ~ x + y

> proj4string(training.sp) <- CRS("+init=epsg:2154")

>

> ### Define gstat object and compute experimental semivariograms for X1 and

> X2 (for visual examination)

> X1.g <- gstat(formula = res.X1~1, data = training.sp)

> X1.svar <- variogram(X1.g, width = 5000, cutoff=250000)

> X1.ivgm <- vgm(nugget=0.56, range=14229, psill=0.65, kappa=10, model="Mat")

> ## initial variogram model

> X1.vgm <- fit.variogram(X1.svar, model=X1.ivgm, fit.method=7)

> plot(X1.svar, X1.vgm)

>

> X2.g <- gstat(formula = res.X2~1, data = training.sp)

> X2.svar <- variogram(X2.g, width = 5000, cutoff=250000)

> X2.ivgm <- vgm(nugget=0.19, range=13954, psill=0.21,kappa=10, model="Mat")

> X2.vgm <- fit.variogram(X2.svar, model=X2.ivgm, fit.method=7)

> plot(X2.svar, X2.vgm)

>

> ### Define gstat object to compute the direct variograms and covariogram

> g <- gstat( NULL, id="res.X1", formula = res.X1~1, data = training.sp)

> g <- gstat( g, id="res.X2", formula = res.X2~1, data = training.sp)

> v.cross <- variogram(g, width=5000, cutoff = 250000)

> #### Fill parameters

> g <- gstat(g, id = "res.X2", model = X2.vgm, fill.all=T)

> ### fit LMCR

> g <- fit.lmc(v.cross, g)

> ### Predict at the test locations (test.sp only indicates the coordinates)

> test.sp <- test

> coordinates(test.sp) <- ~ x + y

> proj4string(test.sp) <- CRS("+init=epsg:2154")

> k.c_residuals <- predict(g, test.sp, nmax=20)

>

> I am wondering if there is something missing in the predict command… Do I

> not need to include the training data in it?

> I have been reading previous posts from R-sig-Geo, but I still did not find

> the solution. I would appreciate any advice you could give me.

>

> Thanks,

>

> Mercedes Roman

>

> INRA

>

> Unité de Service InfoSol

>

> 2163, avenue de la Pomme de Pin

>

> CS 40001 Ardon

>

> 45075 ORLEANS cedex 2

>

> France

>

> [[alternative HTML version deleted]]

>

> _______________________________________________

> R-sig-Geo mailing list

> [hidden email]

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

> --

Edzer Pebesma

Institute for Geoinformatics (ifgi), University of Münster

Heisenbergstraße 2, 48149 Münster, Germany; +49 251 83 33081

Journal of Statistical Software: http://www.jstatsoft.org/

Computers & Geosciences: http://elsevier.com/locate/cageo/

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo
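Edzer's point can be illustrated by rearranging the poster's own script. A sketch only: training.sp, X2.vgm, v.cross and test.sp are assumed to exist exactly as defined in the question.

```r
## nmax is a parameter of gstat(), not of predict(): set it per variable
## when building the gstat object, so that cokriging prediction uses local
## neighbourhoods of at most 20 observations instead of the full ~27000.
g <- gstat(NULL, id = "res.X1", formula = res.X1 ~ 1,
           data = training.sp, nmax = 20)
g <- gstat(g, id = "res.X2", formula = res.X2 ~ 1,
           data = training.sp, nmax = 20)
g <- gstat(g, id = "res.X2", model = X2.vgm, fill.all = TRUE)
g <- fit.lmc(v.cross, g)

k.c_residuals <- predict(g, test.sp)   # nmax is already stored in g
```

The training data do not need to be passed to predict(); they are carried inside the gstat object g.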


### Ordinary co-kriging with gstat - computing time/syntax error?

Dear R-sig-Geo members,

I want to perform co-kriging on the residuals of two regression models with

gstat. I am following some tutorials (Rossiter, 2012; Malone et al., 2017),

but I may have a mistake in the script. The computing time seems too long

(I know it takes a lot of calculation time, but in four days it never

finished), even when I set nmax=20. Originally, I wanted to perform a test

on ~ 11000 points, but I am trying now with a subset of 200 points. My

training dataset has ~ 27000 observations. I would like to produce a

reproducible example, but perhaps just the syntax of the script may be

enough to give a clue to my problem. My computer has an Intel® Xeon® CPU

E5-2609 v2 processor and 32 GB of RAM.

Let’s say my two response variables are X1 and X2, and the dataframes with

the coordinates and covariates are “training” and “test”.

### Calculate the predictions on the training dataset (predictors is a

dataframe with the covariates)

training$pred.X1 <- predict(modelX1, newdata = predictors)

training$pred.X2 <- predict(modelX2, newdata = predictors)

### Calculate residuals

training$res.X1 <- training$X1 - training$pred.X1

training$res.X2 <- training$X2 - training$pred.X2

### Transform to spatial

training.sp <- training

coordinates(training.sp) <- ~ x + y

proj4string(training.sp) <- CRS("+init=epsg:2154")

### Define gstat object and compute experimental semivariograms for X1 and

X2 (for visual examination)

X1.g <- gstat(formula = res.X1~1, data = training.sp)

X1.svar <- variogram(X1.g, width = 5000, cutoff=250000)

X1.ivgm <- vgm(nugget=0.56, range=14229, psill=0.65, kappa=10, model="Mat")

## initial variogram model

X1.vgm <- fit.variogram(X1.svar, model=X1.ivgm, fit.method=7)

plot(X1.svar, X1.vgm)

X2.g <- gstat(formula = res.X2~1, data = training.sp)

X2.svar <- variogram(X2.g, width = 5000, cutoff=250000)

X2.ivgm <- vgm(nugget=0.19, range=13954, psill=0.21,kappa=10, model="Mat")

X2.vgm <- fit.variogram(X2.svar, model=X2.ivgm, fit.method=7)

plot(X2.svar, X2.vgm)

### Define gstat object to compute the direct variograms and covariogram

g <- gstat( NULL, id="res.X1", formula = res.X1~1, data = training.sp)

g <- gstat( g, id="res.X2", formula = res.X2~1, data = training.sp)

v.cross <- variogram(g, width=5000, cutoff = 250000)

#### Fill parameters

g <- gstat(g, id = "res.X2", model = X2.vgm, fill.all=T)

### fit LMCR

g <- fit.lmc(v.cross, g)

### Predict at the test locations (test.sp only indicates the coordinates)

test.sp <- test

coordinates(test.sp) <- ~ x + y

proj4string(test.sp) <- CRS("+init=epsg:2154")

k.c_residuals <- predict(g, test.sp, nmax=20)

I am wondering if there is something missing in the predict command… Do I

not need to include the training data in it?

I have been reading previous posts from R-sig-Geo, but I still did not find

the solution. I would appreciate any advice you could give me.

Thanks,

Mercedes Roman

INRA

Unité de Service InfoSol

2163, avenue de la Pomme de Pin

CS 40001 Ardon

45075 ORLEANS cedex 2

France

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Question niche based model with R

You might check out the dismo package and the vignette it has on species distribution modeling.

https://cran.r-project.org/web/packages/dismo/index.html

Tim

Date: Mon, 10 Apr 2017 09:12:54 +0000 (UTC)

From: Soufianou Abou <[hidden email]>

To: "[hidden email]" <[hidden email]>

Subject: [R-sig-Geo] Question niche based model whith R

Message-ID: <[hidden email]>

Content-Type: text/plain; charset="UTF-8"

Hello dears, I work on the modelling of the ecological niche of striga. I have presence data and bioclimatic variables and other variables. Indeed, my question is: how should I proceed to model the ecological niche with the R software? Awaiting your reply, please accept my best regards. SADDA Abou-Soufianou -------------------------------------- Doctorant, Université Dan Dicko Dankoulodo de Maradi, 120 avenue Maman Koraou, ADS, Maradi, BP 465 Maradi/Niger. E-mail: [hidden email]; tel: (+227)96269987

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo

### SDM course in the beautiful city of Evora, Portugal

Dear List,

This is to inform you that the 4th edition of the PhD/Postdoc course

“Species distributions models: concepts, methods, applications, and

challenges” is organised by Prof. Miguel Araújo and Dr. Babak Naimi at the

University of Evora, Portugal, between the 22nd and 28th of May, 2017.

The course aims to introduce the fundamental concepts underpinning

ecological niche models (ENM), describing some of the most prominent

methods currently in use, and discussing the strengths and limitations of

these models for different applications. The course gives equal weight to

theory and application. The students will have the opportunity to learn how

to run ENM with the new R package sdm, an object-oriented,

reproducible and extensible R platform for species distribution modelling

(http://www.biogeoinformatics.org/) developed by the

organisers of the course.

For more details about the course and the registration, check the following

website:

http://www.maraujolab.com/2017-species-distributions-modelling-course/

You can also download the paper introducing the sdm R package from the

following link:

http://onlinelibrary.wiley.com/doi/10.1111/ecog.01881/full

Please feel free to forward this message to anyone who may be interested.

Best regards,

Babak Naimi

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: Question niche based model with R

There are several packages available in R to do ecological niche modelling.

One easy-to-use option would be the sdm package. It is a comprehensive

modelling and simulation framework for fitting species distribution models

that supports several algorithms and is user-friendly. You can find a

tutorial for a quick start at: http://biogeoinformatics.org

Good luck,

Babak

>

> Date: Mon, 10 Apr 2017 09:12:54 +0000 (UTC)

> From: Soufianou Abou <[hidden email]>

> To: "[hidden email]" <[hidden email]>

> Subject: [R-sig-Geo] Question niche based model whith R

> Message-ID: <[hidden email]>

> Content-Type: text/plain; charset="UTF-8"

>

> Hello dears, I work on the modelling of the ecological niche of striga. I

> have presence data and bioclimatic variables and other variables. Indeed,

> my question is: how should I proceed to model the ecological niche with

> the R software? Awaiting your reply, please accept my best regards.

> SADDA Abou-Soufianou

> --------------------------------------

> Doctorant, Université Dan Dicko Dankoulodo de Maradi, 120 avenue Maman

> Koraou, ADS, Maradi, BP 465 Maradi/Niger. E-mail: [hidden email]; tel:

> (+227)96269987

> [[alternative HTML version deleted]]

>

>

>

>

>

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: Question niche based model with R

Hi Abou,

it is quite a broad question... a simple and easy answer would be "you

can just model your data.frame with the ?lm or ?glm functions", but you would

lose many of the interesting things!! Then, in my opinion, first of all

you must decide between "presence-only" and "presence-absence" models

(my suggestion is for the second, and in your case you must learn how

to generate pseudo-absences or background points). So, in my

experience, there are two main packages which are the most interesting

and suited for this purpose and which may help you: dismo and biomod2

(https://cran.r-project.org/web/packages/dismo/dismo.pdf -

https://cran.r-project.org/web/packages/biomod2/biomod2.pdf). There

are also many tutorials which can help you, for instance

https://cran.r-project.org/web/packages/dismo/vignettes/sdm.pdf

Then once you will handle the subject you will also be able to create

your own SDM using the algorithm you prefer (e.g. RandomForest,

MaxEnt, GLM, MARS, GAM and so on...) and without any additional

package. Anyway, my advice is to use one of the two above cited

packages because they are object-oriented and optimized for Ecological

Modelling.

Just to cover a possible next question (which model is the best

solution): you will see that it is impossible to define the BEST method,

one that performs correctly in all circumstances. Many researchers

are used to working with a single algorithm (for example Generalized

Linear Models, Maximum Entropy, Generalized Additive Models, Neural

Networks, Random Forests, etc.) while others are more willing to work

with ensemble projections and consensus models (i.e. mean or weighted

mean of more than one algorithm). Some examples of single-model and

ensemble model are here:

http://dx.doi.org/10.5424/fs/2016253-09476

http://dx.doi.org/10.1111/gcb.12604

http://dx.doi.org/10.1007/s13595-014-0439-4

http://dx.doi.org/10.1111/gcb.12476
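The consensus idea mentioned above can be sketched in a few lines (the rasters `p_glm` and `p_maxent` and the AUC weights are hypothetical, standing in for projections from two fitted models, both on a 0-1 suitability scale):

```r
## Sketch only: 'p_glm' and 'p_maxent' are hypothetical prediction rasters,
## e.g. from predict() on two different fitted SDMs.
library(raster)

## Unweighted consensus: the simple mean of the two projections
ens_mean <- mean(p_glm, p_maxent)

## Weighted consensus: weight each model by an evaluation score such as AUC
w <- c(0.80, 0.90)   # hypothetical AUC values for the two models
ens_wmean <- (w[1] * p_glm + w[2] * p_maxent) / sum(w)
```

Weighting by a skill score downweights the weaker model, which is the usual motivation for a weighted rather than a plain mean.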

Cheers,

--

Maurizio Marchi

Skype ID: maurizioxyz

linux user 552742

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Question niche based model whith R

Hello dears,

I work on modelling the ecological niche of Striga. I have presence data and bioclimatic variables, as well as other environmental variables. My question is: how should I proceed to model the ecological niche with R?

Pending receipt of your reply, please accept my best regards.

SADDA Abou-Soufianou
--------------------------------------
Doctorant, Université Dan Dicko Dankoulodo de Maradi,
120 avenue Maman Koraou, ADS, Maradi
BP 465 Maradi / Niger
E-mail: [hidden email]
Tel: (+227)96269987


_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Cancelled: CALL

"CALL" has been cancelled.
When: Saturday, 15 April 2017, 12:00 PM to 12:30 PM, (GMT) Greenwich Mean Time - Dublin / Edinburgh / Lisbon / London
Message: CALL FOR ABSTRACTS CLOSES ON 15th OF APRIL. This event invitation was sent from Yahoo Calendar.

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo
