This is an **archive** for **R-sig-geo**, not a forum. It is not possible to post through Nabble: you may not start a new thread nor follow up an existing thread. If you wish to post but are not subscribed to the list, subscribe first **(through the list homepage, not via Nabble)** and post from your subscribed email address. Until 2015-06-20 subscribers could post through Nabble, but the policy was changed because too many non-subscribers misunderstood the interface.

### FOSS4G UK Conference

The UK edition of the leading open source geospatial software conference is

happening in Edinburgh in September:

https://uk.osgeo.org/foss4guk2019/

There are a few talks with R, but also plenty of opportunity to expand your

geospatial knowledge beyond R.

Generous sponsorship has meant we have kept the price low - a two-day

conference (incl lunch and refreshments) for £80.

I've been organising the workshop sessions and am also giving a talk on

integration of R and QGIS.

Hope to see some of you there.

Barry


_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo

### Re: Calculating impact measures for spatial panel models and trouble specifying SDEM model using spml

Dear Roger and all,

In the end I used lm() to estimate my non-spatial FE models in order to extract the log likelihood values (the results are exactly the same as when using the plm() function).

I also used this method to estimate the SLX model. I just used the slag() function on each explanatory variable I wanted to have a spatial lag for, added it to my dataframe, and then estimated using lm() so that I could extract the log likelihood. I also managed to run my SDEM model with the spml() function without using slag() within the model.

I just have one question: according to the splm package documentation, there should be an extractable logLik value when I build a model using the spml() function.

However, when I do summary(spatial.SEM)$logLik, the result is NULL. When I do logLik(spatial.SEM), the error message is:

Error in UseMethod("logLik") :

no applicable method for 'logLik' applied to an object of class "splm"

Since spml() uses maximum likelihood estimation, why can't I extract the log likelihood value?
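One hedged way to chase this down is to inspect the fitted object directly; ML fits often store the maximised likelihood in a component that logLik() does not know about (the slot name below is an assumption to verify against the output of names()):

```r
# List the components of the fitted splm object and look for the stored
# likelihood value by eye.
names(spatial.SEM)
str(spatial.SEM, max.level = 1)
spatial.SEM$logLik   # hypothetical slot name; confirm it appears in names() first
```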

Any help would be greatly appreciated!

Best wishes,

Letisha

________________________________

From: Letisha Sarah Fong Rui Zhen <[hidden email]>

Sent: Thursday, July 25, 2019 5:16:43 PM

To: [hidden email] <[hidden email]>

Cc: [hidden email] <[hidden email]>

Subject: Re: [R-sig-Geo] Calculating impact measures for spatial panel models and trouble specifying SDEM model using spml

Dear Roger,

Thank you for your quick response.

I have uploaded the spatial weights matrix and sample dataset I'm working with here: https://drive.google.com/drive/folders/1NjCODKEix-_nA5CfiIos6uiKAUbGp_BZ?usp=sharing

Reading the data in and transforming them into a pdataframe and listw, respectively:

spatialweight <- read.csv("spatialweight.csv", header = T)

row.names(spatialweight) <- spatialweight$X

spatialweight <- spatialweight[, -1]

spatialweight.mat <- as.matrix(spatialweight)

mylistw <- mat2listw(spatialweight.mat, style = "M")

mydata <- read.csv("sampledata.csv", header = T)

mydata <- pdata.frame(mydata, index = c("Country", "Year"))

I first ran a non-spatial model to determine the best specification for fixed effects:

nonspatial.pooledOLS <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "pooling")

nonspatial.individualFE <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "within", effect = "individual")

nonspatial.timeFE <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "within", effect = "time")

nonspatial.twowayFE <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "within", effect = "twoways")

I would like to compare these models based on log likelihood and AIC, but the plm() function does not appear to provide a log likelihood or AIC. I have read through the JSS plm article and it states that models made with the plm() function are "estimated using the lm function to the transformed data". I'm aware that we can use logLik() and AIC() for a model estimated with the lm() function; however, they don't seem to work with the plm() function.

For example, I did logLik(nonspatial.twowayFE) and AIC(nonspatial.twowayFE) but the error message for both is:

Error in UseMethod("logLik") :

no applicable method for 'logLik' applied to an object of class "c('plm', 'panelmodel')"

Please let me know if I'm calling the wrong function(s) and/or if you're aware of a way to compare these models based on log likelihood and/or AIC.
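As a workaround (a sketch, not part of the plm API): a within estimator is OLS on the demeaned data, so the concentrated Gaussian log likelihood can be computed directly from the residuals. The helper name loglik_plm is made up here, and the degrees-of-freedom bookkeeping for AIC with fixed effects needs care:

```r
# Concentrated Gaussian log likelihood from residuals; works for any fit
# with a residuals() method (lm, plm, ...).
loglik_plm <- function(fit) {
  e <- residuals(fit)
  n <- length(e)
  -n / 2 * (log(2 * pi) + log(sum(e^2) / n) + 1)
}

loglik_plm(nonspatial.twowayFE)
# AIC then follows as -2 * loglik_plm(fit) + 2 * k, where k counts the
# estimated parameters (including the fixed effects swept out by demeaning).
```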

For the spatial models, here is my code:

spatial.SDEM <- spml(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI + slag(lnGDP, listw = mylistw) + slag(lnGDP2, listw = mylistw) + slag(lnEI, listw = mylistw), data = mydata, listw = mylistw, model = "within", effect = "twoways", lag = F, spatial.error = "b")

spatial.SEM <- spml(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, listw = mylistw, model = "within", effect = "individual", lag = F, spatial.error = "b")

spatial.SLX <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI + slag(lnGDP, listw = mylistw) + slag(lnGDP2, listw = mylistw) + slag(lnEI, listw = mylistw), data = mydata, model = "within", effect = "individual")

As in my original post, the SLX and SEM models ran OK but the error when I try to run the SDEM model is:

Error in UseMethod("slag") :

no applicable method for 'slag' applied to an object of class "c('double', 'numeric')"

The variables that I use the slag() function on are all numeric, so I don't know what's wrong. I seem to be able to use slag() with plm() but not with spml(), and I don't know why this is so.

I need to compare the models to see if SDEM can be reduced to one of its nested forms. As was the case with the non-spatial models, I can't get the log likelihood for models created with the plm() function, so any suggestions are welcome. I've already read through the JSS articles for splm and plm as well as both packages' documentation and there's no information on this (except that models built with the plm() function are estimated using the lm function to the transformed data).

Thanks for clarifying the impact measures for SDEM and SLX. Just to check - when you say linear combination for standard errors do you mean e.g. beta1*se + theta1*se = totalse (where beta1 is the coefficient of the direct impact and theta1 is the coefficient of the indirect impact)?

Thank you for your help!

Best wishes,

Sarah

________________________________

From: Roger Bivand <[hidden email]>

Sent: Wednesday, July 24, 2019 9:50:13 PM

To: Letisha Sarah Fong Rui Zhen <[hidden email]>

Cc: [hidden email] <[hidden email]>

Subject: Re: [R-sig-Geo] Calculating impact measures for spatial panel models and trouble specifying SDEM model using spml

On Wed, 24 Jul 2019, Letisha Sarah Fong Rui Zhen wrote:

> Dear all,

Please do not post HTML-formatted messages.

>

> I'm working with panel data of 9 countries and 18 years and I'm

> running fixed effects SDEM, SLX and SEM with the splm package.

>

> I have three questions:

>

> 1. I can't seem to get the SDEM model to run. My code for each of the

> 3 models is:

>

> model.SDEM <- spml(ln(Y) ~ ln(GDP) + (ln(GDP))^2 + EI + RE +

> slag(ln(GDP), listw = my.listw) + slag((ln(GDP))^2, listw = my.listw),

> data = my.data, listw = my.listw, model = "within", effect =

> "individual", lag = F, spatial.error = "b")

>

> model.SLX <- plm(ln(Y) ~ ln(GDP) + (ln(GDP))^2 + EI + RE + slag(ln(GDP),

> listw = my.listw) + slag((ln(GDP))^2, listw = my.listw), data = my.data,

> model = "within", effect = "individual")

>

> model.SEM <- spml(ln(Y) ~ ln(GDP) + (ln(GDP))^2 + EI + RE, data =

> my.data, listw = my.listw, model = "within", effect =

> "individual", lag = F, spatial.error = "b")

>

> I am able to run both SLX and SEM models without problem, but when I try

> to run the SDEM model, the error message is:

>

> Error in UseMethod("slag") :

> no applicable method for 'slag' applied to an object of class

> "c('double', 'numeric')"

>

> I don't understand what is wrong here, as I have no problems with the

> slag() function in the SLX model. My data is a pdataframe and each

> variable is a numeric pseries.

My guess would be that you should protect your square with I() in general,

but have no idea - this is not a reproducible example.
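Concretely, protecting the square with I() would look like the following (a sketch using the poster's variable names and not verified against the data; note also that R's natural logarithm is log(), not ln()):

```r
# I() forces ^ to be interpreted as literal squaring inside the formula,
# so slag() receives an ordinary numeric vector rather than a formula term.
model.SDEM <- spml(
  log(Y) ~ log(GDP) + I(log(GDP)^2) + EI + RE +
    slag(log(GDP), listw = my.listw) + slag(I(log(GDP)^2), listw = my.listw),
  data = my.data, listw = my.listw,
  model = "within", effect = "individual",
  lag = FALSE, spatial.error = "b"
)
```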

>

>

> 2. How can I calculate impact measures (direct, indirect and total) for

> spatial panel models?

>

> The impacts() function in spdep doesn't work anymore and the impacts()

> function from the spatialreg package seems to work only for

> cross-sectional data and not panel data.

>

> For example, I ran:

>

> spatialreg::impacts(model.SLX)

>

> And the error message is:

>

> Error in UseMethod("impacts", obj) :

> no applicable method for 'impacts' applied to an object of class

> "c('plm', 'panelmodel')"

>

> I have tried methods(impacts) but none of the functions seem to work for

> my SLX model created with the splm package.

But your SLX model is created with the plm package, isn't it? The only use

of splm is for the manual lags with slag()?

>

> I also looked at some previous examples in the splm documentation and

> more specifically the spml() function and the example provided

> (specifically the impact measures) doesn't work anymore:

>

> data(Produc, package = "plm")

> data(usaww)

> fm <- log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp

> ## random effects panel with spatial lag

> respatlag <- spml(fm, data = Produc, listw = mat2listw(usaww),

> model="random", spatial.error="none", lag=TRUE)

> summary(respatlag)

> ## calculate impact measures

> impac1 <- impacts(respatlag, listw = mat2listw(usaww, style = "W"), time = 17)

> summary(impac1, zstats=TRUE, short=TRUE)

>

The implemented impacts methods in splm apply to the case where the lagged

response is included. For SDEM and SLX, you can get the impacts by

addition for the total impacts, and by linear combination for their

standard errors. This is not implemented in a method. Further, slag() does

not give any impacts method any information on which variables have been

lagged - in your illustration above EI and RE are not lagged.
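For a single variable this works out as below (a sketch; the coefficient names are assumptions to check against coef(model.SLX), and the standard error of the sum uses the covariance term, not a sum of the two standard errors):

```r
b <- coef(model.SLX)
V <- vcov(model.SLX)
i <- "log(GDP)"                          # direct term (assumed name)
j <- "slag(log(GDP), listw = my.listw)"  # spatially lagged term (assumed name)

total    <- b[i] + b[j]                            # total impact
total_se <- sqrt(V[i, i] + V[j, j] + 2 * V[i, j])  # SE of the linear combination
```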

> The error message when I run the impacts() function is:

>

> Error in UseMethod("impacts", obj) :

> no applicable method for 'impacts' applied to an object of class "splm"

>

> My question is therefore, how do I go about calculating direct, indirect

> and total impact measures for spatial panel data?

>

>

> 3. How can I test if the SDEM model can be simplified to the SLX model

> (since I estimate the SDEM by maximum likelihood (spml function) and

> the SLX by ordinary linear regression (plm function))? From my

> understanding the plm() function does not compute a loglikelihood or

> AIC so I probably can't do a likelihood ratio test to choose

> between models (I haven't tried this out because I'm stuck at

> running the SDEM model).

Do you know definitely that plm does not provide a log likelihood? I

realise that it isn't OLS unless pooled. Have you reviewed the JSS plm and

splm articles?

Roger

>

> Any help or advice would be greatly appreciated. Thank you.

>

> Best wishes,

> Sarah

>

>

>


>


>

>

--

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

https://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en


### Re: Simple Ripley's CSR test for marked point patterns

On 25/07/19 12:50 PM, ASANTOS wrote:

> Thanks for your help Marcelino and for the careful explanation Rolf Turner,

> and so sorry about my post script configuration; I expected that I had

> solved that in my new post.

>

> First, my variable area is a mark (attribute or auxiliary

> information about my point process - page 7) and not a spatial covariate

> (an effect on the outcome over my experimental area - page 50). Based on this

> information, my hypothesis is that the *sizes of ant nests*, as a cause of

> ecological intraspecific competition for resources (such as food and

> territory), *have different patterns of spatial distribution*. For this:

>

> #Packages

> require(spatstat)

> require(sp)

>

> # Create some points that represents ant nests

>

> xp<-c(371278.588,371250.722,371272.618,371328.421,371349.974,

> 371311.95,371296.265,371406.46,371411.551,371329.041,371338.081,

> 371334.182,371333.756,371299.818,371254.374,371193.673,371172.836,

> 371173.803,371153.73,371165.051,371140.417,371168.279,371166.367,

> 371180.575,371132.664,371129.791,371232.919,371208.502,371289.462,

> 371207.595,371219.008,371139.921,371133.215,371061.467,371053.69,

> 371099.897,371108.782,371112.52,371114.241,371176.236,371159.185,

> 371159.291,371158.552,370978.252,371120.03,371116.993)

>

> yp<-c(8246507.94,8246493.176,8246465.974,8246464.413,8246403.465,

> 8246386.098,8246432.144,8246394.827,8246366.201,8246337.626,

> 8246311.125,8246300.039,8246299.594,8246298.072,8246379.351,

> 8246431.998,8246423.913,8246423.476,8246431.658,8246418.226,

> 8246400.161,8246396.891,8246394.225,8246400.391,8246370.244,

> 8246367.019,8246311.075,8246255.174,8246255.085,8246226.514,

> 8246215.847,8246337.316,8246330.197,8246311.197,8246304.183,

> 8246239.282,8246239.887,8246241.678,8246240.361,8246167.364,

> 8246171.581,8246171.803,8246169.807,8246293.57,8246183.194,8246189.926)

>

> #Now I have the size of each nest (marked process)

>

> area<-c(117,30,4,341,15,160,35,280,108,168,63,143,2,48,182,42,

> 88,56,27,156,288,45,49,234,72,270,91,40,304,56,35,4,56.7,9,4.6,

> 105,133,135,23.92,190,12.9,15.2,192.78,104,255,24)

>

> # Make a contour for the window creation

> W <- convexhull.xy(xp,yp)

>

>

> # Class of nests size - large, medium and small

> acat <- cut(area,breaks=c(-Inf,25,55,Inf),right=FALSE,

> labels=c("s","m","l"))

>

> #Create a ppp object

>

> syn.ppp <- ppp(x=xp,y=yp,window=W, marks=acat)

>

> # Test intensity hypothesis

>

> f1 <- ppm(syn.ppp) # Same (constant) intensity for each area category.

> f2 <- ppm(syn.ppp ~ marks) # Allows different (constant) intensity for

> each area category.

> anova(f1,f2,test="Chi")

>

> # 0.002015 ** OK, the hypothesis that the intensities are the same was

> rejected, but intensity is not the question.

>

> Based in my initial hypothesis, I've like to show envelopes and observed

> values of the use of some function for the three point patters (large,

> medium and small ant nest size)under CSR. And for this I try to:

>

> kS<-envelope(syn.ppp[syn.ppp$acat=="s"], nsim=99,fun=Kest)

>

> plot(kS,lwd=list(3,1,1,1), main="")

>

> kM<-envelope(syn.ppp[syn.ppp$acat=="m"], nsim=99,fun=Kest)

>

> plot(kM,lwd=list(3,1,1,1), main="")

>

> kL<-envelope(syn.ppp[syn.ppp$acat=="l"], nsim=99,fun=Kest)

>

> plot(kL,lwd=list(3,1,1,1), main="")

>

> But doesn't work yet. My approach now sounds correct?

>

> Thanks in advance,

The formatting of your latest email was a great improvement and is now

admirably legible.

I am still not completely clear what your research question is, or what

hypotheses you wish to test.

Let me preface my remarks by saying that I have so far gone along with

your inclination to convert the numeric marks (area) of your pattern to

categorical (small, medium, large) marks. I believe that this may be a

suboptimal approach. (As someone said in an old post to R-help, on an

entirely different topic, 'There is a reason that the speedometer in

your car doesn't just read "slow" and "fast".' :-) )

On the other hand there is not a lot on offer in the way of techniques

for handling numeric marks, so the "discretisation" approach may be the

best that one can do.

Now to proceed with trying to answer your question(s). You say:

> my hypothesis is that the sizes of ant nests, as a cause of ecological

> intraspecific competition for resources (such as food and territory),

> have different patterns of spatial distribution

That's a (rather vague) "*alternative*" hypothesis. The corresponding

null hypothesis is that the three categories have the *same* spatial

distribution. We reject that null hypothesis, since we reject the

hypothesis that they have the same intensities.

Now what?

You say:

> Based in my initial hypothesis, I've like to show envelopes and observed

> values of the use of some function for the three point patters (large,

> medium and small ant nest size) under CSR.

This is a bit incoherent and demonstrates somewhat confused thinking.

As the signature file of a frequent poster to R-help says "Tell me

what you want to do, not how you want to do it."

Perhaps the following will help to clarify. For a categorical marked

point pattern (which is what we are currently dealing with) there are

infinitely many possibilities for the joint distribution of the patterns

corresponding to the marks.

The simplest of these is CSR (which means that the marks are essentially

irrelevant). We have already rejected this hypothesis.

The next simplest is what is designated in [1; p. 584] as "CSRI"

(complete spatial randomness with independence). This means that each

of the patterns is "individually CSR" (Poisson process, constant

intensity) and that the patterns are independent.

After that, the possibilities explode astronomically --- different

(non-Poisson) structures for each of the patterns, different dependence

structures, .... The sky's the limit.

One "reasonably simple" possibility amongst this plethora of

possibilities is the multitype Strauss point process model, which could

be fitted (with some effort) using ppm() and profilepl().

However before we open that can of worms, we can test "CSRI":

fit <- ppm(syn.ppp ~ marks)

set.seed(42)

E <- envelope(fit,savefuns=TRUE)

dclf.test(E)

mad.test(E)

The dclf test gives a p-value of 0.14; the mad test gives a p-value of

0.18. Thus there appears to be no evidence against the null hypothesis

of CSRI.

Therefore "CSRI" would appear to be an adequate/plausible model for

these data.

There may be more that needs to be done, but this is a start.
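If CSRI had been rejected, the multitype Strauss fit mentioned above could be sketched roughly as follows (hedged: the radius grid is arbitrary, a single common radius for all pairs of the three mark levels simplifies the full radii matrix, and the call should be checked against the profilepl() help page):

```r
# Profile the pseudolikelihood over a common interaction radius R for the
# three mark levels "s", "m" and "l".
rgrid <- data.frame(R = seq(10, 50, by = 10))              # candidate radii, map units
msfun <- function(R) MultiStrauss(radii = matrix(R, 3, 3)) # same radius for all pairs
pfit  <- profilepl(rgrid, msfun, syn.ppp ~ marks)
best  <- as.ppm(pfit)   # the ppm fitted at the profile-optimal radius
summary(best)
```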

I would appreciate comments from Marcelino (and of course from Adrian

and Ege) especially if any of them notice something incorrect in my advice.

cheers,

Rolf

[1] Spatial Point Patterns: Methodology and Applications with R

1st Edition, Adrian Baddeley, Ege Rubak, Rolf Turner

Chapman and Hall/CRC, 2015

P. S. You said of your efforts to continue your analysis: "But doesn't

work yet." This is a very vague and unhelpful assertion! One thing

that is wrong with your efforts is that in, e.g.:

kS<-envelope(syn.ppp[syn.ppp$acat=="s"], nsim=99,fun=Kest)

the object syn.ppp has no component named "acat"!!! (You really need to

gain some understanding of how R works!)

You *could* say

kS <- envelope(syn.ppp[syn.ppp$marks=="s"])

or (better)

kS <- envelope(syn.ppp[marks(syn.ppp)=="s"])

to get the sort of result that you had hoped for. Note that specifying

the arguments nsim=99 and fun=Kest, while harmless, is redundant (and

therefore tends to obfuscate). These are the *default* values!!! Read

the help, and learn to use R efficiently!

R.

--

Honorary Research Fellow

Department of Statistics

University of Auckland

Phone: +64-9-373-7599 ext. 88276

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo

### Re: Calculating impact measures for spatial panel models and trouble specifying SDEM model using spml

Dear Roger,

Thank you for your quick response.

I have uploaded the spatial weights matrix and sample dataset I'm working with here: https://drive.google.com/drive/folders/1NjCODKEix-_nA5CfiIos6uiKAUbGp_BZ?usp=sharing

Reading the data in and transforming them into a pdataframe and listw, respectively:

spatialweight <- read.csv("spatialweight.csv", header = T)

row.names(spatialweight) <- spatialweight$X

spatialweight <- spatialweight[, -1]

spatialweight.mat <- as.matrix(spatialweight)

mylistw <- mat2listw(spatialweight.mat, style = "M")

mydata <- read.csv("sampledata.csv", header = T)

mydata <- pdata.frame(mydata, index = c("Country", "Year"))

I first ran a non-spatial model to determine the best specification for fixed effects:

nonspatial.pooledOLS <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "pooling")

nonspatial.individualFE <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "within", effect = "individual")

nonspatial.timeFE <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "within", effect = "time")

nonspatial.twowayFE <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "within", effect = "twoways")

I would like to compare these models based on log likelihood and AIC, but the plm() function does not appear to provide a log likelihood or AIC. I have read through the JSS plm article and it states that models made with the plm() function are "estimated using the lm function to the transformed data". I'm aware that we can use logLik() and AIC() for a model estimated with the lm() function. However it doesn't seem to work with the plm() function.

For example, I did logLik(nonspatial.twowayFE) and AIC(nonspatial.twowayFE) but the error message for both is:

Error in UseMethod("logLik") :

no applicable method for 'logLik' applied to an object of class "c('plm', 'panelmodel')"

Please let me know if I'm calling the wrong function(s) and/or if you're aware of a way to compare these models based on log likelihood and/or AIC.
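
In case it helps to show what I mean, here is how I understand a

concentrated Gaussian log-likelihood could be computed by hand from the

residual sum of squares of a least-squares fit (my own sketch, shown

with lm() since plm() objects have no logLik method; whether this is

appropriate for the within-transformed model is what I'd like to confirm):

```r
# logL = -n/2 * (log(2*pi) + log(RSS/n) + 1) for a Gaussian fit,
# evaluated at the maximum-likelihood variance estimate RSS/n.
# Demonstrated with lm(); residuals() of a panel fit could be
# plugged in the same way.
fit <- lm(dist ~ speed, data = cars)
n   <- length(residuals(fit))
rss <- sum(residuals(fit)^2)
ll  <- -n/2 * (log(2 * pi) + log(rss / n) + 1)
all.equal(as.numeric(logLik(fit)), ll)  # TRUE
```
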

For the spatial models, here is my code:

spatial.SDEM <- spml(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI + slag(lnGDP, listw = mylistw) + slag(lnGDP2, listw = mylistw) + slag(lnEI, listw = mylistw), data = mydata, listw = mylistw, model = "within", effect = "twoways", lag = F, spatial.error = "b")

spatial.SEM <- spml(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, listw = mylistw, model = "within", effect = "individual", lag = F, spatial.error = "b")

spatial.SLX <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI + slag(lnGDP, listw = mylistw) + slag(lnGDP2, listw = mylistw) + slag(lnEI, listw = mylistw), data = mydata, model = "within", effect = "individual")

As in my original post, the SLX and SEM models ran OK but the error when I try to run the SDEM model is:

Error in UseMethod("slag") :

no applicable method for 'slag' applied to an object of class "c('double', 'numeric')"

The variables that I use the slag() function on are all numeric, so I don't know what's wrong. I seem to be able to use slag() with plm() but not with spml(), but I don't know why this is so.

I need to compare the models to see if the SDEM can be reduced to one of its nested forms. As with the non-spatial models, I can't get the log likelihood for models created with the plm() function, so any suggestions are welcome. I've already read through the JSS articles for splm and plm as well as both sets of documentation and there's no information on this (except that models built with the plm() function are estimated using the lm function on the transformed data).

Thanks for clarifying the impact measures for SDEM and SLX. Just to check - when you say linear combination for standard errors do you mean e.g. beta1*se + theta1*se = totalse (where beta1 is the coefficient of the direct impact and theta1 is the coefficient of the indirect impact)?
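
Or, if you instead mean the variance of a sum, something like this

sketch (the 2 x 2 covariance matrix V is made up for illustration; in

practice it would come from vcov() of the fitted model)?

```r
# SE of the total impact beta1 + theta1: the square root of the
# variance of a sum, Var(beta1) + Var(theta1) + 2*Cov(beta1, theta1).
V <- matrix(c(0.04, 0.01,
              0.01, 0.09), nrow = 2,
            dimnames = list(c("beta1", "theta1"), c("beta1", "theta1")))
se_total <- sqrt(V["beta1", "beta1"] + V["theta1", "theta1"] +
                   2 * V["beta1", "theta1"])
se_total  # sqrt(0.15) ~ 0.387
```
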

Thank you for your help!

Best wishes,

Sarah

________________________________

From: Roger Bivand <[hidden email]>

Sent: Wednesday, July 24, 2019 9:50:13 PM

To: Letisha Sarah Fong Rui Zhen <[hidden email]>

Cc: [hidden email] <[hidden email]>

Subject: Re: [R-sig-Geo] Calculating impact measures for spatial panel models and trouble specifying SDEM model using spml

On Wed, 24 Jul 2019, Letisha Sarah Fong Rui Zhen wrote:

> Dear all,

Please do not post HTML-formatted messages.

>

> I'm working with panel data of 9 countries and 18 years and I'm

> running fixed effects SDEM, SLX and SEM with the splm package.

>

> I have three questions:

>

> 1. I can't seem to get the SDEM model to run. My code for each of the

> 3 models is:

>

> model.SDEM <- spml(ln(Y) ~ ln(GDP) + (ln(GDP))^2 + EI + RE +

> slag(ln(GDP), listw = my.listw) + slag((ln(GDP))^2, listw = my.listw),

> data = my.data, listw = my.listw, model = "within", effect =

> "individual", lag = F, spatial.error = "b")

>

> model.SLX <- plm(ln(Y) ~ ln(GDP) + (ln(GDP))^2 + EI + RE + slag(ln(GDP),

> listw = my.listw) + slag((ln(GDP))^2, listw = my.listw), data = my.data,

> model = "within", effect = "individual")

>

> model.SEM <- spml(ln(Y) ~ ln(GDP) + (ln(GDP))^2 + EI + RE, data =

> my.data, listw = my.listw, model = "within", effect =

> "individual", lag = F, spatial.error = "b")

>

> I am able to run both SLX and SEM models without problem, but when I try

> to run the SDEM model, the error message is:

>

> Error in UseMethod("slag") :

> no applicable method for 'slag' applied to an object of class

> "c('double', 'numeric')"

>

> I don't understand what is wrong here, as I have no problems with the

> slag() function in the SLX model. My data is a pdataframe and each

> variable is a numeric pseries.

My guess would be that you should protect your square with I() in general,

but have no idea - this is not a reproducible example.
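
To illustrate the I() point with a toy example (nothing to do with your

data or with slag() specifically):

```r
# In a model formula, ^ means factor crossing, so x^2 collapses to x;
# I(x^2) protects the arithmetic meaning and adds a quadratic term.
set.seed(1)
d <- data.frame(x = rnorm(20))
d$y <- 1 + 2 * d$x + 3 * d$x^2 + rnorm(20)
length(coef(lm(y ~ x + x^2,    data = d)))  # 2: the ^2 is silently dropped
length(coef(lm(y ~ x + I(x^2), data = d)))  # 3: intercept, x, x squared
```
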

>

>

> 2. How can I calculate impact measures (direct, indirect and total) for

> spatial panel models?

>

> The impacts() function in spdep doesn't work anymore and the impacts()

> function from the spatialreg package seems to work only for

> cross-sectional data and not panel data.

>

> For example, I ran:

>

> spatialreg::impacts(model.SLX)

>

> And the error message is:

>

> Error in UseMethod("impacts", obj) :

> no applicable method for 'impacts' applied to an object of class

> "c('plm', 'panelmodel')"

>

> I have tried methods(impacts) but none of the functions seem to work for

> my SLX model created with the splm package.

But your SLX model is created with the plm package, isn't it? The only use

of splm is for the manual lags with slag()?

>

> I also looked at some previous examples in the splm documentation and

> more specifically the spml() function and the example provided

> (specifically the impact measures) doesn't work anymore:

>

> data(Produc, package = "plm")

> data(usaww)

> fm <- log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp

> ## random effects panel with spatial lag

> respatlag <- spml(fm, data = Produc, listw = mat2listw(usaww),

> model="random", spatial.error="none", lag=TRUE)

> summary(respatlag) ## calculate impact measures impac1 <-

> impacts(respatlag, listw = mat2listw(usaww, style = "W"), time = 17)

> summary(impac1, zstats=TRUE, short=TRUE)

>

The implemented impacts methods in splm apply to the case where the lagged

response is included. For SDEM and SLX, you can get the impacts by

addition for the total impacts, and by linear combination for their

standard errors. This is not implemented in a method. Further, slag() does

not give any impacts method any information on which variables have been

lagged - in your illustration above EI and RE are not lagged.

> The error message when I run the impacts() function is:

>

> Error in UseMethod("impacts", obj) :

> no applicable method for 'impacts' applied to an object of class "splm"

>

> My question is therefore, how do I go about calculating direct, indirect

> and total impact measures for spatial panel data?

>

>

> 3. How can I test if the SDEM model can be simplified to the SLX model

> (since I estimate the SDEM by maximum likelihood (spml function) and

> the SLX by ordinary linear regression (plm function))? From my

> understanding the plm() function does not compute a loglikelihood or

> AIC so I probably can't do a likelihood ratio test to choose

> between models (I haven't tried this out because I'm stuck at

> running the SDEM model).

Do you know definitely that plm does not provide a log likelihood? I

realise that it isn't OLS unless pooled. Have you reviewed the JSS plm and

splm articles?

Roger

>

> Any help or advice would be greatly appreciated. Thank you.

>

> Best wishes,

> Sarah

>

>

>

> ________________________________

>

> Important: This email is confidential and may be privileged. If you are

> not the intended recipient, please delete it and notify us immediately;

> you should not copy or use it for any purpose, nor disclose its contents

> to any other person. Thank you.

>

> [[alternative HTML version deleted]]

>

>

--

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

https://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

________________________________

Important: This email is confidential and may be privileged. If you are not the intended recipient, please delete it and notify us immediately; you should not copy or use it for any purpose, nor disclose its contents to any other person. Thank you.

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: Simple Ripley's CSR test for marked point patterns

Thanks for your help, Marcelino, and for the careful explanation, Rolf

Turner, and sorry about my posting configuration; I hope that I have

fixed it in this new post.

First, my variable area is a mark (an attribute or auxiliary piece of

information about my point process - page 7), not a spatial covariate (an

effect on the outcome in my experimental area - page 50). Based on this

information, my hypothesis is that *ant nests of different sizes*, driven by

ecological intraspecific competition for resources (such as food and

territory), *have different patterns of spatial distribution*. For this:

#Packages

require(spatstat)

require(sp)

# Create some points that represent ant nests

xp<-c(371278.588,371250.722,371272.618,371328.421,371349.974,

371311.95,371296.265,371406.46,371411.551,371329.041,371338.081,

371334.182,371333.756,371299.818,371254.374,371193.673,371172.836,

371173.803,371153.73,371165.051,371140.417,371168.279,371166.367,

371180.575,371132.664,371129.791,371232.919,371208.502,371289.462,

371207.595,371219.008,371139.921,371133.215,371061.467,371053.69,

371099.897,371108.782,371112.52,371114.241,371176.236,371159.185,

371159.291,371158.552,370978.252,371120.03,371116.993)

yp<-c(8246507.94,8246493.176,8246465.974,8246464.413,8246403.465,

8246386.098,8246432.144,8246394.827,8246366.201,8246337.626,

8246311.125,8246300.039,8246299.594,8246298.072,8246379.351,

8246431.998,8246423.913,8246423.476,8246431.658,8246418.226,

8246400.161,8246396.891,8246394.225,8246400.391,8246370.244,

8246367.019,8246311.075,8246255.174,8246255.085,8246226.514,

8246215.847,8246337.316,8246330.197,8246311.197,8246304.183,

8246239.282,8246239.887,8246241.678,8246240.361,8246167.364,

8246171.581,8246171.803,8246169.807,8246293.57,8246183.194,8246189.926)

#Now I have the size of each nest (marked process)

area<-c(117,30,4,341,15,160,35,280,108,168,63,143,2,48,182,42,

88,56,27,156,288,45,49,234,72,270,91,40,304,56,35,4,56.7,9,4.6,

105,133,135,23.92,190,12.9,15.2,192.78,104,255,24)

# Make a contour for the window creation

W <- convexhull.xy(xp,yp)

# Class of nests size - large, medium and small

acat <- cut(area,breaks=c(-Inf,25,55,Inf),right=FALSE,

labels=c("s","m","l"))

#Create a ppp object

syn.ppp <- ppp(x=xp,y=yp,window=W, marks=acat)

# Test intensity hypothesis

f1 <- ppm(syn.ppp) # Same (constant) intensity for each area category.

f2 <- ppm(syn.ppp ~ marks) # Allows different (constant) intensity for

# each area category.

anova(f1,f2,test="Chi")

# 0.002015 ** OK, the hypothesis that the intensities are the same was

# rejected, but intensity is not the question.

Based on my initial hypothesis, I'd like to show envelopes and observed

values of some summary function for the three point patterns (large,

medium and small ant nest sizes) under CSR. For this I tried:

kS<-envelope(syn.ppp[syn.ppp$acat=="s"], nsim=99,fun=Kest)

plot(kS,lwd=list(3,1,1,1), main="")

kM<-envelope(syn.ppp[syn.ppp$acat=="m"], nsim=99,fun=Kest)

plot(kM,lwd=list(3,1,1,1), main="")

kL<-envelope(syn.ppp[syn.ppp$acat=="l"], nsim=99,fun=Kest)

plot(kL,lwd=list(3,1,1,1), main="")

But it doesn't work yet. Does my approach now sound correct?

Thanks in advance,

Alexandre

--

======================================================================

Alexandre dos Santos

Proteção Florestal

IFMT - Instituto Federal de Educação, Ciência e Tecnologia de Mato Grosso

Campus Cáceres

Caixa Postal 244

Avenida dos Ramires, s/n

Bairro: Distrito Industrial

Cáceres - MT CEP: 78.200-000

Fone: (+55) 65 99686-6970 (VIVO) (+55) 65 3221-2674 (FIXO)

[hidden email]

Lattes: http://lattes.cnpq.br/1360403201088680

OrcID: orcid.org/0000-0001-8232-6722

Researchgate: www.researchgate.net/profile/Alexandre_Santos10

LinkedIn: br.linkedin.com/in/alexandre-dos-santos-87961635

Mendeley:www.mendeley.com/profiles/alexandre-dos-santos6/

======================================================================

Em 23/07/2019 22:00, Rolf Turner escreveu:

>

> Thanks to Marcelino for chiming in on this. I should have responded

> earlier, but was "busy on a personal matter".

>

> To add to what Marcelino said:

>

> (1) Your post in HTML was horribly garbled and a struggle to read.

> *PLEASE* set your email client so that it does *NOT* post in HTML when

> posting to this list.

>

> (2) A very minor point: Your construction of "syn.ppp" was

> unnecessarily complicated and convoluted. You could simply do:

>

>    syn.ppp <- ppp(x=xp,y=yp,window=W,marks=area)

>

> (3) You appear to be confused as to the distinction between "marks"

> and "covariates". These are superficially similar but conceptually

> completely different. What you are dealing with is *marks*. There are

> no covariates involved. See [1], p. 147.

>

> (4) Your calls to envelope() are mal-formed; the expressions "area >=

> 0" and "area < 25" will be taken as the values of "nrank" and ... who

> knows what? Did you not notice the warning messages you got about

> there being something wrong with "nrank"?

>

> You are being hopelessly naïve in expecting envelope() to interpret

> "area >= 0" and "area < 25" in the way you want it to interpret them.

> The spatstat package does not read minds.

>

> Marcelino has shown you how to make a proper call.

>

> (5) Since you are interested in categorising "area" into groups,

> rather than being interested in the *numerical* value of area, you

> should do the categorisation explicitly:

>

> acat <- cut(area,breaks=c(-Inf,25,55,Inf),right=FALSE,

>             labels=c("s","m","l"))

> # right=FALSE since you want "area < 25" rather than

> # "area <= 25" --- although this makes no difference for the

> # area values actually used.

>

> syn.ppp <- ppp(x=xp,y=yp,marks=acat)

>

> (6) It is not clear to me what your "research question" is. Do you

> want to test whether the intensities differ between the area

> categories? Unless my thinking is hopelessly confused, this has

> nothing to do with (or at least does not require the use of) the

> envelope() function:

>

> f1 <- ppm(syn.ppp) # Same (constant) intensity for each area category.

> f2 <- ppm(syn.ppp ~ marks) # Allows different (constant) intensity

>                            # for each area category.

> anova(f1,f2,test="Chi")

>

> This test (not surprisingly) rejects the hypothesis that the

> intensities are the same; p-value = 0.0020.

>

> If this is not the hypothesis that you are interested in, please

> clarify your thinking and your question, and get back to us.

>

> cheers,

>

> Rolf Turner

>

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo

and so sorry about my post script configuration, I expected that I

solved that in my new post.

First my variable area is a marked point (attribute or auxiliary

information about my point process - page 7 and not a spatial covariate,

effect in the outcome of my experimental area - page 50). Based in this

information, my hypophyses is that the *size of ant nests* a cause of

ecological intraspecific competition for resources (such as food and

territory) *have different patterns of spatial distribution*, for this:

#Packages

require(spatstat)

require(sp)

# Create some points that represents ant nests

xp<-c(371278.588,371250.722,371272.618,371328.421,371349.974,

371311.95,371296.265,371406.46,371411.551,371329.041,371338.081,

371334.182,371333.756,371299.818,371254.374,371193.673,371172.836,

371173.803,371153.73,371165.051,371140.417,371168.279,371166.367,

371180.575,371132.664,371129.791,371232.919,371208.502,371289.462,

371207.595,371219.008,371139.921,371133.215,371061.467,371053.69,

371099.897,371108.782,371112.52,371114.241,371176.236,371159.185,

371159.291,371158.552,370978.252,371120.03,371116.993)

yp<-c(8246507.94,8246493.176,8246465.974,8246464.413,8246403.465,

8246386.098,8246432.144,8246394.827,8246366.201,8246337.626,

8246311.125,8246300.039,8246299.594,8246298.072,8246379.351,

8246431.998,8246423.913,8246423.476,8246431.658,8246418.226,

8246400.161,8246396.891,8246394.225,8246400.391,8246370.244,

8246367.019,8246311.075,8246255.174,8246255.085,8246226.514,

8246215.847,8246337.316,8246330.197,8246311.197,8246304.183,

8246239.282,8246239.887,8246241.678,8246240.361,8246167.364,

8246171.581,8246171.803,8246169.807,8246293.57,8246183.194,8246189.926)

#Now I have the size of each nest (marked process)

area<-c(117,30,4,341,15,160,35,280,108,168,63,143,2,48,182,42,

88,56,27,156,288,45,49,234,72,270,91,40,304,56,35,4,56.7,9,4.6,

105,133,135,23.92,190,12.9,15.2,192.78,104,255,24)

# Make a countour for the window creation

W <- convexhull.xy(xp,yp)

# Class of nests size - large, medium and small

acat <- cut(area,breaks=c(-Inf,25,55,Inf),right=FALSE,

labels=c("s","m","l"))

#Create a ppp object

syn.ppp <- ppp(x=xp,y=yp,window=W, marks=acat)

# Test intensity hypothesis

f1 <- ppm(syn.ppp) # Same (constant) intensity for each area category.

f2 <- ppm(syn.ppp ~ marks) # Allows different (constant) intensity for

each area category.

anova(f1,f2,test="Chi")

#0.002015 ** OK, the hypothesis that the intensities are the same was

reject, but intensities is not the question.

Based in my initial hypothesis, I've like to show envelopes and observed

values of the use of some function for the three point patters (large,

medium and small ant nest size)under CSR. And for this I try to:

kS<-envelope(syn.ppp[syn.ppp$acat=="s"], nsim=99,fun=Kest)

plot(kS,lwd=list(3,1,1,1), main="")

kM<-envelope(syn.ppp[syn.ppp$acat=="m"], nsim=99,fun=Kest)

plot(kM,lwd=list(3,1,1,1), main="")

kL<-envelope(syn.ppp[syn.ppp$acat=="l"], nsim=99,fun=Kest)

plot(kL,lwd=list(3,1,1,1), main="")

But doesn't work yet. My approach now sounds correct?

Thanks in advanced,

Alexandre

--

======================================================================

Alexandre dos Santos

Prote????o Florestal

IFMT - Instituto Federal de Educa????o, Ci??ncia e Tecnologia de Mato Grosso

Campus C??ceres

Caixa Postal 244

Avenida dos Ramires, s/n

Bairro: Distrito Industrial

C??ceres - MT CEP: 78.200-000

Fone: (+55) 65 99686-6970 (VIVO) (+55) 65 3221-2674 (FIXO)

[hidden email]

Lattes: http://lattes.cnpq.br/1360403201088680

OrcID: orcid.org/0000-0001-8232-6722

Researchgate: www.researchgate.net/profile/Alexandre_Santos10

LinkedIn: br.linkedin.com/in/alexandre-dos-santos-87961635

Mendeley:www.mendeley.com/profiles/alexandre-dos-santos6/

======================================================================

Em 23/07/2019 22:00, Rolf Turner escreveu:

>

> Thanks to Marcelino for chiming in on this.?? I should have responded

> earlier, but was "busy on a personal matter".

>

> To add to what Marcelino said:

>

> (1) Your post in HTML was horribly garbled and a struggle to read.

> *PLEASE* set your email client so that it does *NOT* post in HTML when

> posting to this list.

>

> (2) A very minor point:?? Your construction of "syn.ppp" was

> unnecessarily complicated and convoluted.?? You could simply do:

>

> ?????? syn.ppp?? <- ppp(x=xp,y=yp,window=W,marks=area)

>

> (3) You appear to be confused as to the distinction between "marks"

> and "covariates".???? These are superficially similar but conceptually

> completely different.?? What you are dealing with is *marks*.?? There are

> no covariates involved.?? See [1], p. 147.

>

> (4) Your calls to envelope() are mal-formed; the expression "area >=

> 0" and "area < 25" will be taken as the values of "nrank" and ... who

> knows what??? Did you not notice the warning messages you got about

> there being something wrong with "nrank"?

>

> You are being hopelessly naïve in expecting envelope() to interpret

> "area >= 0" and "area < 25" in the way you want it to interpret them.

> The spatstat package does not read minds.

>

> Marcelino has shown you how to make a proper call.

>

> (5) Since you are interested in categorising "area" into groups,

> rather than being interested in the *numerical* value of area, you

> should do the categorisation explicitly:

>

> acat <- cut(area,breaks=c(-Inf,25,55,Inf),right=FALSE,

>             labels=c("s","m","l"))

> # right=FALSE since you want "area < 25" rather than

> # "area <= 25" --- although this makes no difference for the

> # area values actually used.

>

> syn.ppp <- ppp(x=xp,y=yp,marks=acat)

>

> (6) It is not clear to me what your "research question" is. Do you

> want to test whether the intensities differ between the area

> categories? Unless my thinking is hopelessly confused, this has

> nothing to do with (or at least does not require the use of) the

> envelope() function:

>

> f1 <- ppm(syn.ppp) # Same (constant) intensity for each area category.

> f2 <- ppm(syn.ppp ~ marks) # Allows different (constant) intensity

>                            # for each area category.

> anova(f1,f2,test="Chi")

>

> This test (not surprisingly) rejects the hypothesis that the

> intensities are the same; p-value = 0.0020.

>

> If this is not the hypothesis that you are interested in, please

> clarify your thinking and your question, and get back to us.

>

> cheers,

>

> Rolf Turner

>

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo

### Re: Calculating impact measures for spatial panel models and trouble specifying SDEM model using spml

On Wed, 24 Jul 2019, Letisha Sarah Fong Rui Zhen wrote:

> Dear all,

Please do not post HTML-formatted messages.

>

> I'm working with panel data of 9 countries and 18 years and I'm

> running fixed effects SDEM, SLX and SEM with the splm package.

>

> I have three questions:

>

> 1. I can't seem to get the SDEM model to run. My code for each of the

> 3 models is:

>

> model.SDEM <- spml(ln(Y) ~ ln(GDP) + (ln(GDP))^2 + EI + RE +

> slag(ln(GDP), listw = my.listw) + slag((ln(GDP))^2, listw = my.listw),

> data = my.data, listw = my.listw, model = "within", effect =

> "individual", lag = F, spatial.error = "b")

>

> model.SLX <- plm(ln(Y) ~ ln(GDP) + (ln(GDP))^2 + EI + RE + slag(ln(GDP),

> listw = my.listw) + slag((ln(GDP))^2, listw = my.listw), data = my.data,

> model = "within", effect = "individual")

>

> model.SEM <- spml(ln(Y) ~ ln(GDP) + (ln(GDP))^2 + EI + RE, data =

> my.data, listw = my.listw, model = "within", effect =

> "individual", lag = F, spatial.error = "b")

>

> I am able to run both SLX and SEM models without problem, but when I try

> to run the SDEM model, the error message is:

>

> Error in UseMethod("slag") :

> no applicable method for 'slag' applied to an object of class

> "c('double', 'numeric')"

>

> I don't understand what is wrong here, as I have no problems with the

> slag() function in the SLX model. My data is a pdata.frame and each

> variable is a numeric pseries.

My guess would be that you should protect your square with I() in general,

but have no idea - this is not a reproducible example.
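Roger's point about protecting the square with I() can be checked with a small base-R example; the data and variable names below are made up for illustration and are not the poster's panel:

```r
# Inside a formula, ^ is formula operator syntax, so x^2 silently
# collapses to x unless it is protected with I() (or wrapped in a
# transformation function such as log()).
set.seed(1)
x <- rnorm(50)
y <- 1 + 2 * x + 0.5 * x^2 + rnorm(50)

unprotected <- lm(y ~ x + x^2)    # same model as y ~ x: 2 coefficients
protected   <- lm(y ~ x + I(x^2)) # quadratic term kept: 3 coefficients

length(coef(unprotected))  # 2
length(coef(protected))    # 3
```

The same protection applies inside slag()'s first argument, which may be why the SDEM call fails while the plain terms work.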

>

>

> 2. How can I calculate impact measures (direct, indirect and total) for

> spatial panel models?

>

> The impacts() function in spdep doesn't work anymore and the impacts()

> function from the spatialreg package seems to work only for

> cross-sectional data and not panel data.

>

> For example, I ran:

>

> spatialreg::impacts(model.SLX)

>

> And the error message is:

>

> Error in UseMethod("impacts", obj) :

> no applicable method for 'impacts' applied to an object of class

> "c('plm', 'panelmodel')"

>

> I have tried methods(impacts) but none of the functions seem to work for

> my SLX model created with the splm package.

But your SLX model is created with the plm package, isn't it? The only use

of splm is for the manual lags with slag()?

>

> I also looked at some previous examples in the splm documentation and

> more specifically the spml() function and the example provided

> (specifically the impact measures) doesn't work anymore:

>

> data(Produc, package = "plm")

> data(usaww)

> fm <- log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp

> ## random effects panel with spatial lag

> respatlag <- spml(fm, data = Produc, listw = mat2listw(usaww),

> model="random", spatial.error="none", lag=TRUE)

> summary(respatlag)

> ## calculate impact measures

> impac1 <- impacts(respatlag, listw = mat2listw(usaww, style = "W"), time = 17)

> summary(impac1, zstats=TRUE, short=TRUE)

>

The implemented impacts methods in splm apply to the case where the lagged

response is included. For SDEM and SLX, you can get the impacts by

addition for the total impacts, and by linear combination for their

standard errors. This is not implemented in a method. Further, slag() does

not give any impacts method any information on which variables have been

lagged - in your illustration above EI and RE are not lagged.
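The addition/linear-combination arithmetic can be sketched as follows; b and V are made-up stand-ins for the coefficient vector and covariance matrix you would extract from a fitted SLX/SDEM model, not actual splm or plm output:

```r
# Total impact of a variable in an SLX/SDEM model: coefficient on x
# (direct) plus coefficient on the spatial lag Wx (indirect), with a
# standard error from the usual linear-combination formula
#   Var(beta + theta) = Var(beta) + Var(theta) + 2*Cov(beta, theta).
b <- c(beta = 0.8, theta = 0.3)               # made-up coefficients
V <- matrix(c(0.040, 0.010,
              0.010, 0.090), nrow = 2)        # made-up covariance matrix
a <- c(1, 1)                                  # weights selecting beta + theta

total    <- sum(a * b)                        # total impact: 1.1
se_total <- sqrt(drop(t(a) %*% V %*% a))      # sqrt(0.04 + 0.09 + 2*0.01)
```

With the real model, b and V would come from the fit's coefficient vector and vcov() matrix for each pair of a variable and its slag() term.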

> The error message when I run the impacts() function is:

>

> Error in UseMethod("impacts", obj) :

> no applicable method for 'impacts' applied to an object of class "splm"

>

> My question is therefore, how do I go about calculating direct, indirect

> and total impact measures for spatial panel data?

>

>

> 3. How can I test if the SDEM model can be simplified to the SLX model

> (since I estimate the SDEM by maximum likelihood (spml function) and

> the SLX by ordinary linear regression (plm function))? From my

> understanding the plm() function does not compute a log-likelihood or

> AIC so I probably can't do a likelihood ratio test to choose

> between models (I haven't tried this out because I'm stuck at

> running the SDEM model).

Do you know definitely that plm does not provide a log likelihood? I

realise that it isn't OLS unless pooled. Have you reviewed the JSS plm and

splm articles?
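As a quick check with a built-in dataset (a stand-in, not the poster's panel): fits from lm() do carry a log-likelihood, so a fixed-effects model re-estimated via lm() with dummies or the within transformation yields the pieces needed for a likelihood ratio comparison of nested models:

```r
# logLik() and AIC() are available on lm fits.
fit <- lm(dist ~ speed, data = cars)  # built-in 'cars' data as a stand-in
ll  <- logLik(fit)
attr(ll, "df")  # 3: intercept, slope, and the error variance
AIC(fit)        # AIC follows directly from the log-likelihood
```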

Roger

>

> Any help or advice would be greatly appreciated. Thank you.

>

> Best wishes,

> Sarah

>

>

>

> ________________________________

>

> Important: This email is confidential and may be privileged. If you are

> not the intended recipient, please delete it and notify us immediately;

> you should not copy or use it for any purpose, nor disclose its contents

> to any other person. Thank you.

>

> [[alternative HTML version deleted]]

>

>

--

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

https://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Calculating impact measures for spatial panel models and trouble specifying SDEM model using spml

Dear all,

I'm working with panel data of 9 countries and 18 years and I'm running fixed effects SDEM, SLX and SEM with the splm package.

I have three questions:

1. I can't seem to get the SDEM model to run. My code for each of the 3 models is:

model.SDEM <- spml(ln(Y) ~ ln(GDP) + (ln(GDP))^2 + EI + RE + slag(ln(GDP), listw = my.listw) + slag((ln(GDP))^2, listw = my.listw), data = my.data, listw = my.listw, model = "within", effect = "individual", lag = F, spatial.error = "b")

model.SLX <- plm(ln(Y) ~ ln(GDP) + (ln(GDP))^2 + EI + RE + slag(ln(GDP), listw = my.listw) + slag((ln(GDP))^2, listw = my.listw), data = my.data, model = "within", effect = "individual")

model.SEM <- spml(ln(Y) ~ ln(GDP) + (ln(GDP))^2 + EI + RE, data = my.data, listw = my.listw, model = "within", effect = "individual", lag = F, spatial.error = "b")

I am able to run both SLX and SEM models without problem, but when I try to run the SDEM model, the error message is:

Error in UseMethod("slag") :

no applicable method for 'slag' applied to an object of class "c('double', 'numeric')"

I don't understand what is wrong here, as I have no problems with the slag() function in the SLX model. My data is a pdata.frame and each variable is a numeric pseries.

2. How can I calculate impact measures (direct, indirect and total) for spatial panel models?

The impacts() function in spdep doesn't work anymore and the impacts() function from the spatialreg package seems to work only for cross-sectional data and not panel data.

For example, I ran:

spatialreg::impacts(model.SLX)

And the error message is:

Error in UseMethod("impacts", obj) :

no applicable method for 'impacts' applied to an object of class "c('plm', 'panelmodel')"

I have tried methods(impacts) but none of the functions seem to work for my SLX model created with the splm package.

I also looked at some previous examples in the splm documentation and more specifically the spml() function, and the example provided (specifically the impact measures) doesn't work anymore:

data(Produc, package = "plm")

data(usaww)

fm <- log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp

## random effects panel with spatial lag

respatlag <- spml(fm, data = Produc, listw = mat2listw(usaww),

model="random", spatial.error="none", lag=TRUE)

summary(respatlag)

## calculate impact measures

impac1 <- impacts(respatlag, listw = mat2listw(usaww, style = "W"), time = 17)

summary(impac1, zstats=TRUE, short=TRUE)

The error message when I run the impacts() function is:

Error in UseMethod("impacts", obj) :

no applicable method for 'impacts' applied to an object of class "splm"

My question is therefore, how do I go about calculating direct, indirect and total impact measures for spatial panel data?

3. How can I test if the SDEM model can be simplified to the SLX model (since I estimate the SDEM by maximum likelihood (spml function) and the SLX by ordinary linear regression (plm function))? From my understanding the plm() function does not compute a log-likelihood or AIC, so I probably can't do a likelihood ratio test to choose between models (I haven't tried this out because I'm stuck at running the SDEM model).

Any help or advice would be greatly appreciated. Thank you.

Best wishes,

Sarah

________________________________

Important: This email is confidential and may be privileged. If you are not the intended recipient, please delete it and notify us immediately; you should not copy or use it for any purpose, nor disclose its contents to any other person. Thank you.

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: [FORGED] Simple Ripley's CSR test for marked point patterns

Thanks to Marcelino for chiming in on this. I should have responded

earlier, but was "busy on a personal matter".

To add to what Marcelino said:

(1) Your post in HTML was horribly garbled and a struggle to read.

*PLEASE* set your email client so that it does *NOT* post in HTML when

posting to this list.

(2) A very minor point: Your construction of "syn.ppp" was

unnecessarily complicated and convoluted. You could simply do:

syn.ppp <- ppp(x=xp,y=yp,window=W,marks=area)

(3) You appear to be confused as to the distinction between "marks" and

"covariates". These are superficially similar but conceptually

completely different. What you are dealing with is *marks*. There are

no covariates involved. See [1], p. 147.

(4) Your calls to envelope() are malformed; the expressions "area >= 0"

and "area < 25" will be taken as the values of "nrank" and ... who knows

what? Did you not notice the warning messages you got about there being

something wrong with "nrank"?

You are being hopelessly naïve in expecting envelope() to interpret

"area >= 0" and "area < 25" in the way you want it to interpret them.

The spatstat package does not read minds.

Marcelino has shown you how to make a proper call.

(5) Since you are interested in categorising "area" into groups, rather

than being interested in the *numerical* value of area, you should do

the categorisation explicitly:

acat <- cut(area,breaks=c(-Inf,25,55,Inf),right=FALSE,

labels=c("s","m","l"))

# right=FALSE since you want "area < 25" rather than

# "area <= 25" --- although this makes no difference for the

# area values actually used.

syn.ppp <- ppp(x=xp,y=yp,marks=acat)
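The cut() call above can be verified on a few made-up area values (a tiny stand-in vector, not the thread's full data):

```r
# right = FALSE gives intervals [-Inf, 25), [25, 55), [55, Inf),
# matching "area < 25" etc. in the original question; note that the
# boundary values 25 and 55 land in the upper category.
area <- c(2, 24, 25, 54, 55, 300)
acat <- cut(area, breaks = c(-Inf, 25, 55, Inf), right = FALSE,
            labels = c("s", "m", "l"))
table(acat)  # two values fall in each of "s", "m", "l"
```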

(6) It is not clear to me what your "research question" is. Do you want

to test whether the intensities differ between the area categories?

Unless my thinking is hopelessly confused, this has nothing to do with

(or at least does not require the use of) the envelope() function:

f1 <- ppm(syn.ppp) # Same (constant) intensity for each area category.

f2 <- ppm(syn.ppp ~ marks) # Allows different (constant) intensity

# for each area category.

anova(f1,f2,test="Chi")

This test (not surprisingly) rejects the hypothesis that the intensities

are the same; p-value = 0.0020.

If this is not the hypothesis that you are interested in, please clarify

your thinking and your question, and get back to us.

cheers,

Rolf Turner

--

Honorary Research Fellow

Department of Statistics

University of Auckland

Phone: +64-9-373-7599 ext. 88276

[1] Spatial Point Patterns: Methodology and Applications with R

1st Edition, Adrian Baddeley, Ege Rubak, Rolf Turner

Chapman and Hall/CRC, 2015

P. S. It is of course (!!!) highly recommended that you spend some time

reading the book cited above, if you are going to work in this area.

R. T.

On 23/07/19 9:36 AM, Alexandre Santos via R-sig-Geo wrote:

> Dear R-Sig-Geo Members, I'd like to find a simple way to apply a CSR test for marked point patterns; to do this I tried to create the script below:

> # Packages

> require(spatstat)

> require(sp)

>

> # Create some points that represent ant nests in UTM

> xp<-c(371278.588,371250.722,371272.618,371328.421,371349.974,371311.95,371296.265,371406.46,371411.551,371329.041,371338.081,371334.182,371333.756,371299.818,371254.374,371193.673,371172.836,371173.803,371153.73,371165.051,371140.417,371168.279,371166.367,371180.575,371132.664,371129.791,371232.919,371208.502,371289.462,371207.595,371219.008,371139.921,371133.215,371061.467,371053.69,371099.897,371108.782,371112.52,371114.241,371176.236,371159.185,371159.291,371158.552,370978.252,371120.03,371116.993)

> yp<-c(8246507.94,8246493.176,8246465.974,8246464.413,8246403.465,8246386.098,8246432.144,8246394.827,8246366.201,8246337.626,8246311.125,8246300.039,8246299.594,8246298.072,8246379.351,8246431.998,8246423.913,8246423.476,8246431.658,8246418.226,8246400.161,8246396.891,8246394.225,8246400.391,8246370.244,8246367.019,8246311.075,8246255.174,8246255.085,8246226.514,8246215.847,8246337.316,8246330.197,8246311.197,8246304.183,8246239.282,8246239.887,8246241.678,8246240.361,8246167.364,8246171.581,8246171.803,8246169.807,8246293.57,8246183.194,8246189.926)

> # Then I create the size of each nest - my covariate used as a marked process

> area<-c(117,30,4,341,15,160,35,280,108,168,63,143,2,48,182,42,88,56,27,156,288,45,49,234,72,270,91,40,304,56,35,4,56.7,9,4.6,105,133,135,23.92,190,12.9,15.2,192.78,104,255,24)

>

> # Make a contour - only as an exercise

> W <- convexhull.xy(xp,yp)

> # Create a ppp object

> p_xy<-cbind(xp,yp)

> syn.ppp<-ppp(x=coordinates(p_xy)[,1],y=coordinates(p_xy)[,2],window=W, marks=area)

> syn.ppp <- as.ppp(syn.ppp)

> plot(syn.ppp, main=" ")

> First, I'd like to study the CSR of a marked point process (my hypothesis is that the different sizes have the same spatial distribution) when area >= 0, area < 25 and when area >= 25, area < 55; for this I ran:

> # Area 0-25

> env.formi1<-envelope(syn.ppp,nsim=99,fun=Kest, area >= 0, area < 25)

> plot(env.formi1,lwd=list(3,1,1,1), main="")

> # Area 25-55

> env.formi2<-envelope(syn.ppp,nsim=99,fun=Kest, area >=25, area < 55)

> plot(env.formi2,lwd=list(3,1,1,1), main="")

> My first problem is that I have the same plot in both conditions and I don't know why.

> Second, if I try to estimate the market intensity observed pattern:

> est.int <- density(syn.ppp)

> est_xy <- rmpoispp(est.int)

> plot(est_xy, main=" ")

> My output is only the point positions, without the marked area in the ppp object I created.

> My question is: what is the problem with this Ripley's reduced second moment function approach? Is there any way to study my point process when the area is a covariate of the point process?

> Thanks in advance

> Alexandre

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo

### Re: DELDIR

This question should have been, in first instance, directed to me (the

maintainer of the deldir package), rather than to the r-sig-geo list.

On 24/07/19 1:47 AM, PONS Frederic - CEREMA/DTerMed/DREC/SRILH wrote:

> Dear all

>

> Sometimes we are using the deldir function for squeletisation.

Had to Google this word. According to what I see it should be spelled

"squelettisation" (with a double "t"). The English word is

"skeletonisation", and this list is (for better or for worse) an English

language list.

> Since the last release, deldir stops on some shapefiles because the

> polygons are too narrow. In the version before, there were no problems.

>

> We have read the "Notes on error messages", and the problem of

> anticlockwise ordering of triangles is listed there.

>

> In the trifnd function, the code is

> # Check that the vertices of the triangle listed in tau are

> # in anticlockwise order. (If they aren't then alles upgefucken

> # ist; throw an error.)

> call acchk(tau(1),tau(2),tau(3),anticl,x,y,ntot,eps)

> if(!anticl) {

> call acchk(tau(3),tau(2),tau(1),anticl,x,y,ntot,eps)

> if(!anticl) {

> call fexit("Both vertex orderings are clockwise. See help for

> deldir.")

> } else {

> ivtmp = tau(3)

> tau(3) = tau(1)

> tau(1) = ivtmp

> }

> }

>

> We don't understand why the bad triangles are not simply reordered into

> the good order. Perhaps, if this problem appears, the earlier part of

> the deldir function is not right.

> If someone can explain this to us.

The error message you quote indicates that *both* orderings are bad so

something is totally out of whack. It would be rash (and arrogant) for

the software to fool around with the structure. In such situations the

user should diagnose what is going wrong.
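As a diagnostic aid, the orientation test behind a vertex-order check like acchk() can be reproduced in a few lines of R; this is an illustrative cross-product sketch, not deldir's actual code:

```r
# Signed area of triangle (x1,y1)-(x2,y2)-(x3,y3): positive means the
# vertices run anticlockwise, negative clockwise; a degenerate ("too
# narrow") triangle gives a value near zero, which is where tolerance
# trouble of the kind described above starts.
signed_area <- function(x, y) {
  ((x[2] - x[1]) * (y[3] - y[1]) - (x[3] - x[1]) * (y[2] - y[1])) / 2
}
signed_area(c(0, 1, 0), c(0, 0, 1))  # 0.5: anticlockwise
signed_area(c(0, 0, 1), c(0, 1, 0))  # -0.5: clockwise
signed_area(c(0, 1, 2), c(0, 0, 0))  # 0: collinear (degenerate)
```

If both orderings of a triangle fail such a check, its vertices are (numerically) collinear, which matches the "too narrow polygons" symptom.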

If you cannot diagnose the problem, please send me a reprex and I will

look into it.

cheers,

Rolf Turner

--

Honorary Research Fellow

Department of Statistics

University of Auckland

Phone: +64-9-373-7599 ext. 88276

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo

### DELDIR

Dear all

Sometimes we are using the deldir function for squeletisation.

Since the last release, deldir stops on some shapefiles because the polygons

are too narrow. In the version before, there were no problems.

We have read the "Notes on error messages", and the problem of

anticlockwise ordering of triangles is listed there.

In the trifnd function, the code is

# Check that the vertices of the triangle listed in tau are

# in anticlockwise order. (If they aren't then alles upgefucken

# ist; throw an error.)

call acchk(tau(1),tau(2),tau(3),anticl,x,y,ntot,eps)

if(!anticl) {

call acchk(tau(3),tau(2),tau(1),anticl,x,y,ntot,eps)

if(!anticl) {

call fexit("Both vertex orderings are clockwise. See help for

deldir.")

} else {

ivtmp = tau(3)

tau(3) = tau(1)

tau(1) = ivtmp

}

}

We don't understand why the bad triangles are not simply reordered into

the good order. Perhaps, if this problem appears, the earlier part of

the deldir function is not right.

If someone can explain this to us.

Best regards

*Frédéric Pons

*

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: Simple Ripley's CSR test for marked point patterns

Dear Alexandre,

I think that the solution to your first problem would be something like

this:

#Area 0-25

env.formi1<-envelope(syn.ppp[syn.ppp$marks>=0 & syn.ppp$marks<25],

nsim=99,fun=Kest)

plot(env.formi1,lwd=list(3,1,1,1), main="")

Then, for your second problem, you should clarify what you understand

by "market intensity observed pattern". In fact, rmpoispp() simulates

random multitype ppp's, i.e., it does not estimate parameters of

quantitatively marked point processes.

Maybe you would find these interesting:

Smooth.ppp(syn.ppp)

and

envelope(syn.ppp, nsim=99,fun=markcorr,

simulate=expression(rlabel(syn.ppp)))

Cheers,

Marcelino

On 22/07/2019 at 23:36, Alexandre Santos via R-sig-Geo wrote:

> Dear R-Sig-Geo Members, I'd like to find a simple way to apply a CSR test for marked point patterns; to do this I tried to create the script below:

> # Packages

> require(spatstat)

> require(sp)

>

> # Create some points that represent ant nests in UTM

> xp<-c(371278.588,371250.722,371272.618,371328.421,371349.974,371311.95,371296.265,371406.46,371411.551,371329.041,371338.081,371334.182,371333.756,371299.818,371254.374,371193.673,371172.836,371173.803,371153.73,371165.051,371140.417,371168.279,371166.367,371180.575,371132.664,371129.791,371232.919,371208.502,371289.462,371207.595,371219.008,371139.921,371133.215,371061.467,371053.69,371099.897,371108.782,371112.52,371114.241,371176.236,371159.185,371159.291,371158.552,370978.252,371120.03,371116.993)

> yp<-c(8246507.94,8246493.176,8246465.974,8246464.413,8246403.465,8246386.098,8246432.144,8246394.827,8246366.201,8246337.626,8246311.125,8246300.039,8246299.594,8246298.072,8246379.351,8246431.998,8246423.913,8246423.476,8246431.658,8246418.226,8246400.161,8246396.891,8246394.225,8246400.391,8246370.244,8246367.019,8246311.075,8246255.174,8246255.085,8246226.514,8246215.847,8246337.316,8246330.197,8246311.197,8246304.183,8246239.282,8246239.887,8246241.678,8246240.361,8246167.364,8246171.581,8246171.803,8246169.807,8246293.57,8246183.194,8246189.926)

> # Then I create the size of each nest - my covariate used as a marked process

> area<-c(117,30,4,341,15,160,35,280,108,168,63,143,2,48,182,42,88,56,27,156,288,45,49,234,72,270,91,40,304,56,35,4,56.7,9,4.6,105,133,135,23.92,190,12.9,15.2,192.78,104,255,24)

>

> # Make a contour - only as an exercise

> W <- convexhull.xy(xp,yp)

> # Create a ppp object

> p_xy<-cbind(xp,yp)

> syn.ppp<-ppp(x=coordinates(p_xy)[,1],y=coordinates(p_xy)[,2],window=W, marks=area)

> syn.ppp <- as.ppp(syn.ppp)

> plot(syn.ppp, main=" ")

> First, I'd like to study the CSR of a marked point process (my hypothesis is that the different sizes have the same spatial distribution) when area >= 0, area < 25 and when area >= 25, area < 55; for this I ran:

> # Area 0-25

> env.formi1<-envelope(syn.ppp,nsim=99,fun=Kest, area >= 0, area < 25)

> plot(env.formi1,lwd=list(3,1,1,1), main="")

> # Area 25-55

> env.formi2<-envelope(syn.ppp,nsim=99,fun=Kest, area >=25, area < 55)

> plot(env.formi2,lwd=list(3,1,1,1), main="")

> My first problem is that I have the same plot in both conditions and I don't know why.

> Second, if I try to estimate the market intensity observed pattern:

> est.int <- density(syn.ppp)

> est_xy <- rmpoispp(est.int)

> plot(est_xy, main=" ")

> My output is only the point positions, without the marked area in the ppp object I created.

> My question is: what is the problem with this Ripley's reduced second moment function approach? Is there any way to study my point process when the area is a covariate of the point process?

> Thanks in advance

> Alexandre

>

>

> -- ======================================================================Alexandre dos SantosProteção Florestal IFMT - Instituto Federal de Educação, Ciência e Tecnologia de Mato GrossoCampus CáceresCaixa Postal 244Avenida dos Ramires, s/nBairro: Distrito Industrial Cáceres - MT CEP: 78.200-000Fone: (+55) 65 99686-6970 (VIVO) (+55) 65 3221-2674 (FIXO)e-mails:[hidden email] [hidden email] Lattes: http://lattes.cnpq.br/1360403201088680 OrcID: orcid.org/0000-0001-8232-6722 - ResearcherID: A-5790-2016Researchgate: www.researchgate.net/profile/Alexandre_Santos10 LinkedIn: br.linkedin.com/in/alexandre-dos-santos-87961635Mendeley:www.mendeley.com/profiles/alexandre-dos-santos6/

> ======================================================================

>

>

>

> [[alternative HTML version deleted]]

>

> _______________________________________________

> R-sig-Geo mailing list

> [hidden email]

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

> .

>

--
Marcelino de la Cruz Rot
Depto. de Biología y Geología, Física y Química Inorgánica
Universidad Rey Juan Carlos
Móstoles, España

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Simple Ripley's CSR test for marked point patterns

Dear R-Sig-Geo members, I would like to find a simple way to apply a CSR test to marked point patterns. For this I tried to create the script below:

# Packages
require(spatstat)
require(sp)

# Create some points that represent ant nests in UTM
xp <- c(371278.588,371250.722,371272.618,371328.421,371349.974,371311.95,371296.265,371406.46,371411.551,371329.041,371338.081,371334.182,371333.756,371299.818,371254.374,371193.673,371172.836,371173.803,371153.73,371165.051,371140.417,371168.279,371166.367,371180.575,371132.664,371129.791,371232.919,371208.502,371289.462,371207.595,371219.008,371139.921,371133.215,371061.467,371053.69,371099.897,371108.782,371112.52,371114.241,371176.236,371159.185,371159.291,371158.552,370978.252,371120.03,371116.993)
yp <- c(8246507.94,8246493.176,8246465.974,8246464.413,8246403.465,8246386.098,8246432.144,8246394.827,8246366.201,8246337.626,8246311.125,8246300.039,8246299.594,8246298.072,8246379.351,8246431.998,8246423.913,8246423.476,8246431.658,8246418.226,8246400.161,8246396.891,8246394.225,8246400.391,8246370.244,8246367.019,8246311.075,8246255.174,8246255.085,8246226.514,8246215.847,8246337.316,8246330.197,8246311.197,8246304.183,8246239.282,8246239.887,8246241.678,8246240.361,8246167.364,8246171.581,8246171.803,8246169.807,8246293.57,8246183.194,8246189.926)

# Then I create the size of each nest - my covariate used as marks
area <- c(117,30,4,341,15,160,35,280,108,168,63,143,2,48,182,42,88,56,27,156,288,45,49,234,72,270,91,40,304,56,35,4,56.7,9,4.6,105,133,135,23.92,190,12.9,15.2,192.78,104,255,24)

# Make a contour - only as an exercise
W <- convexhull.xy(xp, yp)

# Create a ppp object
p_xy <- cbind(xp, yp)
syn.ppp <- ppp(x = coordinates(p_xy)[,1], y = coordinates(p_xy)[,2], window = W, marks = area)
syn.ppp <- as.ppp(syn.ppp)
plot(syn.ppp, main = " ")

First, I would like to study CSR of the marked point process (my hypothesis is that nests of different sizes have the same spatial distribution) when area >= 0, area < 25 and when area >= 25, area < 55. For this I do:

# Area 0-25
env.formi1 <- envelope(syn.ppp, nsim = 99, fun = Kest, area >= 0, area < 25)
plot(env.formi1, lwd = list(3, 1, 1, 1), main = "")

# Area 25-55
env.formi2 <- envelope(syn.ppp, nsim = 99, fun = Kest, area >= 25, area < 55)
plot(env.formi2, lwd = list(3, 1, 1, 1), main = "")

My first problem is that I get the same plot under both conditions and I don't know why.

Second, if I try to estimate the marked intensity of the observed pattern

est.int <- density(syn.ppp)
est_xy <- rmpoispp(est.int)
plot(est_xy, main = " ")

my output contains only the point positions, without the area marks from the ppp object I created.

My question is: what is the problem with this Ripley's reduced second moment function approach? Is there any way to study my point process when area is a mark (covariate) of the process?
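On the second problem, a minimal illustration (an assumption about the intent, not code from the thread): `density.ppp()` estimates the intensity of the locations only and ignores marks, which is why the simulated pattern comes back unmarked. A mark-weighted intensity surface can be had by passing the marks as weights:

```r
# Sketch: intensity of locations vs. mark-weighted intensity.
# X is a hypothetical stand-in for syn.ppp.
library(spatstat)

set.seed(42)
X <- rpoispp(50)
marks(X) <- runif(npoints(X), 0, 350)

lambda   <- density(X)                      # locations only; marks ignored
lambda_w <- density(X, weights = marks(X))  # each point weighted by its mark
plot(lambda_w, main = "mark-weighted intensity")
```

Note that simulating from either surface with `rpoispp()` still produces an unmarked Poisson pattern; the marks would have to be modelled separately.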

Thanks in advance,

Alexandre

--
======================================================================
Alexandre dos Santos
Proteção Florestal
IFMT - Instituto Federal de Educação, Ciência e Tecnologia de Mato Grosso
Campus Cáceres, Caixa Postal 244, Avenida dos Ramires, s/n
Bairro: Distrito Industrial, Cáceres - MT, CEP: 78.200-000
Fone: (+55) 65 99686-6970 (VIVO), (+55) 65 3221-2674 (FIXO)
e-mails: [hidden email] [hidden email]
Lattes: http://lattes.cnpq.br/1360403201088680
OrcID: orcid.org/0000-0001-8232-6722 - ResearcherID: A-5790-2016
Researchgate: www.researchgate.net/profile/Alexandre_Santos10
LinkedIn: br.linkedin.com/in/alexandre-dos-santos-87961635
Mendeley: www.mendeley.com/profiles/alexandre-dos-santos6/
======================================================================



### Re: st_simplify outputs the input geometry with no simplification

I missed it completely. Thank you very much!

Hugo

Michael Sumner <[hidden email]> wrote on Monday, 22/07/2019 at 15:12:

> [quoted reply trimmed - see the message below]



### Re: st_simplify outputs the input geometry with no simplification

It has changed it: there's now a mix of POLYGON and MULTIPOLYGON geometries, but no change in the underlying coordinates. You have to use the dTolerance argument:

nrow(st_coordinates(nc))
[1] 2529
nrow(st_coordinates(st_cast(nc_simplfy)))
[1] 2529
nrow(st_coordinates(st_cast(st_simplify(nc, dTolerance = 1000))))
[1] 1941

It still might change the geometry with dTolerance = 0 (the default); in that case it removes collinear vertices (unnecessarily dense straight lines are reduced to two-vertex edges). I only learnt that recently.

HTH
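The point above can be checked end to end with sf's bundled North Carolina data (a sketch under the assumption that vertex counts, not exact numbers, are what matter; counts can differ across GEOS versions):

```r
# Sketch: st_simplify() with the default dTolerance = 0 leaves the vertex
# count (nearly) unchanged; a tolerance in CRS units (metres for EPSG:3857)
# is what actually removes vertices.
library(sf)

nc <- st_read(system.file("shape/nc.shp", package = "sf"), quiet = TRUE)
nc <- st_transform(nc, 3857)

n_before  <- nrow(st_coordinates(nc))
n_default <- nrow(st_coordinates(st_simplify(nc)))                   # dTolerance = 0
n_1km     <- nrow(st_coordinates(st_simplify(nc, dTolerance = 1000)))

print(c(before = n_before, default = n_default, km1 = n_1km))
# Expectation: n_default is close to n_before, while n_1km is clearly smaller.
```

The design point is that dTolerance is interpreted in the units of the layer's CRS, so simplifying lon/lat data with a tolerance meant in metres silently does (almost) nothing; transform first, as the example in the question already does.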

On Mon, Jul 22, 2019 at 8:15 PM Hugo Costa <[hidden email]> wrote:

> [quoted message trimmed - see the original post below]

--
Michael Sumner
Software and Database Engineer
Australian Antarctic Division
Hobart, Australia
e-mail: [hidden email]



### st_simplify outputs the input geometry with no simplification

Dear list,

The function st_simplify outputs exactly the same geometry as the input. This is an example:

library(sf)

nc <- st_read(system.file("shape/nc.shp", package="sf"))
nc <- nc %>% st_transform(3857)
plot(st_geometry(nc))

nc_simplfy <- st_simplify(nc)
plot(st_geometry(nc_simplfy))

However, I'm quite sure that a few days ago st_simplify was working as expected. Could it be something wrong with my machine? What am I missing?

> sessionInfo()
R version 3.5.3 (2019-03-11)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)

Matrix products: default

locale:
[1] LC_COLLATE=Portuguese_Portugal.1252 LC_CTYPE=Portuguese_Portugal.1252
LC_MONETARY=Portuguese_Portugal.1252 LC_NUMERIC=C
[5] LC_TIME=Portuguese_Portugal.1252

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] sf_0.7-3

loaded via a namespace (and not attached):
[1] compiler_3.5.3 magrittr_1.5 class_7.3-15 DBI_1.0.0
tools_3.5.3 units_0.6-2 Rcpp_1.0.1 grid_3.5.3 e1071_1.7-0.1
[10] classInt_0.3-1



### Re: FW: Including Alaska and Hawaii in listw Specifications of US States and Substate Regions

On Wed, 17 Jul 2019, Stuart Reece wrote:

> Hi,
>
> I was wanting to include a listw list of neighbours for USA states which
> include Alaska and Hawaii, with Alaska a neighbour for Washington and Oregon
> states and Hawaii a neighbour for California and the Western continental US
> states.
>
> Naturally queen relationships fail.
>
> The Albers shapefiles at this URL are really useful for general mapping
> purposes but result in erroneous relationships with k-nearest neighbours
> when this is performed in GeoDa.
>
> These maps have Hawaii and Alaska elided to south of Texas.
>
> https://github.com/hrbrmstr/albersusa

They are troublesome to install; accessing just the file is impossible. Did you look at the documentation of spdep::edit.nb()? Reading the list of functions and methods in a package tends to be helpful. Use spdep::make.sym.nb() after storing the output of edit.nb(). Because edit.nb() is interactive, I can't provide a script. The next release of spdep will support displaying sf polygons; for now you'd need to use:

usa <- usa_sf("aeqd")
nb <- spdep::poly2nb(usa)
crds <- st_coordinates(st_centroid(usa, of_largest_polygon=TRUE))
nb1 <- spdep::edit.nb(nb, crds, as(usa, "Spatial"))
nb2 <- spdep::make.sym.nb(nb1)

or similar.
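When the extra links are known in advance (Alaska-Washington, Hawaii-California, etc.), a non-interactive alternative to edit.nb() is to modify the nb object directly and then enforce symmetry. A sketch, using sf's bundled North Carolina polygons as stand-in data and arbitrary indices 1 and 2 for the two regions to be linked (the `add_link` helper is hypothetical, not part of spdep):

```r
# Sketch: add a neighbour link by hand to an nb object, then symmetrise
# with make.sym.nb() and build weights, as in the interactive workflow.
library(spdep)
library(sf)

nc <- st_read(system.file("shape/nc.shp", package = "sf"), quiet = TRUE)
nb <- poly2nb(nc)   # contiguity ("queen") neighbours

add_link <- function(nb, i, j) {
  # append j to region i's neighbour set; a lone 0 means "no neighbours"
  nb[[i]] <- sort(unique(c(nb[[i]][nb[[i]] != 0L], as.integer(j))))
  nb
}

nb <- add_link(nb, 1, 2)   # one direction is enough...
nb <- make.sym.nb(nb)      # ...make.sym.nb() adds the reverse link
lw <- nb2listw(nb, style = "W", zero.policy = TRUE)
```

This reproduces what edit.nb() does by clicking, and scales to a scripted list of cross-water links for the 395 SAMHSA substate areas.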

Hope this clarifies,

Roger

> When one uses the native map - with unelided states - available from the URL
> below - then k-nearest neighbours also does not work properly in GeoDa.
>
> I also want to extend this to the SAMHSA substate shapefile found at this
> URL for the 395 substate areas used by the USA National Survey of Drug Use
> and Health within SAMHSA.
>
> https://www.samhsa.gov/data/report/2014-2016-nsduh-substate-region-shapefile
>
> I did see that someone was able to create relationships across a strait
> for Italy, although I am not quite sure how this was accomplished
> technically.
>
> I would be grateful for any help you could provide.
>
> Thank you so much,
>
> Stuart Reece.

--
Roger Bivand
Department of Economics, Norwegian School of Economics,
Helleveien 30, N-5045 Bergen, Norway.
voice: +47 55 95 93 55; e-mail: [hidden email]
https://orcid.org/0000-0003-2392-6140
https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en


> Hi,

>

> I was wanting to include a listw list of neighbours for USA states which

> include Alaska and Hawaii with Alaska a neighbour for Washington and Oregon

> states and Hawaii a neighbour for California and the Western continental US

> states.

>

> Naturally queen relationships fail.

>

> The Albers shapefiles at this URL are really useful for general mapping

> purposes but result in erroneous relationships with k-nearest neighbours

> when this is performed in GeoDa.

>

>

>

> These maps have Hawaii and Alaska elided to south of Texas.

>

>

>

> <https://github.com/hrbrmstr/albersusa>

> https://github.com/hrbrmstr/albersusa

>

>

They are troublesome to install, accessing just the file is impossible.

Did you look at the documentation of spdep::edit.nb()? Reading the list of

functions and methods in a package tends to be helpful. Use

spdep::make.sym.nb() after storing the output of edit.nb(). Because

edit.nb() is interactive, I can't provide a script. The next release of

spdep will support displaying sf polygons, for now you'd need to use:

usa <- usa_sf("aeqd")

nb <- spdep::poly2nb(usa)

crds <- st_coordinates(st_centroid(usa, of_largest_polygon=TRUE))

nb1 <- spdep::edit.nb(nb, crds, as(usa, "Spatial"))

nb2 <- spdep::make.sym.nb(nb1)

or similar.

Hope this clarifies,

Roger

>

> When one uses the native map - with unelided states - available from the URL

> below - then k-nearest neighbours also does not work properly in GeoDa.

>

>

>

> I also want to extend this to the SAMHSA substate shapefile found at this

> URL for the 395 substate areas used by the USA National Survey of Drug Use

> and Health within SAMHSA.

>

>

>

>

> https://www.samhsa.gov/data/report/2014-2016-nsduh-substate-region-shapefile

>

>

>

>

> I did see that someone was able to create neighbour relationships across a

> strait for Italy, although I am not quite sure how this was accomplished

> technically.

>

>

>

> I would be grateful for any help you could provide.

>

>

>

> Thank you so much,

>

>

>

> Stuart Reece.

>

>

>

>

>

> [[alternative HTML version deleted]]

>

>

>

> _______________________________________________

>

> R-sig-Geo mailing list

>

> [hidden email]

>

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>

>

> [[alternative HTML version deleted]]

>

> _______________________________________________

> R-sig-Geo mailing list

> [hidden email]

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>

--

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

https://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### FW: Including Alaska and Hawaii in listw Specifications of US States and Substate Regions

Hi,

I would like to build a listw neighbour list for the US states that includes Alaska and Hawaii, with Alaska a neighbour of Washington and Oregon, and Hawaii a neighbour of California and the western continental US states.

Naturally, queen contiguity relationships fail, since the island states share no boundaries.

The Albers shapefiles at this URL are really useful for general mapping

purposes but result in erroneous relationships with k-nearest neighbours

when this is performed in GeoDa.

These maps have Hawaii and Alaska elided to south of Texas.

https://github.com/hrbrmstr/albersusa

When one uses the native map - with unelided states - available from the URL

below - then k-nearest neighbours also does not work properly in GeoDa.

I also want to extend this to the SAMHSA substate shapefile found at this

URL for the 395 substate areas used by the USA National Survey of Drug Use

and Health within SAMHSA.

https://www.samhsa.gov/data/report/2014-2016-nsduh-substate-region-shapefile

I did see that someone was able to create neighbour relationships across a strait for Italy, although I am not quite sure how this was accomplished technically.

I would be grateful for any help you could provide.

Thank you so much,

Stuart Reece.

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### individual plots for farm fields in mapview

Dear List

I want to present a scatter plot (over time) of soil sample data for each block in a tree crop, using R. I have a CSV file with the value of the selected attribute, the block name, the x and y coordinates, and the dates for 9 locations, making 34 observations in total. I wish to present it as a leafpop::popupGraph on a mapview map. The user will click on a point (a block centroid) within the block and see the history/plot for that location only.

The leafpop package manual provides the svg example using the meuse data set. Using it as the basis, I have a map with the points, and the popup. I have not been able to supply only the plot for the specific block to each point instance within it. My plot is a faceted ggplot with one facet for each block. On clicking the map, each of my 9 locations shows the same full faceted plot (all 9 facets). I want just the plot for that location.

The meuse example shows rep iteration through one plot, changing the colour of the value in that plot to highlight the data point for the geographic location selected. While elegant, it is no guide to achieving my target.

Tim Salabim kindly pointed me a step forward by suggesting lapply, but I do not yet have the depth of skill to implement it successfully. Below is an extract of the code, to show the steps I have taken so far.

I am not succeeding in getting the plot images. It also appears that the process loops through all 34 observations to make plots for each.

I am hoping the members can guide me a few steps further forward.

Here is the code:

block.graph <- function(select_organic_matter_by_blockname_with_xy_range, na.rm = TRUE) {
  # create list of blocks in data to loop over
  block_list <- unique(som_xy$block_name)
  # create for loop to produce ggplot2 graphs
  for (i in seq_along(block_list)) {
    # create plot for each block in the dataframe
    soil_om <- ggplot(subset(som_xy, som_xy$block_name == block_list[i]),
                      aes(x = date, y = organic_matter)) +
      theme_light() +
      geom_point() +
      ylim(0, 10) +
      geom_hline(data = som_xy, linetype = "dotted", colour = "#ff710c",
                 alpha = 0.5, aes(yintercept = range_min)) +
      geom_hline(data = som_xy, linetype = "dotted", colour = "#ff710c",
                 alpha = 0.5, aes(yintercept = range_max))
    p <- soil_om +
      theme(
        axis.text.x = element_text(angle = 90, hjust = 1, vjust = 1),
        axis.title.x = element_blank(),
        axis.title.y = element_text(angle = 90, hjust = 0.5),
        legend.title = element_blank(),
        plot.title = element_text(hjust = 0.5)
      ) +
      ggtitle("Organic Matter Time Series") +
      ylab("%")
  }
}

individ_plot <- lapply(som_xy$block_name, block.graph)

mapview(collins_farm,
        data = select_organic_matter_by_blockname_with_xy_range,
        zcol = "block_name",
        popup = leafpop::popupImage(individ_plot))

Thanks in advance David
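A restructuring along the lines Tim suggested might look like this. It is an untested sketch only, assuming the som_xy data frame and the collins_farm object from the message, with collins_farm features ordered to match the block list: the function builds and returns the plot for one block, lapply runs over the 9 unique block names rather than the 34 rows, and the list of plots goes to leafpop::popupGraph (popupImage expects image files, not ggplot objects).

```r
library(ggplot2)
library(mapview)

# one plot per block: take a single block name and return a ggplot for it
block.graph <- function(block) {
  ggplot(subset(som_xy, block_name == block),
         aes(x = date, y = organic_matter)) +
    theme_light() +
    geom_point() +
    ylim(0, 10) +
    ggtitle(paste("Organic Matter Time Series:", block)) +
    ylab("%")
}

block_list <- unique(som_xy$block_name)          # 9 blocks, not 34 observations
individ_plot <- lapply(block_list, block.graph)  # list of 9 ggplot objects

# popupGraph takes a list of plot objects, one per feature, in feature order
mapview(collins_farm, zcol = "block_name",
        popup = leafpop::popupGraph(individ_plot))
```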

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


** David Hine**

Land and Water Management PL

Level 7, 127 Creek St

Brisbane, Qld 4000

Australia

m: 0429 886 146 +61 429 886 146

t: (07) 4015 3470 +61 7 4015 3470

m: (+86) 136 1627 2645

GeoPortal with example presentations of spatial data for horticulture users and others.

Land and Water Management PL

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo

### Including Alaska and Hawaii in listw Specifications of US States and Substate Regions

Hi,

I would like to build a listw neighbour list for the US states that includes Alaska and Hawaii, with Alaska a neighbour of Washington and Oregon, and Hawaii a neighbour of California and the western continental US states.

Naturally, queen contiguity relationships fail, since the island states share no boundaries.

The Albers shapefiles at this URL are really useful for general mapping

purposes but result in erroneous relationships with k-nearest neighbours

when this is performed in GeoDa.

These maps have Hawaii and Alaska elided to south of Texas.

https://github.com/hrbrmstr/albersusa

When one uses the native map - with unelided states - available from the URL

below - then k-nearest neighbours also does not work properly in GeoDa.

I also want to extend this to the SAMHSA substate shapefile found at this

URL for the 395 substate areas used by the USA National Survey of Drug Use

and Health within SAMHSA.

https://www.samhsa.gov/data/report/2014-2016-nsduh-substate-region-shapefile

I did see that someone was able to create neighbour relationships across a strait for Italy, although I am not quite sure how this was accomplished technically.

I would be grateful for any help you could provide.

Thank you so much,

Stuart Reece.

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: problems in anisotropy detection

Dear Emanuele,

Did you first check manually for the presence of anisotropy with gstat's

variogram function? Also bear in mind that automatic fitting procedures

are generally able to fit only geometric anisotropy, not zonal

anisotropy.

Sebastiano
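A manual first check with directional sample variograms in gstat might look like the following sketch; the formula, column names, and angles here are assumptions, not taken from the thread.

```r
library(sp)
library(gstat)

coordinates(mydata) <- ~ X + Y                 # promote to a SpatialPointsDataFrame
# empirical variograms in four directions (degrees clockwise from north)
v.dir <- variogram(HH ~ 1, mydata, alpha = c(0, 45, 90, 135))
plot(v.dir)
# markedly different ranges across directions suggest geometric anisotropy;
# different sills suggest zonal anisotropy, which automatic fitters rarely handle
```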

On Sat 13 Jul 2019, 07:32 Emanuele Barca <[hidden email]>

wrote:

> Dear friends,

>

> I have a dataset of hydraulic heads, mydata = <X, Y, HH, Z> and I would

> like to check for anisotropy.

>

> In R I found 3 functions to carry out such task:

>

> 1. likfit

>

> x_geodata <- as.geodata(mydata, coord.cols = 1:2, data.col = 3,

> covar.col = 4)

> fit_mle <- likfit(x_geodata,

> fix.nugget = FALSE,

> cov.model = "exponential", psiA = 0, psiR = 1,

> ini = c(var(Y), 1), fix.psiA = FALSE, fix.psiR =

> FALSE,

> hessian = TRUE)

>

> that detects no anisotropy.

>

> 2. estimateAnisotropy

>

> mydata.sp <- mydata

> coordinates(mydata.sp) = ~ X + Y

> estimateAnisotropy(mydata.sp, depVar = "LivStat", "LivStat ~ Z")

>

> that returns the following

>

> [generalized least squares trend estimation]

> $`ratio`

> [1] 1.340775

>

> $direction

> [1] -35.29765

>

> $Q

> Q11 Q22 Q12

> [1,] 1.926136e-05 2.329241e-05 5.721893e-06

>

> $doRotation

> [1] TRUE

>

> finally,

>

> 3. estiStAni

>

> vmod1 <- fit.variogram(vgm1, vgm(18, "Ste", 1300, 0.78, kappa = 1.7))

> estiStAni(vgm1, c(10, 150), "vgm", vmod1)

>

> that returns an error:

>

> Error in `$<-.data.frame`(`*tmp*`, "dir.hor", value = 0) :

> replacement has 1 row, data has 0.

>

> I am very puzzled, can anyone help me understanding if there is

> anisotropy in my dataset?

>

> thanks

>

> emanuele

>

> _______________________________________________

> R-sig-Geo mailing list

> [hidden email]

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo
