This is an **archive** for **R-sig-geo**, not a forum. It is not possible to post through Nabble: you may not start a new thread nor follow up an existing thread. If you wish to post, but are not subscribed to the list through the list homepage, subscribe first (through the list homepage, not via Nabble) and post from your subscribed email address. Until 2015-06-20, subscribers could post through Nabble, but the policy was changed because too many non-subscribers misunderstood the interface.

### Re: editing a correlogram

On Sat, 16 Mar 2019, Ben Tupper wrote:

> Hi,

>

> In this case the spdep::plot.spcor() function, which I think you are

> using, doesn't provide a mechanism for the caller to override the

> default pch value. You can see this by looking at the plot.spcor function

> (shown below). I think it may be easiest for you to simply

> rewrite the function with the plotting parameters assigned as arguments.

Thanks, indeed the ... should be passed through to plotting functions,

and I'll look at doing this. My preference would be to extract the

components of the returned object needed for customised plotting, not

trying to finesse the plot method, which was always meant for guidance,

like other diagnostic plots.

As Rolf said, an example of the code you are using (on a built-in data

set) to show what you want would make things easier, say one based on the

help page example. Note that the print method returns its calculations

invisibly too.

```r
## needs sf, spdep and spData loaded; note that the ggplot "+" must end a
## line, not begin one, so the last two lines are joined here
library(sf)
library(spdep)
nc.sids <- st_read(system.file("shapes/sids.shp", package="spData")[1],
                   quiet=TRUE)
rn <- as.character(nc.sids$FIPS)
ncCC89_nb <- read.gal(system.file("weights/ncCC89.gal", package="spData")[1],
                      region.id=rn)
ft.SID74 <- sqrt(1000)*(sqrt(nc.sids$SID74/nc.sids$BIR74) +
                        sqrt((nc.sids$SID74+1)/nc.sids$BIR74))
tr.SIDS74 <- ft.SID74*sqrt(nc.sids$BIR74)
Ispc <- sp.correlogram(ncCC89_nb, tr.SIDS74, order=8, method="I",
                       zero.policy=TRUE)
str(Ispc)
str(print(Ispc, "bonferroni"))
Ispc_b <- as.data.frame(print(Ispc, "bonferroni"))
Ispc_b$lag <- 1:8
library(ggplot2)
ggplot(Ispc_b) + geom_point(aes(x=lag, y=estimate)) +
  geom_hline(yintercept=0)
```

Or whatever you feel like doing.

Roger

>

> Cheers,

> Ben

>

>

> ```r
> > spdep::plot.spcor
> function (x, main, ylab, ylim, ...)
> {
>     if (missing(main))
>         main <- x$var
>     if ((x$method == "I") || (x$method == "C")) {
>         lags <- as.integer(rownames(x$res))
>         to.plot <- which((x$res[, 3] > 0) & (x$res[, 3] != Inf))
>         sd2 <- rep(0, nrow(x$res))
>         sd2[to.plot] <- 2 * sqrt(x$res[to.plot, 3])
>         if (missing(ylim)) {
>             ylim <- range(c(x$res[, 1] + sd2, x$res[, 1] - sd2))
>         }
>         if (missing(ylab))
>             if (x$method == "I")
>                 ylab <- "Moran's I"
>         if (x$method == "C")
>             ylab <- "Geary's C"
>         plot(lags, x$res[, 1], type = "p", pch = 18, ylim = ylim,
>             main = main, ylab = ylab, xaxt = "n")
>         arrows(lags, x$res[, 1] + sd2, lags, x$res[, 1] - sd2,
>             length = 0.1, angle = 90)
>         arrows(lags, x$res[, 1] - sd2, lags, x$res[, 1] + sd2,
>             length = 0.1, angle = 90)
>         axis(1, at = lags)
>         abline(h = x$res[1, 2])
>     }
>     else {
>         res <- as.vector(x$res)
>         lags <- as.integer(names(x$res))
>         if (missing(ylim))
>             ylim <- c(-1, 1)
>         if (missing(ylab))
>             ylab <- "Spatial autocorrelation"
>         plot(lags, res, type = "h", ylim = ylim, main = main,
>             ylab = ylab, lwd = 4, xaxt = "n")
>         axis(1, at = lags)
>         abline(h = 0)
>     }
> }
> <bytecode: 0x7fb8799d0e40>
> <environment: namespace:spdep>
> ```

>

>

>

>> On Mar 16, 2019, at 1:16 PM, Leonardo Matheus Servino <[hidden email]> wrote:

>>

>> I tried the par() function and arguments inside plot(), but some

>> parameters don't change.

>> For example, the pch= argument, which changes the symbols that represent

>> the points in the plot, doesn't work.

>>

>>

>> On Sat, 16 Mar 2019 at 13:48, Sarah Goslee <[hidden email]>

>> wrote:

>>

>>> Of course.

>>>

>>> The ... argument to the plot method means that you can use standard

>>> base graphics options to customize as you wish.

>>>

>>> ?par gives the whole list, although they may not all be useful for

>>> correlograms.

>>>

>>> If you have specific questions after you try customizing to your

>>> liking, the list can certainly help with details.

>>>

>>> Sarah

>>>

>>> On Sat, Mar 16, 2019 at 11:00 AM Leonardo Matheus Servino

>>> <[hidden email]> wrote:

>>>>

>>>> Hello,

>>>>

>>>> I've been trying to use the function "sp.correlogram". After plotting the

>>>> correlogram, I would like to edit the graphic's appearance in order to

>>>> publish it. Is that possible?

>>>>

>>>> Thanks

>>>>

>>>> --

>>>> Leonardo Matheus Servino

>>>> Pós-Graduação em Evolução e Diversidade

>>>> Universidade Federal do ABC - UFABC - Centro de Ciências Naturais e

>>> Humanas

>>>>

>>>

>>> --

>>> Sarah Goslee (she/her)

>>> http://www.numberwright.com

>>>

>>

>>

>> --

>> Leonardo Matheus Servino

>> Pós-Graduação em Evolução e Diversidade

>> Universidade Federal do ABC - UFABC - Centro de Ciências Naturais e Humanas

>>

>> LED I - Laboratório de Evolução e Diversidade I - Bloco Delta, sala 202

>>

>> Rua Arcturus, 3. Jardim Antares

>> 09606-070 São Bernardo do Campo - SP

>>

>> [[alternative HTML version deleted]]

>>

>> _______________________________________________

>> R-sig-Geo mailing list

>> [hidden email] <mailto:[hidden email]>

>> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

> Ben Tupper

> Bigelow Laboratory for Ocean Sciences

> 60 Bigelow Drive, P.O. Box 380

> East Boothbay, Maine 04544

> http://www.bigelow.org

>

> Ecological Forecasting: https://eco.bigelow.org/

>

>

>

>

>

>

> [[alternative HTML version deleted]]

>

> _______________________________________________

> R-sig-Geo mailing list

> [hidden email]

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

> --

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

https://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Roger Bivand

Department of Economics

Norwegian School of Economics

Helleveien 30

N-5045 Bergen, Norway


### Re: editing a correlogram

Hi,

In this case the spdep::plot.spcor() function, which I think you are using, doesn't provide a mechanism for the caller to override the default pch value. You can see this by looking at the plot.spcor function (shown below). I think it may be easiest for you to simply rewrite the function with the plotting parameters assigned as arguments.

Cheers,

Ben

```r
> spdep::plot.spcor
function (x, main, ylab, ylim, ...)
{
    if (missing(main))
        main <- x$var
    if ((x$method == "I") || (x$method == "C")) {
        lags <- as.integer(rownames(x$res))
        to.plot <- which((x$res[, 3] > 0) & (x$res[, 3] != Inf))
        sd2 <- rep(0, nrow(x$res))
        sd2[to.plot] <- 2 * sqrt(x$res[to.plot, 3])
        if (missing(ylim)) {
            ylim <- range(c(x$res[, 1] + sd2, x$res[, 1] - sd2))
        }
        if (missing(ylab))
            if (x$method == "I")
                ylab <- "Moran's I"
        if (x$method == "C")
            ylab <- "Geary's C"
        plot(lags, x$res[, 1], type = "p", pch = 18, ylim = ylim,
            main = main, ylab = ylab, xaxt = "n")
        arrows(lags, x$res[, 1] + sd2, lags, x$res[, 1] - sd2,
            length = 0.1, angle = 90)
        arrows(lags, x$res[, 1] - sd2, lags, x$res[, 1] + sd2,
            length = 0.1, angle = 90)
        axis(1, at = lags)
        abline(h = x$res[1, 2])
    }
    else {
        res <- as.vector(x$res)
        lags <- as.integer(names(x$res))
        if (missing(ylim))
            ylim <- c(-1, 1)
        if (missing(ylab))
            ylab <- "Spatial autocorrelation"
        plot(lags, res, type = "h", ylim = ylim, main = main,
            ylab = ylab, lwd = 4, xaxt = "n")
        axis(1, at = lags)
        abline(h = 0)
    }
}
<bytecode: 0x7fb8799d0e40>
<environment: namespace:spdep>
```
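A rewrite along the lines Ben suggests might look like the following sketch. It is hypothetical (not part of spdep): it re-plots the `$res` component of an "spcor" object computed with method "I" or "C", forwarding `pch` and any other base-graphics options through `...`.

```r
## Hypothetical sketch, not part of spdep: re-plot the $res matrix of an
## "spcor" object (method "I" or "C") with caller-controlled pch and ...
plot_spcor_custom <- function(x, pch = 16, ...) {
  lags <- as.integer(rownames(x$res))
  est  <- x$res[, 1]                      # observed statistic per lag
  sd2  <- 2 * sqrt(pmax(x$res[, 3], 0))   # +/- 2 standard deviations
  plot(lags, est, pch = pch, ylim = range(c(est + sd2, est - sd2)),
       xaxt = "n", ...)
  arrows(lags, est - sd2, lags, est + sd2,
         length = 0.1, angle = 90, code = 3)  # error bars, both heads
  axis(1, at = lags)
  abline(h = x$res[1, 2])                 # expectation under the null
}
```

It could then be called as, say, `plot_spcor_custom(Ispc, pch = 17, main = "My correlogram")`.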

> On Mar 16, 2019, at 1:16 PM, Leonardo Matheus Servino <[hidden email]> wrote:

>

> I tried the par() function and arguments inside plot(), but some

> parameters don't change.

> For example, the pch= argument, which changes the symbols that represent

> the points in the plot, doesn't work.

>

>

> On Sat, 16 Mar 2019 at 13:48, Sarah Goslee <[hidden email]>

> wrote:

>

>> Of course.

>>

>> The ... argument to the plot method means that you can use standard

>> base graphics options to customize as you wish.

>>

>> ?par gives the whole list, although they may not all be useful for

>> correlograms.

>>

>> If you have specific questions after you try customizing to your

>> liking, the list can certainly help with details.

>>

>> Sarah

>>

>> On Sat, Mar 16, 2019 at 11:00 AM Leonardo Matheus Servino

>> <[hidden email]> wrote:

>>>

>>> Hello,

>>>

>>> I've been trying to use the function "sp.correlogram". After plotting the

>>> correlogram, I would like to edit the graphic's appearance in order to

>>> publish it. Is that possible?

>>>

>>> Thanks

>>>

>>> --

>>> Leonardo Matheus Servino

>>> Pós-Graduação em Evolução e Diversidade

>>> Universidade Federal do ABC - UFABC - Centro de Ciências Naturais e

>> Humanas

>>>

>>

>> --

>> Sarah Goslee (she/her)

>> http://www.numberwright.com

>>

>

>

> --

> Leonardo Matheus Servino

> Pós-Graduação em Evolução e Diversidade

> Universidade Federal do ABC - UFABC - Centro de Ciências Naturais e Humanas

>

> LED I - Laboratório de Evolução e Diversidade I - Bloco Delta, sala 202

>

> Rua Arcturus, 3. Jardim Antares

> 09606-070 São Bernardo do Campo - SP

>

> [[alternative HTML version deleted]]

>

> _______________________________________________

> R-sig-Geo mailing list

> [hidden email] <mailto:[hidden email]>

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Ben Tupper

Bigelow Laboratory for Ocean Sciences

60 Bigelow Drive, P.O. Box 380

East Boothbay, Maine 04544

http://www.bigelow.org

Ecological Forecasting: https://eco.bigelow.org/

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: [FORGED] Re: editing a correlogram

On 17/03/19 6:16 AM, Leonardo Matheus Servino wrote:

> I tried the par() function and arguments inside plot(), but some

> parameters don't change.

> For example, the pch= argument, which changes the symbols that represent

> the points in the plot, doesn't work.

If you want useful advice, show the commands that you actually used, in

the context of a *minimal reproducible example* (including the *data*

involved, provided in such a way --- use dput()!!! --- that it can be

accessed by your advisors).
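As an illustration of Rolf's point (the data frame below is made up for the example), dput() prints a text representation of an object that others can paste straight back into their own R session:

```r
## Made-up example data: dput() emits a structure() call that recreates
## the object exactly when evaluated in another session.
x <- data.frame(lag = 1:3, estimate = c(0.42, 0.17, 0.05))
dput(x)
## prints a structure(list(lag = 1:3, estimate = ...)) call which, when
## pasted and evaluated, reproduces x
```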

cheers,

Rolf Turner

--

Honorary Research Fellow

Department of Statistics

University of Auckland

Phone: +64-9-373-7599 ext. 88276

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo

### Re: editing a correlogram

I tried the par() function and arguments inside plot(), but some

parameters don't change.

For example, the pch= argument, which changes the symbols that represent

the points in the plot, doesn't work.

On Sat, 16 Mar 2019 at 13:48, Sarah Goslee <[hidden email]>

wrote:

> Of course.

>

> The ... argument to the plot method means that you can use standard

> base graphics options to customize as you wish.

>

> ?par gives the whole list, although they may not all be useful for

> correlograms.

>

> If you have specific questions after you try customizing to your

> liking, the list can certainly help with details.

>

> Sarah

>

> On Sat, Mar 16, 2019 at 11:00 AM Leonardo Matheus Servino

> <[hidden email]> wrote:

> >

> > Hello,

> >

> > I've been trying to use the function "sp.correlogram". After plotting the

> > correlogram, I would like to edit the graphic's appearance in order to

> > publish it. Is that possible?

> >

> > Thanks

> >

> > --

> > Leonardo Matheus Servino

> > Pós-Graduação em Evolução e Diversidade

> > Universidade Federal do ABC - UFABC - Centro de Ciências Naturais e

> Humanas

> >

>

> --

> Sarah Goslee (she/her)

> http://www.numberwright.com

>

--

Leonardo Matheus Servino

Pós-Graduação em Evolução e Diversidade

Universidade Federal do ABC - UFABC - Centro de Ciências Naturais e Humanas

LED I - Laboratório de Evolução e Diversidade I - Bloco Delta, sala 202

Rua Arcturus, 3. Jardim Antares

09606-070 São Bernardo do Campo - SP

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: editing a correlogram

Of course.

The ... argument to the plot method means that you can use standard

base graphics options to customize as you wish.

?par gives the whole list, although they may not all be useful for correlograms.

If you have specific questions after you try customizing to your

liking, the list can certainly help with details.
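In general, base-graphics arguments do pass through `...` in the way Sarah describes, as in this generic sketch (made-up wrapper, not specific to spdep; elsewhere in the thread it emerges that plot.spcor itself hard-codes pch, so that particular option does not get through there):

```r
## Generic illustration of forwarding base-graphics options through ...
my_plot <- function(x, y, ...) {
  plot(x, y, ...)   # forwards pch, col, cex, main, etc. to plot()
}
my_plot(1:5, (1:5)^2, pch = 17, col = "steelblue", main = "Custom symbols")
```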

Sarah

On Sat, Mar 16, 2019 at 11:00 AM Leonardo Matheus Servino

<[hidden email]> wrote:

>

> Hello,

>

> I've been trying to use the function "sp.correlogram". After plotting the

> correlogram, I would like to edit the graphic's appearance in order to

> publish it. Is that possible?

>

> Thanks

>

> --

> Leonardo Matheus Servino

> Pós-Graduação em Evolução e Diversidade

> Universidade Federal do ABC - UFABC - Centro de Ciências Naturais e Humanas

>

--

Sarah Goslee (she/her)

http://www.numberwright.com

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### editing a correlogram

Hello,

I've been trying to use the function "sp.correlogram". After plotting the

correlogram, I would like to edit the graphic's appearance in order to

publish it. Is that possible?

Thanks

--

Leonardo Matheus Servino

Pós-Graduação em Evolução e Diversidade

Universidade Federal do ABC - UFABC - Centro de Ciências Naturais e Humanas

LED I - Laboratório de Evolução e Diversidade I - Bloco Delta, sala 202

Rua Arcturus, 3. Jardim Antares

09606-070 São Bernardo do Campo - SP

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: [FORGED] Re: Aggregating points based on distance

+1 from me too for Rolf's nomination!

On Thu, Mar 14, 2019 at 6:43 PM Dexter Locke <[hidden email]> wrote:

> FWIW: I agree with Rolf's nomination.

>

> +1

>

> -Dexter

> http://dexterlocke.com

>

>

>

> On Thu, Mar 14, 2019 at 5:30 PM Rolf Turner <[hidden email]>

> wrote:

>

> >

> > On 14/03/19 7:33 AM, Barry Rowlingson wrote:

> >

> > <SNIP>

> >

> > > The problem with "modern" syntax is that it's subject to rapid change

> > > and often slower than using base R, which has had years to stabilise

> > > and optimise.

> >

> > <SNIP>

> >

> > Fortune nomination.

> >

> > cheers,

> >

> > Rolf

> >

> > --

> > Honorary Research Fellow

> > Department of Statistics

> > University of Auckland

> > Phone: +64-9-373-7599 ext. 88276

> >

> > _______________________________________________

> > R-sig-Geo mailing list

> > [hidden email]

> > https://stat.ethz.ch/mailman/listinfo/r-sig-geo

> >

>

> [[alternative HTML version deleted]]

>

> _______________________________________________

> R-sig-Geo mailing list

> [hidden email]

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: [FORGED] Re: Aggregating points based on distance

FWIW: I agree with Rolf's nomination.

+1

-Dexter

http://dexterlocke.com

On Thu, Mar 14, 2019 at 5:30 PM Rolf Turner <[hidden email]> wrote:

>

> On 14/03/19 7:33 AM, Barry Rowlingson wrote:

>

> <SNIP>

>

> > The problem with "modern" syntax is that it's subject to rapid change

> > and often slower than using base R, which has had years to stabilise

> > and optimise.

>

> <SNIP>

>

> Fortune nomination.

>

> cheers,

>

> Rolf

>

> --

> Honorary Research Fellow

> Department of Statistics

> University of Auckland

> Phone: +64-9-373-7599 ext. 88276

>

> _______________________________________________

> R-sig-Geo mailing list

> [hidden email]

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: [FORGED] Re: Aggregating points based on distance

On 14/03/19 7:33 AM, Barry Rowlingson wrote:

<SNIP>

> The problem with "modern" syntax is that it's subject to rapid change

> and often slower than using base R, which has had years to stabilise

> and optimise.

<SNIP>

Fortune nomination.

cheers,

Rolf

--

Honorary Research Fellow

Department of Statistics

University of Auckland

Phone: +64-9-373-7599 ext. 88276

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo

### Re: Error in running Maxent within biomod2

Hi,

You might want to verify that the required maxent.jar file location is properly identified. See the MAXENT.Phillips section here:

https://www.rdocumentation.org/packages/biomod2/versions/3.3-7.1/topics/BIOMOD_ModelingOptions

Cheers,

Ben

> On Mar 14, 2019, at 5:01 PM, David L Carlson <[hidden email]> wrote:

>

> If you don't get an answer here, contact the biomod2 package maintainer. The email address is shown on the following webpage:

>

> https://cran.r-project.org/web/packages/biomod2/index.html

>

> or you can use the R command:

>

> maintainer("biomod2")

>

> David L. Carlson

> Department of Anthropology

> Texas A&M University

>

> -----Original Message-----

> From: R-sig-Geo [mailto:[hidden email]] On Behalf Of Lara Silva

> Sent: Wednesday, March 13, 2019 6:17 PM

> To: [hidden email]; [hidden email]

> Subject: [R-sig-Geo] Error in running Maxent within biomod2

>

> I got an error when running Maxent within biomod2

>

> Running Maxent...Error in system(command = paste("java ",

> ifelse(is.null([hidden email]$memory_allocated), :

>

> What does it mean?

>

> Thanks

>

> [[alternative HTML version deleted]]

>

> _______________________________________________

> R-sig-Geo mailing list

> [hidden email]

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>

> _______________________________________________

> R-sig-Geo mailing list

> [hidden email]

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Ben Tupper

Bigelow Laboratory for Ocean Sciences

60 Bigelow Drive, P.O. Box 380

East Boothbay, Maine 04544

http://www.bigelow.org

Ecological Forecasting: https://eco.bigelow.org/

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo
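Setting the jar location explicitly can be sketched as follows. This is a hedged example: `path_to_maxent.jar` and `memory_allocated` are the option names documented for biomod2 3.3-x (check `?BIOMOD_ModelingOptions` for your version), and the path shown is a placeholder, not a real location.

```r
# Sketch: point biomod2 at the Maxent jar before modelling.
# "/opt/maxent" is a placeholder -- substitute the directory that
# actually contains maxent.jar on your system.
library(biomod2)

my_options <- BIOMOD_ModelingOptions(
  MAXENT.Phillips = list(
    path_to_maxent.jar = "/opt/maxent",  # folder holding maxent.jar
    memory_allocated = 512               # MB handed to the JVM
  )
)

# Also confirm Java itself is callable, since the reported error
# comes from a system("java ...") call inside biomod2:
system("java -version")
```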


### Re: Error in running Maxent within biomod2

If you don't get an answer here, contact the biomod2 package maintainer. The email address is shown on the following webpage:

https://cran.r-project.org/web/packages/biomod2/index.html

or you can use the R command:

maintainer("biomod2")

David L. Carlson

Department of Anthropology

Texas A&M University

-----Original Message-----

From: R-sig-Geo [mailto:[hidden email]] On Behalf Of Lara Silva

Sent: Wednesday, March 13, 2019 6:17 PM

To: [hidden email]; [hidden email]

Subject: [R-sig-Geo] Error in running Maxent within biomod2

I got an error when running Maxent within biomod2

Running Maxent...Error in system(command = paste("java ",

ifelse(is.null([hidden email]$memory_allocated), :

What does it mean?

Thanks

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: Aggregating points based on distance

Ha! That is a great take. Thanks Barry.

On 3/13/19, 11:34 AM, "R-sig-Geo on behalf of Barry Rowlingson" <[hidden email] on behalf of [hidden email]> wrote:

On Wed, Mar 13, 2019 at 6:14 PM Andy Bunn <[hidden email]> wrote:

> I would like to create averages of all the variables in a

> SpatialPointsDataFrame when points are within a specified distance of each

> other. I have a method for doing this but it seems like a silly way to

> approach the problem. Any ideas for doing this using modern syntax

> (especially of the tidy variety) would be appreciated.

>

>

> To start, I have a SpatialPointsDataFrame with several variables measured

> for each point. I'd like to get an average value for each variable for

> points within a specified distance. E.g., getting average cadmium values

> from the meuse data for points within 100 m of each other:

>

> library(sf)

> library(sp)

> data(meuse)

> pts <- st_as_sf(meuse, coords = c("x", "y"), remove=FALSE)

> pts100 <- st_is_within_distance(pts, dist = 100)

> # can use sapply to get mean of a variable. E.g., cadmium

> sapply(pts100, function(x){ mean(pts$cadmium[x]) })

>

>

If this is the method you call "silly" then I don't see anything silly at all here, only efficient, well-written use of base R constructs. The problem with "modern" syntax is that it's subject to rapid change and often slower than using base R, which has had years to stabilise and optimise.

If you want to iterate this over variables then nest your sapplys:

items <- c("cadmium", "copper", "lead")
sapply(items, function(item){
  sapply(pts100, function(x){ mean(pts[[item]][x]) })
})

gets you:

       cadmium   copper      lead
[1,] 10.150000 83.00000 288.00000
[2,] 10.150000 83.00000 288.00000
[3,]  6.500000 68.00000 199.00000
[4,]  2.600000 81.00000 116.00000

Barry

> Above, I've figured out how to use sapply to do this variable by variable.

> So I could, if I wanted, calculate the mean for each variable, generate a

> centroid for each point and then a SpatialPointsDataFrame of the unique

> values. E.g., for the first few variables:

>

> res <- data.frame(id=1:length(pts100),

> x=NA, y=NA,

> cadmium=NA, copper=NA, lead=NA)

> res$x <- sapply(pts100, function(p){ mean(pts$x[p]) })

> res$y <- sapply(pts100, function(p){ mean(pts$y[p]) })

> res$cadmium <- sapply(pts100, function(p){ mean(pts$cadmium[p]) })

> res$copper <- sapply(pts100, function(p){ mean(pts$copper[p]) })

> res$lead <- sapply(pts100, function(p){ mean(pts$lead[p]) })

> res2 <- res[duplicated(res$cadmium),]

> coordinates(res2) <- c("x","y")

> bubble(res2,"cadmium")

>

>

> This works but seems cumbersome and like there must be a more efficient

> way.

>

>

> Thanks for any help, Andy

>

>

>

> _______________________________________________

> R-sig-Geo mailing list

> [hidden email]

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Error in running Maxent within biomod2

I got an error when running Maxent within biomod2

Running Maxent...Error in system(command = paste("java ",

ifelse(is.null([hidden email]$memory_allocated), :

What does it mean?

Thanks

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: Aggregating points based on distance

On Wed, Mar 13, 2019 at 6:14 PM Andy Bunn <[hidden email]> wrote:

> I would like to create averages of all the variables in a

> SpatialPointsDataFrame when points are within a specified distance of each

> other. I have a method for doing this but it seems like a silly way to

> approach the problem. Any ideas for doing this using modern syntax

> (especially of the tidy variety) would be appreciated.

>

>

> To start, I have a SpatialPointsDataFrame with several variables measured

> for each point. I'd like to get an average value for each variable for

> points within a specified distance. E.g., getting average cadmium values

> from the meuse data for points within 100 m of each other:

>

> library(sf)

> library(sp)

> data(meuse)

> pts <- st_as_sf(meuse, coords = c("x", "y"), remove=FALSE)

> pts100 <- st_is_within_distance(pts, dist = 100)

> # can use sapply to get mean of a variable. E.g., cadmium

> sapply(pts100, function(x){ mean(pts$cadmium[x]) })

>

If this is the method you call "silly" then I don't see anything silly at all here, only efficient, well-written use of base R constructs. The problem with "modern" syntax is that it's subject to rapid change and often slower than using base R, which has had years to stabilise and optimise.

If you want to iterate this over variables then nest your sapplys:

items <- c("cadmium", "copper", "lead")
sapply(items, function(item){
  sapply(pts100, function(x){ mean(pts[[item]][x]) })
})

gets you:

       cadmium   copper      lead
[1,] 10.150000 83.00000 288.00000
[2,] 10.150000 83.00000 288.00000
[3,]  6.500000 68.00000 199.00000
[4,]  2.600000 81.00000 116.00000

Barry

> Above, I've figured out how to use sapply to do this variable by variable.

> So I could, if I wanted, calculate the mean for each variable, generate a

> centroid for each point and then a SpatialPointsDataFrame of the unique

> values. E.g., for the first few variables:

>

> res <- data.frame(id=1:length(pts100),

> x=NA, y=NA,

> cadmium=NA, copper=NA, lead=NA)

> res$x <- sapply(pts100, function(p){ mean(pts$x[p]) })

> res$y <- sapply(pts100, function(p){ mean(pts$y[p]) })

> res$cadmium <- sapply(pts100, function(p){ mean(pts$cadmium[p]) })

> res$copper <- sapply(pts100, function(p){ mean(pts$copper[p]) })

> res$lead <- sapply(pts100, function(p){ mean(pts$lead[p]) })

> res2 <- res[duplicated(res$cadmium),]

> coordinates(res2) <- c("x","y")

> bubble(res2,"cadmium")

>

>

> This works but seems cumbersome and like there must be a more efficient

> way.

>

>

> Thanks for any help, Andy

>

>

>

> _______________________________________________

> R-sig-Geo mailing list

> [hidden email]

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo
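Barry's nested sapply() can be generalised to every numeric column in one pass with colMeans(). This is a sketch under the same setup the thread uses (meuse from sp, pts/pts100 built as in the original post); na.rm is added because meuse has missing values in the om column.

```r
library(sf)
library(sp)

data(meuse)
pts <- st_as_sf(meuse, coords = c("x", "y"), remove = FALSE)
pts100 <- st_is_within_distance(pts, dist = 100)

# Average every numeric column over each point's <= 100 m neighbourhood;
# each row of res corresponds to one point's neighbourhood mean.
num_cols <- names(meuse)[sapply(meuse, is.numeric)]
res <- t(sapply(pts100, function(idx) {
  colMeans(meuse[idx, num_cols], na.rm = TRUE)
}))
head(res)
```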


### Aggregating points based on distance

I would like to create averages of all the variables in a SpatialPointsDataFrame when points are within a specified distance of each other. I have a method for doing this but it seems like a silly way to approach the problem. Any ideas for doing this using modern syntax (especially of the tidy variety) would be appreciated.

To start, I have a SpatialPointsDataFrame with several variables measured for each point. I'd like to get an average value for each variable for points within a specified distance. E.g., getting average cadmium values from the meuse data for points within 100 m of each other:

library(sf)

library(sp)

data(meuse)

pts <- st_as_sf(meuse, coords = c("x", "y"), remove=FALSE)

pts100 <- st_is_within_distance(pts, dist = 100)

# can use sapply to get mean of a variable. E.g., cadmium

sapply(pts100, function(x){ mean(pts$cadmium[x]) })

Above, I've figured out how to use sapply to do this variable by variable. So I could, if I wanted, calculate the mean for each variable, generate a centroid for each point and then a SpatialPointsDataFrame of the unique values. E.g., for the first few variables:

res <- data.frame(id=1:length(pts100),
                  x=NA, y=NA,
                  cadmium=NA, copper=NA, lead=NA)

res$x <- sapply(pts100, function(p){ mean(pts$x[p]) })

res$y <- sapply(pts100, function(p){ mean(pts$y[p]) })

res$cadmium <- sapply(pts100, function(p){ mean(pts$cadmium[p]) })

res$copper <- sapply(pts100, function(p){ mean(pts$copper[p]) })

res$lead <- sapply(pts100, function(p){ mean(pts$lead[p]) })

res2 <- res[duplicated(res$cadmium),]

coordinates(res2) <- c("x","y")

bubble(res2,"cadmium")

This works but seems cumbersome and like there must be a more efficient way.

Thanks for any help, Andy

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Zonal Anisotropy versus Geometric Anisotropy

How much smaller than the data variance must the vertical variogram sill be before zonal anisotropy should be considered?

I have data with a horizontal variogram sill (along all azimuths) of 0.0042 but a vertical variogram sill of 0.004. The variance of the data is around 0.0044. I have taken the horizontal sill to approximate the variance. I wonder whether I should do the same for the vertical sill and therefore declare fully geometric anisotropy, or whether I should take the vertical sill as genuinely less than the horizontal sill and therefore declare zonal anisotropy.

Thank you in advance for your response.

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### rgdal 1.4-2 on CRAN

The latest release of rgdal is the first step towards support for PROJ 6

(PROJ is the generic name for proj4). From PROJ 5, some internals were

changed but the API and metadata files remained unchanged. PROJ 6,

released 1 March, requires software to declare that it uses the old API,

which rgdal still does, but changes the metadata files to better

accommodate the changes in the internals. These are stepping away from the

+datum= tag, and there are now some coordinate reference systems (CRS)

that cannot be represented using proj4 strings. PROJ 7 is due in March

2020, and will remove the old API.

The sp and rgdal packages will continue to use the proj4 string

representation of CRS; sf also permits the use of EPSG codes, so

(potentially) permitting non-proj4 string CRS to be handled. rgdal and sf

will migrate to the new PROJ API during 2019 and early 2020, but will

attempt to maintain backward compatibility for more recent releases of

PROJ. Why the changes? Well, WGS84 was from 1984, and the world has

changed quite a lot since then, so pivoting CRS to CRS transformation

through geographical coordinates using the WGS84 datum is becoming more

error-prone. The new PROJ internals pipeline transformations without

pivoting through WGS84, so enhancing precision and making calculations

(more) future-proof.

I've notified the maintainers of several affected packages so that they

have time to adapt before PROJ 6 propagates to upstream packagers (GRASS

already uses the new API and GRASS master is PROJ 6 ready, GDAL master is

PROJ 6 ready). We'll hold off putting PROJ 6 into Windows and OSX binaries

for some months, but please be ready for us to do that by the end of 2019 at the latest, so as to be ready for PROJ 7.

Roger

--

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

https://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo
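To check which PROJ release a given rgdal build is linked against, rgdal 1.4-x exposes version-reporting helpers (a sketch; exact output strings depend on your installation):

```r
library(rgdal)

# Reports the linked PROJ release; a version below 6 means the old
# metadata files and +datum= handling are still in use.
getPROJ4VersionInfo()

# GDAL and PROJ versions side by side.
rgdal_extSoftVersion()
```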


### Re: inconsistent results from moran.test()

It solved my problem. Thank you!

On Fri, Mar 8, 2019 at 2:25 PM Roger Bivand <[hidden email]> wrote:

> On Fri, 8 Mar 2019, Jay Wang wrote:

>

> > Hello,

> >

> > Having my own spatial weight matrix, I converted it to a listw format and

> > calculated the Moran index. The codes are listed below:

> >

> > Wod_lw<-mat2listw(Wod)

> > moran.test(mydata$Ties_processed,Wod_lw, randomisation=TRUE)

>

> Here, Wod_lw has an unknown style, "M" for matrix.

>

> >

> > However, after I generated a gal file, imported it, converted it back to

> a

> > listw file, and re-calculated the Moran index with the following codes,

> the

> > results became totally different (0.1624 vs. 0.1942):

> > write.nb.gal(Wod_lw$neighbours,"Wod.gal")

>

> This only writes the neighbours (read the help page), not the weights.

>

> > Wod.gal.nb <- read.gal(file="Wod.gal")

> > Wod_lw2<-nb2listw(Wod.gal.nb)

>

> The default style in nb2listw() is "W", row standardised.

>

> What does:

>

> all.equal(Wod_lw$weights, Wod_lw2$weights, check.attributes=FALSE)

>

> say? And:

>

> all.equal(Wod_lw$neighbours, Wod_lw2$neighbours, check.attributes=FALSE)

>

> > moran.test(mydata$Ties_processed,Wod_lw, randomisation=TRUE)

> > moran.test(mydata$Ties_processed,Wod_lw2, randomisation=TRUE)

> >

> > I don't know where the problem is. I would appreciate if anyone can help

> me

> > with this. Thank you

>

> Do read the help pages, you'll find they explain these things.

>

> Roger

>

> >

> > [[alternative HTML version deleted]]

> >

> > _______________________________________________

> > R-sig-Geo mailing list

> > [hidden email]

> > https://stat.ethz.ch/mailman/listinfo/r-sig-geo

> >

>

> --

> Roger Bivand

> Department of Economics, Norwegian School of Economics,

> Helleveien 30, N-5045 Bergen, Norway.

> voice: +47 55 95 93 55; e-mail: [hidden email]

> https://orcid.org/0000-0003-2392-6140

> https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

>

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo
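The style mismatch Roger points out can be reproduced on a toy example. This is a sketch with a hypothetical 3x3 binary weights matrix (object names are illustrative): mat2listw() keeps the matrix entries as given (style "M"), while nb2listw() defaults to row-standardised weights (style "W"), so Moran's I differs unless you ask for binary style "B".

```r
library(spdep)

# A symmetric binary contiguity matrix for three units.
W <- matrix(c(0, 1, 0,
              1, 0, 1,
              0, 1, 0), nrow = 3, byrow = TRUE)

lw_M <- mat2listw(W)               # style "M": weights used exactly as given
lw_W <- nb2listw(lw_M$neighbours)  # default style "W": row-standardised
lw_B <- nb2listw(lw_M$neighbours, style = "B")  # binary, matches W

all.equal(lw_M$weights, lw_B$weights, check.attributes = FALSE)  # TRUE
all.equal(lw_M$weights, lw_W$weights, check.attributes = FALSE)  # differs
```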


### Re: inference of local Gi* using permutations

On Thu, 7 Mar 2019, Jay Wang wrote:

> Thank you for the detailed explanation. Sorry for missing the explanation

> of res. It is an n by nsim+1 matrix where the n columns are the permutation

> results using sample() and the last column is the actual z values of G_i*s.

> The following line of code was modified to do the permutations: for (i in
> 1:nsim) res[i] <- moran(sample(x), listw, n, S0, zero.policy)$I. This could
> be wrong since sample() does not fix the value of interest at i. Are
> there any functions in R that can do this correctly? How do we calculate
> the p-values after we get the n by nsim matrix? Thank you!

This is an exercise based on the built-in Getis-Ord dataset:

library(spdep)
data(getisord, package="spData")
xycoords <- cbind(xyz$x, xyz$y)
nb30 <- dnearneigh(xycoords, 0, 30)
lw <- nb2listw(nb30, style="B")
G30 <- localG(xyz$val, lw, return_internals=TRUE)
G_obs <- attr(G30, "internals")[,1]
val <- xyz$val
loc_G <- function(x, lw) {
  x_star <- sum(x)
  lx <- lag.listw(lw, x)
  G <- lx/(x_star - c(x))
  G
}
nsim <- 499L
set.seed(1)
res <- matrix(0, nrow=length(val), ncol=nsim)
thisx <- numeric(length(val))
idx <- 1:length(val)
for (j in 1:nsim) {
  for (i in seq(along=val)) {
    thisx[i] <- val[i]
    thisx[!idx %in% i] <- sample(val[-i])
    res[i, j] <- loc_G(thisx, lw)[i]
  }
}
mns <- apply(res, 1, mean)
sds <- apply(res, 1, sd)
G_Zi <- (G_obs - mns)/sds
plot(density(G30), ylim=c(0, 0.25))
lines(density(G_Zi), lty=2, col=2)

As you can see, the analytical z-values match the permutation-based
z-values of G closely. We're doing a permutation bootstrap on the G values,
not on the z-values of G. Next, try pysal (here old pysal < 2):

library(reticulate)
pysal <- import("pysal")
np <- import("numpy")
write.nb.gal(nb30, "nb30.gal")
w <- pysal$open("nb30.gal")$read()
y <- np$array(val)
w$transformation = "B"
lg_30_499 <- pysal$esda$getisord$G_Local(y, w, transform="B",
  permutations=nsim)
lines(density(lg_30_499$z_sim), lty=3, col=3)
all.equal(G_obs, c(lg_30_499$Gs))
all.equal(c(G30), c(lg_30_499$Zs))

The G_Local() function provides both analytical and permutation z-values;
the G values and the analytical z-values agree exactly with spdep. The
permutations do not use the same random number generator or seed, but the
results are broadly similar. If you read pysal/esda/getisord.py, you'll
see that some indexing is done to avoid the double loop. The code is
fairly hard to read, and in optimising, the permutation may cut corners;
self.z_sim is computed in the same way as the simpler R code above.

So while for the local Moran, permutation is demonstrably misleading in

the presence of mis-specification, for local G permutation simply

reproduces the analytical z-values. If you use these to look up normal

p-values, then adjust them for the false discovery rate, usually nothing

is left significant anyway.
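That last step can be sketched in base R (a minimal illustration; the
vector z is simulated here as a stand-in for the G_Zi vector computed
above, so the names and data are hypothetical):

```r
# Two-sided normal p-values from z-values, then a false discovery
# rate adjustment; z is a simulated stand-in for G_Zi.
set.seed(1)
z <- rnorm(100)                       # hypothetical local G z-values
p <- 2 * pnorm(-abs(z))               # two-sided normal p-values
p_fdr <- p.adjust(p, method = "fdr")  # Benjamini-Hochberg adjustment
sum(p < 0.05)      # nominally "significant" before adjustment
sum(p_fdr < 0.05)  # typically far fewer, often none, afterwards
```

The adjusted p-values are never smaller than the raw ones, which is why
adjustment usually removes the apparent cold and hot spots.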

Roger

>

>

> On Thu, Mar 7, 2019 at 8:05 AM Roger Bivand <[hidden email]> wrote:

>

>> On Wed, 6 Mar 2019, Jay Wang wrote:

>>

>>> Hello,

>>>

>>> I am currently using the localG () in spdep package, I was wondering if

>> we

>>> can have a conditional permutation-based inference to get the P value for

>>> every Gi*. I saw that a Monte Carlo simulation is used in moran.mc(), and

>> I

>>> borrowed the following codes from this function and tried to see if I can

>>> do a permutation for localG():

>>>

>>> pvals <- matrix(0, nrow = V, ncol = 1)
>>> for (i in 1:V) {
>>>   rankresi <- rank(res[i, ])
>>>   xranki <- rankresi[length(res[i, ])]
>>>   diffi <- nsim - xranki
>>>   diffi <- ifelse(diffi > 0, diffi, 0)
>>>   pvali <- punif((diffi + 1)/(nsim + 1))
>>>   pvals[i, ] <- pvali
>>> }

>>

>> Monte Carlo or Hope type tests or permutation bootstraps work like

>> analytical permutation in the global case, and are redundant for that

>> reason. Monte Carlo (conditional permutation) in the local case requires

>> that there is no global autocorrelation or other mis-specifications.

>>

>> Your code snippet is not an example, and I think is not how permutation is

>> actually done. In the local case in ArcGIS, GeoDa, etc., you fix the value

>> of interest at i, randomly re-assign all the values to the other

>> observations, and recompute the local indicator at i, doing this say 499

>> times. Then you move on to the next i, and repeat. In your snippet, we

>> cannot see where res is coming from. Is this an n by nsim matrix from nsim

>> runs? Might you be needing to compare the sample values with the observed

>> value? If you are outputting Z-values of G_i anyway, you may need to step

>> back to the actual G_i (here from spdep):

>>

>> attr(localG(..., return_internals=TRUE), "internals")[,1]

>>

>> Then

>>

>> mns <- apply(res, 1, mean)
>> sds <- apply(res, 1, sd)
>> permZ <- (obs_localG - mns)/sds

>>

>> are z values that can be considered as drawn from the normal. However, I

>> advise against inference unless the assumptions are met, and p-values must

>> be adjusted anyway.

>>

>> Roger

>>

>>>

>>> After running these codes with several different datasets, I found that

>> all

>>> the negative Gi*s have very high P values say 0.999 with 999

>> permutations,

>>> meaning that there are no significant cold spots. Where is the problem?

>> How

>>> can we do conditional permutation-based inference for localG() with R

>>> spdep? I understand the critics of permutation-based inference for local

>>> indicators, but I just want to explore this. Thank you!

>>>

>>> Best

>>>

>>> [[alternative HTML version deleted]]

>>>

>>> _______________________________________________

>>> R-sig-Geo mailing list

>>> [hidden email]

>>> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>>>

>>

>> --

>> Roger Bivand

>> Department of Economics, Norwegian School of Economics,

>> Helleveien 30, N-5045 Bergen, Norway.

>> voice: +47 55 95 93 55; e-mail: [hidden email]

>> https://orcid.org/0000-0003-2392-6140

>> https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

>>

>

--

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

https://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en



### Re: inconsistent results from moran.test()

On Fri, 8 Mar 2019, Jay Wang wrote:

> Hello,

>

> Having my own spatial weight matrix, I converted it to a listw format and

> calculated the Moran index. The codes are listed below:

>

> Wod_lw<-mat2listw(Wod)

> moran.test(mydata$Ties_processed,Wod_lw, randomisation=TRUE)

Here, Wod_lw has an unknown style, "M" for matrix.

>

> However, after I generated a gal file, imported it, converted it back to a

> listw file, and re-calculated the Moran index with the following codes, the

> results became totally different (0.1624 vs. 0.1942):

> write.nb.gal(Wod_lw$neighbours,"Wod.gal")

This only writes the neighbours (read the help page), not the weights.

> Wod.gal.nb <- read.gal(file="Wod.gal")

> Wod_lw2<-nb2listw(Wod.gal.nb)

The default style in nb2listw() is "W", row standardised.

What does:

all.equal(Wod_lw$weights, Wod_lw2$weights, check.attributes=FALSE)

say? And:

all.equal(Wod_lw$neighbours, Wod_lw2$neighbours, check.attributes=FALSE)

> moran.test(mydata$Ties_processed,Wod_lw, randomisation=TRUE)

> moran.test(mydata$Ties_processed,Wod_lw2, randomisation=TRUE)

>

> I don't know where the problem is. I would appreciate if anyone can help me

> with this. Thank you

Do read the help pages; you'll find they explain these things.
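The style difference alone can produce a gap of this size. A minimal
base-R sketch (toy data and hypothetical names, not the poster's Wod
matrix): the same neighbour structure gives different values of Moran's I
under raw binary weights and row-standardised weights.

```r
# Moran's I computed directly from a weights matrix, base R only.
moran_I <- function(x, W) {
  z <- x - mean(x)
  (length(x) / sum(W)) * sum(W * outer(z, z)) / sum(z^2)
}
set.seed(42)
x <- rnorm(6)
B <- matrix(0, 6, 6)            # binary contiguity for 6 units on a line
for (i in 1:5) { B[i, i + 1] <- 1; B[i + 1, i] <- 1 }
Wrow <- B / rowSums(B)          # row-standardised, as in style "W"
moran_I(x, B)     # corresponds to the raw-matrix ("M"/"B") result
moran_I(x, Wrow)  # corresponds to the nb2listw() default ("W") result
```

Note that I is invariant to rescaling the whole matrix by a constant (S0
rescales too), but not to row standardisation, which reweights rows
unequally.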

Roger

>

> [[alternative HTML version deleted]]

>

> _______________________________________________

> R-sig-Geo mailing list

> [hidden email]

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>

--

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

https://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

