This is an **archive** for **R-sig-geo**, not a forum. It is not possible to post through Nabble: you may not start a new thread nor follow up an existing thread. If you wish to post, but are not subscribed to the list through the list homepage, subscribe first **(through the list homepage, not via Nabble)** and post from your subscribed email address. Until 2015-06-20, subscribers could post through Nabble, but the policy was changed because too many non-subscribers misunderstood the interface.

### raster::stack fails with "missing value where TRUE/FALSE needed"

Dear list,

Is there any reason why raster::stack returns an error (see below) for the raster provided on Dropbox: <https://www.dropbox.com/s/tdfxezlieka1ofd/gsw.tif?dl=0>?

Thanks

```r
library(raster)

# This may read the raster from Dropbox directly, but it is slow:
# r <- raster("https://www.dropbox.com/s/tdfxezlieka1ofd/gsw.tif?dl=1")

# Otherwise, download the raster from
# https://www.dropbox.com/s/tdfxezlieka1ofd/gsw.tif?dl=0
r <- raster("path/to/downloaded/raster")

stack(r, r)
# Error in if (common.len == 1L) unlist(x, recursive = FALSE) else if
#   (common.len > : missing value where TRUE/FALSE needed

# Reduce the extent (no error):
r2 <- crop(r, extent(r)/2)
stack(r2, r2)
```
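As a workaround sketch (an assumption based on the crop test above, not a confirmed fix), stacking by filename, or working on a reduced extent, may sidestep the failing comparison:

```r
library(raster)

f <- "path/to/downloaded/raster"  # local copy of gsw.tif
r <- raster(f)

# Stacking by filename takes a different code path than stacking
# RasterLayer objects in memory and may avoid the failing if() test.
s1 <- stack(f, f)

# Cropping to a smaller extent is known (from the test above) not to error.
r2 <- crop(r, extent(r)/2)
s2 <- stack(r2, r2)
```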

```
R version 3.5.2 (2018-12-20)
Platform: i386-w64-mingw32/i386 (32-bit)
Running under: Windows >= 8 x64 (build 9200)

Matrix products: default

locale:
[1] LC_COLLATE=English_United Kingdom.1252  LC_CTYPE=English_United Kingdom.1252
    LC_MONETARY=English_United Kingdom.1252
[4] LC_NUMERIC=C                            LC_TIME=English_United Kingdom.1252

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
[1] raster_2.9-23 sp_1.3-1

loaded via a namespace (and not attached):
[1] compiler_3.5.2   rgdal_1.4-4      tools_3.5.2      Rcpp_1.0.0
    codetools_0.2-15 grid_3.5.2       lattice_0.20-38
```

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: Convert data.frame/SpatialPointsDataFrame to raster

Dear Vijay and Hugo,

Thank you very much for your help. I will look into setting the number of rows/cols!

Sincerely,

Milu

On Thu, Aug 1, 2019 at 5:12 PM Vijay Lulla <[hidden email]> wrote:

> I second Hugo's suggestion. IMO, since your points are not on a regular grid
> you will be unable to use `rasterFromXYZ`. The only thing I would add to
> Hugo's answer is that you can set the number of rows/cols that you wish in
> your final raster by setting various arguments in the raster() function.
> example(rasterize) has some good examples.

> HTH,

> Vijay.

>

> On Thu, Aug 1, 2019 at 4:44 AM Hugo Costa <[hidden email]> wrote:

>

>> Dear Milu,

>>

>> the best I can suggest is something like this.

>>

>> library(raster)

>> df<-read.csv("path/to/data/downloaded/from/GoogleDrive")

>> try <- subset(df, year==2010)

>>

>> sp<-SpatialPointsDataFrame(try[,1:2],data=try, proj4string =

>> CRS("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs"))

>>

>> r<-rasterize(sp, y=raster(), field=try$TWL_5, fun=mean, background=NA)

>> plot(r)

>>

>> Hope this helps somehow

>> Good luck

>> Hugo

>>

>> Miluji Sb <[hidden email]> wrote on Thursday, 1/08/2019 at 09:13:

>>

>>> Dear Vijay,

>>>

>>> Thank you again for your reply. I tried attaching my data for two years

>>> and

>>> two variables but the message was rejected. I have saved the data on a

>>> Google Drive folder here

>>> <

>>> https://drive.google.com/drive/folders/15LvLysXqIIua8ukkp8UNEnlJMSK6mA1K?usp=sharing

>>> >.

>>> Hope this will work. Thanks again!

>>>

>>> Sincerely,

>>>

>>> Milu

>>>

>>> On Thu, Aug 1, 2019 at 9:52 AM Miluji Sb <[hidden email]> wrote:

>>>

>>> > Dear Vijay,

>>> >

>>> > Thank you again for your reply. I have attached my data for two years

>>> and

>>> > two variables. Hope they will go through, otherwise the files are also

>>> > available here

>>> > <

>>> https://drive.google.com/drive/folders/15LvLysXqIIua8ukkp8UNEnlJMSK6mA1K?usp=sharing

>>> >.

>>> > Thanks again!

>>> >

>>> > Sincerely,

>>> >

>>> > Milu

>>> >

>>> > On Wed, Jul 31, 2019 at 10:46 PM Vijay Lulla <[hidden email]>

>>> wrote:

>>> >

>>> >> Hmm...I had seen your data and thought that it was just some sample

>>> that

>>> >> you'd shared. If this is your whole data then I don't know how to

>>> create a

>>> >> raster from just one row that is returned from subsetting the

>>> dataframe.

>>> >>

>>> >> Sorry for the noise.

>>> >>

>>> >> On Wed, Jul 31, 2019 at 4:16 PM Miluji Sb <[hidden email]> wrote:

>>> >>

>>> >>> Hello,

>>> >>>

>>> >>> Thank you for your kind reply. Here is a snapshot of the original

>>> data.

>>> >>> I had pasted it at the bottom of my first email but forgot to

>>> mention it.

>>> >>> Thanks again!

>>> >>>

>>> >>> df <- structure(list(lon = c(180, 179.762810919291, 179.523658017568,

>>> >>> 179.311342656601, 179.067616041778, 178.851382109362,

>>> 178.648816406322,

>>> >>> 178.501097394651, 178.662722495847, 178.860599151485), lat =

>>> >>> c(-16.1529296875,

>>> >>> -16.21659020822, -16.266117894201, -16.393550535614,

>>> -16.4457378034442,

>>> >>> -16.561653799838, -16.6533087696649, -16.7741069281329,

>>> >>> -16.914110607613,

>>> >>> -16.9049389730284), nsdec = structure(c(1L, 3L, 4L, 5L, 6L, 7L,

>>> >>> 8L, 9L, 10L, 2L), .Label = c("1 of 10", "10 of 10", "2 of 10",

>>> >>> "3 of 10", "4 of 10", "5 of 10", "6 of 10", "7 of 10", "8 of 10",

>>> >>> "9 of 10"), class = "factor"), TWL_5 = c(2.13810426616849,

>>> >>> 2.16767864033646,

>>> >>> 2.16881240361846, 2.20727073247015, 2.27771608519709,

>>> 2.3649601141941,

>>> >>> 2.44210984856767, 2.52466349543977, 2.63982954290745,

>>> 2.71828906773926

>>> >>> ), TWL_50 = c(2.38302354555823, 2.43142793944275, 2.45733044901087,

>>> >>> 2.53057109758284, 2.61391337469939, 2.71040967066483,

>>> 2.82546443373866,

>>> >>> 2.9709907727849, 3.1785797371187, 3.33227647990861), TWL_95 =

>>> >>> c(2.63753852023063,

>>> >>> 2.7080249053612, 2.75483681166049, 2.86893038433795,

>>> 2.97758282474101,

>>> >>> 3.14541928966618, 3.3986143008625, 3.68043269045659,

>>> 4.09571655859075,

>>> >>> 4.57299670034984), year = c(2010, 2020, 2030, 2040, 2050, 2060,

>>> >>> 2070, 2080, 2090, 2100)), row.names = c(NA, 10L), class =

>>> "data.frame")

>>> >>>

>>> >>> Sincerely,

>>> >>>

>>> >>> Milu

>>> >>>

>>> >>> On Wed, Jul 31, 2019 at 9:20 PM Vijay Lulla <[hidden email]>

>>> >>> wrote:

>>> >>>

>>> >>>> ?`rasterFromXYZ` states that "x and y represent spatial coordinates

>>> and

>>> >>>> must be on a regular grid." And, it appears to me that you might

>>> be losing

>>> >>>> values by rounding lon/lat values. The help file further suggests

>>> that

>>> >>>> `rasterize` might be the function you're looking for. List members

>>> will

>>> >>>> (certainly I will) find it more helpful to propose other solutions

>>> if you

>>> >>>> post a small reproducible example of your original georeferenced

>>> dataset so

>>> >>>> that we get an idea of what data you're using.

>>> >>>>

>>> >>>> Sorry, I cannot be of more help.

>>> >>>>

>>> >>>> On Wed, Jul 31, 2019 at 10:45 AM Miluji Sb <[hidden email]>

>>> wrote:

>>> >>>>

>>> >>>>> Dear all,

>>> >>>>>

>>> >>>>> I have a georeferenced dataset with multiple variables and years. The

>>> >>>>> data is

>>> >>>>> at ~100 km (1° × 1°) spatial resolution. I would like to convert

>>> this

>>> >>>>> into

>>> >>>>> a raster.

>>> >>>>>

>>> >>>>> I have filtered the data for one year and one variable and did the

>>> >>>>> following;

>>> >>>>>

>>> >>>>> try <- subset(df, year==2010)

>>> >>>>> try <- try[,c(1,2,4)]

>>> >>>>> try$lon <- round(try$lon)

>>> >>>>> try$lat <- round(try$lat)

>>> >>>>> r_imp <- rasterFromXYZ(try)

>>> >>>>>

>>> >>>>> Two issues: is it possible to convert the original dataset with the

>>> >>>>> multiple variables and years to a raster? If not, how can I avoid

>>> >>>>> rounding

>>> >>>>> the coordinates? Currently, I get this error "Error in

>>> >>>>> rasterFromXYZ(try) :

>>> >>>>> x cell sizes are not regular" without rounding.

>>> >>>>>

>>> >>>>> Any help will be greatly appreciated. Thank you!

>>> >>>>>

>>> >>>>> Sincerely,

>>> >>>>>

>>> >>>>> Shouro

>>> >>>>>

>>> >>>>> ## Data

>>> >>>>> df <- structure(list(lon = c(180, 179.762810919291,

>>> 179.523658017568,

>>> >>>>> 179.311342656601, 179.067616041778, 178.851382109362,

>>> 178.648816406322,

>>> >>>>> 178.501097394651, 178.662722495847, 178.860599151485), lat =

>>> >>>>> c(-16.1529296875,

>>> >>>>> -16.21659020822, -16.266117894201, -16.393550535614,

>>> -16.4457378034442,

>>> >>>>> -16.561653799838, -16.6533087696649, -16.7741069281329,

>>> >>>>> -16.914110607613,

>>> >>>>> -16.9049389730284), nsdec = structure(c(1L, 3L, 4L, 5L, 6L, 7L,

>>> >>>>> 8L, 9L, 10L, 2L), .Label = c("1 of 10", "10 of 10", "2 of 10",

>>> >>>>> "3 of 10", "4 of 10", "5 of 10", "6 of 10", "7 of 10", "8 of 10",

>>> >>>>> "9 of 10"), class = "factor"), TWL_5 = c(2.13810426616849,

>>> >>>>> 2.16767864033646,

>>> >>>>> 2.16881240361846, 2.20727073247015, 2.27771608519709,

>>> 2.3649601141941,

>>> >>>>> 2.44210984856767, 2.52466349543977, 2.63982954290745,

>>> 2.71828906773926

>>> >>>>> ), TWL_50 = c(2.38302354555823, 2.43142793944275, 2.45733044901087,

>>> >>>>> 2.53057109758284, 2.61391337469939, 2.71040967066483,

>>> 2.82546443373866,

>>> >>>>> 2.9709907727849, 3.1785797371187, 3.33227647990861), TWL_95 =

>>> >>>>> c(2.63753852023063,

>>> >>>>> 2.7080249053612, 2.75483681166049, 2.86893038433795,

>>> 2.97758282474101,

>>> >>>>> 3.14541928966618, 3.3986143008625, 3.68043269045659,

>>> 4.09571655859075,

>>> >>>>> 4.57299670034984), year = c(2010, 2020, 2030, 2040, 2050, 2060,

>>> >>>>> 2070, 2080, 2090, 2100)), row.names = c(NA, 10L), class =

>>> "data.frame")

>>> >>>>>

>>> >>>>> [[alternative HTML version deleted]]

>>> >>>>>

>>> >>>>> _______________________________________________

>>> >>>>> R-sig-Geo mailing list

>>> >>>>> [hidden email]

>>> >>>>> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>>> >>>>>

>>> >>>>

>>> >>>>

>>> >>>>

>>> >>

>>>

>>> [[alternative HTML version deleted]]

>>>

>>> _______________________________________________

>>> R-sig-Geo mailing list

>>> [hidden email]

>>> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>>>

>>

>

>

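The "x cell sizes are not regular" error from rasterFromXYZ quoted in the thread above can also be avoided by snapping the coordinates to the intended grid and passing an explicit resolution. A minimal sketch, assuming the target is a 1° grid with cell edges on integer degrees (the `snap1deg` helper is hypothetical, not from the thread):

```r
library(raster)

# Snap a coordinate to the centre of its 1-degree cell
# (assumes cell edges on integer degrees; adjust if the grid differs).
snap1deg <- function(x) floor(x) + 0.5

# df is the data frame from the thread; keep one year and one variable.
try <- subset(df, year == 2010)[, c("lon", "lat", "TWL_5")]
try$lon <- snap1deg(try$lon)
try$lat <- snap1deg(try$lat)

# An explicit res stops rasterFromXYZ from inferring the cell size
# from the (irregular) point spacing.
r_imp <- rasterFromXYZ(try, res = c(1, 1))
```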


### Re: Convert data.frame/SpatialPointsDataFrame to raster

I second Hugo's suggestion. IMO, since your points are not on a regular grid
you will be unable to use `rasterFromXYZ`. The only thing I would add to
Hugo's answer is that you can set the number of rows/cols that you wish in
your final raster by setting various arguments in the raster() function.
example(rasterize) has some good examples.

HTH,

Vijay.
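Vijay's point about controlling the output rows/cols can be sketched as follows (the global 1° template is an illustrative assumption; `sp` is the SpatialPointsDataFrame built in Hugo's example):

```r
library(raster)

# A template raster fixes the geometry of the rasterized output:
# here a global 1-degree grid (180 rows x 360 cols).
template <- raster(nrows = 180, ncols = 360,
                   xmn = -180, xmx = 180, ymn = -90, ymx = 90,
                   crs = "+proj=longlat +datum=WGS84")

# Points falling in the same template cell are averaged.
r <- rasterize(sp, y = template, field = "TWL_5", fun = mean,
               background = NA)
```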



>> >>>>> multiple variables and years to a raster? If not, how can I avoid

>> >>>>> rounding

>> >>>>> the coordinates? Currently, I get this error "Error in

>> >>>>> rasterFromXYZ(try) :

>> >>>>> x cell sizes are not regular" without rounding.

>> >>>>>

>> >>>>> Any help will be greatly appreciated. Thank you!

>> >>>>>

>> >>>>> Sincerely,

>> >>>>>

>> >>>>> Shouro

>> >>>>>

>> >>>>> ## Data

>> >>>>> df <- structure(list(lon = c(180, 179.762810919291,

>> 179.523658017568,

>> >>>>> 179.311342656601, 179.067616041778, 178.851382109362,

>> 178.648816406322,

>> >>>>> 178.501097394651, 178.662722495847, 178.860599151485), lat =

>> >>>>> c(-16.1529296875,

>> >>>>> -16.21659020822, -16.266117894201, -16.393550535614,

>> -16.4457378034442,

>> >>>>> -16.561653799838, -16.6533087696649, -16.7741069281329,

>> >>>>> -16.914110607613,

>> >>>>> -16.9049389730284), nsdec = structure(c(1L, 3L, 4L, 5L, 6L, 7L,

>> >>>>> 8L, 9L, 10L, 2L), .Label = c("1 of 10", "10 of 10", "2 of 10",

>> >>>>> "3 of 10", "4 of 10", "5 of 10", "6 of 10", "7 of 10", "8 of 10",

>> >>>>> "9 of 10"), class = "factor"), TWL_5 = c(2.13810426616849,

>> >>>>> 2.16767864033646,

>> >>>>> 2.16881240361846, 2.20727073247015, 2.27771608519709,

>> 2.3649601141941,

>> >>>>> 2.44210984856767, 2.52466349543977, 2.63982954290745,

>> 2.71828906773926

>> >>>>> ), TWL_50 = c(2.38302354555823, 2.43142793944275, 2.45733044901087,

>> >>>>> 2.53057109758284, 2.61391337469939, 2.71040967066483,

>> 2.82546443373866,

>> >>>>> 2.9709907727849, 3.1785797371187, 3.33227647990861), TWL_95 =

>> >>>>> c(2.63753852023063,

>> >>>>> 2.7080249053612, 2.75483681166049, 2.86893038433795,

>> 2.97758282474101,

>> >>>>> 3.14541928966618, 3.3986143008625, 3.68043269045659,

>> 4.09571655859075,

>> >>>>> 4.57299670034984), year = c(2010, 2020, 2030, 2040, 2050, 2060,

>> >>>>> 2070, 2080, 2090, 2100)), row.names = c(NA, 10L), class =

>> "data.frame")

>> >>>>>

>> >>>>> [[alternative HTML version deleted]]

>> >>>>>

>> >>>>> _______________________________________________

>> >>>>> R-sig-Geo mailing list

>> >>>>> [hidden email]

>> >>>>> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>> >>>>>

>> >>>>

>> >>>>

>> >>>>

>> >>

>>

>> [[alternative HTML version deleted]]

>>

>> _______________________________________________

>> R-sig-Geo mailing list

>> [hidden email]

>> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>>

>

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo

### Re: Convert data.frame/SpatialPointsDataFrame to raster

Dear Milu,

the best I can suggest is something like this:

library(raster)

# read the data downloaded from the Google Drive folder
df <- read.csv("path/to/data/downloaded/from/GoogleDrive")

# keep a single year
try <- subset(df, year == 2010)

# promote to a SpatialPointsDataFrame (lon/lat are the first two columns)
sp <- SpatialPointsDataFrame(try[, 1:2], data = try,
  proj4string = CRS("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs"))

# rasterize onto the default 1-degree global grid, averaging points per cell
r <- rasterize(sp, y = raster(), field = "TWL_5", fun = mean, background = NA)
plot(r)

Hope this helps somehow.

Good luck,

Hugo
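The same idea extends to Milu's first question (keeping all years, and by repetition all variables, in one object): rasterize each year onto a shared template grid and collect the layers with stack(). The following is only a sketch, built on the ten-row sample df posted in the thread (TWL_5 only, coordinates shortened); it assumes every year's points belong on the same 1-degree template.

```r
library(raster)  # also loads sp

# ten-row sample from the thread (TWL_5 only, coordinates shortened)
df <- data.frame(
  lon = c(180, 179.76, 179.52, 179.31, 179.07,
          178.85, 178.65, 178.50, 178.66, 178.86),
  lat = c(-16.15, -16.22, -16.27, -16.39, -16.45,
          -16.56, -16.65, -16.77, -16.91, -16.90),
  TWL_5 = c(2.14, 2.17, 2.17, 2.21, 2.28, 2.36, 2.44, 2.52, 2.64, 2.72),
  year = seq(2010, 2100, by = 10)
)

template <- raster()  # default 1-degree global grid, matching the data resolution

# one layer per year, all on the same template, collected in a RasterStack
layers <- lapply(unique(df$year), function(y) {
  d <- subset(df, year == y)
  rasterize(d[, c("lon", "lat")], template, field = d$TWL_5, fun = mean)
})
s <- stack(layers)
names(s) <- paste0("TWL_5_", unique(df$year))
```

The same lapply() can be wrapped in an outer loop over variable names (TWL_5, TWL_50, TWL_95) to get one stack per variable.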

Miluji Sb <[hidden email]> wrote on Thursday, 01/08/2019 at 09:13:

> Dear Vijay,

>

> Thank you again for your reply. I tried attaching my data for two years and

> two variables but the message was rejected. I have saved the data on a

> Google Drive folder here

> <

> https://drive.google.com/drive/folders/15LvLysXqIIua8ukkp8UNEnlJMSK6mA1K?usp=sharing

> >.

> Hope this will work. Thanks again!

>

> Sincerely,

>

> Milu

>

> On Thu, Aug 1, 2019 at 9:52 AM Miluji Sb <[hidden email]> wrote:

>

> > Dear Vijay,

> >

> > Thank you again for your reply. I have attached my data for two years and

> > two variables. Hope they will go through, otherwise the files are also

> > available here

> > <

> https://drive.google.com/drive/folders/15LvLysXqIIua8ukkp8UNEnlJMSK6mA1K?usp=sharing

> >.

> > Thanks again!

> >

> > Sincerely,

> >

> > Milu

> >

> > On Wed, Jul 31, 2019 at 10:46 PM Vijay Lulla <[hidden email]>

> wrote:

> >

> >> Hmm...I had seen your data and thought that it was just some sample that

> >> you'd shared. If this is your whole data then I don't know how to

> create a

> >> raster from just one row that is returned from subsetting the dataframe.

> >>

> >> Sorry for the noise.

> >>

> >> On Wed, Jul 31, 2019 at 4:16 PM Miluji Sb <[hidden email]> wrote:

> >>

> >>> Hello,

> >>>

> >>> Thank you for your kind reply. Here is a snapshot of the original

> data.

> >>> I had pasted it at the bottom of my first email but forgot to mention

> it.

> >>> Thanks again!

> >>>

> >>> df <- structure(list(lon = c(180, 179.762810919291, 179.523658017568,

> >>> 179.311342656601, 179.067616041778, 178.851382109362, 178.648816406322,

> >>> 178.501097394651, 178.662722495847, 178.860599151485), lat =

> >>> c(-16.1529296875,

> >>> -16.21659020822, -16.266117894201, -16.393550535614, -16.4457378034442,

> >>> -16.561653799838, -16.6533087696649, -16.7741069281329,

> >>> -16.914110607613,

> >>> -16.9049389730284), nsdec = structure(c(1L, 3L, 4L, 5L, 6L, 7L,

> >>> 8L, 9L, 10L, 2L), .Label = c("1 of 10", "10 of 10", "2 of 10",

> >>> "3 of 10", "4 of 10", "5 of 10", "6 of 10", "7 of 10", "8 of 10",

> >>> "9 of 10"), class = "factor"), TWL_5 = c(2.13810426616849,

> >>> 2.16767864033646,

> >>> 2.16881240361846, 2.20727073247015, 2.27771608519709, 2.3649601141941,

> >>> 2.44210984856767, 2.52466349543977, 2.63982954290745, 2.71828906773926

> >>> ), TWL_50 = c(2.38302354555823, 2.43142793944275, 2.45733044901087,

> >>> 2.53057109758284, 2.61391337469939, 2.71040967066483, 2.82546443373866,

> >>> 2.9709907727849, 3.1785797371187, 3.33227647990861), TWL_95 =

> >>> c(2.63753852023063,

> >>> 2.7080249053612, 2.75483681166049, 2.86893038433795, 2.97758282474101,

> >>> 3.14541928966618, 3.3986143008625, 3.68043269045659, 4.09571655859075,

> >>> 4.57299670034984), year = c(2010, 2020, 2030, 2040, 2050, 2060,

> >>> 2070, 2080, 2090, 2100)), row.names = c(NA, 10L), class = "data.frame")

> >>>

> >>> Sincerely,

> >>>

> >>> Milu

> >>>

> >>> On Wed, Jul 31, 2019 at 9:20 PM Vijay Lulla <[hidden email]>

> >>> wrote:

> >>>

> >>>> ?`rasterFromXYZ` states that "x and y represent spatial coordinates

> and

> >>>> must be on a regular grid." And, it appears to me that you might be

> losing

> >>>> values by rounding lon/lat values. The help file further suggests

> that

> >>>> `rasterize` might be the function you're looking for. List members

> will

> >>>> (certainly I will) find it more helpful to propose other solutions if

> you

> >>>> post a small reproducible example of your original georeferenced

> dataset so

> >>>> that we get an idea of what data you're using.

> >>>>

> >>>> Sorry, I cannot be of more help.

> >>>>

> >>>> On Wed, Jul 31, 2019 at 10:45 AM Miluji Sb <[hidden email]>

> wrote:

> >>>>

> >>>>> Dear all,

> >>>>>

> >>>>> I have georeferenced dataset with multiple variables and years. The

> >>>>> data is

> >>>>> at ~100 km (1° × 1°) spatial resolution. I would like to convert this

> >>>>> into

> >>>>> a raster.

> >>>>>

> >>>>> I have filtered the data for one year and one variable and did the

> >>>>> following;

> >>>>>

> >>>>> try <- subset(df, year==2010)

> >>>>> try <- try[,c(1,2,4)]

> >>>>> try$lon <- round(try$lon)

> >>>>> try$lat <- round(try$lat)

> >>>>> r_imp <- rasterFromXYZ(try)

> >>>>>

> >>>>> Two issues; is it possible to convert the original dataset with the

> >>>>> multiple variables and years to a raster? If not, how can I avoid

> >>>>> rounding

> >>>>> the coordinates? Currently, I get this error "Error in

> >>>>> rasterFromXYZ(try) :

> >>>>> x cell sizes are not regular" without rounding.

> >>>>>

> >>>>> Any help will be greatly appreciated. Thank you!

> >>>>>

> >>>>> Sincerely,

> >>>>>

> >>>>> Shouro

> >>>>>

> >>>>> ## Data

> >>>>> df <- structure(list(lon = c(180, 179.762810919291, 179.523658017568,

> >>>>> 179.311342656601, 179.067616041778, 178.851382109362,

> 178.648816406322,

> >>>>> 178.501097394651, 178.662722495847, 178.860599151485), lat =

> >>>>> c(-16.1529296875,

> >>>>> -16.21659020822, -16.266117894201, -16.393550535614,

> -16.4457378034442,

> >>>>> -16.561653799838, -16.6533087696649, -16.7741069281329,

> >>>>> -16.914110607613,

> >>>>> -16.9049389730284), nsdec = structure(c(1L, 3L, 4L, 5L, 6L, 7L,

> >>>>> 8L, 9L, 10L, 2L), .Label = c("1 of 10", "10 of 10", "2 of 10",

> >>>>> "3 of 10", "4 of 10", "5 of 10", "6 of 10", "7 of 10", "8 of 10",

> >>>>> "9 of 10"), class = "factor"), TWL_5 = c(2.13810426616849,

> >>>>> 2.16767864033646,

> >>>>> 2.16881240361846, 2.20727073247015, 2.27771608519709,

> 2.3649601141941,

> >>>>> 2.44210984856767, 2.52466349543977, 2.63982954290745,

> 2.71828906773926

> >>>>> ), TWL_50 = c(2.38302354555823, 2.43142793944275, 2.45733044901087,

> >>>>> 2.53057109758284, 2.61391337469939, 2.71040967066483,

> 2.82546443373866,

> >>>>> 2.9709907727849, 3.1785797371187, 3.33227647990861), TWL_95 =

> >>>>> c(2.63753852023063,

> >>>>> 2.7080249053612, 2.75483681166049, 2.86893038433795,

> 2.97758282474101,

> >>>>> 3.14541928966618, 3.3986143008625, 3.68043269045659,

> 4.09571655859075,

> >>>>> 4.57299670034984), year = c(2010, 2020, 2030, 2040, 2050, 2060,

> >>>>> 2070, 2080, 2090, 2100)), row.names = c(NA, 10L), class =

> "data.frame")

> >>>>>

> >>>>>

> >>>>

> >>>>

> >>>>

> >>

>

>

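For reference, the "x cell sizes are not regular" error from the opening post comes from rasterFromXYZ() requiring the points themselves to lie on a regular grid, whereas rasterize() only needs a regular target grid. A minimal sketch using the lon/lat values from the sample data in the thread (TWL_5 values shortened):

```r
library(raster)

# lon/lat/TWL_5 from the sample data posted in the thread
xyz <- data.frame(
  x = c(180, 179.762810919291, 179.523658017568, 179.311342656601,
        179.067616041778, 178.851382109362, 178.648816406322,
        178.501097394651, 178.662722495847, 178.860599151485),
  y = c(-16.1529296875, -16.21659020822, -16.266117894201, -16.393550535614,
        -16.4457378034442, -16.561653799838, -16.6533087696649,
        -16.7741069281329, -16.914110607613, -16.9049389730284),
  z = c(2.14, 2.17, 2.17, 2.21, 2.28, 2.36, 2.44, 2.52, 2.64, 2.72)
)

# rasterFromXYZ(xyz)  # fails in the thread: "x cell sizes are not regular"

# rasterize() has no such requirement: each point is simply assigned to the
# 1-degree cell of the target grid that it falls in
r <- rasterize(xyz[, c("x", "y")], raster(), field = xyz$z, fun = mean)
```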

### Re: Convert data.frame/SpatialPointsDataFrame to raster

Dear Vijay,

Thank you again for your reply. I tried attaching my data for two years and

two variables but the message was rejected. I have saved the data in a

Google Drive folder here

<https://drive.google.com/drive/folders/15LvLysXqIIua8ukkp8UNEnlJMSK6mA1K?usp=sharing>.

Hope this will work. Thanks again!

Sincerely,

Milu

On Thu, Aug 1, 2019 at 9:52 AM Miluji Sb <[hidden email]> wrote:

> Dear Vijay,

>

> Thank you again for your reply. I have attached my data for two years and

> two variables. Hope they will go through, otherwise the files are also

> available here

> <https://drive.google.com/drive/folders/15LvLysXqIIua8ukkp8UNEnlJMSK6mA1K?usp=sharing>.

> Thanks again!

>

> Sincerely,

>

> Milu

>

> On Wed, Jul 31, 2019 at 10:46 PM Vijay Lulla <[hidden email]> wrote:

>

>> Hmm...I had seen your data and thought that it was just some sample that

>> you'd shared. If this is your whole data then I don't know how to create a

>> raster from just one row that is returned from subsetting the dataframe.

>>

>> Sorry for the noise.

>>

>> On Wed, Jul 31, 2019 at 4:16 PM Miluji Sb <[hidden email]> wrote:

>>

>>> Hello,

>>>

>>> Thank you for your kind reply. Here is a snapshot of the original data.

>>> I had pasted it at the bottom of my first email but forgot to mention it.

>>> Thanks again!

>>>

>>> df <- structure(list(lon = c(180, 179.762810919291, 179.523658017568,

>>> 179.311342656601, 179.067616041778, 178.851382109362, 178.648816406322,

>>> 178.501097394651, 178.662722495847, 178.860599151485), lat =

>>> c(-16.1529296875,

>>> -16.21659020822, -16.266117894201, -16.393550535614, -16.4457378034442,

>>> -16.561653799838, -16.6533087696649, -16.7741069281329,

>>> -16.914110607613,

>>> -16.9049389730284), nsdec = structure(c(1L, 3L, 4L, 5L, 6L, 7L,

>>> 8L, 9L, 10L, 2L), .Label = c("1 of 10", "10 of 10", "2 of 10",

>>> "3 of 10", "4 of 10", "5 of 10", "6 of 10", "7 of 10", "8 of 10",

>>> "9 of 10"), class = "factor"), TWL_5 = c(2.13810426616849,

>>> 2.16767864033646,

>>> 2.16881240361846, 2.20727073247015, 2.27771608519709, 2.3649601141941,

>>> 2.44210984856767, 2.52466349543977, 2.63982954290745, 2.71828906773926

>>> ), TWL_50 = c(2.38302354555823, 2.43142793944275, 2.45733044901087,

>>> 2.53057109758284, 2.61391337469939, 2.71040967066483, 2.82546443373866,

>>> 2.9709907727849, 3.1785797371187, 3.33227647990861), TWL_95 =

>>> c(2.63753852023063,

>>> 2.7080249053612, 2.75483681166049, 2.86893038433795, 2.97758282474101,

>>> 3.14541928966618, 3.3986143008625, 3.68043269045659, 4.09571655859075,

>>> 4.57299670034984), year = c(2010, 2020, 2030, 2040, 2050, 2060,

>>> 2070, 2080, 2090, 2100)), row.names = c(NA, 10L), class = "data.frame")

>>>

>>> Sincerely,

>>>

>>> Milu

>>>

>>> On Wed, Jul 31, 2019 at 9:20 PM Vijay Lulla <[hidden email]>

>>> wrote:

>>>

>>>> ?`rasterFromXYZ` states that "x and y represent spatial coordinates and

>>>> must be on a regular grid." And, it appears to me that you might be losing

>>>> values by rounding lon/lat values. The help file further suggests that

>>>> `rasterize` might be the function you're looking for. List members will

>>>> (certainly I will) find it more helpful to propose other solutions if you

>>>> post a small reproducible example of your original georeferenced dataset so

>>>> that we get an idea of what data you're using.

>>>>

>>>> Sorry, I cannot be of more help.

>>>>

>>>> On Wed, Jul 31, 2019 at 10:45 AM Miluji Sb <[hidden email]> wrote:

>>>>

>>>>> Dear all,

>>>>>

>>>>> I have georeferenced dataset with multiple variables and years. The

>>>>> data is

>>>>> at ~100 km (1° × 1°) spatial resolution. I would like to convert this

>>>>> into

>>>>> a raster.

>>>>>

>>>>> I have filtered the data for one year and one variable and did the

>>>>> following;

>>>>>

>>>>> try <- subset(df, year==2010)

>>>>> try <- try[,c(1,2,4)]

>>>>> try$lon <- round(try$lon)

>>>>> try$lat <- round(try$lat)

>>>>> r_imp <- rasterFromXYZ(try)

>>>>>

>>>>> Two issues; is it possible to convert the original dataset with the

>>>>> multiple variables and years to a raster? If not, how can I avoid

>>>>> rounding

>>>>> the coordinates? Currently, I get this error "Error in

>>>>> rasterFromXYZ(try) :

>>>>> x cell sizes are not regular" without rounding.

>>>>>

>>>>> Any help will be greatly appreciated. Thank you!

>>>>>

>>>>> Sincerely,

>>>>>

>>>>> Shouro

>>>>>

>>>>> ## Data

>>>>> df <- structure(list(lon = c(180, 179.762810919291, 179.523658017568,

>>>>> 179.311342656601, 179.067616041778, 178.851382109362, 178.648816406322,

>>>>> 178.501097394651, 178.662722495847, 178.860599151485), lat =

>>>>> c(-16.1529296875,

>>>>> -16.21659020822, -16.266117894201, -16.393550535614, -16.4457378034442,

>>>>> -16.561653799838, -16.6533087696649, -16.7741069281329,

>>>>> -16.914110607613,

>>>>> -16.9049389730284), nsdec = structure(c(1L, 3L, 4L, 5L, 6L, 7L,

>>>>> 8L, 9L, 10L, 2L), .Label = c("1 of 10", "10 of 10", "2 of 10",

>>>>> "3 of 10", "4 of 10", "5 of 10", "6 of 10", "7 of 10", "8 of 10",

>>>>> "9 of 10"), class = "factor"), TWL_5 = c(2.13810426616849,

>>>>> 2.16767864033646,

>>>>> 2.16881240361846, 2.20727073247015, 2.27771608519709, 2.3649601141941,

>>>>> 2.44210984856767, 2.52466349543977, 2.63982954290745, 2.71828906773926

>>>>> ), TWL_50 = c(2.38302354555823, 2.43142793944275, 2.45733044901087,

>>>>> 2.53057109758284, 2.61391337469939, 2.71040967066483, 2.82546443373866,

>>>>> 2.9709907727849, 3.1785797371187, 3.33227647990861), TWL_95 =

>>>>> c(2.63753852023063,

>>>>> 2.7080249053612, 2.75483681166049, 2.86893038433795, 2.97758282474101,

>>>>> 3.14541928966618, 3.3986143008625, 3.68043269045659, 4.09571655859075,

>>>>> 4.57299670034984), year = c(2010, 2020, 2030, 2040, 2050, 2060,

>>>>> 2070, 2080, 2090, 2100)), row.names = c(NA, 10L), class = "data.frame")

>>>>>

>>>>>

>>>>

>>>>

>>>>

>>

### Re: Convert data.frame/SpatialPointsDataFrame to raster

Hmm...I had seen your data and thought that it was just some sample that
you'd shared. If this is your whole data set, then I don't know how to create a
raster from the single row returned by subsetting the data frame.

Sorry for the noise.

On Wed, Jul 31, 2019 at 4:16 PM Miluji Sb <[hidden email]> wrote:

> Hello,

>

> Thank you for your kind reply. Here is a snapshot of the original data. I

> had pasted it at the bottom of my first email but forgot to mention it.

> Thanks again!

>

> df <- structure(list(lon = c(180, 179.762810919291, 179.523658017568,

> 179.311342656601, 179.067616041778, 178.851382109362, 178.648816406322,

> 178.501097394651, 178.662722495847, 178.860599151485), lat =

> c(-16.1529296875,

> -16.21659020822, -16.266117894201, -16.393550535614, -16.4457378034442,

> -16.561653799838, -16.6533087696649, -16.7741069281329, -16.914110607613,

> -16.9049389730284), nsdec = structure(c(1L, 3L, 4L, 5L, 6L, 7L,

> 8L, 9L, 10L, 2L), .Label = c("1 of 10", "10 of 10", "2 of 10",

> "3 of 10", "4 of 10", "5 of 10", "6 of 10", "7 of 10", "8 of 10",

> "9 of 10"), class = "factor"), TWL_5 = c(2.13810426616849,

> 2.16767864033646,

> 2.16881240361846, 2.20727073247015, 2.27771608519709, 2.3649601141941,

> 2.44210984856767, 2.52466349543977, 2.63982954290745, 2.71828906773926

> ), TWL_50 = c(2.38302354555823, 2.43142793944275, 2.45733044901087,

> 2.53057109758284, 2.61391337469939, 2.71040967066483, 2.82546443373866,

> 2.9709907727849, 3.1785797371187, 3.33227647990861), TWL_95 =

> c(2.63753852023063,

> 2.7080249053612, 2.75483681166049, 2.86893038433795, 2.97758282474101,

> 3.14541928966618, 3.3986143008625, 3.68043269045659, 4.09571655859075,

> 4.57299670034984), year = c(2010, 2020, 2030, 2040, 2050, 2060,

> 2070, 2080, 2090, 2100)), row.names = c(NA, 10L), class = "data.frame")

>

> Sincerely,

>

> Milu

>

> On Wed, Jul 31, 2019 at 9:20 PM Vijay Lulla <[hidden email]> wrote:

>

>> ?`rasterFromXYZ` states that "x and y represent spatial coordinates and

>> must be on a regular grid." And, it appears to me that you might be losing

>> values by rounding lon/lat values. The help file further suggests that

>> `rasterize` might be the function you're looking for. List members will

>> (certainly I will) find it more helpful to propose other solutions if you

>> post a small reproducible example of your original georeferenced dataset so

>> that we get an idea of what data you're using.

>>

>> Sorry, I cannot be of more help.

>>

>> On Wed, Jul 31, 2019 at 10:45 AM Miluji Sb <[hidden email]> wrote:

>>

>>> Dear all,

>>>

>>> I have georeferenced dataset with multiple variables and years. The data

>>> is

>>> at ~100 km (1° × 1°) spatial resolution. I would like to convert this

>>> into

>>> a raster.

>>>

>>> I have filtered the data for one year and one variable and did the

>>> following;

>>>

>>> try <- subset(df, year==2010)

>>> try <- try[,c(1,2,4)]

>>> try$lon <- round(try$lon)

>>> try$lat <- round(try$lat)

>>> r_imp <- rasterFromXYZ(try)

>>>

>>> Two issues; is it possible to convert the original dataset with the

>>> multiple variables and years to a raster? If not, how can I avoid

>>> rounding

>>> the coordinates? Currently, I get this error "Error in

>>> rasterFromXYZ(try) :

>>> x cell sizes are not regular" without rounding.

>>>

>>> Any help will be greatly appreciated. Thank you!

>>>

>>> Sincerely,

>>>

>>> Shouro

>>>

>>> ## Data

>>> df <- structure(list(lon = c(180, 179.762810919291, 179.523658017568,

>>> 179.311342656601, 179.067616041778, 178.851382109362, 178.648816406322,

>>> 178.501097394651, 178.662722495847, 178.860599151485), lat =

>>> c(-16.1529296875,

>>> -16.21659020822, -16.266117894201, -16.393550535614, -16.4457378034442,

>>> -16.561653799838, -16.6533087696649, -16.7741069281329, -16.914110607613,

>>> -16.9049389730284), nsdec = structure(c(1L, 3L, 4L, 5L, 6L, 7L,

>>> 8L, 9L, 10L, 2L), .Label = c("1 of 10", "10 of 10", "2 of 10",

>>> "3 of 10", "4 of 10", "5 of 10", "6 of 10", "7 of 10", "8 of 10",

>>> "9 of 10"), class = "factor"), TWL_5 = c(2.13810426616849,

>>> 2.16767864033646,

>>> 2.16881240361846, 2.20727073247015, 2.27771608519709, 2.3649601141941,

>>> 2.44210984856767, 2.52466349543977, 2.63982954290745, 2.71828906773926

>>> ), TWL_50 = c(2.38302354555823, 2.43142793944275, 2.45733044901087,

>>> 2.53057109758284, 2.61391337469939, 2.71040967066483, 2.82546443373866,

>>> 2.9709907727849, 3.1785797371187, 3.33227647990861), TWL_95 =

>>> c(2.63753852023063,

>>> 2.7080249053612, 2.75483681166049, 2.86893038433795, 2.97758282474101,

>>> 3.14541928966618, 3.3986143008625, 3.68043269045659, 4.09571655859075,

>>> 4.57299670034984), year = c(2010, 2020, 2030, 2040, 2050, 2060,

>>> 2070, 2080, 2090, 2100)), row.names = c(NA, 10L), class = "data.frame")

>>>

>>> [[alternative HTML version deleted]]

>>>

>>> _______________________________________________

>>> R-sig-Geo mailing list

>>> [hidden email]

>>> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>>>

>>

>>

>>



### Re: Convert data.frame/SpatialPointsDataFrame to raster

Hello,

Thank you for your kind reply. Here is a snapshot of the original data. I

had pasted it at the bottom of my first email but forgot to mention it.

Thanks again!

df <- structure(list(lon = c(180, 179.762810919291, 179.523658017568,

179.311342656601, 179.067616041778, 178.851382109362, 178.648816406322,

178.501097394651, 178.662722495847, 178.860599151485), lat =

c(-16.1529296875,

-16.21659020822, -16.266117894201, -16.393550535614, -16.4457378034442,

-16.561653799838, -16.6533087696649, -16.7741069281329, -16.914110607613,

-16.9049389730284), nsdec = structure(c(1L, 3L, 4L, 5L, 6L, 7L,

8L, 9L, 10L, 2L), .Label = c("1 of 10", "10 of 10", "2 of 10",

"3 of 10", "4 of 10", "5 of 10", "6 of 10", "7 of 10", "8 of 10",

"9 of 10"), class = "factor"), TWL_5 = c(2.13810426616849,

2.16767864033646,

2.16881240361846, 2.20727073247015, 2.27771608519709, 2.3649601141941,

2.44210984856767, 2.52466349543977, 2.63982954290745, 2.71828906773926

), TWL_50 = c(2.38302354555823, 2.43142793944275, 2.45733044901087,

2.53057109758284, 2.61391337469939, 2.71040967066483, 2.82546443373866,

2.9709907727849, 3.1785797371187, 3.33227647990861), TWL_95 =

c(2.63753852023063,

2.7080249053612, 2.75483681166049, 2.86893038433795, 2.97758282474101,

3.14541928966618, 3.3986143008625, 3.68043269045659, 4.09571655859075,

4.57299670034984), year = c(2010, 2020, 2030, 2040, 2050, 2060,

2070, 2080, 2090, 2100)), row.names = c(NA, 10L), class = "data.frame")

Sincerely,

Milu

On Wed, Jul 31, 2019 at 9:20 PM Vijay Lulla <[hidden email]> wrote:

> ?`rasterFromXYZ` states that "x and y represent spatial coordinates and

> must be on a regular grid." And, it appears to me that you might be losing

> values by rounding lon/lat values. The help file further suggests that

> `rasterize` might be the function you're looking for. List members will

> (certainly I will) find it more helpful to propose other solutions if you

> post a small reproducible example of your original georeferenced dataset so

> that we get an idea of what data you're using.

>

> Sorry, I cannot be of more help.

>

> On Wed, Jul 31, 2019 at 10:45 AM Miluji Sb <[hidden email]> wrote:

>

>> Dear all,

>>

>> I have georeferenced dataset with multiple variables and years. The data

>> is

>> at ~100 km (1° × 1°) spatial resolution. I would like to convert this into

>> a raster.

>>

>> I have filtered the data for one year and one variable and did the

>> following;

>>

>> try <- subset(df, year==2010)

>> try <- try[,c(1,2,4)]

>> try$lon <- round(try$lon)

>> try$lat <- round(try$lat)

>> r_imp <- rasterFromXYZ(try)

>>

>> Two issues; is it possible to convert the original dataset with the

>> multiple variables and years to a raster? If not, how can I avoid rounding

>> the coordinates? Currently, I get this error "Error in rasterFromXYZ(try)

>> :

>> x cell sizes are not regular" without rounding.

>>

>> Any help will be greatly appreciated. Thank you!

>>

>> Sincerely,

>>

>> Shouro

>>

>> ## Data

>> df <- structure(list(lon = c(180, 179.762810919291, 179.523658017568,

>> 179.311342656601, 179.067616041778, 178.851382109362, 178.648816406322,

>> 178.501097394651, 178.662722495847, 178.860599151485), lat =

>> c(-16.1529296875,

>> -16.21659020822, -16.266117894201, -16.393550535614, -16.4457378034442,

>> -16.561653799838, -16.6533087696649, -16.7741069281329, -16.914110607613,

>> -16.9049389730284), nsdec = structure(c(1L, 3L, 4L, 5L, 6L, 7L,

>> 8L, 9L, 10L, 2L), .Label = c("1 of 10", "10 of 10", "2 of 10",

>> "3 of 10", "4 of 10", "5 of 10", "6 of 10", "7 of 10", "8 of 10",

>> "9 of 10"), class = "factor"), TWL_5 = c(2.13810426616849,

>> 2.16767864033646,

>> 2.16881240361846, 2.20727073247015, 2.27771608519709, 2.3649601141941,

>> 2.44210984856767, 2.52466349543977, 2.63982954290745, 2.71828906773926

>> ), TWL_50 = c(2.38302354555823, 2.43142793944275, 2.45733044901087,

>> 2.53057109758284, 2.61391337469939, 2.71040967066483, 2.82546443373866,

>> 2.9709907727849, 3.1785797371187, 3.33227647990861), TWL_95 =

>> c(2.63753852023063,

>> 2.7080249053612, 2.75483681166049, 2.86893038433795, 2.97758282474101,

>> 3.14541928966618, 3.3986143008625, 3.68043269045659, 4.09571655859075,

>> 4.57299670034984), year = c(2010, 2020, 2030, 2040, 2050, 2060,

>> 2070, 2080, 2090, 2100)), row.names = c(NA, 10L), class = "data.frame")

>>

>> [[alternative HTML version deleted]]

>>

>> _______________________________________________

>> R-sig-Geo mailing list

>> [hidden email]

>> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>>

>

>

>



### Re: Convert data.frame/SpatialPointsDataFrame to raster

?`rasterFromXYZ` states that "x and y represent spatial coordinates and

must be on a regular grid." And, it appears to me that you might be losing

values by rounding lon/lat values. The help file further suggests that

`rasterize` might be the function you're looking for. List members will

(certainly I will) find it more helpful to propose other solutions if you

post a small reproducible example of your original georeferenced dataset so

that we get an idea of what data you're using.

Sorry, I cannot be of more help.
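A minimal sketch of the `rasterize` route suggested above, using a small subset of the posted coordinates; the 1° template extent/resolution and the `mean` aggregation function are assumptions for illustration, not part of the original advice:

```r
library(raster)

# Irregularly spaced points: rasterize() assigns each point to the
# template cell it falls in, so no regular x/y spacing is required.
pts <- data.frame(lon = c(180, 179.7628, 179.5237),
                  lat = c(-16.1529, -16.2166, -16.2661),
                  TWL_5 = c(2.138, 2.168, 2.169))

# assumed 1-degree template covering the points
template <- raster(extent(178, 181, -17, -15), res = 1,
                   crs = "+proj=longlat +datum=WGS84")

# cells receiving several points are summarised with fun (here: mean)
r <- rasterize(pts[, c("lon", "lat")], template,
               field = pts$TWL_5, fun = mean)
```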

On Wed, Jul 31, 2019 at 10:45 AM Miluji Sb <[hidden email]> wrote:

> Dear all,

>

> I have georeferenced dataset with multiple variables and years. The data is

> at ~100 km (1° × 1°) spatial resolution. I would like to convert this into

> a raster.

>

> I have filtered the data for one year and one variable and did the

> following;

>

> try <- subset(df, year==2010)

> try <- try[,c(1,2,4)]

> try$lon <- round(try$lon)

> try$lat <- round(try$lat)

> r_imp <- rasterFromXYZ(try)

>

> Two issues; is it possible to convert the original dataset with the

> multiple variables and years to a raster? If not, how can I avoid rounding

> the coordinates? Currently, I get this error "Error in rasterFromXYZ(try) :

> x cell sizes are not regular" without rounding.

>

> Any help will be greatly appreciated. Thank you!

>

> Sincerely,

>

> Shouro

>

> ## Data

> df <- structure(list(lon = c(180, 179.762810919291, 179.523658017568,

> 179.311342656601, 179.067616041778, 178.851382109362, 178.648816406322,

> 178.501097394651, 178.662722495847, 178.860599151485), lat =

> c(-16.1529296875,

> -16.21659020822, -16.266117894201, -16.393550535614, -16.4457378034442,

> -16.561653799838, -16.6533087696649, -16.7741069281329, -16.914110607613,

> -16.9049389730284), nsdec = structure(c(1L, 3L, 4L, 5L, 6L, 7L,

> 8L, 9L, 10L, 2L), .Label = c("1 of 10", "10 of 10", "2 of 10",

> "3 of 10", "4 of 10", "5 of 10", "6 of 10", "7 of 10", "8 of 10",

> "9 of 10"), class = "factor"), TWL_5 = c(2.13810426616849,

> 2.16767864033646,

> 2.16881240361846, 2.20727073247015, 2.27771608519709, 2.3649601141941,

> 2.44210984856767, 2.52466349543977, 2.63982954290745, 2.71828906773926

> ), TWL_50 = c(2.38302354555823, 2.43142793944275, 2.45733044901087,

> 2.53057109758284, 2.61391337469939, 2.71040967066483, 2.82546443373866,

> 2.9709907727849, 3.1785797371187, 3.33227647990861), TWL_95 =

> c(2.63753852023063,

> 2.7080249053612, 2.75483681166049, 2.86893038433795, 2.97758282474101,

> 3.14541928966618, 3.3986143008625, 3.68043269045659, 4.09571655859075,

> 4.57299670034984), year = c(2010, 2020, 2030, 2040, 2050, 2060,

> 2070, 2080, 2090, 2100)), row.names = c(NA, 10L), class = "data.frame")

>

> [[alternative HTML version deleted]]

>

> _______________________________________________

> R-sig-Geo mailing list

> [hidden email]

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

>



### Convert data.frame/SpatialPointsDataFrame to raster

Dear all,

I have a georeferenced dataset with multiple variables and years. The data is
at ~100 km (1° × 1°) spatial resolution. I would like to convert it into
a raster.

I have filtered the data for one year and one variable and did the
following:

try <- subset(df, year == 2010)  # keep one year
try <- try[, c(1, 2, 4)]         # lon, lat, TWL_5
try$lon <- round(try$lon)
try$lat <- round(try$lat)
r_imp <- rasterFromXYZ(try)

Two questions: is it possible to convert the original dataset, with its
multiple variables and years, to a raster? If not, how can I avoid rounding
the coordinates? Without rounding I currently get the error "Error in
rasterFromXYZ(try) : x cell sizes are not regular".

Any help will be greatly appreciated. Thank you!

Sincerely,

Shouro

## Data

df <- structure(list(
  lon = c(180, 179.762810919291, 179.523658017568, 179.311342656601,
          179.067616041778, 178.851382109362, 178.648816406322,
          178.501097394651, 178.662722495847, 178.860599151485),
  lat = c(-16.1529296875, -16.21659020822, -16.266117894201,
          -16.393550535614, -16.4457378034442, -16.561653799838,
          -16.6533087696649, -16.7741069281329, -16.914110607613,
          -16.9049389730284),
  nsdec = structure(c(1L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 2L),
                    .Label = c("1 of 10", "10 of 10", "2 of 10", "3 of 10",
                               "4 of 10", "5 of 10", "6 of 10", "7 of 10",
                               "8 of 10", "9 of 10"), class = "factor"),
  TWL_5 = c(2.13810426616849, 2.16767864033646, 2.16881240361846,
            2.20727073247015, 2.27771608519709, 2.3649601141941,
            2.44210984856767, 2.52466349543977, 2.63982954290745,
            2.71828906773926),
  TWL_50 = c(2.38302354555823, 2.43142793944275, 2.45733044901087,
             2.53057109758284, 2.61391337469939, 2.71040967066483,
             2.82546443373866, 2.9709907727849, 3.1785797371187,
             3.33227647990861),
  TWL_95 = c(2.63753852023063, 2.7080249053612, 2.75483681166049,
             2.86893038433795, 2.97758282474101, 3.14541928966618,
             3.3986143008625, 3.68043269045659, 4.09571655859075,
             4.57299670034984),
  year = c(2010, 2020, 2030, 2040, 2050, 2060, 2070, 2080, 2090, 2100)),
  row.names = c(NA, 10L), class = "data.frame")
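For the multi-year part of the question, one possible approach is to rasterize each year onto a common template and stack the resulting layers. This is only a sketch: with the ten-row `df` above each layer would hold a single cell value, and the template extent and resolution are assumptions.

```r
library(raster)

# assumes `df` as defined above; assumed 1-degree template
template <- raster(extent(178, 181, -17.5, -15.5), res = 1,
                   crs = "+proj=longlat +datum=WGS84")

years <- sort(unique(df$year))
layers <- lapply(years, function(y) {
  d <- subset(df, year == y)
  # one layer per year; cells with several points are averaged
  rasterize(d[, c("lon", "lat")], template, field = d$TWL_5, fun = mean)
})
s <- stack(layers)
names(s) <- paste0("TWL_5_", years)
```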



### st_sample with raster

Hi All,

spatstat::rpoint has an argument f that accepts a pixel image object (class "im") and generates points whose probability density is proportional to that surface. Just wondering: does sf::st_sample, or perhaps another function, offer a similar method for rasters?

Thanks so much!

Ty

Tyler Frazier, Ph.D.

Lecturer of Interdisciplinary Studies

Data Science Program

College of William and Mary

Webpage: http://tjfrazier.people.wm.edu

Phone: +01 (757) 386-1269
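One possible workaround, sketched here rather than offered as an authoritative answer: sample cell indices with probability proportional to the raster values, then jitter each point uniformly within its cell. The toy density surface and the choice of base-`raster` functions are assumptions for illustration.

```r
library(raster)

set.seed(1)
# toy density surface
r <- raster(nrows = 10, ncols = 10, xmn = 0, xmx = 10, ymn = 0, ymx = 10)
values(r) <- runif(ncell(r))

n <- 100
v <- getValues(r)
# draw cells with probability proportional to cell value
cells <- sample(seq_len(ncell(r)), n, replace = TRUE, prob = v / sum(v))
xy <- xyFromCell(r, cells)  # cell centres
# jitter uniformly within each cell so points are not stacked on centres
xy[, 1] <- xy[, 1] + runif(n, -0.5, 0.5) * res(r)[1]
xy[, 2] <- xy[, 2] + runif(n, -0.5, 0.5) * res(r)[2]
```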



### Re: Simple Ripley's CSR test for marked point patterns --- addendum.

Dr. Rolf Turner,

Thanks again; your explanations are like a personal spatial statistics course for me :) You are very clear, but I still have some questions. First, this data set is not artificial: I have the real coordinates and areas of leaf-cutting ant nests in my example (my complete data set has more than 3,000 coordinates and areas). The overlap problem arises because we represent a nest's area as a circle (for ease of comprehension and modelling), but in the real world these areas are elliptical, sometimes with eccentricity close to 1 (in such cases two nests can be close without overlapping in the field). Based on your comments I used the CSRI test, but first I tried to explore further the question of intensities and size categories (s, m, and l). For this, could I test intensities by pairwise comparison? Lastly, do you have any solution to my overlap problem, given that my objective is to study the pattern as a realization of a Poisson process?

Please see my code below:

#Packages

require(spatstat)

require(sp)

# Create some points that represent ant nests

xp<-c(371278.588,371250.722,371272.618,371328.421,371349.974,

371311.95,371296.265,371406.46,371411.551,371329.041,371338.081,

371334.182,371333.756,371299.818,371254.374,371193.673,371172.836,

371173.803,371153.73,371165.051,371140.417,371168.279,371166.367,

371180.575,371132.664,371129.791,371232.919,371208.502,371289.462,

371207.595,371219.008,371139.921,371133.215,371061.467,371053.69,

371099.897,371108.782,371112.52,371114.241,371176.236,371159.185,

371159.291,371158.552,370978.252,371120.03,371116.993)

yp<-c(8246507.94,8246493.176,8246465.974,8246464.413,8246403.465,

8246386.098,8246432.144,8246394.827,8246366.201,8246337.626,

8246311.125,8246300.039,8246299.594,8246298.072,8246379.351,

8246431.998,8246423.913,8246423.476,8246431.658,8246418.226,

8246400.161,8246396.891,8246394.225,8246400.391,8246370.244,

8246367.019,8246311.075,8246255.174,8246255.085,8246226.514,

8246215.847,8246337.316,8246330.197,8246311.197,8246304.183,

8246239.282,8246239.887,8246241.678,8246240.361,8246167.364,

8246171.581,8246171.803,8246169.807,8246293.57,8246183.194,8246189.926)

#Now I have the size of each nest (marked process)

area<-c(117,30,4,341,15,160,35,280,108,168,63,143,2,48,182,42,

88,56,27,156,288,45,49,234,72,270,91,40,304,56,35,4,56.7,9,4.6,

105,133,135,23.92,190,12.9,15.2,192.78,104,255,24)

# Make a contour for the window creation

W <- convexhull.xy(xp,yp)

# Class of nests size - large, medium and small

acat <- cut(area,breaks=c(-Inf,25,55,Inf),right=FALSE,

labels=c("s","m","l"))

#Create a ppp object

syn.ppp <- ppp(x=xp,y=yp,window=W, marks=acat)

#Test initial hypothesis

f1 <- ppm(syn.ppp) # Same (constant) intensity for each area category.

f2 <- ppm(syn.ppp ~ marks) # Allows a different (constant) intensity for each area category.

anova(f1,f2,test="Chi") # The test rejects the hypothesis that the intensities are the same - p = 0.002015 **

# First question: could I test intensities by pairwise comparison?

# Small vs medium sizes

acat2<-acat

levels(acat2)

levels(acat2)[1]<-"s_m"

levels(acat2)[2]<-"s_m"

levels(acat2)

syn.ppp2 <- ppp(x=xp,y=yp,window=W, marks=acat2)

f3 <- ppm(syn.ppp2 ~ marks)

f4 <- ppm(syn.ppp2)

anova(f4,f3,test="Chi") # Intensities of small and medium sizes are the same - p = 0.237

# Medium vs large sizes

acat3<-acat

levels(acat3)

levels(acat3)[2]<-"m_l"

levels(acat3)[3]<-"m_l"

levels(acat3)

syn.ppp3 <- ppp(x=xp,y=yp,window=W, marks=acat3)

f5 <- ppm(syn.ppp3 ~ marks)

f6 <- ppm(syn.ppp3)

anova(f5,f6,test="Chi") # Intensities of medium and large sizes are different - p = 7.827e-05 ***
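As an aside, the two-step level renaming used above can be done in a single assignment, because `levels<-` merges duplicated level names. A minimal base-R sketch with a hypothetical stand-in factor:

```r
# Hypothetical size factor standing in for acat.
acat <- factor(c("s", "m", "l", "m", "s"), levels = c("s", "m", "l"))

# Rename levels in one step: "s" and "m" both become "s_m", which merges them.
acat2 <- acat
levels(acat2) <- c("s_m", "s_m", "l")

table(acat2)
```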

So small and medium sizes have the same intensity, but it differs from that of the large size class. With this established, I will test CSRI:

# Now testing CSRI by simulation

foo <- function(W){

s_m <- runifpoint(sum(area<55),win=W)   # sum() counts the small/medium nests

l <- runifpoint(sum(area>=55),win=W)    # sum() counts the large nests

superimpose(s_m=s_m,l=l)

}

simex <- expression(foo(W))

#and then

set.seed(12)

E <- envelope(syn.ppp2,simulate=simex,nsim=999, savefuns=TRUE)

plot(E)

dtst <- dclf.test(E)

plot(dtst)

mtst <- mad.test(E)

plot(mtst)

# now gives p-value = 0.003 for dtst and p-value = 0.002 for mtst
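One caution on counting with logical vectors (relevant to calls like runifpoint(<count>, win = W)): sum() of a comparison counts the TRUEs, while length() returns the length of the whole logical vector regardless of its values. A minimal base-R illustration with hypothetical areas:

```r
area <- c(12, 30, 60, 4, 80, 55)  # hypothetical nest areas

n_small_medium <- sum(area < 55)   # counts elements below the threshold
n_large <- sum(area >= 55)

# By contrast, length() ignores the comparison entirely:
length(area < 55)  # always 6, the number of elements
```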

Thanks in advance,

Alexandre

--

======================================================================

Alexandre dos Santos

Proteção Florestal

IFMT - Instituto Federal de Educação, Ciência e Tecnologia de Mato Grosso

Campus Cáceres

Caixa Postal 244

Avenida dos Ramires, s/n

Bairro: Distrito Industrial

Cáceres - MT CEP: 78.200-000

Fone: (+55) 65 99686-6970 (VIVO) (+55) 65 3221-2674 (FIXO)

[hidden email]

Lattes: http://lattes.cnpq.br/1360403201088680

OrcID: orcid.org/0000-0001-8232-6722

Researchgate: www.researchgate.net/profile/Alexandre_Santos10

LinkedIn: br.linkedin.com/in/alexandre-dos-santos-87961635

Mendeley:www.mendeley.com/profiles/alexandre-dos-santos6/

======================================================================

On 26/07/2019 at 17:29, Rolf Turner wrote:

>

> I have realised that there were a couple of oversights in my previous

> posting on this issue. One is a bit subtle; the other is a bit of a

> blunder on my part.

>

> First the "subtle" one. The test that I proposed for CSRI is a test

> done using the estimated parameters of the proposed model to generate

> realisations of data sets under the null hypothesis. Such tests tend

> to be conservative. (See section 10.6.3, p. 388 ff., in [1].)

>

> In the current instance (testing for CSRI) the conservatism can be

> overcome by simulating data conditional on the numbers of points of

> each type in the "real" data. This can be done here via:

>

> foo <- function(W){

> s <- runifpoint(10,win=W)

> m <- runifpoint(9,win=W)

> l <- runifpoint(27,win=W)

> superimpose(s=s,m=m,l=l)

> }

> simex <- expression(foo(W))

>

> and then

>

> set.seed(42)

> E <- envelope(syn.ppp,simulate=simex,savefuns=TRUE)

> dtst <- dclf.test(E)

> mtst <- mad.test(E)

>

> This gives p-values of 0.06 from the dclf test and 0.09 from the mad

> test. Thus there appears to be some slight evidence against the null

> hypothesis. (Evidence at the 0.10 significance level.)

>

> That this should be so is *OBVIOUS* (!!!) if we turn to the unsubtle

> point that I overlooked. It is clear that the pattern of ants' nests

> cannot be truly a realisation of a Poisson process since there must be

> a bit of a "hard core" effect. Two ants' nests cannot overlap. Thus

> if we approximate the shape of each nest by a disc, points i and j

> must be a distance of at least r_i + r_j from each other, where r_i =

> sqrt(area_i/pi), and similarly for r_j.

>

> However I note that the data provided seem to violate this principle

> in several instances.?? E.g. points 41 and 42 are a distance of only

> 0.2460 metres apart but areas 41 and 42 are 12.9 and 15.2 square

> metres, yielding putative radii of 3.5917 and 3.8987 metres, whence

> the closest

> these points could possibly be (under the "disc-shaped assumption") is

> 7.4904 metres, far larger than 0.2460. So something is a bit out of

> whack here. Perhaps these are made-up ("synthetic") data and the

> process of making up the data did not take account of the minimum

> distance constraint.

>

> How to incorporate the "hard core" aspect of your (real?) data into

> the modelling exercise, and what the impact of it is upon your

> research question(s), is unclear to me and is likely to be complicated.

>

> cheers,

>

> Rolf

>
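The hard-core check Rolf describes can be sketched in base R: under the disc assumption each nest has radius sqrt(area/pi), and any pair of points closer than the sum of their radii violates the constraint. The coordinates and areas below are hypothetical:

```r
# Hypothetical coordinates (metres) and nest areas (square metres).
x <- c(0, 3, 20)
y <- c(0, 0, 0)
area <- c(12.9, 15.2, 30)

r <- sqrt(area / pi)                # disc radius implied by each area
d <- as.matrix(dist(cbind(x, y)))   # pairwise distances between nests
min_sep <- outer(r, r, `+`)         # minimum allowed separation r_i + r_j

# Pairs (i < j) that sit closer together than their discs allow:
viol <- which(d < min_sep & upper.tri(d), arr.ind = TRUE)
viol
```

Here only the first pair (3 m apart, combined radii about 4.2 m) is flagged.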

[[alternative HTML version deleted]]

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: how to specify the size of device so that it fits a SpatialPolygonsDataFrame plot exactly

On 29/07/2019 at 11:12, Roger Bivand wrote:

> On Sun, 28 Jul 2019, Patrick Giraudoux wrote:

>

>>> On 28/07/2019 at 17:08, Roger Bivand wrote: This feels like a

>>> section in

>>> ASDAR, ch. 3, and code chunks 24-27 in

>>> https://asdar-book.org/book2ed/vis_mod.R. Could you please try that

>>> first

>>> by way of something reproducible?

>>

>> On 28/07/2019 at 17:15, Patrick Giraudoux wrote:

>>>

>>> Ups... How can I have missed this chapter of my bible ;-) ? (must

>>> admit I

>>> had been on google first). Will re-read it carefully and come back

>>> to the

>>> list with a solution or a reproducible example, indeed.

>>

>>

>> OK. Read it again, I was not totally lost. Here is a reproducible

>> example. To ease reproducibility with simple objects, I will use two

>> bounding boxes. bbChina in WGS84, bbChinaUTM47 in UTM47. I want a

>> window fitting the WGS84, and you'll see I get it through a strange way.

>>

>> bbChina <- new("SpatialPolygons", polygons = list(new("Polygons",

>> Polygons = list( new("Polygon", labpt = c(104.8, 35.95), area =

>> 2372.28, hole = FALSE, ringDir = 1L, coords = structure(c(73, 73,

>> 136.6, 136.6, 73, 17.3, 54.6, 54.6, 17.3, 17.3), .Dim = c(5L, 2L)))),

>> plotOrder = 1L, labpt = c(104.8, 35.95), ID = "1", area = 2372.28)),

>> plotOrder = 1L, bbox = structure(c(73, 17.3, 136.6, 54.6), .Dim =

>> c(2L, 2L), .Dimnames = list(c("x", "y"), c("min", "max"))),

>> proj4string = new("CRS", projargs = "+init=epsg:4326 +proj=longlat

>> +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0"))

>>

>> bbChinaUTM47 <- new("SpatialPolygons", polygons =

>> list(new("Polygons", Polygons = list( new("Polygon", labpt =

>> c(856106.391943348, 4317264.60126758 ), area = 30651262771540.1, hole

>> = FALSE, ringDir = 1L, coords = structure(c(-2331000.09677063,

>> -2331000.09677063, 4043212.88065733, 4043212.88065733,

>> -2331000.09677063, 1912947.1678777, 6721582.03465746,

>> 6721582.03465746, 1912947.1678777, 1912947.1678777), .Dim = c(5L,

>> 2L)))), plotOrder = 1L, labpt = c(856106.391943348,

>> 4317264.60126758), ID = "1", area = 30651262771540.1)), plotOrder =

>> 1L, bbox = structure(c(-2331000.09677063, 1912947.1678777,

>> 4043212.88065733, 6721582.03465746), .Dim = c(2L, 2L), .Dimnames =

>> list(c("x", "y"), c("min", "max"))), proj4string = new("CRS",

>> projargs = "+init=epsg:4326 +proj=longlat +datum=WGS84 +no_defs

>> +ellps=WGS84 +towgs84=0,0,0"))

>>

>> Then let's go:

>>

>> Example #1: here, being straightforward we get two undesirable white

>> strips on the sides:

>>

>> width1<-max(bbox(bbChina)[1,])-min(bbox(bbChina)[1,])

>> height1<-max(bbox(bbChina)[2,])-min(bbox(bbChina)[2,])

>> ratio<-width1/height1

>> ratio

>> windows(height=8,width=8*ratio)

>> par(mar=c(0,0,0,0))

>> plot(bbChina,col="grey",xaxs='i', yaxs='i')

>> dev.off()

>>

>> Example #2: computing the ratio on UTM47, but plotting WGS84

>> (strange), I get a better fit but with still two small white strips

>> up and down.

>>

>> width1<-max(bbox(bbChinaUTM47)[1,])-min(bbox(bbChinaUTM47)[1,])

>> height1<-max(bbox(bbChinaUTM47)[2,])-min(bbox(bbChinaUTM47)[2,])

>> ratio<-width1/height1

>> ratio

>> windows(height=8,width=8*ratio)

>> par(mar=c(0,0,0,0))

>> plot(bbChina,col="grey",xaxs='i', yaxs='i') # no data range extension

>> (xaxs and yaxs parameters)

>>

>> dev.off()

>>

>> Example #3: multiplying the ratio by 1.04, I get a good fit

>>

>> width1<-max(bbox(bbChinaUTM47)[1,])-min(bbox(bbChinaUTM47)[1,])

>> height1<-max(bbox(bbChinaUTM47)[2,])-min(bbox(bbChinaUTM47)[2,])

>> ratio<-width1/height1

>> ratio

>> windows(height=8,width=8*ratio*1.04)

>> par(mar=c(0,0,0,0))

>> plot(bbChina,col="grey",xaxs='i', yaxs='i')

>>

>> dev.off()

>>

>> Looks like the issue has something to do with the way CRS are handled

>> when plotting objects, mmmh ? Tricky isn't it ?

>>

>

> Yes, and the section in the book only discusses projected objects, as

> geographical coordinates are stretched N-S proportionally to the

> distance from the Equator. For the UTM47 object, I have:

>

> library(sp)

> bbChinaUTM47 <-

> SpatialPolygons(list(Polygons(list(Polygon(matrix(c(-2331000.09677063,

> -2331000.09677063, 4043212.88065733, 4043212.88065733,

> -2331000.09677063, 1912947.1678777, 6721582.03465746,

> 6721582.03465746, 1912947.1678777, 1912947.1678777), ncol=2))),

> ID="1")), proj4string=CRS("+proj=utm +zone=47")) # you had longlat, so

> triggering the stretching.

>

> x11() # or equivalent

> dxy <- apply(bbox(bbChinaUTM47), 1, diff)

> dxy

> ratio <- dxy[1]/dxy[2]

> ratio

> pin <- par("pin")

> pin

> par(pin=c(ratio * pin[2], pin[2]), xaxs="i", yaxs="i")

> plot(bbChinaUTM47)

> box()

>

> where the box overlaps the SP object. To finesse:

>

> c(ratio * pin[2], pin[2])

> dev.off()

> X11(width=6.85, height=5.2)

> par(mar=c(0,0,0,0)+0.1)

> pin <- par("pin")

> par(pin=c(ratio * pin[2], pin[2]), xaxs="i", yaxs="i")

> plot(bbChinaUTM47)

> box()

> dev.off()

>

> From plot.Spatial(), asp is set to 1/cos((mean(ylim) * pi)/180) for

> geographical coordinates, where ylim is a possibly modified version of

> the N-S bounding box. This makes it harder to automate, as you'd need

> to manipulate dxy[2] above to match. So for projected objects, the

> book approach works, for non-projected objects you'd need an extra step.

>

> Hope this helps,

>

> Roger

Yes, indeed. Thanks. When one fully understands what happens, it is easier to adapt...

Now I can go ahead cleanly...

Best,

Patrick

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: how to specify the size of device so that it fits a SpatialPolygonsDataFrame plot exactly

On Sun, 28 Jul 2019, Patrick Giraudoux wrote:

>> On 28/07/2019 at 17:08, Roger Bivand wrote: This feels like a section in

>> ASDAR, ch. 3, and code chunks 24-27 in

>> https://asdar-book.org/book2ed/vis_mod.R. Could you please try that first

>> by way of something reproducible?

>

> On 28/07/2019 at 17:15, Patrick Giraudoux wrote:

>>

>> Ups... How can I have missed this chapter of my bible ;-) ? (must admit I

>> had been on google first). Will re-read it carefully and come back to the

>> list with a solution or a reproducible example, indeed.

>

>

> OK. Read it again, I was not totally lost. Here is a reproducible example. To

> ease reproducibility with simple objects, I will use two bounding boxes.

> bbChina in WGS84, bbChinaUTM47 in UTM47. I want a window fitting the WGS84,

> and you'll see I get it through a strange way.

>

> bbChina <- new("SpatialPolygons", polygons = list(new("Polygons", Polygons =

> list( new("Polygon", labpt = c(104.8, 35.95), area = 2372.28, hole = FALSE,

> ringDir = 1L, coords = structure(c(73, 73, 136.6, 136.6, 73, 17.3, 54.6,

> 54.6, 17.3, 17.3), .Dim = c(5L, 2L)))), plotOrder = 1L, labpt = c(104.8,

> 35.95), ID = "1", area = 2372.28)), plotOrder = 1L, bbox = structure(c(73,

> 17.3, 136.6, 54.6), .Dim = c(2L, 2L), .Dimnames = list(c("x", "y"), c("min",

> "max"))), proj4string = new("CRS", projargs = "+init=epsg:4326 +proj=longlat

> +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0"))

>

> bbChinaUTM47 <- new("SpatialPolygons", polygons = list(new("Polygons",

> Polygons = list( new("Polygon", labpt = c(856106.391943348, 4317264.60126758

> ), area = 30651262771540.1, hole = FALSE, ringDir = 1L, coords =

> structure(c(-2331000.09677063, -2331000.09677063, 4043212.88065733,

> 4043212.88065733, -2331000.09677063, 1912947.1678777, 6721582.03465746,

> 6721582.03465746, 1912947.1678777, 1912947.1678777), .Dim = c(5L, 2L)))),

> plotOrder = 1L, labpt = c(856106.391943348, 4317264.60126758), ID = "1", area

> = 30651262771540.1)), plotOrder = 1L, bbox = structure(c(-2331000.09677063,

> 1912947.1678777, 4043212.88065733, 6721582.03465746), .Dim = c(2L, 2L),

> .Dimnames = list(c("x", "y"), c("min", "max"))), proj4string = new("CRS",

> projargs = "+init=epsg:4326 +proj=longlat +datum=WGS84 +no_defs +ellps=WGS84

> +towgs84=0,0,0"))

>

> Then let's go:

>

> Example #1: here, being straightforward we get two undesirable white

> on the sides:

>

> width1<-max(bbox(bbChina)[1,])-min(bbox(bbChina)[1,])

> height1<-max(bbox(bbChina)[2,])-min(bbox(bbChina)[2,])

> ratio<-width1/height1

> ratio

> windows(height=8,width=8*ratio)

> par(mar=c(0,0,0,0))

> plot(bbChina,col="grey",xaxs='i', yaxs='i')

> dev.off()

>

> Example #2: computing the ratio on UTM47, but plotting WGS84 (strange), I get

> a better fit but with still two small white strips up and down.

>

> width1<-max(bbox(bbChinaUTM47)[1,])-min(bbox(bbChinaUTM47)[1,])

> height1<-max(bbox(bbChinaUTM47)[2,])-min(bbox(bbChinaUTM47)[2,])

> ratio<-width1/height1

> ratio

> windows(height=8,width=8*ratio)

> par(mar=c(0,0,0,0))

> plot(bbChina,col="grey",xaxs='i', yaxs='i') # no data range extension (xaxs

> and yaxs parameters)

>

> dev.off()

>

> Example #3: multiplying the ratio by 1.04, I get a good fit

>

> width1<-max(bbox(bbChinaUTM47)[1,])-min(bbox(bbChinaUTM47)[1,])

> height1<-max(bbox(bbChinaUTM47)[2,])-min(bbox(bbChinaUTM47)[2,])

> ratio<-width1/height1

> ratio

> windows(height=8,width=8*ratio*1.04)

> par(mar=c(0,0,0,0))

> plot(bbChina,col="grey",xaxs='i', yaxs='i')

>

> dev.off()

>

> Looks like the issue has something to do with the way CRS are handled when

> plotting objects, mmmh ? Tricky isn't it ?

Yes, and the section in the book only discusses projected objects, as

geographical coordinates are stretched N-S proportionally to the distance

from the Equator. For the UTM47 object, I have:

library(sp)

bbChinaUTM47 <-

SpatialPolygons(list(Polygons(list(Polygon(matrix(c(-2331000.09677063,

-2331000.09677063, 4043212.88065733, 4043212.88065733, -2331000.09677063,

1912947.1678777, 6721582.03465746, 6721582.03465746, 1912947.1678777,

1912947.1678777), ncol=2))), ID="1")), proj4string=CRS("+proj=utm

+zone=47")) # you had longlat, so triggering the stretching.

x11() # or equivalent

dxy <- apply(bbox(bbChinaUTM47), 1, diff)

dxy

ratio <- dxy[1]/dxy[2]

ratio

pin <- par("pin")

pin

par(pin=c(ratio * pin[2], pin[2]), xaxs="i", yaxs="i")

plot(bbChinaUTM47)

box()

where the box overlaps the SP object. To finesse:

c(ratio * pin[2], pin[2])

dev.off()

X11(width=6.85, height=5.2)

par(mar=c(0,0,0,0)+0.1)

pin <- par("pin")

par(pin=c(ratio * pin[2], pin[2]), xaxs="i", yaxs="i")

plot(bbChinaUTM47)

box()

dev.off()

From plot.Spatial(), asp is set to 1/cos((mean(ylim) * pi)/180) for

geographical coordinates, where ylim is a possibly modified version of the

N-S bounding box. This makes it harder to automate, as you'd need to

manipulate dxy[2] above to match. So for projected objects, the book

approach works, for non-projected objects you'd need an extra step.

Hope this helps,

Roger

>

>

>

--

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

https://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### Re: how to specify the size of device so that it fits a SpatialPolygonsDataFrame plot exactly

> On 28/07/2019 at 17:08, Roger Bivand wrote:

> This feels like a section in ASDAR, ch. 3, and code chunks 24-27 in

> https://asdar-book.org/book2ed/vis_mod.R. Could you please try that

> first by way of something reproducible?

On 28/07/2019 at 17:15, Patrick Giraudoux wrote:

>

> Oops... How can I have missed this chapter of my bible ;-)? (must

> admit I had been on google first). Will re-read it carefully and come

> back to the list with a solution or a reproducible example, indeed.

OK. Read it again, I was not totally lost. Here is a reproducible

example. To ease reproducibility with simple objects, I will use two

bounding boxes. bbChina in WGS84, bbChinaUTM47 in UTM47. I want a

window fitting the WGS84, and you'll see I get it through a strange way.

bbChina <- new("SpatialPolygons", polygons = list(new("Polygons",

Polygons = list( new("Polygon", labpt = c(104.8, 35.95), area = 2372.28,

hole = FALSE, ringDir = 1L, coords = structure(c(73, 73, 136.6, 136.6,

73, 17.3, 54.6, 54.6, 17.3, 17.3), .Dim = c(5L, 2L)))), plotOrder = 1L,

labpt = c(104.8, 35.95), ID = "1", area = 2372.28)), plotOrder = 1L,

bbox = structure(c(73, 17.3, 136.6, 54.6), .Dim = c(2L, 2L), .Dimnames =

list(c("x", "y"), c("min", "max"))), proj4string = new("CRS", projargs =

"+init=epsg:4326 +proj=longlat +datum=WGS84 +no_defs +ellps=WGS84

+towgs84=0,0,0"))

bbChinaUTM47 <- new("SpatialPolygons", polygons = list(new("Polygons",

Polygons = list( new("Polygon", labpt = c(856106.391943348,

4317264.60126758 ), area = 30651262771540.1, hole = FALSE, ringDir = 1L,

coords = structure(c(-2331000.09677063, -2331000.09677063,

4043212.88065733, 4043212.88065733, -2331000.09677063, 1912947.1678777,

6721582.03465746, 6721582.03465746, 1912947.1678777, 1912947.1678777),

.Dim = c(5L, 2L)))), plotOrder = 1L, labpt = c(856106.391943348,

4317264.60126758), ID = "1", area = 30651262771540.1)), plotOrder = 1L,

bbox = structure(c(-2331000.09677063, 1912947.1678777, 4043212.88065733,

6721582.03465746), .Dim = c(2L, 2L), .Dimnames = list(c("x", "y"),

c("min", "max"))), proj4string = new("CRS", projargs = "+init=epsg:4326

+proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0"))

Then let's go:

Example #1: here, being straightforward, we get two undesirable white

strips on the sides:

width1<-max(bbox(bbChina)[1,])-min(bbox(bbChina)[1,])

height1<-max(bbox(bbChina)[2,])-min(bbox(bbChina)[2,])

ratio<-width1/height1

ratio

windows(height=8,width=8*ratio)

par(mar=c(0,0,0,0))

plot(bbChina,col="grey",xaxs='i', yaxs='i')

dev.off()

Example #2: computing the ratio on UTM47, but plotting WGS84 (strange),

I get a better fit but with still two small white strips up and down.

width1<-max(bbox(bbChinaUTM47)[1,])-min(bbox(bbChinaUTM47)[1,])

height1<-max(bbox(bbChinaUTM47)[2,])-min(bbox(bbChinaUTM47)[2,])

ratio<-width1/height1

ratio

windows(height=8,width=8*ratio)

par(mar=c(0,0,0,0))

plot(bbChina,col="grey",xaxs='i', yaxs='i') # no data range extension (xaxs and yaxs parameters)

dev.off()

Example #3: multiplying the ratio by 1.04, I get a good fit

width1<-max(bbox(bbChinaUTM47)[1,])-min(bbox(bbChinaUTM47)[1,])

height1<-max(bbox(bbChinaUTM47)[2,])-min(bbox(bbChinaUTM47)[2,])

ratio<-width1/height1

ratio

windows(height=8,width=8*ratio*1.04)

par(mar=c(0,0,0,0))

plot(bbChina,col="grey",xaxs='i', yaxs='i')

dev.off()

Looks like the issue has something to do with the way CRS are handled

when plotting objects, mmmh ? Tricky isn't it ?


### Re: how to specify the size of device so that it fits a SpatialPolygonsDataFrame plot exactly

On 28/07/2019 at 17:08, Roger Bivand wrote:

> On Sat, 27 Jul 2019, Patrick Giraudoux wrote:

>

>> Dear listers,

>>

>> I would like to write jpeg (or png, etc.) maps using the function

>> graphics:jpeg (png, etc.), but with the device window exactly fitting

>> the map (= no white strip above, below or on the sides of the map).

>> The map is derived from a SpatialPolygonsDataFrame (WGS84)

>>

>> However, even if I set

>>

>> par(mar=c(0,0,0,0)) and pass xaxs='i', yaxs='i' in the plot, there are

>> still white strips

>>

>> I tried several tricks but none are fully satisfying.

>>

>> Trial #1: I computed the ratio of the bounding box and used this

>> ratio in the function jpeg with the height and width arguments. I did

>> it both in WGS84 and in UTM47:

>>

>> ratio<-(max(bbox(ChinaUTM)[1,])-min(bbox(ChinaUTM)[1,]))/(max(bbox(ChinaUTM)[2,])-min(bbox(ChinaUTM)[2,]))

>> # ratio width/height

>>

>> then

>>

>> jpeg("./MapChina.jpeg",height=8,width=8*ratio,res=300,units="in")

>>

>> par(mar=c(0,0,0,0)) # no margin

>> plot(China,col="grey",border="white",lwd=2,xaxs='i', yaxs='i')

>>

>> dev.off()

>>

>> The best is obtained with UTM47 (planar projection), however I get

>> (almost) rid of any white strip only if I add an additional 1.04

>> coefficient (obtained by trial and error)

>>

>> Hence:

>>

>> jpeg("./MapChina.jpeg",height=8,width=8*ratio*1.04,res=300,units="in")

>>

>> par(mar=c(0,0,0,0)) # no margin

>> plot(China,col="grey",border="white",lwd=2,xaxs='i', yaxs='i')

>>

>> dev.off()

>>

>> Trial #2: The other way has been to pick up the parameter values like

>> that:

>>

>> par(mar=c(0,0,0,0)) # no margin

>> plot(China,col="grey",border="white",lwd=2,xaxs='i', yaxs='i')

>> pt<-par()$pin

>>

>> ratio<-pt[2]/pt[1]

>>

>> jpeg("./MapChina.jpeg",height=8,width=8*ratio,res=300,units="in")

>> par(mar=c(0,0,0,0)) # no margin

>> plot(China,col="grey",border="white",lwd=2,xaxs='i', yaxs='i') #

>>

>> dev.off()

>>

>> Does not work either

>>

>> Any idea about how to deal with this ? In short how to get the exact

>> size of a SpatialPolygonsDataFrame plot to make the device window

>> exactly this size?

>

> This feels like a section in ASDAR, ch. 3, and code chunks 24-27 in

> https://asdar-book.org/book2ed/vis_mod.R. Could you please try that

> first by way of something reproducible?

>

> Best wishes,

>

> Roger

Oops... How can I have missed this chapter of my bible ;-)? (must admit

I had been on Google first). Will re-read it carefully and come back to

the list with a solution or a reproducible example, indeed.

Best,

Patrick


### Re: how to specify the size of device so that it fits a SpatialPolygonsDataFrame plot exactly

On Sat, 27 Jul 2019, Patrick Giraudoux wrote:

> Dear listers,

>

> I would like to write jpeg (or png, etc.) maps using the function

> graphics:jpeg (png, etc.), but with the device window exactly fitting the map

> (= no white strip above, below or on the sides of the map). The map is

> derived from a SpatialPolygonsDataFrame (WGS84)

>

> However, even if I set

>

> par(mar=c(0,0,0,0)) and pass xaxs='i', yaxs='i' in the plot, there are still

> white strips

>

> I tried several tricks but none are fully satisfying.

>

> Trial #1: I computed the ratio of the bounding box and used this ratio in

> the function jpeg with the height and width arguments. I did it both in WGS84

> and in UTM47:

>

> ratio<-(max(bbox(ChinaUTM)[1,])-min(bbox(ChinaUTM)[1,]))/(max(bbox(ChinaUTM)[2,])-min(bbox(ChinaUTM)[2,]))

> # ratio width/height

>

> then

>

> jpeg("./MapChina.jpeg",height=8,width=8*ratio,res=300,units="in")

>

> par(mar=c(0,0,0,0)) # no margin

> plot(China,col="grey",border="white",lwd=2,xaxs='i', yaxs='i')

>

> dev.off()

>

> The best is obtained with UTM47 (planar projection), however I get (almost)

> rid of any white strip only if I add an additional 1.04 coefficient

> (obtained by trial and error)

>

> Hence:

>

> jpeg("./MapChina.jpeg",height=8,width=8*ratio*1.04,res=300,units="in")

>

> par(mar=c(0,0,0,0)) # no margin

> plot(China,col="grey",border="white",lwd=2,xaxs='i', yaxs='i')

>

> dev.off()

>

> Trial #2: The other way has been to pick up the parameter values like that:

>

> par(mar=c(0,0,0,0)) # no margin

> plot(China,col="grey",border="white",lwd=2,xaxs='i', yaxs='i')

> pt<-par()$pin

>

> ratio<-pt[2]/pt[1]

>

> jpeg("./MapChina.jpeg",height=8,width=8*ratio,res=300,units="in")

> par(mar=c(0,0,0,0)) # no margin

> plot(China,col="grey",border="white",lwd=2,xaxs='i', yaxs='i') #

>

> dev.off()

>

> Does not work either

>

> Any idea about how to deal with this ? In short how to get the exact size of

> a SpatialPolygonsDataFrame plot to make the device window exactly this size?

This feels like a section in ASDAR, ch. 3, and code chunks 24-27 in

https://asdar-book.org/book2ed/vis_mod.R. Could you please try that first

by way of something reproducible?

Best wishes,

Roger

>

> Best,

>

> Patrick

>

> _______________________________________________

> R-sig-Geo mailing list

> [hidden email]

> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

--

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

https://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en


### Re: Calculating impact measures for spatial panel models and trouble specifying SDEM model using spml

On Fri, 26 Jul 2019, Letisha Sarah Fong Rui Zhen wrote:

> Dear Roger and all,

>

> In the end I used lm() to estimate my non-spatial FE models in order to extract the log likelihood values (the results are exactly the same as when using the plm() function).

>

> I also used this method to estimate the SLX model. I just used the slag() function on each explanatory variable I wanted to have a spatial lag for, added it to my dataframe, and then estimated using lm() so that I can extract the log likelihood. I also managed to run my SDEM model with the spml() function without using slag() within the model.

>
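A sketch of the workaround described above (editorial, untested; object and variable names follow the code shown later in this thread):

```r
library(splm)
# Precompute the spatial lags as ordinary columns of the pdata.frame, then
# fit the SDEM without calling slag() inside the formula:
mydata$lag_lnGDP  <- slag(mydata$lnGDP,  listw = mylistw)
mydata$lag_lnGDP2 <- slag(mydata$lnGDP2, listw = mylistw)
mydata$lag_lnEI   <- slag(mydata$lnEI,   listw = mylistw)

spatial.SDEM <- spml(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI +
                       lag_lnGDP + lag_lnGDP2 + lag_lnEI,
                     data = mydata, listw = mylistw, model = "within",
                     effect = "twoways", lag = FALSE, spatial.error = "b")
```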

> I just have one question: in the splm package documentation, there should be an extractable logLik value when I build a model using the spml() function.

>

> However, when I do summary(spatial.SEM)$logLik, the result is NULL. When I do logLik(spatial.SEM), the error message is:

>

> Error in UseMethod("logLik") :

> no applicable method for 'logLik' applied to an object of class "splm"

>

> Since spml() uses maximum likelihood estimation so why can't I extract the log likelihood value?

Suggestion: find where in the code

(https://r-forge.r-project.org/scm/viewvc.php/pkg/R/?root=splm) the

optimization takes place, and suggest the relevant patch to the package

maintainers. I can see some calls in R/likelihoodsFE.R, returning an opt

object, later assigned to optres. I can't see optres returned (LL is even

calculated but not returned either). Other files may cover other cases.

Roger

>

> Any help would be greatly appreciated!

>

> Best wishes,

> Letisha

> ________________________________

> From: Letisha Sarah Fong Rui Zhen <[hidden email]>

> Sent: Thursday, July 25, 2019 5:16:43 PM

> To: [hidden email] <[hidden email]>

> Cc: [hidden email] <[hidden email]>

> Subject: Re: [R-sig-Geo] Calculating impact measures for spatial panel models and trouble specifying SDEM model using spml

>

> Dear Roger,

>

> Thank you for your quick response.

>

> I have uploaded the spatial weights matrix and sample dataset I'm working with here: https://drive.google.com/drive/folders/1NjCODKEix-_nA5CfiIos6uiKAUbGp_BZ?usp=sharing

>

> Reading the data in and transforming them into a pdataframe and listw, respectively:

> spatialweight <- read.csv("spatialweight.csv", header = T)

> row.names(spatialweight) <- spatialweight$X

> spatialweight <- spatialweight[, -1]

> spatialweight.mat <- as.matrix(spatialweight)

> mylistw <- mat2listw(spatialweight.mat, style = "M")

> mydata <- read.csv("sampledata.csv", header = T)

> mydata <- pdata.frame(mydata, index = c("Country", "Year"))

>

> I first ran a non-spatial model to determine the best specification for fixed effects:

>

> nonspatial.pooledOLS <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "pooling")

> nonspatial.individualFE <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "within", effect = "individual")

> nonspatial.timeFE <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "within", effect = "time")

> nonspatial.twowayFE <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "within", effect = "twoways")

>

> I would like to compare these models based on log likelihood and AIC, but the plm() function does not appear to provide a log likelihood or AIC. I have read through the JSS plm article and it states that models made with the plm() function are "estimated using the lm function to the transformed data". I'm aware that we can use logLik() and AIC() for a model estimated with the lm() function. However it doesn't seem to work with the plm() function.

>

> For example, I did logLik(nonspatial.twowayFE) and AIC(nonspatial.twowayFE) but the error message for both is:

>

> Error in UseMethod("logLik") :

> no applicable method for 'logLik' applied to an object of class "c('plm', 'panelmodel')"

>

> Please let me know if I'm calling the wrong function(s) and/or if you're aware of a way to compare these models based on log likelihood and/or AIC.

>

> For the spatial models, here is my code:

>

> spatial.SDEM <- spml(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI + slag(lnGDP, listw = mylistw) + slag(lnGDP2, listw = mylistw) + slag(lnEI, listw = mylistw), data = mydata, listw = mylistw, model = "within", effect = "twoways", lag = F, spatial.error = "b")

> spatial.SEM <- spml(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, listw = mylistw, model = "within", effect = "individual", lag = F, spatial.error = "b")

> spatial.SLX <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI + slag(lnGDP, listw = mylistw) + slag(lnGDP2, listw = mylistw) + slag(lnEI, listw = mylistw), data = mydata, model = "within", effect = "individual")

>

> As in my original post, the SLX and SEM models ran OK but the error when I try to run the SDEM model is:

>

> Error in UseMethod("slag") :

> no applicable method for 'slag' applied to an object of class "c('double', 'numeric')"

>

> The variables that I use the slag() function on are all numeric, so I don't know what's wrong. I seem to be able to use slag() with plm() but not with spml(), but I don't know why this is so.

>

> I need to compare the models to see if SDEM can be reduced to one of its nested form. As was the case of the non-spatial models, I can't get the log likelihood for models created with the plm() function, so any suggestions are welcome. I've already read through the JSS articles for splm and plm as well as both documentations and there's no information on this (except that models built with the plm() function are estimated using the lm function to the transformed data).

>

> Thanks for clarifying the impact measures for SDEM and SLX. Just to check - when you say linear combination for standard errors do you mean e.g. beta1*se + theta1*se = totalse (where beta1 is the coefficient of the direct impact and theta1 is the coefficient of the indirect impact)?

>
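For reference, such a linear combination involves the covariance between the two coefficient estimates, which a simple beta1*se + theta1*se would miss. A hedged sketch (hypothetical object and coefficient names):

```r
# Sketch: SE of the total impact (beta1 + theta1) for one variable, using
# the coefficient covariance matrix of a fitted model `fit`. Names are
# hypothetical: "x" is the direct term, "lag_x" its spatial lag.
V <- vcov(fit)
total_impact <- coef(fit)["x"] + coef(fit)["lag_x"]
se_total <- sqrt(V["x", "x"] + V["lag_x", "lag_x"] + 2 * V["x", "lag_x"])
```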

> Thank you for your help!

>

> Best wishes,

> Sarah

>

> ________________________________

> From: Roger Bivand <[hidden email]>

> Sent: Wednesday, July 24, 2019 9:50:13 PM

> To: Letisha Sarah Fong Rui Zhen <[hidden email]>

> Cc: [hidden email] <[hidden email]>

> Subject: Re: [R-sig-Geo] Calculating impact measures for spatial panel models and trouble specifying SDEM model using spml

>

> On Wed, 24 Jul 2019, Letisha Sarah Fong Rui Zhen wrote:

>

>> Dear all,

>

> Please do not post HTML-formated messages.

>

>>

>> I'm working with panel data of 9 countries and 18 years and I'm

>> running fixed effects SDEM, SLX and SEM with the splm package.

>>

>> I have three questions:

>>

>> 1. I can't seem to get the SDEM model to run. My code for each of the

>> 3 models is:

>>

>> model.SDEM <- spml(ln(Y) ~ ln(GDP) + (ln(GDP))^2 + EI + RE +

>> slag(ln(GDP), listw = my.listw) + slag((ln(GDP))^2, listw = my.listw),

>> data = my.data, listw = my.listw, model = "within", effect =

>> "individual", lag = F, spatial.error = "b")

>>

>> model.SLX <- plm(ln(Y) ~ ln(GDP) + (ln(GDP))^2 + EI + RE + slag(ln(GDP),

>> listw = my.listw) + slag((ln(GDP))^2, listw = my.listw), data = my.data,

>> model = "within", effect = "individual")

>>

>> model.SEM <- spml(ln(Y) ~ ln(GDP) + (ln(GDP))^2 + EI + RE, data =

>> my.data, listw = my.listw, model = "within", effect =

>> "individual", lag = F, spatial.error = "b")

>>

>> I am able to run both SLX and SEM models without problem, but when I try

>> to run the SDEM model, the error message is:

>>

>> Error in UseMethod("slag") :

>> no applicable method for 'slag' applied to an object of class

>> "c('double', 'numeric')"

>>

>> I don't understand what is wrong here, as I have no problems with the

>> slag() function in the SLX model. My data is a pdataframe and each

>> variable is a numeric pseries.

>

> My guess would be that you should protect your square with I() in general,

> but have no idea - this is not a reproducible example.
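
The I() point generalises: inside an R model formula, ^ is the factor-crossing operator, so a squared numeric term written as x^2 silently collapses back to x. A minimal base-R sketch (built-in cars data, unrelated to the thread's models):

```r
# Inside a formula, ^ means crossing, so speed^2 reduces to speed;
# I() forces arithmetic interpretation and keeps the squared term.
fm_bad  <- dist ~ speed + speed^2      # quadratic term silently dropped
fm_good <- dist ~ speed + I(speed^2)   # genuine quadratic term

ncol(model.matrix(lm(fm_bad,  data = cars)))  # 2 (intercept, speed)
ncol(model.matrix(lm(fm_good, data = cars)))  # 3 (plus I(speed^2))
```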

>

>>

>>

>> 2. How can I calculate impact measures (direct, indirect and total) for

>> spatial panel models?

>>

>> The impacts() function in spdep doesn't work anymore and the impacts()

>> function from the spatialreg package seems to work only for

>> cross-sectional data and not panel data.

>>

>> For example, I ran:

>>

>> spatialreg::impacts(model.SLX)

>>

>> And the error message is:

>>

>> Error in UseMethod("impacts", obj) :

>> no applicable method for 'impacts' applied to an object of class

>> "c('plm', 'panelmodel')"

>>

>> I have tried methods(impacts) but none of the functions seem to work for

>> my SLX model created with the splm package.

>

> But your SLX model is created with the plm package, isn't it? The only use

> of splm is for the manual lags with slag()?

>

>>

>> I also looked at some previous examples in the splm documentation and

>> more specifically the spml() function and the example provided

>> (specifically the impact measures) doesn't work anymore:

>>

>> data(Produc, package = "plm")

>> data(usaww)

>> fm <- log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp

>> ## random effects panel with spatial lag

>> respatlag <- spml(fm, data = Produc, listw = mat2listw(usaww),

>> model="random", spatial.error="none", lag=TRUE)

>> summary(respatlag) ## calculate impact measures impac1 <-

>> impacts(respatlag, listw = mat2listw(usaww, style = "W"), time = 17)

>> summary(impac1, zstats=TRUE, short=TRUE)

>>

>

> The implemented impacts methods in splm apply to the case where the lagged

> response is included. For SDEM and SLX, you can get the impacts by

> addition for the total impacts, and by linear combination for their

> standard errors. This is not implemented in a method. Further, slag() does

> not give any impacts method any information on which variables have been

> lagged - in your illustration above EI and RE are not lagged.
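
Roger's recipe - add coefficients for the total impact, combine variances for its standard error - can be sketched as follows. The object and coefficient names (model.SLX, lnGDP, the slag() label) are assumptions for illustration; note that what is combined linearly is the coefficients, so the variance of the total picks up a covariance term rather than being a sum of standard errors:

```r
# Total impact of lnGDP in an SLX fit: direct beta plus lagged theta.
# Var(b + t) = Var(b) + Var(t) + 2*Cov(b, t) -- NOT se_b + se_t.
b <- coef(model.SLX)                  # assumed fitted plm/SLX object
V <- vcov(model.SLX)
i <- "lnGDP"                          # direct term (assumed label)
j <- "slag(lnGDP, listw = mylistw)"   # its spatial lag (assumed label)

total    <- b[i] + b[j]
se_total <- sqrt(V[i, i] + V[j, j] + 2 * V[i, j])
z_total  <- total / se_total          # Wald z for the total impact
```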

>

>> The error message when I run the impacts() function is:

>>

>> Error in UseMethod("impacts", obj) :

>> no applicable method for 'impacts' applied to an object of class "splm"

>>

>> My question is therefore, how do I go about calculating direct, indirect

>> and total impact measures for spatial panel data?

>>

>>

>> 3. How can I test if the SDEM model can be simplified to the SLX model

>> (since I estimate the SDEM by maximum likelihood (spml function) and

>> the SLX by ordinary linear regression (plm function))? From my

>> understanding the plm() function does not compute a loglikelihood or

>> AIC so I probably can't do a likelihood ratio test to choose

>> between models (I haven't tried this out because I'm stuck at

>> running the SDEM model).

>

> Do you know definitely that plm does not provide a log likelihood? I

> realise that it isn't OLS unless pooled. Have you reviewed the JSS plm and

> splm articles?

>

> Roger

>

>>

>> Any help or advice would be greatly appreciated. Thank you.

>>

>> Best wishes,

>> Sarah

>>

>>

>>

>> ________________________________

>>

>> Important: This email is confidential and may be privileged. If you are

>> not the intended recipient, please delete it and notify us immediately;

>> you should not copy or use it for any purpose, nor disclose its contents

>> to any other person. Thank you.

>>

>> [[alternative HTML version deleted]]

>>

>>

>

> --

> Roger Bivand

> Department of Economics, Norwegian School of Economics,

> Helleveien 30, N-5045 Bergen, Norway.

> voice: +47 55 95 93 55; e-mail: [hidden email]

> https://orcid.org/0000-0003-2392-6140

> https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

>


--

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

https://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


> Dear Roger and all,

>

> In the end I used lm() to estimate my non-spatial FE models in order to extract the log likelihood values (the results are exactly the same as when using the plm() function).

>

> I also used this method to estimate the SLX model. I just used the slag() function on each explanatory variable I wanted to have a spatial lag for, added it to my dataframe, and then estimated using lm() so that I can extract the log likelihood. I also managed to run my SDEM model with the spml() function without using slag() within the model.

>

> I just have one question: according to the splm package documentation, there should be an extractable logLik value when I build a model using the spml() function.

>

> However, when I do summary(spatial.SEM)$logLik, the result is NULL. When I do logLik(spatial.SEM), the error message is:

>

> Error in UseMethod("logLik") :

> no applicable method for 'logLik' applied to an object of class "splm"

>

> Since spml() uses maximum likelihood estimation so why can't I extract the log likelihood value?

Suggestion: find where in the code

(https://r-forge.r-project.org/scm/viewvc.php/pkg/R/?root=splm) the

optimization takes place, and suggest the relevant patch to the package

maintainers. I can see some calls in R/likelihoodsFE.R, returning an opt

object, later assigned to optres. I can't see optres returned (LL is even

calculated but not returned either). Other files may cover other cases.

Roger
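
Until such a patch lands, a Gaussian log-likelihood can be reconstructed from residuals. A sketch: this reproduces stats::logLik for lm/OLS and "within" fits, but for a spatial-error model it omits the log-determinant Jacobian, so it is not a substitute for the value an splm fit should return:

```r
# Profile Gaussian log-likelihood at the ML variance estimate RSS/n.
loglik_from_resid <- function(res) {
  n <- length(res)
  -n / 2 * (log(2 * pi) + log(sum(res^2) / n) + 1)
}

# e.g. loglik_from_resid(residuals(nonspatial.twowayFE))  # assumed plm fit
```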

>

> Any help would be greatly appreciated!

>

> Best wishes,

> Letisha

> ________________________________

> From: Letisha Sarah Fong Rui Zhen <[hidden email]>

> Sent: Thursday, July 25, 2019 5:16:43 PM

> To: [hidden email] <[hidden email]>

> Cc: [hidden email] <[hidden email]>

> Subject: Re: [R-sig-Geo] Calculating impact measures for spatial panel models and trouble specifying SDEM model using spml

>

> Dear Roger,

>

> Thank you for your quick response.

>

> I have uploaded the spatial weights matrix and sample dataset I'm working with here: https://drive.google.com/drive/folders/1NjCODKEix-_nA5CfiIos6uiKAUbGp_BZ?usp=sharing

>

> Reading the data in and transforming them into a pdataframe and listw, respectively:

> spatialweight <- read.csv("spatialweight.csv", header = T)

> row.names(spatialweight) <- spatialweight$X

> spatialweight <- spatialweight[, -1]

> spatialweight.mat <- as.matrix(spatialweight)

> mylistw <- mat2listw(spatialweight.mat, style = "M")

> mydata <- read.csv("sampledata.csv", header = T)

> mydata <- pdata.frame(mydata, index = c("Country", "Year"))

>

> I first ran a non-spatial model to determine the best specification for fixed effects:

>

> nonspatial.pooledOLS <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "pooling")

> nonspatial.individualFE <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "within", effect = "individual")

> nonspatial.timeFE <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "within", effect = "time")

> nonspatial.twowayFE <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "within", effect = "twoways")

>

> I would like to compare these models based on log likelihood and AIC, but the plm() function does not appear to provide a log likelihood or AIC. I have read through the JSS plm article and it states that models made with the plm() function are "estimated using the lm function to the transformed data". I'm aware that we can use logLik() and AIC() for a model estimated with the lm() function. However it doesn't seem to work with the plm() function.

>

> For example, I did logLik(nonspatial.twowayFE) and AIC(nonspatial.twowayFE) but the error message for both is:

>

> Error in UseMethod("logLik") :

> no applicable method for 'logLik' applied to an object of class "c('plm', 'panelmodel')"

>

> Please let me know if I'm calling the wrong function(s) and/or if you're aware of a way to compare these models based on log likelihood and/or AIC.

>

> For the spatial models, here is my code:

>

> spatial.SDEM <- spml(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI + slag(lnGDP, listw = mylistw) + slag(lnGDP2, listw = mylistw) + slag(lnEI, listw = mylistw), data = mydata, listw = mylistw, model = "within", effect = "twoways", lag = F, spatial.error = "b")

> spatial.SEM <- spml(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, listw = mylistw, model = "within", effect = "individual", lag = F, spatial.error = "b")

> spatial.SLX <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI + slag(lnGDP, listw = mylistw) + slag(lnGDP2, listw = mylistw) + slag(lnEI, listw = mylistw), data = mydata, model = "within", effect = "individual")

>


### Re: Calculating impact measures for spatial panel models and trouble specifying SDEM model using spml

On Thu, 25 Jul 2019, Letisha Sarah Fong Rui Zhen wrote:

> Dear Roger,

>

> Thank you for your quick response.

>

> I have uploaded the spatial weights matrix and sample dataset I'm

> working with here:

> https://drive.google.com/drive/folders/1NjCODKEix-_nA5CfiIos6uiKAUbGp_BZ?usp=sharing

Please only use built-in data - loading user-offered data is inherently

unfortunate for many reasons.

>

> Reading the data in and transforming them into a pdataframe and listw, respectively:

> spatialweight <- read.csv("spatialweight.csv", header = T)

> row.names(spatialweight) <- spatialweight$X

> spatialweight <- spatialweight[, -1]

> spatialweight.mat <- as.matrix(spatialweight)

> mylistw <- mat2listw(spatialweight.mat, style = "M")

> mydata <- read.csv("sampledata.csv", header = T)

> mydata <- pdata.frame(mydata, index = c("Country", "Year"))

>

Hand crafting weights is typically error prone, also because the

provenance of the weights (how they were constructed) is not known.
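
Some cheap sanity checks catch the most common hand-crafting errors before mat2listw() is called. A sketch with a toy 3 x 3 binary contiguity matrix standing in for spatialweight.mat (the symmetry check only applies to symmetric weight schemes such as contiguity):

```r
# Toy binary contiguity matrix: units 1-2 and 2-3 are neighbours.
W <- matrix(c(0, 1, 0,
              1, 0, 1,
              0, 1, 0), nrow = 3, byrow = TRUE)

stopifnot(isSymmetric(W))          # expected for contiguity-style weights
stopifnot(all(diag(W) == 0))       # no self-neighbours
stopifnot(all(rowSums(W) > 0))     # no observations without neighbours
# mylistw <- spdep::mat2listw(W, style = "W")  # row-standardise explicitly
```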

> I first ran a non-spatial model to determine the best specification for

> fixed effects:

>

> nonspatial.pooledOLS <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "pooling")

> nonspatial.individualFE <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "within", effect = "individual")

> nonspatial.timeFE <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "within", effect = "time")

> nonspatial.twowayFE <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "within", effect = "twoways")

>

> I would like to compare these models based on log likelihood and AIC,

> but the plm() function does not appear to provide a log likelihood or

> AIC. I have read through the JSS plm article and it states that models

> made with the plm() function are "estimated using the lm function to the

> transformed data". I'm aware that we can use logLik() and AIC() for a

> model estimated with the lm() function. However it doesn't seem to work

> with the plm() function.

So either debug the function itself and find out where the object needed

gets lost, or write to the package maintainer with a patch to return the

value, see:

https://r-forge.r-project.org/scm/viewvc.php/pkg/R/est_plm.R?view=markup&root=plm

and its use of the internal function plm:::mylm(). It runs lm(), but only

returns some of the components in the lm() object.
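
A common workaround, pending such a patch: refit the fixed-effects model as LSDV with lm(), which reproduces the "within" slope estimates and supports logLik() and AIC() directly. A sketch with built-in data (note AIC() will count the dummy coefficients as parameters):

```r
library(plm)
data("Produc", package = "plm")

# "Within" fixed-effects fit and its least-squares-dummy-variable twin.
fe   <- plm(log(gsp) ~ log(pcap) + unemp, data = Produc,
            index = c("state", "year"), model = "within")
lsdv <- lm(log(gsp) ~ log(pcap) + unemp + factor(state), data = Produc)

all.equal(unname(coef(fe)), unname(coef(lsdv)[2:3]))  # TRUE: same slopes
logLik(lsdv)
AIC(lsdv)
```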

>

> For example, I did logLik(nonspatial.twowayFE) and

> AIC(nonspatial.twowayFE) but the error message for both is:

>

> Error in UseMethod("logLik") :

> no applicable method for 'logLik' applied to an object of class

> "c('plm', 'panelmodel')"

>

> Please let me know if I'm calling the wrong function(s) and/or if you're

> aware of a way to compare these models based on log likelihood and/or

> AIC.

>

> For the spatial models, here is my code:

>

> spatial.SDEM <- spml(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI + slag(lnGDP, listw = mylistw) + slag(lnGDP2, listw = mylistw) + slag(lnEI, listw = mylistw), data = mydata, listw = mylistw, model = "within", effect = "twoways", lag = F, spatial.error = "b")

> spatial.SEM <- spml(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, listw = mylistw, model = "within", effect = "individual", lag = F, spatial.error = "b")

> spatial.SLX <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI + slag(lnGDP, listw = mylistw) + slag(lnGDP2, listw = mylistw) + slag(lnEI, listw = mylistw), data = mydata, model = "within", effect = "individual")

>

> As in my original post, the SLX and SEM models ran OK but the error when

> I try to run the SDEM model is:

>

> Error in UseMethod("slag") :

> no applicable method for 'slag' applied to an object of class

> "c('double', 'numeric')"

Please replicate with built-in data. If you cannot replicate, it is

something in your data that you need to address.

Roger
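
A reproducible version of the failing call, using only built-in data (a sketch: whether slag() is accepted inside an spml() formula is precisely what it tests; if this runs but the user's call does not, the problem is in the user's data):

```r
library(plm)
library(splm)
library(spdep)

data("Produc", package = "plm")
data("usaww", package = "splm")   # built-in US states weights matrix
usalw <- mat2listw(usaww)

# SDEM-style call mirroring the structure of the failing one.
fm <- log(gsp) ~ log(pcap) + unemp + slag(log(pcap), listw = usalw)
sdem <- spml(fm, data = Produc, listw = usalw, index = c("state", "year"),
             model = "within", effect = "individual",
             lag = FALSE, spatial.error = "b")
```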

>

> The variables that I use the slag() function on are all numeric, so I

> don't know what's wrong. I seem to be able to use slag() with plm() but

> not with spml(), but I don't know why this is so.

>

> I need to compare the models to see if SDEM can be reduced to one of its

> nested forms. As was the case with the non-spatial models, I can't get the

> log likelihood for models created with the plm() function, so any

> suggestions are welcome. I've already read through the JSS articles for

> splm and plm as well as both packages' documentation and there's no information

> on this (except that models built with the plm() function are estimated

> using the lm function to the transformed data).

>

> Thanks for clarifying the impact measures for SDEM and SLX. Just to

> check - when you say linear combination for standard errors do you mean

> e.g. beta1*se + theta1*se = totalse (where beta1 is the coefficient of

> the direct impact and theta1 is the coefficient of the indirect impact)?

>

> Thank you for your help!

>

> Best wishes,

> Sarah

>


--

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

https://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en


> Dear Roger,

>

> Thank you for your quick response.

>

> I have uploaded the spatial weights matrix and sample dataset I'm

> working with here:

> https://drive.google.com/drive/folders/1NjCODKEix-_nA5CfiIos6uiKAUbGp_BZ?usp=sharing

Please only use built-in data - loading user-offered data is inherently

unfortunate for many reasons.

>

> Reading the data in and transforming them into a pdataframe and listw, respectively:

> spatialweight <- read.csv("spatialweight.csv", header = T)

> row.names(spatialweight) <- spatialweight$X

> spatialweight <- spatialweight[, -1]

> spatialweight.mat <- as.matrix(spatialweight)

> mylistw <- mat2listw(spatialweight.mat, style = "M")

> mydata <- read.csv("sampledata.csv", header = T)

> mydata <- pdata.frame(mydata, index = c("Country", "Year"))

>

Hand crafting weights is typically error prone, also because the

provenance of the weights (how they were constructed) is not known.

> I first ran a non-spatial model to determine the best specification for

> fixed effects:

>

> nonspatial.pooledOLS <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "pooling")

> nonspatial.individualFE <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "within", effect = "individual")

> nonspatial.timeFE <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "within", effect = "time")

> nonspatial.twowayFE <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, model = "within", effect = "twoways")

>

> I would like to compare these models based on log likelihood and AIC,

> but the plm() function does not appear to provide a log likelihood or

> AIC. I have read through the JSS plm article and it states that models

> made with the plm() function are "estimated using the lm function to the

> transformed data". I'm aware that we can use logLik() and AIC() for a

> model estimated with the lm() function. However it doesn't seem to work

> with the plm() function.

So either debug the function itself and find out where the object needed

gets lost, or write to the package maintainer with a patch to return the

value, see:

https://r-forge.r-project.org/scm/viewvc.php/pkg/R/est_plm.R?view=markup&root=plm

and its use of the internal function plm:::mylm(). It runs lm(), but only

returns some of the components in the lm() object.

>

> For example, I did logLik(nonspatial.twowayFE) and

> AIC(nonspatial.twowayFE) but the error message for both is:

>

> Error in UseMethod("logLik") :

> no applicable method for 'logLik' applied to an object of class

> "c('plm', 'panelmodel')"

>

> Please let me know if I'm calling the wrong function(s) and/or if you're

> aware of a way to compare these models based on log likelihood and/or

> AIC.

>

> For the spatial models, here is my code:

>

> spatial.SDEM <- spml(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI + slag(lnGDP, listw = mylistw) + slag(lnGDP2, listw = mylistw) + slag(lnEI, listw = mylistw), data = mydata, listw = mylistw, model = "within", effect = "twoways", lag = F, spatial.error = "b")

> spatial.SEM <- spml(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI, data = mydata, listw = mylistw, model = "within", effect = "individual", lag = F, spatial.error = "b")

> spatial.SLX <- plm(lnY ~ lnGDP + lnGDP2 + U + RE + IS + lnEI + slag(lnGDP, listw = mylistw) + slag(lnGDP2, listw = mylistw) + slag(lnEI, listw = mylistw), data = mydata, model = "within", effect = "individual")

>

> As in my original post, the SLX and SEM models ran OK but the error when

> I try to run the SDEM model is:

>

> Error in UseMethod("slag") :

> no applicable method for 'slag' applied to an object of class

> "c('double', 'numeric')"

Please replicate with built-in data. If you cannot replicate, it is

something in your data that you need to address.

Roger

>

> The variables that I use the slag() function on are all numeric, so I

> don't know what's wrong. I seem to be able to use slag() with plm() but

> not with spml(), but I don't know why this is so.

>

> I need to compare the models to see if SDEM can be reduced to one of its

> nested forms. As was the case with the non-spatial models, I can't get the

> log likelihood for models created with the plm() function, so any

> suggestions are welcome. I've already read through the JSS articles for

> splm and plm as well as both documentations and there's no information

> on this (except that models built with the plm() function are estimated

> using the lm function to the transformed data).

>

> Thanks for clarifying the impact measures for SDEM and SLX. Just to

> check - when you say linear combination for standard errors do you mean

> e.g. beta1*se + theta1*se = totalse (where beta1 is the coefficient of

> the direct impact and theta1 is the coefficient of the indirect impact)?

>

> Thank you for your help!

>

> Best wishes,

> Sarah

>

> ________________________________

> From: Roger Bivand <[hidden email]>

> Sent: Wednesday, July 24, 2019 9:50:13 PM

> To: Letisha Sarah Fong Rui Zhen <[hidden email]>

> Cc: [hidden email] <[hidden email]>

> Subject: Re: [R-sig-Geo] Calculating impact measures for spatial panel models and trouble specifying SDEM model using spml

>

> On Wed, 24 Jul 2019, Letisha Sarah Fong Rui Zhen wrote:

>

>> Dear all,

>

> Please do not post HTML-formatted messages.

>

>>

>> I'm working with panel data on 9 countries over 18 years, and I'm

>> running fixed effects SDEM, SLX and SEM with the splm package.

>>

>> I have three questions:

>>

>> 1. I can't seem to get the SDEM model to run. My code for each of the

>> 3 models is:

>>

>> model.SDEM <- spml(ln(Y) ~ ln(GDP) + (ln(GDP))^2 + EI + RE +

>> slag(ln(GDP), listw = my.listw) + slag((ln(GDP))^2, listw = my.listw),

>> data = my.data, listw = my.listw, model = "within", effect =

>> "individual", lag = F, spatial.error = "b")

>>

>> model.SLX <- plm(ln(Y) ~ ln(GDP) + (ln(GDP))^2 + EI + RE + slag(ln(GDP),

>> listw = my.listw) + slag((ln(GDP))^2, listw = my.listw), data = my.data,

>> model = "within", effect = "individual")

>>

>> model.SEM <- spml(ln(Y) ~ ln(GDP) + (ln(GDP))^2 + EI + RE, data =

>> my.data, listw = my.listw, model = "within", effect =

>> "individual", lag = F, spatial.error = "b")

>>

>> I am able to run both SLX and SEM models without problem, but when I try

>> to run the SDEM model, the error message is:

>>

>> Error in UseMethod("slag") :

>> no applicable method for 'slag' applied to an object of class

>> "c('double', 'numeric')"

>>

>> I don't understand what is wrong here, as I have no problems with the

>> slag() function in the SLX model. My data is a pdataframe and each

>> variable is a numeric pseries.

>

> My guess would be that you should protect your square with I() in general,

> but I have no idea - this is not a reproducible example.
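The I() point can be checked without any spatial machinery: inside an R formula, ^ is interpreted by the formula parser as interaction crossing, so (log(x))^2 silently collapses to log(x), while I(log(x)^2) gives the literal square. A minimal illustration on toy data (not the poster's):

```r
# Toy data; x kept positive so log() is defined.
d <- data.frame(y = rnorm(10), x = exp(rnorm(10)))

# ^2 in a formula means "interactions up to order 2", and a term crossed
# with itself is just itself -- the square is silently dropped:
ncol(model.matrix(y ~ log(x) + (log(x))^2, d))  # intercept + log(x) only

# I() protects the expression so it is evaluated arithmetically:
ncol(model.matrix(y ~ log(x) + I(log(x)^2), d)) # intercept + two terms
```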

>

>>

>>

>> 2. How can I calculate impact measures (direct, indirect and total) for

>> spatial panel models?

>>

>> The impacts() function in spdep doesn't work anymore and the impacts()

>> function from the spatialreg package seems to work only for

>> cross-sectional data and not panel data.

>>

>> For example, I ran:

>>

>> spatialreg::impacts(model.SLX)

>>

>> And the error message is:

>>

>> Error in UseMethod("impacts", obj) :

>> no applicable method for 'impacts' applied to an object of class

>> "c('plm', 'panelmodel')"

>>

>> I have tried methods(impacts) but none of the functions seem to work for

>> my SLX model created with the splm package.

>

> But your SLX model is created with the plm package, isn't it? The only use

> of splm is for the manual lags with slag()?

>

>>

>> I also looked at some previous examples in the splm documentation and

>> more specifically the spml() function and the example provided

>> (specifically the impact measures) doesn't work anymore:

>>

>> data(Produc, package = "plm")

>> data(usaww)

>> fm <- log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp

>> ## random effects panel with spatial lag

>> respatlag <- spml(fm, data = Produc, listw = mat2listw(usaww),

>> model="random", spatial.error="none", lag=TRUE)

>> summary(respatlag) ## calculate impact measures impac1 <-

>> impacts(respatlag, listw = mat2listw(usaww, style = "W"), time = 17)

>> summary(impac1, zstats=TRUE, short=TRUE)

>>

>

> The implemented impacts methods in splm apply to the case where the lagged

> response is included. For SDEM and SLX, you can get the impacts by

> addition for the total impacts, and by linear combination for their

> standard errors. This is not implemented in a method. Further, slag() does

> not give any impacts method any information on which variables have been

> lagged - in your illustration above EI and RE are not lagged.
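The addition and linear combination can be scripted for any fitted model that has coef() and vcov() methods. A sketch (the function name and the coefficient labels passed in are placeholders, not splm API): the total impact of a variable is the sum of its direct and spatially lagged coefficients, and the standard error comes from the coefficient covariance matrix.

```r
# Total impact = b_direct + b_lagged; its standard error follows from
# Var(b1 + b2) = V11 + V22 + 2*V12, taken from vcov(mod).
total_impact <- function(mod, direct, lagged) {
  b <- coef(mod)
  V <- vcov(mod)
  est <- b[[direct]] + b[[lagged]]
  se <- sqrt(V[direct, direct] + V[lagged, lagged] + 2 * V[direct, lagged])
  c(total = est, se = se)
}
```

Usage would look like total_impact(model.SLX, "lnGDP", "slag(lnGDP, listw = mylistw)"), using whatever coefficient names the fitted model actually reports.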

>

>> The error message when I run the impacts() function is:

>>

>> Error in UseMethod("impacts", obj) :

>> no applicable method for 'impacts' applied to an object of class "splm"

>>

>> My question is therefore, how do I go about calculating direct, indirect

>> and total impact measures for spatial panel data?

>>

>>

>> 3. How can I test if the SDEM model can be simplified to the SLX model

>> (since I estimate the SDEM by maximum likelihood (spml function) and

>> the SLX by ordinary linear regression (plm function))? From my

>> understanding, the plm() function does not compute a log-likelihood or

>> AIC, so I probably can't do a likelihood ratio test to choose

>> between models (I haven't tried this out because I'm stuck at

>> running the SDEM model).

>

> Do you know definitely that plm does not provide a log likelihood? I

> realise that it isn't OLS unless pooled. Have you reviewed the JSS plm and

> splm articles?

>

> Roger

>

>>

>> Any help or advice would be greatly appreciated. Thank you.

>>

>> Best wishes,

>> Sarah

>>

>>

>>

>> ________________________________

>>

>> Important: This email is confidential and may be privileged. If you are

>> not the intended recipient, please delete it and notify us immediately;

>> you should not copy or use it for any purpose, nor disclose its contents

>> to any other person. Thank you.

>>

>> [[alternative HTML version deleted]]

>>

>>

>

> --

> Roger Bivand

> Department of Economics, Norwegian School of Economics,

> Helleveien 30, N-5045 Bergen, Norway.

> voice: +47 55 95 93 55; e-mail: [hidden email]

> https://orcid.org/0000-0003-2392-6140

> https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

>

>

--

Roger Bivand

Department of Economics, Norwegian School of Economics,

Helleveien 30, N-5045 Bergen, Norway.

voice: +47 55 95 93 55; e-mail: [hidden email]

https://orcid.org/0000-0003-2392-6140

https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

_______________________________________________

R-sig-Geo mailing list

[hidden email]

https://stat.ethz.ch/mailman/listinfo/r-sig-geo


### how to specify the size of device so that it fits a SpatialPolygonsDataFrame plot exactly

Dear listers,

I would like to write jpeg (or png, etc.) maps using the function

grDevices::jpeg (png, etc.), but with the device window exactly fitting

the map (= no white strip above, below or on the sides of the map). The

map is derived from a SpatialPolygonsDataFrame (WGS84).

However, even if I set

par(mar=c(0,0,0,0)) and pass xaxs='i', yaxs='i' in the plot, there are

still white strips.

I tried several tricks but none are fully satisfying.

Trial #1: I computed the aspect ratio of the bounding box and used this

ratio for the height and width arguments of jpeg(). I did it both

in WGS84 and in UTM47:

ratio<-(max(bbox(ChinaUTM)[1,])-min(bbox(ChinaUTM)[1,]))/(max(bbox(ChinaUTM)[2,])-min(bbox(ChinaUTM)[2,]))

# ratio width/height

then

jpeg("./MapChina.jpeg",height=8,width=8*ratio,res=300,units="in")

par(mar=c(0,0,0,0)) # no margin

plot(China,col="grey",border="white",lwd=2,xaxs='i', yaxs='i')

dev.off()

The best result is obtained with UTM47 (a planar projection); however, I

get (almost) rid of the white strips only if I add an additional 1.04

coefficient (obtained by trial and error).

Hence:

jpeg("./MapChina.jpeg",height=8,width=8*ratio*1.04,res=300,units="in")

par(mar=c(0,0,0,0)) # no margin

plot(China,col="grey",border="white",lwd=2,xaxs='i', yaxs='i')

dev.off()

Trial #2: The other way was to pick up the plot's parameter values like this:

par(mar=c(0,0,0,0)) # no margin

plot(China,col="grey",border="white",lwd=2,xaxs='i', yaxs='i')

pt<-par()$pin

ratio<-pt[2]/pt[1]

jpeg("./MapChina.jpeg",height=8,width=8*ratio,res=300,units="in")

par(mar=c(0,0,0,0)) # no margin

plot(China,col="grey",border="white",lwd=2,xaxs='i', yaxs='i') #

dev.off()

This does not work either.

Any idea how to deal with this? In short, how can I get the exact

size of a SpatialPolygonsDataFrame plot and make the device window exactly

this size?
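One way to attack this (a sketch, not from the thread): fix the height, derive the width from the bounding box, and set xaxs/yaxs through par() before plotting, since arguments passed to plot() may not be forwarded by every Spatial plot method and the default 4% axis expansion is what leaves the residual strips. The helper below only needs a plain 2x2 matrix of the kind sp::bbox() returns.

```r
# Device dimensions (inches) whose aspect ratio matches a bounding box.
# bb: 2x2 matrix, rows = x/y, columns = min/max (as from sp::bbox()).
device_dims <- function(bb, height = 8) {
  ratio <- diff(bb[1, ]) / diff(bb[2, ])  # width / height in data units
  c(width = height * ratio, height = height)
}

## Usage (assumes a planar SpatialPolygons object `ChinaUTM`):
## dims <- device_dims(bbox(ChinaUTM))
## jpeg("MapChina.jpeg", width = dims["width"], height = dims["height"],
##      units = "in", res = 300)
## par(mar = c(0, 0, 0, 0), xaxs = "i", yaxs = "i")  # set BEFORE plot()
## plot(ChinaUTM, col = "grey", border = "white", lwd = 2)
## dev.off()
```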

Best,

Patrick



### Re: Simple Ripley's CSR test for marked point patterns --- addendum.

I have realised that there were a couple of oversights in my previous

posting on this issue. One is a bit subtle; the other is a bit of a

blunder on my part.

First the "subtle" one. The test that I proposed for CSRI is a test done

using the estimated parameters of the proposed model to generate

realisations of data sets under the null hypothesis. Such tests tend to

be conservative. (See section 10.6.3, p. 388 ff., in [1].)

In the current instance (testing for CSRI) the conservatism can be

overcome by simulating data conditional on the numbers of points of each

type in the "real" data. This can be done here via:

foo <- function(W){

s <- runifpoint(10,win=W)

m <- runifpoint(9,win=W)

l <- runifpoint(27,win=W)

superimpose(s=s,m=m,l=l)

}

simex <- expression(foo(W))

and then

set.seed(42)

E <- envelope(syn.ppp,simulate=simex,savefuns=TRUE)

dtst <- dclf.test(E)

mtst <- mad.test(E)

This gives p-values of 0.06 from the dclf test and 0.09 from the mad

test. Thus there appears to be some slight evidence against the null

hypothesis. (Evidence at the 0.10 significance level.)

That this should be so is *OBVIOUS* (!!!) if we turn to the unsubtle

point that I overlooked. It is clear that the pattern of ants' nests

cannot be truly a realisation of a Poisson process since there must be a

bit of a "hard core" effect. Two ants' nests cannot overlap. Thus if

we approximate the shape of each nest by a disc, points i and j must be

a distance of at least r_i + r_j from each other, where r_i =

sqrt(area_i/pi), and similarly for r_j.

However I note that the data provided seem to violate this principle in

several instances. E.g. points 41 and 42 are a distance of only 0.2460

metres apart but areas 41 and 42 are 12.9 and 15.2 square metres,

yielding putative radii of 2.0263 and 2.1996 metres, whence the closest

these points could possibly be (under the "disc-shaped" assumption) is

4.2259 metres, far larger than 0.2460. So something is a bit out of

whack here. Perhaps these are made-up ("synthetic") data and the

process of making up the data did not take account of the minimum

distance constraint.
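The overlap check itself is easy to script. A sketch in base R (the vectors x, y, area for nest coordinates and areas are hypothetical, not the poster's data): it flags every pair of points closer together than the sum of their disc-equivalent radii.

```r
# Flag pairs of points whose distance is smaller than the sum of the
# radii of discs having the given areas (r = sqrt(area/pi)).
hardcore_violations <- function(x, y, area) {
  r <- sqrt(area / pi)
  d <- as.matrix(dist(cbind(x, y)))   # pairwise Euclidean distances
  which(d < outer(r, r, "+") & upper.tri(d), arr.ind = TRUE)
}
```

Applied to the example above, two points 0.2460 m apart with areas 12.9 and 15.2 square metres would be flagged as a violation.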

How to incorporate the "hard core" aspect of your (real?) data into the

modelling exercise, and what the impact of it is upon your research

question(s), is unclear to me and is likely to be complicated.

cheers,

Rolf

--

Honorary Research Fellow

Department of Statistics

University of Auckland

Phone: +64-9-373-7599 ext. 88276

[1] Spatial Point Patterns: Methodology and Applications with R

1st Edition, Adrian Baddeley, Ege Rubak, Rolf Turner

Chapman and Hall/CRC, 2015
