Subscribe to R-sig-geo feed
This is an archive for R-sig-geo, not a forum. It is not possible to post through Nabble: you may not start a new thread or follow up on an existing one. If you wish to post but are not subscribed to the list through the list homepage, subscribe first (through the list homepage, not via Nabble) and post from your subscribed email address. Until 2015-06-20, subscribers could post through Nabble, but the policy was changed because too many non-subscribers misunderstood the interface.

Re: unsubscribing R mailing list

Tue, 09/22/2020 - 08:33
On Mon, 21 Sep 2020, Adeela Munawar wrote:

> How do I unsubscribe from this mailing list?

Each mailing list posting has a minimal footer, which includes a link to
the list page: https://stat.ethz.ch/mailman/listinfo/r-sig-geo, which in
turn contains instructions on how to unsubscribe.

List members whose email addresses stop accepting messages, for whatever
reason, are detected by the server and unsubscribed automatically after a
grace period.

Roger Bivand
List admin.

>
> [[alternative HTML version deleted]]
>
> _______________________________________________
> R-sig-Geo mailing list
> [hidden email]
> https://stat.ethz.ch/mailman/listinfo/r-sig-geo
>

--
Roger Bivand
Department of Economics, Norwegian School of Economics,
Helleveien 30, N-5045 Bergen, Norway.
voice: +47 55 95 93 55; e-mail: [hidden email]
https://orcid.org/0000-0003-2392-6140
https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: raster: extract() based in a pixel value

Tue, 09/22/2020 - 07:44
Hi there

adapt your function for extraction as follows:

percentual_1s<- function(x,...) {
  leng1<-length(which(x==1))
  lengtotal<-length(x)
  perc<-(leng1/lengtotal)*100
  return(perc)
}

And add "na.rm = FALSE" in the extract call:

cent_max <- extract(ras, pts[,1:2], buffer = 1000000, fun=percentual_1s, df=TRUE, na.rm = FALSE)

That gives me:

head(cent_max)
  ID    layer
1  1 49.81865
2  2 50.27545
3  3 50.03113
4  4 50.29819
5  5 50.39391
6  6 48.89556

The values I get make sense to me (around 50%).
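As a side note, the same percentage can be written more compactly. This is just a sketch of an equivalent formulation: `sum(x == 1, na.rm = TRUE)` counts the cells equal to 1 (matching `length(which(x == 1))`, since which() already drops NAs), while `length(x)` counts every cell in the buffer, NA or not.

```r
# Equivalent, more compact version of percentual_1s
percentual_1s <- function(x, ...) {
  100 * sum(x == 1, na.rm = TRUE) / length(x)
}
```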

HTH, Rafi


> On 22 Sep 2020, at 14:26, ASANTOS via R-sig-Geo <[hidden email]> wrote:
>
> Dear R-sig-geo Members,
>
> I'd like to use the extract() function from the raster package to
> calculate the proportion of pixels with value 1 inside a buffer around
> given coordinates in a raster. I tried to create a function for this
> without success; here is my hypothetical example:
>
> #Package
> library(raster)
>
> # Create a raster
> ras <- raster(ncol=1000, nrow=1000)
> set.seed(0)
> values(ras) <- runif(ncell(ras))
> values(ras)[values(ras) > 0.5] = 1
> values(ras)[values(ras) < 0.5] = NA
>
> # Create some coordinates
> pts<-sampleRandom(ras, size=30, xy=TRUE)
> pts.df<-as.data.frame(pts)
> pts.df$area<-rnorm(30, mean=10) # artificial covariate with no direct
> # bearing on my question
>
> #Function for extract proportion of 1s values
> percentual_1s<- function(x,...) {
>   leng1<-length(values(x) ==1) # length of 1s pixels values
>   lengtotal<-length(x) # total length of pixels inside buffer
>   perc<-(leng1/lengtotal)*100
>   return(perc)
> }
>
> # Extract the desired proportion in a circular 100000-unit buffer
> cent_max <- extract(ras,          # raster layer
>     cbind(pts.df$x,pts.df$y),     # centroids for the buffers
>     buffer = 100000,              # buffer size
>     fun=percentual_1s,            # what value to extract
>     df=TRUE)
>
>
> This doesn't work, even though the code looks OK. My desired output is:
>
> #         x      y layer      area percentual_1s
> #1 -109.26 -43.65      1 10.349010         23.15
> #2   93.42 -87.21      1  9.861920         45.18
> #3   57.06  86.85      1  8.642071         74.32
> #4 -109.98 -45.63      1 10.376485         11.56
> #5  -92.34  37.89      1 10.375138         56.89
> #6   19.62  21.51      1  8.963949         88.15
>
>
> Any ideas, or another package that could help with this operation?
>
> Thanks in advance,
>
>
> --
> Alexandre dos Santos
> Geotechnologies and Spatial Statistics applied to Forest Entomology
> Instituto Federal de Mato Grosso (IFMT) - Campus Caceres
> Caixa Postal 244 (PO Box)
> Avenida dos Ramires, s/n - Vila Real
> Caceres - MT - CEP 78201-380 (ZIP code)
> Phone: (+55) 65 99686-6970 / (+55) 65 3221-2674
> Lattes CV: http://lattes.cnpq.br/1360403201088680
> OrcID: orcid.org/0000-0001-8232-6722
> ResearchGate: www.researchgate.net/profile/Alexandre_Santos10
> Publons: https://publons.com/researcher/3085587/alexandre-dos-santos/
> --
>
>
> [[alternative HTML version deleted]]
>
> _______________________________________________
> R-sig-Geo mailing list
> [hidden email]
> https://stat.ethz.ch/mailman/listinfo/r-sig-geo
_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

raster: extract() based in a pixel value

Tue, 09/22/2020 - 07:26
Dear R-sig-geo Members,

I'd like to use the extract() function from the raster package to
calculate the proportion of pixels with value 1 inside a buffer around
given coordinates in a raster. I tried to create a function for this
without success; here is my hypothetical example:

#Package
library(raster)

# Create a raster
ras <- raster(ncol=1000, nrow=1000)
set.seed(0)
values(ras) <- runif(ncell(ras))
values(ras)[values(ras) > 0.5] = 1
values(ras)[values(ras) < 0.5] = NA

# Create some coordinates
pts<-sampleRandom(ras, size=30, xy=TRUE)
pts.df<-as.data.frame(pts)
pts.df$area<-rnorm(30, mean=10) # artificial covariate with no direct
                                # bearing on my question

#Function for extract proportion of 1s values
percentual_1s<- function(x,...) {
  leng1<-length(values(x) ==1) # length of 1s pixels values
  lengtotal<-length(x) # total length of pixels inside buffer
  perc<-(leng1/lengtotal)*100
  return(perc)
}

# Extract the desired proportion in a circular 100000-unit buffer
cent_max <- extract(ras,          # raster layer
    cbind(pts.df$x,pts.df$y),     # centroids for the buffers
    buffer = 100000,              # buffer size
    fun=percentual_1s,            # what value to extract
    df=TRUE)


This doesn't work, even though the code looks OK. My desired output is:

#         x      y layer      area percentual_1s
#1 -109.26 -43.65      1 10.349010         23.15
#2   93.42 -87.21      1  9.861920         45.18
#3   57.06  86.85      1  8.642071         74.32
#4 -109.98 -45.63      1 10.376485         11.56
#5  -92.34  37.89      1 10.375138         56.89
#6   19.62  21.51      1  8.963949         88.15


Any ideas, or another package that could help with this operation?

Thanks in advance,


--
Alexandre dos Santos
Geotechnologies and Spatial Statistics applied to Forest Entomology
Instituto Federal de Mato Grosso (IFMT) - Campus Caceres
Caixa Postal 244 (PO Box)
Avenida dos Ramires, s/n - Vila Real
Caceres - MT - CEP 78201-380 (ZIP code)
Phone: (+55) 65 99686-6970 / (+55) 65 3221-2674
Lattes CV: http://lattes.cnpq.br/1360403201088680
OrcID: orcid.org/0000-0001-8232-6722
ResearchGate: www.researchgate.net/profile/Alexandre_Santos10
Publons: https://publons.com/researcher/3085587/alexandre-dos-santos/
--


        [[alternative HTML version deleted]]

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: unsubscribing R mailing list

Mon, 09/21/2020 - 12:07
It is very easy:

Go to the end of this web page:

https://stat.ethz.ch/mailman/listinfo/r-sig-geo and unsubscribe yourself
from the list. Cheers, Marcelino


On 21/09/2020 at 18:58, Adeela Munawar wrote:
> How do I unsubscribe from this mailing list?
>
> [[alternative HTML version deleted]]
>
> _______________________________________________
> R-sig-Geo mailing list
> [hidden email]
> https://stat.ethz.ch/mailman/listinfo/r-sig-geo
> .


--
Marcelino de la Cruz Rot
Depto. de Biología y Geología
Física y Química Inorgánica
Universidad Rey Juan Carlos
Móstoles España

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

unsubscribing R mailing list

Mon, 09/21/2020 - 11:58
How do I unsubscribe from this mailing list?

        [[alternative HTML version deleted]]

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

raster multicore processing: WriteValues inside foreach loop

Sun, 09/20/2020 - 05:44
Dear list

with package raster it's possible to process large images in blocks, like
this:

library(raster)

r<- raster(system.file("img", "Rlogo.tiff", package="tiff"))
s <- raster(r)
b <- blockSize(r)
s <- writeStart(s, filename=tempfile(fileext = ".tif"),  overwrite=TRUE)

for (i in 1:b$n){
  v <- getValuesBlock(r, row=b$row[i], nrows=b$nrows[i])
  s <- writeValues(s, v, b$row[i])
}

s <- writeStop(s)
plot(s)

However, I wonder if it's possible to replace the for loop by foreach, like
this:

library(foreach)
cl <- parallel::makeCluster(2)
doSNOW::registerDoSNOW(cl)

r<- raster(system.file("img", "Rlogo.tiff", package="tiff"))
s <- raster(r)
b <- blockSize(r)
s <- writeStart(s, filename=tempfile(fileext = ".tif"),  overwrite=TRUE)

foreach (i=1:b$n, .packages = "raster") %dopar% {
  v <- getValuesBlock(r, row=b$row[i], nrows=b$nrows[i])
  s <- writeValues(s, v, b$row[i])
}
parallel::stopCluster(cl)

s <- writeStop(s)

However, the code above fails with:

Error in { : task 1 failed - "Null external pointer"

Apparently, writeValues() inside foreach is not able to write to the
.tif file. Is it possible to fix this?
Could foreach be an easier alternative to the multi-core functions
exemplified in the raster vignette here
<https://rspatial.org/raster/pkg/appendix1.html>?
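[Editor's note: the "Null external pointer" most likely arises because a RasterLayer's open file handle is an external pointer, which cannot be serialized to the worker processes; the workers also cannot all write to the single open output file. One possible alternative, sketched here with the cluster helpers that the raster package itself provides (untested on your data), lets the package run the per-cell work on workers while the master process alone writes the output:]

```r
library(raster)

r <- raster(system.file("img", "Rlogo.tiff", package = "tiff"))

beginCluster(2)  # start two worker R processes
# clusterR() runs a cell-based function on the workers (here calc with
# an identity function, standing in for real per-block work); only the
# master process writes the output file.
s <- clusterR(r, calc, args = list(fun = function(x) x),
              filename = tempfile(fileext = ".tif"), overwrite = TRUE)
endCluster()
plot(s)
```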

Thanks
Hugo

        [[alternative HTML version deleted]]

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Apply local spectral histogram on satellite image

Sat, 09/19/2020 - 03:49

Hi everyone,

 

Has anyone had experience applying the local spectral histogram to satellite images?

 

Gs. Dennis TAM Tze Huey (谭子伟)

PhD Candidate

Department of Remote Sensing,
Faculty of Built Environment and Surveying,

Universiti Teknologi Malaysia,
81310 UTM, Johor Bahru, Johor,
MALAYSIA.

 


_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: Execute Extract function on several Raster with points layer

Thu, 09/17/2020 - 14:20
Dear Gaëtan,
(cc r-sig-geo)
please post your mails in this topic to the mailing list.
I don't really know what 'my "tmax_filesnames" object is a "large
character"' means. tmax_filenames is an ordinary character vector of 3660
elements, so it should not cause any problem.
Anyway, the error message indicates that one or more of the file names
are not correct. You should carefully check whether tmax_filenames was
generated appropriately.
Best wishes,
Ákos

On 2020.09.17. 21:00, Gaetan Martinelli wrote:
> Hello again,
>
> The Stack function doesn't work because my "tmax_filesnames" object is
> a "large character".
> Here is the error message I received after this line on my script :
> > tmax_raster <- raster::stack(tmax_filenames)
> Error in .local(.Object, ...) :
> Error in .rasterObjectFromFile(x, band = band, objecttype =
> "RasterLayer",  :
>   Cannot create a RasterLayer object from this file. (file does not exist)
>
> How do I rectify this error?
> Should I transform my large character object?
>
> Thanks again Àkos.
>
> Gaëtan
>
> *Sent:* Thursday, 17 September 2020 at 13:57
> *From:* "Bede-Fazekas Ákos" <[hidden email]>
> *To:* [hidden email]
> *Subject:* Re: [R-sig-Geo] Execute Extract function on several Raster
> with points layer
> Hello Gaëtan,
>
> so as far as I understand, you have 3 main folders:
> "Max_T", ? and ?
> and in all three folders, there are subfolders
> "1961", "1962", ... "1970"
> In each folder, there are 366 raster files, for which the file naming
> conventions are not known by us, but some of the files are called
> "max1961_1.asc", "max1961_2.asc", ... "max1961_366.asc" (in case of
> T_max and year 1961)
>
> In this case, the 10980 layers that belong to T_max can be read into one
> large RasterStack in this way:
> tmax_filenames <- c(outer(X = as.character(1:366), Y =
> as.character(1961:1970), FUN = function(doy, year) paste0("N:/400074
> Conservation des sols et CC/Data/Climate data/Climate-10km/Max_T/",
> year, "/max", year, "_", doy, ".asc")))
> tmax_raster <- stack(tmax_filenames)
>
> You can give self-explanatory names to the raster layers:
> names(tmax_raster) <- c(outer(X = as.character(1:366), Y =
> as.character(1961:1970), FUN = function(doy, year) paste0(year, "_",
> doy)))
>
> But if the structure of the rasters are the same (i.e. the cell size,
> extent, projection), then I recommend you to do the raster-vector
> overlay once, save the cell numbers that you are interested in, and then
> in nested for loops (one loop for the climate variable, one for the year
> and one for the day) read the rasters one-by-one, extract the values
> according to the cell numbers, and save the result in a previously
> created data.frame. In this way, you may not encounter memory issues.
> Although, it will take a lot of time...
>
> HTH,
> Ákos Bede-Fazekas
> Hungarian Academy of Sciences
>
On 2020.09.17. 19:28, Gaetan Martinelli wrote:
> > Hello everyone, R team,
> >
> > Sorry in advance for this long message. Your help will be invaluable.
> >
> > For a few days now I have been stuck trying to execute a task in R. I
> > will try to summarize my problem.
> >
> > I have several rasters: one single-band ASCII file for each day of a
> > year, for 30 years, and for three climatic variables on a 10km/10km
> > grid (T_min, T_max, Precipitation). So I have a total of around
> > 32 940 raster files (366 days * 30 years * 3 variables).
> >
> > Also, I have a layer of around 1000 points.
> >
> > I tried to use the stack function and then make the intersection for
> > each raster file with my 1000 points.
> > I cannot create an independent matrix for each file to which I applied
> > the "extract" function, to then concatenate all my matrices in order
> > to have a single table.
> >
> > I tried this, an example for 10 years and only T_Max (my files are
> > organized the same way for my two other variables):
> > *#Datapoints*
> > Datapoints<-readOGR(dsn="H:/Inventaire/R/final",
> >                layer="Centroid_champs")
> > Datapoints<- spTransform (Datapoints, CRS ("+init=epsg:4326") ) # 1022
> > points in the data
> > st_crs(Datapoints)
> > *#Rasters files*
> > folders = list(
> >   file.path('N:','Data','Climate data','Climate-10km','Max_T','1961'),
> > #Each year includes daily data, the names of my several raster is
> > "max1961_1", "max1961_10", "max1961_100", etc...
> >   file.path('N:','Data','Climate data','Climate-10km','Max_T','1962'),
> >   file.path('N:','Data','Climate data','Climate-10km','Max_T','1963'),
> >   file.path('N:','Data','Climate data','Climate-10km','Max_T','1964'),
> >   file.path('N:','Data','Climate data','Climate-10km','Max_T','1965'),
> >   file.path('N:','Data','Climate data','Climate-10km','Max_T','1966'),
> >   file.path('N:','Data','Climate data','Climate-10km','Max_T','1967'),
> >   file.path('N:','Data','Climate data','Climate-10km','Max_T','1968'),
> >   file.path('N:','Data','Climate data','Climate-10km','Max_T','1969'),
> >   file.path('N:','Data','Climate data','Climate-10km','Max_T','1970')
> > )
> > files = unlist(sapply(folders, function(folder) {
> >   list.files(folder, full.names=TRUE)
> > }))
> > files
> >
> > MET <- lapply(files, raster)
> > s <- raster::stack(MET)
> >
> > output <- list()
> > for(i in 1:length(MET)){
> >   output[[i]] <- extract(s, Datapoints)
> >   names(output)[[i]] <- paste("Année", MET[i], sep = "_")
> > }
> > Also, i tried that :
> > p1 <- 1022 (ID of my DataPoints) ; p2 <- 1 (column where there are the
> > values ​​extracted from my raster) ; p3 <- 3660     # 3660matrix (366
> > day* 10 years)
> > matlist <- list(array(NA,c(p1,p2,p3)))  # doing a list of independant
> > matrix
> >
> > for(i in seq_along(MET)){
> >
> >   matlist[[i]] <- extract(s, Datapoints)
> > }
> > But, nothing works...
> > I would like my script to perform these actions :
> > - For each Raster in my Rasterstack, extract the climatic data values
> > ​​and link them to my "Datapoints",
> > - Take the name of my file, take the first three characters of the
> > name to get a column of my weather variable, here, "T_Max" (column
> > with my raster values) ; Take the following four characters then
> > report this information in a new column "Year", and finally, take the
> > last characters of the file name to create a new column "Day".
> > - Concatenate all the independent output matrices corresponding to
> > each intersection made with my different raster files
> > In the end, I would have a huge table, but one that will allow me to
> > do my analysis :
> > Table with 9 attributes (6 attributs of my points + Year + Day +
> > T_Max) like this :
> > ID Datapoint Year Day T_Max
> > 1 1960 1
> > 2 1960 1
> > …... 1960 1
> > 1022 1960 1
> > 1 1960 2
> > 2 1960 2
> > …... 1960 2
> > 1022 1960 2
> > ….. ….. …..
> > 1 1970 1
> > 2 1970 1
> > …... 1970 1
> > 1022 1970 1
> > 1 1970 2
> > 2 1970 2
> > …... 1970 2
> > 1022 1970 2
> > ….. ….. …..
> >
> > Could a loop do this task ?
> >
> > I'm sorry, I am gradually learning to work with R, but this exercise
> > is more difficult than expected... Please feel free to tell me if my
> > question is inappropriate.
> >
> > Thank you very much in advance for your answers. Your help or your
> > comments will be really appreciated.
> >
> > Have a good day.
> >
> > Gaëtan Martinelli
> > Water and Agriculture research professional in Quebec.
> >
> > _______________________________________________
> > R-sig-Geo mailing list
> > [hidden email]
> > https://stat.ethz.ch/mailman/listinfo/r-sig-geo
>
>
> [[alternative HTML version deleted]]
>
> _______________________________________________
> R-sig-Geo mailing list
> [hidden email]
> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

        [[alternative HTML version deleted]]

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: Execute Extract function on several Raster with points layer

Thu, 09/17/2020 - 13:40
  Thank you very much for the reply.

It's exactly that. My three main folders are "Max_T", "Min_T" and "PCP" (for precipitation); they have subfolders per year, with 366 raster files per year. My rasters all have the same structure.
Thanks for all these elements, I'll try that.

Having an identical structure for my 30 years and my 3 variables, I will also try the second method. But it will surely take longer because of my memory. From what I understand, I have to create my empty output array before extract.

Sent: Thursday, 17 September 2020 at 13:57
From: "Bede-Fazekas Ákos" <[hidden email]>
To: [hidden email]
Subject: Re: [R-sig-Geo] Execute Extract function on several Raster with points layer

Hello Gaëtan,

so as far as I understand, you have 3 main folders:
"Max_T", ? and ?
and in all three folders, there are subfolders
"1961", "1962", ... "1970"
In each folder, there are 366 raster files, for which the file naming
conventions are not known by us, but some of the files are called
"max1961_1.asc", "max1961_2.asc", ... "max1961_366.asc" (in case of
T_max and year 1961)  
In this case, the 10980 layers that belong to T_max can be read into one
large RasterStack in this way:
tmax_filenames <- c(outer(X = as.character(1:366), Y =
as.character(1961:1970), FUN = function(doy, year) paste0("N:/400074
Conservation des sols et CC/Data/Climate data/Climate-10km/Max_T/",
year, "/max", year, "_", doy, ".asc")))
tmax_raster <- stack(tmax_filenames)
You can give self-explanatory names to the raster layers:
names(tmax_raster) <- c(outer(X = as.character(1:366), Y =
as.character(1961:1970), FUN = function(doy, year) paste0(year, "_", doy)))

But if the structure of the rasters is the same (i.e. the cell size,
extent, projection), then I recommend you to do the raster-vector
overlay once, save the cell numbers that you are interested in, and then
in nested for loops (one loop for the climate variable, one for the year
and one for the day) read the rasters one-by-one, extract the values
according to the cell numbers, and save the result in a previously
created data.frame. In this way, you may not encounter memory issues.
Although, it will take a lot of time...

HTH,
Ákos Bede-Fazekas
Hungarian Academy of Sciences

On 2020.09.17. 19:28, Gaetan Martinelli wrote:
> Hello everyone, R team,
>
> Sorry in advance for this long message. Your help will be invaluable.
>
> For a few days now I have been stuck trying to execute a task in R. I
> will try to summarize my problem.
>
> I have several rasters: one single-band ASCII file for each day of a
> year, for 30 years, and for three climatic variables on a 10km/10km
> grid (T_min, T_max, Precipitation). So I have a total of around
> 32 940 raster files (366 days * 30 years * 3 variables).
>
> Also, I have a layer of around 1000 points.
>
> I tried to use the stack function and then make the intersection for
> each raster file with my 1000 points.
> I cannot create an independent matrix for each file to which I applied
> the "extract" function, to then concatenate all my matrices in order
> to have a single table.
>
> I tried this, an example for 10 years and only T_Max (my files are
> organized the same way for my two other variables):
> *#Datapoints*
> Datapoints<-readOGR(dsn="H:/Inventaire/R/final",
>                layer="Centroid_champs")
> Datapoints<- spTransform (Datapoints, CRS ("+init=epsg:4326") ) # 1022
> points in the data
> st_crs(Datapoints)
> *#Rasters files*
> folders = list(
>   file.path('N:','Data','Climate data','Climate-10km','Max_T','1961'),
> #Each year includes daily data, the names of my several raster is
> "max1961_1", "max1961_10", "max1961_100", etc...
>   file.path('N:','Data','Climate data','Climate-10km','Max_T','1962'),
>   file.path('N:','Data','Climate data','Climate-10km','Max_T','1963'),
>   file.path('N:','Data','Climate data','Climate-10km','Max_T','1964'),
>   file.path('N:','Data','Climate data','Climate-10km','Max_T','1965'),
>   file.path('N:','Data','Climate data','Climate-10km','Max_T','1966'),
>   file.path('N:','Data','Climate data','Climate-10km','Max_T','1967'),
>   file.path('N:','Data','Climate data','Climate-10km','Max_T','1968'),
>   file.path('N:','Data','Climate data','Climate-10km','Max_T','1969'),
>   file.path('N:','Data','Climate data','Climate-10km','Max_T','1970')
> )
> files = unlist(sapply(folders, function(folder) {
>   list.files(folder, full.names=TRUE)
> }))
> files
>
> MET <- lapply(files, raster)
> s <- raster::stack(MET)
>
> output <- list()
> for(i in 1:length(MET)){
>   output[[i]] <- extract(s, Datapoints)
>   names(output)[[i]] <- paste("Année", MET[i], sep = "_")
> }
> Also, i tried that :
> p1 <- 1022 (ID of my DataPoints) ; p2 <- 1 (column where there are the
> values ​​extracted from my raster) ; p3 <- 3660      # 3660matrix (366
> day* 10 years)
> matlist <- list(array(NA,c(p1,p2,p3)))  # doing a list of independant
> matrix
>
> for(i in seq_along(MET)){
>
>   matlist[[i]] <- extract(s, Datapoints)
> }
> But, nothing works...
> I would like my script to perform these actions :
> - For each Raster in my Rasterstack, extract the climatic data values
> ​​and link them to my "Datapoints",
> - Take the name of my file, take the first three characters of the
> name to get a column of my weather variable, here, "T_Max" (column
> with my raster values) ; Take the following four characters then
> report this information in a new column "Year", and finally, take the
> last characters of the file name to create a new column "Day".
> - Concatenate all the independent output matrices corresponding to
> each intersection made with my different raster files
> In the end, I would have a huge table, but one that will allow me to
> do my analysis :
> Table with 9 attributes (6 attributs of my points + Year + Day +
> T_Max) like this :
> ID Datapoint Year Day T_Max
> 1 1960 1
> 2 1960 1
> …... 1960 1
> 1022 1960 1
> 1 1960 2
> 2 1960 2
> …... 1960 2
> 1022 1960 2
> ….. ….. …..
> 1 1970 1
> 2 1970 1
> …... 1970 1
> 1022 1970 1
> 1 1970 2
> 2 1970 2
> …... 1970 2
> 1022 1970 2
> ….. ….. …..
>
> Could a loop do this task ?
>
> I'm sorry, I am gradually learning to work with R, but this exercise
> is more difficult than expected... Please feel free to tell me if my
> question is inappropriate.
>
> Thank you very much in advance for your answers. Your help or your
> comments will be really appreciated.
>
> Have a good day.
>
> Gaëtan Martinelli
> Water and Agriculture research professional in Quebec.
>
> _______________________________________________
> R-sig-Geo mailing list
> [hidden email]
> https://stat.ethz.ch/mailman/listinfo/r-sig-geo


[[alternative HTML version deleted]]

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo
_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: Execute Extract function on several Raster with points layer

Thu, 09/17/2020 - 12:57
Hello Gaëtan,

so as far as I understand, you have 3 main folders:
"Max_T", ? and ?
and in all three folders, there are subfolders
"1961", "1962", ... "1970"
In each folder, there are 366 raster files, for which the file naming
conventions are not known by us, but some of the files are called
"max1961_1.asc", "max1961_2.asc", ... "max1961_366.asc" (in case of
T_max and year 1961)

In this case, the 10980 layers that belong to T_max can be read into one
large RasterStack in this way:
tmax_filenames <- c(outer(X = as.character(1:366), Y =
as.character(1961:1970), FUN = function(doy, year) paste0("N:/400074
Conservation des sols et CC/Data/Climate data/Climate-10km/Max_T/",
year, "/max", year, "_", doy, ".asc")))
tmax_raster <- stack(tmax_filenames)

You can give self-explanatory names to the raster layers:
names(tmax_raster) <- c(outer(X = as.character(1:366), Y =
as.character(1961:1970), FUN = function(doy, year) paste0(year, "_", doy)))

But if the structure of the rasters is the same (i.e. the cell size,
extent, projection), then I recommend doing the raster-vector
overlay once, save the cell numbers that you are interested in, and then
in nested for loops (one loop for the climate variable, one for the year
and one for the day) read the rasters one-by-one, extract the values
according to the cell numbers, and save the result in a previously
created data.frame. In this way, you may not encounter memory issues.
Although, it will take a lot of time...
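[Editor's note: the nested-loop recipe above could look like the following untested outline. The base path, file naming scheme, and the Datapoints object are taken from earlier in the thread; raster's cellFromXY() does the one-time overlay and `r[cells]` the per-layer lookups.]

```r
library(raster)

base <- "N:/400074 Conservation des sols et CC/Data/Climate data/Climate-10km/Max_T/"
template <- raster(paste0(base, "1961/max1961_1.asc"))  # any one layer

# Raster-vector overlay done once: cell numbers of the points
cells <- cellFromXY(template, Datapoints)

rows <- list()
for (year in 1961:1970) {
  for (doy in 1:366) {
    f <- paste0(base, year, "/max", year, "_", doy, ".asc")
    if (!file.exists(f)) next            # e.g. day 366 in non-leap years
    vals <- raster(f)[cells]             # extract by saved cell numbers
    rows[[length(rows) + 1]] <- data.frame(ID = seq_along(cells),
                                           Year = year, Day = doy,
                                           T_Max = vals)
  }
}
result <- do.call(rbind, rows)  # one long table, one row per point and day
```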

HTH,
Ákos Bede-Fazekas
Hungarian Academy of Sciences

On 2020.09.17. 19:28, Gaetan Martinelli wrote:
> Hello everyone, R team,
>
> Sorry in advance for this long message. Your help will be invaluable.
>
> For a few days now I have been stuck trying to execute a task in R. I
> will try to summarize my problem.
>
> I have several rasters: one single-band ASCII file for each day of a
> year, for 30 years, and for three climatic variables on a 10km/10km
> grid (T_min, T_max, Precipitation). So I have a total of around
> 32 940 raster files (366 days * 30 years * 3 variables).
>
> Also, I have a layer of around 1000 points.
>
> I tried to use the stack function and then make the intersection for
> each raster file with my 1000 points.
> I cannot create an independent matrix for each file to which I applied
> the "extract" function, to then concatenate all my matrices in order
> to have a single table.
>
> I tried this, an example for 10 years and only T_Max (my files are
> organized the same way for my two other variables):
> *#Datapoints*
> Datapoints<-readOGR(dsn="H:/Inventaire/R/final",
>                layer="Centroid_champs")
> Datapoints<- spTransform (Datapoints, CRS ("+init=epsg:4326") ) # 1022
> points in the data
> st_crs(Datapoints)
> *#Rasters files*
> folders = list(
>   file.path('N:','Data','Climate data','Climate-10km','Max_T','1961'),
> #Each year includes daily data, the names of my several raster is
> "max1961_1", "max1961_10", "max1961_100", etc...
>   file.path('N:','Data','Climate data','Climate-10km','Max_T','1962'),
>   file.path('N:','Data','Climate data','Climate-10km','Max_T','1963'),
>   file.path('N:','Data','Climate data','Climate-10km','Max_T','1964'),
>   file.path('N:','Data','Climate data','Climate-10km','Max_T','1965'),
>   file.path('N:','Data','Climate data','Climate-10km','Max_T','1966'),
>   file.path('N:','Data','Climate data','Climate-10km','Max_T','1967'),
>   file.path('N:','Data','Climate data','Climate-10km','Max_T','1968'),
>   file.path('N:','Data','Climate data','Climate-10km','Max_T','1969'),
>   file.path('N:','Data','Climate data','Climate-10km','Max_T','1970')
> )
> files = unlist(sapply(folders, function(folder) {
>   list.files(folder, full.names=TRUE)
> }))
> files
>
> MET <- lapply(files, raster)
> s <- raster::stack(MET)
>
> output <- list()
> for(i in 1:length(MET)){
>   output[[i]] <- extract(s, Datapoints)
>   names(output)[[i]] <- paste("Année", MET[i], sep = "_")
> }
> Also, i tried that :
> p1 <- 1022 (ID of my DataPoints) ; p2 <- 1 (column where there are the
> values ​​extracted from my raster) ; p3 <- 3660      # 3660matrix (366
> day* 10 years)
> matlist <- list(array(NA,c(p1,p2,p3)))  # doing a list of independant
> matrix
>
> for(i in seq_along(MET)){
>
>   matlist[[i]] <- extract(s, Datapoints)
> }
> But, nothing works...
> I would like my script to perform these actions :
> - For each Raster in my Rasterstack, extract the climatic data values
> ​​and link them to my "Datapoints",
> - Take the name of my file, take the first three characters of the
> name to get a column of my weather variable, here, "T_Max" (column
> with my raster values) ; Take the following four characters then
> report this information in a new column "Year", and finally, take the
> last characters of the file name to create a new column "Day".
> - Concatenate all the independent output matrices corresponding to
> each intersection made with my different raster files
> In the end, I would have a huge table, but one that will allow me to
> do my analysis :
> Table with 9 attributes (6 attributs of my points + Year + Day +
> T_Max) like this :
> ID Datapoint Year Day T_Max
> 1 1960 1
> 2 1960 1
> …... 1960 1
> 1022 1960 1
> 1 1960 2
> 2 1960 2
> …... 1960 2
> 1022 1960 2
> ….. ….. …..
> 1 1970 1
> 2 1970 1
> …... 1970 1
> 1022 1970 1
> 1 1970 2
> 2 1970 2
> …... 1970 2
> 1022 1970 2
> ….. ….. …..
>
> Could a loop do this task ?
>
> I'm sorry, I am gradually learning to work with R, but this exercise
> is more difficult than expected... Please feel free to tell me if my
> question is inappropriate.
>
> Thank you very much in advance for your answers. Your help or your
> comments will be really appreciated.
>
> Have a good day.
>
> Gaëtan Martinelli
> Water and Agriculture research professional in Quebec.
>
> _______________________________________________
> R-sig-Geo mailing list
> [hidden email]
> https://stat.ethz.ch/mailman/listinfo/r-sig-geo

        [[alternative HTML version deleted]]

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Execute Extract function on several Raster with points layer

Thu, 09/17/2020 - 12:28
Hello everyone, R team, 

Sorry in advance for this long message. Your help will be invaluable.

For a few days now I have been stuck on a task in R. I will try to summarize my problem.

I have several rasters: one single-band ASCII file for each day of the year, for 30 years, and for three climatic variables on a 10 km x 10 km grid (T_min, T_max, Precipitation). So I have a total of around 32,940 raster files (366 days * 30 years * 3 variables).

Also, I have a layer of around 1000 points.
I tried to use the stack function and then intersect each raster file with my 1000 points. I cannot manage to create an independent matrix for each file to which I applied the "extract" function, and then concatenate all my matrices into a single table.
I tried this, for example for 10 years and only T_Max (my files are organized the same way for my two other variables):

#Datapoints
Datapoints <- readOGR(dsn="H:/Inventaire/R/final",
                      layer="Centroid_champs")
Datapoints <- spTransform(Datapoints, CRS("+init=epsg:4326")) # 1022 points in the data
st_crs(Datapoints)

#Raster files
folders = list(
  file.path('N:','Data','Climate data','Climate-10km','Max_T','1961'), # Each year holds daily data; my raster names are "max1961_1", "max1961_10", "max1961_100", etc.
  file.path('N:','Data','Climate data','Climate-10km','Max_T','1962'),
  file.path('N:','Data','Climate data','Climate-10km','Max_T','1963'),
  file.path('N:','Data','Climate data','Climate-10km','Max_T','1964'),
  file.path('N:','Data','Climate data','Climate-10km','Max_T','1965'),
  file.path('N:','Data','Climate data','Climate-10km','Max_T','1966'),
  file.path('N:','Data','Climate data','Climate-10km','Max_T','1967'),
  file.path('N:','Data','Climate data','Climate-10km','Max_T','1968'),
  file.path('N:','Data','Climate data','Climate-10km','Max_T','1969'),
  file.path('N:','Data','Climate data','Climate-10km','Max_T','1970') )
files = unlist(sapply(folders, function(folder) {
  list.files(folder, full.names=TRUE)
}))
files
MET <- lapply(files, raster)
s <- raster::stack(MET)

output <- list()
for(i in 1:length(MET)){
  output[[i]] <- extract(s, Datapoints)
  names(output)[[i]] <- paste("Année", MET[i], sep = "_")
}

Also, I tried this:

p1 <- 1022   # IDs of my Datapoints
p2 <- 1      # column holding the values extracted from my raster
p3 <- 3660   # 3660 matrices (366 days * 10 years)
matlist <- list(array(NA, c(p1, p2, p3)))  # making a list of independent matrices
for(i in seq_along(MET)){
  matlist[[i]] <- extract(s, Datapoints)
}

But nothing works...

I would like my script to perform these actions:
- For each raster in my RasterStack, extract the climatic data values and link them to my "Datapoints",
- Take the file name: its first three characters give the column of my weather variable, here "T_Max" (the column with my raster values); the following four characters go into a new column "Year"; and the last characters of the file name go into a new column "Day".
- Concatenate all the independent output matrices corresponding to each intersection made with my different raster files.

In the end, I would have a huge table, but one that will allow me to do my analysis. Table with 9 attributes (6 attributes of my points + Year + Day + T_Max) like this:

ID Datapoint   Year   Day   T_Max
1              1960   1
2              1960   1
...            1960   1
1022           1960   1
1              1960   2
2              1960   2
...            1960   2
1022           1960   2
...            ...    ...
1              1970   1
2              1970   1
...            1970   1
1022           1970   1
1              1970   2
2              1970   2
...            1970   2
1022           1970   2
...            ...    ...

Could a loop do this task?
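The file-name bookkeeping described above can be sketched in base R. This is a hypothetical sketch: it assumes names of the form "max1961_100.asc" (variable in the first three characters, year in the next four, day after the underscore), matching the example names given for Max_T.

```r
# Hypothetical helper: parse "max1961_100.asc"-style names into
# Variable / Year / Day columns (assumed naming scheme).
parse_met_name <- function(path) {
  base <- sub("\\.[^.]*$", "", basename(path))   # drop folder and extension
  data.frame(
    Variable = substr(base, 1, 3),               # e.g. "max"
    Year     = as.integer(substr(base, 4, 7)),   # e.g. 1961
    Day      = as.integer(sub(".*_", "", base)), # e.g. 100
    stringsAsFactors = FALSE
  )
}

info <- parse_met_name("N:/Max_T/1961/max1961_100.asc")
```

Each per-file extraction could then be cbind()-ed with its parsed row, and the pieces stacked with do.call(rbind, ...) to build the long table.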

I'm sorry, I am gradually learning to manipulate R, but this exercise is more difficult than expected... Please feel free to tell me if my question is inappropriate.
Thank you very much in advance for your answers. Your help or your comments will be really appreciated.

Have a good day.  
Gaëtan Martinelli
Water and Agriculture research professional in Quebec.      
_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: How to cluster the standard errors in the SPLM function?

Wed, 09/16/2020 - 07:29
Thank you very much, Mathias.

I hope this could help more people.

Have a good day.
________________________________
From: R-sig-Geo <[hidden email]> on behalf of Mathias Moser <[hidden email]>
Sent: Tuesday, 15 September 2020 15:24
To: [hidden email] <[hidden email]>
Subject: Re: [R-sig-Geo] How to cluster the standard errors in the SPLM function?

Pietro (and anyone else interested): the Conley SE code is still
available on Darin Christensen's Github:
https://github.com/darinchristensen/conley-se

BR, Mathias

On Tue, 2020-09-15 at 15:37 +0200, Roger Bivand wrote:
> Please do not repeat messages, it does not help.
>
> Did you provide a reproducible example, perhaps from plm? Did you
> read the
> code in splm, for example on R-Forge, or check the development
> version on
> R-Forge
> https://r-forge.r-project.org/R/?group_id=352
> ,
> install.packages("splm", repos="
> http://R-Forge.R-project.org
> ")?
>
> Did you reference any articles showing how this approach might be
> implemented? Do you know whether any such code exists? Are you
> thinking of
> Conley approaches? Such as:
> http://www.trfetzer.com/using-r-to-estimate-spatial-hac-errors-per-conley/
>
> ? Unfortunately, the dropbox link is now stale.
>
> Please report back on your progress, contact the splm maintainer to
> offer
> ideas or assistance, and anyway provide a reproducible example and
> the
> references you are using.
>
> Hope this helps,
>
> Roger
>
> On Tue, 15 Sep 2020, Pietro Andre Telatin Paschoalino wrote:
>
> > Hello everyone,
> >
> > Could someone help me with splm (Spatial Panel Model By Maximum
> > Likelihood) in R?
> >
> > I want to know if it is possible to cluster the standard errors by my
> > individuals (as in the plm function). After a lot of research I
> > found that more people have the same doubt; you can see
> > this here, the person has the same problem as me:
> >
> > https://stackoverflow.com/questions/36869932/clustered-standard-errors-in-spatial-panel-linear-models-splm
> >
> >
> > Thank you all.
> >
> >
> >
> > Pietro Andre Telatin Paschoalino
> > PhD candidate in Economics, Universidade Estadual de
> > Maringá - PCE.
> >
> >
> >
> >      [[alternative HTML version deleted]]
> >
> >
>
> _______________________________________________
> R-sig-Geo mailing list
> [hidden email]
>
> https://stat.ethz.ch/mailman/listinfo/r-sig-geo
>
>
_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

        [[alternative HTML version deleted]]

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: Filtering a set of points in a "ppp" object by distance using marks

Wed, 09/16/2020 - 07:05
Hi Marcelino,

Thanks so much, I just made a little change to your code:

ddd <- nndist(insects.ppp, by=factor(insects.ppp$marks))
subset(insects.ppp,  marks=="termiNests" & ddd[,"antNests"] >20)

I put `ddd[,"antNests"] > 20` instead of `ddd[,"termiNests"] > 20` because I
need the "termiNests" marks to be more than 20 units from each "antNests".
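For readers without spatstat at hand, the cross-type logic behind `nndist(..., by=)` can be illustrated on plain coordinate matrices (a base-R sketch with made-up points, not the ants/termites data):

```r
# For each point (row) of A, the distance to its nearest neighbour in B --
# the per-type column that nndist(..., by=) returns.
cross_nndist <- function(A, B) {
  apply(A, 1, function(p) sqrt(min((B[, 1] - p[1])^2 + (B[, 2] - p[2])^2)))
}

termi <- rbind(c(0, 0), c(50, 0), c(10, 10))  # made-up "termiNests"
ant   <- rbind(c(0, 5), c(100, 100))          # made-up "antNests"

d    <- cross_nndist(termi, ant)
keep <- termi[d > 20, , drop = FALSE]  # termite nests > 20 units from every ant nest
```

Filtering on `d > 20` is the same test that `marks == "termiNests" & ddd[,"antNests"] > 20` applies inside the ppp object.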

Best wishes,

Alexandre

--
Alexandre dos Santos
Geotechnologies and Spatial Statistics applied to Forest Entomology
Instituto Federal de Mato Grosso (IFMT) - Campus Caceres
Caixa Postal 244 (PO Box)
Avenida dos Ramires, s/n - Vila Real
Caceres - MT - CEP 78201-380 (ZIP code)
Phone: (+55) 65 99686-6970 / (+55) 65 3221-2674
Lattes CV: http://lattes.cnpq.br/1360403201088680
OrcID: orcid.org/0000-0001-8232-6722
ResearchGate: www.researchgate.net/profile/Alexandre_Santos10
Publons: https://publons.com/researcher/3085587/alexandre-dos-santos/
--

On 16/09/2020 03:18, Marcelino de la Cruz Rot wrote:
> Hi Alexandre,
>
> may be this?
>
>
> ddd <- nndist(insects.ppp, by=factor(insects.ppp$marks))
> subset(insects.ppp,  marks=="termiNests" & ddd[,"termiNests"] >20)
>
>
> Cheers,
>
> Marcelino
>
>
> On 15/09/2020 at 22:52, ASANTOS via R-sig-Geo wrote:
>> Dear R-Sig-Geo Members,
>>
>> I'd like to find any way to filtering a set of points in a "ppp"
>> object by minimum distance just only between different marks. In my
>> example:
>>
>> #Package
>> library(spatstat)
>>
>> #Point process example - ants
>> data(ants)
>> ants.ppp<-ppp(x=ants$x,y=ants$y,marks=rep("antNests",length(ants$x)),window=Window(ants))
>>
>>
>>
>> # Create an artificial point pattern - termites
>> termites <- rpoispp(0.0005, win=Window(ants))
>> termites.ppp<-ppp(x=termites$x,y=termites$y,marks=rep("termiNests",length(termites$x)),window=Window(ants))
>>
>>
>>
>> #Join ants.ppp and termites.ppp
>> insects.ppp<-superimpose(ants.ppp,termites.ppp)
>>
>>
>> #If I try to use subset function:
>>
>> subset(insects.ppp, pairdist(insects.ppp) > 20 & marks=="termiNests")
>>
>> #Marked planar point pattern: 223 points
>> #marks are of storage type 'character'
>> #window: polygonal boundary
>> #enclosing rectangle: [-25, 803] x [-49, 717] units (one unit = 0.5 feet)
>> #Warning message:
>> #In ppp(X[, 1], X[, 2], window = win, marks = marx, check = check) :
>> #  70751 out of 70974 points had NA or NaN coordinate values, and were discarded
>>
>> Not the desirable result yet, because I'd like to calculate just only
>> the > 20 "termiNests" to "antNests" marks and not the "termiNests"
>> with "termiNests" too.
>>
>> Please any ideas?
>>
>> Thanks in advance,
>>
>
_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: Filtering a set of points in a "ppp" object by distance using marks

Wed, 09/16/2020 - 05:15
Sorry, I meant to say

subset(insects.ppp, marks=="termiNests" & ddd[,"antNests"] >20)

On 16/09/2020 at 9:18, Marcelino de la Cruz Rot wrote:
> Hi Alexandre,
>
> may be this?
>
>
> ddd <- nndist(insects.ppp, by=factor(insects.ppp$marks))
> subset(insects.ppp,  marks=="termiNests" & ddd[,"termiNests"] >20)
>
>
> Cheers,
>
> Marcelino
>
>
> On 15/09/2020 at 22:52, ASANTOS via R-sig-Geo wrote:
>> Dear R-Sig-Geo Members,
>>
>> I'd like to find any way to filtering a set of points in a "ppp"
>> object by minimum distance just only between different marks. In my
>> example:
>>
>> #Package
>> library(spatstat)
>>
>> #Point process example - ants
>> data(ants)
>> ants.ppp<-ppp(x=ants$x,y=ants$y,marks=rep("antNests",length(ants$x)),window=Window(ants))
>>
>>
>>
>> # Create an artificial point pattern - termites
>> termites <- rpoispp(0.0005, win=Window(ants))
>> termites.ppp<-ppp(x=termites$x,y=termites$y,marks=rep("termiNests",length(termites$x)),window=Window(ants))
>>
>>
>>
>> #Join ants.ppp and termites.ppp
>> insects.ppp<-superimpose(ants.ppp,termites.ppp)
>>
>>
>> #If I try to use subset function:
>>
>> subset(insects.ppp, pairdist(insects.ppp) > 20 & marks=="termiNests")
>>
>> #Marked planar point pattern: 223 points
>> #marks are of storage type 'character'
>> #window: polygonal boundary
>> #enclosing rectangle: [-25, 803] x [-49, 717] units (one unit = 0.5 feet)
>> #Warning message:
>> #In ppp(X[, 1], X[, 2], window = win, marks = marx, check = check) :
>> #  70751 out of 70974 points had NA or NaN coordinate values, and were discarded
>>
>> Not the desirable result yet, because I'd like to calculate just only
>> the > 20 "termiNests" to "antNests" marks and not the "termiNests"
>> with "termiNests" too.
>>
>> Please any ideas?
>>
>> Thanks in advance,
>>
>
--
Marcelino de la Cruz Rot
Dept. of Biology and Geology, Physics and Inorganic Chemistry
Universidad Rey Juan Carlos
Móstoles, Spain

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: Filtering a set of points in a "ppp" object by distance using marks

Wed, 09/16/2020 - 02:18
Hi Alexandre,

may be this?


ddd <- nndist(insects.ppp, by=factor(insects.ppp$marks))
subset(insects.ppp,  marks=="termiNests" & ddd[,"termiNests"] >20)


Cheers,

Marcelino


On 15/09/2020 at 22:52, ASANTOS via R-sig-Geo wrote:
> Dear R-Sig-Geo Members,
>
> I'd like to find any way to filtering a set of points in a "ppp" object by minimum distance just only between different marks. In my example:
>
> #Package
> library(spatstat)
>
> #Point process example - ants
> data(ants)
> ants.ppp<-ppp(x=ants$x,y=ants$y,marks=rep("antNests",length(ants$x)),window=Window(ants))
>
>
> # Create an artificial point pattern - termites
> termites <- rpoispp(0.0005, win=Window(ants))
> termites.ppp<-ppp(x=termites$x,y=termites$y,marks=rep("termiNests",length(termites$x)),window=Window(ants))
>
>
> #Join ants.ppp and termites.ppp
> insects.ppp<-superimpose(ants.ppp,termites.ppp)
>
>
> #If I try to use subset function:
>
> subset(insects.ppp, pairdist(insects.ppp) > 20 & marks=="termiNests")
>
> #Marked planar point pattern: 223 points
> #marks are of storage type 'character'
> #window: polygonal boundary
> #enclosing rectangle: [-25, 803] x [-49, 717] units (one unit = 0.5 feet)
> #Warning message:
> #In ppp(X[, 1], X[, 2], window = win, marks = marx, check = check) :
> #  70751 out of 70974 points had NA or NaN coordinate values, and were discarded
>
> Not the desirable result yet, because I'd like to calculate just only the > 20 "termiNests" to "antNests" marks and not the "termiNests" with "termiNests" too.
>
> Please any ideas?
>
> Thanks in advance,
>
--
Marcelino de la Cruz Rot
Dept. of Biology and Geology, Physics and Inorganic Chemistry
Universidad Rey Juan Carlos
Móstoles, Spain

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Filtering a set of points in a "ppp" object by distance using marks

Tue, 09/15/2020 - 15:52
Dear R-Sig-Geo Members,

I'd like to find a way to filter a set of points in a "ppp" object by minimum distance, but only between different marks. In my example:

#Package
library(spatstat)

#Point process example - ants
data(ants)
ants.ppp<-ppp(x=ants$x,y=ants$y,marks=rep("antNests",length(ants$x)),window=Window(ants))


# Create an artificial point pattern - termites
termites <- rpoispp(0.0005, win=Window(ants))
termites.ppp<-ppp(x=termites$x,y=termites$y,marks=rep("termiNests",length(termites$x)),window=Window(ants))


#Join ants.ppp and termites.ppp
insects.ppp<-superimpose(ants.ppp,termites.ppp)


#If I try to use subset function:

subset(insects.ppp, pairdist(insects.ppp) > 20 & marks=="termiNests")

#Marked planar point pattern: 223 points
#marks are of storage type 'character'
#window: polygonal boundary
#enclosing rectangle: [-25, 803] x [-49, 717] units (one unit = 0.5 feet)
#Warning message:
#In ppp(X[, 1], X[, 2], window = win, marks = marx, check = check) :
#  70751 out of 70974 points had NA or NaN coordinate values, and were discarded

Not the desired result yet, because I'd like to keep only the "termiNests" that are > 20 units from the "antNests" marks, and not compare "termiNests" with "termiNests" too.

Any ideas, please?

Thanks in advance,

--
Alexandre dos Santos
Geotechnologies and Spatial Statistics applied to Forest Entomology
Instituto Federal de Mato Grosso (IFMT) - Campus Caceres
Caixa Postal 244 (PO Box)
Avenida dos Ramires, s/n - Vila Real
Caceres - MT - CEP 78201-380 (ZIP code)
Phone: (+55) 65 99686-6970 / (+55) 65 3221-2674
Lattes CV: http://lattes.cnpq.br/1360403201088680
OrcID: orcid.org/0000-0001-8232-6722
ResearchGate: www.researchgate.net/profile/Alexandre_Santos10
Publons: https://publons.com/researcher/3085587/alexandre-dos-santos/
--


        [[alternative HTML version deleted]]

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: How to cluster the standard errors in the SPLM function?

Tue, 09/15/2020 - 13:24
Pietro (and anyone else interested): the Conley SE code is still
available on Darin Christensen's Github:
https://github.com/darinchristensen/conley-se

BR, Mathias

On Tue, 2020-09-15 at 15:37 +0200, Roger Bivand wrote:
> Please do not repeat messages, it does not help.
>
> Did you provide a reproducible example, perhaps from plm? Did you
> read the
> code in splm, for example on R-Forge, or check the development
> version on
> R-Forge
> https://r-forge.r-project.org/R/?group_id=352
> ,
> install.packages("splm", repos="
> http://R-Forge.R-project.org
> ")?
>
> Did you reference any articles showing how this approach might be
> implemented? Do you know whether any such code exists? Are you
> thinking of
> Conley approaches? Such as:
> http://www.trfetzer.com/using-r-to-estimate-spatial-hac-errors-per-conley/
>  
> ? Unfortunately, the dropbox link is now stale.
>
> Please report back on your progress, contact the splm maintainer to
> offer
> ideas or assistance, and anyway provide a reproducible example and
> the
> references you are using.
>
> Hope this helps,
>
> Roger
>
> On Tue, 15 Sep 2020, Pietro Andre Telatin Paschoalino wrote:
>
> > Hello everyone,
> >
> > Could someone help me with splm (Spatial Panel Model By Maximum
> > Likelihood) in R?
> >
> > I want to know if it is possible to cluster the standard errors by my
> > individuals (as in the plm function). After a lot of research I
> > found that more people have the same doubt; you can see
> > this here, the person has the same problem as me:
> >
> > https://stackoverflow.com/questions/36869932/clustered-standard-errors-in-spatial-panel-linear-models-splm
> >
> >
> > Thank you all.
> >
> >
> >
> > Pietro Andre Telatin Paschoalino
> > PhD candidate in Economics, Universidade Estadual de
> > Maringá - PCE.
> >
> >
> >
> > [[alternative HTML version deleted]]
> >
> >
>
> _______________________________________________
> R-sig-Geo mailing list
> [hidden email]
>
> https://stat.ethz.ch/mailman/listinfo/r-sig-geo
>
>
_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: How to cluster the standard errors in the SPLM function?

Tue, 09/15/2020 - 09:11
Hello Roger, thank you for your answer.

Yes, with plm I could compute it via:

coeftest(x, vcovHC(x, type = ""))

But that does not work with splm.

I haven't found articles showing how this approach can be implemented in R. In Stata, with the function xsmle, it is possible to estimate the model the way I want; I don't want to change software, but it is an option.

Thank you for the idea of using Conley SHAC standard errors; it is something I need to think about.

Again, thank you very much for your help; if I can't solve it, I will certainly post a reproducible example.
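As background to the vcovHC discussion above, the one-way cluster-robust ("sandwich") covariance can be written out in base R for a plain lm fit. This is only a sketch of the textbook CR0 estimator, not of splm's internals (which remain the open question); the data and cluster variable below are made up.

```r
# CR0 cluster-robust covariance for an lm fit:
# V = (X'X)^-1 [ sum_g X_g' e_g e_g' X_g ] (X'X)^-1
cluster_vcov <- function(fit, cluster) {
  X     <- model.matrix(fit)
  e     <- residuals(fit)
  bread <- chol2inv(chol(crossprod(X)))      # (X'X)^-1
  meat  <- matrix(0, ncol(X), ncol(X))
  for (g in unique(cluster)) {
    idx  <- cluster == g
    sg   <- crossprod(X[idx, , drop = FALSE], e[idx])  # X_g' e_g
    meat <- meat + tcrossprod(sg)
  }
  bread %*% meat %*% bread
}

set.seed(1)
x   <- rnorm(30); g <- rep(1:6, each = 5)   # made-up data, 6 clusters
y   <- 1 + 2 * x + rnorm(30)
fit <- lm(y ~ x)
V   <- cluster_vcov(fit, g)   # clustered covariance of (Intercept, x)
```

With every observation in its own cluster this collapses to the HC0 estimator, which is one easy way to check such an implementation.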

Pietro Andre Telatin Paschoalino
PhD candidate in Economics, Universidade Estadual de Maringá - PCE.

________________________________
From: Roger Bivand <[hidden email]>
Sent: Tuesday, 15 September 2020 10:37
To: Pietro Andre Telatin Paschoalino <[hidden email]>
Cc: [hidden email] <[hidden email]>
Subject: Re: [R-sig-Geo] How to cluster the standard errors in the SPLM function?

Please do not repeat messages, it does not help.

Did you provide a reproducible example, perhaps from plm? Did you read the
code in splm, for example on R-Forge, or check the development version on
R-Forge https://r-forge.r-project.org/R/?group_id=352,
install.packages("splm", repos="http://R-Forge.R-project.org")?

Did you reference any articles showing how this approach might be
implemented? Do you know whether any such code exists? Are you thinking of
Conley approaches? Such as:
http://www.trfetzer.com/using-r-to-estimate-spatial-hac-errors-per-conley/
? Unfortunately, the dropbox link is now stale.

Please report back on your progress, contact the splm maintainer to offer
ideas or assistance, and anyway provide a reproducible example and the
references you are using.

Hope this helps,

Roger

On Tue, 15 Sep 2020, Pietro Andre Telatin Paschoalino wrote:

> Hello everyone,
>
> Could someone help me with splm (Spatial Panel Model By Maximum Likelihood) in R?
>
> I want to know if it is possible to cluster the standard errors by my individuals (as in the plm function). After a lot of research I found that more people have the same doubt; you can see this here, the person has the same problem as me:
>
> https://stackoverflow.com/questions/36869932/clustered-standard-errors-in-spatial-panel-linear-models-splm
>
> Thank you all.
>
>
> Pietro Andre Telatin Paschoalino
> PhD candidate in Economics, Universidade Estadual de Maringá - PCE.
>
>
>        [[alternative HTML version deleted]]
>
>
--
Roger Bivand
Department of Economics, Norwegian School of Economics,
Helleveien 30, N-5045 Bergen, Norway.
voice: +47 55 95 93 55; e-mail: [hidden email]
https://orcid.org/0000-0003-2392-6140
https://scholar.google.no/citations?user=AWeghB0AAAAJ&hl=en


        [[alternative HTML version deleted]]

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo

Re: "no applicable method" for focal() function in raster

Tue, 09/15/2020 - 08:45
Hello,

Unfortunately your email client formatted your message in HTML, so the
content has been scrambled.  Best results are obtained when the client is
configured to send plain text.

I think the issue is that lsm_l_condent() is expecting a raster (or
similar input).  See
https://cran.r-project.org/web/packages/landscapemetrics/landscapemetrics.pdf

Raster's focal() function slides along the raster, pulling out cell values
(coincident with your focal window), and passes them to the function you
specify.  So, as the docs for raster::focal describe, that function must be
configured to "take multiple numbers, and return a single number."
landscapemetrics::lsm_l_condent is not configured that way.

Perhaps you need to create your own version of the function that just
operates on an input of numbers rather than an input of raster-like objects?
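Concretely, a window function that accepts the bare numeric vector focal() passes in might look like this. It is a sketch computing plain Shannon entropy of the cell values in the window, under the assumption that a per-window entropy is acceptable; it is not the spatial conditional entropy that lsm_l_condent() computes:

```r
# Shannon entropy of the class labels in one focal window.
# focal() hands fun a plain numeric vector, so no raster methods are needed.
window_entropy <- function(x, ...) {
  x <- x[!is.na(x)]
  if (length(x) == 0) return(NA_real_)
  p <- as.numeric(table(x)) / length(x)  # relative frequency of each class
  -sum(p * log2(p))
}

# Then, with the raster package (not run here):
# result <- focal(r, w = matrix(1, 3, 3), fun = window_entropy)
```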

Cheers,
Ben

On Mon, Sep 14, 2020 at 4:42 PM Jaime Burbano Girón <[hidden email]>
wrote:

> Hi everyone,
>
> I want to apply a moving window (3x3) to estimate conditional entropy
> (Nowosad & Stepinski, 2019) over a heterogeneous landscape:
>
> entropy = function(r) {
>   entropy = lsm_l_condent(r, neighbourhood = 4, ordered = TRUE, base = "log2")
>   return(entropy$value)
> }
> w = matrix(1, 3, 3)
> result = focal(r, w, fun = entropy)
>
> However, I get this error:
>
> Error in .focal_fun(values(x), w, as.integer(dim(out)), runfun, NAonly) :
> Evaluation error: no applicable method for 'lsm_l_condent' applied to an
> object of class "c('double', 'numeric')".
>
> But, when I run the entropy function on the entire landscape, it works:
>
> > entropy(r)
> [1] 2.178874
>
> r is an INT4U raster object:
>
> class      : RasterLayer
> dimensions : 886, 999, 885114  (nrow, ncol, ncell)
> resolution : 300, 300  (x, y)
> extent     : 934805.7, 1234506, 1006566, 1272366  (xmin, xmax, ymin, ymax)
> crs        : +proj=tmerc +lat_0=4.59620041666667 +lon_0=-74.0775079166667
>              +k=1 +x_0=1000000 +y_0=1000000 +ellps=GRS80
>              +towgs84=0,0,0,0,0,0,0 +units=m +no_defs
> values     : 99, 321113  (min, max)
>
> Is there any idea to solve the "no applicable method" error? Or any idea how
> to estimate conditional entropy with a moving window?
>
> Thanks in advance for the help.
>
> Best,
>
> Jaime
>
>         [[alternative HTML version deleted]]
>
> _______________________________________________
> R-sig-Geo mailing list
> [hidden email]
> https://stat.ethz.ch/mailman/listinfo/r-sig-geo
>

--
Ben Tupper
Bigelow Laboratory for Ocean Science
East Boothbay, Maine
http://www.bigelow.org/
https://eco.bigelow.org

        [[alternative HTML version deleted]]

_______________________________________________
R-sig-Geo mailing list
[hidden email]
https://stat.ethz.ch/mailman/listinfo/r-sig-geo
