R-bloggers
R news and tutorials contributed by (750) R bloggers

Run massive parallel R jobs cheaply with updated doAzureParallel package

Fri, 06/09/2017 - 00:41

(This article was first published on Revolutions, and kindly contributed to R-bloggers)

At the EARL conference in San Francisco this week, JS Tan from Microsoft gave an update (PDF slides here) on the doAzureParallel package. As we've noted here before, this package allows you to easily distribute parallel R computations to an Azure cluster. The package was recently updated to support automatically-scaling Azure Batch clusters with low-priority nodes, which can be used at a discount of up to 80% compared to the price of regular high-availability VMs.

JS Tan using doAzureParallel #rstats package to run simulation on a cluster of 20 low-priority Azure VMs. Total cost: $0.02 #EARLConf2017 pic.twitter.com/Mpl3IUa9zY

— David Smith (@revodavid) June 7, 2017

Using the doAzureParallel package is simple. First, you need to define the cluster you're going to use as a JSON file. (You can see an example on the right.) Here, you'll specify your Azure credentials, the size of the cluster, and the type of nodes (CPUs and memory) to use in the cluster. You can also specify here R packages (from CRAN and/or Github) to be pre-loaded onto each node, and the maximum number of simultaneous tasks to run on each node (for within-node parallelism).

New to this update, the poolSize option allows you to specify the number of dedicated (standard) VM nodes to use, in addition to the number of low-priority nodes to use. Low-priority nodes can be pre-empted by the Azure system at any time, but are much cheaper to use. (Even if a node is pre-empted, your parallel computation will continue; it will just take a little longer with the reduced capacity.) You can even specify a minimum and maximum number of nodes of each class to use, in which case the cluster will automatically scale up and down according to either the workload or the time of day, your choice (e.g. only expanding the low-priority part of the cluster on weekends, when pre-emption is less likely).


Once you've defined the parameters of your cluster, all you need to do is declare the cluster as a backend for the foreach package. The body of the foreach loop runs just like a for loop, except that multiple iterations run in parallel on the remote cluster. Here are the key parts of the option price simulation example JS presented at the conference.
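The simulation code itself isn't reproduced in this summary, but the general pattern is short. Here is a minimal sketch (not JS Tan's option-pricing code): the function names generateClusterConfig(), makeCluster(), registerDoAzureParallel() and stopCluster() follow the package README and may differ between package versions.

library(doAzureParallel)
library(foreach)

generateClusterConfig("cluster.json")    # write a template cluster definition to edit
cluster <- makeCluster("cluster.json")   # provision the Azure Batch cluster
registerDoAzureParallel(cluster)         # register it as the foreach backend

# each iteration runs on a remote node; results come back as a list
results <- foreach(i = 1:100) %dopar% {
  mean(rnorm(1e6))
}

stopCluster(cluster)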

This same approach can be used for any "embarrassingly parallel" iteration in R, and you can use any R function or package within the body of the loop. For example, you could use a cluster to reduce the time required for parameter tuning and cross-validation with the caret package, or speed up data preparation tasks when using the dplyr package.

In addition to support for auto-scaling clusters, this update to doAzureParallel includes a few other new features: utility functions for managing multiple long-running R jobs, functions to read data from and write data to Azure Blob storage, and the ability to pre-load data into the cluster by specifying resource files.

The doAzureParallel package is available for download now from Github, under the open-source MIT license. For details on how to use the package, check out the README and the doAzureParallel guide.

Github (Azure): doAzureParallel

To leave a comment for the author, please follow the link and comment on their blog: Revolutions.

Introduction to Set Theory and Sets with R

Thu, 06/08/2017 - 22:00

(This article was first published on R – Aaron Schlegel, and kindly contributed to R-bloggers)

Sets define a ‘collection’ of objects, or things typically referred to as ‘elements’ or ‘members.’ The concept of sets arises naturally when dealing with any collection of objects, whether it be a group of numbers or anything else. Conceptually, the following examples can be defined as a ‘set’:

  • {1, 2, 3, 4}
  • {Red, Green, Blue}
  • {Cat, Dog}

The first example is the set of the first four natural numbers. The second defines a set of the primary colors while the third example denotes a set of common household pets.

Since its development beginning in the 1870s with Georg Cantor and Richard Dedekind, set theory has become a foundational system of mathematics, and its concepts arise constantly throughout the study of mathematics; it also remains an area of active research today.

This post will introduce some of the basic concepts of set theory, specifically the Zermelo-Fraenkel axiomatic system (more on that later), with R code to demonstrate these concepts.

Set Notation

Sets can be defined with lowercase, uppercase, script or Greek letters (in addition to subscripts and the like). Using several types of letters helps when dealing with hierarchies. Before diving into set theory, it’s best to define the common notation employed. One benefit of set theory's ubiquity in mathematics is that learning its notation also helps in understanding other mathematical concepts.

  • \forall x – for every set x
  • \exists x – there exists such a set x that
  • \neg – not
  • \wedge – and
  • \vee – or (one or the other or both)
  • \Rightarrow – implies
  • \Leftrightarrow – iff, if and only if.
  • A \cup B – union of sets A and B
  • A \cap B – intersection of sets A and B
  • x \in A – x is an element of a set A
  • x \notin A – x is not an element of a set A

The \Rightarrow notation for implies can be thought of like an if statement in that it denotes the relation ‘if a then b.’

Set Membership

Set membership is written similar to:

x \in A

Which can be stated as ‘x is an element of the set A.’ If x is not a member of A, we write:

x \notin A

which is read ‘x is not an element of the set A.’ The symbol \in denoting set membership originated with Giuseppe Peano (Enderton, pp. 26). We can write an R function to implement the concept of set membership.

iselement <- function(x, A) {
  if (x %in% A) {
    return(TRUE)
  }
  return(FALSE)
}

Let’s use our simple function to test whether some values are members of the set A = \{3, 5, 7, 11\}.

A <- c(3, 5, 7, 11)
eles <- c(3, 5, 9, 10, (5 + 6))

for (i in 1:length(eles)) {
  print(iselement(eles[i], A))
}
## [1] TRUE
## [1] TRUE
## [1] FALSE
## [1] FALSE
## [1] TRUE

Set membership leads into one of the first axioms of set theory under the Zermelo-Fraenkel system, the Principle of Extensionality.

Principle of Extensionality

The principle of extensionality states if two sets have the same members, they are equal. The formal definition of the principle of extensionality can be stated more concisely using the notation given above:

\forall A \forall B (\forall x (x \in A \Leftrightarrow x \in B) \Rightarrow A = B)

Stated less concisely but still using set notation:

If two sets A and B are such that for every element (member) x:

x \in A \qquad iff \qquad x \in B

then A = B.

We can express this axiom through an R function to test for set equality.

isequalset <- function(a, b) {
  a <- unique(a)
  b <- unique(b)

  # equal sets must have the same number of distinct elements...
  if (length(a) != length(b)) {
    return(FALSE)
  }

  # ...and every element of a must also be an element of b
  for (i in 1:length(a)) {
    if (!(a[i] %in% b)) {
      return(FALSE)
    }
  }
  return(TRUE)
}

We can now put the principle of extensionality in action with our R function!

# original set A to compare
A <- c(3, 5, 7, 11)

# define some sets to test for equality
B <- c(5, 7, 11, 3)
C <- c(3, 4, 6, 5)
D <- c(3, 5, 7, 11, 13)
E <- c(11, 7, 5, 3)
G <- c(3, 5, 5, 7, 7, 11)

# collect sets into a list to iterate over
sets <- list(B, C, D, E, G)

# using the isequalset() function, print the results of the equality tests
for (i in sets) {
  print(isequalset(i, A))
}
## [1] TRUE
## [1] FALSE
## [1] FALSE
## [1] TRUE
## [1] TRUE

Empty Sets and Singletons

So far we have only investigated sets with two or more members. The empty set, denoted \varnothing, is defined as a set containing no elements and occurs surprisingly frequently in set-theoretic operations despite its seemingly straightforward and simple nature.

The empty set axiom states the existence of an empty set concisely:

\exists B \forall x \qquad x \notin B

which can also be stated as ‘there is a set having no members.’

A set {\varnothing} can be formed whose only member is \varnothing. It is important to note {\varnothing} \neq \varnothing because \varnothing \in {\varnothing} but \varnothing \notin \varnothing. One can conceptually think of {\varnothing} as a container with nothing in it.
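One informal way to mirror this distinction in R (purely as an illustration; R vectors and lists are not sets in the axiomatic sense) is to compare an empty vector with a list whose only element is an empty vector:

empty <- c()          # plays the role of the empty set: it has no members
box <- list(empty)    # a "container" whose only member is the empty set

length(empty)  # 0
length(box)    # 1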

A singleton is a set with exactly one element, denoted typically by {a}. A nonempty set is, therefore, a set with one or more elements; thus a singleton is also nonempty. We can define another quick function to test whether a given set is empty, a singleton or a nonempty set.

typeofset <- function(a) {
  if (length(a) == 0) {
    return('empty set')
  } else if (length(a) == 1) {
    return('singleton')
  } else if (length(a) > 1) {
    return('nonempty set')
  } else {
    stop('could not determine type of set')
  }
}

A <- c()
B <- c(0)
C <- c(1, 2)
D <- list(c())

set_types <- list(A, B, C, D)

for (i in set_types) {
  print(typeofset(i))
}
## [1] "empty set"
## [1] "singleton"
## [1] "nonempty set"
## [1] "singleton"

Note D is defined as a singleton because the set contains one element, the empty set \varnothing.

Summary

This post introduced some of the basic concepts of axiomatic set theory using the Zermelo-Fraenkel axioms by exploring the idea of a set, set membership and some particular cases of sets such as the empty set and singletons. It also introduced set notation that will be used not just in set-theoretic applications but throughout mathematics.

References

Barile, Margherita. “Singleton Set.” From MathWorld–A Wolfram Web Resource, created by Eric W. Weisstein. http://mathworld.wolfram.com/SingletonSet.html

Enderton, H. (1977). Elements of set theory (1st ed.). New York: Academic Press.

Weisstein, Eric W. “Empty Set.” From MathWorld–A Wolfram Web Resource. http://mathworld.wolfram.com/EmptySet.html

Weisstein, Eric W. “Nonempty Set.” From MathWorld–A Wolfram Web Resource. http://mathworld.wolfram.com/NonemptySet.html

The post Introduction to Set Theory and Sets with R appeared first on Aaron Schlegel.

To leave a comment for the author, please follow the link and comment on their blog: R – Aaron Schlegel.

Campaign Response Testing no longer published on Udemy

Thu, 06/08/2017 - 20:33

(This article was first published on R – Win-Vector Blog, and kindly contributed to R-bloggers)

Our free video course Campaign Response Testing is no longer published on Udemy. It remains available for free on YouTube with all source code available from GitHub. I’ll try to correct bad links as I find them.

Please read on for the reasons.

Udemy recently unilaterally instituted a new policy on free courses: “When a free course has a Recent Review Rating less than 4.1 and is flagged with a ‘high degree of confidence’ the course will be hidden from Udemy’s search.”

Campaign Response Testing is a free course with an all-time average rating of 4.14 and a recent rating of 3.85. We have kept the code up to date and answered student questions (there was no backlog of student questions).

Obviously others should have opinions of our public work (which may or may not be good). And we are keeping the course up for free, with no sign-up necessary, on YouTube (as we in no way want to hurt or inconvenience students).

But I do have an issue with Udemy’s new system: it is completely vulnerable to griefers and spammers. Accounts with “no skin in the game” (having paid nothing, possibly not even having watched the course, and without having to leave any review text in addition to the “high degree of confidence” star rating) can belittle free courses at will and in bulk. Currently there is no effective dispute mechanism (other than superficial palliatives such as: “answer student questions”), and like all lop-sided “fights against semi-anonymous trolls” it would be a pointless effort anyway. I understand the desire to increase the quality of free content (it is Udemy’s “introduction”), but I feel the new system was obviously unworkable even before it was introduced.

So, with hopefully no harm to our subscribers, I am un-publishing the course. As is the case with Udemy policy, anybody already subscribed should continue to have access to the course through Udemy. And as I have said: we have long since ported the entire free course to YouTube and GitHub for free and login-free sharing.

To leave a comment for the author, please follow the link and comment on their blog: R – Win-Vector Blog.

Neural networks Exercises (Part-1)

Thu, 06/08/2017 - 18:00

(This article was first published on R-exercises, and kindly contributed to R-bloggers)

Source: Wikipedia

Neural networks have become a cornerstone of machine learning in the last decade. Created in the late 1940s with the intention of building computer programs that mimic the way neurons process information, these kinds of algorithms were long believed to be only an academic curiosity, deprived of practical use, since they require a lot of processing power and other machine learning algorithms outperformed them. However, since the mid 2000s, the creation of new neural network types and techniques, coupled with the increased availability of fast computers, has made the neural network a powerful tool that every data analyst or programmer must know.

In this series of articles, we’ll see how to fit a neural network with R, learn the core concepts needed to apply those algorithms well, and see how to evaluate whether our model is appropriate for use in production. This set of exercises is an introduction to neural networks, where we’ll use them to create two simple regression and clustering models. In doing so, we’ll use a lot of basic concepts we’ll explore further in future sets. If you want more information about neural networks, you can see this page.

Answers to the exercises are available here.

Exercise 1
We’ll start by creating the data set on which we want to do a simple regression. Set the seed to 42, generate 200 random points between -10 and 10 and store them in a vector named X. Then, create a vector named Y containing the values of sin(X). Neural networks are a lot more flexible than most regression algorithms and can fit complex functions with ease. The biggest challenge is to find the network function appropriate to the situation.
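One possible way to set up this data set (a sketch; the linked answers may differ) is:

set.seed(42)
X <- runif(200, min = -10, max = 10)  # 200 random points between -10 and 10
Y <- sin(X)                           # target values
df <- data.frame(X, Y)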

Exercise 2
A network function is made of three components: the network of neurons, the weights of the connections between neurons, and the activation function of each neuron. For this example, we’ll use a feed-forward neural network and the logistic activation function, which are the defaults for the nnet package. We take one number as input of our neural network and we want one number as output, so the sizes of the input and output layers are both one. For the hidden layer, we’ll start with three neurons. It’s good practice to randomize the initial weights, so create a vector of 10 random values, picked in the interval [-1,1].

Exercise 3
Neural networks have a strong tendency to overfit your data, meaning they become really good at describing the relationship between the values in your data set, but are not effective with data that wasn’t used to train your model. As a consequence, we need to cross-validate our model. Set the seed to 42, then create a training set containing 75% of the values in your initial data set and a test set containing the rest of your data.

Exercise 4
Load the nnet package and use the function of the same name to create your model. Pass your weights via the Wts argument and set the maxit argument to 50. We want to fit a function whose output is a continuous value; to do so, set the linout argument to TRUE. Finally, take the time to look at the structure of your model.
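Continuing from the data frame sketched above, the split and the model fit could look like this (a hedged sketch, not the official answers; size, Wts, maxit and linout are all documented nnet() arguments):

library(nnet)

weights <- runif(10, min = -1, max = 1)   # 10 initial weights for a 1-3-1 network

set.seed(42)
train_idx <- sample(seq_len(nrow(df)), size = 0.75 * nrow(df))
train <- df[train_idx, ]
test  <- df[-train_idx, ]

fit <- nnet(Y ~ X, data = train, size = 3,
            Wts = weights, maxit = 50, linout = TRUE)
str(fit)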

Learn more about neural networks in the online course Machine Learning A-Z™: Hands-On Python & R In Data Science. In this course you will learn how to:

  • Work with Deep Learning networks and related packages in R
  • Create Natural Language Processing models
  • And much more

Exercise 5
Predict the output for the test set and compute the RMSE of your predictions. Plot the function sin(x) and then plot your predictions.
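A sketch of the prediction and error computation, using the fit and test objects from the previous sketch:

pred <- predict(fit, newdata = test)
rmse <- sqrt(mean((test$Y - pred)^2))
rmse

# plot the true function and overlay the predictions
curve(sin(x), from = -10, to = 10, ylab = "y")
points(test$X, pred, col = "red")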

Exercise 6
The number of neurons in the hidden layer, as well as the number of hidden layers used, has a great influence on the effectiveness of your model. Repeat exercises three to five, but this time use a hidden layer with seven neurons and randomly initialize 22 weights.

Exercise 7
Now let us use neural networks to solve a clustering problem, so let’s load the iris data set! It is good practice to normalize your input data to make the behavior of your model uniform over different ranges of values and to speed up training. Normalize each variable so that it has a mean of zero and a standard deviation of 1, then create your train and test sets.

Exercise 8
Use nnet() with a hidden layer of ten neurons to create your model. We want to fit a function which has a finite set of values as output. To do so, set the linout argument to true. Look at the structure of your model. With a clustering problem, the output is usually a factor that is coded as multiple dummy variables instead of a single numeric value. As a consequence, the output layer has one less neuron than the number of levels of the output factor.

Exercise 9
Make prediction with the values of the test set.

Exercise 10
Create the confusion table of your prediction and compute the accuracy of the model.

Related exercise sets:
  1. Evaluate your model with R Exercises
  2. Getting started with Plotly: basic Plots
  3. Network Analysis Part 3 Exercises
  4. Explore all our (>1000) R exercises
  5. Find an R course using our R Course Finder directory
To leave a comment for the author, please follow the link and comment on their blog: R-exercises.

Introducing the MonteCarlo Package

Thu, 06/08/2017 - 17:26

(This article was first published on R – first differences, and kindly contributed to R-bloggers)

My first R package has been released on CRAN recently. It is named MonteCarlo and aims to make simulation studies as easy as possible – including parallelization and the generation of tables.

What Are Simulation Studies Good For?

Monte Carlo simulations are an essential tool in statistics and related disciplines. They are routinely used to determine distributional properties, where no analytical results are available. A typical example is to study the finite sample properties of a new statistical procedure. Finite sample properties are often unknown, since theoretical results usually rely on asymptotic theory that only applies to infinite sample sizes. This may be a good approximation in practice if the sample size is large, but it may also be bad if the sample is small or if the rate of convergence is very slow.

Simulation studies are also extremely useful to compare the performance of alternative procedures for the same task. Imagine you want to test whether your sample comes from a Gaussian distribution and you have to decide whether to use the Jarque-Bera test or the Kolmogorov-Smirnov test. Both appear to be legitimate choices. A simulation study that is tailored so that it reflects the situation at hand might uncover that one of the procedures is much more powerful than the other.

Finally, small scale simulation studies are an essential tool for statistical programming. Testing is an essential part of programming and software development. Usually, if one writes a function, it is good practice to let it run on some test cases. This is easy, if the outcome is deterministic. But if your function is a statistical test or an estimator, the output depends on the sample and is stochastic. Therefore, the only way to test whether the implementation is correct, is to generate a large number of samples and to see whether it has the expected statistical properties. For example one might test whether the mean squared error of an estimator converges to zero as the sample size increases, or whether the test has the correct alpha error.

Therefore, writing Monte Carlo simulations is an everyday task in many areas of statistics. This comes with considerable effort. It is not unusual that the required lines of code to produce a simulation study are a multiple of those needed to implement the procedure of interest. As a consequence, simulations are also one of the main sources of errors. On top of this, the large computational effort often requires parallelization, which brings additional complications and programming effort. These efforts can often be prohibitive – especially for less advanced users.

The MonteCarlo Package

The MonteCarlo package streamlines the process described above. It allows you to create simulation studies and to summarize their results in LaTeX tables quickly and easily.

There are only two main functions in the package:

  1. MonteCarlo() runs a simulation study for a user-defined parameter grid. It handles the generation of loops over the parameter grid and parallelizes the computation on a user-specified number of CPU units.
  2. MakeTable() creates LaTeX tables from the output of MonteCarlo(). It stacks high dimensional output arrays into tables with a user specified ordering of rows and columns.

To run a simulation study, the user has to nest both – the generation of a sample and the calculation of the desired statistics from this sample – in a single function. This function is passed to MonteCarlo(). No additional programming is required. This approach is not only very versatile – it is also very intuitive. The user formulates his experiment as if he/she was only interested in a single draw.

The aim of this approach is to allow the user full control and flexibility with regard to the design of the Monte Carlo experiment, but to take away all the effort of setting up the technical part of the simulation.

A Simple Example: The t-test

Suppose we want to evaluate the performance of a standard t-test for the hypothesis that the mean is equal to zero. We are interested in seeing how the size and power of the test change with the sample size (n), the distance from the null hypothesis (loc for location) and the standard deviation of the distribution (scale). The sample is generated from a normal distribution.

To conduct this analysis, we proceed as follows. First, we load the MonteCarlo package.

library(MonteCarlo)

Then define the following function.

#########################################
##      Example: t-test

# Define function that generates data and applies the method of interest
ttest <- function(n, loc, scale){

  # generate sample:
  sample <- rnorm(n, loc, scale)

  # calculate test statistic:
  stat <- sqrt(n) * mean(sample) / sd(sample)

  # get test decision:
  decision <- abs(stat) > 1.96

  # return result:
  return(list("decision" = decision))
}

As discussed above, ttest() is formulated in a way as if we only want to generate a single test decision. The arguments of the function are the parameters we are interested in. ttest() carries out 4 steps:

  1. Draw a sample of n observations from a normal distribution with mean loc and standard deviation scale.
  2. Calculate the t-statistic.
  3. Determine the test decision.
  4. Return the desired result in form of a list.

We then define the combinations of parameters that we are interested in and collect them in a list. The elements of the lists must have the same names as the parameters for which we want to supply grids.

# define parameter grid:
n_grid <- c(50, 100, 250, 500)
loc_grid <- seq(0, 1, 0.2)
scale_grid <- c(1, 2)

# collect parameter grids in list:
param_list = list("n" = n_grid, "loc" = loc_grid, "scale" = scale_grid)

To run the simulation, the function ttest() and the parameter grid (param_list) are passed to MonteCarlo(), together with the desired number of Monte Carlo repetitions (nrep=1000).

# run simulation:
MC_result <- MonteCarlo(func = ttest, nrep = 1000, param_list = param_list)

There is no further coding required. All the mechanics of the Monte Carlo experiment are handled by the MonteCarlo() function.

Calling summary() produces short information about the simulation.

summary(MC_result)
## Simulation of function:
##
## function(n, loc, scale){
##
##   # generate sample:
##   sample <- rnorm(n, loc, scale)
##
##   # calculate test statistic:
##   stat <- sqrt(n) * mean(sample) / sd(sample)
##
##   # get test decision:
##   decision <- abs(stat) > 1.96
##
##   # return result:
##   return(list("decision" = decision))
## }
##
## Required time: 13.38 secs for nrep = 1000 repetitions on 1 CPUs
##
## Parameter grid:
##
##     n : 50 100 250 500
##   loc : 0 0.2 0.4 0.6 0.8 1
## scale : 1 2
##
##
## 1 output arrays of dimensions: 4 6 2 1000

As one can see from the summary, the simulation results are stored in an array of dimension c(4,6,2,1000), where the Monte Carlo repetitions are collected in the last dimension of the array.

To summarize the results in a reasonable way and to include them as a table in a paper or report, we have to represent them in a matrix. This is handled by the MakeTable() function that stacks the submatrices collected in the array in the rows and columns of a matrix and prints the result in the form of code to generate a LaTeX table.

To determine in which order the results are stacked in rows and columns, we supply the function arguments rows and cols. These are vectors of the names of the parameters in the order in which we want them to appear in the table (sorted from the inside to the outside).

# generate table:
MakeTable(output = MC_result, rows = "n", cols = c("loc", "scale"), digits = 2, include_meta = FALSE)
## \begin{table}[h]
## \centering
## \resizebox{ 1 \textwidth}{!}{%
## \begin{tabular}{ rrrrrrrrrrrrrrr }
## \hline\hline\\\\
## scale && \multicolumn{ 6 }{c}{ 1 } & & \multicolumn{ 6 }{c}{ 2 } \\
## n/loc & & 0 & 0.2 & 0.4 & 0.6 & 0.8 & 1 & & 0 & 0.2 & 0.4 & 0.6 & 0.8 & 1 \\
##  & & & & & & & & & & & & & & \\
## 50 & & 0.05 & 0.30 & 0.83 & 0.98 & 1.00 & 1.00 & & 0.05 & 0.10 & 0.28 & 0.55 & 0.79 & 0.94 \\
## 100 & & 0.05 & 0.51 & 0.98 & 1.00 & 1.00 & 1.00 & & 0.07 & 0.16 & 0.53 & 0.84 & 0.98 & 1.00 \\
## 250 & & 0.05 & 0.89 & 1.00 & 1.00 & 1.00 & 1.00 & & 0.05 & 0.35 & 0.90 & 1.00 & 1.00 & 1.00 \\
## 500 & & 0.05 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & & 0.06 & 0.58 & 1.00 & 1.00 & 1.00 & 1.00 \\
## \\
## \\
## \hline\hline
## \end{tabular}%
## }
## \caption{ decision }
## \end{table}

To change the ordering, just change the vectors rows and cols.

# generate table:
MakeTable(output = MC_result, rows = c("n", "scale"), cols = "loc", digits = 2, include_meta = FALSE)
## \begin{table}[h]
## \centering
## \resizebox{ 1 \textwidth}{!}{%
## \begin{tabular}{ rrrrrrrrr }
## \hline\hline\\\\
## scale & n/loc & & 0 & 0.2 & 0.4 & 0.6 & 0.8 & 1 \\
##  & & & & & & & & \\
## \multirow{ 4 }{*}{ 1 } & 50 & & 0.05 & 0.30 & 0.83 & 0.98 & 1.00 & 1.00 \\
##  & 100 & & 0.05 & 0.51 & 0.98 & 1.00 & 1.00 & 1.00 \\
##  & 250 & & 0.05 & 0.89 & 1.00 & 1.00 & 1.00 & 1.00 \\
##  & 500 & & 0.05 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
##  & & & & & & & & \\
## \multirow{ 4 }{*}{ 2 } & 50 & & 0.05 & 0.10 & 0.28 & 0.55 & 0.79 & 0.94 \\
##  & 100 & & 0.07 & 0.16 & 0.53 & 0.84 & 0.98 & 1.00 \\
##  & 250 & & 0.05 & 0.35 & 0.90 & 1.00 & 1.00 & 1.00 \\
##  & 500 & & 0.06 & 0.58 & 1.00 & 1.00 & 1.00 & 1.00 \\
## \\
## \\
## \hline\hline
## \end{tabular}%
## }
## \caption{ decision }
## \end{table}

Now we can simply copy the code and add it to our paper, report or presentation. That is all. Only make sure that the package multirow is included in the header of the .tex file.

Parallelised Simulation

If the procedure you are interested in is not so fast, or you need a large number of replications to produce very accurate results, you might want to use parallelized computation on multiple cores of your computer (or cluster). To achieve this, simply specify the number of CPUs by supplying a value for the ncpus argument of MonteCarlo() as shown below. Of course you should actually have at least the specified number of units.

# run simulation:
MC_result <- MonteCarlo(func = ttest, nrep = 1000, param_list = param_list, ncpus = 4)

This automatically sets up a snow cluster, including the export of functions and the loading of packages. The user does not have to take care of anything.

Further Information

This is an introduction to get you up and running with the MonteCarlo package as quickly as possible. Therefore, I only included a short example. However, the functions MonteCarlo() and particularly MakeTable() provide much more functionality. This is described in more detail in the package vignette, which also provides additional examples.

Filed under: R

To leave a comment for the author, please follow the link and comment on their blog: R – first differences.

Words growing or shrinking in Hacker News titles: a tidy analysis

Thu, 06/08/2017 - 16:00

(This article was first published on Variance Explained, and kindly contributed to R-bloggers)

In May, some friends and I built Tagger News, a real-time automatic classifier of Hacker News articles based on their text (see here for more about how we built it). This process started me down some interesting paths, particularly analyzing trends in titles. By finding words that became more or less common in Hacker News titles over time, we can see what topics were gaining or losing interest in the tech world.

I thought I’d share some of these analyses here. As with most of my text analysis posts, I’ll be analyzing it with the tidytext package written by me and Julia Silge. (Check out our soon-to-be-released book on the subject, Text Mining with R!)

Processing one million Hacker News articles

The titles (and dates, and links) of Hacker News articles are helpfully stored on Google Bigquery. We start with a dataset of a million Hacker News article titles, which covers about 3 and a half years of posts. I downloaded it on 2017-06-04 with the following query:

SELECT id, score, time, title, url
FROM [bigquery-public-data:hacker_news.full]
WHERE deleted IS null
  AND dead IS null
  AND type = 'story'
ORDER BY time DESC
LIMIT 1000000

I’m hosting the dataset so that you can read it yourself (note that it’s a moderately large and compressed file so it may take a minute to read) as follows:

library(tidyverse)
library(lubridate)

stories <- read_csv("http://varianceexplained.org/files/stories_1000000.csv.gz") %>%
  mutate(time = as.POSIXct(time, origin = "1970-01-01"),
         month = round_date(time, "month"))

Whenever we read in a text dataset, the first step is usually to tokenize it (split it up into words) and remove stopwords (like “the” or “and”). As described here, we can do this with tidytext’s unnest_tokens function.

library(tidyverse)
library(tidytext)
library(stringr)

title_words <- stories %>%
  arrange(desc(score)) %>%
  distinct(title, .keep_all = TRUE) %>%
  unnest_tokens(word, title, drop = FALSE) %>%
  distinct(id, word, .keep_all = TRUE) %>%
  anti_join(stop_words, by = "word") %>%
  filter(str_detect(word, "[^\\d]")) %>%
  group_by(word) %>%
  mutate(word_total = n()) %>%
  ungroup()

This creates a pretty large data frame (4443083 rows) with one row per word per post. For example, we could use this to find and visualize the most common words in Hacker News posts during this 3.5 year period.

word_counts <- title_words %>%
  count(word, sort = TRUE)

word_counts %>%
  head(25) %>%
  mutate(word = reorder(word, n)) %>%
  ggplot(aes(word, n)) +
  geom_col(fill = "lightblue") +
  scale_y_continuous(labels = comma_format()) +
  coord_flip() +
  labs(title = "Most common words in Hacker News titles",
       subtitle = "Among the last million stories; stop words removed",
       y = "# of uses")

The top result, “hn”, comes from the common constructions of “Show HN” and “Ask HN”, which are often prepended to a title. Among the other words, there’s nothing we wouldn’t have expected from following Hacker News. There are some notable technology companies like Google, Apple, and Facebook, along with common topics in tech discussions like “app”, “data” and “startup”.

Change over time

What words and topics have become more frequent, or less frequent, over time? These could give us a sense of the changing technology ecosystem, and let us predict what topics will continue to grow in relevance.

To achieve this, we’ll first count the occurrences of words in titles by month.

stories_per_month <- stories %>%
  group_by(month) %>%
  summarize(month_total = n())

word_month_counts <- title_words %>%
  filter(word_total >= 1000) %>%
  count(word, month) %>%
  complete(word, month, fill = list(n = 0)) %>%
  inner_join(stories_per_month, by = "month") %>%
  mutate(percent = n / month_total) %>%
  mutate(year = year(month) + yday(month) / 365)

word_month_counts
## # A tibble: 33,480 x 6
##    word      month     n month_total     percent     year
##
##  1  2.0 2013-10-01    42       20542 0.002044592 2013.751
##  2  2.0 2013-11-01    55       23402 0.002350226 2013.836
##  3  2.0 2013-12-01    26       21945 0.001184780 2013.918
##  4  2.0 2014-01-01    25       19186 0.001303033 2014.003
##  5  2.0 2014-02-01    31       22295 0.001390446 2014.088
##  6  2.0 2014-03-01    37       21118 0.001752060 2014.164
##  7  2.0 2014-04-01    33       23673 0.001393993 2014.249
##  8  2.0 2014-05-01    28       21394 0.001308778 2014.332
##  9  2.0 2014-06-01    47       19265 0.002439657 2014.416
## 10  2.0 2014-07-01    25       19829 0.001260780 2014.499
## # ... with 33,470 more rows

We can then use my broom package to fit a model (logistic regression) to examine whether the frequency of each word is increasing or decreasing over time. Every term will then have a growth rate (as an exponential term) associated with it.

library(broom)

mod <- ~ glm(cbind(n, month_total - n) ~ year, ., family = "binomial")

slopes <- word_month_counts %>%
  nest(-word) %>%
  mutate(model = map(data, mod)) %>%
  unnest(map(model, tidy)) %>%
  filter(term == "year") %>%
  arrange(desc(estimate))

slopes
## # A tibble: 744 x 6
##             word  term  estimate  std.error statistic       p.value
##
##  1         trump  year 1.7570144 0.04052662  43.35458  0.000000e+00
##  2            ai  year 0.7830541 0.01756776  44.57335  0.000000e+00
##  3    blockchain  year 0.6557110 0.02682345  24.44544 5.626574e-132
##  4        neural  year 0.6270933 0.02617919  23.95388 8.418336e-127
##  5         react  year 0.6027489 0.01628292  37.01725 6.045233e-300
##  6            vr  year 0.5260498 0.02247906  23.40177 4.099556e-121
##  7           bot  year 0.5178669 0.02600166  19.91669  2.916460e-88
##  8           iot  year 0.5076088 0.02514613  20.18636  1.290270e-90
##  9 microservices  year 0.4933223 0.03180060  15.51299  2.833910e-54
## 10         slack  year 0.4718030 0.02287605  20.62432  1.660491e-94
## # ... with 734 more rows

We can now ask: what terms have been increasing in frequency in Hacker News titles?

It makes sense that “Trump” is the word that’s growing most quickly. While it shows a moderate growth after Trump announced his candidacy in mid-2015, it shows a sharp increase around the time of the 2016 election (though it hasn’t been quite as dominant in the months since his inauguration). The next fastest growing terms include “AI” and “blockchain”, both topics that have shown a recent surge of interest. (We can also see “machine learning”, “AWS”, and “bot” among the growing terms).
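The figure itself isn't reproduced here, but a sketch along these lines, using the slopes and word_month_counts data frames defined above, could recreate it; the choice of nine terms and the styling are guesses rather than the original code.

library(scales)

slopes %>%
  head(9) %>%
  inner_join(word_month_counts, by = "word") %>%
  ggplot(aes(month, percent, color = word)) +
  geom_line(show.legend = FALSE) +
  facet_wrap(~ word, scales = "free_y") +
  scale_y_continuous(labels = percent_format()) +
  expand_limits(y = 0) +
  labs(x = "Year",
       y = "Percent of titles containing the term",
       title = "9 fastest-growing words in Hacker News titles")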

What words have been decreasing in frequency in Hacker News titles?

This shows a few topics in which interest has died out since 2013, including Google Glass and Gmail. Discussion of Edward Snowden and the NSA was fresh news around 2013-2014, so it makes sense that it’s declined since. There are also some notable technologies that people write about less, such as AngularJS, HTML5, and Ruby on Rails. It’s interesting to compare these to Stack Overflow Trends during that time period, in which AngularJS has been growing in terms of Stack Overflow questions asked (HTML5 and Rails have leveled off). It’s possible that discussion on HN is a “leading indicator”, showing a surge of articles when a technology first gets popular.

(I don’t currently have a guess for why “million” and “billion” had sudden dropoffs in 2014. Is it some artifact of the Hacker News policy, with the word becoming edited or deleted in newer posts? Or is it a real change in what the site discusses?)

Interestingly, the conversation around “bitcoin” peaked around 2013-2014 (with a recent uptick due to a surge in Bitcoin’s price), while discussion of the blockchain has been growing in the years since. Comparing them on the same axes makes it clear that the blockchain has roughly “caught up to” bitcoin in terms of general interest:
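A hedged sketch of that comparison, again reusing the word_month_counts data frame (the original plot's styling may differ):

word_month_counts %>%
  filter(word %in% c("bitcoin", "blockchain")) %>%
  ggplot(aes(month, percent, color = word)) +
  geom_line() +
  scale_y_continuous(labels = percent_format()) +
  labs(x = "Year",
       y = "Percent of titles containing the term",
       color = "Word")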

Words that passed their peak

We can learn a lot by examining words like these, but we’re missing an important piece of the puzzle: there are some words that grew in interest for part of this time, then “passed their peak.” This is more complicated to explore quantitatively, but one approach is to find words where the ratio of the peak in the trend to the average is especially high. To reduce the effect of random noise, we use a cubic spline to smooth each trend before determining the peak.

library(splines)

mod2 <- ~ glm(cbind(n, month_total - n) ~ ns(year, 4), ., family = "binomial")

# Fit a cubic spline to each shape
spline_predictions <- word_month_counts %>%
  mutate(year = as.integer(as.Date(month)) / 365) %>%
  nest(-word) %>%
  mutate(model = map(data, mod2)) %>%
  unnest(map2(model, data, augment, type.predict = "response"))

# Find the terms with the highest peak / average ratio
peak_per_month <- spline_predictions %>%
  group_by(word) %>%
  mutate(average = mean(.fitted)) %>%
  top_n(1, .fitted) %>%
  ungroup() %>%
  mutate(ratio = .fitted / average) %>%
  filter(month != min(month), month != max(month)) %>%
  top_n(16, ratio)

Here are 16 terms that had strong peaks at various points in the last 3.5 years.

peak_per_month %>%
  select(word, peak = month) %>%
  inner_join(spline_predictions, by = "word") %>%
  mutate(word = reorder(word, peak)) %>%
  ggplot(aes(month, percent)) +
  geom_line(aes(color = word), show.legend = FALSE) +
  geom_line(aes(y = .fitted), lty = 2) +
  facet_wrap(~ word, scales = "free_y") +
  scale_y_continuous(labels = percent_format()) +
  expand_limits(y = 0) +
  labs(x = "Year",
       y = "Percent of titles containing this term",
       title = "16 words that peaked then declined in Hacker News titles",
       subtitle = "Spline fit (df = 4) shown.\nSelected based on the peak of the spline divided by the overall average; ordered by peak month.")

We can see a peak of discussion around net neutrality in 2015 (though it’s shown a recent resurgence of interest). You can spot the introduction of Swift and the Apple Watch, then a moderate decline in discussion around them, and there are sudden jolts of discussion around Oculus in 2014 (with Facebook’s purchase of the company) and the FBI in 2016 (news around Clinton’s email server). The graph suggests the discussion around Slack, bots, Tesla, and virtual reality may have leveled off (though in some cases it may be too early to tell).

What’s next

I have some more analyses of Hacker News posts I’ll be sharing in the future, including analyses of word correlations, examination of what words and topics tend to get upvoted, and some improvements on the Tagger News classification algorithm. The series will also give me the chance to highlight more techniques from Text Mining with R, including word networks and topic modeling.

To leave a comment for the author, please follow the link and comment on their blog: Variance Explained.

Test-driving Microsoft Cognitive Toolkit in R using reticulate

Thu, 06/08/2017 - 15:27

(This article was first published on R snippets, and kindly contributed to R-bloggers)

New tools for data science pop up constantly, so it is hard to keep up with the changes and choose those that promise to be useful in the long run.
Recently two new solutions were announced that look very promising: the reticulate package for R and Microsoft Cognitive Toolkit 2.0 (CNTK). Together with Wit Jakuczun, we decided to test drive them on an Azure Data Science Virtual Machine.
CNTK is a very promising deep learning toolkit, backed by Microsoft. Unfortunately it provides bindings to Python and not to R. Therefore it is an ideal situation to test the reticulate package, which provides an R interface to Python. We wanted to test CNTK on GPUs, so we decided to use an Azure Data Science Virtual Machine NV6 instance with a Tesla M60 GPU.
For our test drive we used the classic MNIST dataset. You can find all the source code and detailed results of the test on GitHub. The short conclusion is that all components worked and played together excellently:

  • on CNTK we built an MLP network model, which reported a 2.3% classification error without any special tuning – a pretty decent result;
  • calling a Python library from R using reticulate was simple and stable (see the short sketch after this list);
  • calculations using the GPU gave a significant speedup, and configuration of the Azure DSVM was really simple.
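As a flavour of what the reticulate workflow looks like, here is a tiny generic sketch (importing numpy rather than CNTK; our actual model code is in the GitHub repo):

library(reticulate)

np <- import("numpy")      # import a Python module into R
x <- np$array(c(1, 2, 3))  # call Python functions with the $ operator
np$mean(x)                 # returns 2, converted back to an R value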
In summary, we have found the combo MSFT CNTK + R/reticulate + Azure DSVM to be simple to set up and powerful. It is definitely worth checking out in your next data science project.

To leave a comment for the author, please follow the link and comment on their blog: R snippets.

Deep Learning with R

Thu, 06/08/2017 - 14:40

(This article was first published on R Programming – DataScience+, and kindly contributed to R-bloggers)

For R users, there hasn’t been a production grade solution for deep learning (sorry MXNET). This post introduces the Keras interface for R and how it can be used to perform image classification. The post ends by providing some code snippets that show Keras is intuitive and powerful.

Tensorflow

Last January, Tensorflow for R was released, which provided access to the Tensorflow API from R. However, for most R users, the interface was not very R-like.

Take a look at this code chunk for training a model:

cross_entropy <- tf$reduce_mean(-tf$reduce_sum(y_ * tf$log(y_conv), reduction_indices = 1L))
train_step <- tf$train$AdamOptimizer(1e-4)$minimize(cross_entropy)
correct_prediction <- tf$equal(tf$argmax(y_conv, 1L), tf$argmax(y_, 1L))
accuracy <- tf$reduce_mean(tf$cast(correct_prediction, tf$float32))
sess$run(tf$global_variables_initializer())

for (i in 1:20000) {
  batch <- mnist$train$next_batch(50L)
  if (i %% 100 == 0) {
    train_accuracy <- accuracy$eval(feed_dict = dict(
      x = batch[[1]], y_ = batch[[2]], keep_prob = 1.0))
    cat(sprintf("step %d, training accuracy %g\n", i, train_accuracy))
  }
  train_step$run(feed_dict = dict(
    x = batch[[1]], y_ = batch[[2]], keep_prob = 0.5))
}

test_accuracy <- accuracy$eval(feed_dict = dict(
  x = mnist$test$images, y_ = mnist$test$labels, keep_prob = 1.0))

Compare that with the equivalent training call through the Keras interface:

model %>% fit(
  x = train_x, y = train_y,
  epochs = epochs, batch_size = batch_size,
  validation_data = valid
)

Image Classification with Keras

So if you are still with me, let me show you how to build deep learning models using R, Keras, and Tensorflow together. You will find a Github repo that contains the code and data you will need. Included is an R notebook that walks through building an image classifier (telling cat from dog), but can easily be generalized to other images. The walk through includes advanced methods that are commonly used for production deep learning work including:

  • augmenting data
  • using the bottleneck features of a pre-trained network
  • fine-tuning the top layers of a pre-trained network
  • saving weights for your models
Code Snippets of Keras

The R interface to Keras truly makes it easy to build deep learning models in R. Here are some code snippets to illustrate how intuitive and useful Keras for R is:

To load picture from a folder:

train_generator <- flow_images_from_directory(
  train_directory,
  generator = image_data_generator(),
  target_size = c(img_width, img_height),
  color_mode = "rgb",
  class_mode = "binary",
  batch_size = batch_size,
  shuffle = TRUE,
  seed = 123)

To define a simple convolutional neural network:

model <- keras_model_sequential()

model %>%
  layer_conv_2d(filter = 32, kernel_size = c(3,3), input_shape = c(img_width, img_height, 3)) %>%
  layer_activation("relu") %>%
  layer_max_pooling_2d(pool_size = c(2,2)) %>%
  layer_conv_2d(filter = 32, kernel_size = c(3,3)) %>%
  layer_activation("relu") %>%
  layer_max_pooling_2d(pool_size = c(2,2)) %>%
  layer_conv_2d(filter = 64, kernel_size = c(3,3)) %>%
  layer_activation("relu") %>%
  layer_max_pooling_2d(pool_size = c(2,2)) %>%
  layer_flatten() %>%
  layer_dense(64) %>%
  layer_activation("relu") %>%
  layer_dropout(0.5) %>%
  layer_dense(1) %>%
  layer_activation("sigmoid")
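The post doesn't show the compile-and-fit step for this model, so the following is only a plausible sketch: the loss and optimizer choices, and the epochs, train_samples, validation_samples and validation_generator names, are placeholders rather than values from the original repo.

model %>% compile(
  loss = "binary_crossentropy",   # cat vs. dog is a binary problem
  optimizer = "rmsprop",
  metrics = "accuracy"
)

model %>% fit_generator(
  train_generator,
  steps_per_epoch = as.integer(train_samples / batch_size),
  epochs = epochs,
  validation_data = validation_generator,
  validation_steps = as.integer(validation_samples / batch_size)
)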

To augment data:

augment <- image_data_generator(rescale = 1/255,
                                shear_range = 0.2,
                                zoom_range = 0.2,
                                horizontal_flip = TRUE)

To load a pretrained network:

model_vgg <- application_vgg16(include_top = FALSE, weights = "imagenet")

To save model weights:

save_model_weights_hdf5(model_ft, 'finetuning_30epochs_vggR.h5', overwrite = TRUE)

I believe the Keras for R interface will make it much easier for R users and the R community to build and refine deep learning models with R. This means you don't have to force everyone to use Python to build, refine, and test your models. I really think this will open up deep learning to a wider audience that was a bit apprehensive about using Python. So for now, give it a spin!

Grab my github repo, fire up RStudio (or your IDE of choice), and go build a simple classifier using Keras.

    Related Post

    1. Unsupervised Learning and Text Mining of Emotion Terms Using R
    2. Using MCA and variable clustering in R for insights in customer attrition
    3. Web Scraping and Applied Clustering Global Happiness and Social Progress Index
    4. Key Phrase Extraction from Tweets
    5. Financial time series forecasting – an easy approach
To leave a comment for the author, please follow the link and comment on their blog: R Programming – DataScience+.

    Add P-values and Significance Levels to ggplots

    Thu, 06/08/2017 - 11:48

    (This article was first published on Easy Guides, and kindly contributed to R-bloggers)

In this article, we’ll describe how to easily i) compare the means of two or multiple groups; and ii) automatically add p-values and significance levels to a ggplot (such as box plots, dot plots, bar plots and line plots).


Prerequisites

Install and load required R packages

    Required R package: ggpubr (version >= 0.1.3), for ggplot2-based publication ready plots.

• Install from CRAN as follows:
    install.packages("ggpubr")
• Or, install the latest development version from GitHub as follows:
if(!require(devtools)) install.packages("devtools")
devtools::install_github("kassambara/ggpubr")
    • Load ggpubr:
    library(ggpubr)

    Official documentation of ggpubr is available at: http://www.sthda.com/english/rpkgs/ggpubr

    Demo data sets

    Data: ToothGrowth data sets.

    data("ToothGrowth") head(ToothGrowth) len supp dose 1 4.2 VC 0.5 2 11.5 VC 0.5 3 7.3 VC 0.5 4 5.8 VC 0.5 5 6.4 VC 0.5 6 10.0 VC 0.5 Methods for comparing means

    The standard methods to compare the means of two or more groups in R, have been largely described at: comparing means in R.

    The most common methods for comparing means include:

Methods          R function        Description
T-test           t.test()          Compare two groups (parametric)
Wilcoxon test    wilcox.test()     Compare two groups (non-parametric)
ANOVA            aov() or anova()  Compare multiple groups (parametric)
Kruskal-Wallis   kruskal.test()    Compare multiple groups (non-parametric)

A practical guide to computing and interpreting the results of each of these methods is provided at the following links:

    R functions to add p-values

    Here we present two new R functions in the ggpubr package:

• compare_means(): an easy-to-use solution to perform one or multiple mean comparisons.
• stat_compare_means(): an easy-to-use solution to automatically add p-values and significance levels to a ggplot.
    compare_means()

    As we’ll show in the next sections, it has multiple useful options compared to the standard R functions.

The simplified format is as follows:

    compare_means(formula, data, method = "wilcox.test", paired = FALSE, group.by = NULL, ref.group = NULL, ...)
    • formula: a formula of the form x ~ group, where x is a numeric variable and group is a factor with one or multiple levels. For example, formula = TP53 ~ cancer_group. It’s also possible to perform the test for multiple response variables at the same time. For example, formula = c(TP53, PTEN) ~ cancer_group.
    • data: a data.frame containing the variables in the formula.

    • method: the type of test. Default is “wilcox.test”. Allowed values include:

  • “t.test” (parametric) and “wilcox.test” (non-parametric). Perform comparison between two groups of samples. If the grouping variable contains more than two levels, then a pairwise comparison is performed.
      • “anova” (parametric) and “kruskal.test” (non-parametric). Perform one-way ANOVA test comparing multiple groups.
    • paired: a logical indicating whether you want a paired test. Used only in t.test and in wilcox.test.

    • group.by: variables used to group the data set before applying the test. When specified the mean comparisons will be performed in each subset of the data formed by the different levels of the group.by variables.

    • ref.group: a character string specifying the reference group. If specified, for a given grouping variable, each of the group levels will be compared to the reference group (i.e. control group). ref.group can be also “.all.”. In this case, each of the grouping variable levels is compared to all (i.e. base-mean).
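As a quick illustration of ref.group with the ToothGrowth data loaded above (a sketch; the sections below walk through fuller examples):

# compare each dose level to the 0.5 dose taken as reference group
compare_means(len ~ dose, data = ToothGrowth,
              method = "t.test", ref.group = "0.5")

# compare each dose level against the base mean of all values
compare_means(len ~ dose, data = ToothGrowth,
              method = "t.test", ref.group = ".all.")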

    stat_compare_means()

This function extends ggplot2 for adding mean comparison p-values to a ggplot, such as box plots, dot plots, bar plots and line plots.

The simplified format is as follows:

stat_compare_means(mapping = NULL, comparisons = NULL, hide.ns = FALSE,
                   label = NULL, label.x = NULL, label.y = NULL, ...)
    • mapping: Set of aesthetic mappings created by aes().

    • comparisons: A list of length-2 vectors. The entries in the vector are either the names of 2 values on the x-axis or the 2 integers that correspond to the index of the groups of interest, to be compared.

    • hide.ns: logical value. If TRUE, hide ns symbol when displaying significance levels.

    • label: character string specifying label type. Allowed values include “p.signif” (shows the significance levels), “p.format” (shows the formatted p value).

    • label.x,label.y: numeric values. coordinates (in data units) to be used for absolute positioning of the label. If too short they will be recycled.

    • ...: other arguments passed to the function compare_means(), such as method, paired, ref.group.

    Compare two independent groups

    Perform the test:

    compare_means(len ~ supp, data = ToothGrowth)

    # A tibble: 1 x 8
        .y. group1 group2          p      p.adj p.format p.signif   method
    1   len     OJ     VC 0.06449067 0.06449067    0.064       ns Wilcoxon

    By default method = “wilcox.test” (non-parametric test). You can also specify method = “t.test” for a parametric t-test.

    Returned value is a data frame with the following columns:

    • .y.: the y variable used in the test.
    • p: the p-value
    • p.adj: the adjusted p-value. The default value for p.adjust.method is “holm” (see the example after this list)
    • p.format: the formatted p-value
    • p.signif: the significance level.
    • method: the statistical test used to compare groups.
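    As a side note of mine (not from the original text): compare_means() also exposes a p.adjust.method argument, so you can switch from the default “holm” correction to, say, Benjamini-Hochberg (check ?compare_means for the full signature):

    # same pairwise comparisons, but with Benjamini-Hochberg adjusted p-values
    compare_means(len ~ dose, data = ToothGrowth,
                  method = "wilcox.test", p.adjust.method = "BH")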

    Create a box plot with p-values:

    p <- ggboxplot(ToothGrowth, x = "supp", y = "len", color = "supp", palette = "jco", add = "jitter") # Add p-value p + stat_compare_means() # Change method p + stat_compare_means(method = "t.test")


    Note that the p-value label position can be adjusted using the arguments label.x, label.y, hjust and vjust.

    The default p-value label displayed is obtained by concatenating the method and the p columns of the returned data frame by the function compare_means(). You can specify other combinations using the aes() function.

    For example,

    • aes(label = ..p.format..) or aes(label = paste0(“p =”, ..p.format..)): display only the formatted p-value (without the method name)
    • aes(label = ..p.signif..): display only the significance level.
    • aes(label = paste0(..method.., “\n”, “p =”, ..p.format..)): Use line break (“\n”) between the method name and the p-value.

    As an illustration, type this:

    p + stat_compare_means( aes(label = ..p.signif..), label.x = 1.5, label.y = 40)


    If you prefer, it’s also possible to specify the argument label as a character vector:

    p + stat_compare_means(label = "p.signif", label.x = 1.5, label.y = 40)

    Compare two paired samples

    Perform the test:

    compare_means(len ~ supp, data = ToothGrowth, paired = TRUE)

    # A tibble: 1 x 8
        .y. group1 group2           p       p.adj p.format p.signif   method
    1   len     OJ     VC 0.004312554 0.004312554   0.0043       ** Wilcoxon

    Visualize paired data using the ggpaired() function:

    ggpaired(ToothGrowth, x = "supp", y = "len", color = "supp", line.color = "gray", line.size = 0.4, palette = "jco")+ stat_compare_means(paired = TRUE)


    Compare more than two groups
    • Global test:
    # Global test
    compare_means(len ~ dose, data = ToothGrowth, method = "anova")

    # A tibble: 1 x 6
        .y.            p        p.adj p.format p.signif method
    1   len 9.532727e-16 9.532727e-16  9.5e-16     **** Anova

    Plot with global p-value:

    # Default method = "kruskal.test" for multiple groups ggboxplot(ToothGrowth, x = "dose", y = "len", color = "dose", palette = "jco")+ stat_compare_means() # Change method to anova ggboxplot(ToothGrowth, x = "dose", y = "len", color = "dose", palette = "jco")+ stat_compare_means(method = "anova")


    • Pairwise comparisons. If the grouping variable contains more than two levels, then pairwise tests will be performed automatically. The default method is “wilcox.test”. You can change this to “t.test”.
    # Perform pairwise comparisons
    compare_means(len ~ dose, data = ToothGrowth)

    # A tibble: 3 x 8
        .y. group1 group2            p        p.adj p.format p.signif   method
    1   len    0.5      1 7.020855e-06 1.404171e-05  7.0e-06     **** Wilcoxon
    2   len    0.5      2 8.406447e-08 2.521934e-07  8.4e-08     **** Wilcoxon
    3   len      1      2 1.772382e-04 1.772382e-04  0.00018      *** Wilcoxon

    # Visualize: Specify the comparisons you want
    my_comparisons <- list(c("0.5", "1"), c("1", "2"), c("0.5", "2"))
    ggboxplot(ToothGrowth, x = "dose", y = "len",
              color = "dose", palette = "jco") +
      stat_compare_means(comparisons = my_comparisons) + # Add pairwise comparisons p-value
      stat_compare_means(label.y = 50)                   # Add global p-value


    If you want to specify the precise y location of bars, use the argument label.y:

    ggboxplot(ToothGrowth, x = "dose", y = "len", color = "dose", palette = "jco")+ stat_compare_means(comparisons = my_comparisons, label.y = c(29, 35, 40))+ stat_compare_means(label.y = 45)


    (Adding the bars connecting the compared groups is facilitated by the ggsignif R package.)

    • Multiple pairwise tests against a reference group:
    # Pairwise comparison against reference
    compare_means(len ~ dose, data = ToothGrowth, ref.group = "0.5", method = "t.test")

    # A tibble: 2 x 8
        .y. group1 group2            p        p.adj p.format p.signif method
    1   len    0.5      1 6.697250e-09 6.697250e-09  6.7e-09     **** T-test
    2   len    0.5      2 1.469534e-16 2.939068e-16  < 2e-16     **** T-test

    # Visualize
    ggboxplot(ToothGrowth, x = "dose", y = "len",
              color = "dose", palette = "jco") +
      stat_compare_means(method = "anova", label.y = 40) +   # Add global p-value
      stat_compare_means(label = "p.signif", method = "t.test",
                         ref.group = "0.5")                  # Pairwise comparison against reference


    • Multiple pairwise tests against all (base-mean):
    # Comparison of each group against base-mean
    compare_means(len ~ dose, data = ToothGrowth, ref.group = ".all.", method = "t.test")

    # A tibble: 3 x 8
        .y. group1 group2            p        p.adj p.format p.signif method
    1   len  .all.    0.5 1.244626e-06 3.733877e-06  1.2e-06     **** T-test
    2   len  .all.      1 5.667266e-01 5.667266e-01     0.57       ns T-test
    3   len  .all.      2 1.371704e-05 2.743408e-05  1.4e-05     **** T-test

    # Visualize
    ggboxplot(ToothGrowth, x = "dose", y = "len",
              color = "dose", palette = "jco") +
      stat_compare_means(method = "anova", label.y = 40) +   # Add global p-value
      stat_compare_means(label = "p.signif", method = "t.test",
                         ref.group = ".all.")                # Pairwise comparison against all


    A typical situation, where pairwise comparisons against “all” can be useful, is illustrated here using the myeloma data set from the survminer package.

    We’ll plot the expression profile of the DEPDC1 gene according to the patients’ molecular groups. We want to know whether there is any difference between groups and, if so, where that difference lies.

    To answer this question, you could perform pairwise comparisons between all 7 groups. This leads to a large number of comparisons, one for each possible pair of groups, which can be difficult to interpret when, as here, there are many groups.

    A simpler solution is to compare each of the seven groups against “all” (i.e. the base mean). When the test is significant, you can conclude that DEPDC1 is significantly over- or under-expressed in that group compared to all groups.

    # Load myeloma data from survminer package
    if(!require(survminer)) install.packages("survminer")
    data("myeloma", package = "survminer")

    # Perform the test
    compare_means(DEPDC1 ~ molecular_group, data = myeloma,
                  ref.group = ".all.", method = "t.test")

    # A tibble: 7 x 8
         .y. group1           group2            p        p.adj p.format p.signif method
    1 DEPDC1  .all.       Cyclin D-1 1.496896e-01 4.490687e-01  0.14969       ns T-test
    2 DEPDC1  .all.       Cyclin D-2 5.231428e-01 1.000000e+00  0.52314       ns T-test
    3 DEPDC1  .all.     Hyperdiploid 2.815333e-04 1.689200e-03  0.00028      *** T-test
    4 DEPDC1  .all. Low bone disease 5.083985e-03 2.541992e-02  0.00508       ** T-test
    5 DEPDC1  .all.              MAF 8.610664e-02 3.444265e-01  0.08611       ns T-test
    6 DEPDC1  .all.            MMSET 5.762908e-01 1.000000e+00  0.57629       ns T-test
    7 DEPDC1  .all.    Proliferation 1.241416e-09 8.689910e-09  1.2e-09     **** T-test

    # Visualize the expression profile
    ggboxplot(myeloma, x = "molecular_group", y = "DEPDC1",
              color = "molecular_group", add = "jitter", legend = "none") +
      rotate_x_text(angle = 45) +
      geom_hline(yintercept = mean(myeloma$DEPDC1), linetype = 2) + # Add horizontal line at base mean
      stat_compare_means(method = "anova", label.y = 1600) +        # Add global anova p-value
      stat_compare_means(label = "p.signif", method = "t.test",
                         ref.group = ".all.")                       # Pairwise comparison against all


    From the plot above, we can conclude that DEPDC1 is significantly overexpressed in the Proliferation group and significantly underexpressed in the Hyperdiploid and Low bone disease groups, compared to all.

    Note that if you want to hide the ns symbols, specify the argument hide.ns = TRUE.

    # Visualize the expression profile
    ggboxplot(myeloma, x = "molecular_group", y = "DEPDC1",
              color = "molecular_group", add = "jitter", legend = "none") +
      rotate_x_text(angle = 45) +
      geom_hline(yintercept = mean(myeloma$DEPDC1), linetype = 2) + # Add horizontal line at base mean
      stat_compare_means(method = "anova", label.y = 1600) +        # Add global anova p-value
      stat_compare_means(label = "p.signif", method = "t.test",
                         ref.group = ".all.", hide.ns = TRUE)       # Pairwise comparison against all


    Multiple grouping variables
    • Two independent sample comparisons after grouping the data by another variable:

    Perform the test:

    compare_means(len ~ supp, data = ToothGrowth, group.by = "dose")

    # A tibble: 3 x 9
       dose   .y. group1 group2           p      p.adj p.format p.signif   method
    1   0.5   len     OJ     VC 0.023186427 0.04637285    0.023        * Wilcoxon
    2   1.0   len     OJ     VC 0.004030367 0.01209110    0.004       ** Wilcoxon
    3   2.0   len     OJ     VC 1.000000000 1.00000000    1.000       ns Wilcoxon

    In the example above, for each level of the variable “dose”, we compare the means of the variable “len” in the different groups formed by the grouping variable “supp”.

    Visualize (1/2). Create multi-panel box plots facetted by group (here, “dose”):

    # Box plot facetted by "dose"
    p <- ggboxplot(ToothGrowth, x = "supp", y = "len",
                   color = "supp", palette = "jco", add = "jitter",
                   facet.by = "dose", short.panel.labs = FALSE)
    # Use only p.format as label. Remove method name.
    p + stat_compare_means(label = "p.format")


    # Or use significance symbol as label
    p + stat_compare_means(label = "p.signif", label.x = 1.5)


    To hide the ‘ns’ symbol, use the argument hide.ns = TRUE.

    Visualize (2/2). Create a single panel containing all box plots. Plot y = “len” by x = “dose” and color by “supp”:

    p <- ggboxplot(ToothGrowth, x = "dose", y = "len", color = "supp", palette = "jco", add = "jitter") p + stat_compare_means(aes(group = supp))


    # Show only p-value
    p + stat_compare_means(aes(group = supp), label = "p.format")


    # Use significance symbol as label
    p + stat_compare_means(aes(group = supp), label = "p.signif")


    • Paired sample comparisons after grouping the data by another variable:

    Perform the test:

    compare_means(len ~ supp, data = ToothGrowth, group.by = "dose", paired = TRUE)

    # A tibble: 3 x 9
       dose   .y. group1 group2          p      p.adj p.format p.signif   method
    1   0.5   len     OJ     VC 0.03296938 0.06593876    0.033        * Wilcoxon
    2   1.0   len     OJ     VC 0.01905889 0.05717667    0.019        * Wilcoxon
    3   2.0   len     OJ     VC 1.00000000 1.00000000    1.000       ns Wilcoxon

    Visualize. Create multi-panel box plots facetted by group (here, “dose”):

    # Box plot facetted by "dose"
    p <- ggpaired(ToothGrowth, x = "supp", y = "len",
                  color = "supp", palette = "jco",
                  line.color = "gray", line.size = 0.4,
                  facet.by = "dose", short.panel.labs = FALSE)
    # Use only p.format as label. Remove method name.
    p + stat_compare_means(label = "p.format", paired = TRUE)


    Other plot types
    • Bar and line plots (one grouping variable):
    # Bar plot of mean +/- se
    ggbarplot(ToothGrowth, x = "dose", y = "len", add = "mean_se") +
      stat_compare_means() +                                     # Global p-value
      stat_compare_means(ref.group = "0.5", label = "p.signif",
                         label.y = c(22, 29))                    # Compare to ref.group
    # Line plot of mean +/- se
    ggline(ToothGrowth, x = "dose", y = "len", add = "mean_se") +
      stat_compare_means() +                                     # Global p-value
      stat_compare_means(ref.group = "0.5", label = "p.signif",
                         label.y = c(22, 29))


    • Bar and line plots (two grouping variables):
    ggbarplot(ToothGrowth, x = "dose", y = "len", add = "mean_se", color = "supp", palette = "jco", position = position_dodge(0.8))+ stat_compare_means(aes(group = supp), label = "p.signif", label.y = 29) ggline(ToothGrowth, x = "dose", y = "len", add = "mean_se", color = "supp", palette = "jco")+ stat_compare_means(aes(group = supp), label = "p.signif", label.y = c(16, 25, 29))


    Infos

    This analysis has been performed using R software (ver. 3.3.2) and ggpubr (ver. 0.1.3).







    UK R Courses

    Thu, 06/08/2017 - 10:30

    (This article was first published on R – Why?, and kindly contributed to R-bloggers)

    Over the next few months we’re running a number of R, Stan and Scala courses around the UK.

    June (London)
    • Mon Jun 26 – Introduction to R
    • Tue Jun 27 – Statistical Modelling with R
    • Wed Jun 28 – Programming with R
    • Thu Jun 29 (2-day course) – Advanced R Programming
    August (Belfast)
    • Thu Aug 10 – Programming with R
    • Fri Aug 11 – Building an R Package
    September 2017 (Newcastle)
    • Mon Sep 11 – Introduction to R
    • Tue Sep 12 – Statistical Modelling with R
    • Wed Sep 13 – Programming with R
    • Thu Sep 14 – Advanced Graphics with R
    • Fri Sep 15 – Automated Reporting (first steps towards Shiny)
    • Mon Sep 18 (5-day course) – Bioconductor
    October (Glasgow)
    • Wed Oct 25 – Advanced Graphics with R
    • Thu Oct 26 (2-day course) – Advanced R Programming
    November (London)
    • Mon Nov 06 (1-day course) – Efficient R Programming
    • Tue Nov 07 (1-day course) – R for Big Data
    December (London/Newcastle)
    • Thu Dec 07 (2-day course) – Introduction to Bayesian Inference using RStan (Newcastle)
    • Mon Dec 11 (3-day course) – Scala for Statistical Computing and Data Science (London)

    See the website for course descriptions. Any questions, feel free to contact me: colin@jumpingrivers.com

    On-site courses are available on request.




    Unconf projects 4: cityquant, notary, packagemetrics, pegax

    Thu, 06/08/2017 - 09:00

    (This article was first published on rOpenSci Blog, and kindly contributed to R-bloggers)

    Continuing our series of blog posts (day 1, day 2, day 3) this week about unconf 17, here are four more projects: cityquant, notary, packagemetrics, and pegax.

    cityquant

    Summary: The goal with the cityquant project was to build a digital dashboard for sustainable cities.

    They also had a "spin-off" project called selfquant to get data from a quantified self google sheets template to keep track of weekly performance in various categories.

    Team: Reka Solymosi, Ben Best, Chelsea Ursaner, Tim Phan, Jasmine Dumas

    Github: https://github.com/ropenscilabs/cityquant

    notary

    Summary: notary is actually two things:

    notary: An R package for signing and verification of R packages. It has functions for installing and verifying packages, validating GitHub releases, sourcing files with verification, and more.

    r-security-practices: A bookdown book targeting users, developers, and admins on R security best practices.

    Team: Stephanie Locke, Oliver Keyes, Rich FitzJohn, Bob Rudis, Jeroen Ooms

    Github: https://github.com/ropenscilabs/notary / https://github.com/ropenscilabs/r-security-practices

    packagemetrics

    Summary: packagemetrics is a package for helping you choose which package to use. Their tool collects metrics including CRAN downloads, GitHub stars, whether it's tidyverse compatible, whether it has tests and vignettes, number of contributors, and more!

    This project combined two ideas from our brainstorming stage: Avoiding redundant / overlapping packages and A framework for reproducible tables.

    Team: Erin Grand, Sam Firke, Hannah Frick, Becca Krouse, Lori Shepherd

    Github: https://github.com/ropenscilabs/packagemetrics

    pegax

    Summary: pegax is a very alpha client for parsing taxonomic names. Taxonomic names are things such as Homo sapiens (human beings) wikispecies, or Ursus americanus (american black bear) wikispecies, or Balaenoptera musculus (blue whale) wikispecies. Taxonomic names can be hard to parse – and thus something called Parsing Expression Grammar (PEG) can be employed to help. We were lucky that Oliver Keyes just started an R package for PEGs in R called piton – which is now used in pegax to parse taxonomic names.

    Team:

    Scott Chamberlain (with help from Oliver Keyes)

    Github: https://github.com/ropenscilabs/pegax




    Tic Tac Toe Part 3: The Minimax Algorithm

    Thu, 06/08/2017 - 02:00

    (This article was first published on The Devil is in the Data, and kindly contributed to R-bloggers)

    In two previous posts, I presented code to teach R to play the trivial game of Tic Tac Toe. I thought this was an unbeatable algorithm. Alas, a comment from Alberto shattered my pride as he was able to beat my code.

    The reason for the demise of my code was that I didn’t implement a full minimax algorithm, but instead looked only two moves ahead. I thought the common strategy rules (start in the centre and occupy the corners) would make the program unbeatable. When I simulated the game by instructing the computer to play against itself, Alberto’s strategy never arose because the code forces the centre field. Alberto’s code shows that you need to look at least three moves ahead for a perfect game. He has been so kind to share his code and gave me permission to publish it.

    Alberto rewrote two functions; for completeness, I have included the full working code that merges his improvements with my earlier work. The first two functions are identical to the previous post. These functions draw the game board and process the human player’s move by waiting for a mouse click.

    # Draw the game board
    draw.board <- function(game) {
      xo <- c("X", " ", "O") # Symbols
      par(mar = rep(1, 4))
      plot.new()
      plot.window(xlim = c(0, 30), ylim = c(0, 30))
      abline(h = c(10, 20), col = "darkgrey", lwd = 4)
      abline(v = c(10, 20), col = "darkgrey", lwd = 4)
      text(rep(c(5, 15, 25), 3), c(rep(25, 3), rep(15, 3), rep(5, 3)), xo[game + 2], cex = 4)
      # Identify location of any three in a row
      square <- t(matrix(game, nrow = 3))
      hor <- abs(rowSums(square))
      if (any(hor == 3)) hor <- (4 - which(hor == 3)) * 10 - 5 else hor <- 0
      ver <- abs(colSums(square))
      if (any(ver == 3)) ver <- which(ver == 3) * 10 - 5 else ver <- 0
      diag1 <- sum(diag(square))
      diag2 <- sum(diag(t(apply(square, 2, rev))))
      # Draw winning lines
      if (all(hor > 0)) for (i in hor) lines(c(0, 30), rep(i, 2), lwd = 10, col = "red")
      if (all(ver > 0)) for (i in ver) lines(rep(i, 2), c(0, 30), lwd = 10, col = "red")
      if (abs(diag1) == 3) lines(c(2, 28), c(28, 2), lwd = 10, col = "red")
      if (abs(diag2) == 3) lines(c(2, 28), c(2, 28), lwd = 10, col = "red")
    }

    # Human player enters a move
    move.human <- function(game) {
      text(4, 0, "Click on screen to move", col = "grey", cex = .7)
      empty <- which(game == 0)
      move <- 0
      while (!move %in% empty) {
        coords <- locator(n = 1) # add lines
        coords$x <- floor(abs(coords$x) / 10) + 1
        coords$y <- floor(abs(coords$y) / 10) + 1
        move <- coords$x + 3 * (3 - coords$y)
      }
      return(move)
    }

    Alberto rewrote the functions that analyse the board and determine the move of the computer. The ganador (Spanish for winning) function assesses the board condition by assigning -10 or + 10 for a winning game and 0 for any other situation.

    ganador <- function(juego, player) {
      game <- matrix(juego, nrow = 3, byrow = T)
      hor <- rowSums(game)
      ver <- colSums(game)
      diag <- c(sum(diag(game)), sum(diag(apply(game, 1, rev))))
      if (-3 %in% c(hor, ver, diag)) return(-10)
      if (3 %in% c(hor, ver, diag)) return(10)
      else return(0)
    }

    The next function is the actual minimax algorithm. If the computer starts, the first move takes a little while because there are many options to assess. The commented lines can be used to make the games faster by forcing a random corner as the opening move.

    The minimax function returns a list with the move and its valuation through the ganador function. The function works recursively until it has filled the board and retains the best scoring move using the minimax method. To avoid the computer always playing the same move in the same situation random variables are added.

    minimax <- function(juego, player) {
      free <- which(juego == 0)
      if (length(free) == 1) {
        juego[free] <- player
        return(list(move = free, U = ganador(juego, player)))
      }
      poss.results <- rep(0, 9)
      for (i in free) {
        game <- juego
        game[i] <- player
        poss.results[i] <- ganador(game, player)
      }
      mm <- ifelse(player == -1, "which.min", "which.max")
      if (any(poss.results == (player * 10))) {
        move <- do.call(mm, list(poss.results))
        return(list(move = move, U = poss.results[move]))
      }
      for (i in free) {
        game <- juego
        game[i] <- player
        poss.results[i] <- minimax(game, -player)$U
      }
      random <- runif(9, 0, 0.1)
      poss.results[-free] <- 100 * -player
      poss.results <- poss.results + (player * random)
      move <- do.call(mm, list(poss.results))
      return(list(move = move, U = poss.results[move]))
    }

    This final function stitches everything together and lets you play the game. Simply paste all functions in your R console and run them to play a game. The tic.tac.toe function can take two parameters, “human” and/or “computer”. The order of the parameters determines who starts the game.

    # Main game engine
    tic.tac.toe <- function(player1 = "human", player2 = "computer") {
      game <- rep(0, 9)  # Empty board
      winner <- FALSE    # Define winner
      player <- 1        # First player
      players <- c(player1, player2)
      draw.board(game)
      while (0 %in% game & !winner) { # Keep playing until win or full board
        if (players[(player + 3) %% 3] == "human") { # Human player
          move <- move.human(game)
        } else {                                     # Computer player
          move <- minimax(game, player)
          move <- move$move
        }
        game[move] <- player # Change board
        draw.board(game)
        winner <- max(eval.game(game, 1), abs(eval.game(game, -1))) == 6 # Winner, winner, chicken dinner?
        player <- -player # Change player
      }
    }

    tic.tac.toe()

    This is my last word on Tic Tac Toe but now that the minimax conundrum is solved I could start working on other similar games such as Connect Four, Draughts or even the royal game of Chess.

    The post Tic Tac Toe Part 3: The Minimax Algorithm appeared first on The Devil is in the Data.




    The Ghost Printer Behind Top-level R Expressions

    Thu, 06/08/2017 - 02:00

    (This article was first published on English Blog on Yihui Xie | 谢益辉, and kindly contributed to R-bloggers)

    For any developers who have ever written an S3 method for the print() function, they probably know what a top-level R expression means, but this is a very confusing concept to non-developers. I have to explain this every now and then, so I decided to write a short post about it.

    Yesterday I saw a Github issue in the rmarkdown repository, and you can see that there are still users confused by the fact that ggplot2 plots are not rendered in certain cases. I have seen similar questions perhaps hundreds of times. Such questions have been answered in the R FAQ 7.22 “Why do lattice/trellis graphics not work?”, but the answer didn’t explain the root reason in detail.

    A top-level R expression is usually implicitly printed. Both words can cause confusion: printing is implicit so that you probably don’t consciously know it, and printing means the print() function is called on the object returned by the expression. For example, when you type 1 + 1 in the R console, and press Enter/Return, what actually happens is print(2), where 2 is the value returned by 1 + 1.

    The reason that you create a ggplot() object in your R console and it shows up after you press Enter is that ggplot2 has defined a print.ggplot method on such objects. ggplot() does not actually create the plot – it only creates a ggplot object. It is the print method that actually creates the plot in a graphical device (as a side-effect).

    Expressions nested in other expressions are not top-level expressions. For example, in a for loop, ggplot objects or HTML widgets or knitr::kable() cannot be displayed because they are not printed, unless you explicitly print() them.

    Top-level R expressions are not always printed. They are not printed if they are marked as invisible. For example, if you run invisible(1 + 1) in the R console, you won’t see any value printed.

    Many R functions return invisible values wrapped in the invisible() function. I want to show a few functions that you may have never recognized: if, (, and assignment operators such as = and <- are all functions. Let’s start from the assignment operator:

    library(ggplot2)
    p = ggplot(mpg, aes(cty, hwy)) + geom_point()

    This creates a ggplot object but you do not see the plot even though this is a top-level expression, and that is because the assignment operator = returns the value invisibly. If you type p in the console later, you will see the plot since p is implicitly printed.

    You can force a top-level expression to be visible (hence printed) by wrapping it in (), e.g., (invisible(1 + 1)) will be printed, and so will the ggplot object below:

    library(ggplot2)
    (p = ggplot(mpg, aes(cty, hwy)) + geom_point())

    The if function returns the value of a last expression depending on the condition. For example, if (TRUE) A else B returns A, and if (FALSE) {A} else {B; C} returns C. With that in mind, you will understand why the ggplot object is not displayed in the if statement below:

    library(ggplot2)
    if (TRUE) {
      ggplot(mpg, aes(cty, hwy)) + geom_point()
      1 + 1  # the value of this expression is returned (and printed)
    }

    Note for, while, and repeat loops always return an invisible NULL, so nothing will be printed automatically, including the last expression in the loop, e.g.,

    library(ggplot2)
    for (i in 1:10) {
      ggplot(mpg, aes(cty, hwy)) + geom_point()
    }
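    As a quick illustration of the fix (my own addition, not from the original post), explicitly calling print() inside the loop makes the ten plots appear:

    library(ggplot2)
    for (i in 1:10) {
      # print() invokes the print.ggplot method, which draws the plot as a side-effect
      print(ggplot(mpg, aes(cty, hwy)) + geom_point())
    }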

    However, base R graphics is a different animal. All base R graphics functions will create plots whether or not they are called in top-level R expressions. For example, this will create 10 plots:

    for (i in 1:10) {
      plot(rnorm(100))
    }

    WAT?

    WAT!

    Lastly, there is a function withVisible() that can tell you whether an expression returns a visible value or not, e.g.,

    withVisible(1 + 1)
    # $value
    # [1] 2
    #
    # $visible
    # [1] TRUE

    withVisible({x = 1 + 1})  # assignment in {}
    # $value
    # [1] 2
    #
    # $visible
    # [1] FALSE

    I hope you will remember the “ghost printer” the next time you run any R code in your R console, R Markdown, or any knitr documents.





    Non gaussian time-series, let’s handle it with score driven models!

    Wed, 06/07/2017 - 22:54

    (This article was first published on R – insightR, and kindly contributed to R-bloggers)

    By Henrique Helfer

    Motivation

    Until very recently, only a very limited class of feasible non-Gaussian time series models was available. For example, one could use extensions of state space models to non-Gaussian environments (see, for example, Durbin and Koopman (2012)), but extensive Monte Carlo simulation is required to numerically evaluate the conditional densities that define the estimation process of such models.

    The high technicality involved in implementing these algorithms and their accompanying computational cost have not helped their widespread use by practitioners. On the other hand, different attempts to extend ARMA-type models with conditional non-Gaussian distributions have been more successful: for example, the use of GARCH-type models to deal with heavy-tailed distributions in finance (Engle and Bollerslev (1986)), the Autoregressive Conditional Duration (ACD) model of Engle and Russell (1998) to tackle asymmetric distributions of time durations, and the Poisson count models of Davis et al (2003) for the modelling of discrete events in time. But, so far, these extensions have lacked a unifying framework that would allow the specification, estimation and forecasting of a model based on an arbitrary non-Gaussian distribution. The recently proposed Generalized Autoregressive Score (GAS) models of Creal et al (2008, 2013), also known as dynamic conditional score (DCS) models (Harvey (2013)), offer such a unifying framework to derive and estimate time series models with any conditional non-Gaussian distribution, either discrete or continuous, univariate or multivariate. More specifically, in GAS models, conditional on past observations, a proper probability model, possibly non-Gaussian, is chosen for the response variable at time t. Then, by construction, time-varying parameters can be accommodated according to an updating mechanism that uses the score as its driving force. The use of the score for updating time-varying parameters is intuitive given that it defines the steepest-ascent direction for improving the model’s local fit in terms of the likelihood or density at time t, given the current parameter position. In such an updating mechanism, information from the whole density is used to track the evolution of the time-varying parameters.

    Of course, in this post I will briefly explain the estimation framework of such models for our community, however I deeply encourage you our fellow “insighteR” to pay a visit to gasmodel.com, where you can find a whole section devoted to GAS models papers and see for yourself the diversity of applications.

    Score driven models

    First of all, one should choose a specific distribution whose support accommodates the range of values assumed by the time series of interest y_t:

    y_t ~ p(y_t | f_t, F_t; θ),

    where f_t is the time-varying parameter vector, θ is the fixed parameter vector that will be estimated by maximum likelihood, and F_t collects all relevant information up to time t.

    Next, to fully specify a GAS model, one has to choose which parameters of the distribution will evolve in time and which will remain fixed. The time-varying parameters then follow a GAS(p, q) updating mechanism given by:

    f_{t+1} = ω + Σ_{i=1}^{p} A_i s_{t-i+1} + Σ_{j=1}^{q} B_j f_{t-j+1},

    where the intercept ω and the coefficient matrices A_i and B_j are part of the fixed parameter vector θ.

    To complete the description of the updating mechanism of GAS models it is necessary to define the score, given by

    s_t = S_t ∇_t,   with   ∇_t = ∂ log p(y_t | f_t, F_t; θ) / ∂ f_t,

    where p(y_t | f_t, F_t; θ) is the observation density function and S_t is a scaling matrix with appropriate dimensions. Different choices of S_t result in different GAS models: S_t can be the inverse of the Fisher information matrix, the pseudo-inverse of the Fisher information matrix, or the identity matrix.

    Parametrization

    If we assume, for instance, the intensity λ_t of a Poisson density, with λ_t > 0, it is natural to choose a positive mapping for the time-varying parameter. In that case the previous equations should also be re-parametrized with respect to λ_t: considering a monotonically increasing mapping function h(·), we set λ_t = h(f_t). Taking h(·) = exp(·), the Poisson GAS specification updates f_t = log(λ_t) through the recursion shown below.
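    For reference, here is a LaTeX rendering of the resulting Poisson GAS(1,1) recursion as I read it off the simulation code in the next section (the symbols ω, A and B correspond to the R objects w, A and B); it is a reconstruction, not a formula copied from the original post:

    % Poisson GAS(1,1) recursion implied by the R code below
    \begin{aligned}
    y_t \mid \mathcal{F}_{t-1} &\sim \mathrm{Poisson}(\lambda_t), \qquad \lambda_t = \exp(f_t),\\
    s_t &= \bigl(y_t - \exp(f_t)\bigr)\,\exp(f_t),\\
    f_{t+1} &= \omega + A\, s_t + B\, f_t.
    \end{aligned}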

    Application

    To elucidate a potential application, in this section we will generate and model a count time series using the GAS framework. More specifically, we will adopt a Poisson density with its intensity parameter updated through a GAS mechanism, with a fixed choice of scaling matrix. Following Table 1 from Creal et al (2008), the score is given by s_t = (y_t - exp(f_t)) * exp(f_t), which is the expression used in the code below.

    First of all, let’s create a function which simulates a count time series from a Poisson model with its intensity parameter following a GAS mechanism.

     

    ##################################
    #### GENERATING DGP
    ##################################
    A = 0.101
    B = -0.395
    w = 0.183

    GAS_POISSON_GENERATE = function (w, A, B, size){
      y = f = s = NULL
      ###############################################
      # Initial conditions for the recursion:
      ###############################################
      t = 1
      f[1] = 1
      y[1] = rpois(1, f[1])
      ###############################################
      # GAS mechanism:
      ###############################################
      for (t in 1:size){
        s[t] = (y[t] - exp(f[t])) * (exp(f[t]))
        f[t+1] = w + A * s[t] + B * f[t]
        y[t+1] = rpois(1, exp(f[t+1]))
      }
      return(list("y" = y, "f" = f))
    }

    Next, one can simulate a count time series using the previous function. For instance, let us simulate a time series of size 1000 with w = 0.183, A = 0.101 and B = -0.395 (the values set above).

    set.seed(201930)
    serie.sim = GAS_POISSON_GENERATE(w, A, B, 1000)
    par(mfrow = c(2, 1))
    ts.plot(serie.sim$y, ylab = "y")
    ts.plot(exp(serie.sim$f), ylab = "lambda")

    To estimate the values of θ = (w, A, B), we will use maximum likelihood as described in the original paper of Creal et al. (2013). To do this in R, we need to create a function that returns the quantity to be maximized, i.e., the log-likelihood.

    ##################################
    #### ESTIMATING ML
    ##################################
    GAS_POISSON = function (q, y = y){
      A = q[1]
      B = q[2]
      w = q[3]
      #####
      n = length(y)
      s = NULL
      f = NULL
      #####
      f[1] = 0
      for (t in 1:n){
        s[t] = (y[t] - exp(f[t])) * (exp(f[t]))
        f[t+1] = w + A * s[t] + B * f[t]
      }
      # Here we have the loglikelihood.
      sum = 0
      for (t in 2:length(y)){
        sum = sum + dpois(y[t], lambda = exp(f[t]), log = TRUE)
      }
      return(sum/n)
    }

    With the likelihood function in hand, we can optimize the parameters, using as initial conditions draws from a uniform distribution between 0 and 0.1.

    f = NULL; s = NULL
    y = serie.sim$y
    set.seed(8456)
    result = optim(runif(3, 0, 0.1), y = y, GAS_POISSON, method = "BFGS",
                   hessian = TRUE, control = list(fnscale = -1))
    param = result$par
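    One small check of my own (not in the original post): before interpreting the estimates it is worth confirming that optim() actually converged, which it signals with a convergence code of 0.

    # optim() sets convergence to 0 when the optimizer terminated successfully
    stopifnot(result$convergence == 0)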

    The standard errors can be calculated using the inverse of the Hessian evaluated at the optimum point: under some mild conditions, the maximum likelihood estimator of θ is consistent and converges in distribution to a normal, sqrt(T)(θ̂ - θ0) →d N(0, I(θ0)^(-1)), whose covariance we estimate with the inverse of the (negative) Hessian.

     

    hessian = -solve(as.matrix((length(y)) * result$hessian))
    inv.hessian = hessian
    stderrors = array(0, length(param))
    for (t in 1:length(param)){ stderrors[t] = sqrt(hessian[t, t]) }
    estim = cbind(param, stderrors)
    estim

    ##            param  stderrors
    ## [1,]  0.07543154 0.01999293
    ## [2,] -0.56880833 0.11422324
    ## [3,]  0.20280978 0.05132407

    Regarding forecasting, similarly to GARCH models, the k-steps-ahead distribution conditional on observations up to time T is only available analytically for k = 1, when it coincides with the chosen probability model. However, for k > 1 it has to be evaluated using Monte Carlo simulation.

    But first, let us create a function to calculate the series of time-varying parameters implied by our estimated parameters.

    GAS_POISSON_CALCULATE = function (par, y){
      A = par[1]; B = par[2]; w = par[3]
      f = s = NULL
      f[1] = 0
      n = length(y)
      #####
      for (t in 1:n){
        s[t] = (y[t] - exp(f[t])) * (exp(f[t]))
        f[t+1] = w + A * s[t] + B * f[t]
      }
      #####
      return(list("y" = y, "f" = f))
    }

    # Here we save the time varying parameter series
    model.values = GAS_POISSON_CALCULATE(result$par, y)

    Now we can use the last point of the series to obtain an “initial condition” for the forecasting simulation. Let us create a function to calculate our predictions with confidence intervals.

    GAS_POISSON_FORECATING = function(f.mod, steps.ahead, par){
      f = f.mod[length(f.mod)]
      ##################################################
      n.sim = 2000                                # Number of Monte Carlo simulations
      dens.pred = matrix(NA, steps.ahead, n.sim)  # Simulated values: steps ahead x MC simulations
      f.prev = NULL                               # Simulated series of time-varying parameter f for each MC run
      s.prev = NULL                               # Simulated series of the score s
      # Estimated parameters from theta
      A = par[1]; B = par[2]; w = par[3]
      ##################################################
      for(j in 1:n.sim){
        # f.prev[1] = f[length(y)+1]
        f.prev[1] = f
        for (t in 1:steps.ahead){
          dens.pred[t, j] = rpois(1, lambda = exp(f.prev[t]))  # generate a random Poisson value with GAS-modelled intensity
          s.prev[t] = (dens.pred[t, j] - exp(f.prev[t])) * (exp(f.prev[t]))  # calculate the score
          f.prev[t+1] = w + A * s.prev[t] + B * f.prev[t]      # update lambda
        }
        f.prev = NULL
        s.prev = NULL
      }
      ##################################################
      # Here we calculate the expected value of the predictive density and the confidence intervals
      forecasting = NULL
      CI.sup = NULL
      CI.inf = NULL
      for(i in 1:steps.ahead){
        forecasting[i] = mean(dens.pred[i, ])
        CI.sup[i] = quantile(dens.pred[i, ], probs = 0.975)
        CI.inf[i] = quantile(dens.pred[i, ], probs = 0.025)
      }
      return(list("forecasting" = forecasting, "CI.sup" = CI.sup, "CI.inf" = CI.inf))
    }

    With the forecasting function defined, one only needs to supply as inputs the series of time-varying parameters, the number of steps ahead, and the estimated parameters.

    forecast = GAS_POISSON_FORECATING(model.values$f, 5, result$par)
    final.pred = cbind(forecast$forecasting, forecast$CI.sup, forecast$CI.inf)
    colnames(final.pred) = c("Mean", "UB", "LB")
    print(final.pred)

    ##        Mean UB LB
    ## [1,] 1.0320  3  0
    ## [2,] 1.2320  4  0
    ## [3,] 1.1270  4  0
    ## [4,] 1.1615  4  0
    ## [5,] 1.1125  3  0

    References

    Creal, D., Koopman, S. J., & Lucas, A. (2008). A general framework for observation driven time-varying parameter models, Tinbergen Institute,Tech. Rep.

    Creal, D., Koopman, S. J., & Lucas, A. (2013). Generalized autoregressive score models with applications. Journal of Applied Econometrics, 28(5), 777-795.

    Davis, R. A., Dunsmuir, W. T., & Streett, S. B. (2003). Observation-driven models for Poisson counts. Biometrika, 777-790.

    Durbin, J., & Koopman, S. J. (2012). Time series analysis by state space methods (Vol. 38). OUP Oxford.

    Engle, R. F., & Bollerslev, T. (1986). Modelling the persistence of conditional variances. Econometric reviews, 5(1), 1-50.

    Engle, R. F., & Russell, J. R. (1998). Autoregressive conditional duration: a new model for irregularly spaced transaction data. Econometrica, 1127-1162.

    Harvey, A. C. (2013). Dynamic models for volatility and heavy tails: with applications to financial and economic time series (Vol. 52). Cambridge University Press.




    How to create dot-density maps in R

    Wed, 06/07/2017 - 22:17

    (This article was first published on Revolutions, and kindly contributed to R-bloggers)

    Choropleths are a common approach to visualizing data on geographic maps. But choropleths — by design or necessity — aggregate individual data points into a single geographic region (like a country or census tract), which is all shaded a single colour. This can introduce interpretability issues (are we seeing changes in the variable of interest, or just population density?) and can fail to express the richness of the underlying data.

    For an alternative approach, take a look at the recent Culture of Insight blog post which provides a tutorial on creating dot-density maps in R. The chart below is based on UK Census data. Each point represents 10 London residents, with the colour representing one of five ethnic categories. Now, the UK census only reports ethnic ratios on a borough-by-borough basis, so the approach here is to simulate the individual resident data (which is not available) by randomly distributing points across the borough following the reported distribution. In a way, this is suggesting a level of precision which isn't available in the source data, but it does provide a visualization of London's ethnic diversity that isn't confounded with the underlying population distribution. 
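    To sketch how that simulation step might look (this is my own illustration with a hypothetical sf object called boroughs holding one row per borough and one resident-count column per ethnic group, not code from the linked tutorial), sf::st_sample() can scatter one random point per 10 residents inside each borough polygon:

    library(sf)

    # boroughs: hypothetical sf polygons with columns such as white, asian, black (resident counts)
    dots_for_group <- function(boroughs, column, per_dot = 10) {
      n <- round(boroughs[[column]] / per_dot)   # number of dots to draw in each borough
      pts <- sf::st_sample(boroughs, size = n)   # random points inside each polygon
      sf::st_sf(group = column, geometry = pts)  # tag the points with their ethnic group
    }

    # stack one layer of dots per group into a single sf object for plotting
    groups <- c("white", "asian", "black")
    dots <- do.call(rbind, lapply(groups, dots_for_group, boroughs = boroughs))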

    Follow the link below to the detailed blog post, which includes R code (in both base and ggplot2 graphics) for creating density dot-charts like these. Also be sure to check out the zoomable version of the chart at the top of the page, which used Microsoft's Deep Zoom Composer in conjunction with OpenSeadragon to provide the zooming capability.

    Culture of Insight: Building Dot Density Maps with UK Census Data in R




    More on safe substitution in R

    Wed, 06/07/2017 - 19:30

    (This article was first published on R – Win-Vector Blog, and kindly contributed to R-bloggers)

    Let’s worry a bit about substitution in R. Substitution is very powerful, which means it can be both used and mis-used. However, that does not mean every use is unsafe or a mistake.

    From Advanced R : substitute:



    We can confirm the above code performs no substitution:

    a <- 1
    b <- 2
    substitute(a + b + z)
    ## a + b + z

    And it appears the effect is that substitute is designed to not take values from the global environment. So, as we see below, it isn’t so much what environment we are running in that changes substitute’s behavior, it is what environment the values are bound to that changes things.

    (function() {
      a <- 1
      substitute(a + b + z, environment())
    })()
    ## 1 + b + z

    We can in fact find many simple variations of substitute that work conveniently.

    substitute(a + b + z, list(a = 1, b = 2))
    ## 1 + 2 + z

    substitute(a + b + z, as.list(environment()))
    ## 1 + 2 + z

    Often R‘s documentation is a bit terse (or even incomplete) and functions (confusingly) change behavior based on type of arguments and context. I say: always try a few variations to see if some simple alteration can make "base-R" work for you before giving up and delegating everything to an add-on package.

    However, we found that we could not, in fact, use substitute() to implement the wrapr::let() effects (that is, re-mapping non-standard interfaces to parametric interfaces). There were some avoidable difficulties regarding quoting and un-quoting of expressions. But the killing issue was: substitute() apparently does not re-map left-hand sides:

    # function that prints all of its arguments (including bindings)
    f <- function(...) {
      args <- match.call()
      print(paste("f() call is:", capture.output(str(args))))
    }

    # set up some global variables
    X <- 2
    B <- 5

    # try it
    f(X = 7, Y = X)
    ## [1] "f() call is: language f(X = 7, Y = X)"

    # use substitute to capture an expression
    captured <- substitute(f(X = 7, Y = X))

    # print the captured expression
    print(captured)
    ## f(X = 7, Y = X)

    # evaluate the captured expression
    eval(captured)
    ## [1] "f() call is: language f(X = 7, Y = X)"

    # notice above that by the time we get into the function
    # the function arguments have taken their value, first
    # from explicit argument assignment (X=7) and then from
    # the calling environment (Y=X goes to 2).

    # now try to use substitute() to re-map values
    xform1 <- substitute(captured, list(X = as.name('B')))

    # doesn't look good in printing
    print(xform1)
    ## captured

    # and substitutions did not happen as the variables we
    # are trying to alter are not free in the word "captured"
    # (they are in the expression the name captured is referring to)
    eval(xform1)
    ## f(X = 7, Y = X)

    # can almost fix that by calling substitute on the value
    # of captured (not the word "captured") with do.call()
    subs <- do.call(substitute, list(captured, list(X = as.name('B'))))
    print(subs)
    ## f(X = 7, Y = B)
    eval(subs)
    ## [1] "f() call is: language f(X = 7, Y = B)"

    # notice however, only the right-hand side was re-mapped:
    # we saw "f(X = 7, Y = B)", not "f(B = 7, Y = B)"
    # for some packages (such as dplyr) re-mapping
    # left-hand sides is important
    # this is why wrapr::let() exists
    wrapr::let(
      c(X = 'B'),
      f(X = 7, Y = X)
    )
    ## [1] "f() call is: language f(B = 7, Y = B)"

    Re-mapping left hand sides is an important capability when trying to program over dplyr:

    suppressPackageStartupMessages(library("dplyr"))
    d <- data.frame(x = 1:3)
    mapping <- c(OLDCOL = 'x', NEWCOL = 'y')
    wrapr::let(
      mapping,
      d %>% mutate(NEWCOL = OLDCOL*OLDCOL)
    )
    ##   x y
    ## 1 1 1
    ## 2 2 4
    ## 3 3 9

    wrapr::let() is based on string substitution. This is considered risky. Consider help(substitute, package='base')

    Note

    substitute works on a purely lexical basis. There is no guarantee that the resulting expression makes any sense.

    And that is why wrapr::let() takes a large number of precautions and vets user input before performing any substitution.

    The idea is: wrapr::let() is more specialized than substitute() so in addition to attempting extra effects (re-mapping left hand sides) it can introduce a lot of checks to ensure safe invariants.

    And that is a bit of my point: when moving to a package, look for specificity and safety in addition to "extra power." That is how wrapr::let() is designed and why wrapr::let() is a safe and effective package to add to your production work-flows.




    quantmod 0.4-9 on CRAN

    Wed, 06/07/2017 - 19:25

    (This article was first published on FOSS Trading, and kindly contributed to R-bloggers)

    A new release of quantmod is now on CRAN! The only change was to address changes to Yahoo! Finance and their effects on getSymbols.yahoo().  GitHub issue #157 contains some details about the fix implementation.

    Unfortunately, the URL wasn’t the only thing that changed.  The actual data available for download changed as well.

    The most noticeable difference is that the adjusted close column is no longer dividend-adjusted (i.e. it’s only split-adjusted).  Also, only the close price is unadjusted; the open, high, and low are split-adjusted.

    There also appear to be issues with the adjusted prices in some instruments. For example, users reported issues with split data for XLF and SPXL in GitHub issue #160. For XLF, there is a split and a dividend on 2016-09-16, even on the Yahoo! Finance historical price page for XLF. As far as I can tell, there was only a special dividend. The problem with SPXL is that the adjusted close price isn’t adjusted for the 4/1 split on 2017-05-01, which is also reflected on the Yahoo! Finance historical prices page for SPXL.

    Another change is that the downloaded data may contain rows where all the values are “null”.  These appear on the website as “0”.  This is a major issue for some instruments.  Take XLU for example; 188 of the 624 days of data are missing between 2014-12-04 and 2017-05-26 (ouch!).  You can see this is even true on the Yahoo! Finance historical price page for XLU.
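    If those all-“null” rows get in your way, one workaround I use (not something prescribed by the package) is simply to drop the incomplete rows from the returned xts object:

    library(quantmod)

    xlu <- getSymbols("XLU", src = "yahoo", auto.assign = FALSE)
    xlu <- na.omit(xlu)  # drop the rows that came through as "null" in every column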

    If these changes have made you look for a new data provider, see my post: Yahoo! Finance Alternatives.




    Data wrangling : I/O (Part-2)

    Wed, 06/07/2017 - 18:00

    (This article was first published on R-exercises, and kindly contributed to R-bloggers)


    Data wrangling is a task of great importance in data analysis. Data wrangling is the process of importing, cleaning and transforming raw data into actionable information for analysis. It is a time-consuming process, estimated to take about 60-80% of an analyst’s time. In this series we will go through this process. It will be a brief series with the goal of sharpening the reader’s data wrangling skills. This part of the series covers importing data from the web. In many cases, downloading data before processing it can be time consuming, so being able to import data straight from the web is a ‘nice-to-have’ skill. Moreover, data isn’t always saved in structured files; it also lives on the web in the form of text and tables, and in this set of exercises we will go through the latter case. In case you want me to go through the former case as well, please let me know in the comment section.

Before proceeding, it might be helpful to look over the help pages for getURL, fromJSON, ldply, xmlToList, read_html, html_nodes, html_table, readHTMLTable, and htmltab.

Moreover, please install (if needed) and load the following libraries.
    install.packages("RCurl")
    library(RCurl)
    install.packages("rjson")
    library(rjson)
    install.packages("XML")
    library(XML)
    install.packages("plyr")
    library(plyr)
    install.packages("rvest")
    library(rvest)
    install.packages("htmltab")
    library(htmltab)

    Answers to the exercises are available here.

    If you obtained a different (correct) answer than those listed on the solutions page, please feel free to post your answer as a comment on that page.

    Exercise 1

    Retrieve the source of the web page “https://raw.githubusercontent.com/VasTsak/r-exercises-dw/master/part-1/data.csv” and assign it to the object “url”

    Exercise 2

    Read the csv file and assign it to the “csv_file” object.
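If you want to check your work, here is a minimal sketch of one way to approach Exercises 1 and 2 (not the official solution linked above):

library(RCurl)

# Exercise 1: fetch the raw page source as a character string
url <- getURL("https://raw.githubusercontent.com/VasTsak/r-exercises-dw/master/part-1/data.csv")

# Exercise 2: parse that string as CSV
csv_file <- read.csv(text = url)
head(csv_file)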

    Exercise 3

    Do the same as exercise 1, but with the url: “https://raw.githubusercontent.com/VasTsak/r-exercises-dw/master/part-2/data.txt” and then assign it to the “txt_file” object.
Note: it is a txt file, so you should use the appropriate function to import it.
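One possible approach is sketched below (not the official solution; the intermediate name txt_raw and the header = TRUE argument are assumptions about the file's layout):

library(RCurl)

txt_raw <- getURL("https://raw.githubusercontent.com/VasTsak/r-exercises-dw/master/part-2/data.txt")
txt_file <- read.table(text = txt_raw, header = TRUE)  # parse the downloaded text as a table
head(txt_file)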

    Exercise 4

    Do the same as exercise 1, but with the url: “https://raw.githubusercontent.com/VasTsak/r-exercises-dw/master/part-2/data.json” and then assign it to the “json_file” object.
Note: it is a json file, so you should use the appropriate function to import it.
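A sketch of one possible solution, using fromJSON() from the rjson package loaded above (again, not the official solution):

library(RCurl)
library(rjson)

json_raw <- getURL("https://raw.githubusercontent.com/VasTsak/r-exercises-dw/master/part-2/data.json")
json_file <- fromJSON(json_raw)  # parse the JSON string into an R list
str(json_file)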

    Learn more about Data Pre-Processing in the online course R Data Pre-Processing & Data Management – Shape your Data!. In this course you will learn how to:

• import data into R in several ways, while also being able to identify a suitable import tool
    • use SQL code within R
    • And much more

    Exercise 5

    Do the same as exercise 1, but with the url: “https://raw.githubusercontent.com/VasTsak/r-exercises-dw/master/part-2/data.xml” and then assign it to the “xml_file” object.
Note: it is an xml file, so you should use the appropriate function to import it.
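A sketch of one possible solution with the XML package (not the official solution):

library(RCurl)
library(XML)

xml_raw <- getURL("https://raw.githubusercontent.com/VasTsak/r-exercises-dw/master/part-2/data.xml")
xml_doc <- xmlParse(xml_raw, asText = TRUE)  # parse the XML text into a document
xml_file <- xmlToList(xml_doc)               # convert the document to a nested list
str(xml_file)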

    Exercise 6

    We will go through web scraping now. Read the html file “http://www.worldatlas.com/articles/largest-cities-in-europe-by-population.html” and assign it to the object “url”.
    hint: consider using read_html

    Exercise 7

    Select the “table” nodes from the html document you retrieved before.
    hint: consider using html_nodes

    Exercise 8

Convert the nodes you retrieved in Exercise 7 to an actionable list for processing.
    hint: consider using html_table
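Here is a sketch of one way to chain Exercises 6 through 8 together with rvest (not the official solution):

library(rvest)

# Exercise 6: read the page
url <- read_html("http://www.worldatlas.com/articles/largest-cities-in-europe-by-population.html")

# Exercise 7: select the <table> nodes
tables <- html_nodes(url, "table")

# Exercise 8: convert the nodes to a list of data frames
city_tables <- html_table(tables, fill = TRUE)
str(city_tables)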

    Exercise 9

Let's move on to a faster and more straightforward function: retrieve the HTML document as you did in Exercise 6 and turn it into an actionable list using the function readHTMLTable.
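A sketch of one possible approach (not the official solution); readHTMLTable() comes from the XML package loaded above:

library(RCurl)
library(XML)

page <- getURL("http://www.worldatlas.com/articles/largest-cities-in-europe-by-population.html")
html_tables <- readHTMLTable(page)  # returns a list of data frames, one per <table>
str(html_tables)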

    Exercise 10

This may be a bit tricky, but give it a try: retrieve the HTML document as you did in Exercise 6 and turn it into an actionable data frame using the function htmltab.
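A sketch of one possible approach (not the official solution); which = 1, i.e. taking the first table on the page, is an assumption about the page layout:

library(htmltab)

city_df <- htmltab(doc = "http://www.worldatlas.com/articles/largest-cities-in-europe-by-population.html",
                   which = 1)  # extract the chosen table directly as a data frame
head(city_df)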

    Related exercise sets:
    1. Data wrangling : I/O (Part-1)
    2. Building Shiny App exercises part 4
    3. Web Scraping Exercises
    4. Explore all our (>1000) R exercises
    5. Find an R course using our R Course Finder directory

To leave a comment for the author, please follow the link and comment on their blog: R-exercises.


    Get the best from ggplotly

    Wed, 06/07/2017 - 15:16

    (This article was first published on Blog – The R graph Gallery, and kindly contributed to R-bloggers)

Plotly is an R library for making interactive charts in a minute. This blog post shows how to get the best from ggplotly, the plotly function that turns any ggplot2 chart interactive. Special attention is given to features that are not present in ggplot2 (like hovering) and to features that ggplotly does not translate well (like legend position).

Let's start with a usual ggplot2 chart. Note that the ggplot2 section of the R graph gallery contains dozens of examples with reproducible code, so you will most likely find the graphic you need to visualize your data.

# Libraries
library(tidyverse)
library(plotly)

# Scatterplot
p = ggplot(iris, aes(x=Sepal.Length, y=Sepal.Width, color=Species, shape=Species)) +
  geom_point(size=6, alpha=0.6)
p

Then, we can apply the magic of plotly in just one line of code! We get an interactive chart on which we can zoom, hover, play with the axes, export, and click on the legend to make groups appear or disappear. This is great and works with any ggplot2 graphic.

    ggplotly(p)

Since ggplot2 creates static charts, it does not offer any option to customize the hover text. This can be done as follows: (1) create a text vector with the text you want to show, (2) build the interactive chart with plotly_build(), and (3) pass the text to the interactive version with style().

# Build the custom hover text, one entry per point
mytext = paste("Sepal Length = ", iris$Sepal.Length, "\n",
               "Sepal Width = ", iris$Sepal.Width, "\n",
               "Row Number: ", rownames(iris), sep = "")

# Build the interactive chart, then attach the text to all three traces
pp = plotly_build(p)
style(pp, text = mytext, hoverinfo = "text", traces = c(1, 2, 3))

    Unfortunately, some ggplot2 parameters are sometimes misunderstood by ggplotly. For example, let’s place the ggplot2 legend inside the plot:

p_changed = ggplot(iris, aes(x=Sepal.Length, y=Sepal.Width, color=Species, shape=Species)) +
  geom_point(size=6, alpha=0.6) +
  theme(legend.position = c(.83, .9))
p_changed

Using a basic ggplotly() call will not change the legend position. To do so, we need to pass the parameters using the native plotly syntax, as follows. Note that all plotly arguments can be found here.

pp_changed = plotly_build(p_changed)
style(pp_changed) %>%
  layout(legend = list(x = 0.01, y = 0.95))

    Note that the R graph gallery offers a lot of other examples of interactive R charts. You can follow the gallery on Twitter or on Facebook to be notified of recent updates.



To leave a comment for the author, please follow the link and comment on their blog: Blog – The R graph Gallery.


    Security: the dangers of copying and pasting R code

    Wed, 06/07/2017 - 12:11

    (This article was first published on R – Why?, and kindly contributed to R-bloggers)

When we stumble across a code snippet online, most of us blindly copy and paste it into the R console. I suspect almost everyone does this. After all, what's the harm? Consider this simple piece of R code that performs a simple linear regression

# Generate data
x = rnorm(10)
y = rnorm(10)
message("All your base are belong to us.")

# Simple linear regression
m = lm(y ~ x)

    Now highlight the above piece of R code and copy and paste it into your console; look carefully at what you’ve pasted. A new line has magically appeared.

# Generate data
x = rnorm(10)
y = rnorm(10)
message("All your base are belong to us.")

# Simple linear regression
m = lm(y ~ x)

Due to some sneaky CSS magic, I was able to hide the message() statement. If I were evil, I could have changed it to a system(), source(), or any other command.

The CSS simply sets the text of the message() call to the background color, changes the font size, and makes it un-selectable (see this post for details).

    So remember, be careful with your copy and pasting!


To leave a comment for the author, please follow the link and comment on their blog: R – Why?.

