dplyr tidyeval equivalent of underscore functions

Published 2019-05-14 19:57

Question:

Recent versions of dplyr deprecate the underscore versions of functions, such as filter_, in favour of tidy evaluation.

What is the expected replacement for the underscore forms under the new approach? And how do I write it so that R CMD check does not complain about undefined symbols (bare column names)?

library(dplyr)

df <- data_frame(id = rep(c("a","b"), 3), val = 1:6)
df %>% filter_(~id == "a")

# want to avoid this, because it references the column id as a bare name,
# which R CMD check reports as an undefined global variable
df %>% filter( id == "a" )

# option A
df %>% filter( UQ(rlang::sym("id")) == "a" )
# option B
df %>% filter( UQ(as.name("id")) == "a" )
# option C
df %>% filter( .data$id == "a" )

Is there a preferred or more concise form? Option C is the shortest, but it is slower on some of my real-world, larger datasets and more complex dplyr constructs:

microbenchmark(
sym = dsPClosest %>%
  group_by(!!sym(dateVarName), !!sym("depth")) %>%
  summarise(temperature = mean(!!sym("temperature"), na.rm = TRUE)
            , moisture = mean(!!sym("moisture"), na.rm = TRUE)) %>%
  ungroup()
,data = dsPClosest %>%
    group_by(!!sym(dateVarName), .data$depth ) %>%
    summarise(temperature = mean(.data$temperature , na.rm = TRUE)
              , moisture = mean(.data$moisture , na.rm = TRUE)) %>%
    ungroup()  
,times=10
)
#Unit: milliseconds
# expr        min         lq      mean     median        uq       max neval
#  sym   80.05512   84.97267  122.7513   94.79805  100.9679  392.1375    10
# data 4652.83104 4741.99165 5371.5448 5039.63307 5471.9261 7926.7648    10

There is another answer covering the replacement for mutate_, which uses even more complex syntax.
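For comparison, here is a minimal sketch (my own, not the code from that answer) of the same string-to-symbol pattern applied to mutate, using the df defined above; the parentheses around the unquoted symbol just make the precedence explicit:

# tidyeval replacement for the old mutate_(df, val2 = ~ val * 2)
df %>% mutate(val2 = (!!rlang::sym("val")) * 2)
# or, with the .data pronoun
df %>% mutate(val2 = .data$val * 2)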

Answer 1:

Based on your comment, I guess it would be:

df %>% filter(!!as.name("id") == "a")

rlang is unnecessary, as you can do this with !! and as.name instead of UQ and sym.

But maybe a better option is a scoped filter, which avoids quosure-related issues:

df %>% filter_at(vars("id"), all_vars(. == "a"))

In the code above, vars() determines which columns the filtering statement is applied to (in the help for filter_at, the filtering statement is called the "predicate"). In this case, vars("id") means the filtering statement is applied only to the id column. The filtering statement must be wrapped in either all_vars() or any_vars(), though with a single column the two are equivalent. all_vars(. == "a") means that every column selected by vars("id") must equal "a". Yes, it's a bit confusing; the sketch below shows how the two differ when more than one column is selected.
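A small illustration (my own sketch, using a throwaway data frame df2) of how all_vars() and any_vars() behave once vars() selects more than one column:

df2 <- data_frame(x = c("a", "a", "b"), y = c("a", "b", "b"))
# all_vars(): every selected column must equal "a"        -> row 1 only
df2 %>% filter_at(vars("x", "y"), all_vars(. == "a"))
# any_vars(): at least one selected column must equal "a" -> rows 1 and 2
df2 %>% filter_at(vars("x", "y"), any_vars(. == "a"))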

Timings for data similar to your example. The third entry uses group_by_at and summarise_at, the scoped versions of group_by and summarise:

set.seed(2)
df <- data_frame(group = sample(1:100,1e4*52,replace=TRUE), 
                 id = rep(c(letters,LETTERS), 1e4), 
                 val = sample(1:50,1e4*52,replace=TRUE))

microbenchmark(
quosure=df %>% group_by(!!as.name("group"), !!as.name("id")) %>% 
  summarise(val = mean(!!as.name("val"))),
data=df %>% group_by(.data$group, .data$id) %>% 
  summarise(val = mean(.data$val)),
scoped_group_by = df %>% group_by_at(vars("group","id")) %>% 
  summarise_at("val", mean), times=10)
Unit: milliseconds
            expr       min        lq      mean    median        uq       max neval cld
         quosure  59.29157  61.03928  64.39405  62.60126  67.93810  72.47615    10  a 
            data 391.22784 394.65636 419.24201 413.74683 425.11709 498.42660    10   b
 scoped_group_by  69.57573  71.21068  78.26388  76.67216  82.89914  91.45061    10  a

Original Answer

I think this is a case where you would pass the filter variable to a function as a bare name, then capture it with enquo and unquote it with !! (the equivalent of UQ). For example:

library(dplyr)

fnc <- function(data, filter_var, filter_value) {
  filter_var <- enquo(filter_var)  # capture the bare column name as a quosure
  data %>% filter(!!filter_var == filter_value)
}

fnc(df, id, "a")
     id   val
1     a     1
2     a     3
3     a     5
fnc(mtcars, carb, 3)
   mpg cyl  disp  hp drat   wt qsec vs am gear carb 
1 16.4   8 275.8 180 3.07 4.07 17.4  0  0    3    3 
2 17.3   8 275.8 180 3.07 3.73 17.6  0  0    3    3 
3 15.2   8 275.8 180 3.07 3.78 18.0  0  0    3    3 


Answer 2:

# option D: uses a formula instead of a string
# f_rhs() comes from rlang and extracts the right-hand side of the formula
df %>% filter( UQ(rlang::f_rhs(~id)) == "a" )

Still quite verbose, but avoids the double quotes.

In a microbenchmark this is on par with (or marginally faster than) option B, i.e. the as.name solution.
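The benchmark code is not shown in the answer; a minimal sketch of what such a comparison could look like on the small df from the question (using !! in place of the older UQ):

library(microbenchmark)
microbenchmark(
  option_B = df %>% filter(!!as.name("id") == "a"),
  option_D = df %>% filter(!!rlang::f_rhs(~id) == "a"),
  times = 100
)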



Answer 3:

# option F: using the pipe placeholder dot (.)
df %>% filter( .$id == "a" )

# slow in progtw's real-world problem:
microbenchmark(
  sym = dsPClosest %>%
    group_by(!!sym(dateVarName), !!sym("depth")) %>%
    summarise(temperature = mean(!!sym("temperature"), na.rm = TRUE)
              , moisture = mean(!!sym("moisture"), na.rm = TRUE)) %>%
    ungroup()
  ,dot = dsPClosest %>%
    group_by(!!sym(dateVarName), .$depth ) %>%
    summarise(temperature = mean(.$temperature , na.rm = TRUE)
              , moisture = mean(.$moisture , na.rm = TRUE)) %>%
    ungroup()  
  ,times=10
)
#Unit: milliseconds
# expr          min           lq         mean       median           uq         max neval
#  sym     75.37921     78.86365     90.72871     81.22674     90.77943    163.2081    10
#  dot 115452.88945 116260.32703 128314.44451 125162.46876 136578.09888 149193.9751    10

Similar to option C (.data$) but shorter. However, it showed poor performance in my real-world application.

Moreover, I did not find documentation on when this can be used.



Tags: r dplyr tidyeval