I'm in the process of trying out a dplyr-based workflow (rather than using mostly data.table, which I'm used to), and I've come across a problem that I can't find an equivalent dplyr solution to. I commonly run into the scenario where I need to conditionally update/replace several columns based on a single condition. Here's some example code, with my data.table solution:
library(data.table)

# Create some sample data
set.seed(1)
dt <- data.table(site = sample(1:6, 50, replace = T),
                 space = sample(1:4, 50, replace = T),
                 measure = sample(c('cfl', 'led', 'linear', 'exit'), 50,
                                  replace = T),
                 qty = round(runif(50) * 30),
                 qty.exit = 0,
                 delta.watts = sample(10.5:100.5, 50, replace = T),
                 cf = runif(50))

# Replace the values of several columns for rows where measure is "exit"
dt <- dt[measure == 'exit',
         `:=`(qty.exit = qty,
              cf = 0,
              delta.watts = 13)]
Is there a simple dplyr solution to this same problem? I'd like to avoid using ifelse because I don't want to have to type the condition multiple times - this is a simplified example, but there are sometimes many assignments based on a single condition.
Thanks in advance for the help!
These solutions (1) maintain the pipeline, (2) do not overwrite the input and (3) only require that the condition be specified once:

1a) mutate_cond Create a simple function for data frames or data tables that can be incorporated into pipelines. This function is like mutate but only acts on the rows satisfying the condition.

1b) mutate_last This is an alternative function for data frames or data tables which again is like mutate, but it is only used within group_by (as in the example below) and only operates on the last group rather than every group. Note that TRUE > FALSE, so if group_by specifies a condition then mutate_last will only operate on the rows satisfying that condition.

2) factor out condition Factor out the condition by making it an extra column which is later removed. Then use ifelse, replace or arithmetic with logicals, as illustrated. This also works for data tables.

3) sqldf We could use an SQL update via the sqldf package in the pipeline for data frames (but not data tables unless we convert them -- this may represent a bug in dplyr; see dplyr issue 1579). It may seem that we are undesirably modifying the input in this code due to the existence of the update, but in fact the update acts on a copy of the input in the temporarily generated database, not on the actual input.

Note 1: We used this as DF
Note 2: The problem of how to easily specify updating a subset of rows is also discussed in dplyr issues 134, 631, 1518 and 1573, with 631 being the main thread and 1573 being a review of the answers here.
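A minimal sketch of what such a mutate_cond could look like (this subset-assign implementation is an assumption based on the description in 1a, not necessarily the original answer's exact code), applied to data in the shape of the question:

```r
library(dplyr)

# mutate_cond: like mutate, but only acts on rows satisfying the condition.
mutate_cond <- function(.data, condition, ..., envir = parent.frame()) {
  condition <- eval(substitute(condition), .data, envir)
  .data[condition, ] <- .data[condition, ] %>% mutate(...)
  .data
}

# Small illustrative data frame in the shape of the question's data:
df <- data.frame(measure = c('exit', 'led', 'exit'),
                 qty = c(5, 7, 9), qty.exit = 0,
                 cf = c(0.5, 0.5, 0.5), delta.watts = c(20.5, 30.5, 40.5))

# The condition is written once; all three assignments apply to those rows.
df <- df %>% mutate_cond(measure == 'exit',
                         qty.exit = qty, cf = 0, delta.watts = 13)
```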
You can do this with magrittr's two-way pipe %<>%. This reduces the amount of typing, but is still much slower than data.table.
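For instance (a small sketch assuming a data frame shaped like the question's data; df is an illustrative name):

```r
library(dplyr)
library(magrittr)

df <- data.frame(measure = c('exit', 'led'), qty = c(5, 7),
                 qty.exit = 0, cf = c(0.5, 0.5),
                 delta.watts = c(20.5, 30.5))

# %<>% pipes the row subset into mutate() and assigns the result back into
# those same rows, so the condition only has to be written once.
df[df$measure == 'exit', ] %<>% mutate(qty.exit = qty, cf = 0, delta.watts = 13)
```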
I think this answer has not been mentioned before. It runs almost as fast as the 'default' data.table solution.

Use base::replace().

replace recycles the replacement value, so when you want the values of column qty entered into column qty.exit, you have to subset qty as well... hence the qty[measure == 'exit'] in the first replacement.

Now, you will probably not want to retype measure == 'exit' all the time, so you can create an index vector containing that selection and use it in the functions above.

benchmarks
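A sketch of this replace()-based approach on data shaped like the question's, with the index vector factored out so the condition is typed once (df and i are illustrative names):

```r
library(dplyr)

df <- data.frame(measure = c('exit', 'led', 'exit'),
                 qty = c(5, 7, 9), qty.exit = 0,
                 cf = c(0.5, 0.5, 0.5), delta.watts = c(20.5, 30.5, 40.5))

i <- df$measure == 'exit'   # the condition, written once

# replace() recycles scalar replacements; qty must be subset with i so the
# right values line up with the rows being replaced.
df <- df %>% mutate(qty.exit    = replace(qty.exit, i, qty[i]),
                    cf          = replace(cf, i, 0),
                    delta.watts = replace(delta.watts, i, 13))
```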
You could split the dataset and do a regular mutate call on the TRUE part.

dplyr 0.8 features the function group_split, which splits by groups (and the groups can be defined directly in the call), so we'll use it here, but base::split works as well.

If row order matters, use tibble::rowid_to_column first, then dplyr::arrange on rowid, and select it out in the end.

data
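A sketch of this split-mutate-recombine idea using group_split plus purrr (the helper column is_exit and the row-id bookkeeping are assumptions, following the row-order note above):

```r
library(dplyr)
library(purrr)

df <- tibble(measure = c('exit', 'led', 'exit'),
             qty = c(5, 7, 9), qty.exit = 0,
             cf = c(0.5, 0.5, 0.5), delta.watts = c(20.5, 30.5, 40.5))

res <- df %>%
  tibble::rowid_to_column() %>%                  # remember original row order
  group_split(is_exit = measure == 'exit') %>%   # split on the condition
  map_dfr(~ if (.x$is_exit[1])                   # mutate only the TRUE part
              mutate(.x, qty.exit = qty, cf = 0, delta.watts = 13)
            else .x) %>%
  arrange(rowid) %>%                             # restore row order
  select(-rowid, -is_exit)
```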
I just stumbled across this and really like mutate_cond() by @G. Grothendieck, but thought it might come in handy to also handle new variables. So, the version below has two additions:

Unrelated: the second-last line is made a bit more dplyr-ish by using filter().

Three new lines at the beginning get the variable names for use in mutate(), and initialize any new variables in the data frame before mutate() occurs. New variables are initialized for the remainder of the data.frame using new_init, which is set to missing (NA) as a default.

Here are some examples using the iris data:
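A sketch of such an extended mutate_cond (reconstructed from the description above, not necessarily the answer's exact code; for simplicity this version applies a single new_init value to all new variables rather than a list):

```r
library(dplyr)

# Like mutate_cond, but variables assigned in ... that do not yet exist in
# .data are first created and filled with new_init (default NA) everywhere,
# then overwritten by mutate() on the rows matching the condition.
mutate_cond <- function(.data, condition, ..., new_init = NA,
                        envir = parent.frame()) {
  vars <- names(substitute(list(...)))[-1]     # names assigned in ...
  new_vars <- setdiff(vars, names(.data))
  for (v in new_vars) .data[[v]] <- new_init   # initialize new columns
  condition <- eval(substitute(condition), .data, envir)
  .data[condition, ] <- .data[condition, ] %>% mutate(...)
  .data
}

# Set Petal.Length to 88 and create a new column x where Species == "setosa";
# rows outside the condition get x = FALSE via new_init.
out <- iris %>%
  mutate_cond(Species == "setosa", Petal.Length = 88, x = TRUE,
              new_init = FALSE)
```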
Change Petal.Length to 88 where Species == "setosa". This will work in the original function as well as in this new version.

Same as above, but also create a new variable x (NA in rows not included in the condition). Not possible before.

Same as above, but rows not included in the condition for x are set to FALSE.

This example shows how new_init can be set to a list to initialize multiple new variables with different values. Here, two new variables are created, with excluded rows being initialized using different values (x initialised as FALSE, y as NA).

I don't actually see any changes to
dplyr that would make this much easier. case_when is great when there are multiple different conditions and outcomes for one column, but it doesn't help in this case, where you want to change multiple columns based on one condition. Similarly, recode saves typing if you are replacing multiple different values in one column, but doesn't help with doing so in multiple columns at once. Finally, mutate_at etc. only apply conditions to the column names, not to the rows in the dataframe. You could potentially write a function for mutate_at that would do it, but I can't figure out how you would make it behave differently for different columns.

That said, here is how I would approach it using nest from tidyr and map from purrr.
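A sketch of that nest/map approach on data shaped like the question's (note that unnesting regroups the rows by measure, so the original row order is not preserved):

```r
library(dplyr)
library(tidyr)
library(purrr)

df <- tibble(measure = c('exit', 'led', 'exit'),
             qty = c(5, 7, 9), qty.exit = 0,
             cf = c(0.5, 0.5, 0.5), delta.watts = c(20.5, 30.5, 40.5))

res <- df %>%
  nest(data = -measure) %>%        # one list-column row per measure value
  mutate(data = map2(data, measure,
                     ~ if (.y == 'exit')
                         mutate(.x, qty.exit = qty, cf = 0, delta.watts = 13)
                       else .x)) %>%
  unnest(data)
```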