Removing negative values when variable is an atomic vector

Posted 2019-08-23 21:43

I have a large survey dataset (originally a Stata (.dta) file). I would like to use the code below to convert negative values in my dataset to NA. If a variable has more than 99% NAs, the code should drop it.

#mixed data
WVS <- data.frame(file)
dat <- WVS[, sapply(WVS, function(x) {class(x) == "numeric" | class(x) == "integer"})]

# NEGATIVES -> NA
foo <- function(dat, p){ 
  ind <- colSums(is.na(dat))/nrow(dat)
  dat[dat < 0] <- NA
  dat[, ind < p]
}
# process numeric part of the data separately
ii <- sapply(WVS, class) == "numeric"
WVS.num <- foo(as.matrix(WVS[, ii]), 0.99)
# then stick the two parts back together again
WVS <- data.frame(WVS[, !ii], WVS.num)

This did not work, however, as it appears that:

> is("S004")
[1] "character"           "vector"              "data.frameRowLabels" "SuperClassMethod"    "index"              
[6] "atomicVector"

str(WVS) gives:

$ S004     :Class 'labelled'  atomic [1:50] -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 ...
  .. ..- attr(*, "label")= chr "Set"
  .. ..- attr(*, "format.stata")= chr "%8.0g"
  .. ..- attr(*, "labels")= Named num [1:7] -5 -4 -3 -2 -1 1 2
  .. .. ..- attr(*, "names")= chr [1:7] "Missing; Unknown" "Not asked in survey" "Not applicable" "No answer" ...

How do I adapt my code to cope with this?

UPDATE:

I have altered the answer below and tried to make it work with a loop (because my dataset is too big for the solution below).

# Create a data frame with the same number of rows as the original dataset
WVSc <- data.frame(x = 1:341271, y = NA)

# Loop over every column
for (i in 1:ncol(WVS)) {
  # Replace all negatives with NA if possible
  try(WVS[, i] <- NA^(WVS[, i] < 0) * WVS[, i])
  # Select columns to keep and create a new data frame from these columns
  col_to_keep <- sapply(WVS[, i], function(x) sum(is.na(x)) / length(x))
  col_to_keep <- names(col_to_keep[col_to_keep <= 0.99])
  WVSc <- cbind(WVS, col_to_keep)
}

So, the above does not really work. In addition, I was hoping to remove, within the loop, the columns that have more than 99% NA, rather than create a new data frame and bind the ones that have less.
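For reference, the in-place loop idea can be sketched like this on a toy data frame (a hedged sketch, not my real data: it replaces negatives with NA column by column, then drops any column whose NA share exceeds 99% without a cbind):

```r
# Toy stand-in for the survey data: "a" survives cleaning, "b" does not
WVS <- data.frame(a = c(-4, 1, 2, 3), b = c(-1, -2, -3, -4))

# Replace negatives with NA, numeric columns only
for (i in seq_along(WVS)) {
  if (is.numeric(WVS[[i]])) {
    WVS[[i]][WVS[[i]] < 0] <- NA
  }
}

# Drop columns with more than 99% NA, in place (no new data frame needed)
WVS <- WVS[colMeans(is.na(WVS)) <= 0.99]
```

Here column `a` becomes NA, 1, 2, 3 (25% NA, kept) and column `b` becomes all NA (100% NA, dropped).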

1 Answer

仙女界的扛把子
2019-08-23 22:17

Since you haven't provided any example data, here's my best shot at a solution. Hopefully this gives you a head start:

cleanFun <- function(df){

    # select numeric columns
    num_cols <- names(df)[sapply(df, is.numeric)]

    # set negative values to NA (numeric columns only, so character
    # columns are not compared against 0)
    df[num_cols][df[num_cols] < 0] <- NA

    # get names of numeric columns with 99% or more NA values
    col_to_remove <- num_cols[colMeans(is.na(df[num_cols])) >= 0.99]

    # drop those columns
    return(df[setdiff(colnames(df), col_to_remove)])
}

your_df <- cleanFun(your_df)
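Note that `labelled` columns produced when a .dta file is read with haven are not plain numeric vectors, which is likely why your class checks misbehave. A minimal sketch, assuming the data came from `haven::read_dta()`: strip the Stata labels first (e.g. with `haven::zap_labels()`, or `as.numeric()` per column), then apply the cleaning. The data frame below is a hypothetical stand-in for your real columns:

```r
# Hypothetical example data; in your case S004 would carry class "labelled"
# and you would first run: df <- haven::zap_labels(df)
df <- data.frame(
  S004 = c(-4, -4, 1, 2),        # half negative -> half NA -> kept
  S005 = c(-5, -5, -5, -5),      # all negative -> all NA -> dropped
  name = c("a", "b", "c", "d"),  # non-numeric column, left untouched
  stringsAsFactors = FALSE
)

cleanFun <- function(df, p = 0.99) {
  num_cols <- names(df)[sapply(df, is.numeric)]
  # replace negatives with NA in numeric columns only
  df[num_cols] <- lapply(df[num_cols], function(x) replace(x, x < 0, NA))
  # drop numeric columns whose NA proportion is at least p
  col_to_remove <- num_cols[colMeans(is.na(df[num_cols])) >= p]
  df[setdiff(names(df), col_to_remove)]
}

out <- cleanFun(df)
```

With this toy input, `out` keeps `S004` (now NA, NA, 1, 2) and `name`, and drops `S005`.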